From sonyablade2010 at hotmail.com Mon Apr 1 04:03:08 2013 From: sonyablade2010 at hotmail.com (Sonya Blade) Date: Mon, 1 Apr 2013 10:03:08 +0100 Subject: [petsc-users] Petsc/Slepc Integration with Code Blocks IDE Message-ID:

Dear All,

Has any of you succeeded in integrating the Code::Blocks IDE with PETSc and SLEPc? I've tried configuring the compiler, the linker and header search paths, and the libraries to link against, namely "libslepc.a" and "libpetsc.a", but the linker still complains about missing routines. A couple of the errors are given below in case they look familiar. Your help will be appreciated.

/cygdrive/D/TEST_FOLDER_dell/slepc/src/vec/contiguous.c:186: undefined reference to `_dgemm_'
/cygdrive/D/TEST_FOLDER_dell/slepc/src/vec/contiguous.c:199: undefined reference to `_dgemm_'
../../../slepc/arch-mswin-c-debug/lib/libslepc.a(contiguous.c.o): In function `SlepcUpdateStrideVectors':
/cygdrive/D/TEST_FOLDER_dell/slepc/src/vec/contiguous.c:391: undefined reference to `_dgemm_'
/cygdrive/D/TEST_FOLDER_dell/slepc/src/vec/contiguous.c:401: undefined reference to `_dgemm_'
../../../slepc/arch-mswin-c-debug/lib/libslepc.a(contiguous.c.o): In function `SlepcVecMAXPBY':
/cygdrive/D/TEST_FOLDER_dell/slepc/src/vec/contiguous.c:455: undefined reference to `_MPI_Allreduce'
/cygdrive/D/TEST_FOLDER_dell/slepc/src/vec/contiguous.c:456: undefined reference to `_MPI_Allreduce'
/cygdrive/D/TEST_FOLDER_dell/slepc/src/vec/contiguous.c:457: undefined reference to `_MPI_Allreduce'
/cygdrive/D/TEST_FOLDER_dell/slepc/src/vec/contiguous.c:463: undefined reference to `_MPI_Comm_compare'
/cygdrive/D/TEST_FOLDER_dell/slepc/src/vec/contiguous.c:475: undefined reference to `_dgemv_'
../../../slepc/arch-mswin-c-debug/lib/libslepc.a(basic.c.o): In function `EPSView':
/cygdrive/D/TEST_FOLDER_dell/slepc/src/eps/interface/basic.c:144: undefined reference to `_MPI_Comm_compare'
../../../slepc/arch-mswin-c-debug/lib/libslepc.a(basic.c.o): In function `EPSPrintSolution':
/cygdrive/D/TEST_FOLDER_dell/slepc/src/eps/interface/basic.c:344: undefined reference to `_MPI_Comm_compare'
../../../slepc/arch-mswin-c-debug/lib/libslepc.a(basic.c.o): In function `EPSSetTarget':
/cygdrive/D/TEST_FOLDER_dell/slepc/src/eps/interface/basic.c:728: undefined reference to `_MPI_Allreduce'
../../../slepc/arch-mswin-c-debug/lib/libslepc.a(basic.c.o): In function `EPSSetInterval':
/cygdrive/D/TEST_FOLDER_dell/slepc/src/eps/interface/basic.c:798: undefined reference to `_MPI_Allreduce'
/cygdrive/D/TEST_FOLDER_dell/slepc/src/eps/interface/basic.c:799: undefined reference to `_MPI_Allreduce'
../../../slepc/arch-mswin-c-debug/lib/libslepc.a(basic.c.o): In function `EPSSetST':
/cygdrive/D/TEST_FOLDER_dell/slepc/src/eps/interface/basic.c:864: undefined reference to `_MPI_Comm_compare'
../../../slepc/arch-mswin-c-debug/lib/libslepc.a(basic.c.o): In function `EPSSetIP':
/cygdrive/D/TEST_FOLDER_dell/slepc/src/eps/interface/basic.c:931: undefined reference to `_MPI_Comm_compare'
../../../slepc/arch-mswin-c-debug/lib/libslepc.a(basic.c.o): In function `EPSSetDS':
/cygdrive/D/TEST_FOLDER_dell/slepc/src/eps/interface/basic.c:997: undefined reference to `_MPI_Comm_compare'
../../../slepc/arch-mswin-c-debug/lib/libslepc.a(dvd_blas.c.o): In function `SlepcDenseMatProd':
/cygdrive/D/TEST_FOLDER_dell/slepc/src/eps/impls/davidson/common/dvd_blas.c:81: undefined reference to `_dgemm_'
../../../slepc/arch-mswin-c-debug/lib/libslepc.a(dvd_blas.c.o): In function `SlepcDenseMatProdTriang':
/cygdrive/D/TEST_FOLDER_dell/slepc/src/eps/impls/davidson/common/dvd_blas.c:157: undefined reference to `_dsymm_'
/cygdrive/D/TEST_FOLDER_dell/slepc/src/eps/impls/davidson/common/dvd_blas.c:170: undefined reference to `_dsymm_'
../../../slepc/arch-mswin-c-debug/lib/libslepc.a(dvd_blas.c.o): In function `SlepcDenseNorm':
/cygdrive/D/TEST_FOLDER_dell/slepc/src/eps/impls/davidson/common/dvd_blas.c:205: undefined reference to `_dnrm2_'
/cygdrive/D/TEST_FOLDER_dell/slepc/src/eps/impls/davidson/common/dvd_blas.c:206: undefined reference to `_dnrm2_'
/cygdrive/D/TEST_FOLDER_dell/slepc/src/eps/impls/davidson/common/dvd_blas.c:208: undefined reference to `_dscal_'
/cygdrive/D/TEST_FOLDER_dell/slepc/src/eps/impls/davidson/common/dvd_blas.c:209: undefined reference to `_dscal_'
/cygdrive/D/TEST_FOLDER_dell/slepc/src/eps/impls/davidson/common/dvd_blas.c:212: undefined reference to `_dnrm2_'
/cygdrive/D/TEST_FOLDER_dell/slepc/src/eps/impls/davidson/common/dvd_blas.c:214: undefined reference to `_dscal_'

From sonyablade2010 at hotmail.com Mon Apr 1 06:30:40 2013 From: sonyablade2010 at hotmail.com (Sonya Blade) Date: Mon, 1 Apr 2013 12:30:40 +0100 Subject: [petsc-users] Petsc/Slepc Integration with Code Blocks IDE In-Reply-To: References: Message-ID:

The best that I achieved so far is shown below; it seems the remaining functions are related to the visual representation of the matrices. I deliberately added libblas.a, liblapack.a, libopa.a, libmpich.a, libpmpich.a, etc. to the linker search path, and with each of those libraries added some of the errors vanished. In the ARCH/lib folder I couldn't find any library that would account for these function calls, so it seems the remaining errors are not related to the PETSc and SLEPc installation. Any guidance will be appreciated.

undefined reference to `_XSetForeground'
/cygdrive/c/Users/...../Downloads/petsc-3.3-p6/src/sys/draw/impls/x/xops.c:219: undefined reference to `_XDrawString'
C:/Users/...../Downloads/petsc-3.3-p6/arch-mswin-c-debug/lib/libpetsc.a(xops.c.o): In function `PetscDrawFlush_X':
/cygdrive/c/Users/...../Downloads/petsc-3.3-p6/src/sys/draw/impls/x/xops.c:232: undefined reference to `_XCopyArea'
/cygdrive/c/Users/...../Downloads/petsc-3.3-p6/src/sys/draw/impls/x/xops.c:234: undefined reference to `_XFlush'
/cygdrive/c/Users/...../Downloads/petsc-3.3-p6/src/sys/draw/impls/x/xops.c:234: undefined reference to `_XSync'
C:/Users/...../Downloads/petsc-3.3-p6/arch-mswin-c-debug/lib/libpetsc.a(xops.c.o): In function `PetscDrawSynchronizedFlush_X':
/cygdrive/c/Users/...../Downloads/petsc-3.3-p6/src/sys/draw/impls/x/xops.c:247: undefined reference to `_XFlush'
/cygdrive/c/Users/...../Downloads/petsc-3.3-p6/src/sys/draw/impls/x/xops.c:251: undefined reference to `_XSync'
/cygdrive/c/Users/...../Downloads/petsc-3.3-p6/src/sys/draw/impls/x/xops.c:254: undefined reference to `_XCopyArea'
/cygdrive/c/Users/...../Downloads/petsc-3.3-p6/src/sys/draw/impls/x/xops.c:255: undefined reference to `_XFlush'
/cygdrive/c/Users/...../Downloads/petsc-3.3-p6/src/sys/draw/impls/x/xops.c:257: undefined reference to `_XSync'
/cygdrive/c/Users/...../Downloads/petsc-3.3-p6/src/sys/draw/impls/x/xops.c:261: undefined reference to `_XSync'
Regards, From jedbrown at mcs.anl.gov Mon Apr 1 07:05:24 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Mon, 1 Apr 2013 07:05:24 -0500 Subject: [petsc-users] Petsc/Slepc Integration with Code Blocks IDE In-Reply-To: References: Message-ID: Run 'make getlinklibs' from PETSC_DIR and add everything exactly as shown to your IDE. On Apr 1, 2013 6:30 AM, "Sonya Blade" wrote: > The best that I achieved so far was those function, It seems that they are > related to the > visual representation of the matrices. I deliberately added the libblas.a, > liblapack.a, > libopa.a, libmpich.a etc... libpmpich.a to the linker search path and with > every add of those > libraries some of the errors vanished. > > > In the ARCH/lib folder I couldn't find any library name which will account > for those functions calls. > It seems that the remaining errors are not related to the Petsc and SLepc > installation .. > > > Any guidance will be appreciated. > > > undefined reference to `_XSetForeground' > /cygdrive/c/Users/...../Downloads/petsc-3.3-p6/src/sys/draw/impls/x/xops.c:219: > undefined reference to `_XDrawString' > C:/Users/...../Downloads/petsc-3.3-p6/arch-mswin-c-debug/lib/libpetsc.a(xops.c.o): > In function `PetscDrawFlush_X': > /cygdrive/c/Users/...../Downloads/petsc-3.3-p6/src/sys/draw/impls/x/xops.c:232: > undefined reference to `_XCopyArea' > /cygdrive/c/Users/...../Downloads/petsc-3.3-p6/src/sys/draw/impls/x/xops.c:234: > undefined reference to `_XFlush' > /cygdrive/c/Users/...../Downloads/petsc-3.3-p6/src/sys/draw/impls/x/xops.c:234: > undefined reference to `_XSync' > > C:/Users/...../Downloads/petsc-3.3-p6/arch-mswin-c-debug/lib/libpetsc.a(xops.c.o): > In function PetscDrawSynchronizedFlush_X': > /cygdrive/c/Users/...../Downloads/petsc-3.3-p6/src/sys/draw/impls/x/xops.c:247: > undefined reference to `_XFlush' > /cygdrive/c/Users/...../Downloads/petsc-3.3-p6/src/sys/draw/impls/x/xops.c:251: > undefined reference to `_XSync' > /cygdrive/c/Users/...../Downloads/petsc-3.3-p6/src/sys/draw/impls/x/xops.c:254: > undefined reference to `_XCopyArea' > /cygdrive/c/Users/...../Downloads/petsc-3.3-p6/src/sys/draw/impls/x/xops.c:255: > undefined reference to `_XFlush' > /cygdrive/c/Users/...../Downloads/petsc-3.3-p6/src/sys/draw/impls/x/xops.c:257: > undefined reference to `_XSync' > /cygdrive/c/Users/...../Downloads/petsc-3.3-p6/src/sys/draw/impls/x/xops.c:261: > undefined reference to `_XSync' > > > Regards, -------------- next part -------------- An HTML attachment was scrubbed... URL: From sonyablade2010 at hotmail.com Mon Apr 1 07:47:20 2013 From: sonyablade2010 at hotmail.com (Sonya Blade) Date: Mon, 1 Apr 2013 13:47:20 +0100 Subject: [petsc-users] Petsc/Slepc Integration with Code Blocks IDE In-Reply-To: References: , Message-ID: Thank you Jed, ? After running the code it retrieves, the given results ? $ make getlinklibs 1> -Wl,-rpath,/cygdrive/c/Users/...../Downloads/petsc-3.3-p6/arch-mswin-c-debug/lib? 2> -L/cygdrive/c/Users/...../Downloads/petsc-3.3-p6/arch-mswin-c-debug/lib -lpetsc -lX11 -lpthread -Wl, 3>-rpath,/cygdrive/c/Users/...../Downloads/petsc-3.3-p6/arch-mswin-c-debug/lib -lflapack -lfblas? 4>-L/usr/lib/gcc/i686-pc-cygwin/4.5.3 -lmpichf90 -lgfortran -lgcc_s -lgdi32 -luser32 -ladvapi32 -lkernel32 -ldl -lpmpich? 5>-lmpich -lopa -lmpl -lpthread -lgcc_eh -luser32 -ladvapi32 -lshell32 -ldl ? I've tried to parse out so it can be more readable, in the second line there is -lX11 and -lpthread? libraries which I don't have in the mentioned folder. ? 
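As an aside that may help while wiring up the IDE: a minimal SLEPc program along the following lines is enough to exercise the PETSc, BLAS/LAPACK and MPI dependencies. This is only a sketch (the SLEPc 3.3-era C API is assumed); if it builds and links inside Code::Blocks with the flags reported by 'make getlinklibs', the library list is complete.

    #include <slepceps.h>

    int main(int argc, char **argv)
    {
      PetscErrorCode ierr;

      /* If this links and runs, the undefined BLAS/LAPACK, MPI (and, if
         -lX11 was added, X11) references above are being resolved. */
      ierr = SlepcInitialize(&argc, &argv, (char*)0, (char*)0);
      if (ierr) return ierr;
      ierr = PetscPrintf(PETSC_COMM_WORLD, "PETSc and SLEPc linked correctly\n");CHKERRQ(ierr);
      ierr = SlepcFinalize();
      return ierr;
    }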
And should I add those libraries gdi32, advapi32, kernel32, shell32 depicted? in the 4 and 5 line ? It seems they are intrinsic libraries that I'm not supposed to? dealt with. ? Regards, From jedbrown at mcs.anl.gov Mon Apr 1 07:52:20 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Mon, 01 Apr 2013 07:52:20 -0500 Subject: [petsc-users] Petsc/Slepc Integration with Code Blocks IDE In-Reply-To: References: Message-ID: <876206za4b.fsf@59A2.org> Sonya Blade writes: > Thank you Jed, > ? > After running the code it retrieves, the given results > ? > $ make getlinklibs > 1> -Wl,-rpath,/cygdrive/c/Users/...../Downloads/petsc-3.3-p6/arch-mswin-c-debug/lib? > 2> -L/cygdrive/c/Users/...../Downloads/petsc-3.3-p6/arch-mswin-c-debug/lib -lpetsc -lX11 -lpthread -Wl, > 3>-rpath,/cygdrive/c/Users/...../Downloads/petsc-3.3-p6/arch-mswin-c-debug/lib -lflapack -lfblas? > 4>-L/usr/lib/gcc/i686-pc-cygwin/4.5.3 -lmpichf90 -lgfortran -lgcc_s -lgdi32 -luser32 -ladvapi32 -lkernel32 -ldl -lpmpich? > 5>-lmpich -lopa -lmpl -lpthread -lgcc_eh -luser32 -ladvapi32 -lshell32 -ldl > ? > I've tried to parse out so it can be more readable, in the second line there is -lX11 and -lpthread? > libraries which I don't have in the mentioned folder. They are likely in "system paths", thus available using only -lX11. I don't know what format Code Blocks expects, but libX11 is the one accounting for most/all of your errors. > And should I add those libraries gdi32, advapi32, kernel32, shell32 depicted? > in the 4 and 5 line ? It seems they are intrinsic libraries that I'm not supposed to? > dealt with. You can try without, and add them if you get weird link errors. It depends whether Code Blocks and/or your compiler add them automatically. From sonyablade2010 at hotmail.com Mon Apr 1 09:15:27 2013 From: sonyablade2010 at hotmail.com (Sonya Blade) Date: Mon, 1 Apr 2013 15:15:27 +0100 Subject: [petsc-users] Petsc/Slepc Integration with Code Blocks IDE In-Reply-To: References: , , , <876206za4b.fsf@59A2.org>, Message-ID: Is it possible that I missed to install some crucial packages during the ?CygWin installation? ?Because those packages got nothing to do with the petsc or slepcs compilation ?Then problem boils down to what are the minimum requirements. ?Regards, From balay at mcs.anl.gov Mon Apr 1 13:15:04 2013 From: balay at mcs.anl.gov (Satish Balay) Date: Mon, 1 Apr 2013 13:15:04 -0500 (CDT) Subject: [petsc-users] Petsc/Slepc Integration with Code Blocks IDE In-Reply-To: References: , , , <876206za4b.fsf@59A2.org>, Message-ID: On Mon, 1 Apr 2013, Sonya Blade wrote: > Is it possible that I missed to install some crucial packages during the > ?CygWin installation? How do you come to this conclusion? gcc found and used them at PETSc configure and build time [with makefiles]. > > ?Because those packages got nothing to do with the petsc or slepcs compilation How do you know that? > ?Then problem boils down to what are the minimum requirements. sure - for minimal build - you can configure petsc with --with-mpi=0 --with-x=0 etc. But gcc on windows might still need '-luser32 -ladvapi32 -lshell32' stuff. Satish From dharmareddy84 at gmail.com Mon Apr 1 18:16:04 2013 From: dharmareddy84 at gmail.com (Dharmendar Reddy) Date: Mon, 1 Apr 2013 18:16:04 -0500 Subject: [petsc-users] DMSNESSet Message-ID: Hello, I am confused about the usage of SNESSetFunction, DMSNESSetFunction, DMSNESSetFunctionLocal and the corresponding Jacobin routines. At present my code is set up as follows. 
I initiate SNES with getFunction and getJacobain routines required as per SNESSet. I pass a user context which has the mesh and field layout information. I assemble the residual function and Jacobin element wise using the information from user context. My code at this point is serial, so i run the element loop from 1 to numberofTotalElements. Now i want to switch to using DMPlex. How should i change the interface to my code. If i set the dm for snes using SNESSetDM, i can see that the DM can be accessed via SNESGetDM inside getFunction and getJacobian, then i get the elementStartID and elementEndID to run the loop for assmbler. But now i will have to assemble to local vector into global vector right ? Looks like the DMSNESSetFunctionLocal will assemble the local vector into global vector. The function provided to this should just evaluate the local vector. Did i understand this right ? I am confused about passing the DM in DMSNESSet. When the DM can be accessed via SNESGetDM, why do we pass it again explicitly ? -- ----------------------------------------------------- Dharmendar Reddy Palle Graduate Student Microelectronics Research center, University of Texas at Austin, 10100 Burnet Road, Bldg. 160 MER 2.608F, TX 78758-4445 e-mail: dharmareddy84 at gmail.com Phone: +1-512-350-9082 United States of America. Homepage: https://webspace.utexas.edu/~dpr342 -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Mon Apr 1 20:24:23 2013 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 2 Apr 2013 12:24:23 +1100 Subject: [petsc-users] DMSNESSet In-Reply-To: References: Message-ID: On Tue, Apr 2, 2013 at 10:16 AM, Dharmendar Reddy wrote: > Hello, > I am confused about the usage of SNESSetFunction, > DMSNESSetFunction, DMSNESSetFunctionLocal and the corresponding Jacobin > routines. > > At present my code is set up as follows. > > I initiate SNES with getFunction and getJacobain routines required as per > SNESSet. I pass a user context which has the mesh and > field layout information. I assemble the residual function and Jacobin > element wise using the information from user context. My code at this point > is serial, so i run the element loop from 1 to numberofTotalElements. > > Now i want to switch to using DMPlex. How should i change the interface to > my code. > You can always look at SNES ex62 http://www.mcs.anl.gov/petsc/petsc-dev/src/snes/examples/tutorials/ex62.c.html > If i set the dm for snes using SNESSetDM, i can see that the DM can be > accessed via SNESGetDM inside getFunction and getJacobian, then i get the > elementStartID and elementEndID to run the loop for assmbler. But now i > will have to assemble to local vector into global vector right ? > No, here is the sequence: 1) SNESSetDM(snes, dm) 2) DMSNESSetFunctionLocal(dm, userResidual, userCtx) where we have userResidual(DM dm, Vec X, Vec F, void *userCtx) Both X and F are local vectors which I normally interact with using DMPlexVecGet/SetClosure(). If you are using FEM, you can DMPlexComputeResidualFEM() here and use DMPlexSetFEMIntegration() to input point-wise physics functions as is done in ex62. > Looks like the DMSNESSetFunctionLocal will assemble the local > vector into global vector. The function provided to this should just > evaluate the local vector. Did i understand this right ? > > I am confused about passing the DM in DMSNESSet. When > the DM can be accessed via SNESGetDM, why do we pass it again explicitly ? 
> You are not passing the SNES, so where would it come from in this call? Matt > > -- > ----------------------------------------------------- > Dharmendar Reddy Palle > Graduate Student > Microelectronics Research center, > University of Texas at Austin, > 10100 Burnet Road, Bldg. 160 > MER 2.608F, TX 78758-4445 > e-mail: dharmareddy84 at gmail.com > Phone: +1-512-350-9082 > United States of America. > Homepage: https://webspace.utexas.edu/~dpr342 > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From dharmareddy84 at gmail.com Mon Apr 1 21:47:43 2013 From: dharmareddy84 at gmail.com (Dharmendar Reddy) Date: Mon, 1 Apr 2013 21:47:43 -0500 Subject: [petsc-users] DMSNESSet In-Reply-To: References: Message-ID: Hello, I looked at the source code of SNESSetFunction after i sent the email. From what i understood, SNESSetFunction calls DMSNESSetFunction internally. I was like, why not just use SNESSetFunction with out passing DM. But i get it now, not all problems have to use DM. If i am using DM i should rather use DMSNESSetFunction. A better way is DMSNESSetFunctionLocal right ? DMSNesSetFunction i called inside DMSNESSetFunctionLocal. I did look at DMPlexComputeResidualFEM but that requires userContext to have PetscFEM object. Is PetscFEM object accisable from Fortran or is it genrated using bin/pythonscripts/PetscGenerateFEMQuadrature.py I structured my code such that, user provides the Linear and bilinear form subroutines: subroutine biLinearFrom(u,v,cell,Kelm,eqnCtx) type(FunctionSpace) :: u,v type(FEMCell) :: cell PetscScalar :: Kelm class(*),pointer :: eqnCtx end subroutine Function space object provide the routines for evaluation of basis functions and gradients, given cell coordinates, and cell degrees of freedom. So i should be able to get the cell geometry information and cell degree of freedom data via DMPlexVecGet/SetClosure right ? At this point i was able to create a DMPlex object from a mesh created by gmsh. I used DMPLexCreateLabel to add the physical region labels i created in GMSh . So no i have a mapping from given a region label, list of cells in that region. If i am doing a physical region wise assemble of the problem, i could use the DMPlexgetStratumIS for each regionlabel. since i am doing a cell wise assemble, i would be interested in a mapping from cell id to regionId (i can have a local array of labels indexed by region Id) so that i can pass the material information to FEMCell. Right now i have a integer array which holds the map from cellId to regionId. I was wondering if i could add a integer section to DM object ? I also need to have some other auxiliary scalar data associated with the DM. You suggested in one of the earlier email that i created another DM using DMplexClone ? If i clone a object is it creating another copy of the mesh in memory or is it just storing pointers ? I was thinking i could create a section using DMPlexCreateSection and pass that section to DMPlexVecGetClosure when i have to access auxilary data. Basically the problem is to solve Poisson equation with current continuity equation for a semiconductor. I solve the problem by solving each equation until the solutions of both are consistent. For Poisson problem, the degree freedom is potential and takes charge density as input. 
For continuation equation degree of freedom is charge density and take potential as input. Do you have alternative approaches to what i am thinking ? On Mon, Apr 1, 2013 at 8:24 PM, Matthew Knepley wrote: > On Tue, Apr 2, 2013 at 10:16 AM, Dharmendar Reddy > wrote: > >> Hello, >> I am confused about the usage of SNESSetFunction, >> DMSNESSetFunction, DMSNESSetFunctionLocal and the corresponding Jacobin >> routines. >> >> At present my code is set up as follows. >> >> I initiate SNES with getFunction and getJacobain routines required as per >> SNESSet. I pass a user context which has the mesh and >> field layout information. I assemble the residual function and Jacobin >> element wise using the information from user context. My code at this point >> is serial, so i run the element loop from 1 to numberofTotalElements. >> >> Now i want to switch to using DMPlex. How should i change the interface >> to my code. >> > > You can always look at SNES ex62 > http://www.mcs.anl.gov/petsc/petsc-dev/src/snes/examples/tutorials/ex62.c.html > > >> If i set the dm for snes using SNESSetDM, i can see that the DM can be >> accessed via SNESGetDM inside getFunction and getJacobian, then i get the >> elementStartID and elementEndID to run the loop for assmbler. But now i >> will have to assemble to local vector into global vector right ? >> > > No, here is the sequence: > > 1) SNESSetDM(snes, dm) > > 2) DMSNESSetFunctionLocal(dm, userResidual, userCtx) > > where we have > > userResidual(DM dm, Vec X, Vec F, void *userCtx) > > Both X and F are local vectors which I normally interact with using > DMPlexVecGet/SetClosure(). If you > are using FEM, you can DMPlexComputeResidualFEM() here and > use DMPlexSetFEMIntegration() to > input point-wise physics functions as is done in ex62. > > >> Looks like the DMSNESSetFunctionLocal will assemble the local >> vector into global vector. The function provided to this should just >> evaluate the local vector. Did i understand this right ? >> >> I am confused about passing the DM in DMSNESSet. When >> the DM can be accessed via SNESGetDM, why do we pass it again explicitly ? >> > > You are not passing the SNES, so where would it come from in this call? > > > Matt > > >> >> -- >> ----------------------------------------------------- >> Dharmendar Reddy Palle >> Graduate Student >> Microelectronics Research center, >> University of Texas at Austin, >> 10100 Burnet Road, Bldg. 160 >> MER 2.608F, TX 78758-4445 >> e-mail: dharmareddy84 at gmail.com >> Phone: +1-512-350-9082 >> United States of America. >> Homepage: https://webspace.utexas.edu/~dpr342 >> > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > -- ----------------------------------------------------- Dharmendar Reddy Palle Graduate Student Microelectronics Research center, University of Texas at Austin, 10100 Burnet Road, Bldg. 160 MER 2.608F, TX 78758-4445 e-mail: dharmareddy84 at gmail.com Phone: +1-512-350-9082 United States of America. Homepage: https://webspace.utexas.edu/~dpr342 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From knepley at gmail.com Mon Apr 1 23:02:10 2013 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 2 Apr 2013 15:02:10 +1100 Subject: [petsc-users] DMSNESSet In-Reply-To: References: Message-ID: On Tue, Apr 2, 2013 at 1:47 PM, Dharmendar Reddy wrote: > Hello, > I looked at the source code of SNESSetFunction after i sent the > email. From what i understood, SNESSetFunction calls DMSNESSetFunction > internally. I was like, why not just use SNESSetFunction with out passing > DM. But i get it now, not all problems have to use DM. If i am using DM i > should rather use DMSNESSetFunction. A better way is DMSNESSetFunctionLocal > right ? DMSNesSetFunction i called inside DMSNESSetFunctionLocal. > > > I did look at DMPlexComputeResidualFEM but that requires userContext to > have PetscFEM object. Is PetscFEM object accisable from Fortran or is it > genrated using > The object is just a struct, and has only information you must have for FEM, namely a quadrature rule, and tabulation of the basis functions are quadrature points. > bin/pythonscripts/PetscGenerateFEMQuadrature.py > > This script just generates a header with the struct information. You can make this header yourself and obviate the need for the script. > I structured my code such that, user provides the Linear and bilinear form > subroutines: > > subroutine biLinearFrom(u,v,cell,Kelm,eqnCtx) > type(FunctionSpace) :: u,v > type(FEMCell) :: cell > PetscScalar :: Kelm > class(*),pointer :: eqnCtx > end subroutine > > Function space object provide the routines for evaluation of basis > functions and gradients, given cell coordinates, and cell degrees of > freedom. > Can you just use these to create the PetscFEM struct? > > So i should be able to get the cell geometry information and cell degree > of freedom data via DMPlexVecGet/SetClosure right ? > Yes, although tell me why DMPlexComputeCellGeometry() does not work, and if not you can look at the code to see me getting the coordinate information. > At this point i was able to create a DMPlex object from a mesh created by > gmsh. I used DMPLexCreateLabel to add the physical region labels i created > in GMSh . So no i have a mapping from given a region label, list of cells > in that region. > > If i am doing a physical region wise assemble of the problem, i could use > the DMPlexgetStratumIS for each regionlabel. > Yes. > since i am doing a cell wise assemble, i would be interested in a mapping > from cell id to regionId (i can have a local array of labels indexed by > region Id) so that i can pass the material information to FEMCell. Right > now i have a integer array which holds the map from cellId to regionId. > > I was wondering if i could add a integer section to DM object ? > You use DMLabelGetValue(), and to trade memory for speed call DMLabelCreateIndex() first if you want. > I also need to have some other auxiliary scalar data associated with the > DM. You suggested in one of the earlier email that i created another DM > using DMplexClone ? If i clone a object is it creating another copy of the > mesh in memory or is it just storing pointers ? > Just a pointer. > I was thinking i could create a section using DMPlexCreateSection and pass > that section to DMPlexVecGetClosure when i have to access auxilary data. > This is exactly what I was suggesting, but you have the section be the "default section" of the cloned DM. That way it interacts with all PETSc functions nicely and you only have the extra memory for a DM object which is tiny. 
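To make the local-residual calling sequence described above concrete, here is a rough sketch of what such a callback could look like in C. It assumes petsc-dev/3.4-era interfaces; AppCtx, MyResidualLocal and UserElementResidual are placeholder names for user code, not part of PETSc.

    #include <petscdmplex.h>
    #include <petscsnes.h>

    typedef struct {
      PetscInt dummy;   /* whatever mesh/material data the user needs (placeholder) */
    } AppCtx;

    /* Hypothetical user routine: element residual f[] from the cell dofs x[] */
    extern PetscErrorCode UserElementResidual(AppCtx*,PetscInt,PetscInt,const PetscScalar[],PetscScalar[]);

    /* Residual callback for DMSNESSetFunctionLocal(): X and F are local
       vectors, so ghost points are present and constrained (boundary)
       values have already been inserted. */
    static PetscErrorCode MyResidualLocal(DM dm, Vec X, Vec F, void *ctx)
    {
      AppCtx        *user = (AppCtx*)ctx;
      PetscInt       cStart, cEnd, c;
      PetscErrorCode ierr;

      PetscFunctionBegin;
      ierr = VecSet(F, 0.0);CHKERRQ(ierr);                                /* we accumulate with ADD_VALUES */
      ierr = DMPlexGetHeightStratum(dm, 0, &cStart, &cEnd);CHKERRQ(ierr); /* height 0 = cells */
      for (c = cStart; c < cEnd; ++c) {
        PetscScalar *x = NULL, *f;
        PetscInt     nc;

        /* NULL section means: use the DM's default section */
        ierr = DMPlexVecGetClosure(dm, NULL, X, c, &nc, &x);CHKERRQ(ierr);
        ierr = PetscMalloc(nc*sizeof(PetscScalar), &f);CHKERRQ(ierr);
        /* the region/material id for this cell can be looked up here from
           the label discussed above before evaluating the element residual */
        ierr = UserElementResidual(user, c, nc, x, f);CHKERRQ(ierr);
        ierr = DMPlexVecSetClosure(dm, NULL, F, c, f, ADD_VALUES);CHKERRQ(ierr);
        ierr = PetscFree(f);CHKERRQ(ierr);
        ierr = DMPlexVecRestoreClosure(dm, NULL, X, c, &nc, &x);CHKERRQ(ierr);
      }
      PetscFunctionReturn(0);
    }

    /* Setup, following the sequence above:
         SNESSetDM(snes, dm);
         DMSNESSetFunctionLocal(dm, MyResidualLocal, &user);  */

Working with local vectors and letting the DM handle the local-to-global assembly is what lets the same callback run unchanged in parallel.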
Matt Basically the problem is to solve Poisson equation with current continuity > equation for a semiconductor. I solve the problem by solving each equation > until the solutions of both are consistent. For Poisson problem, the degree > freedom is potential and takes charge density as input. For continuation > equation degree of freedom is charge density and take potential as input. > > Do you have alternative approaches to what i am thinking ? > > On Mon, Apr 1, 2013 at 8:24 PM, Matthew Knepley wrote: > >> On Tue, Apr 2, 2013 at 10:16 AM, Dharmendar Reddy < >> dharmareddy84 at gmail.com> wrote: >> >>> Hello, >>> I am confused about the usage of SNESSetFunction, >>> DMSNESSetFunction, DMSNESSetFunctionLocal and the corresponding Jacobin >>> routines. >>> >>> At present my code is set up as follows. >>> >>> I initiate SNES with getFunction and getJacobain routines required as >>> per SNESSet. I pass a user context which has the mesh >>> and field layout information. I assemble the residual function and Jacobin >>> element wise using the information from user context. My code at this point >>> is serial, so i run the element loop from 1 to numberofTotalElements. >>> >>> Now i want to switch to using DMPlex. How should i change the interface >>> to my code. >>> >> >> You can always look at SNES ex62 >> http://www.mcs.anl.gov/petsc/petsc-dev/src/snes/examples/tutorials/ex62.c.html >> >> >>> If i set the dm for snes using SNESSetDM, i can see that the DM can be >>> accessed via SNESGetDM inside getFunction and getJacobian, then i get the >>> elementStartID and elementEndID to run the loop for assmbler. But now i >>> will have to assemble to local vector into global vector right ? >>> >> >> No, here is the sequence: >> >> 1) SNESSetDM(snes, dm) >> >> 2) DMSNESSetFunctionLocal(dm, userResidual, userCtx) >> >> where we have >> >> userResidual(DM dm, Vec X, Vec F, void *userCtx) >> >> Both X and F are local vectors which I normally interact with using >> DMPlexVecGet/SetClosure(). If you >> are using FEM, you can DMPlexComputeResidualFEM() here and >> use DMPlexSetFEMIntegration() to >> input point-wise physics functions as is done in ex62. >> >> >>> Looks like the DMSNESSetFunctionLocal will assemble the local >>> vector into global vector. The function provided to this should just >>> evaluate the local vector. Did i understand this right ? >>> >>> I am confused about passing the DM in DMSNESSet. When >>> the DM can be accessed via SNESGetDM, why do we pass it again explicitly ? >>> >> >> You are not passing the SNES, so where would it come from in this call? >> > > > > > >> >> Matt >> >> >>> >>> -- >>> ----------------------------------------------------- >>> Dharmendar Reddy Palle >>> Graduate Student >>> Microelectronics Research center, >>> University of Texas at Austin, >>> 10100 Burnet Road, Bldg. 160 >>> MER 2.608F, TX 78758-4445 >>> e-mail: dharmareddy84 at gmail.com >>> Phone: +1-512-350-9082 >>> United States of America. >>> Homepage: https://webspace.utexas.edu/~dpr342 >>> >> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> > > > > -- > ----------------------------------------------------- > Dharmendar Reddy Palle > Graduate Student > Microelectronics Research center, > University of Texas at Austin, > 10100 Burnet Road, Bldg. 
160 > MER 2.608F, TX 78758-4445 > e-mail: dharmareddy84 at gmail.com > Phone: +1-512-350-9082 > United States of America. > Homepage: https://webspace.utexas.edu/~dpr342 > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From dharmareddy84 at gmail.com Tue Apr 2 00:18:25 2013 From: dharmareddy84 at gmail.com (Dharmendar Reddy) Date: Tue, 2 Apr 2013 00:18:25 -0500 Subject: [petsc-users] DMSNESSet In-Reply-To: References: Message-ID: Hello, I should be able to generate petscFEM struct with my code, i have the required data, I just need to prepare it in the format used by DMPLexComputeResidualFEM. I will give it a shot. What is the pStart and pEnd in the call to DMLabelCreateIndex referring to ? Also,my code complains if i use DMLabel :: labelName This name does not have a type, and must have an explicit type. For now i can do without DMLabelCreateIndex as fortran interface to DMLabel doesn't seem to exist. On Mon, Apr 1, 2013 at 11:02 PM, Matthew Knepley wrote: > On Tue, Apr 2, 2013 at 1:47 PM, Dharmendar Reddy wrote: > >> Hello, >> I looked at the source code of SNESSetFunction after i sent the >> email. From what i understood, SNESSetFunction calls DMSNESSetFunction >> internally. I was like, why not just use SNESSetFunction with out passing >> DM. But i get it now, not all problems have to use DM. If i am using DM i >> should rather use DMSNESSetFunction. A better way is DMSNESSetFunctionLocal >> right ? DMSNesSetFunction i called inside DMSNESSetFunctionLocal. >> >> >> I did look at DMPlexComputeResidualFEM but that requires userContext to >> have PetscFEM object. Is PetscFEM object accisable from Fortran or is it >> genrated using >> > > The object is just a struct, and has only information you must have for > FEM, namely > a quadrature rule, and tabulation of the basis functions are quadrature > points. > > >> bin/pythonscripts/PetscGenerateFEMQuadrature.py >> >> This script just generates a header with the struct information. You can > make this header > yourself and obviate the need for the script. > > >> I structured my code such that, user provides the Linear and bilinear >> form subroutines: >> >> subroutine biLinearFrom(u,v,cell,Kelm,eqnCtx) >> type(FunctionSpace) :: u,v >> type(FEMCell) :: cell >> PetscScalar :: Kelm >> class(*),pointer :: eqnCtx >> end subroutine >> >> Function space object provide the routines for evaluation of basis >> functions and gradients, given cell coordinates, and cell degrees of >> freedom. >> > > Can you just use these to create the PetscFEM struct? > > >> >> So i should be able to get the cell geometry information and cell degree >> of freedom data via DMPlexVecGet/SetClosure right ? >> > > Yes, although tell me why DMPlexComputeCellGeometry() does not work, and > if not you can look at the code > to see me getting the coordinate information. > > >> At this point i was able to create a DMPlex object from a mesh created by >> gmsh. I used DMPLexCreateLabel to add the physical region labels i created >> in GMSh . So no i have a mapping from given a region label, list of cells >> in that region. >> >> If i am doing a physical region wise assemble of the problem, i could use >> the DMPlexgetStratumIS for each regionlabel. >> > > Yes. 
> > >> since i am doing a cell wise assemble, i would be interested in a mapping >> from cell id to regionId (i can have a local array of labels indexed by >> region Id) so that i can pass the material information to FEMCell. Right >> now i have a integer array which holds the map from cellId to regionId. >> >> I was wondering if i could add a integer section to DM object ? >> > > You use DMLabelGetValue(), and to trade memory for speed > call DMLabelCreateIndex() first if you want. > > >> I also need to have some other auxiliary scalar data associated with the >> DM. You suggested in one of the earlier email that i created another DM >> using DMplexClone ? If i clone a object is it creating another copy of the >> mesh in memory or is it just storing pointers ? >> > > Just a pointer. > > >> I was thinking i could create a section using DMPlexCreateSection and >> pass that section to DMPlexVecGetClosure when i have to access auxilary >> data. >> > > This is exactly what I was suggesting, but you have the section be the > "default section" of the cloned DM. That way > it interacts with all PETSc functions nicely and you only have the extra > memory for a DM object which is tiny. > > Matt > > Basically the problem is to solve Poisson equation with current continuity >> equation for a semiconductor. I solve the problem by solving each equation >> until the solutions of both are consistent. For Poisson problem, the degree >> freedom is potential and takes charge density as input. For continuation >> equation degree of freedom is charge density and take potential as input. >> >> Do you have alternative approaches to what i am thinking ? >> >> On Mon, Apr 1, 2013 at 8:24 PM, Matthew Knepley wrote: >> >>> On Tue, Apr 2, 2013 at 10:16 AM, Dharmendar Reddy < >>> dharmareddy84 at gmail.com> wrote: >>> >>>> Hello, >>>> I am confused about the usage of SNESSetFunction, >>>> DMSNESSetFunction, DMSNESSetFunctionLocal and the corresponding Jacobin >>>> routines. >>>> >>>> At present my code is set up as follows. >>>> >>>> I initiate SNES with getFunction and getJacobain routines required as >>>> per SNESSet. I pass a user context which has the mesh >>>> and field layout information. I assemble the residual function and Jacobin >>>> element wise using the information from user context. My code at this point >>>> is serial, so i run the element loop from 1 to numberofTotalElements. >>>> >>>> Now i want to switch to using DMPlex. How should i change the interface >>>> to my code. >>>> >>> >>> You can always look at SNES ex62 >>> http://www.mcs.anl.gov/petsc/petsc-dev/src/snes/examples/tutorials/ex62.c.html >>> >>> >>>> If i set the dm for snes using SNESSetDM, i can see that the DM can be >>>> accessed via SNESGetDM inside getFunction and getJacobian, then i get the >>>> elementStartID and elementEndID to run the loop for assmbler. But now i >>>> will have to assemble to local vector into global vector right ? >>>> >>> >>> No, here is the sequence: >>> >>> 1) SNESSetDM(snes, dm) >>> >>> 2) DMSNESSetFunctionLocal(dm, userResidual, userCtx) >>> >>> where we have >>> >>> userResidual(DM dm, Vec X, Vec F, void *userCtx) >>> >>> Both X and F are local vectors which I normally interact with using >>> DMPlexVecGet/SetClosure(). If you >>> are using FEM, you can DMPlexComputeResidualFEM() here and >>> use DMPlexSetFEMIntegration() to >>> input point-wise physics functions as is done in ex62. >>> >>> >>>> Looks like the DMSNESSetFunctionLocal will assemble the local >>>> vector into global vector. 
The function provided to this should just >>>> evaluate the local vector. Did i understand this right ? >>>> >>>> I am confused about passing the DM in DMSNESSet. >>>> When the DM can be accessed via SNESGetDM, why do we pass it again >>>> explicitly ? >>>> >>> >>> You are not passing the SNES, so where would it come from in this call? >>> >> >> >> >> >> >>> >>> Matt >>> >>> >>>> >>>> -- >>>> ----------------------------------------------------- >>>> Dharmendar Reddy Palle >>>> Graduate Student >>>> Microelectronics Research center, >>>> University of Texas at Austin, >>>> 10100 Burnet Road, Bldg. 160 >>>> MER 2.608F, TX 78758-4445 >>>> e-mail: dharmareddy84 at gmail.com >>>> Phone: +1-512-350-9082 >>>> United States of America. >>>> Homepage: https://webspace.utexas.edu/~dpr342 >>>> >>> >>> >>> >>> -- >>> What most experimenters take for granted before they begin their >>> experiments is infinitely more interesting than any results to which their >>> experiments lead. >>> -- Norbert Wiener >>> >> >> >> >> -- >> ----------------------------------------------------- >> Dharmendar Reddy Palle >> Graduate Student >> Microelectronics Research center, >> University of Texas at Austin, >> 10100 Burnet Road, Bldg. 160 >> MER 2.608F, TX 78758-4445 >> e-mail: dharmareddy84 at gmail.com >> Phone: +1-512-350-9082 >> United States of America. >> Homepage: https://webspace.utexas.edu/~dpr342 >> > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > -- ----------------------------------------------------- Dharmendar Reddy Palle Graduate Student Microelectronics Research center, University of Texas at Austin, 10100 Burnet Road, Bldg. 160 MER 2.608F, TX 78758-4445 e-mail: dharmareddy84 at gmail.com Phone: +1-512-350-9082 United States of America. Homepage: https://webspace.utexas.edu/~dpr342 -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Tue Apr 2 00:28:52 2013 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 2 Apr 2013 16:28:52 +1100 Subject: [petsc-users] DMSNESSet In-Reply-To: References: Message-ID: On Tue, Apr 2, 2013 at 4:18 PM, Dharmendar Reddy wrote: > Hello, > I should be able to generate petscFEM struct with my code, i > have the required data, I just need to prepare it in the format used by > DMPLexComputeResidualFEM. I will give it a shot. > Cool. If its not general enough, you can always write the loop yourself, or we can look into expanding it. > What is the pStart and pEnd in the call to DMLabelCreateIndex referring to > ? Also,my code complains if i use DMLabel :: labelName > It builds an index, so it needs bounds on the possible points you will be mapping. > This name does not have a type, and must have an explicit type. > Ah, I did not put DMLabel into Fortran. Goes on the list. > For now i can do without DMLabelCreateIndex as fortran interface to > DMLabel doesn't seem to exist. > Yes, its only an optimization. Thanks, Matt > On Mon, Apr 1, 2013 at 11:02 PM, Matthew Knepley wrote: > >> On Tue, Apr 2, 2013 at 1:47 PM, Dharmendar Reddy > > wrote: >> >>> Hello, >>> I looked at the source code of SNESSetFunction after i sent the >>> email. From what i understood, SNESSetFunction calls DMSNESSetFunction >>> internally. I was like, why not just use SNESSetFunction with out passing >>> DM. But i get it now, not all problems have to use DM. 
If i am using DM i >>> should rather use DMSNESSetFunction. A better way is DMSNESSetFunctionLocal >>> right ? DMSNesSetFunction i called inside DMSNESSetFunctionLocal. >>> >>> >>> I did look at DMPlexComputeResidualFEM but that requires userContext to >>> have PetscFEM object. Is PetscFEM object accisable from Fortran or is it >>> genrated using >>> >> >> The object is just a struct, and has only information you must have for >> FEM, namely >> a quadrature rule, and tabulation of the basis functions are quadrature >> points. >> >> >>> bin/pythonscripts/PetscGenerateFEMQuadrature.py >>> >>> This script just generates a header with the struct information. You can >> make this header >> yourself and obviate the need for the script. >> >> >>> I structured my code such that, user provides the Linear and bilinear >>> form subroutines: >>> >>> subroutine biLinearFrom(u,v,cell,Kelm,eqnCtx) >>> type(FunctionSpace) :: u,v >>> type(FEMCell) :: cell >>> PetscScalar :: Kelm >>> class(*),pointer :: eqnCtx >>> end subroutine >>> >>> Function space object provide the routines for evaluation of basis >>> functions and gradients, given cell coordinates, and cell degrees of >>> freedom. >>> >> >> Can you just use these to create the PetscFEM struct? >> >> >>> >>> So i should be able to get the cell geometry information and cell degree >>> of freedom data via DMPlexVecGet/SetClosure right ? >>> >> >> Yes, although tell me why DMPlexComputeCellGeometry() does not work, and >> if not you can look at the code >> to see me getting the coordinate information. >> >> >>> At this point i was able to create a DMPlex object from a mesh created >>> by gmsh. I used DMPLexCreateLabel to add the physical region labels i >>> created in GMSh . So no i have a mapping from given a region label, list of >>> cells in that region. >>> >>> If i am doing a physical region wise assemble of the problem, i could >>> use the DMPlexgetStratumIS for each regionlabel. >>> >> >> Yes. >> >> >>> since i am doing a cell wise assemble, i would be interested in a >>> mapping from cell id to regionId (i can have a local array of labels >>> indexed by region Id) so that i can pass the material information to >>> FEMCell. Right now i have a integer array which holds the map from cellId >>> to regionId. >>> >>> I was wondering if i could add a integer section to DM object ? >>> >> >> You use DMLabelGetValue(), and to trade memory for speed >> call DMLabelCreateIndex() first if you want. >> >> >>> I also need to have some other auxiliary scalar data associated with >>> the DM. You suggested in one of the earlier email that i created another >>> DM using DMplexClone ? If i clone a object is it creating another copy of >>> the mesh in memory or is it just storing pointers ? >>> >> >> Just a pointer. >> >> >>> I was thinking i could create a section using DMPlexCreateSection and >>> pass that section to DMPlexVecGetClosure when i have to access auxilary >>> data. >>> >> >> This is exactly what I was suggesting, but you have the section be the >> "default section" of the cloned DM. That way >> it interacts with all PETSc functions nicely and you only have the extra >> memory for a DM object which is tiny. >> >> Matt >> >> Basically the problem is to solve Poisson equation with current >>> continuity equation for a semiconductor. I solve the problem by solving >>> each equation until the solutions of both are consistent. For Poisson >>> problem, the degree freedom is potential and takes charge density as input. 
>>> For continuation equation degree of freedom is charge density and take >>> potential as input. >>> >>> Do you have alternative approaches to what i am thinking ? >>> >>> On Mon, Apr 1, 2013 at 8:24 PM, Matthew Knepley wrote: >>> >>>> On Tue, Apr 2, 2013 at 10:16 AM, Dharmendar Reddy < >>>> dharmareddy84 at gmail.com> wrote: >>>> >>>>> Hello, >>>>> I am confused about the usage of SNESSetFunction, >>>>> DMSNESSetFunction, DMSNESSetFunctionLocal and the corresponding Jacobin >>>>> routines. >>>>> >>>>> At present my code is set up as follows. >>>>> >>>>> I initiate SNES with getFunction and getJacobain routines required as >>>>> per SNESSet. I pass a user context which has the mesh >>>>> and field layout information. I assemble the residual function and Jacobin >>>>> element wise using the information from user context. My code at this point >>>>> is serial, so i run the element loop from 1 to numberofTotalElements. >>>>> >>>>> Now i want to switch to using DMPlex. How should i change the >>>>> interface to my code. >>>>> >>>> >>>> You can always look at SNES ex62 >>>> http://www.mcs.anl.gov/petsc/petsc-dev/src/snes/examples/tutorials/ex62.c.html >>>> >>>> >>>>> If i set the dm for snes using SNESSetDM, i can see that the DM can be >>>>> accessed via SNESGetDM inside getFunction and getJacobian, then i get the >>>>> elementStartID and elementEndID to run the loop for assmbler. But now i >>>>> will have to assemble to local vector into global vector right ? >>>>> >>>> >>>> No, here is the sequence: >>>> >>>> 1) SNESSetDM(snes, dm) >>>> >>>> 2) DMSNESSetFunctionLocal(dm, userResidual, userCtx) >>>> >>>> where we have >>>> >>>> userResidual(DM dm, Vec X, Vec F, void *userCtx) >>>> >>>> Both X and F are local vectors which I normally interact with using >>>> DMPlexVecGet/SetClosure(). If you >>>> are using FEM, you can DMPlexComputeResidualFEM() here and >>>> use DMPlexSetFEMIntegration() to >>>> input point-wise physics functions as is done in ex62. >>>> >>>> >>>>> Looks like the DMSNESSetFunctionLocal will assemble the local >>>>> vector into global vector. The function provided to this should just >>>>> evaluate the local vector. Did i understand this right ? >>>>> >>>>> I am confused about passing the DM in DMSNESSet. >>>>> When the DM can be accessed via SNESGetDM, why do we pass it again >>>>> explicitly ? >>>>> >>>> >>>> You are not passing the SNES, so where would it come from in this call? >>>> >>> >>> >>> >>> >>> >>>> >>>> Matt >>>> >>>> >>>>> >>>>> -- >>>>> ----------------------------------------------------- >>>>> Dharmendar Reddy Palle >>>>> Graduate Student >>>>> Microelectronics Research center, >>>>> University of Texas at Austin, >>>>> 10100 Burnet Road, Bldg. 160 >>>>> MER 2.608F, TX 78758-4445 >>>>> e-mail: dharmareddy84 at gmail.com >>>>> Phone: +1-512-350-9082 >>>>> United States of America. >>>>> Homepage: https://webspace.utexas.edu/~dpr342 >>>>> >>>> >>>> >>>> >>>> -- >>>> What most experimenters take for granted before they begin their >>>> experiments is infinitely more interesting than any results to which their >>>> experiments lead. >>>> -- Norbert Wiener >>>> >>> >>> >>> >>> -- >>> ----------------------------------------------------- >>> Dharmendar Reddy Palle >>> Graduate Student >>> Microelectronics Research center, >>> University of Texas at Austin, >>> 10100 Burnet Road, Bldg. 160 >>> MER 2.608F, TX 78758-4445 >>> e-mail: dharmareddy84 at gmail.com >>> Phone: +1-512-350-9082 >>> United States of America. 
>>> Homepage: https://webspace.utexas.edu/~dpr342 >>> >> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> > > > > -- > ----------------------------------------------------- > Dharmendar Reddy Palle > Graduate Student > Microelectronics Research center, > University of Texas at Austin, > 10100 Burnet Road, Bldg. 160 > MER 2.608F, TX 78758-4445 > e-mail: dharmareddy84 at gmail.com > Phone: +1-512-350-9082 > United States of America. > Homepage: https://webspace.utexas.edu/~dpr342 > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From friedmud at gmail.com Tue Apr 2 12:32:19 2013 From: friedmud at gmail.com (Derek Gaston) Date: Tue, 2 Apr 2013 11:32:19 -0600 Subject: [petsc-users] Redo PetscOptions From Commandline Message-ID: Hello all, A quick question: If I call PetscOptionsClear()... what method do I need to call to reparse any options set on the command line? Is it even possible? Thanks, Derek -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Tue Apr 2 13:09:51 2013 From: bsmith at mcs.anl.gov (Barry Smith) Date: Tue, 2 Apr 2013 13:09:51 -0500 Subject: [petsc-users] Redo PetscOptions From Commandline In-Reply-To: References: Message-ID: Derek, You can use PetscGetArgs() followed by by PetscOptionsInsert(). Let us know if you have any troubles with this. Barry On Apr 2, 2013, at 12:32 PM, Derek Gaston wrote: > Hello all, > > A quick question: If I call PetscOptionsClear()... what method do I need to call to reparse any options set on the command line? Is it even possible? > > Thanks, > Derek From friedmud at gmail.com Tue Apr 2 13:30:55 2013 From: friedmud at gmail.com (Derek Gaston) Date: Tue, 2 Apr 2013 12:30:55 -0600 Subject: [petsc-users] Redo PetscOptions From Commandline In-Reply-To: References: Message-ID: Thanks Barry, that works! Derek On Tue, Apr 2, 2013 at 12:09 PM, Barry Smith wrote: > > Derek, > > You can use PetscGetArgs() followed by by PetscOptionsInsert(). Let us > know if you have any troubles with this. > > Barry > > > On Apr 2, 2013, at 12:32 PM, Derek Gaston wrote: > > > Hello all, > > > > A quick question: If I call PetscOptionsClear()... what method do I need > to call to reparse any options set on the command line? Is it even > possible? > > > > Thanks, > > Derek > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dharmareddy84 at gmail.com Wed Apr 3 07:04:21 2013 From: dharmareddy84 at gmail.com (Dharmendar Reddy) Date: Wed, 3 Apr 2013 07:04:21 -0500 Subject: [petsc-users] DMPlexProjectFunction and Boundary Conditions Message-ID: Hello, I see that in DMPlexComputeResidualFEM boundary conditions are applied using DMPlexProjectFunctionLocal. I was wondering why is there a loop over all local vertex Ids and a call to evaluation function on line 241 below. If i am not wrong, i can see that VecSetValuesSection is only to the points indicated as boundary points in DMPlexCreatSection. Should one call lines 235:242 only if v is constrained node ? Also, should the user provide functions for all components of all fields, is there a way to update add value of only the field that is constrained ? 
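Picking up Barry's PetscGetArgs()/PetscOptionsInsert() suggestion above, the clear-and-reparse sequence might look roughly like this in C. This is a minimal sketch using the signatures of that era; the wrapper name ResetOptionsFromCommandLine is just for illustration.

    #include <petscsys.h>

    /* Wipe the options database and re-read whatever was passed on the
       command line (the argc/argv recorded by PetscInitialize). */
    static PetscErrorCode ResetOptionsFromCommandLine(void)
    {
      int            argc;
      char         **argv;
      PetscErrorCode ierr;

      PetscFunctionBegin;
      ierr = PetscOptionsClear();CHKERRQ(ierr);
      ierr = PetscGetArgs(&argc, &argv);CHKERRQ(ierr);
      ierr = PetscOptionsInsert(&argc, &argv, NULL);CHKERRQ(ierr);
      PetscFunctionReturn(0);
    }

The DMPlexProjectFunctionLocal loop that the preceding question refers to is quoted next.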
234: for (v = vStart; v < vEnd; ++v) {235: PetscInt dof, off; 237: PetscSectionGetDof (cSection, v, &dof);238: PetscSectionGetOffset (cSection, v, &off);239: if (dof > dim) SETERRQ2 (PetscObjectComm ((PetscObject )dm), PETSC_ERR_ARG_WRONG, "Cannot have more coordinates %d then dimensions %d", dof, dim);240: for (d = 0; d < dof; ++d) coords[d] = PetscRealPart(cArray[off+d]);241: for (c = 0; c < numComp; ++c) values[c] = (*funcs[c])(coords);242: VecSetValuesSection (localX, section, v, values, mode);243: } -- ----------------------------------------------------- Dharmendar Reddy Palle Graduate Student Microelectronics Research center, University of Texas at Austin, 10100 Burnet Road, Bldg. 160 MER 2.608F, TX 78758-4445 e-mail: dharmareddy84 at gmail.com Phone: +1-512-350-9082 United States of America. Homepage: https://webspace.utexas.edu/~dpr342 -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Wed Apr 3 10:41:23 2013 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 4 Apr 2013 02:41:23 +1100 Subject: [petsc-users] DMPlexProjectFunction and Boundary Conditions In-Reply-To: References: Message-ID: On Wed, Apr 3, 2013 at 11:04 PM, Dharmendar Reddy wrote: > Hello, > I see that in DMPlexComputeResidualFEM boundary conditions are > applied using DMPlexProjectFunctionLocal. I was wondering why is there a > loop over all local vertex Ids and a call to > evaluation function on line 241 below. If i am not wrong, i can see that > VecSetValuesSection is only to the points indicated as boundary points in > DMPlexCreatSection. Should one call lines 235:242 only if v is constrained > node ? Also, should the user provide functions for all components of all > fields, is there a way to update add value of only the field that is > constrained ? > > 234: for (v = vStart; v < vEnd; ++v) {235: PetscInt dof, off; > 237: PetscSectionGetDof (cSection, v, &dof);238: PetscSectionGetOffset (cSection, v, &off);239: if (dof > dim) SETERRQ2 (PetscObjectComm ((PetscObject )dm), PETSC_ERR_ARG_WRONG, "Cannot have more coordinates %d then dimensions %d", dof, dim);240: for (d = 0; d < dof; ++d) coords[d] = PetscRealPart(cArray[off+d]);241: for (c = 0; c < numComp; ++c) values[c] = (*funcs[c])(coords);242: VecSetValuesSection (localX, section, v, values, mode);243: } > > ProjectFunction() just implements \int_\Omega v f(x). If you really don't want to add a no-op function, I can check for NULL, but I think that is slower in the inner loop. Matt > -- > ----------------------------------------------------- > Dharmendar Reddy Palle > Graduate Student > Microelectronics Research center, > University of Texas at Austin, > 10100 Burnet Road, Bldg. 160 > MER 2.608F, TX 78758-4445 > e-mail: dharmareddy84 at gmail.com > Phone: +1-512-350-9082 > United States of America. > Homepage: https://webspace.utexas.edu/~dpr342 > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From dharmareddy84 at gmail.com Wed Apr 3 11:09:10 2013 From: dharmareddy84 at gmail.com (Dharmendar Reddy) Date: Wed, 3 Apr 2013 11:09:10 -0500 Subject: [petsc-users] DMPlexProjectFunction and Boundary Conditions In-Reply-To: References: Message-ID: On Wed, Apr 3, 2013 at 10:41 AM, Matthew Knepley wrote: > On Wed, Apr 3, 2013 at 11:04 PM, Dharmendar Reddy > wrote: > >> Hello, >> I see that in DMPlexComputeResidualFEM boundary conditions are >> applied using DMPlexProjectFunctionLocal. I was wondering why is there a >> loop over all local vertex Ids and a call to >> evaluation function on line 241 below. If i am not wrong, i can see that >> VecSetValuesSection is only to the points indicated as boundary points in >> DMPlexCreatSection. Should one call lines 235:242 only if v is constrained >> node ? Also, should the user provide functions for all components of all >> fields, is there a way to update add value of only the field that is >> constrained ? >> >> 234: for (v = vStart; v < vEnd; ++v) {235: PetscInt dof, off; >> 237: PetscSectionGetDof (cSection, v, &dof);238: PetscSectionGetOffset (cSection, v, &off);239: if (dof > dim) SETERRQ2 (PetscObjectComm ((PetscObject )dm), PETSC_ERR_ARG_WRONG, "Cannot have more coordinates %d then dimensions %d", dof, dim);240: for (d = 0; d < dof; ++d) coords[d] = PetscRealPart(cArray[off+d]);241: for (c = 0; c < numComp; ++c) values[c] = (*funcs[c])(coords);242: VecSetValuesSection (localX, section, v, values, mode);243: } >> >> ProjectFunction() just implements \int_\Omega v f(x). If you really don't > want to add a no-op function, I can > check for NULL, but I think that is slower in the inner loop. > > Hello, I do not know how to pass Array of functions in Fortran. At this point i am not that worried about performance, but my understanding was that ProjectFunctionLocal along with fem.bcFuns was used set boundary conditions in DMPlexComputeResidualFEM. I think i miss understood the code then. How do i use the bcFields and bcPoints information used to create defualtSection to set boundary conditions ? I was thinking VecSetValuesSection with mode=Insert_BC_values was updating the field values on the points indicated as boundary. > Matt > > > >> -- >> ----------------------------------------------------- >> Dharmendar Reddy Palle >> Graduate Student >> Microelectronics Research center, >> University of Texas at Austin, >> 10100 Burnet Road, Bldg. 160 >> MER 2.608F, TX 78758-4445 >> e-mail: dharmareddy84 at gmail.com >> Phone: +1-512-350-9082 >> United States of America. >> Homepage: https://webspace.utexas.edu/~dpr342 >> > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > -- ----------------------------------------------------- Dharmendar Reddy Palle Graduate Student Microelectronics Research center, University of Texas at Austin, 10100 Burnet Road, Bldg. 160 MER 2.608F, TX 78758-4445 e-mail: dharmareddy84 at gmail.com Phone: +1-512-350-9082 United States of America. Homepage: https://webspace.utexas.edu/~dpr342 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mpovolot at purdue.edu Wed Apr 3 11:55:01 2013 From: mpovolot at purdue.edu (Michael Povolotskyi) Date: Wed, 03 Apr 2013 12:55:01 -0400 Subject: [petsc-users] copy vector into a matrix Message-ID: <515C5EE5.5040209@purdue.edu> Dear PETSc developers, I have a following question: I have a dense serial matrix of M x N size I have a N vectors of size M. I need to copy content of those vectors to the columns of the matrix. What is the fastest way of doing this? Thank you, Michael. -- Michael Povolotskyi, PhD Research Assistant Professor Network for Computational Nanotechnology 207 S Martin Jischke Drive Purdue University, DLR, room 441-10 West Lafayette, Indiana 47907 phone: +1-765-494-9396 fax: +1-765-496-6026 From jedbrown at mcs.anl.gov Wed Apr 3 12:52:05 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Wed, 3 Apr 2013 12:52:05 -0500 Subject: [petsc-users] copy vector into a matrix In-Reply-To: <515C5EE5.5040209@purdue.edu> References: <515C5EE5.5040209@purdue.edu> Message-ID: On Wed, Apr 3, 2013 at 11:55 AM, Michael Povolotskyi wrote: > Dear PETSc developers, > I have a following question: > > I have a dense serial matrix of M x N size > I have a N vectors of size M. > I need to copy content of those vectors to the columns of the matrix. > What is the fastest way of doing this? > MatGetLocalSize(A,&m,NULL); MatDenseGetArray(A,&a); for (i=0; i From mpovolot at purdue.edu Wed Apr 3 12:58:16 2013 From: mpovolot at purdue.edu (Michael Povolotskyi) Date: Wed, 03 Apr 2013 13:58:16 -0400 Subject: [petsc-users] copy vector into a matrix In-Reply-To: References: <515C5EE5.5040209@purdue.edu> Message-ID: <515C6DB8.50405@purdue.edu> On 04/03/2013 01:52 PM, Jed Brown wrote: > On Wed, Apr 3, 2013 at 11:55 AM, Michael Povolotskyi > > wrote: > > Dear PETSc developers, > I have a following question: > > I have a dense serial matrix of M x N size > I have a N vectors of size M. > I need to copy content of those vectors to the columns of the matrix. > What is the fastest way of doing this? > > > MatGetLocalSize(A,&m,NULL); > MatDenseGetArray(A,&a); > for (i=0; i const PetscScalar *x; > VecGetArrayRead(X[i],&x); > PetscMemcpy(a+i*m,x,m*sizeof(PetscScalar)); > VecRestoreArrayRead(X[i],&x); > } > MatDenseRestoreArray(A,&a); > Thank you for help. Michael. -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Wed Apr 3 15:31:52 2013 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 4 Apr 2013 07:31:52 +1100 Subject: [petsc-users] DMPlexProjectFunction and Boundary Conditions In-Reply-To: References: Message-ID: On Thu, Apr 4, 2013 at 3:09 AM, Dharmendar Reddy wrote: > > > > On Wed, Apr 3, 2013 at 10:41 AM, Matthew Knepley wrote: > >> On Wed, Apr 3, 2013 at 11:04 PM, Dharmendar Reddy < >> dharmareddy84 at gmail.com> wrote: >> >>> Hello, >>> I see that in DMPlexComputeResidualFEM boundary conditions are >>> applied using DMPlexProjectFunctionLocal. I was wondering why is there a >>> loop over all local vertex Ids and a call to >>> evaluation function on line 241 below. If i am not wrong, i can see that >>> VecSetValuesSection is only to the points indicated as boundary points in >>> DMPlexCreatSection. Should one call lines 235:242 only if v is constrained >>> node ? Also, should the user provide functions for all components of all >>> fields, is there a way to update add value of only the field that is >>> constrained ? 
>>> >>> 234: for (v = vStart; v < vEnd; ++v) {235: PetscInt dof, off; >>> 237: PetscSectionGetDof (cSection, v, &dof);238: PetscSectionGetOffset (cSection, v, &off);239: if (dof > dim) SETERRQ2 (PetscObjectComm ((PetscObject )dm), PETSC_ERR_ARG_WRONG, "Cannot have more coordinates %d then dimensions %d", dof, dim);240: for (d = 0; d < dof; ++d) coords[d] = PetscRealPart(cArray[off+d]);241: for (c = 0; c < numComp; ++c) values[c] = (*funcs[c])(coords);242: VecSetValuesSection (localX, section, v, values, mode);243: } >>> >>> ProjectFunction() just implements \int_\Omega v f(x). If you really >> don't want to add a no-op function, I can >> check for NULL, but I think that is slower in the inner loop. >> >> Hello, > > I do not know how to pass Array of functions in Fortran. At this point i > am not that worried about performance, but my understanding was that > We have just redone the way function pointers are handled in PETSc. I was waiting to code this until someone asked. > ProjectFunctionLocal along with fem.bcFuns was used set boundary > conditions in DMPlexComputeResidualFEM. > Yes that is right. > I think i miss understood the code then. How do i use the bcFields and > bcPoints information used to create defualtSection to set boundary > conditions ? > There are several ways to do BC, and the DMPlex stuff should support them all. However, the one I am using here is to eliminate the constrained dofs from the global system, but not the local system. So, first constrained dofs are marked in the default PetscSection for the DM. In ProjectFunction() the INSERT_BC_VALUES makes sure only marked dofs get values. You are correct that I could optimize the loop to run over only points with constrained dofs, however this is a < 10% optimization which I am ignoring now. > I was thinking VecSetValuesSection with mode=Insert_BC_values was updating > the field values on the points indicated as boundary. > Yes, that is right. Thanks, Matt > Matt >> >> >> >>> -- >>> ----------------------------------------------------- >>> Dharmendar Reddy Palle >>> Graduate Student >>> Microelectronics Research center, >>> University of Texas at Austin, >>> 10100 Burnet Road, Bldg. 160 >>> MER 2.608F, TX 78758-4445 >>> e-mail: dharmareddy84 at gmail.com >>> Phone: +1-512-350-9082 >>> United States of America. >>> Homepage: https://webspace.utexas.edu/~dpr342 >>> >> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> > > > > -- > ----------------------------------------------------- > Dharmendar Reddy Palle > Graduate Student > Microelectronics Research center, > University of Texas at Austin, > 10100 Burnet Road, Bldg. 160 > MER 2.608F, TX 78758-4445 > e-mail: dharmareddy84 at gmail.com > Phone: +1-512-350-9082 > United States of America. > Homepage: https://webspace.utexas.edu/~dpr342 > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From cpachaj at gmail.com Wed Apr 3 16:24:54 2013 From: cpachaj at gmail.com (Carlos Pachajoa) Date: Wed, 03 Apr 2013 23:24:54 +0200 Subject: [petsc-users] DM arbitrary assignment of ranks Message-ID: <515C9E26.5070008@gmail.com> Hello, I'm working on a CFD simulator that will be used for educational purposes. 
I'm using PETSc to solve a Poisson equation for the pressure, using DM and a 5-point stencil. I would like to extend it to work on parallel. Since the code will also be used to teach some fundamental concepts of parallel programming, it would be ideal if the user can set which rank deals with each subdomain. Out of the DM, one can obtain the corners of the region corresponding to a rank, however, I have not found a function to assign a processor to a region. I know that you can assign a set of rows to any processor when explicitly setting the columns of a matrix, but I wonder if this is also possible using DM. Thank you for your time and your hints. Best regards, Carlos From knepley at gmail.com Wed Apr 3 16:49:22 2013 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 4 Apr 2013 08:49:22 +1100 Subject: [petsc-users] DM arbitrary assignment of ranks In-Reply-To: <515C9E26.5070008@gmail.com> References: <515C9E26.5070008@gmail.com> Message-ID: On Thu, Apr 4, 2013 at 8:24 AM, Carlos Pachajoa wrote: > Hello, > > I'm working on a CFD simulator that will be > used for educational purposes. > > I'm using PETSc to solve a Poisson equation for the pressure, using DM and > a 5-point stencil. I would like to extend it to work on parallel. > > Since the code will also be used to teach some fundamental concepts of > parallel programming, it would be ideal if the user can set which rank > deals with each subdomain. Out of the DM, one can obtain the corners of the > region corresponding to a rank, however, I have not found a function to > assign a processor to a region. I know that you can assign a set of rows to > any processor when explicitly setting the columns of a matrix, but I wonder > if this is also possible using DM. > The DMDA prescribes a process layout grid which you can set yourself using the lx,ly,lz arguments to the constructor. Matt > Thank you for your time and your hints. > > Best regards, > > Carlos > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Wed Apr 3 16:53:41 2013 From: bsmith at mcs.anl.gov (Barry Smith) Date: Wed, 3 Apr 2013 16:53:41 -0500 Subject: [petsc-users] DM arbitrary assignment of ranks In-Reply-To: <515C9E26.5070008@gmail.com> References: <515C9E26.5070008@gmail.com> Message-ID: <8EDC0B7B-A497-4F9E-9A56-4A330D47CC80@mcs.anl.gov> Carlos, It is not clear what you want. As Matt noted you can use the lx,ly,lz arguments to control the sizes of the subdomains if that is what you want. PETSc always uses the "natural ordering of processes" for the MPI communicator passed into the DMDA creation routine. In 2d MPI rank 0 gets i coordinates 0 to m_0 - 1, j coordinates 0 to n_0 -1; rank 1 gets i coordinates m_0 to m_1 -1, j coordinates 0 to n_0 - 1 etc. I believe the users manual has a graphic describing this. Say we have 6 MPI processes and have partitioned them for the DMDA as 3 in the i coordinates and 2 in the j coordinates this looks like 3 4 5 0 1 2 If you wish to have different MPI ranks for each domain you need to write some MPI code. You construct a new MPI communicator with the same processes as the MPI communicator passed to the DMDA create routine but with the other ordering you want to use. Now you just use MPI_Comm_rank() of the new MPI communicator to "label" each subdomain. 
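As an illustration of the relabeling just described, here is a minimal sketch. The assumptions are mine: exactly 6 MPI ranks, one example permutation, and the petsc-3.3 calling sequence for DMDACreate2d; adapt as needed.

#include <petscdmda.h>

/* Sketch: relabel the 6 ranks of PETSC_COMM_WORLD so that the DMDA's
   natural ordering of subdomains corresponds to a user-chosen layout.
   Assumes exactly 6 MPI ranks and a 3 x 2 process grid. */
int main(int argc, char **argv)
{
  MPI_Comm    newcomm;
  PetscMPIInt rank;
  DM          da;
  /* key (new rank) for each old rank; with a 3 x 2 DMDA this places the
     old ranks in the layout  1 3 5 / 0 2 4  */
  const int desired[6] = {0, 3, 1, 4, 2, 5};

  PetscInitialize(&argc, &argv, NULL, NULL);
  MPI_Comm_rank(PETSC_COMM_WORLD, &rank);
  /* All ranks share color 0; the key argument fixes the new ordering. */
  MPI_Comm_split(PETSC_COMM_WORLD, 0, desired[rank], &newcomm);

  /* Pass the reordered communicator to the DMDA: 3 x 2 process grid,
     30 x 20 global grid, 1 dof, stencil width 1 (petsc-3.3 signature). */
  DMDACreate2d(newcomm, DMDA_BOUNDARY_NONE, DMDA_BOUNDARY_NONE,
               DMDA_STENCIL_STAR, 30, 20, 3, 2, 1, 1, NULL, NULL, &da);

  DMDestroy(&da);
  MPI_Comm_free(&newcomm);
  PetscFinalize();
  return 0;
}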
So you could (if you wanted to) label them as 1 3 5 0 2 4 relative to the other communicator. Barry On Apr 3, 2013, at 4:24 PM, Carlos Pachajoa wrote: > Hello, > I'm working on a CFD simulator that will be used for educational purposes. > > I'm using PETSc to solve a Poisson equation for the pressure, using DM and a 5-point stencil. I would like to extend it to work on parallel. > > Since the code will also be used to teach some fundamental concepts of parallel programming, it would be ideal if the user can set which rank deals with each subdomain. Out of the DM, one can obtain the corners of the region corresponding to a rank, however, I have not found a function to assign a processor to a region. I know that you can assign a set of rows to any processor when explicitly setting the columns of a matrix, but I wonder if this is also possible using DM. > > Thank you for your time and your hints. > > Best regards, > > Carlos From cpachaj at gmail.com Wed Apr 3 18:11:13 2013 From: cpachaj at gmail.com (Carlos Pachajoa) Date: Thu, 04 Apr 2013 01:11:13 +0200 Subject: [petsc-users] DM arbitrary assignment of ranks In-Reply-To: <8EDC0B7B-A497-4F9E-9A56-4A330D47CC80@mcs.anl.gov> References: <515C9E26.5070008@gmail.com> <8EDC0B7B-A497-4F9E-9A56-4A330D47CC80@mcs.anl.gov> Message-ID: <515CB711.3060506@gmail.com> Hello, thank you very much. This is what I was looking for. Best regards, Carlos On 04/03/13 23:53, Barry Smith wrote: > > Carlos, > > It is not clear what you want. As Matt noted you can use the lx,ly,lz arguments to control the sizes of the subdomains if that is what you want. > > PETSc always uses the "natural ordering of processes" for the MPI communicator passed into the DMDA creation routine. In 2d MPI rank 0 gets i coordinates 0 to m_0 - 1, j coordinates 0 to n_0 -1; rank 1 gets i coordinates m_0 to m_1 -1, j coordinates 0 to n_0 - 1 etc. I believe the users manual has a graphic describing this. Say we have 6 MPI processes and have partitioned them for the DMDA as 3 in the i coordinates and 2 in the j coordinates this looks like > > 3 4 5 > 0 1 2 > > If you wish to have different MPI ranks for each domain you need to write some MPI code. You construct a new MPI communicator with the same processes as the MPI communicator passed to the DMDA create routine but with the other ordering you want to use. Now you just use MPI_Comm_rank() of the new MPI communicator to "label" each subdomain. So you could (if you wanted to) label them as > > 1 3 5 > 0 2 4 > > relative to the other communicator. > > Barry > > On Apr 3, 2013, at 4:24 PM, Carlos Pachajoa wrote: > >> Hello, >> I'm working on a CFD simulator that will be used for educational purposes. >> >> I'm using PETSc to solve a Poisson equation for the pressure, using DM and a 5-point stencil. I would like to extend it to work on parallel. >> >> Since the code will also be used to teach some fundamental concepts of parallel programming, it would be ideal if the user can set which rank deals with each subdomain. Out of the DM, one can obtain the corners of the region corresponding to a rank, however, I have not found a function to assign a processor to a region. I know that you can assign a set of rows to any processor when explicitly setting the columns of a matrix, but I wonder if this is also possible using DM. >> >> Thank you for your time and your hints. 
>> >> Best regards, >> >> Carlos > From nico.schloemer at gmail.com Thu Apr 4 11:43:19 2013 From: nico.schloemer at gmail.com (=?ISO-8859-1?Q?Nico_Schl=F6mer?=) Date: Thu, 4 Apr 2013 18:43:19 +0200 Subject: [petsc-users] complex-valued problem, singular preconditioner Message-ID: Hi, I've got this complex-valued problem - \Delta u + i omega u = f (where omega typically >>1). According to , this problem can be solved well with, e.g., BiCGStab and and AMG approach (after it's been broken up into real and imaginary part). Using PETSc, this approach indeed works well. Except, that is, for somewhat rough discretizations. The output I'm getting would be something like 0 KSP preconditioned resid norm 4.941153318127e+75 true resid norm 5.201089914056e+02 ||r(i)||/||b|| 1.000000000000e+00 1 KSP preconditioned resid norm 6.449747027242e+59 true resid norm 2.018314797006e+03 ||r(i)||/||b|| 3.880561248425e+00 after which PETSc happily aborts with CONVERGED_RTOL. First of all, ||r(i)||/||b|| doesn't seem to be what the stopping criterion looks at (I always thought it would be). Second, obviously there's something fishy going on with the hypre_amg preconditioner, but I can't quite point my finger at it. Anyone else? --Nico From knepley at gmail.com Thu Apr 4 11:54:18 2013 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 5 Apr 2013 03:54:18 +1100 Subject: [petsc-users] complex-valued problem, singular preconditioner In-Reply-To: References: Message-ID: On Fri, Apr 5, 2013 at 3:43 AM, Nico Schl?mer wrote: > Hi, > > I've got this complex-valued problem > > - \Delta u + i omega u = f > > (where omega typically >>1). According to > , > this problem can be solved well with, e.g., BiCGStab and and AMG > approach (after it's been broken up into real and imaginary part). > > Using PETSc, this approach indeed works well. Except, that is, for > somewhat rough discretizations. The output I'm getting would be > something like > > 0 KSP preconditioned resid norm 4.941153318127e+75 true resid norm > 5.201089914056e+02 ||r(i)||/||b|| 1.000000000000e+00 > 1 KSP preconditioned resid norm 6.449747027242e+59 true resid norm > 2.018314797006e+03 ||r(i)||/||b|| 3.880561248425e+00 > > after which PETSc happily aborts with CONVERGED_RTOL. First of all, > "abort" is used incorrectly here. > ||r(i)||/||b|| doesn't seem to be what the stopping criterion looks at > Its looking at preconditioned r / b. > (I always thought it would be). Second, obviously there's something > fishy going on with the hypre_amg preconditioner, but I can't quite > point my finger at it. > Your problem is likely close o singular and Hypre is known to crap out there. Use ML, and you can use -coarse_pc_type svd. Matt > Anyone else? > > --Nico > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Thu Apr 4 13:12:50 2013 From: bsmith at mcs.anl.gov (Barry Smith) Date: Thu, 4 Apr 2013 13:12:50 -0500 Subject: [petsc-users] complex-valued problem, singular preconditioner In-Reply-To: References: Message-ID: On Apr 4, 2013, at 11:43 AM, Nico Schl?mer wrote: > Hi, > > I've got this complex-valued problem > > - \Delta u + i omega u = f > > (where omega typically >>1). According to > , > this problem can be solved well with, e.g., BiCGStab and and AMG > approach (after it's been broken up into real and imaginary part). 
> > Using PETSc, this approach indeed works well. Except, that is, for > somewhat rough discretizations. The output I'm getting would be > something like > > 0 KSP preconditioned resid norm 4.941153318127e+75 true resid norm > 5.201089914056e+02 ||r(i)||/||b|| 1.000000000000e+00 > 1 KSP preconditioned resid norm 6.449747027242e+59 true resid norm > 2.018314797006e+03 ||r(i)||/||b|| 3.880561248425e+00 By default it is using the preconditioned residual for convergence which as gone from 4.941153318127e+75 to 6.449747027242e+59 (of course in this situation the preconditioner is garbage so these numbers are meaningless). You can run with -ksp_pc_side right to get it to use right preconditioning and hence the unpreconditioned residual norm for the convergence test. But as Matt notes the preconditioner is failing for this problem Barry > > after which PETSc happily aborts with CONVERGED_RTOL. First of all, > ||r(i)||/||b|| doesn't seem to be what the stopping criterion looks at > (I always thought it would be). Second, obviously there's something > fishy going on with the hypre_amg preconditioner, but I can't quite > point my finger at it. > > Anyone else? > > --Nico From alexvg77 at gmail.com Thu Apr 4 13:18:11 2013 From: alexvg77 at gmail.com (Alexander Goncharov) Date: Thu, 04 Apr 2013 11:18:11 -0700 Subject: [petsc-users] MPI partitioning Message-ID: <515DC3E3.3020802@gmail.com> Dear Petsc users, I have a question regarding FEM mesh partitioning. When I create MPI partioning using MPIAdj matrix do I need to optimize FEM bandwidth using MatGetOrdering before calling MatPartitioningCreate function? Thank you in advance! From bsmith at mcs.anl.gov Thu Apr 4 13:39:37 2013 From: bsmith at mcs.anl.gov (Barry Smith) Date: Thu, 4 Apr 2013 13:39:37 -0500 Subject: [petsc-users] MPI partitioning In-Reply-To: <515DC3E3.3020802@gmail.com> References: <515DC3E3.3020802@gmail.com> Message-ID: <8F3CC6DA-255C-4816-9A8C-B89A4F5DCC79@mcs.anl.gov> On Apr 4, 2013, at 1:18 PM, Alexander Goncharov wrote: > Dear Petsc users, > > I have a question regarding FEM mesh partitioning. > > When I create MPI partioning using MPIAdj matrix do I need to optimize FEM bandwidth using MatGetOrdering before calling MatPartitioningCreate function? No, you do not want to do that. > > Thank you in advance! After you have determined the partitioning you may want to renumber on each process to reduce FEM bandwidth (note this reordering is sequential and just for the local nodes/elements on each process.) Barry From alexvg77 at gmail.com Thu Apr 4 13:45:08 2013 From: alexvg77 at gmail.com (Alexander Goncharov) Date: Thu, 04 Apr 2013 11:45:08 -0700 Subject: [petsc-users] MPI partitioning In-Reply-To: <8F3CC6DA-255C-4816-9A8C-B89A4F5DCC79@mcs.anl.gov> References: <515DC3E3.3020802@gmail.com> <8F3CC6DA-255C-4816-9A8C-B89A4F5DCC79@mcs.anl.gov> Message-ID: <515DCA34.8000504@gmail.com> On 04/04/2013 11:39 AM, Barry Smith wrote: > On Apr 4, 2013, at 1:18 PM, Alexander Goncharov wrote: > >> Dear Petsc users, >> >> I have a question regarding FEM mesh partitioning. >> >> When I create MPI partioning using MPIAdj matrix do I need to optimize FEM bandwidth using MatGetOrdering before calling MatPartitioningCreate function? > No, you do not want to do that. >> Thank you in advance! > After you have determined the partitioning you may want to renumber on each process to reduce FEM bandwidth (note this reordering is sequential and just for the local nodes/elements on each process.) > > Barry > > Thank you, Barry! 
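For reference, a minimal sketch of the partitioning workflow discussed in this thread: build a MATMPIADJ graph of the element connectivity, partition it, and only afterwards apply any local bandwidth-reducing ordering. The ring-shaped toy adjacency below is invented purely for illustration, and the example assumes exactly 2 MPI ranks.

#include <petscmat.h>

/* Sketch (run on 2 ranks): partition a toy element-connectivity graph.
   The ia/ja arrays are invented; in a real FEM code they come from the
   mesh's dual graph.  No bandwidth-reducing ordering is applied before
   partitioning; per the discussion, any MatGetOrdering() pass is done
   per process after the partition is known. */
int main(int argc, char **argv)
{
  Mat             adj;
  MatPartitioning part;
  IS              is;   /* target rank for each locally owned element */

  PetscInitialize(&argc, &argv, NULL, NULL);
  {
    /* 4 elements total, 2 per rank, connected in a ring: 0-1-2-3-0.
       For this layout both ranks happen to get the same local ja. */
    PetscInt nlocal = 2, N = 4, *ia, *ja;
    PetscMalloc((nlocal + 1) * sizeof(PetscInt), &ia);
    PetscMalloc(2 * nlocal * sizeof(PetscInt), &ja);
    ia[0] = 0; ia[1] = 2; ia[2] = 4;
    ja[0] = 1; ja[1] = 3; ja[2] = 0; ja[3] = 2;
    /* MatCreateMPIAdj takes ownership of ia/ja; do not free them here. */
    MatCreateMPIAdj(PETSC_COMM_WORLD, nlocal, N, ia, ja, NULL, &adj);
  }

  MatPartitioningCreate(PETSC_COMM_WORLD, &part);
  MatPartitioningSetAdjacency(part, adj);
  MatPartitioningSetFromOptions(part);   /* e.g. -mat_partitioning_type parmetis */
  MatPartitioningApply(part, &is);
  ISView(is, PETSC_VIEWER_STDOUT_WORLD);

  ISDestroy(&is);
  MatPartitioningDestroy(&part);
  MatDestroy(&adj);
  PetscFinalize();
  return 0;
}

The resulting IS gives the new owner of each locally held element; a sequential renumbering such as MatGetOrdering() would then be applied per process to the elements it ends up owning.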
From dharmareddy84 at gmail.com Thu Apr 4 14:29:30 2013 From: dharmareddy84 at gmail.com (Dharmendar Reddy) Date: Thu, 4 Apr 2013 14:29:30 -0500 Subject: [petsc-users] coordinate point location in mesh Message-ID: Hello, Is there any DMPlex function which will tell me to which DMPlex point a given coordinate point belongs to ? I am looking for cell id given a coordinates of a point. If not, I can think of writing of a test for point in cell and loop over cells to find the cell id. Any suggestions for doing this ? The physical problem i am looking at is a Poisson equation in a given domain. - \Delta u(x) = rho(x) where rho is a sum of delta functions randomly located in the domain. -- ----------------------------------------------------- Dharmendar Reddy Palle Graduate Student Microelectronics Research center, University of Texas at Austin, 10100 Burnet Road, Bldg. 160 MER 2.608F, TX 78758-4445 e-mail: dharmareddy84 at gmail.com Phone: +1-512-350-9082 United States of America. Homepage: https://webspace.utexas.edu/~dpr342 -------------- next part -------------- An HTML attachment was scrubbed... URL: From gpau at lbl.gov Thu Apr 4 15:01:48 2013 From: gpau at lbl.gov (George Pau) Date: Thu, 4 Apr 2013 13:01:48 -0700 Subject: [petsc-users] New nonzero caused a malloc Message-ID: Hi, I am trying to determine where there is malloc when I am doing a MatSetValuesBlocked. The following is the error: [1]PETSC ERROR: --------------------- Error Message ------------------------------------ [1]PETSC ERROR: Argument out of range! [1]PETSC ERROR: New nonzero at (69216,95036) caused a malloc! However, when I look at the info during the assembly, I have [0] MatAssemblyBegin_MPIBAIJ(): Stash has 0 entries,uses 0 mallocs. [0] MatAssemblyBegin_MPIBAIJ(): Block-Stash has 0 entries, uses 0 mallocs. [1] MatAssemblyEnd_SeqBAIJ(): Matrix size: 381300 X 381300, block size 4; storage space: 14143776 unneeded, 10259424 used [1] MatAssemblyEnd_SeqBAIJ(): Number of mallocs during MatSetValues is 0 [1] MatAssemblyEnd_SeqBAIJ(): Most nonzeros blocks in any row is 16 [0] MatAssemblyEnd_SeqBAIJ(): Matrix size: 381300 X 381300, block size 4; storage space: 13696176 unneeded, 10707024 used [0] MatAssemblyEnd_SeqBAIJ(): Number of mallocs during MatSetValues is 0 [0] MatAssemblyEnd_SeqBAIJ(): Most nonzeros blocks in any row is 11 which indicates there is mallocs. The following is how I set up the matrix: call MatCreate(PETSC_COMM_WORLD,petsc_obj%jacmat, pierr) call MatSetSizes(petsc_obj%jacmat,PETSC_DECIDE,PETSC_DECIDE,nel*neq,nel*neq,pierr) call MatSetFromOptions(petsc_obj%jacmat,pierr) call MatSetBlockSize(petsc_obj%jacmat,neq,pierr) call MatSetType(petsc_obj%jacmat,MATBAIJ,pierr) call MatSeqBAIJSetPreallocation(petsc_obj%jacmat,neq,(mncon+1)*neq*neq,PETSC_NULL_INTEGER,pierr) call MatMPIBAIJSetPreallocation(petsc_obj%jacmat, neq,neq*neq,PETSC_NULL_INTEGER,mncon*neq*neq,PETSC_NULL_INTEGER,pierr) call MatSetOption(petsc_obj%jacmat,MAT_ROW_ORIENTED,PETSC_FALSE,pierr) !because of how aval is stored. -- George Pau Earth Sciences Division Lawrence Berkeley National Laboratory One Cyclotron, MS 74-120 Berkeley, CA 94720 (510) 486-7196 gpau at lbl.gov http://esd.lbl.gov/about/staff/georgepau/ -------------- next part -------------- An HTML attachment was scrubbed... 
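For context while reading the replies that follow: with MATMPIBAIJ the preallocation is split between the "diagonal" part (block columns owned by the calling process) and the "off-diagonal" part (all other block columns), and a malloc during MatSetValuesBlocked() usually means one of the two counts was too small. Below is a hedged sketch of computing exact per-block-row counts; the routine name and the nbroff/nbr adjacency arrays are invented, and a real code would use its own connectivity data. In serial only the MatSeqBAIJSetPreallocation() numbers matter, which is consistent with the single-process run above working.

#include <petscmat.h>

/* Sketch: exact per-block-row preallocation for a MATMPIBAIJ matrix.
   nbroff[]/nbr[] is a CSR-style list of global neighbour block columns
   per local block row (diagonal block not included). */
static PetscErrorCode PreallocateBAIJ(Mat A, PetscInt bs, PetscInt nlocal,
                                      const PetscInt nbroff[], const PetscInt nbr[])
{
  PetscInt       bstart, bend, i, k, *d_nnz, *o_nnz;
  PetscErrorCode ierr;

  PetscFunctionBegin;
  /* first / one-past-last global block row (= block column) owned locally */
  ierr = MPI_Scan(&nlocal, &bend, 1, MPIU_INT, MPI_SUM, PETSC_COMM_WORLD);CHKERRQ(ierr);
  bstart = bend - nlocal;
  ierr = PetscMalloc(nlocal * sizeof(PetscInt), &d_nnz);CHKERRQ(ierr);
  ierr = PetscMalloc(nlocal * sizeof(PetscInt), &o_nnz);CHKERRQ(ierr);
  for (i = 0; i < nlocal; ++i) {
    d_nnz[i] = 1;                         /* the diagonal block itself */
    o_nnz[i] = 0;
    for (k = nbroff[i]; k < nbroff[i+1]; ++k) {
      if (nbr[k] >= bstart && nbr[k] < bend) d_nnz[i]++;
      else                                   o_nnz[i]++;
    }
  }
  /* exact counts: the scalar d_nz/o_nz arguments are ignored when arrays are given */
  ierr = MatMPIBAIJSetPreallocation(A, bs, 0, d_nnz, 0, o_nnz);CHKERRQ(ierr);
  ierr = PetscFree(d_nnz);CHKERRQ(ierr);
  ierr = PetscFree(o_nnz);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}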
URL: From gpau at lbl.gov Thu Apr 4 15:05:08 2013 From: gpau at lbl.gov (George Pau) Date: Thu, 4 Apr 2013 13:05:08 -0700 Subject: [petsc-users] New nonzero caused a malloc In-Reply-To: References: Message-ID: Hi Just a follow up on the previous email which was sent accidentally. The info during assembly indicates there is no mallocs (typo in previous email). In addition, mncon is 18 and so the total nonzero column blocks per row block is greater than the nonzero blocks shown. If I use a single processor, I do not see the above error. Thanks, George On Thu, Apr 4, 2013 at 1:01 PM, George Pau wrote: > Hi, > > I am trying to determine where there is malloc when I am doing a > MatSetValuesBlocked. The following is the error: > > [1]PETSC ERROR: --------------------- Error Message > ------------------------------------ > [1]PETSC ERROR: Argument out of range! > [1]PETSC ERROR: New nonzero at (69216,95036) caused a malloc! > > However, when I look at the info during the assembly, I have > > [0] MatAssemblyBegin_MPIBAIJ(): Stash has 0 entries,uses 0 mallocs. > [0] MatAssemblyBegin_MPIBAIJ(): Block-Stash has 0 entries, uses 0 mallocs. > [1] MatAssemblyEnd_SeqBAIJ(): Matrix size: 381300 X 381300, block size 4; > storage space: 14143776 unneeded, 10259424 used > [1] MatAssemblyEnd_SeqBAIJ(): Number of mallocs during MatSetValues is 0 > [1] MatAssemblyEnd_SeqBAIJ(): Most nonzeros blocks in any row is 16 > [0] MatAssemblyEnd_SeqBAIJ(): Matrix size: 381300 X 381300, block size 4; > storage space: 13696176 unneeded, 10707024 used > [0] MatAssemblyEnd_SeqBAIJ(): Number of mallocs during MatSetValues is 0 > [0] MatAssemblyEnd_SeqBAIJ(): Most nonzeros blocks in any row is 11 > > which indicates there is mallocs. > > The following is how I set up the matrix: > > call MatCreate(PETSC_COMM_WORLD,petsc_obj%jacmat, pierr) > call > MatSetSizes(petsc_obj%jacmat,PETSC_DECIDE,PETSC_DECIDE,nel*neq,nel*neq,pierr) > call MatSetFromOptions(petsc_obj%jacmat,pierr) > call MatSetBlockSize(petsc_obj%jacmat,neq,pierr) > call MatSetType(petsc_obj%jacmat,MATBAIJ,pierr) > call > MatSeqBAIJSetPreallocation(petsc_obj%jacmat,neq,(mncon+1)*neq*neq,PETSC_NULL_INTEGER,pierr) > call MatMPIBAIJSetPreallocation(petsc_obj%jacmat, > neq,neq*neq,PETSC_NULL_INTEGER,mncon*neq*neq,PETSC_NULL_INTEGER,pierr) > call MatSetOption(petsc_obj%jacmat,MAT_ROW_ORIENTED,PETSC_FALSE,pierr) > !because of how aval is stored. > > -- > George Pau > Earth Sciences Division > Lawrence Berkeley National Laboratory > One Cyclotron, MS 74-120 > Berkeley, CA 94720 > > (510) 486-7196 > gpau at lbl.gov > http://esd.lbl.gov/about/staff/georgepau/ > -- George Pau Earth Sciences Division Lawrence Berkeley National Laboratory One Cyclotron, MS 74-120 Berkeley, CA 94720 (510) 486-7196 gpau at lbl.gov http://esd.lbl.gov/about/staff/georgepau/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From balay at mcs.anl.gov Thu Apr 4 15:36:05 2013 From: balay at mcs.anl.gov (Satish Balay) Date: Thu, 4 Apr 2013 15:36:05 -0500 (CDT) Subject: [petsc-users] New nonzero caused a malloc In-Reply-To: References: Message-ID: This is fortran where pierr return status is ignored. So the -info for this run is useless. You can do the following to tell petsc to go ahead with the malloc - and not set an error [for the extra mallocs]. Now '-info' should list the mallocs that took place. 
call MatSetOption(petsc_obj%jacmat,MAT_NEW_NONZERO_ALLOCATION_ERR,PETSC_FALSE,pierr) Or use a debugger [with a breakpoint in PetscError] to determine the exact place in code that causes the extra malloc - and then see why that value was not counted towards preallocation Satish On Thu, 4 Apr 2013, George Pau wrote: > Hi > > Just a follow up on the previous email which was sent accidentally. The > info during assembly indicates there is no mallocs (typo in previous > email). In addition, mncon is 18 and so the total nonzero column blocks > per row block is greater than the nonzero blocks shown. If I use a single > processor, I do not see the above error. > > Thanks, > George > > > > On Thu, Apr 4, 2013 at 1:01 PM, George Pau wrote: > > > Hi, > > > > I am trying to determine where there is malloc when I am doing a > > MatSetValuesBlocked. The following is the error: > > > > [1]PETSC ERROR: --------------------- Error Message > > ------------------------------------ > > [1]PETSC ERROR: Argument out of range! > > [1]PETSC ERROR: New nonzero at (69216,95036) caused a malloc! > > > > However, when I look at the info during the assembly, I have > > > > [0] MatAssemblyBegin_MPIBAIJ(): Stash has 0 entries,uses 0 mallocs. > > [0] MatAssemblyBegin_MPIBAIJ(): Block-Stash has 0 entries, uses 0 mallocs. > > [1] MatAssemblyEnd_SeqBAIJ(): Matrix size: 381300 X 381300, block size 4; > > storage space: 14143776 unneeded, 10259424 used > > [1] MatAssemblyEnd_SeqBAIJ(): Number of mallocs during MatSetValues is 0 > > [1] MatAssemblyEnd_SeqBAIJ(): Most nonzeros blocks in any row is 16 > > [0] MatAssemblyEnd_SeqBAIJ(): Matrix size: 381300 X 381300, block size 4; > > storage space: 13696176 unneeded, 10707024 used > > [0] MatAssemblyEnd_SeqBAIJ(): Number of mallocs during MatSetValues is 0 > > [0] MatAssemblyEnd_SeqBAIJ(): Most nonzeros blocks in any row is 11 > > > > which indicates there is mallocs. > > > > The following is how I set up the matrix: > > > > call MatCreate(PETSC_COMM_WORLD,petsc_obj%jacmat, pierr) > > call > > MatSetSizes(petsc_obj%jacmat,PETSC_DECIDE,PETSC_DECIDE,nel*neq,nel*neq,pierr) > > call MatSetFromOptions(petsc_obj%jacmat,pierr) > > call MatSetBlockSize(petsc_obj%jacmat,neq,pierr) > > call MatSetType(petsc_obj%jacmat,MATBAIJ,pierr) > > call > > MatSeqBAIJSetPreallocation(petsc_obj%jacmat,neq,(mncon+1)*neq*neq,PETSC_NULL_INTEGER,pierr) > > call MatMPIBAIJSetPreallocation(petsc_obj%jacmat, > > neq,neq*neq,PETSC_NULL_INTEGER,mncon*neq*neq,PETSC_NULL_INTEGER,pierr) > > call MatSetOption(petsc_obj%jacmat,MAT_ROW_ORIENTED,PETSC_FALSE,pierr) > > !because of how aval is stored. > > > > -- > > George Pau > > Earth Sciences Division > > Lawrence Berkeley National Laboratory > > One Cyclotron, MS 74-120 > > Berkeley, CA 94720 > > > > (510) 486-7196 > > gpau at lbl.gov > > http://esd.lbl.gov/about/staff/georgepau/ > > > > > > From knepley at gmail.com Thu Apr 4 15:37:49 2013 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 5 Apr 2013 07:37:49 +1100 Subject: [petsc-users] coordinate point location in mesh In-Reply-To: References: Message-ID: On Fri, Apr 5, 2013 at 6:29 AM, Dharmendar Reddy wrote: > Hello, > Is there any DMPlex function which will tell me to which DMPlex > point a given coordinate point belongs to ? > I am looking for cell id given a coordinates of a point. > I have experimental code to do this, DMPlexLocatePoint(). There are tests, but they are not that stringent for hexes. Let me know if this works for you. 
Matt > If not, I can think of writing of a test for point in cell > and loop over cells to find the cell id. Any suggestions for doing this ? > > > The physical problem i am looking at is a Poisson equation in a given > domain. > > - \Delta u(x) = rho(x) where rho is a sum of delta functions randomly > located in the domain. > -- > ----------------------------------------------------- > Dharmendar Reddy Palle > Graduate Student > Microelectronics Research center, > University of Texas at Austin, > 10100 Burnet Road, Bldg. 160 > MER 2.608F, TX 78758-4445 > e-mail: dharmareddy84 at gmail.com > Phone: +1-512-350-9082 > United States of America. > Homepage: https://webspace.utexas.edu/~dpr342 > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Thu Apr 4 16:02:08 2013 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 5 Apr 2013 08:02:08 +1100 Subject: [petsc-users] New nonzero caused a malloc In-Reply-To: References: Message-ID: On Fri, Apr 5, 2013 at 7:05 AM, George Pau wrote: > Hi > > Just a follow up on the previous email which was sent accidentally. The > info during assembly indicates there is no mallocs (typo in previous > email). In addition, mncon is 18 and so the total nonzero column blocks > per row block is greater than the nonzero blocks shown. If I use a single > processor, I do not see the above error. > Then this is likely you are not dividing the nonzeros between the diagonal and off-diagonal blocks correctly. Matt > Thanks, > George > > > > On Thu, Apr 4, 2013 at 1:01 PM, George Pau wrote: > >> Hi, >> >> I am trying to determine where there is malloc when I am doing a >> MatSetValuesBlocked. The following is the error: >> >> [1]PETSC ERROR: --------------------- Error Message >> ------------------------------------ >> [1]PETSC ERROR: Argument out of range! >> [1]PETSC ERROR: New nonzero at (69216,95036) caused a malloc! >> >> However, when I look at the info during the assembly, I have >> >> [0] MatAssemblyBegin_MPIBAIJ(): Stash has 0 entries,uses 0 mallocs. >> [0] MatAssemblyBegin_MPIBAIJ(): Block-Stash has 0 entries, uses 0 mallocs. >> [1] MatAssemblyEnd_SeqBAIJ(): Matrix size: 381300 X 381300, block size 4; >> storage space: 14143776 unneeded, 10259424 used >> [1] MatAssemblyEnd_SeqBAIJ(): Number of mallocs during MatSetValues is 0 >> [1] MatAssemblyEnd_SeqBAIJ(): Most nonzeros blocks in any row is 16 >> [0] MatAssemblyEnd_SeqBAIJ(): Matrix size: 381300 X 381300, block size 4; >> storage space: 13696176 unneeded, 10707024 used >> [0] MatAssemblyEnd_SeqBAIJ(): Number of mallocs during MatSetValues is 0 >> [0] MatAssemblyEnd_SeqBAIJ(): Most nonzeros blocks in any row is 11 >> >> which indicates there is mallocs. 
>> >> The following is how I set up the matrix: >> >> call MatCreate(PETSC_COMM_WORLD,petsc_obj%jacmat, pierr) >> call >> MatSetSizes(petsc_obj%jacmat,PETSC_DECIDE,PETSC_DECIDE,nel*neq,nel*neq,pierr) >> call MatSetFromOptions(petsc_obj%jacmat,pierr) >> call MatSetBlockSize(petsc_obj%jacmat,neq,pierr) >> call MatSetType(petsc_obj%jacmat,MATBAIJ,pierr) >> call >> MatSeqBAIJSetPreallocation(petsc_obj%jacmat,neq,(mncon+1)*neq*neq,PETSC_NULL_INTEGER,pierr) >> call MatMPIBAIJSetPreallocation(petsc_obj%jacmat, >> neq,neq*neq,PETSC_NULL_INTEGER,mncon*neq*neq,PETSC_NULL_INTEGER,pierr) >> call >> MatSetOption(petsc_obj%jacmat,MAT_ROW_ORIENTED,PETSC_FALSE,pierr) !because >> of how aval is stored. >> >> -- >> George Pau >> Earth Sciences Division >> Lawrence Berkeley National Laboratory >> One Cyclotron, MS 74-120 >> Berkeley, CA 94720 >> >> (510) 486-7196 >> gpau at lbl.gov >> http://esd.lbl.gov/about/staff/georgepau/ >> > > > > -- > George Pau > Earth Sciences Division > Lawrence Berkeley National Laboratory > One Cyclotron, MS 74-120 > Berkeley, CA 94720 > > (510) 486-7196 > gpau at lbl.gov > http://esd.lbl.gov/about/staff/georgepau/ > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From gpau at lbl.gov Thu Apr 4 16:57:53 2013 From: gpau at lbl.gov (George Pau) Date: Thu, 4 Apr 2013 14:57:53 -0700 Subject: [petsc-users] New nonzero caused a malloc In-Reply-To: References: Message-ID: Hi Matt, Thanks, redividing between diagonal and off-diagonal solves the malloc issue. George On Thu, Apr 4, 2013 at 2:02 PM, Matthew Knepley wrote: > On Fri, Apr 5, 2013 at 7:05 AM, George Pau wrote: > >> Hi >> >> Just a follow up on the previous email which was sent accidentally. The >> info during assembly indicates there is no mallocs (typo in previous >> email). In addition, mncon is 18 and so the total nonzero column blocks >> per row block is greater than the nonzero blocks shown. If I use a single >> processor, I do not see the above error. >> > > Then this is likely you are not dividing the nonzeros between the diagonal > and off-diagonal blocks correctly. > > Matt > > >> Thanks, >> George >> >> >> >> On Thu, Apr 4, 2013 at 1:01 PM, George Pau wrote: >> >>> Hi, >>> >>> I am trying to determine where there is malloc when I am doing a >>> MatSetValuesBlocked. The following is the error: >>> >>> [1]PETSC ERROR: --------------------- Error Message >>> ------------------------------------ >>> [1]PETSC ERROR: Argument out of range! >>> [1]PETSC ERROR: New nonzero at (69216,95036) caused a malloc! >>> >>> However, when I look at the info during the assembly, I have >>> >>> [0] MatAssemblyBegin_MPIBAIJ(): Stash has 0 entries,uses 0 mallocs. >>> [0] MatAssemblyBegin_MPIBAIJ(): Block-Stash has 0 entries, uses 0 >>> mallocs. >>> [1] MatAssemblyEnd_SeqBAIJ(): Matrix size: 381300 X 381300, block size >>> 4; storage space: 14143776 unneeded, 10259424 used >>> [1] MatAssemblyEnd_SeqBAIJ(): Number of mallocs during MatSetValues is 0 >>> [1] MatAssemblyEnd_SeqBAIJ(): Most nonzeros blocks in any row is 16 >>> [0] MatAssemblyEnd_SeqBAIJ(): Matrix size: 381300 X 381300, block size >>> 4; storage space: 13696176 unneeded, 10707024 used >>> [0] MatAssemblyEnd_SeqBAIJ(): Number of mallocs during MatSetValues is 0 >>> [0] MatAssemblyEnd_SeqBAIJ(): Most nonzeros blocks in any row is 11 >>> >>> which indicates there is mallocs. 
>>> >>> The following is how I set up the matrix: >>> >>> call MatCreate(PETSC_COMM_WORLD,petsc_obj%jacmat, pierr) >>> call >>> MatSetSizes(petsc_obj%jacmat,PETSC_DECIDE,PETSC_DECIDE,nel*neq,nel*neq,pierr) >>> call MatSetFromOptions(petsc_obj%jacmat,pierr) >>> call MatSetBlockSize(petsc_obj%jacmat,neq,pierr) >>> call MatSetType(petsc_obj%jacmat,MATBAIJ,pierr) >>> call >>> MatSeqBAIJSetPreallocation(petsc_obj%jacmat,neq,(mncon+1)*neq*neq,PETSC_NULL_INTEGER,pierr) >>> call MatMPIBAIJSetPreallocation(petsc_obj%jacmat, >>> neq,neq*neq,PETSC_NULL_INTEGER,mncon*neq*neq,PETSC_NULL_INTEGER,pierr) >>> call >>> MatSetOption(petsc_obj%jacmat,MAT_ROW_ORIENTED,PETSC_FALSE,pierr) !because >>> of how aval is stored. >>> >>> -- >>> George Pau >>> Earth Sciences Division >>> Lawrence Berkeley National Laboratory >>> One Cyclotron, MS 74-120 >>> Berkeley, CA 94720 >>> >>> (510) 486-7196 >>> gpau at lbl.gov >>> http://esd.lbl.gov/about/staff/georgepau/ >>> >> >> >> >> -- >> George Pau >> Earth Sciences Division >> Lawrence Berkeley National Laboratory >> One Cyclotron, MS 74-120 >> Berkeley, CA 94720 >> >> (510) 486-7196 >> gpau at lbl.gov >> http://esd.lbl.gov/about/staff/georgepau/ >> > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > -- George Pau Earth Sciences Division Lawrence Berkeley National Laboratory One Cyclotron, MS 74-120 Berkeley, CA 94720 (510) 486-7196 gpau at lbl.gov http://esd.lbl.gov/about/staff/georgepau/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From friedmud at gmail.com Thu Apr 4 17:53:27 2013 From: friedmud at gmail.com (Derek Gaston) Date: Thu, 4 Apr 2013 16:53:27 -0600 Subject: [petsc-users] Redo PetscOptions From Commandline In-Reply-To: References: Message-ID: On Tue, Apr 2, 2013 at 12:09 PM, Barry Smith wrote: > You can use PetscGetArgs() followed by by PetscOptionsInsert(). Let us > know if you have any troubles with this. > Hmmm... any chance of getting a version of that function that takes an MPI_Comm? It is using PETSC_COMM_WORLD implicitly inside it.... and I need to call it asynchronously on sub-communicators. For now I have it working by swapping out PETSC_COMM_WORLD with my sub-communicator before calling that function.... but that's less than ideal ;-) Derek -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Thu Apr 4 19:10:10 2013 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 5 Apr 2013 11:10:10 +1100 Subject: [petsc-users] Redo PetscOptions From Commandline In-Reply-To: References: Message-ID: On Fri, Apr 5, 2013 at 9:53 AM, Derek Gaston wrote: > On Tue, Apr 2, 2013 at 12:09 PM, Barry Smith wrote: > >> You can use PetscGetArgs() followed by by PetscOptionsInsert(). Let >> us know if you have any troubles with this. >> > > Hmmm... any chance of getting a version of that function that takes an > MPI_Comm? It is using PETSC_COMM_WORLD implicitly inside it.... and I need > to call it asynchronously on sub-communicators. > I don't think we would put that in the interface since it would allow inconsistencies among options. You can call PetscOptionsInsertString() yourself to manage that. Matt > For now I have it working by swapping out PETSC_COMM_WORLD with my > sub-communicator before calling that function.... 
but that's less than > ideal ;-) > > Derek > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Thu Apr 4 19:42:09 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Thu, 4 Apr 2013 19:42:09 -0500 Subject: [petsc-users] Redo PetscOptions From Commandline In-Reply-To: References: Message-ID: On Thu, Apr 4, 2013 at 7:10 PM, Matthew Knepley wrote: > On Fri, Apr 5, 2013 at 9:53 AM, Derek Gaston wrote: > >> On Tue, Apr 2, 2013 at 12:09 PM, Barry Smith wrote: >> >>> You can use PetscGetArgs() followed by by PetscOptionsInsert(). Let >>> us know if you have any troubles with this. >>> >> >> Hmmm... any chance of getting a version of that function that takes an >> MPI_Comm? It is using PETSC_COMM_WORLD implicitly inside it.... and I need >> to call it asynchronously on sub-communicators. >> > > I don't think we would put that in the interface since it would allow > inconsistencies among options. You can > call PetscOptionsInsertString() yourself to manage that. > Derek, what problem are you trying to solve by resetting the options like this? -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Thu Apr 4 19:53:08 2013 From: bsmith at mcs.anl.gov (Barry Smith) Date: Thu, 4 Apr 2013 19:53:08 -0500 Subject: [petsc-users] Redo PetscOptions From Commandline In-Reply-To: References: Message-ID: <96B4F931-6204-494A-8BC1-6F8AEDF98682@mcs.anl.gov> If you truly want to insert the command line options again (and not from the environment or files) then you could use PetscOptionsInsertArgs_Private() which doesn't care about comm. But we would need to make that public. Insertions from the environment may be iffy with the hack you are doing if the original MPI start up process isn't including in that sub communicator. Barry On Apr 4, 2013, at 5:53 PM, Derek Gaston wrote: > On Tue, Apr 2, 2013 at 12:09 PM, Barry Smith wrote: > You can use PetscGetArgs() followed by by PetscOptionsInsert(). Let us know if you have any troubles with this. > > Hmmm... any chance of getting a version of that function that takes an MPI_Comm? It is using PETSC_COMM_WORLD implicitly inside it.... and I need to call it asynchronously on sub-communicators. > > For now I have it working by swapping out PETSC_COMM_WORLD with my sub-communicator before calling that function.... but that's less than ideal ;-) > > Derek From bsmith at mcs.anl.gov Thu Apr 4 19:53:57 2013 From: bsmith at mcs.anl.gov (Barry Smith) Date: Thu, 4 Apr 2013 19:53:57 -0500 Subject: [petsc-users] Redo PetscOptions From Commandline In-Reply-To: References: Message-ID: <74DE5752-FA8D-421D-9265-F35F734D89D1@mcs.anl.gov> On Apr 4, 2013, at 7:42 PM, Jed Brown wrote: > > On Thu, Apr 4, 2013 at 7:10 PM, Matthew Knepley wrote: > On Fri, Apr 5, 2013 at 9:53 AM, Derek Gaston wrote: > On Tue, Apr 2, 2013 at 12:09 PM, Barry Smith wrote: > You can use PetscGetArgs() followed by by PetscOptionsInsert(). Let us know if you have any troubles with this. > > Hmmm... any chance of getting a version of that function that takes an MPI_Comm? It is using PETSC_COMM_WORLD implicitly inside it.... and I need to call it asynchronously on sub-communicators. > > I don't think we would put that in the interface since it would allow inconsistencies among options. 
You can > call PetscOptionsInsertString() yourself to manage that. We don't technically forbid having different options on different communicators but unless you are careful it could be risky. > > Derek, what problem are you trying to solve by resetting the options like this? From dharmareddy84 at gmail.com Thu Apr 4 22:33:45 2013 From: dharmareddy84 at gmail.com (Dharmendar Reddy) Date: Thu, 4 Apr 2013 22:33:45 -0500 Subject: [petsc-users] coordinate point location in mesh In-Reply-To: References: Message-ID: Hello Do i am currently cloned petsc using git. Looks like the function you mentioned is in knepley/pylith. How do i switch to that branch ? I guess i need to pull the branch before switching ? git branch knepley/pylith and reconfigure ? May not be of (immediate) concern but an observation about the LocatePoint_ codes. I see that test is done for every cell by calculating the ref cell mapped coordinates for a given point. It might help speed up things if there is a test to first check if the given point is with in the square/cube containing the simplex. The above test will make sure that the point is inside the cell or with in one of the adjoining cells. I am assuming the DMPlex object has information about cells adjoining a given cell. Since the LocatePoint_ calculates the ref cell mapped coordinates, I may be wrong here but i am thinking we can use that information to check outside which facet of the refcell the given point is and then try to locate the point in the cell sharing that facet. Thanks Reddy On Thu, Apr 4, 2013 at 3:37 PM, Matthew Knepley wrote: > On Fri, Apr 5, 2013 at 6:29 AM, Dharmendar Reddy wrote: > >> Hello, >> Is there any DMPlex function which will tell me to which DMPlex >> point a given coordinate point belongs to ? >> I am looking for cell id given a coordinates of a point. >> > > I have experimental code to do this, DMPlexLocatePoint(). There are tests, > but they are not that stringent for > hexes. Let me know if this works for you. > > Matt > > >> If not, I can think of writing of a test for point in cell >> and loop over cells to find the cell id. Any suggestions for doing this ? >> >> >> The physical problem i am looking at is a Poisson equation in a given >> domain. >> >> - \Delta u(x) = rho(x) where rho is a sum of delta functions randomly >> located in the domain. >> -- >> ----------------------------------------------------- >> Dharmendar Reddy Palle >> Graduate Student >> Microelectronics Research center, >> University of Texas at Austin, >> 10100 Burnet Road, Bldg. 160 >> MER 2.608F, TX 78758-4445 >> e-mail: dharmareddy84 at gmail.com >> Phone: +1-512-350-9082 >> United States of America. >> Homepage: https://webspace.utexas.edu/~dpr342 >> > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > -- ----------------------------------------------------- Dharmendar Reddy Palle Graduate Student Microelectronics Research center, University of Texas at Austin, 10100 Burnet Road, Bldg. 160 MER 2.608F, TX 78758-4445 e-mail: dharmareddy84 at gmail.com Phone: +1-512-350-9082 United States of America. Homepage: https://webspace.utexas.edu/~dpr342 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jedbrown at mcs.anl.gov Thu Apr 4 22:42:54 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Thu, 4 Apr 2013 22:42:54 -0500 Subject: [petsc-users] coordinate point location in mesh In-Reply-To: References: Message-ID: On Thu, Apr 4, 2013 at 10:33 PM, Dharmendar Reddy wrote: > Do i am currently cloned petsc using git. Looks like the function > you mentioned is in knepley/pylith. How do i switch to that branch ? > I guess i need to pull the branch before switching ? > git branch knepley/pylith > That would create a new branch at the current HEAD, you want 'git checkout knepley/pylith' which should tell you: Branch knepley/pylith set up to track remote branch knepley/pylith from origin. Switched to a new branch 'knepley/pylith' Since the old 'knepley/pylith' was merged to 'master' a while back, Matt probably would have been better off to have fast-forwarded his branch so that it used all the latest stuff in 'master'. > and > reconfigure ? > > May not be of (immediate) concern but an observation about the > LocatePoint_ codes. > > I see that test is done for every cell by calculating the ref cell mapped > coordinates for a given point. It might help speed up things if there is a > test to first check if the given point is with in the square/cube > containing the simplex. > > The above test will make sure that the point is inside the cell or with in > one of the adjoining cells. > > I am assuming the DMPlex object has information about cells adjoining a > given cell. Since the LocatePoint_ calculates the ref cell mapped > coordinates, I may be wrong here but i am thinking we can use that > information to check outside which facet of the refcell the given point is > and then try to locate the point in the cell sharing that facet. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dharmareddy84 at gmail.com Fri Apr 5 01:20:32 2013 From: dharmareddy84 at gmail.com (Dharmendar Reddy) Date: Fri, 5 Apr 2013 01:20:32 -0500 Subject: [petsc-users] input mesh to DMPlex with duplicate nodes Message-ID: Hello, Does DMPlexCreateFromCellList remove duplicate nodes in mesh ? I am creating mesh using gmsh Extrude options, I see that the resulting mesh has duplicate nodes. Currently i preprocess the mesh data to remove the duplicate nodes before passing it to DMPlex CreateFromCellList. Is there a change in node ordering of the cell ? One a different note, any plans to provide functionality similar to DMPlexComputeCellGeometry for FVM based assembly. I am looking for things like the following to do cell wise assembly 1. Contribution to Voronoi volume associated with each point 2. Contribution to Surface area 3. Edge length Thanks Reddy -- ----------------------------------------------------- Dharmendar Reddy Palle Graduate Student Microelectronics Research center, University of Texas at Austin, 10100 Burnet Road, Bldg. 160 MER 2.608F, TX 78758-4445 e-mail: dharmareddy84 at gmail.com Phone: +1-512-350-9082 United States of America. Homepage: https://webspace.utexas.edu/~dpr342 -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Fri Apr 5 07:11:00 2013 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 5 Apr 2013 23:11:00 +1100 Subject: [petsc-users] coordinate point location in mesh In-Reply-To: References: Message-ID: On Fri, Apr 5, 2013 at 2:33 PM, Dharmendar Reddy wrote: > Hello > Do i am currently cloned petsc using git. Looks like the function > you mentioned is in knepley/pylith. 
How do i switch to that branch ? > I guess i need to pull the branch before switching ? > git branch knepley/pylith > and > reconfigure ? > > May not be of (immediate) concern but an observation about the > LocatePoint_ codes. > > I see that test is done for every cell by calculating the ref cell mapped > coordinates for a given point. It might help speed up things if there is a > test to first check if the given point is with in the square/cube > containing the simplex. > That does not change the order of the method. So far, this had not been worth optimizing, since its a very small fraction of time (< 1%). When it gets optimized, we will use a hierarchical division like a kd-tree. Thanks, Matt > The above test will make sure that the point is inside the cell or with in > one of the adjoining cells. > > I am assuming the DMPlex object has information about cells adjoining a > given cell. Since the LocatePoint_ calculates the ref cell mapped > coordinates, I may be wrong here but i am thinking we can use that > information to check outside which facet of the refcell the given point is > and then try to locate the point in the cell sharing that facet. > > Thanks > Reddy > > On Thu, Apr 4, 2013 at 3:37 PM, Matthew Knepley wrote: > >> On Fri, Apr 5, 2013 at 6:29 AM, Dharmendar Reddy > > wrote: >> >>> Hello, >>> Is there any DMPlex function which will tell me to which DMPlex >>> point a given coordinate point belongs to ? >>> I am looking for cell id given a coordinates of a point. >>> >> >> I have experimental code to do this, DMPlexLocatePoint(). There are >> tests, but they are not that stringent for >> hexes. Let me know if this works for you. >> >> Matt >> >> >>> If not, I can think of writing of a test for point in cell >>> and loop over cells to find the cell id. Any suggestions for doing this >>> ? >>> >>> >>> The physical problem i am looking at is a Poisson equation in a given >>> domain. >>> >>> - \Delta u(x) = rho(x) where rho is a sum of delta functions randomly >>> located in the domain. >>> -- >>> ----------------------------------------------------- >>> Dharmendar Reddy Palle >>> Graduate Student >>> Microelectronics Research center, >>> University of Texas at Austin, >>> 10100 Burnet Road, Bldg. 160 >>> MER 2.608F, TX 78758-4445 >>> e-mail: dharmareddy84 at gmail.com >>> Phone: +1-512-350-9082 >>> United States of America. >>> Homepage: https://webspace.utexas.edu/~dpr342 >>> >> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> > > > > -- > ----------------------------------------------------- > Dharmendar Reddy Palle > Graduate Student > Microelectronics Research center, > University of Texas at Austin, > 10100 Burnet Road, Bldg. 160 > MER 2.608F, TX 78758-4445 > e-mail: dharmareddy84 at gmail.com > Phone: +1-512-350-9082 > United States of America. > Homepage: https://webspace.utexas.edu/~dpr342 > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... 
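To make the two tests discussed in this thread concrete, here is a small plain-C sketch (not DMPlex code; every name is invented): a cheap axis-aligned bounding-box rejection, followed by the reference-coordinate test for a triangle, which is the same kind of check a cell-by-cell point location performs.

#include <math.h>
#include <stdio.h>

/* Return 1 if p lies in the triangle (v0,v1,v2), 0 otherwise. */
static int PointInTriangle(const double v0[2], const double v1[2],
                           const double v2[2], const double p[2])
{
  double xmin, xmax, ymin, ymax, J[2][2], detJ, xi, eta;
  const double tol = 1e-12;

  /* 1. bounding-box pre-check: reject far-away points cheaply */
  xmin = fmin(v0[0], fmin(v1[0], v2[0])); xmax = fmax(v0[0], fmax(v1[0], v2[0]));
  ymin = fmin(v0[1], fmin(v1[1], v2[1])); ymax = fmax(v0[1], fmax(v1[1], v2[1]));
  if (p[0] < xmin - tol || p[0] > xmax + tol ||
      p[1] < ymin - tol || p[1] > ymax + tol) return 0;

  /* 2. reference coordinates: p = v0 + J * (xi, eta)^T, J = [v1-v0 | v2-v0];
        inside iff xi >= 0, eta >= 0 and xi + eta <= 1 */
  J[0][0] = v1[0] - v0[0]; J[0][1] = v2[0] - v0[0];
  J[1][0] = v1[1] - v0[1]; J[1][1] = v2[1] - v0[1];
  detJ = J[0][0]*J[1][1] - J[0][1]*J[1][0];
  xi   = ( J[1][1]*(p[0]-v0[0]) - J[0][1]*(p[1]-v0[1]))/detJ;
  eta  = (-J[1][0]*(p[0]-v0[0]) + J[0][0]*(p[1]-v0[1]))/detJ;
  return xi >= -tol && eta >= -tol && xi + eta <= 1.0 + tol;
}

int main(void)
{
  const double a[2] = {0.0, 0.0}, b[2] = {1.0, 0.0}, c[2] = {0.0, 1.0};
  const double in[2] = {0.25, 0.25}, out[2] = {0.8, 0.8};
  printf("inside: %d  outside: %d\n",
         PointInTriangle(a, b, c, in), PointInTriangle(a, b, c, out));
  return 0;
}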
URL: From knepley at gmail.com Fri Apr 5 07:15:38 2013 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 5 Apr 2013 23:15:38 +1100 Subject: [petsc-users] input mesh to DMPlex with duplicate nodes In-Reply-To: References: Message-ID: On Fri, Apr 5, 2013 at 5:20 PM, Dharmendar Reddy wrote: > Hello, > Does DMPlexCreateFromCellList remove duplicate nodes in mesh ? I > am creating mesh using gmsh Extrude options, I see that the resulting mesh > has duplicate nodes. Currently i preprocess the mesh data to remove the > duplicate nodes before passing it to DMPlex CreateFromCellList. > Do you mean nodes with the same coordinates? No, sometimes these are legitimate, as in cohesive cell formulations. > Is there a change in node ordering of the cell ? > No. > One a different note, any plans to provide functionality similar to > DMPlexComputeCellGeometry for FVM based assembly. I am looking for things > like the following to do cell wise assembly > 1. Contribution to Voronoi volume associated with each point > 2. Contribution to Surface area > 3. Edge length > You can look at TS ex11 for the preliminary code. This will be incorporated into DMPlex in the near future. Matt > Thanks > Reddy > -- > ----------------------------------------------------- > Dharmendar Reddy Palle > Graduate Student > Microelectronics Research center, > University of Texas at Austin, > 10100 Burnet Road, Bldg. 160 > MER 2.608F, TX 78758-4445 > e-mail: dharmareddy84 at gmail.com > Phone: +1-512-350-9082 > United States of America. > Homepage: https://webspace.utexas.edu/~dpr342 > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From sonyablade2010 at hotmail.com Fri Apr 5 08:20:17 2013 From: sonyablade2010 at hotmail.com (Sonya Blade) Date: Fri, 5 Apr 2013 14:20:17 +0100 Subject: [petsc-users] EVP Solutions with Subspace Message-ID: Dear All, I'd like to solve the problem of Generalized EVP where A is positive definite and where B has products only on its diagonal and some of them might be zeros. I started implementation with modifying the existing example(namely example 2 in the official Slepc site). Instead of using the Matrix viewer object I used? reading A and B matrices directly from text file. Program produces the following? error message, this has been triggered right after?ierr = EPSSolve(eps);CHKERRQ(ierr); call. I supply with main program body and required matrices stored in files. [0]PETSC ERROR: --------------------- Error Message ---------------------------- -------- [0]PETSC ERROR: Object is in wrong state! [0]PETSC ERROR: Not for unassembled matrix! [0]PETSC ERROR: ---------------------------------------------------------------- -------- [0]PETSC ERROR: Petsc Release Version 3.3.0, Patch 6, Mon Feb 11 12:26:34 CST 20 13 I appreciated your help, any other example of solving GEVP with subspace, arnoldi? or Lanczos methods is also wellcome. Regards, -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: main.c URL: -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: IN.txt URL: -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... 
Name: mass.txt URL: From bsmith at mcs.anl.gov Fri Apr 5 08:26:39 2013 From: bsmith at mcs.anl.gov (Barry Smith) Date: Fri, 5 Apr 2013 08:26:39 -0500 Subject: [petsc-users] EVP Solutions with Subspace In-Reply-To: References: Message-ID: <551AFA7F-35DC-451A-9DB5-7EF07AD13F16@mcs.anl.gov> For each matrix, after you finish calling MatSetValues() for that matrix you need to call MatAssemblyBegin(mat,MAT_FINAL_ASSEMBLY) then MatAssemblyEnd(mat,MAT_FINAL_ASSEMBLY) See the manual page. Barry On Apr 5, 2013, at 8:20 AM, Sonya Blade wrote: > Dear All, > > I'd like to solve the problem of Generalized EVP where A is positive definite > and where B has products only on its diagonal and some of them might be zeros. > > I started implementation with modifying the existing example(namely example 2 > in the official Slepc site). Instead of using the Matrix viewer object I used > reading A and B matrices directly from text file. Program produces the following > error message, this has been triggered right after ierr = EPSSolve(eps);CHKERRQ(ierr); > call. I supply with main program body and required matrices stored in files. > > [0]PETSC ERROR: --------------------- Error Message ---------------------------- > -------- > [0]PETSC ERROR: Object is in wrong state! > [0]PETSC ERROR: Not for unassembled matrix! > [0]PETSC ERROR: ---------------------------------------------------------------- > -------- > [0]PETSC ERROR: Petsc Release Version 3.3.0, Patch 6, Mon Feb 11 12:26:34 CST 20 > 13 > > I appreciated your help, any other example of solving GEVP with subspace, arnoldi > or Lanczos methods is also wellcome. > > Regards, From sonyablade2010 at hotmail.com Fri Apr 5 08:36:00 2013 From: sonyablade2010 at hotmail.com (Sonya Blade) Date: Fri, 5 Apr 2013 14:36:00 +0100 Subject: [petsc-users] EVP Solutions with Subspace In-Reply-To: References: Message-ID: > For each matrix, after you finish calling MatSetValues() for that matrix you need to call >MatAssemblyBegin(mat,MAT_FINAL_ASSEMBLY) then MatAssemblyEnd(mat,MAT_FINAL_ASSEMBLY) See the manual page. Thank you Bary, And how to set the requested eigenvalue numbers, for example if I prefer only the? 6 lowest or 6 highest eigenvalues to be found? Regards, From jbakosi at lanl.gov Fri Apr 5 09:34:15 2013 From: jbakosi at lanl.gov (Jozsef Bakosi) Date: Fri, 5 Apr 2013 08:34:15 -0600 Subject: [petsc-users] Any changes in ML usage between 3.1-p8 -> 3.3-p6? Message-ID: <20130405143415.GH17937@karman> Hi folks, In switching from 3.1-p8 to 3.3-p6, keeping the same ML ml-6.2.tar.gz, I get indefinite preconditioner with the newer PETSc version. Has there been anything substantial changed around how PCs are handled, e.g. in the defaults? I know this request is pretty general, I would just like to know where to start looking, where changes in PETSc might be clobbering the (supposedly same) behavior of ML. Thanks, Jozsef From karpeev at mcs.anl.gov Fri Apr 5 10:09:09 2013 From: karpeev at mcs.anl.gov (Dmitry Karpeev) Date: Fri, 5 Apr 2013 10:09:09 -0500 Subject: [petsc-users] Redo PetscOptions From Commandline In-Reply-To: References: Message-ID: On Thu, Apr 4, 2013 at 7:42 PM, Jed Brown wrote: > It seems to me this is related to Moose's new MultiApp capability: solving different systems on subcommunicators (with the interaction between the systems handled outside of PETSc)? It may be that the cleaner approach is to have the subsystems (their solvers, rather) use prefixes to set their specific options. Would that be enough? Dmitry. 
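A minimal sketch of the prefix approach suggested above, assuming two independent linear solves; the prefixes "fluid_" and "thermal_" are invented for illustration. Each solver then picks up its own options from the command line, e.g. -fluid_ksp_type cg -fluid_pc_type gamg -thermal_ksp_type gmres -thermal_pc_type hypre, without touching the global options database.

#include <petscksp.h>

/* Each KSP gets its own options prefix, so per-solve options can be set
   independently from the command line.  KSPSetOperators() is written with
   the PETSc 3.3-era signature (MatStructure argument included). */
static PetscErrorCode SolveBoth(Mat Af, Vec bf, Vec xf, Mat At, Vec bt, Vec xt)
{
  KSP            kspf, kspt;
  PetscErrorCode ierr;

  PetscFunctionBegin;
  ierr = KSPCreate(PETSC_COMM_WORLD, &kspf);CHKERRQ(ierr);
  ierr = KSPSetOptionsPrefix(kspf, "fluid_");CHKERRQ(ierr);
  ierr = KSPSetOperators(kspf, Af, Af, SAME_NONZERO_PATTERN);CHKERRQ(ierr);
  ierr = KSPSetFromOptions(kspf);CHKERRQ(ierr);   /* reads -fluid_ksp_type, -fluid_pc_type, ... */
  ierr = KSPSolve(kspf, bf, xf);CHKERRQ(ierr);

  ierr = KSPCreate(PETSC_COMM_WORLD, &kspt);CHKERRQ(ierr);
  ierr = KSPSetOptionsPrefix(kspt, "thermal_");CHKERRQ(ierr);
  ierr = KSPSetOperators(kspt, At, At, SAME_NONZERO_PATTERN);CHKERRQ(ierr);
  ierr = KSPSetFromOptions(kspt);CHKERRQ(ierr);   /* reads -thermal_ksp_type, -thermal_pc_type, ... */
  ierr = KSPSolve(kspt, bt, xt);CHKERRQ(ierr);

  ierr = KSPDestroy(&kspf);CHKERRQ(ierr);
  ierr = KSPDestroy(&kspt);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}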
-------------- next part -------------- An HTML attachment was scrubbed... URL: From jroman at dsic.upv.es Fri Apr 5 10:11:13 2013 From: jroman at dsic.upv.es (Jose E. Roman) Date: Fri, 5 Apr 2013 17:11:13 +0200 Subject: [petsc-users] EVP Solutions with Subspace In-Reply-To: References: Message-ID: <58EE15AB-2889-4DFD-B6BC-A4889CB07F45@dsic.upv.es> El 05/04/2013, a las 15:36, Sonya Blade escribi?: >> For each matrix, after you finish calling MatSetValues() for that matrix you need to call >MatAssemblyBegin(mat,MAT_FINAL_ASSEMBLY) then MatAssemblyEnd(mat,MAT_FINAL_ASSEMBLY) See the manual page. > > Thank you Bary, > > And how to set the requested eigenvalue numbers, for example if I prefer only the > 6 lowest or 6 highest eigenvalues to be found? > > Regards, http://www.grycap.upv.es/slepc/documentation/current/docs/manualpages/EPS/EPSSetDimensions.html http://www.grycap.upv.es/slepc/documentation/current/docs/manualpages/EPS/EPSSetWhichEigenpairs.html For these simple questions it is faster to look for the answer in the manual. Jose From bsmith at mcs.anl.gov Fri Apr 5 10:15:28 2013 From: bsmith at mcs.anl.gov (Barry Smith) Date: Fri, 5 Apr 2013 10:15:28 -0500 Subject: [petsc-users] Redo PetscOptions From Commandline In-Reply-To: References: Message-ID: On Apr 5, 2013, at 10:09 AM, Dmitry Karpeev wrote: > > On Thu, Apr 4, 2013 at 7:42 PM, Jed Brown wrote: > > It seems to me this is related to Moose's new MultiApp capability: solving different systems on subcommunicators (with the interaction between the systems handled outside of PETSc)? It may be that the cleaner approach is to have the subsystems (their solvers, rather) use prefixes to set their specific options. > Would that be enough? > > Dmitry. > If PETSc is only aware of the subcommunicator then it could be that the right model is to set PETSC_COMM_WORLD to be the sub comm of MPI_COMM_WORLD before PetscInitialize(), then PETSc only sees that sub communicator. But you will NOT be able to use PETSc on anything that "connects" these various sub communicators together. Barry > From karpeev at mcs.anl.gov Fri Apr 5 10:18:31 2013 From: karpeev at mcs.anl.gov (Dmitry Karpeev) Date: Fri, 5 Apr 2013 10:18:31 -0500 Subject: [petsc-users] Redo PetscOptions From Commandline In-Reply-To: References: Message-ID: Should PetscInitialize() take a comm argument, then? On Fri, Apr 5, 2013 at 10:15 AM, Barry Smith wrote: > > On Apr 5, 2013, at 10:09 AM, Dmitry Karpeev wrote: > > > > > On Thu, Apr 4, 2013 at 7:42 PM, Jed Brown wrote: > > > > It seems to me this is related to Moose's new MultiApp capability: > solving different systems on subcommunicators (with the interaction between > the systems handled outside of PETSc)? It may be that the cleaner approach > is to have the subsystems (their solvers, rather) use prefixes to set their > specific options. > > Would that be enough? > > > > Dmitry. > > > > If PETSc is only aware of the subcommunicator then it could be that > the right model is to set PETSC_COMM_WORLD to be the sub comm of > MPI_COMM_WORLD before PetscInitialize(), then PETSc only sees that sub > communicator. But you will NOT be able to use PETSc on anything that > "connects" these various sub communicators together. > > Barry > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... 
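A minimal sketch of the model Barry describes, assuming the application owns MPI initialization itself and builds the sub-communicators before PETSc ever sees them; the even/odd split below is only a placeholder for however the groups are actually formed.

#include <petscsys.h>

int main(int argc, char **argv)
{
  MPI_Comm       subcomm;
  PetscErrorCode ierr;
  int            rank;

  MPI_Init(&argc, &argv);
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);

  /* Placeholder split: even and odd world ranks form two independent groups */
  MPI_Comm_split(MPI_COMM_WORLD, rank % 2, rank, &subcomm);

  /* Point PETSc's "world" at the sub-communicator BEFORE PetscInitialize() */
  PETSC_COMM_WORLD = subcomm;
  ierr = PetscInitialize(&argc, &argv, NULL, NULL);if (ierr) return ierr;

  /* ... each group runs its own PETSc objects, unaware of the other group;
     anything that couples the groups has to be handled outside PETSc ... */

  ierr = PetscFinalize();
  MPI_Comm_free(&subcomm);
  MPI_Finalize();
  return (int)ierr;
}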
URL: From bsmith at mcs.anl.gov Fri Apr 5 11:32:23 2013 From: bsmith at mcs.anl.gov (Barry Smith) Date: Fri, 5 Apr 2013 11:32:23 -0500 Subject: [petsc-users] Redo PetscOptions From Commandline In-Reply-To: References: Message-ID: On Apr 5, 2013, at 10:18 AM, Dmitry Karpeev wrote: > Should PetscInitialize() take a comm argument, then? No, this is such an uncommon case that it shouldn't be exposed to everyone. Note I actually would advocate that "Moose's new MultiApp capability" use PETSc "above" the individual runs and thus would not design "Moose's new MultiApp capability" to set the PETSC_COMM_WORLD to be related to a single app; I was just pointing out this was a possibility if "Moose's new MultiApp capability" specifically only knew about PETSc at the individual App level (which it sounds like its current design). Also note that I previously proposed making PetscOptionsInsertArgs_Private() public so that one may reset the command line options and not the environmental options and Moose could then use this option safely and not need to muck with PETSC_COMM_WORLD. Barry > > > On Fri, Apr 5, 2013 at 10:15 AM, Barry Smith wrote: > > On Apr 5, 2013, at 10:09 AM, Dmitry Karpeev wrote: > > > > > On Thu, Apr 4, 2013 at 7:42 PM, Jed Brown wrote: > > > > It seems to me this is related to Moose's new MultiApp capability: solving different systems on subcommunicators (with the interaction between the systems handled outside of PETSc)? It may be that the cleaner approach is to have the subsystems (their solvers, rather) use prefixes to set their specific options. > > Would that be enough? > > > > Dmitry. > > > > If PETSc is only aware of the subcommunicator then it could be that the right model is to set PETSC_COMM_WORLD to be the sub comm of MPI_COMM_WORLD before PetscInitialize(), then PETSc only sees that sub communicator. But you will NOT be able to use PETSc on anything that "connects" these various sub communicators together. > > Barry > > > > > > > > From sonyablade2010 at hotmail.com Fri Apr 5 11:43:33 2013 From: sonyablade2010 at hotmail.com (Sonya Blade) Date: Fri, 5 Apr 2013 17:43:33 +0100 Subject: [petsc-users] EVP Solutions with Subspace In-Reply-To: References: , Message-ID: >http://www.grycap.upv.es/slepc/documentation/current/docs/manualpages/EPS/EPSSetDimensions.html >http://www.grycap.upv.es/slepc/documentation/current/docs/manualpages/EPS/EPSSetWhichEigenpairs.html > >For these simple questions it is faster to look for the answer in the manual. > >Jose Sorry for that Jose, The last message I received says that: [0]PETSC ERROR: ? ! [0]PETSC ERROR: KSP did not converge (reason=DIVERGED_ITS)! What could be the reason of not convergence, due to the fact that B matrix has? some zero values on its diagonal? Is there any other methods which solves GEP? even if there is zeros on diagonals? Regards, From jroman at dsic.upv.es Fri Apr 5 11:47:03 2013 From: jroman at dsic.upv.es (Jose E. Roman) Date: Fri, 5 Apr 2013 18:47:03 +0200 Subject: [petsc-users] EVP Solutions with Subspace In-Reply-To: References: , Message-ID: El 05/04/2013, a las 18:43, Sonya Blade escribi?: >> http://www.grycap.upv.es/slepc/documentation/current/docs/manualpages/EPS/EPSSetDimensions.html >> http://www.grycap.upv.es/slepc/documentation/current/docs/manualpages/EPS/EPSSetWhichEigenpairs.html >> >> For these simple questions it is faster to look for the answer in the manual. >> >> Jose > > Sorry for that Jose, > > The last message I received says that: > [0]PETSC ERROR: ! 
> [0]PETSC ERROR: KSP did not converge (reason=DIVERGED_ITS)! > > What could be the reason of not convergence, due to the fact that B matrix has > some zero values on its diagonal? Is there any other methods which solves GEP > even if there is zeros on diagonals? > > Regards, By default SLEPc uses a direct solver, so you would not get this message. Which options are you using for the KSP? Jose From zhenglun.wei at gmail.com Fri Apr 5 11:59:32 2013 From: zhenglun.wei at gmail.com (Zhenglun (Alan) Wei) Date: Fri, 05 Apr 2013 11:59:32 -0500 Subject: [petsc-users] Fwd: Re: A question on the PETSc options for non-uniform grid In-Reply-To: <5158C504.4010508@gmail.com> References: <5158C504.4010508@gmail.com> Message-ID: <515F02F4.5020307@gmail.com> Dear folks, I forget to forward my reply to the list. Please check the 'Forwarded Message' about my code of Poisson solver with non-uniform grid and the procedure to reproduce the error for 'gamg'. However, here I have some questions on the HYPRE preconditioner for the GMRES. One of my friend suggested me to add this to my code: ierr = KSPGetPC(solver, &pc);CHKERRQ(ierr); ierr = PCSetType(pc, PCHYPRE);CHKERRQ(ierr); ierr = PCHYPRESetType(pc,"boomeramg");CHKERRQ(ierr); ierr = PetscOptionsSetValue("-pc_hypre_boomeramg_max_levels", "25"); CHKERRQ(ierr); ierr = PetscOptionsSetValue("-pc_hypre_boomeramg_strong_threshold", "0.0"); CHKERRQ(ierr); ierr = PetscOptionsSetValue("-pc_hypre_boomeramg_relax_type_all", "SOR/Jacobi"); CHKERRQ(ierr); After this is done, the time for solving the Poisson equation indeed reduced to half; however, it is still very slow. Here I attached output file (out_GMRES_HYPRE-pc in the attachment) with -ksp_monitor and --ksp_view. Could you please help me to take a look and see which part I can modified to speed up the solver. thanks, Alan -------- Forwarded Message -------- ??: Re: [petsc-users] A question on the PETSc options for non-uniform grid ??: Sun, 31 Mar 2013 18:21:40 -0500 ???: Zhenglun (Alan) Wei ???: Jed Brown , "Mark F. Adams" Dear Dr. Brown, Thank you so much for your answers. Here are my reply. I reduced my code and attached here. It derives from the KSP ex45.c plus my own code for the non uniform grid. The boundary condtion follows the original method of ex45.c. Firstly, I should mention that if the code runs for uniform grid, it is very fine. In the 'ITTCRun' script, there are four executable lines. 1, it uses ksp = 'CG' and pc = 'GAMG'. It will comes up the 'un-symmetric graphic' problem; 2, it adds '-pc_gamg_sym_graph true' at the end. As I mentioned before, it provides a very crazy norm since the '-pc_gamg_sym_graph true' is used. The 'out_ksp=CG' shows the log and the norm. 3, No matter if the ksp type changed to 'GMRES' (which is the 3rd executable line) or '-pc_gamg_sym_graph true' changed to '-pc_gamg_threhold 0.0', the code will stacks there. However, it seems not to be a 'deadlock' because if I reduce the computational load, i.e. mesh size, it will complete but just very slow. 4, it uses everything for the default; I believe it is ksp = 'GMRES' without any preconditioner. It can solve the fine mesh problem yet just very slow. I really appreciate your time and help, :) Alan > "Zhenglun (Alan) Wei" writes: > >> Dear All, >> I hope you're having a nice day. >> Based on ksp ex45, a 3D Poisson solver with non-uniform grid is coded. 
>> The PETSc options I used is: >> /mpiexec -np 32 ./ex45 -pc_type gamg -ksp_type cg -pc_gamg_type agg >> -pc_gamg_agg_nsmooths 1 -mg_levels_ksp_max_it 1 -mg_levels_ksp_type >> richardson -ksp_rtol 1.0e-7/ >> There are some problems: >> 1, if the mesh is very coarse with very small amount of the grid, the >> code runs well; >> 2, if the mesh is fine, an error message comes up saying 'un-symmetric >> graph'. It suggests me to use '-pc_gamg_sym_graph true' or >> '-pc_gamg_thredhold 0.0'. > How are you implementing boundary conditions? > >> a) if '-pc_gamg_sym_graph true' is used, the code runs but blows up very >> quickly with crazy norm; >> b) if '-pc_gamg_thredhold 0.0' is used, the code stops on the KSPSolve() >> forever; >> 3, because of the 'un-symmetric graph' error, it reminds me that the >> matrix may not be symmetric, which indicates that '-ksp_type cg' may not >> be a good option. Therefore, I changed it to '-ksp_type gmres'. It makes >> the code run with 'moderate fine' mesh. However, it stops at KSPSolve() >> also with fine mesh. >> a) does there any other '-ksp_type' fit better for this case? >> b) does multigrid preconditioner work well for gmres? > All of the above should work. Can you reproduce with our version of > ex45.c, or can you send your version with the full error messages you > are seeing (or other symptoms, like deadlock?) and instructions to > reproduce? -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: NonUniformPoisson.zip Type: application/zip Size: 10573 bytes Desc: not available URL: -------------- next part -------------- NumRank = 32 Ready for BuildingNonUniformGrid!! Ready for ApplyNonUniformGrid#1!! Ready for Initialization!! Ready for the main iteration!! Ready for ApplyNonUniform of the KSPSolve#1!! Ready for ComputePpRHS of the KSPSolve#1!! Ready for ComputeMatrix of the KSPSolve#1!! Ready for KspSetDM of the KSPSolve#1!! Ready for KSPSetFromOptions of the KSPSolve#1!! Ready for KSPSetUp of the KSPSolve#1!! Finish KSPSetUp of the KSPSolve#1!! Ready for KSPSolve#1!! Finish KSPSolve#1!! Residual norm 121.02 ************************************************************************************************************************ *** WIDEN YOUR WINDOW TO 120 CHARACTERS. 
Use 'enscript -r -fCourier9' to print this document *** ************************************************************************************************************************ ---------------------------------------------- PETSc Performance Summary: ---------------------------------------------- ./ex45 on a linux-gnu-c-nodebug named n028 with 32 processors, by zlwei Sun Mar 31 18:01:09 2013 Using Petsc Development HG revision: 60bf177dda1f5f14babbace55c4a430ac46ab2a6 HG Date: Thu Feb 14 11:32:41 2013 -0600 Max Max/Min Avg Total Time (sec): 2.793e+01 1.00005 2.793e+01 Objects: 5.510e+02 1.00364 5.491e+02 Flops: 5.698e+08 1.22785 5.141e+08 1.645e+10 Flops/sec: 2.040e+07 1.22784 1.841e+07 5.890e+08 MPI Messages: 4.776e+03 2.14822 3.434e+03 1.099e+05 MPI Message Lengths: 4.300e+07 1.63512 1.016e+04 1.117e+09 MPI Reductions: 1.139e+03 1.00176 Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract) e.g., VecAXPY() for real vectors of length N --> 2N flops and VecAXPY() for complex vectors of length N --> 8N flops Summary of Stages: ----- Time ------ ----- Flops ----- --- Messages --- -- Message Lengths -- -- Reductions -- Avg %Total Avg %Total counts %Total Avg %Total counts %Total 0: Main Stage: 1.1397e-01 0.4% 2.0336e+08 1.2% 1.280e+02 0.1% 5.241e+01 0.5% 3.000e+00 0.3% 1: DMMG Setup: 4.1170e-01 1.5% 0.0000e+00 0.0% 1.360e+03 1.2% 1.597e+02 1.6% 3.700e+01 3.2% 2: Pressure RHS Setup: 2.6268e+01 94.0% 1.3900e+10 84.5% 9.833e+04 89.5% 9.435e+03 92.9% 1.074e+03 94.3% 3: Pressure Solve: 1.1388e+00 4.1% 2.3473e+09 14.3% 1.008e+04 9.2% 5.131e+02 5.1% 2.206e+01 1.9% ------------------------------------------------------------------------------------------------------------------------ See the 'Profiling' chapter of the users' manual for details on interpreting output. Phase summary info: Count: number of times phase was executed Time and Flops: Max - maximum over all processors Ratio - ratio of maximum to minimum over all processors Mess: number of messages sent Avg. len: average message length (bytes) Reduct: number of global reductions Global: entire computation Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop(). 
%T - percent time in this phase %f - percent flops in this phase %M - percent messages in this phase %L - percent message lengths in this phase %R - percent reductions in this phase Total Mflop/s: 10e-6 * (sum of flops over all processors)/(max time over all processors) ------------------------------------------------------------------------------------------------------------------------ Event Count Time (sec) Flops --- Global --- --- Stage --- Total Max Ratio Max Ratio Max Ratio Mess Avg len Reduct %T %f %M %L %R %T %f %M %L %R Mflop/s ------------------------------------------------------------------------------------------------------------------------ --- Event Stage 0: Main Stage VecNorm 1 1.0 2.2058e-0218.1 7.50e+05 1.0 0.0e+00 0.0e+00 1.0e+00 0 0 0 0 0 7 12 0 0 33 1088 VecAXPY 1 1.0 1.0166e-02 5.0 7.50e+05 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 6 12 0 0 0 2361 VecScatterBegin 1 1.0 2.9681e-03 6.1 0.00e+00 0.0 1.3e+02 4.5e+04 0.0e+00 0 0 0 1 0 2 0100100 0 0 VecScatterEnd 1 1.0 1.7600e-03 2.4 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 1 0 0 0 0 0 MatMult 1 1.0 6.9038e-02 1.4 4.87e+06 1.0 1.3e+02 4.5e+04 0.0e+00 0 1 0 1 0 54 76100100 0 2250 --- Event Stage 1: DMMG Setup VecSet 1 1.0 2.9253e-02 1.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 6 0 0 0 0 0 VecScatterBegin 1 1.0 3.5793e-02 1.1 0.00e+00 0.0 3.7e+02 3.2e+04 0.0e+00 0 0 0 1 0 8 0 27 67 0 0 VecScatterEnd 1 1.0 1.8298e-02 4.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 3 0 0 0 0 0 --- Event Stage 2: Pressure RHS Setup KSPGMRESOrthog 40 1.0 8.6936e-01 1.4 9.39e+07 1.0 0.0e+00 0.0e+00 4.0e+01 3 18 0 0 4 3 21 0 0 4 3402 KSPSetUp 10 1.0 9.0960e-02 1.4 0.00e+00 0.0 0.0e+00 0.0e+00 1.8e+01 0 0 0 0 2 0 0 0 0 2 0 VecMDot 40 1.0 6.0814e-01 1.7 4.69e+07 1.0 0.0e+00 0.0e+00 4.0e+01 2 9 0 0 4 2 11 0 0 4 2431 VecNorm 44 1.0 2.4593e-01 2.2 9.39e+06 1.0 0.0e+00 0.0e+00 4.4e+01 1 2 0 0 4 1 2 0 0 4 1202 VecScale 44 1.0 7.4415e-02 4.9 4.69e+06 1.0 0.0e+00 0.0e+00 0.0e+00 0 1 0 0 0 0 1 0 0 0 1987 VecCopy 4 1.0 9.3884e-03 2.8 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 VecSet 74 1.0 6.2634e-02 1.4 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 VecAXPY 4 1.0 1.1530e-02 3.2 8.54e+05 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 2332 VecMAXPY 44 1.0 3.7019e-01 1.2 5.55e+07 1.0 0.0e+00 0.0e+00 0.0e+00 1 11 0 0 0 1 13 0 0 0 4720 VecAssemblyBegin 64 1.0 3.5270e-01 1.5 0.00e+00 0.0 0.0e+00 0.0e+00 1.9e+02 1 0 0 0 17 1 0 0 0 18 0 VecAssemblyEnd 64 1.0 3.2663e-04 2.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 VecPointwiseMult 44 1.0 1.5967e-01 1.2 4.69e+06 1.0 0.0e+00 0.0e+00 0.0e+00 1 1 0 0 0 1 1 0 0 0 926 VecScatterBegin 108 1.0 1.2571e-01 1.7 0.00e+00 0.0 4.5e+04 7.9e+03 0.0e+00 0 0 41 32 0 0 0 46 34 0 0 VecScatterEnd 108 1.0 4.2095e-01 1.9 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 1 0 0 0 0 1 0 0 0 0 0 VecSetRandom 4 1.0 1.9181e-02 2.6 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 VecNormalize 44 1.0 3.0430e-01 1.9 1.41e+07 1.0 0.0e+00 0.0e+00 4.4e+01 1 3 0 0 4 1 3 0 0 4 1458 MatMult 40 1.0 9.8190e-01 1.2 9.89e+07 1.3 1.5e+04 6.5e+03 0.0e+00 3 17 13 8 0 4 20 15 9 0 2863 MatConvert 4 1.0 2.4130e-01 1.2 0.00e+00 0.0 0.0e+00 0.0e+00 4.0e+00 1 0 0 0 0 1 0 0 0 0 0 MatScale 12 1.0 2.4393e-01 1.2 1.46e+07 1.3 1.5e+03 6.5e+03 0.0e+00 1 3 1 1 0 1 3 1 1 0 1701 MatAssemblyBegin 66 1.0 1.0796e+00 1.6 0.00e+00 0.0 1.0e+04 1.3e+04 7.2e+01 3 0 9 12 6 3 0 10 13 7 0 MatAssemblyEnd 66 1.0 1.8091e+00 1.1 0.00e+00 0.0 1.7e+04 1.4e+03 2.2e+02 6 0 16 2 19 7 0 18 2 20 0 MatGetRow 1707200 1.0 3.9819e+00 1.0 
0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 14 0 0 0 0 15 0 0 0 0 0 MatCoarsen 4 1.0 7.2832e-01 1.0 0.00e+00 0.0 1.7e+04 7.1e+03 1.4e+02 3 0 15 11 12 3 0 17 12 13 0 MatAXPY 4 1.0 5.4486e-02 1.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 MatTranspose 4 1.0 1.1323e+00 1.0 0.00e+00 0.0 1.1e+04 9.8e+03 6.4e+01 4 0 10 10 6 4 0 11 10 6 0 MatMatMult 4 1.0 8.7380e-01 1.1 1.03e+07 1.3 8.7e+03 3.3e+03 9.6e+01 3 2 8 3 8 3 2 9 3 9 337 MatPtAP 4 1.0 3.0574e+00 1.0 1.82e+08 2.1 1.4e+04 1.3e+04 1.0e+02 11 26 13 16 9 12 30 14 18 9 1382 MatTrnMatMult 4 1.0 5.4484e+00 1.0 9.66e+07 1.8 8.3e+03 3.7e+04 1.2e+02 19 14 8 27 10 21 17 8 29 11 424 MatGetLocalMat 20 1.0 4.8721e-01 1.2 0.00e+00 0.0 0.0e+00 0.0e+00 2.4e+01 2 0 0 0 2 2 0 0 0 2 0 MatGetBrAoCol 12 1.0 2.6553e-01 1.6 0.00e+00 0.0 1.0e+04 1.2e+04 1.6e+01 1 0 9 11 1 1 0 10 12 1 0 MatGetSymTrans 8 1.0 4.9516e-02 1.8 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 PCSetUp 1 1.0 2.4610e+01 1.0 4.86e+08 1.2 9.7e+04 1.1e+04 1.0e+03 88 84 88 91 88 94100 98 98 94 565 PCGAMGgraph_AGG 4 1.0 8.8514e+00 1.0 1.03e+07 1.3 1.8e+04 8.8e+03 1.4e+02 32 2 17 14 12 34 2 19 15 13 33 PCGAMGcoarse_AGG 4 1.0 6.6358e+00 1.0 9.66e+07 1.8 3.4e+04 1.5e+04 3.4e+02 24 14 31 45 29 25 17 34 49 31 348 PCGAMGProl_AGG 4 1.0 5.1585e-01 1.1 0.00e+00 0.0 7.6e+03 5.5e+03 1.1e+02 2 0 7 4 10 2 0 8 4 10 0 PCGAMGPOpt_AGG 4 1.0 5.5509e+00 1.0 2.36e+08 1.1 2.3e+04 5.3e+03 2.2e+02 20 43 21 11 20 21 51 24 12 21 1274 --- Event Stage 3: Pressure Solve KSPSetUp 2 1.0 7.1526e-06 0.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 KSPSolve 1 1.0 1.1429e+00 1.0 8.00e+07 1.2 1.0e+04 5.6e+03 2.2e+01 4 14 9 5 2 100100100100100 2054 VecTDot 4 1.0 6.5522e-02 2.4 3.00e+06 1.0 0.0e+00 0.0e+00 4.0e+00 0 1 0 0 0 4 4 0 0 18 1465 VecNorm 2 1.0 4.0948e-02 8.3 1.50e+06 1.0 0.0e+00 0.0e+00 2.0e+00 0 0 0 0 0 2 2 0 0 9 1172 VecCopy 10 1.0 3.5149e-02 1.7 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 2 0 0 0 0 0 VecSet 34 1.0 2.3161e-02 2.5 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 1 0 0 0 0 0 VecAXPY 18 1.0 5.9955e-02 1.5 4.91e+06 1.0 0.0e+00 0.0e+00 0.0e+00 0 1 0 0 0 4 7 0 0 0 2594 VecAYPX 17 1.0 4.8838e-02 1.2 2.46e+06 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 4 3 0 0 0 1592 VecAssemblyBegin 1 1.0 3.1125e-02 7.0 0.00e+00 0.0 0.0e+00 0.0e+00 3.0e+00 0 0 0 0 0 1 0 0 0 14 0 VecAssemblyEnd 1 1.0 1.5974e-05 8.4 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 VecPointwiseMult 16 1.0 5.5181e-02 1.2 1.71e+06 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 5 2 0 0 0 974 VecScatterBegin 34 1.0 2.6413e-02 2.1 0.00e+00 0.0 1.0e+04 5.6e+03 0.0e+00 0 0 9 5 0 2 0100100 0 0 VecScatterEnd 34 1.0 7.9229e-02 4.5 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 4 0 0 0 0 0 MatMult 18 1.0 5.3817e-01 1.1 4.93e+07 1.2 6.1e+03 8.1e+03 0.0e+00 2 9 6 4 0 45 61 60 87 0 2667 MatMultAdd 8 1.0 1.0760e-01 1.2 8.54e+06 1.3 2.0e+03 1.8e+03 0.0e+00 0 1 2 0 0 9 10 20 6 0 2236 MatMultTranspose 8 1.0 1.1546e-01 1.4 8.54e+06 1.3 2.0e+03 1.8e+03 0.0e+00 0 1 2 0 0 8 10 20 6 0 2084 MatSolve 2 0.0 2.0981e-05 0.0 5.70e+03 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 272 MatLUFactorSym 1 1.0 4.7207e-05 4.3 0.00e+00 0.0 0.0e+00 0.0e+00 3.0e+00 0 0 0 0 0 0 0 0 0 14 0 MatLUFactorNum 1 1.0 7.2956e-0525.5 3.59e+04 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 492 MatGetRowIJ 1 0.0 8.8215e-06 0.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 MatGetOrdering 1 0.0 7.1049e-05 0.0 0.00e+00 0.0 0.0e+00 0.0e+00 1.2e-01 0 0 0 0 0 0 0 0 0 1 0 PCSetUp 1 1.0 6.8903e-04 7.5 3.59e+04 0.0 0.0e+00 0.0e+00 5.1e+00 0 0 0 0 0 0 0 0 0 23 52 
PCSetUpOnBlocks 2 1.0 7.0190e-04 7.1 3.59e+04 0.0 0.0e+00 0.0e+00 5.1e+00 0 0 0 0 0 0 0 0 0 23 51 PCApply 2 1.0 8.5915e-01 1.1 6.35e+07 1.3 9.8e+03 4.6e+03 1.3e+01 3 11 9 4 1 73 78 97 80 59 2119 --- Event Stage 4: Unknown --- Event Stage 5: Unknown ------------------------------------------------------------------------------------------------------------------------ Memory usage is given in bytes: Object Type Creations Destructions Memory Descendants' Mem. Reports information only for process 0. --- Event Stage 0: Main Stage Krylov Solver 0 8 26784 0 DMKSP interface 0 1 656 0 Vector 1 46 72328368 0 Vector Scatter 0 17 18020 0 Matrix 0 48 132500240 0 Distributed Mesh 0 4 18810208 0 Bipartite Graph 0 8 6400 0 Index Set 0 5 4144 0 IS L to G Mapping 0 6 12530760 0 Preconditioner 0 7 7052 0 Viewer 1 0 0 0 --- Event Stage 1: DMMG Setup Krylov Solver 1 0 0 0 Vector 6 4 6080 0 Vector Scatter 4 0 0 0 Distributed Mesh 2 0 0 0 Bipartite Graph 4 0 0 0 Index Set 10 10 3138568 0 IS L to G Mapping 3 0 0 0 --- Event Stage 2: Pressure RHS Setup Krylov Solver 12 5 121792 0 DMKSP interface 1 0 0 0 Vector 223 184 88029856 0 Vector Scatter 36 23 24380 0 Matrix 123 76 527242404 0 Matrix Coarsen 4 4 2544 0 Distributed Mesh 3 1 4368 0 Bipartite Graph 10 6 5024 0 Index Set 78 76 3772892 0 IS L to G Mapping 3 0 0 0 Preconditioner 12 5 4520 0 PetscRandom 4 4 2528 0 --- Event Stage 3: Pressure Solve Vector 4 0 0 0 Matrix 1 0 0 0 Index Set 5 2 1600 0 --- Event Stage 4: Unknown --- Event Stage 5: Unknown ======================================================================================================================== Average time to get PetscTime(): 5.00679e-07 Average time for MPI_Barrier(): 0.000554371 Average time for zero size MPI_Send(): 0.000241652 #PETSc Option Table entries: -ksp_rtol 1.0e-7 -ksp_type cg -log_summary -mg_levels_ksp_max_it 1 -mg_levels_ksp_type richardson -pc_gamg_agg_nsmooths 1 -pc_gamg_sym_graph true -pc_gamg_type agg -pc_type gamg #End of PETSc Option Table entries Compiled without FORTRAN kernels Compiled with full precision matrices (default) sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8 sizeof(PetscScalar) 8 sizeof(PetscInt) 4 Configure run at: Thu Feb 14 14:17:01 2013 Configure options: --download-f-blas-lapack --download-mpich --with-debugging=0 PETSC_ARCH=linux-gnu-c-nodebug ----------------------------------------- Libraries compiled on Thu Feb 14 14:17:01 2013 on login1.ittc.ku.edu Machine characteristics: Linux-2.6.32-220.13.1.el6.x86_64-x86_64-with-redhat-6.2-Santiago Using PETSc directory: /bio/work1/zlwei/PETSc/petsc-dev Using PETSc arch: linux-gnu-c-nodebug ----------------------------------------- Using C compiler: /bio/work1/zlwei/PETSc/petsc-dev/linux-gnu-c-nodebug/bin/mpicc -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -O ${COPTFLAGS} ${CFLAGS} Using Fortran compiler: /bio/work1/zlwei/PETSc/petsc-dev/linux-gnu-c-nodebug/bin/mpif90 -fPIC -Wall -Wno-unused-variable -O ${FOPTFLAGS} ${FFLAGS} ----------------------------------------- Using include paths: -I/bio/work1/zlwei/PETSc/petsc-dev/linux-gnu-c-nodebug/include -I/bio/work1/zlwei/PETSc/petsc-dev/include -I/bio/work1/zlwei/PETSc/petsc-dev/include -I/bio/work1/zlwei/PETSc/petsc-dev/linux-gnu-c-nodebug/include ----------------------------------------- Using C linker: /bio/work1/zlwei/PETSc/petsc-dev/linux-gnu-c-nodebug/bin/mpicc Using Fortran linker: /bio/work1/zlwei/PETSc/petsc-dev/linux-gnu-c-nodebug/bin/mpif90 Using libraries: 
-Wl,-rpath,/bio/work1/zlwei/PETSc/petsc-dev/linux-gnu-c-nodebug/lib -L/bio/work1/zlwei/PETSc/petsc-dev/linux-gnu-c-nodebug/lib -lpetsc -Wl,-rpath,/bio/work1/zlwei/PETSc/petsc-dev/linux-gnu-c-nodebug/lib -L/bio/work1/zlwei/PETSc/petsc-dev/linux-gnu-c-nodebug/lib -lflapack -lfblas -lX11 -lpthread -lm -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/4.4.6 -L/usr/lib/gcc/x86_64-redhat-linux/4.4.6 -lmpichf90 -lgfortran -lm -lm -ldl -lmpich -lopa -lmpl -lrt -lpthread -lgcc_s -ldl ----------------------------------------- -------------- next part -------------- NumRank = 32 Ready for BuildingNonUniformGrid!! Ready for ApplyNonUniformGrid#1!! rank = 25, NbrRanks are: L: 24, R: 26, B:21, T:29, A:17, F:33!! rank = 25: xs = 75, ys = 0, zs = 150, mx = 300, my = 200, mz = 200! rank = 25: left BC = 0, rightBC = 0, bottom BC = 1, top BC = 0, aft BC = 0, front BC = 1 rank = 25: xc = 75, yc = 100, zc = 50, xw = 75, yw = 100, zw = 50 rank = 31, NbrRanks are: L: 30, R: 32, B:27, T:35, A:23, F:39!! rank = 31: xs = 225, ys = 100, zs = 150, mx = 300, my = 200, mz = 200! rank = 31: left BC = 0, rightBC = 1, bottom BC = 0, top BC = 1, aft BC = 0, front BC = 1 rank = 31: xc = 74, yc = 100, zc = 50, xw = 75, yw = 100, zw = 50 rank = 28, NbrRanks are: L: 27, R: 29, B:24, T:32, A:20, F:36!! rank = 28: xs = 0, ys = 100, zs = 150, mx = 300, my = 200, mz = 200! rank = 28: left BC = 1, rightBC = 0, bottom BC = 0, top BC = 1, aft BC = 0, front BC = 1 rank = 28: xc = 75, yc = 100, zc = 50, xw = 75, yw = 100, zw = 50 rank = 29, NbrRanks are: L: 28, R: 30, B:25, T:33, A:21, F:37!! rank = 29: xs = 75, ys = 100, zs = 150, mx = 300, my = 200, mz = 200! rank = 29: left BC = 0, rightBC = 0, bottom BC = 0, top BC = 1, aft BC = 0, front BC = 1 rank = 29: xc = 75, yc = 100, zc = 50, xw = 75, yw = 100, zw = 50 rank = 24, NbrRanks are: L: 23, R: 25, B:20, T:28, A:16, F:32!! rank = 24: xs = 0, ys = 0, zs = 150, mx = 300, my = 200, mz = 200! rank = 24: left BC = 1, rightBC = 0, bottom BC = 1, top BC = 0, aft BC = 0, front BC = 1 rank = 24: xc = 75, yc = 100, zc = 50, xw = 75, yw = 100, zw = 50 rank = 26, NbrRanks are: L: 25, R: 27, B:22, T:30, A:18, F:34!! rank = 26: xs = 150, ys = 0, zs = 150, mx = 300, my = 200, mz = 200! rank = 26: left BC = 0, rightBC = 0, bottom BC = 1, top BC = 0, aft BC = 0, front BC = 1 rank = 26: xc = 75, yc = 100, zc = 50, xw = 75, yw = 100, zw = 50 rank = 27, NbrRanks are: L: 26, R: 28, B:23, T:31, A:19, F:35!! rank = 27: xs = 225, ys = 0, zs = 150, mx = 300, my = 200, mz = 200! rank = 27: left BC = 0, rightBC = 1, bottom BC = 1, top BC = 0, aft BC = 0, front BC = 1 rank = 27: xc = 74, yc = 100, zc = 50, xw = 75, yw = 100, zw = 50 rank = 20, NbrRanks are: L: 19, R: 21, B:16, T:24, A:12, F:28!! rank = 20: xs = 0, ys = 100, zs = 100, mx = 300, my = 200, mz = 200! rank = 20: left BC = 1, rightBC = 0, bottom BC = 0, top BC = 1, aft BC = 0, front BC = 0 rank = 20: xc = 75, yc = 100, zc = 50, xw = 75, yw = 100, zw = 50 rank = 30, NbrRanks are: L: 29, R: 31, B:26, T:34, A:22, F:38!! rank = 30: xs = 150, ys = 100, zs = 150, mx = 300, my = 200, mz = 200! rank = 30: left BC = 0, rightBC = 0, bottom BC = 0, top BC = 1, aft BC = 0, front BC = 1 rank = 30: xc = 75, yc = 100, zc = 50, xw = 75, yw = 100, zw = 50 rank = 4, NbrRanks are: L: 3, R: 5, B:0, T:8, A:-4, F:12!! rank = 4: xs = 0, ys = 100, zs = 0, mx = 300, my = 200, mz = 200! 
rank = 4: left BC = 1, rightBC = 0, bottom BC = 0, top BC = 1, aft BC = 1, front BC = 0 rank = 4: xc = 75, yc = 100, zc = 50, xw = 75, yw = 100, zw = 50 rank = 12, NbrRanks are: L: 11, R: 13, B:8, T:16, A:4, F:20!! rank = 12: xs = 0, ys = 100, zs = 50, mx = 300, my = 200, mz = 200! rank = 12: left BC = 1, rightBC = 0, bottom BC = 0, top BC = 1, aft BC = 0, front BC = 0 rank = 12: xc = 75, yc = 100, zc = 50, xw = 75, yw = 100, zw = 50 rank = 19, NbrRanks are: L: 18, R: 20, B:15, T:23, A:11, F:27!! rank = 19: xs = 225, ys = 0, zs = 100, mx = 300, my = 200, mz = 200! rank = 19: left BC = 0, rightBC = 1, bottom BC = 1, top BC = 0, aft BC = 0, front BC = 0 rank = 19: xc = 74, yc = 100, zc = 50, xw = 75, yw = 100, zw = 50 rank = 1, NbrRanks are: L: 0, R: 2, B:-3, T:5, A:-7, F:9!! rank = 1: xs = 75, ys = 0, zs = 0, mx = 300, my = 200, mz = 200! rank = 1: left BC = 0, rightBC = 0, bottom BC = 1, top BC = 0, aft BC = 1, front BC = 0 rank = 18, NbrRanks are: L: 17, R: 19, B:14, T:22, A:10, F:26!! rank = 18: xs = 150, ys = 0, zs = 100, mx = 300, my = 200, mz = 200! rank = 18: left BC = 0, rightBC = 0, bottom BC = 1, top BC = 0, aft BC = 0, front BC = 0 rank = 18: xc = 75, yc = 100, zc = 50, xw = 75, yw = 100, zw = 50 rank = 1: xc = 75, yc = 100, zc = 50, xw = 75, yw = 100, zw = 50 rank = 3, NbrRanks are: L: 2, R: 4, B:-1, T:7, A:-5, F:11!! rank = 3: xs = 225, ys = 0, zs = 0, mx = 300, my = 200, mz = 200! rank = 3: left BC = 0, rightBC = 1, bottom BC = 1, top BC = 0, aft BC = 1, front BC = 0 rank = 3: xc = 74, yc = 100, zc = 50, xw = 75, yw = 100, zw = 50 rank = 9, NbrRanks are: L: 8, R: 10, B:5, T:13, A:1, F:17!! rank = 9: xs = 75, ys = 0, zs = 50, mx = 300, my = 200, mz = 200! rank = 9: left BC = 0, rightBC = 0, bottom BC = 1, top BC = 0, aft BC = 0, front BC = 0 rank = 9: xc = 75, yc = 100, zc = 50, xw = 75, yw = 100, zw = 50 rank = 23, NbrRanks are: L: 22, R: 24, B:19, T:27, A:15, F:31!! rank = 23: xs = 225, ys = 100, zs = 100, mx = 300, my = 200, mz = 200! rank = 23: left BC = 0, rightBC = 1, bottom BC = 0, top BC = 1, aft BC = 0, front BC = 0 rank = 23: xc = 74, yc = 100, zc = 50, xw = 75, yw = 100, zw = 50 rank = 15, NbrRanks are: L: 14, R: 16, B:11, T:19, A:7, F:23!! rank = 15: xs = 225, ys = 100, zs = 50, mx = 300, my = 200, mz = 200! rank = 15: left BC = 0, rightBC = 1, bottom BC = 0, top BC = 1, aft BC = 0, front BC = 0 rank = 15: xc = 74, yc = 100, zc = 50, xw = 75, yw = 100, zw = 50 rank = 11, NbrRanks are: L: 10, R: 12, B:7, T:15, A:3, F:19!! rank = 11: xs = 225, ys = 0, zs = 50, mx = 300, my = 200, mz = 200! rank = 11: left BC = 0, rightBC = 1, bottom BC = 1, top BC = 0, aft BC = 0, front BC = 0 rank = 11: xc = 74, yc = 100, zc = 50, xw = 75, yw = 100, zw = 50 rank = 8, NbrRanks are: L: 7, R: 9, B:4, T:12, A:0, F:16!! rank = 8: xs = 0, ys = 0, zs = 50, mx = 300, my = 200, mz = 200! rank = 8: left BC = 1, rightBC = 0, bottom BC = 1, top BC = 0, aft BC = 0, front BC = 0 rank = 8: xc = 75, yc = 100, zc = 50, xw = 75, yw = 100, zw = 50 rank = 13, NbrRanks are: L: 12, R: 14, B:9, T:17, A:5, F:21!! rank = 13: xs = 75, ys = 100, zs = 50, mx = 300, my = 200, mz = 200! rank = 13: left BC = 0, rightBC = 0, bottom BC = 0, top BC = 1, aft BC = 0, front BC = 0 rank = 14, NbrRanks are: L: 13, R: 15, B:10, T:18, A:6, F:22!! rank = 14: xs = 150, ys = 100, zs = 50, mx = 300, my = 200, mz = 200! 
rank = 14: left BC = 0, rightBC = 0, bottom BC = 0, top BC = 1, aft BC = 0, front BC = 0 rank = 14: xc = 75, yc = 100, zc = 50, xw = 75, yw = 100, zw = 50 rank = 13: xc = 75, yc = 100, zc = 50, xw = 75, yw = 100, zw = 50 Ready for Initialization!! node = 0 mx = 300 my = 200 mz = 200 mm = 4 nn = 2 pp = 4 rank = 0, NbrRanks are: L: -1, R: 1, B:-4, T:4, A:-8, F:8!! rank = 0: xs = 0, ys = 0, zs = 0, mx = 300, my = 200, mz = 200! rank = 0: left BC = 1, rightBC = 0, bottom BC = 1, top BC = 0, aft BC = 1, front BC = 0 rank = 0: xc = 75, yc = 100, zc = 50, xw = 75, yw = 100, zw = 50 rank = 7, NbrRanks are: L: 6, R: 8, B:3, T:11, A:-1, F:15!! rank = 7: xs = 225, ys = 100, zs = 0, mx = 300, my = 200, mz = 200! rank = 7: left BC = 0, rightBC = 1, bottom BC = 0, top BC = 1, aft BC = 1, front BC = 0 rank = 7: xc = 74, yc = 100, zc = 50, xw = 75, yw = 100, zw = 50 rank = 22, NbrRanks are: L: 21, R: 23, B:18, T:26, A:14, F:30!! rank = 22: xs = 150, ys = 100, zs = 100, mx = 300, my = 200, mz = 200! rank = 22: left BC = 0, rightBC = 0, bottom BC = 0, top BC = 1, aft BC = 0, front BC = 0 rank = 22: xc = 75, yc = 100, zc = 50, xw = 75, yw = 100, zw = 50 rank = 16, NbrRanks are: L: 15, R: 17, B:12, T:20, A:8, F:24!! rank = 16: xs = 0, ys = 0, zs = 100, mx = 300, my = 200, mz = 200! rank = 16: left BC = 1, rightBC = 0, bottom BC = 1, top BC = 0, aft BC = 0, front BC = 0 rank = 16: xc = 75, yc = 100, zc = 50, xw = 75, yw = 100, zw = 50 rank = 5, NbrRanks are: L: 4, R: 6, B:1, T:9, A:-3, F:13!! rank = 5: xs = 75, ys = 100, zs = 0, mx = 300, my = 200, mz = 200! rank = 5: left BC = 0, rightBC = 0, bottom BC = 0, top BC = 1, aft BC = 1, front BC = 0 rank = 5: xc = 75, yc = 100, zc = 50, xw = 75, yw = 100, zw = 50 rank = 6, NbrRanks are: L: 5, R: 7, B:2, T:10, A:-2, F:14!! rank = 6: xs = 150, ys = 100, zs = 0, mx = 300, my = 200, mz = 200! rank = 6: left BC = 0, rightBC = 0, bottom BC = 0, top BC = 1, aft BC = 1, front BC = 0 rank = 6: xc = 75, yc = 100, zc = 50, xw = 75, yw = 100, zw = 50 rank = 2, NbrRanks are: L: 1, R: 3, B:-2, T:6, A:-6, F:10!! rank = 2: xs = 150, ys = 0, zs = 0, mx = 300, my = 200, mz = 200! rank = 2: left BC = 0, rightBC = 0, bottom BC = 1, top BC = 0, aft BC = 1, front BC = 0 rank = 10, NbrRanks are: L: 9, R: 11, B:6, T:14, A:2, F:18!! rank = 10: xs = 150, ys = 0, zs = 50, mx = 300, my = 200, mz = 200! rank = 2: xc = 75, yc = 100, zc = 50, xw = 75, yw = 100, zw = 50 rank = 10: left BC = 0, rightBC = 0, bottom BC = 1, top BC = 0, aft BC = 0, front BC = 0 rank = 10: xc = 75, yc = 100, zc = 50, xw = 75, yw = 100, zw = 50 rank = 17, NbrRanks are: L: 16, R: 18, B:13, T:21, A:9, F:25!! rank = 17: xs = 75, ys = 0, zs = 100, mx = 300, my = 200, mz = 200! rank = 17: left BC = 0, rightBC = 0, bottom BC = 1, top BC = 0, aft BC = 0, front BC = 0 rank = 17: xc = 75, yc = 100, zc = 50, xw = 75, yw = 100, zw = 50 rank = 21, NbrRanks are: L: 20, R: 22, B:17, T:25, A:13, F:29!! rank = 21: xs = 75, ys = 100, zs = 100, mx = 300, my = 200, mz = 200! 
rank = 21: left BC = 0, rightBC = 0, bottom BC = 0, top BC = 1, aft BC = 0, front BC = 0 rank = 21: xc = 75, yc = 100, zc = 50, xw = 75, yw = 100, zw = 50 rank 8 Cylinder: neighbor = left: 59 right: 75 bottom: 49 top: 100 aft: 49 front: 100 rank 9 Cylinder: neighbor = left: 74 right: 150 bottom: 49 top: 100 aft: 49 front: 100 rank 11 Cylinder: neighbor = left: 224 right: 161 bottom: 49 top: 100 aft: 49 front: 100 rank 14 Cylinder: neighbor = left: 149 right: 161 bottom: 99 top: 151 aft: 49 front: 100 rank 12 Cylinder: neighbor = left: 59 right: 75 bottom: 99 top: 151 aft: 49 front: 100 rank 10 Cylinder: neighbor = left: 149 right: 161 bottom: 49 top: 100 aft: 49 front: 100 rank 13 Cylinder: neighbor = left: 74 right: 150 bottom: 99 top: 151 aft: 49 front: 100 rank 15 Cylinder: neighbor = left: 224 right: 161 bottom: 99 top: 151 aft: 49 front: 100 rank 1 Cylinder: neighbor = left: 74 right: 150 bottom: 49 top: 100 aft: 49 front: 50 Center of the sphere = (8.635204, 12.139085, 12.139085) !! rank 17 Cylinder: neighbor = left: 74 right: 150 bottom: 49 top: 100 aft: 99 front: 150 rank 27 Cylinder: neighbor = left: 224 right: 161 bottom: 49 top: 100 aft: 149 front: 151 rank 24 Cylinder: neighbor = left: 59 right: 75 bottom: 49 top: 100 aft: 149 front: 151 rank 26 Cylinder: neighbor = left: 149 right: 161 bottom: 49 top: 100 aft: 149 front: 151 rank 31 Cylinder: neighbor = left: 224 right: 161 bottom: 99 top: 151 aft: 149 front: 151 rank 28 Cylinder: neighbor = left: 59 right: 75 bottom: 99 top: 151 aft: 149 front: 151 rank 19 Cylinder: neighbor = left: 224 right: 161 bottom: 49 top: 100 aft: 99 front: 150 rank 21 Cylinder: neighbor = left: 74 right: 150 bottom: 99 top: 151 aft: 99 front: 150 rank 18 Cylinder: neighbor = left: 149 right: 161 bottom: 49 top: 100 aft: 99 front: 150 rank 20 Cylinder: neighbor = left: 59 right: 75 bottom: 99 top: 151 aft: 99 front: 150 rank 23 Cylinder: neighbor = left: 224 right: 161 bottom: 99 top: 151 aft: 99 front: 150 rank 30 Cylinder: neighbor = left: 149 right: 161 bottom: 99 top: 151 aft: 149 front: 151 rank 16 Cylinder: neighbor = left: 59 right: 75 bottom: 49 top: 100 aft: 99 front: 150 rank 5 Cylinder: neighbor = left: 74 right: 150 bottom: 99 top: 151 aft: 49 front: 50 rank 7 Cylinder: neighbor = left: 224 right: 161 bottom: 99 top: 151 aft: 49 front: 50 rank 2 Cylinder: neighbor = left: 149 right: 161 bottom: 49 top: 100 aft: 49 front: 50 rank 3 Cylinder: neighbor = left: 224 right: 161 bottom: 49 top: 100 aft: 49 front: 50 rank 0 Cylinder: neighbor = left: 59 right: 75 bottom: 49 top: 100 aft: 49 front: 50 rank 4 Cylinder: neighbor = left: 59 right: 75 bottom: 99 top: 151 aft: 49 front: 50 rank 6 Cylinder: neighbor = left: 149 right: 161 bottom: 99 top: 151 aft: 49 front: 50 rank 25 Cylinder: neighbor = left: 74 right: 150 bottom: 49 top: 100 aft: 149 front: 151 rank 22 Cylinder: neighbor = left: 149 right: 161 bottom: 99 top: 151 aft: 99 front: 150 rank 29 Cylinder: neighbor = left: 74 right: 150 bottom: 99 top: 151 aft: 149 front: 151 Ready for the main iteration!! Current time step = 0, Total time step number = 1 Check ApplyBCs!! Check CalConvTerm!! Check CommUpdate after CalConvTerm! Check CalIBVel! Check CalIBForce!! Check AssyForce!! Check Finished AssyForce!! Ready for ApplyNonUniform of the KSPSolve#1!! Ready for KSPSolve#1!! 
0 KSP Residual norm 2.997792736227e+02 1 KSP Residual norm 1.591262605100e+01 2 KSP Residual norm 2.660865065368e+00 3 KSP Residual norm 5.661457826860e-01 4 KSP Residual norm 6.273811571672e-02 5 KSP Residual norm 8.541664768857e-03 6 KSP Residual norm 1.196432453759e-03 7 KSP Residual norm 1.366852070104e-04 8 KSP Residual norm 4.984885606812e-05 9 KSP Residual norm 2.967025150410e-05 KSP Object: 32 MPI processes type: gmres GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement GMRES: happy breakdown tolerance 1e-30 maximum iterations=10000 tolerances: relative=1e-07, absolute=1e-50, divergence=10000 left preconditioning using nonzero initial guess using PRECONDITIONED norm type for convergence test PC Object: 32 MPI processes type: hypre HYPRE BoomerAMG preconditioning HYPRE BoomerAMG: Cycle type V HYPRE BoomerAMG: Maximum number of levels 25 HYPRE BoomerAMG: Maximum number of iterations PER hypre call 1 HYPRE BoomerAMG: Convergence tolerance PER hypre call 0 HYPRE BoomerAMG: Threshold for strong coupling 0.25 HYPRE BoomerAMG: Interpolation truncation factor 0 HYPRE BoomerAMG: Interpolation: max elements per row 0 HYPRE BoomerAMG: Number of levels of aggressive coarsening 0 HYPRE BoomerAMG: Number of paths for aggressive coarsening 1 HYPRE BoomerAMG: Maximum row sums 0.9 HYPRE BoomerAMG: Sweeps down 1 HYPRE BoomerAMG: Sweeps up 1 HYPRE BoomerAMG: Sweeps on coarse 1 HYPRE BoomerAMG: Relax down symmetric-SOR/Jacobi HYPRE BoomerAMG: Relax up symmetric-SOR/Jacobi HYPRE BoomerAMG: Relax on coarse Gaussian-elimination HYPRE BoomerAMG: Relax weight (all) 1 HYPRE BoomerAMG: Outer relax weight (all) 1 HYPRE BoomerAMG: Using CF-relaxation HYPRE BoomerAMG: Measure type local HYPRE BoomerAMG: Coarsen type Falgout HYPRE BoomerAMG: Interpolation type classical linear system matrix = precond matrix: Matrix Object: 32 MPI processes type: mpiaij rows=12000000, cols=12000000 Solver#1, Rank = 1, Solve One Time = 165.783827 total: nonzeros=83680000, allocated nonzeros=83680000 Solver#1, Rank = 8, Solve One Time = 165.783472 Solver#1, Rank = 12, Solve One Time = 165.783675 Solver#1, Rank = 10, Solve One Time = 165.773235 Solver#1, Rank = 2, Solve One Time = 165.783872 Solver#1, Rank = 13, Solve One Time = 165.783859 Solver#1, Rank = 9, Solve One Time = 165.783889 total number of mallocs used during MatSetValues calls =0 Solver#1, Rank = 6, Solve One Time = 165.782438 Solver#1, Rank = 0, Solve One Time = 165.783631 Solver#1, Rank = 3, Solve One Time = 165.782258 Solver#1, Rank = 7, Solve One Time = 165.782349 Solver#1, Rank = 24, Solve One Time = 165.783867 Finish KSPSolve#1!! 
Solver#1, Rank = 11, Solve One Time = 165.773358 Solver#1, Rank = 16, Solve One Time = 165.783255 Solver#1, Rank = 25, Solve One Time = 165.784052 Solver#1, Rank = 4, Solve One Time = 165.783982 Solver#1, Rank = 14, Solve One Time = 165.781645 Solver#1, Rank = 26, Solve One Time = 165.784008 Solver#1, Rank = 28, Solve One Time = 165.784051 Solver#1, Rank = 5, Solve One Time = 165.782255 Solver#1, Rank = 15, Solve One Time = 165.781732 Solver#1, Rank = 18, Solve One Time = 165.776823 Solver#1, Rank = 27, Solve One Time = 165.773497 Solver#1, Rank = 17, Solve One Time = 165.783936 Solver#1, Rank = 29, Solve One Time = 165.784046 Solver#1, Rank = 20, Solve One Time = 165.784034 Solver#1, Rank = 19, Solve One Time = 165.771977 Solver#1, Rank = 30, Solve One Time = 165.773829 Solver#1, Rank = 21, Solve One Time = 165.784062 Solver#1, Rank = 31, Solve One Time = 165.773872 Solver#1, Rank = 22, Solve One Time = 165.766045 Solver#1, Rank = 23, Solve One Time = 165.766240 Residual norm 2.05135e-07 Check#2 ApplyBCs!! Check#2 AssyForce!! Check#2 CalVelField!! 0 KSP Residual norm 2.870247935899e+02 1 KSP Residual norm 1.606442422016e+01 2 KSP Residual norm 3.090064201333e+00 3 KSP Residual norm 5.867933774996e-01 4 KSP Residual norm 7.308417770262e-02 5 KSP Residual norm 1.103028493864e-02 6 KSP Residual norm 2.170332527761e-03 7 KSP Residual norm 6.906994551513e-04 8 KSP Residual norm 3.045998288429e-04 9 KSP Residual norm 5.136309989396e-05 10 KSP Residual norm 2.055236160250e-05 KSP Object: 32 MPI processes type: gmres GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement GMRES: happy breakdown tolerance 1e-30 maximum iterations=10000 tolerances: relative=1e-07, absolute=1e-50, divergence=10000 left preconditioning using nonzero initial guess using PRECONDITIONED norm type for convergence test PC Object: 32 MPI processes type: hypre HYPRE BoomerAMG preconditioning HYPRE BoomerAMG: Cycle type V HYPRE BoomerAMG: Maximum number of levels 25 HYPRE BoomerAMG: Maximum number of iterations PER hypre call 1 HYPRE BoomerAMG: Convergence tolerance PER hypre call 0 HYPRE BoomerAMG: Threshold for strong coupling 0.25 HYPRE BoomerAMG: Interpolation truncation factor 0 HYPRE BoomerAMG: Interpolation: max elements per row 0 HYPRE BoomerAMG: Number of levels of aggressive coarsening 0 HYPRE BoomerAMG: Number of paths for aggressive coarsening 1 HYPRE BoomerAMG: Maximum row sums 0.9 HYPRE BoomerAMG: Sweeps down 1 HYPRE BoomerAMG: Sweeps up 1 HYPRE BoomerAMG: Sweeps on coarse 1 HYPRE BoomerAMG: Relax down symmetric-SOR/Jacobi HYPRE BoomerAMG: Relax up symmetric-SOR/Jacobi HYPRE BoomerAMG: Relax on coarse Gaussian-elimination HYPRE BoomerAMG: Relax weight (all) 1 HYPRE BoomerAMG: Outer relax weight (all) 1 HYPRE BoomerAMG: Using CF-relaxation HYPRE BoomerAMG: Measure type local HYPRE BoomerAMG: Coarsen type Falgout HYPRE BoomerAMG: Interpolation type classical linear system matrix = precond matrix: Matrix Object: 32 MPI processes type: mpiaij rows=12000000, cols=12000000 Solver#2, Rank = 8, Solve One Time = 171.588158 total: nonzeros=83680000, allocated nonzeros=83680000 Solver#2, Rank = 1, Solve One Time = 171.588110 Solver#2, Rank = 9, Solve One Time = 171.586617 Solver#2, Rank = 4, Solve One Time = 171.588193 Solver#2, Rank = 14, Solve One Time = 171.588117 Solver#2, Rank = 15, Solve One Time = 171.587124 Solver#2, Rank = 24, Solve One Time = 171.588076 Solver#2, Rank = 10, Solve One Time = 171.586486 total number of mallocs used during MatSetValues calls =0 
Solver#2, Rank = 11, Solve One Time = 171.586354 Solver#2, Rank = 26, Solve One Time = 171.588102 Solver#2, Rank = 5, Solve One Time = 171.582924 Solver#2, Rank = 13, Solve One Time = 171.588282 Solver#2, Rank = 28, Solve One Time = 171.588074 Solver#2, Rank = 0, Solve One Time = 171.588347 Solver#2, Rank = 12, Solve One Time = 171.588397 Solver#2, Rank = 25, Solve One Time = 171.588135 Solver#2, Rank = 6, Solve One Time = 171.575956 Solver#2, Rank = 3, Solve One Time = 171.578867 Solver#2, Rank = 2, Solve One Time = 171.588348 Solver#2, Rank = 7, Solve One Time = 171.575976 Solver#2, Rank = 16, Solve One Time = 171.588326 Solver#2, Rank = 27, Solve One Time = 171.580220 Solver#2, Rank = 17, Solve One Time = 171.588328 Solver#2, Rank = 29, Solve One Time = 171.583465 Solver#2, Rank = 18, Solve One Time = 171.588341 Solver#2, Rank = 30, Solve One Time = 171.582318 Solver#2, Rank = 20, Solve One Time = 171.588311 Solver#2, Rank = 31, Solve One Time = 171.582314 Solver#2, Rank = 19, Solve One Time = 171.586152 Solver#2, Rank = 21, Solve One Time = 171.586144 Solver#2, Rank = 22, Solve One Time = 171.586817 Solver#2, Rank = 23, Solve One Time = 171.586864 Residual norm 9.57625e-08 Rank#18, Max dp = 1.231050e+00 @ (8, 81, 4) Rank#13, Re = 300.000000, time step = 0, continuity = 2.612951e-01 @ Phys(107, 150, 96) & Local(36, 54, 50) with (0.010000, 0.010000, 0.010000), (0.839689,0.625750,-0.110498,0.055790,-0.011703,0.033334) Restart files have been written!! PlotU is ready!! PlotU is finished!! From sonyablade2010 at hotmail.com Fri Apr 5 12:12:00 2013 From: sonyablade2010 at hotmail.com (Sonya Blade) Date: Fri, 5 Apr 2013 18:12:00 +0100 Subject: [petsc-users] EVP Solutions with Subspace In-Reply-To: References: , , Message-ID: >By default SLEPc uses a direct solver, so you would not get this message. Which options are you using for the KSP? >Jose ?I use Subspace if you mean that by option. Probably I'm misinterpreting? something here. As far as I know KSP stands for the Kyrlov subspace, can? you guide me on that? ? ierr = EPSCreate(PETSC_COMM_WORLD,&eps);CHKERRQ(ierr); ?? ? ierr = EPSSetDimensions(eps,12,PETSC_NULL,PETSC_NULL); ? ierr = EPSSetType(eps, EPSSUBSPACE); ? ierr = EPSSetOperators(eps,B,C);CHKERRQ(ierr); ? ierr = EPSSetProblemType(eps,EPS_GNHEP);CHKERRQ(ierr); ? /* ? ? Set the initial vector. This is optional, if not done the initial ? ? vector is set to random values ? */ ? ierr = MatGetVecs(B,&v0,PETSC_NULL);CHKERRQ(ierr); ? ierr = VecSet(v0,1.0);CHKERRQ(ierr); ? ierr = EPSSetInitialSpace(eps,1,&v0);CHKERRQ(ierr); ? /* - - - Solve the eigensystem - - - */ ? ierr = EPSSolve(eps);CHKERRQ(ierr); Regards, From jroman at dsic.upv.es Fri Apr 5 13:19:54 2013 From: jroman at dsic.upv.es (Jose E. Roman) Date: Fri, 5 Apr 2013 20:19:54 +0200 Subject: [petsc-users] EVP Solutions with Subspace In-Reply-To: References: , , Message-ID: <22053C9D-9A87-4258-B694-D05685F65BF9@dsic.upv.es> El 05/04/2013, a las 19:12, Sonya Blade escribi?: >> By default SLEPc uses a direct solver, so you would not get this message. Which options are you using for the KSP? >> Jose > I use Subspace if you mean that by option. Probably I'm misinterpreting > something here. As far as I know KSP stands for the Kyrlov subspace, can > you guide me on that? 
> > ierr = EPSCreate(PETSC_COMM_WORLD,&eps);CHKERRQ(ierr); > > ierr = EPSSetDimensions(eps,12,PETSC_NULL,PETSC_NULL); > ierr = EPSSetType(eps, EPSSUBSPACE); > ierr = EPSSetOperators(eps,B,C);CHKERRQ(ierr); > ierr = EPSSetProblemType(eps,EPS_GNHEP);CHKERRQ(ierr); > /* > Set the initial vector. This is optional, if not done the initial > vector is set to random values > */ > ierr = MatGetVecs(B,&v0,PETSC_NULL);CHKERRQ(ierr); > ierr = VecSet(v0,1.0);CHKERRQ(ierr); > ierr = EPSSetInitialSpace(eps,1,&v0);CHKERRQ(ierr); > /* - - - Solve the eigensystem - - - */ > > ierr = EPSSolve(eps);CHKERRQ(ierr); > > Regards, Do not use EPSSUBSPACE. Do not use any solver different from the default unless you know what you are doing. KSP is PETSc's linear solver. Please read the documentation, not only SLEPc's but also PETSc's. Run with the additional option "-eps_view before" and send the output. Also, send the complete error message, not just the last line. Jose From bsmith at mcs.anl.gov Fri Apr 5 13:52:39 2013 From: bsmith at mcs.anl.gov (Barry Smith) Date: Fri, 5 Apr 2013 13:52:39 -0500 Subject: [petsc-users] Any changes in ML usage between 3.1-p8 -> 3.3-p6? In-Reply-To: <20130405143415.GH17937@karman> References: <20130405143415.GH17937@karman> Message-ID: On Apr 5, 2013, at 9:34 AM, Jozsef Bakosi wrote: > Hi folks, > > In switching from 3.1-p8 to 3.3-p6, keeping the same ML ml-6.2.tar.gz, I get > indefinite preconditioner with the newer PETSc version. Has there been anything > substantial changed around how PCs are handled, e.g. in the defaults? > > I know this request is pretty general, I would just like to know where to start > looking, where changes in PETSc might be clobbering the (supposedly same) > behavior of ML. The two versions of PETSc are suppose to work with the same ML. Run both with -ksp_view -ksp_monitor_true_residual (forcing KSPSolve to not generate any error by using a -ksp_max_its 2 or something small) and see if there are any differences in the output. Feel free to send us the two outputs if different if you have questions. I cannot think of anything, off the cuff that would cause a change. Barry > > Thanks, > Jozsef From sonyablade2010 at hotmail.com Fri Apr 5 22:58:13 2013 From: sonyablade2010 at hotmail.com (Sonya Blade) Date: Sat, 6 Apr 2013 04:58:13 +0100 Subject: [petsc-users] EVP Solutions with Subspace In-Reply-To: References: , , , Message-ID: Hi again, I'm still reading the manual of Petsc and Slepc but at the same? I just wanted to see some vital signs that I'm on right path. The problem? that I'm trying to solve still produces the non convergence messages as? shown below. One thing has drawn my attention, although I set the convergence tolerance? and max iterations in ESP step KSP still uses its default values I believe, such as KSP tolerances=10e-8 which is very high precision that I don't need. EPS Object: 1 MPI processes ? type not yet set ? problem type: generalized non-symmetric eigenvalue problem with symmetric pos tive definite B ? selected portion of the spectrum: not yet set ? number of eigenvalues (nev): 12 ? number of column vectors (ncv): 12 ? maximum dimension of projected problem (mpd): 0 ? maximum number of iterations: 300 ? tolerance: 0.001 ? convergence test: relative to the eigenvalue ? estimates of matrix norms (constant): norm(A)=1, norm(B)=1 IP Object: 1 MPI processes ? type not yet set ? orthogonalization method: classical Gram-Schmidt ? orthogonalization refinement: if needed (eta: 0.7071) DS Object: 1 MPI processes ? 
type not yet set ST Object: 1 MPI processes ? type not yet set ? shift: 0 ? matrices A and B have different nonzero pattern ? KSP Object: ?(st_) ? 1 MPI processes ? ? type not yet set ? ? maximum iterations=10000, initial guess is zero ? ? tolerances: ?relative=1e-08, absolute=1e-50, divergence=10000 ? ? left preconditioning ? ? using DEFAULT norm type for convergence test ? PC Object: ?(st_) ? 1 MPI processes ? ? type not yet set [0]PETSC ERROR: --------------------- Error Message --------------------------- -------- [0]PETSC ERROR: ? ! [0]PETSC ERROR: KSP did not converge (reason=DIVERGED_ITS)! [0]PETSC ERROR: --------------------------------------------------------------- -------- From jroman at dsic.upv.es Sat Apr 6 04:38:57 2013 From: jroman at dsic.upv.es (Jose E. Roman) Date: Sat, 6 Apr 2013 11:38:57 +0200 Subject: [petsc-users] EVP Solutions with Subspace In-Reply-To: References: , , , Message-ID: <2C78D631-693F-40A6-91F6-0C436B94E8C1@dsic.upv.es> El 06/04/2013, a las 05:58, Sonya Blade escribi?: > Hi again, > > I'm still reading the manual of Petsc and Slepc but at the same > I just wanted to see some vital signs that I'm on right path. The problem > that I'm trying to solve still produces the non convergence messages as > shown below. > > One thing has drawn my attention, although I set the convergence tolerance > and max iterations in ESP step KSP still uses its default values I believe, > such as KSP tolerances=10e-8 which is very high precision that I don't need. > Send your code to slepc-maint and I will have a look at it. Jose From friedmud at gmail.com Sat Apr 6 09:40:38 2013 From: friedmud at gmail.com (Derek Gaston) Date: Sat, 6 Apr 2013 08:40:38 -0600 Subject: [petsc-users] Redo PetscOptions From Commandline In-Reply-To: References: Message-ID: Sorry for the delay in my replies! I got diverted off to something else and am just now making it back to this! Thank you guys for all the good replies, I'll try to answer some of the questions: Derek, what problem are you trying to solve by resetting the options like > this? Well, of course, I am trying to do something in the wrong way. I should be using prefixing, like Dmitry posits. However, we don't have the ability to assign names to individual solves at this moment... but we do have different sets of options for each solve (that we're storing in our own parameters objects). So, my "quick and dirty" idea was to clear the PETSc options before each solve and fill them back in with the correct options for this solve. That works fine... but we do also want to allow setting "global" options on the command-line and having those effect all of the solves. So I needed to pull those back in too. Unfortunately, some of these solves are on sub-communicators (actually, all of them might be... but at least in my case there is at least one solve over the entire set of processors that's been given to us... even if that might not be over COMM_WORLD)... so calling PetscOptionsInsert() on those sub-communicators will cause the program to hang. This is most definitely my own fault... we should be using the mechanisms built into PETSc for assigning per-solve options (prefixing). It's just that in this particular case that can get arbitrarily complicated (we have multiple levels of hierarchical solves happening... where only the level above a given level knows about a level (i.e. a level doesn't even know about the level above it that spawned it) so you have to be a bit careful to make sure prefixes don't collide). 
But it can be done ;-) For now, swapping PETSC_COMM_WORLD is allowing me to solve my problems. But I'll put in the proper prefixing stuff later. Now - about PetscInitialize()... we are in the situation where we might be passed a sub-communicator to do our whole solve on and will need to set PETSC_COMM_WORLD to that sub-comunicator. It's good to know that I just need to do that before PetscInitialize(). It seems to me that it could be convenient to have another version of PetscInitialize that takes a COMM for just this case. But, I do understand that it's fairly uncommon. For us, this is happening because of coupling multiple codes together (which might not be PETSc based codes) that are running on disjoint communicators in parallel. Thanks again for the replies! Derek -------------- next part -------------- An HTML attachment was scrubbed... URL: From karpeev at mcs.anl.gov Sun Apr 7 10:54:29 2013 From: karpeev at mcs.anl.gov (Dmitry Karpeev) Date: Sun, 7 Apr 2013 10:54:29 -0500 Subject: [petsc-users] Redo PetscOptions From Commandline In-Reply-To: References: Message-ID: On Fri, Apr 5, 2013 at 11:32 AM, Barry Smith wrote: > > On Apr 5, 2013, at 10:18 AM, Dmitry Karpeev wrote: > > > Should PetscInitialize() take a comm argument, then? > > No, this is such an uncommon case that it shouldn't be exposed to > everyone. It seems to me that the burden on the user is fairly small -- the boilerplate code becomes PetscInitialize(MPI_COMM_WORLD) instead of PetscInitialize(). From the library design perspective, making it more general by taking an initialization parameter also sounds (to me) like the right thing to do. And since the hack that swaps PETSC_COMM_WORLD works (at least for Derek), it sounds like making it "legitimate" wouldn't take much extra code. Dmitry. > Note I actually would advocate that "Moose's new MultiApp capability" use > PETSc "above" the individual runs and thus would not design "Moose's new > MultiApp capability" to set the PETSC_COMM_WORLD to be related to a single > app; I was just pointing out this was a possibility if "Moose's new > MultiApp capability" specifically only knew about PETSc at the individual > App level (which it sounds like its current design). > > Also note that I previously proposed making > PetscOptionsInsertArgs_Private() public so that one may reset the command > line options and not the environmental options and Moose could then use > this option safely and not need to muck with PETSC_COMM_WORLD. > > Barry > > > > > > > > On Fri, Apr 5, 2013 at 10:15 AM, Barry Smith wrote: > > > > On Apr 5, 2013, at 10:09 AM, Dmitry Karpeev wrote: > > > > > > > > On Thu, Apr 4, 2013 at 7:42 PM, Jed Brown > wrote: > > > > > > It seems to me this is related to Moose's new MultiApp capability: > solving different systems on subcommunicators (with the interaction between > the systems handled outside of PETSc)? It may be that the cleaner approach > is to have the subsystems (their solvers, rather) use prefixes to set their > specific options. > > > Would that be enough? > > > > > > Dmitry. > > > > > > > If PETSc is only aware of the subcommunicator then it could be that > the right model is to set PETSC_COMM_WORLD to be the sub comm of > MPI_COMM_WORLD before PetscInitialize(), then PETSc only sees that sub > communicator. But you will NOT be able to use PETSc on anything that > "connects" these various sub communicators together. > > > > Barry > > > > > > > > > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jedbrown at mcs.anl.gov Sun Apr 7 11:43:19 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Sun, 07 Apr 2013 11:43:19 -0500 Subject: [petsc-users] Redo PetscOptions From Commandline In-Reply-To: References: Message-ID: <87mwtagul4.fsf@mcs.anl.gov> Dmitry Karpeev writes: > On Fri, Apr 5, 2013 at 11:32 AM, Barry Smith wrote: > >> >> On Apr 5, 2013, at 10:18 AM, Dmitry Karpeev wrote: >> >> > Should PetscInitialize() take a comm argument, then? >> >> No, this is such an uncommon case that it shouldn't be exposed to >> everyone. > > It seems to me that the burden on the user is fairly small -- the > boilerplate code becomes > PetscInitialize(MPI_COMM_WORLD) instead of PetscInitialize(). From the > library design > perspective, making it more general by taking an initialization parameter > also sounds (to me) > like the right thing to do. Users have always been able to set PETSC_COMM_WORLD _before_ calling PetscInitialize(). Adding another parameter is not necessary or desirable. From zyzhang at nuaa.edu.cn Sun Apr 7 22:21:03 2013 From: zyzhang at nuaa.edu.cn (Zhang) Date: Mon, 8 Apr 2013 11:21:03 +0800 (CST) Subject: [petsc-users] Compiling Error of petsc-dev Message-ID: <126726b.e0d5.13de7a945e2.Coremail.zyzhang@nuaa.edu.cn> Dear all, I use git to pull the petsc-dev and configured with the folliwing options: ./configure --with-shared-libraries=1 --with-dynamic-loading=1 --with-x=1 --with-blas-lapack-dir=/usr/lib/lapack --with-valgrind=1 --download-openmpi=1 --with-cc=gcc --with-fc=gfortran --with-clanguage=C++ --with-c++-support=1 --with-valgrind=1 --with-sieve=1 --with-opt-sieve=1 --with-fiat=1 --download-scientificpython --download-fiat --download-generator --download-triangle --with-ctetgen --download-chaco --download-boost=1 --download-ctetgen --download-hypre=1 --download-metis --download-parmetis --with-cuda=1 --with-thrust=1 --with-cusp=1 --with-cusp-dir=/usr/local/cuda/include/ --download-txpetscgpu=1 --download-cmake When make it, error appeared in make.log ... ainvcusp.cu:373:118: error: macro "PetscObjectComposeFunction" requires 3 arguments, but only 2 given /usr/bin/ar: ainvcusp.o: No such file or directory ... ========================================= **************************ERROR************************************ Error during compile, check arch-linux2-c-opt/conf/make.log Send it and arch-linux2-c-opt/conf/configure.log to petsc-maint at mcs.anl.gov ******************************************************************** Please leave me any light on solving it, thanks first BTW, any instruction on correct installation of development version is welcome, :-) Zhenyu Zhang -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jedbrown at mcs.anl.gov Sun Apr 7 23:05:24 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Sun, 07 Apr 2013 23:05:24 -0500 Subject: [petsc-users] Compiling Error of petsc-dev In-Reply-To: <126726b.e0d5.13de7a945e2.Coremail.zyzhang@nuaa.edu.cn> References: <126726b.e0d5.13de7a945e2.Coremail.zyzhang@nuaa.edu.cn> Message-ID: <87li8tekfv.fsf@mcs.anl.gov> Zhang writes: > Dear all, > > I use git to pull the petsc-dev and configured with the folliwing options: > > ./configure --with-shared-libraries=1 --with-dynamic-loading=1 > --with-x=1 --with-blas-lapack-dir=/usr/lib/lapack --with-valgrind=1 > --download-openmpi=1 --with-cc=gcc --with-fc=gfortran > --with-clanguage=C++ --with-c++-support=1 --with-valgrind=1 > --with-sieve=1 --with-opt-sieve=1 --with-fiat=1 > --download-scientificpython --download-fiat --download-generator > --download-triangle --with-ctetgen --download-chaco --download-boost=1 > --download-ctetgen --download-hypre=1 --download-metis > --download-parmetis --with-cuda=1 --with-thrust=1 --with-cusp=1 > --with-cusp-dir=/usr/local/cuda/include/ --download-txpetscgpu=1 > --download-cmake > > When make it, error appeared in make.log > ... > ainvcusp.cu:373:118: error: macro "PetscObjectComposeFunction" requires 3 arguments, but only 2 given > /usr/bin/ar: ainvcusp.o: No such file or directory This was a bad batch rename, though I don't know why nobody noticed it yet. Can you: $ git checkout next $ git pull $ make reconfigure all test Let us know if you encounter any other problems. From sonyablade2010 at hotmail.com Mon Apr 8 00:47:39 2013 From: sonyablade2010 at hotmail.com (Sonya Blade) Date: Mon, 8 Apr 2013 06:47:39 +0100 Subject: [petsc-users] LDL^T Decomposition and matrix rows/columns reordering Message-ID: Dear All, I'd like to know is there a function in Petsc or in Slepc for calculating the? LTL^T decomposition? Aside from that If I want to perform the rows and column reordering which function? or function set should I use in Petsc? I haven't see any vector extraction? from matrices on the manual.? How can I perform the gauss elimination method in Petsc? Your help will be appreciated. Regards, From knepley at gmail.com Mon Apr 8 07:08:05 2013 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 8 Apr 2013 07:08:05 -0500 Subject: [petsc-users] LDL^T Decomposition and matrix rows/columns reordering In-Reply-To: References: Message-ID: On Mon, Apr 8, 2013 at 12:47 AM, Sonya Blade wrote: > Dear All, > > I'd like to know is there a function in Petsc or in Slepc for calculating > the > LTL^T decomposition? > > Aside from that If I want to perform the rows and column reordering which > function > or function set should I use in Petsc? I haven't see any vector extraction > from matrices on the manual. > > How can I perform the gauss elimination method in Petsc? > http://www.mcs.anl.gov/petsc/petsc-dev/docs/linearsolvertable.html Matt > Your help will be appreciated. > > Regards, -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From dharmareddy84 at gmail.com Mon Apr 8 08:33:45 2013 From: dharmareddy84 at gmail.com (Dharmendar Reddy) Date: Mon, 8 Apr 2013 08:33:45 -0500 Subject: [petsc-users] FIAT Error Message-ID: Hello, I am getting a FIAT related error. I am using branch knepley/pylith. Am i missing something ? 
login3$ $PETSC_DIR/bin/pythonscripts/PetscGenerateFEMQuadrature.py dim order dim 1 lapla cian dim order 1 1 gradient ex62.h Traceback (most recent call last): File "/home1/00924/Reddy135/LocalApps/petsc/bin/pythonscripts/PetscGenerateFEMQuadratur e.py", line 15, in from FIAT.reference_element import default_simplex ImportError: No module named FIAT.reference_element I have used the following configOpts configOpts_Opt= --with-debugging=0 \ --with-petsc-arch=${PETSC_ARCH_Opt} \ --with-petsc-dir=${PETSC_DIR} \ --with-mpi-dir=${mpiDir} \ --with-scalar-type=real \ --with-blas-lapack-dir=${lapackDIR} \ --with-clanguage=cxx \ --download-boost=1 \ --download-umfpack=1 \ --download-superlu_dist=1 \ --download-metis=1 \ --download-parmetis=1 \ --download-mumps=1 \ --download-scalapack=1 \ --download-blacs=1\ --download-triangle=1 \ --download-ctetgen=1 \ --download-chaco=1 \ --download-scientificpython \ --download-fiat \ --download-generator -- ----------------------------------------------------- Dharmendar Reddy Palle Graduate Student Microelectronics Research center, University of Texas at Austin, 10100 Burnet Road, Bldg. 160 MER 2.608F, TX 78758-4445 e-mail: dharmareddy84 at gmail.com Phone: +1-512-350-9082 United States of America. Homepage: https://webspace.utexas.edu/~dpr342 -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Mon Apr 8 08:35:45 2013 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 8 Apr 2013 08:35:45 -0500 Subject: [petsc-users] FIAT Error In-Reply-To: References: Message-ID: On Mon, Apr 8, 2013 at 8:33 AM, Dharmendar Reddy wrote: > Hello, > I am getting a FIAT related error. I am using branch > knepley/pylith. Am i missing something ? > You need your Python path pointing to the installation: export PYTHONPATH=${PYTHONPATH}:${PETSC_DIR}/${PETSC_ARCH}/lib:${PETSC_DIR}/${PETSC_ARCH}/lib/python2.6/site-packages We are planning to move all this functionality to C soon. Matt > > login3$ $PETSC_DIR/bin/pythonscripts/PetscGenerateFEMQuadrature.py dim > order dim 1 lapla > cian dim order 1 1 gradient ex62.h > Traceback (most recent call last): > File > "/home1/00924/Reddy135/LocalApps/petsc/bin/pythonscripts/PetscGenerateFEMQuadratur > e.py", line 15, in > from FIAT.reference_element import default_simplex > ImportError: No module named FIAT.reference_element > > > I have used the following configOpts > configOpts_Opt= --with-debugging=0 \ > --with-petsc-arch=${PETSC_ARCH_Opt} \ > --with-petsc-dir=${PETSC_DIR} \ > --with-mpi-dir=${mpiDir} \ > --with-scalar-type=real \ > --with-blas-lapack-dir=${lapackDIR} \ > --with-clanguage=cxx \ > --download-boost=1 \ > --download-umfpack=1 \ > --download-superlu_dist=1 \ > --download-metis=1 \ > --download-parmetis=1 \ > --download-mumps=1 \ > --download-scalapack=1 \ > --download-blacs=1\ > --download-triangle=1 \ > --download-ctetgen=1 \ > --download-chaco=1 \ > --download-scientificpython \ > --download-fiat \ > --download-generator > -- > ----------------------------------------------------- > Dharmendar Reddy Palle > Graduate Student > Microelectronics Research center, > University of Texas at Austin, > 10100 Burnet Road, Bldg. 160 > MER 2.608F, TX 78758-4445 > e-mail: dharmareddy84 at gmail.com > Phone: +1-512-350-9082 > United States of America. > Homepage: https://webspace.utexas.edu/~dpr342 > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. 
-- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From dharmareddy84 at gmail.com Mon Apr 8 08:44:53 2013 From: dharmareddy84 at gmail.com (Dharmendar Reddy) Date: Mon, 8 Apr 2013 08:44:53 -0500 Subject: [petsc-users] FIAT Error In-Reply-To: References: Message-ID: do you mean PetscGenerateFEMQuadrature.py will be moved to C ? using FIAT internally ? What should be the dim and order for P2-P1 on triangle i am trying to generate ex62.h 58: This code can be generated using 'bin/pythonscripts/PetscGenerateFEMQuadrature.py dim order dim 1 laplacian dim order 1 1 gradient src/snes/examples/tutorials/ex62.h' On Mon, Apr 8, 2013 at 8:35 AM, Matthew Knepley wrote: > On Mon, Apr 8, 2013 at 8:33 AM, Dharmendar Reddy wrote: > >> Hello, >> I am getting a FIAT related error. I am using branch >> knepley/pylith. Am i missing something ? >> > > You need your Python path pointing to the installation: > > export > PYTHONPATH=${PYTHONPATH}:${PETSC_DIR}/${PETSC_ARCH}/lib:${PETSC_DIR}/${PETSC_ARCH}/lib/python2.6/site-packages > > We are planning to move all this functionality to C soon. > > Matt > > >> >> login3$ $PETSC_DIR/bin/pythonscripts/PetscGenerateFEMQuadrature.py dim >> order dim 1 lapla >> cian dim order 1 1 gradient ex62.h >> Traceback (most recent call last): >> File >> "/home1/00924/Reddy135/LocalApps/petsc/bin/pythonscripts/PetscGenerateFEMQuadratur >> e.py", line 15, in >> from FIAT.reference_element import default_simplex >> ImportError: No module named FIAT.reference_element >> >> >> I have used the following configOpts >> configOpts_Opt= --with-debugging=0 \ >> --with-petsc-arch=${PETSC_ARCH_Opt} \ >> --with-petsc-dir=${PETSC_DIR} \ >> --with-mpi-dir=${mpiDir} \ >> --with-scalar-type=real \ >> --with-blas-lapack-dir=${lapackDIR} \ >> --with-clanguage=cxx \ >> --download-boost=1 \ >> --download-umfpack=1 \ >> --download-superlu_dist=1 \ >> --download-metis=1 \ >> --download-parmetis=1 \ >> --download-mumps=1 \ >> --download-scalapack=1 \ >> --download-blacs=1\ >> --download-triangle=1 \ >> --download-ctetgen=1 \ >> --download-chaco=1 \ >> --download-scientificpython \ >> --download-fiat \ >> --download-generator >> -- >> ----------------------------------------------------- >> Dharmendar Reddy Palle >> Graduate Student >> Microelectronics Research center, >> University of Texas at Austin, >> 10100 Burnet Road, Bldg. 160 >> MER 2.608F, TX 78758-4445 >> e-mail: dharmareddy84 at gmail.com >> Phone: +1-512-350-9082 >> United States of America. >> Homepage: https://webspace.utexas.edu/~dpr342 >> > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > -- ----------------------------------------------------- Dharmendar Reddy Palle Graduate Student Microelectronics Research center, University of Texas at Austin, 10100 Burnet Road, Bldg. 160 MER 2.608F, TX 78758-4445 e-mail: dharmareddy84 at gmail.com Phone: +1-512-350-9082 United States of America. Homepage: https://webspace.utexas.edu/~dpr342 -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Mon Apr 8 08:55:33 2013 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 8 Apr 2013 08:55:33 -0500 Subject: [petsc-users] FIAT Error In-Reply-To: References: Message-ID: On Mon, Apr 8, 2013 at 8:44 AM, Dharmendar Reddy wrote: > do you mean PetscGenerateFEMQuadrature.py will be moved to C ? using FIAT > internally ? 
> Yes. > What should be the dim and order for P2-P1 on triangle > > i am trying to generate ex62.h > > 58: This code can be generated using 'bin/pythonscripts/PetscGenerateFEMQuadrature.py dim order dim 1 laplacian dim order 1 1 gradient src/snes/examples/tutorials/ex62.h' > > You can always look in builder.py. I run the tests using it. bin/pythonscripts/PetscGenerateFEMQuadrature.py 2 2 2 1 laplacian 2 1 1 1 gradient src/snes/examples/tutorials/ex62.h Matt > On Mon, Apr 8, 2013 at 8:35 AM, Matthew Knepley wrote: > >> On Mon, Apr 8, 2013 at 8:33 AM, Dharmendar Reddy > > wrote: >> >>> Hello, >>> I am getting a FIAT related error. I am using branch >>> knepley/pylith. Am i missing something ? >>> >> >> You need your Python path pointing to the installation: >> >> export >> PYTHONPATH=${PYTHONPATH}:${PETSC_DIR}/${PETSC_ARCH}/lib:${PETSC_DIR}/${PETSC_ARCH}/lib/python2.6/site-packages >> >> We are planning to move all this functionality to C soon. >> >> Matt >> >> >>> >>> login3$ $PETSC_DIR/bin/pythonscripts/PetscGenerateFEMQuadrature.py dim >>> order dim 1 lapla >>> cian dim order 1 1 gradient ex62.h >>> Traceback (most recent call last): >>> File >>> "/home1/00924/Reddy135/LocalApps/petsc/bin/pythonscripts/PetscGenerateFEMQuadratur >>> e.py", line 15, in >>> from FIAT.reference_element import default_simplex >>> ImportError: No module named FIAT.reference_element >>> >>> >>> I have used the following configOpts >>> configOpts_Opt= --with-debugging=0 \ >>> --with-petsc-arch=${PETSC_ARCH_Opt} \ >>> --with-petsc-dir=${PETSC_DIR} \ >>> --with-mpi-dir=${mpiDir} \ >>> --with-scalar-type=real \ >>> --with-blas-lapack-dir=${lapackDIR} \ >>> --with-clanguage=cxx \ >>> --download-boost=1 \ >>> --download-umfpack=1 \ >>> --download-superlu_dist=1 \ >>> --download-metis=1 \ >>> --download-parmetis=1 \ >>> --download-mumps=1 \ >>> --download-scalapack=1 \ >>> --download-blacs=1\ >>> --download-triangle=1 \ >>> --download-ctetgen=1 \ >>> --download-chaco=1 \ >>> --download-scientificpython \ >>> --download-fiat \ >>> --download-generator >>> -- >>> ----------------------------------------------------- >>> Dharmendar Reddy Palle >>> Graduate Student >>> Microelectronics Research center, >>> University of Texas at Austin, >>> 10100 Burnet Road, Bldg. 160 >>> MER 2.608F, TX 78758-4445 >>> e-mail: dharmareddy84 at gmail.com >>> Phone: +1-512-350-9082 >>> United States of America. >>> Homepage: https://webspace.utexas.edu/~dpr342 >>> >> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> > > > > -- > ----------------------------------------------------- > Dharmendar Reddy Palle > Graduate Student > Microelectronics Research center, > University of Texas at Austin, > 10100 Burnet Road, Bldg. 160 > MER 2.608F, TX 78758-4445 > e-mail: dharmareddy84 at gmail.com > Phone: +1-512-350-9082 > United States of America. > Homepage: https://webspace.utexas.edu/~dpr342 > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From dharmareddy84 at gmail.com Mon Apr 8 08:56:56 2013 From: dharmareddy84 at gmail.com (Dharmendar Reddy) Date: Mon, 8 Apr 2013 08:56:56 -0500 Subject: [petsc-users] FIAT Error In-Reply-To: References: Message-ID: Thanks. I figured out. 
On Mon, Apr 8, 2013 at 8:55 AM, Matthew Knepley wrote: > On Mon, Apr 8, 2013 at 8:44 AM, Dharmendar Reddy wrote: > >> do you mean PetscGenerateFEMQuadrature.py will be moved to C ? using >> FIAT internally ? >> > > Yes. > > >> What should be the dim and order for P2-P1 on triangle >> >> i am trying to generate ex62.h >> >> 58: This code can be generated using 'bin/pythonscripts/PetscGenerateFEMQuadrature.py dim order dim 1 laplacian dim order 1 1 gradient src/snes/examples/tutorials/ex62.h' >> >> > You can always look in builder.py. I run the tests using it. > > bin/pythonscripts/PetscGenerateFEMQuadrature.py 2 2 2 1 laplacian 2 1 1 1 gradient src/snes/examples/tutorials/ex62.h > > Matt > > >> On Mon, Apr 8, 2013 at 8:35 AM, Matthew Knepley wrote: >> >>> On Mon, Apr 8, 2013 at 8:33 AM, Dharmendar Reddy < >>> dharmareddy84 at gmail.com> wrote: >>> >>>> Hello, >>>> I am getting a FIAT related error. I am using branch >>>> knepley/pylith. Am i missing something ? >>>> >>> >>> You need your Python path pointing to the installation: >>> >>> export >>> PYTHONPATH=${PYTHONPATH}:${PETSC_DIR}/${PETSC_ARCH}/lib:${PETSC_DIR}/${PETSC_ARCH}/lib/python2.6/site-packages >>> >>> We are planning to move all this functionality to C soon. >>> >>> Matt >>> >>> >>>> >>>> login3$ $PETSC_DIR/bin/pythonscripts/PetscGenerateFEMQuadrature.py dim >>>> order dim 1 lapla >>>> cian dim order 1 1 gradient ex62.h >>>> Traceback (most recent call last): >>>> File >>>> "/home1/00924/Reddy135/LocalApps/petsc/bin/pythonscripts/PetscGenerateFEMQuadratur >>>> e.py", line 15, in >>>> from FIAT.reference_element import default_simplex >>>> ImportError: No module named FIAT.reference_element >>>> >>>> >>>> I have used the following configOpts >>>> configOpts_Opt= --with-debugging=0 \ >>>> --with-petsc-arch=${PETSC_ARCH_Opt} \ >>>> --with-petsc-dir=${PETSC_DIR} \ >>>> --with-mpi-dir=${mpiDir} \ >>>> --with-scalar-type=real \ >>>> --with-blas-lapack-dir=${lapackDIR} \ >>>> --with-clanguage=cxx \ >>>> --download-boost=1 \ >>>> --download-umfpack=1 \ >>>> --download-superlu_dist=1 \ >>>> --download-metis=1 \ >>>> --download-parmetis=1 \ >>>> --download-mumps=1 \ >>>> --download-scalapack=1 \ >>>> --download-blacs=1\ >>>> --download-triangle=1 \ >>>> --download-ctetgen=1 \ >>>> --download-chaco=1 \ >>>> --download-scientificpython \ >>>> --download-fiat \ >>>> --download-generator >>>> -- >>>> ----------------------------------------------------- >>>> Dharmendar Reddy Palle >>>> Graduate Student >>>> Microelectronics Research center, >>>> University of Texas at Austin, >>>> 10100 Burnet Road, Bldg. 160 >>>> MER 2.608F, TX 78758-4445 >>>> e-mail: dharmareddy84 at gmail.com >>>> Phone: +1-512-350-9082 >>>> United States of America. >>>> Homepage: https://webspace.utexas.edu/~dpr342 >>>> >>> >>> >>> >>> -- >>> What most experimenters take for granted before they begin their >>> experiments is infinitely more interesting than any results to which their >>> experiments lead. >>> -- Norbert Wiener >>> >> >> >> >> -- >> ----------------------------------------------------- >> Dharmendar Reddy Palle >> Graduate Student >> Microelectronics Research center, >> University of Texas at Austin, >> 10100 Burnet Road, Bldg. 160 >> MER 2.608F, TX 78758-4445 >> e-mail: dharmareddy84 at gmail.com >> Phone: +1-512-350-9082 >> United States of America. 
>> Homepage: https://webspace.utexas.edu/~dpr342 >> > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > -- ----------------------------------------------------- Dharmendar Reddy Palle Graduate Student Microelectronics Research center, University of Texas at Austin, 10100 Burnet Road, Bldg. 160 MER 2.608F, TX 78758-4445 e-mail: dharmareddy84 at gmail.com Phone: +1-512-350-9082 United States of America. Homepage: https://webspace.utexas.edu/~dpr342 -------------- next part -------------- An HTML attachment was scrubbed... URL: From ling.zou at inl.gov Mon Apr 8 17:05:23 2013 From: ling.zou at inl.gov (Zou (Non-US), Ling) Date: Mon, 8 Apr 2013 16:05:23 -0600 Subject: [petsc-users] working with cmake Message-ID: Hi All, I am trying to use PETSc working with cmake, following the instructions found on: ================================================================ http://www.mcs.anl.gov/petsc/documentation/faq.html#cmake Can I use CMake to build my own project that depends on PETSc?Use the FindPETSc.cmake module from this repository. See the CMakeLists.txt from Dohp for example usage. ================================================================ I still have some difficulty to make it work, and I appreciate it if anyone could give me a hand to resolve the issue. It is a simple test: step 1, I simply copied the file /opt/packages/petsc/petsc-3.3-p5/src/snes/examples/tutorials/ex3.c to my working directory /projects/CTest/PETSc/ex3/ex3.c I know ex3 is working, as when I compile it under /opt/packages/petsc/petsc-3.3-p5/src/snes/examples/tutorials/ with the command line make PETSC_DIR=/opt/packages/petsc/petsc-3.3-p5 PETSC_ARCH=arch-darwin-c-debug ex3 I could run ./ex3 and I could get results like: =============================================== atol=1e-50, rtol=1e-08, stol=1e-08, maxit=50, maxf=10000 iter = 0,SNES Function norm 5.41468 iter = 1,SNES Function norm 0.295258 iter = 2,SNES Function norm 0.000450229 iter = 3,SNES Function norm 1.38967e-09 Number of SNES iterations = 3 Norm of error 1.49751e-10 Iterations 3 =============================================== step 2, I copied FindPETSc.cmake, FindPackageMultipass.cmake, ResolveCompilerPaths.cmake, CorrectWindowsPaths.cmake from https://github.com/jedbrown/cmake-modules/ to the same working directory /projects/CTest/PETSc/ex3 step 3, I copied CMakeList.txt from https://github.com/jedbrown/dohp/blob/master/CMakeLists.txt to /projects/CTest/PETSc/ex3 then modified it as attached. step 4, under /projects/CTest/PETSc/ex3 make clean rm CMakeCache.txt PETSC_DIR=/opt/packages/petsc/petsc-3.3-p5 PETSC_ARCH=arch-darwin-c-debug cmake . 
Here is the print out info I got (note there are two fails there but I ignored them...hmm...maybe I should have not ignored them?): ========================================= -- The C compiler identification is GNU -- The CXX compiler identification is GNU -- Checking whether C compiler has -isysroot -- Checking whether C compiler has -isysroot - yes -- Checking whether C compiler supports OSX deployment target flag -- Checking whether C compiler supports OSX deployment target flag - yes -- Check for working C compiler: /opt/packages/openmpi/openmpi-1.6.3/gcc-opt/bin/mpicc -- Check for working C compiler: /opt/packages/openmpi/openmpi-1.6.3/gcc-opt/bin/mpicc -- works -- Detecting C compiler ABI info -- Detecting C compiler ABI info - done -- Checking whether CXX compiler has -isysroot -- Checking whether CXX compiler has -isysroot - yes -- Checking whether CXX compiler supports OSX deployment target flag -- Checking whether CXX compiler supports OSX deployment target flag - yes -- Check for working CXX compiler: /opt/packages/openmpi/openmpi-1.6.3/gcc-opt/bin/mpicxx -- Check for working CXX compiler: /opt/packages/openmpi/openmpi-1.6.3/gcc-opt/bin/mpicxx -- works -- Detecting CXX compiler ABI info -- Detecting CXX compiler ABI info - done -- petsc_lib_dir /opt/packages/petsc/petsc-3.3-p5/arch-darwin-c-debug/lib -- Recognized PETSc install with single library for all packages -- Performing Test MULTIPASS_TEST_1_petsc_works_minimal -- Performing Test MULTIPASS_TEST_1_petsc_works_minimal - Failed -- Performing Test MULTIPASS_TEST_2_petsc_works_allincludes -- Performing Test MULTIPASS_TEST_2_petsc_works_allincludes - Failed -- Performing Test MULTIPASS_TEST_3_petsc_works_alllibraries -- Performing Test MULTIPASS_TEST_3_petsc_works_alllibraries - Success -- PETSc only need minimal includes, but requires explicit linking to all dependencies. This is expected when PETSc is built with static libraries. 
-- Found PETSc: /opt/packages/petsc/petsc-3.3-p5/arch-darwin-c-debug/include;/opt/packages/petsc/petsc-3.3-p5/include -- Performing Test dHAVE_PRAGMA_GCC -- Performing Test dHAVE_PRAGMA_GCC - Success -- Configuring done -- Generating done -- Build files have been written to: /Users/zoul/projects/CTest/PETSc/ex3 ============================================================== step 5 make However I got the error messages: ========================================== Scanning dependencies of target MyTest [100%] Building C object CMakeFiles/MyTest.dir/ex3.c.o Linking C executable MyTest Undefined symbols for architecture x86_64: "_DMCreateGlobalVector", referenced from: _main in ex3.c.o "_DMDACreate1d", referenced from: _main in ex3.c.o "_DMDAGetCorners", referenced from: _main in ex3.c.o _FormFunction in ex3.c.o _FormJacobian in ex3.c.o _PostCheck in ex3.c.o "_DMDAGetInfo", referenced from: _FormFunction in ex3.c.o _FormJacobian in ex3.c.o "_DMDAVecGetArray", referenced from: _main in ex3.c.o _FormFunction in ex3.c.o _FormJacobian in ex3.c.o _PostCheck in ex3.c.o "_DMDAVecRestoreArray", referenced from: _main in ex3.c.o _FormFunction in ex3.c.o _FormJacobian in ex3.c.o _PostCheck in ex3.c.o "_DMDestroy", referenced from: _main in ex3.c.o "_DMGetLocalVector", referenced from: _FormFunction in ex3.c.o "_DMGlobalToLocalBegin", referenced from: _FormFunction in ex3.c.o "_DMGlobalToLocalEnd", referenced from: _FormFunction in ex3.c.o "_DMRestoreLocalVector", referenced from: _FormFunction in ex3.c.o "_KSPGetIterationNumber", referenced from: _PostSetSubKSP in ex3.c.o "_KSPGetPC", referenced from: _PostSetSubKSP in ex3.c.o "_KSPSetTolerances", referenced from: _PostSetSubKSP in ex3.c.o "_MatAssemblyBegin", referenced from: _FormJacobian in ex3.c.o "_MatAssemblyEnd", referenced from: _FormJacobian in ex3.c.o "_MatCreate", referenced from: _main in ex3.c.o "_MatDestroy", referenced from: _main in ex3.c.o "_MatMPIAIJSetPreallocation", referenced from: _main in ex3.c.o "_MatSeqAIJSetPreallocation", referenced from: _main in ex3.c.o "_MatSetFromOptions", referenced from: _main in ex3.c.o "_MatSetSizes", referenced from: _main in ex3.c.o "_MatSetValues", referenced from: _FormJacobian in ex3.c.o "_PCBJacobiGetSubKSP", referenced from: _PostSetSubKSP in ex3.c.o "_PETSC_COMM_WORLD", referenced from: _main in ex3.c.o _Monitor in ex3.c.o _PostCheck in ex3.c.o _PostSetSubKSP in ex3.c.o "_PetscError", referenced from: _main in ex3.c.o _FormInitialGuess in ex3.c.o _FormFunction in ex3.c.o _FormJacobian in ex3.c.o _Monitor in ex3.c.o _PostCheck in ex3.c.o _PostSetSubKSP in ex3.c.o ... 
"_PetscFinalize", referenced from: _main in ex3.c.o "_PetscInitialize", referenced from: _main in ex3.c.o "_PetscObjectSetName", referenced from: _main in ex3.c.o "_PetscOptionsGetInt", referenced from: _main in ex3.c.o "_PetscOptionsGetReal", referenced from: _main in ex3.c.o "_PetscOptionsHasName", referenced from: _main in ex3.c.o "_PetscPrintf", referenced from: _main in ex3.c.o _Monitor in ex3.c.o _PostCheck in ex3.c.o _PostSetSubKSP in ex3.c.o "_PetscViewerDestroy", referenced from: _main in ex3.c.o "_PetscViewerDrawOpen", referenced from: _main in ex3.c.o "_SNESCreate", referenced from: _main in ex3.c.o "_SNESDestroy", referenced from: _main in ex3.c.o "_SNESGetIterationNumber", referenced from: _main in ex3.c.o _PostCheck in ex3.c.o _PostSetSubKSP in ex3.c.o "_SNESGetKSP", referenced from: _PostSetSubKSP in ex3.c.o "_SNESGetSNESLineSearch", referenced from: _main in ex3.c.o "_SNESGetSolution", referenced from: _Monitor in ex3.c.o "_SNESGetTolerances", referenced from: _main in ex3.c.o "_SNESLineSearchGetPreCheck", referenced from: _PostCheck in ex3.c.o "_SNESLineSearchGetSNES", referenced from: _PostCheck in ex3.c.o _PostSetSubKSP in ex3.c.o "_SNESLineSearchSetPostCheck", referenced from: _main in ex3.c.o "_SNESLineSearchSetPreCheck", referenced from: _main in ex3.c.o "_SNESMonitorSet", referenced from: _main in ex3.c.o "_SNESSetFromOptions", referenced from: _main in ex3.c.o "_SNESSetFunction", referenced from: _main in ex3.c.o "_SNESSetJacobian", referenced from: _main in ex3.c.o "_SNESSolve", referenced from: _main in ex3.c.o "_VecAXPY", referenced from: _main in ex3.c.o "_VecCopy", referenced from: _PostCheck in ex3.c.o "_VecDestroy", referenced from: _main in ex3.c.o "_VecDuplicate", referenced from: _main in ex3.c.o "_VecNorm", referenced from: _main in ex3.c.o "_VecSet", referenced from: _FormInitialGuess in ex3.c.o "_VecView", referenced from: _Monitor in ex3.c.o "_petscstack", referenced from: _main in ex3.c.o _FormInitialGuess in ex3.c.o _FormFunction in ex3.c.o _FormJacobian in ex3.c.o _Monitor in ex3.c.o _PreCheck in ex3.c.o _PostCheck in ex3.c.o ... ld: symbol(s) not found for architecture x86_64 collect2: error: ld returned 1 exit status make[2]: *** [MyTest] Error 1 make[1]: *** [CMakeFiles/MyTest.dir/all] Error 2 make: *** [all] Error 2 ========================================== Thank you, Ling -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- ####################################################################################### # The original CMakeList.txt file is downloaded from: # https://github.com/jedbrown/dohp/blob/master/CMakeLists.txt # It was modified for my own project. ####################################################################################### cmake_minimum_required (VERSION 2.8) project (MyTest) list (APPEND CMAKE_MODULE_PATH ${CMAKE_SOURCE_DIR}) # Normally PETSc is built with MPI, if not, use CC=mpicc, etc find_package (PETSc REQUIRED) include (CheckCSourceCompiles) # The name is misleading, this also tries to link check_c_source_compiles (" #define PragmaQuote(a) _Pragma(#a) #define PragmaGCC(a) PragmaQuote(GCC a) int main(int argc,char *argv[]) { PragmaGCC(diagnostic ignored \"-Wconversion\") char c = (int)argv[0][0] + argv[argc-1][0]; return c; }" dHAVE_PRAGMA_GCC) # LZ: Adds flags to the compiler command line. I probably don't need it atm. 
add_definitions (-std=c99) # Essential: include our directories first otherwise we can get internal headers from some installed path include_directories (${PETSC_INCLUDES}) add_definitions (${PETSC_DEFINITIONS}) set (CMAKE_INSTALL_RPATH "${CMAKE_INSTALL_PREFIX}/lib") set (CMAKE_INSTALL_RPATH_USE_LINK_PATH TRUE) #add_subdirectory (include) #add_subdirectory (src) FILE (GLOB SourceFileList *.C) ADD_EXECUTABLE (MyTest ${SourceFileList}) From jedbrown at mcs.anl.gov Mon Apr 8 17:17:53 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Mon, 08 Apr 2013 17:17:53 -0500 Subject: [petsc-users] working with cmake In-Reply-To: References: Message-ID: <87vc7wbram.fsf@mcs.anl.gov> "Zou (Non-US), Ling" writes: > -- Recognized PETSc install with single library for all packages > -- Performing Test MULTIPASS_TEST_1_petsc_works_minimal > -- Performing Test MULTIPASS_TEST_1_petsc_works_minimal - Failed > -- Performing Test MULTIPASS_TEST_2_petsc_works_allincludes > -- Performing Test MULTIPASS_TEST_2_petsc_works_allincludes - Failed > -- Performing Test MULTIPASS_TEST_3_petsc_works_alllibraries > -- Performing Test MULTIPASS_TEST_3_petsc_works_alllibraries - Success > -- PETSc only need minimal includes, but requires explicit linking to all > dependencies. This is expected when PETSc is built with static libraries. This is okay, and we can do no better if you built with static libraries. > step 5 > make > However I got the error messages: > ========================================== > Scanning dependencies of target MyTest > [100%] Building C object CMakeFiles/MyTest.dir/ex3.c.o > Linking C executable MyTest > Undefined symbols for architecture x86_64: > "_DMCreateGlobalVector", referenced from: > _main in ex3.c.o Your CMakeLists.txt does not actually link any libraries. Try the file below: cmake_minimum_required (VERSION 2.8) project (MyTest) list (APPEND CMAKE_MODULE_PATH ${CMAKE_SOURCE_DIR}) # Normally PETSc is built with MPI, if not, use CC=mpicc, etc find_package (PETSc REQUIRED) # Essential: include our directories first otherwise we can get internal headers from some installed path include_directories (${PETSC_INCLUDES}) add_definitions (${PETSC_DEFINITIONS}) set (CMAKE_INSTALL_RPATH "${CMAKE_INSTALL_PREFIX}/lib") set (CMAKE_INSTALL_RPATH_USE_LINK_PATH TRUE) FILE (GLOB SourceFileList *.C) ADD_EXECUTABLE (MyTest ${SourceFileList}) TARGET_LINK_LIBRARIES(MyTest ${PETSC_LIBRARIES}) # <---- new line From jedbrown at mcs.anl.gov Tue Apr 9 00:02:45 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Tue, 09 Apr 2013 00:02:45 -0500 Subject: [petsc-users] Compiling Error of petsc-dev In-Reply-To: <91b62c.e68c.13dec5024ca.Coremail.zyzhang@nuaa.edu.cn> References: <87li8tekfv.fsf@mcs.anl.gov> <126726b.e0d5.13de7a945e2.Coremail.zyzhang@nuaa.edu.cn> <91b62c.e68c.13dec5024ca.Coremail.zyzhang@nuaa.edu.cn> Message-ID: <87sj308fey.fsf@mcs.anl.gov> Zhang writes: > Thank you Jed. I tried as your suggested. > > Here I am using CUDA 5.0, what version is supported by the current petsc-dev? I'm using CUDA 5, CUDA 4 should also work. > But Other error found here. Any help is welcome. > > Best regards, > > Zhenyu > > aijcusparse.cu(290): error: argument of type "cusparseDiagType_t" is incompatible with parameter of type "cusparseOperation_t" This is probably caused by an old version of txpetscgpu, which has been upgraded in 'next'. 
If you used --download-txpetscgpu=1, you can update to the new version using: $ rm -r externalpackages/txpetscgpu $PETSC_ARCH/include/txpetscgpu $ git pull $ make reconfigure all > plex.c: In function ?PetscErrorCode DMPlexGenerate_CTetgen(DM, PetscBool, _p_DM**)?: > plex.c:3663:66: error: invalid conversion from ?const int*? to ?int*? [-fpermissive] > plex.c:2970:23: error: initializing argument 3 of ?PetscErrorCode DMPlexInvertCells_Internal(PetscInt, PetscInt, int*)? [-fpermissive] > plex.c: In function ?PetscErrorCode DMPlexRefine_CTetgen(DM, PetscReal*, _p_DM**)?: > plex.c:3798:66: error: invalid conversion from ?const int*? to ?int*? [-fpermissive] > plex.c:2970:23: error: initializing argument 3 of ?PetscErrorCode DMPlexInvertCells_Internal(PetscInt, PetscInt, int*)? [-fpermissive] > /usr/bin/ar: plex.o: No such file or directory This was improper use of const, not caught perhaps because ctetgen was not being tested. I've fixed the problem in 'next'. Matt, please merge my branch 'jed/fix-plex-const' into 'knepley/plex' at your convenience. From member at linkedin.com Tue Apr 9 08:33:31 2013 From: member at linkedin.com (Yixun Liu) Date: Tue, 9 Apr 2013 13:33:31 +0000 (UTC) Subject: [petsc-users] Invitation to connect on LinkedIn Message-ID: <1991736718.24845885.1365514411213.JavaMail.app@ela4-app1201.prod> LinkedIn ------------ Yixun Liu requested to add you as a connection on LinkedIn: ------------------------------------------ Matt, I'd like to add you to my professional network on LinkedIn. - Yixun Accept invitation from Yixun Liu http://www.linkedin.com/e/-r9oj6w-hfb40uyf-6p/NPBLyes6_CJvfaFX95qTY0Fn_yVIxe9EWtXp/blk/I357989890_125/e39SrCAJoS5vrCAJoyRJtCVFnSRJrScJr6RBfnhv9ClRsDgZp6lQs6lzoQ5AomZIpn8_dj8NnP0Ve3AUejsRcQALljtBemgNukALdzARdz4PcP8Ocz4LrCBxbOYWrSlI/eml-comm_invm-b-in_ac-inv28/?hs=false&tok=2_YBNCIASJh5I1 View profile of Yixun Liu http://www.linkedin.com/e/-r9oj6w-hfb40uyf-6p/rso/244467618/sXDl/name/129573335_I357989890_125/?hs=false&tok=11AWxAI-SJh5I1 ------------------------------------------ You are receiving Invitation emails. This email was intended for Matt Funk. Learn why this is included: http://www.linkedin.com/e/-r9oj6w-hfb40uyf-6p/plh/http%3A%2F%2Fhelp%2Elinkedin%2Ecom%2Fapp%2Fanswers%2Fdetail%2Fa_id%2F4788/-GXI/?hs=false&tok=08JSDgUeWJh5I1 (c) 2012, LinkedIn Corporation. 2029 Stierlin Ct, Mountain View, CA 94043, USA. -------------- next part -------------- An HTML attachment was scrubbed... URL: From stali at geology.wisc.edu Tue Apr 9 09:33:51 2013 From: stali at geology.wisc.edu (Tabrez Ali) Date: Tue, 09 Apr 2013 09:33:51 -0500 Subject: [petsc-users] nightly tarball Message-ID: <516426CF.8010001@geology.wisc.edu> Hello The 105MB nightly tarball available at http://ftp.mcs.anl.gov/pub/petsc/petsc-dev.tar.gz doesn't seem to work (see below) for me. How can I download the the latest petsc-dev (including the build-system) without all the commit history? Also is v3.4 coming anytime soon? Thanks in advance. 
Tabrez stali at i5:/tmp/petsc-dev$ ./configure --with-mpi-dir=/opt/mpich2-gcc --with-metis=1 --download-metis=1 --with-debugging=0 =============================================================================== Configuring PETSc to compile on your system =============================================================================== TESTING: configureInstallationMethod from PETSc.utilities.petscdir(config/PETSc/utilities/petscdir.py:84) ******************************************************************************* UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log for details): ------------------------------------------------------------------------------- Your petsc-dev directory is broken, remove the entire directory and start all over again ******************************************************************************* From balay at mcs.anl.gov Tue Apr 9 09:37:59 2013 From: balay at mcs.anl.gov (Satish Balay) Date: Tue, 9 Apr 2013 09:37:59 -0500 (CDT) Subject: [petsc-users] nightly tarball In-Reply-To: <516426CF.8010001@geology.wisc.edu> References: <516426CF.8010001@geology.wisc.edu> Message-ID: Ah - the .git stuff is in the tarball. try 'rm -rf .git' and redo configure Satish On Tue, 9 Apr 2013, Tabrez Ali wrote: > Hello > > The 105MB nightly tarball available at > http://ftp.mcs.anl.gov/pub/petsc/petsc-dev.tar.gz doesn't seem to work (see > below) for me. How can I download the the latest petsc-dev (including the > build-system) without all the commit history? > > Also is v3.4 coming anytime soon? > > Thanks in advance. > > Tabrez > > stali at i5:/tmp/petsc-dev$ ./configure --with-mpi-dir=/opt/mpich2-gcc > --with-metis=1 --download-metis=1 --with-debugging=0 > =============================================================================== > Configuring PETSc to compile on your system > =============================================================================== > TESTING: configureInstallationMethod from > PETSc.utilities.petscdir(config/PETSc/utilities/petscdir.py:84) > ******************************************************************************* > UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log for > details): > ------------------------------------------------------------------------------- > Your petsc-dev directory is broken, remove the entire directory and start all > over again > ******************************************************************************* > > From jedbrown at mcs.anl.gov Tue Apr 9 09:46:05 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Tue, 09 Apr 2013 09:46:05 -0500 Subject: [petsc-users] nightly tarball In-Reply-To: <516426CF.8010001@geology.wisc.edu> References: <516426CF.8010001@geology.wisc.edu> Message-ID: <87sj2z7oeq.fsf@mcs.anl.gov> Tabrez Ali writes: > Hello > > The 105MB nightly tarball available at > http://ftp.mcs.anl.gov/pub/petsc/petsc-dev.tar.gz doesn't seem to work > (see below) for me. How can I download the the latest petsc-dev > (including the build-system) without all the commit history? If you want a small download of latest petsc-dev, I recommend $ git clone --depth 1 https://bitbucket.org/petsc/petsc This is fast, downloads only 10 MB, and you can update later using 'git pull'. The tarball accidentally included the entire Git history, which is why it was so large. Satish will fix this. > Also is v3.4 coming anytime soon? Yes, we're tying up loose ends and will probably have an RC in a few days. 
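(A first build from such a shallow clone can reuse the configure line quoted above, e.g. cd petsc && ./configure --with-mpi-dir=/opt/mpich2-gcc --with-metis=1 --download-metis=1 --with-debugging=0 && make all, and later updates are then just git pull followed by make reconfigure all.)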
From doougsini at gmail.com Wed Apr 10 09:01:23 2013 From: doougsini at gmail.com (Seungbum Koo) Date: Wed, 10 Apr 2013 09:01:23 -0500 Subject: [petsc-users] What version of 'MPICH' and 'HDF5' does PETSc download? Message-ID: What version of mpich and hdf5 does petsc download when configuring with --download-mpich and --download-hdf5 options? Seungbum -------------- next part -------------- An HTML attachment was scrubbed... URL: From hzhang at mcs.anl.gov Wed Apr 10 09:04:42 2013 From: hzhang at mcs.anl.gov (Hong Zhang) Date: Wed, 10 Apr 2013 09:04:42 -0500 Subject: [petsc-users] What version of 'MPICH' and 'HDF5' does PETSc download? In-Reply-To: References: Message-ID: hdf5-1.8.8-p1 Hong On Wed, Apr 10, 2013 at 9:01 AM, Seungbum Koo wrote: > What version of mpich and hdf5 does petsc download when configuring with > --download-mpich and --download-hdf5 options? > > Seungbum > -------------- next part -------------- An HTML attachment was scrubbed... URL: From doougsini at gmail.com Wed Apr 10 09:09:43 2013 From: doougsini at gmail.com (Seungbum Koo) Date: Wed, 10 Apr 2013 09:09:43 -0500 Subject: [petsc-users] What version of 'MPICH' and 'HDF5' does PETSc download? In-Reply-To: References: Message-ID: Does it work with mpich-3.0.3 and hdf5-1.8.10-p1? Seungbum On Wed, Apr 10, 2013 at 9:04 AM, Hong Zhang wrote: > hdf5-1.8.8-p1 > > Hong > > > On Wed, Apr 10, 2013 at 9:01 AM, Seungbum Koo wrote: > >> What version of mpich and hdf5 does petsc download when configuring with >> --download-mpich and --download-hdf5 options? >> >> Seungbum >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Wed Apr 10 09:11:32 2013 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 10 Apr 2013 09:11:32 -0500 Subject: [petsc-users] What version of 'MPICH' and 'HDF5' does PETSc download? In-Reply-To: References: Message-ID: On Wed, Apr 10, 2013 at 9:09 AM, Seungbum Koo wrote: > Does it work with mpich-3.0.3 and hdf5-1.8.10-p1? > It will work with that MPICH, but we have no idea whether HDF5 made interface changes. Try it and see. Matt > Seungbum > > > On Wed, Apr 10, 2013 at 9:04 AM, Hong Zhang wrote: > >> hdf5-1.8.8-p1 >> >> Hong >> >> >> On Wed, Apr 10, 2013 at 9:01 AM, Seungbum Koo wrote: >> >>> What version of mpich and hdf5 does petsc download when configuring with >>> --download-mpich and --download-hdf5 options? >>> >>> Seungbum >>> >> >> > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From balay at mcs.anl.gov Wed Apr 10 09:24:15 2013 From: balay at mcs.anl.gov (Satish Balay) Date: Wed, 10 Apr 2013 09:24:15 -0500 (CDT) Subject: [petsc-users] What version of 'MPICH' and 'HDF5' does PETSc download? In-Reply-To: References: Message-ID: depends upon the version of petsc you have. check MPI.py, hdf.py files in your petsc source tree. Satish On Wed, 10 Apr 2013, Seungbum Koo wrote: > What version of mpich and hdf5 does petsc download when configuring with > --download-mpich and --download-hdf5 options? > > Seungbum > From zyzhang at nuaa.edu.cn Wed Apr 10 09:07:04 2013 From: zyzhang at nuaa.edu.cn (Zhang) Date: Wed, 10 Apr 2013 22:07:04 +0800 (CST) Subject: [petsc-users] Problem with petsc-dev+openmpi Message-ID: <121b820.f596.13df4456f7b.Coremail.zyzhang@nuaa.edu.cn> Hi, I installed petsc-dev. Everything thing's fine when compiling it. 
But problems appeared with the downloaded openmpi-1.6.3 when I make /ksp/ex2 and ran by mpirun -n 2 ex2 Errors as follows, It looks like orte_init failed for some reason; your parallel process is likely to abort. There are many reasons that a parallel process can fail during orte_init; some of which are due to configuration or environment problems. This failure appears to be an internal failure; here's some additional information (which may only be relevant to an Open MPI developer): orte_util_nidmap_init failed --> Returned value Data unpack would read past end of buffer (-26) instead of ORTE_SUCCESS -------------------------------------------------------------------------- [ubuntu:03237] [[3831,1],0] ORTE_ERROR_LOG: Data unpack would read past end of buffer in file util/nidmap.c at line 118 [ubuntu:03237] [[3831,1],0] ORTE_ERROR_LOG: Data unpack would read past end of buffer in file ess_env_module.c at line 174 -------------------------------------------------------------------------- It looks like orte_init failed for some reason; your parallel process is likely to abort. There are many reasons that a parallel process can fail during orte_init; some of which are due to configuration or environment problems. This failure appears to be an internal failure; here's some additional information (which may only be relevant to an Open MPI developer): orte_ess_set_name failed --> Returned value Data unpack would read past end of buffer (-26) instead of ORTE_SUCCESS -------------------------------------------------------------------------- [ubuntu:03237] [[3831,1],0] ORTE_ERROR_LOG: Data unpack would read past end of buffer in file runtime/orte_init.c at line 128 -------------------------------------------------------------------------- It looks like MPI_INIT failed for some reason; your parallel process is likely to abort. There are many reasons that a parallel process can fail during MPI_INIT; some of which are due to configuration or environment problems. This failure appears to be an internal failure; here's some additional information (which may only be relevant to an Open MPI developer): ompi_mpi_init: orte_init failed --> Returned "Data unpack would read past end of buffer" (-26) instead of "Success" (0) -------------------------------------------------------------------------- [ubuntu:3237] *** An error occurred in MPI_Init_thread [ubuntu:3237] *** on a NULL communicator [ubuntu:3237] *** Unknown error [ubuntu:3237] *** MPI_ERRORS_ARE_FATAL: your MPI job will now abort -------------------------------------------------------------------------- An MPI process is aborting at a time when it cannot guarantee that all of its peer processes in the job will be killed properly. You should double check that everything has shut down cleanly. Reason: Before MPI_INIT completed Local host: ubuntu PID: 3237 -------------------------------------------------------------------------- [ubuntu:03240] [[3831,1],1] ORTE_ERROR_LOG: Data unpack would read past end of buffer in file util/nidmap.c at line 118 [ubuntu:03240] [[3831,1],1] ORTE_ERROR_LOG: Data unpack would read past end of buffer in file ess_env_module.c at line 174 [ubuntu:03240] [[3831,1],1] ORTE_ERROR_LOG: Data unpack would read past end of buffer in file runtime/orte_init.c at line 128 -------------------------------------------------------------------------- mpirun has exited due to process rank 0 with PID 3237 on node ubuntu exiting without calling "finalize". 
This may have caused other processes in the application to be terminated by signals sent by mpirun (as reported here). -------------------------------------------------------------------------- [ubuntu:03236] 1 more process has sent help message help-orte-runtime.txt / orte_init:startup:internal-failure [ubuntu:03236] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages [ubuntu:03236] 1 more process has sent help message help-orte-runtime / orte_init:startup:internal-failure [ubuntu:03236] 1 more process has sent help message help-mpi-runtime / mpi_init:startup:internal-failure [ubuntu:03236] 1 more process has sent help message help-mpi-errors.txt / mpi_errors_are_fatal unknown handle [ubuntu:03236] 1 more process has sent help message help-mpi-runtime.txt / ompi mpi abort:cannot guarantee all killed Please help solve this, and many thanks Zhenyu -------------- next part -------------- An HTML attachment was scrubbed... URL: From balay at mcs.anl.gov Wed Apr 10 09:36:40 2013 From: balay at mcs.anl.gov (Satish Balay) Date: Wed, 10 Apr 2013 09:36:40 -0500 (CDT) Subject: [petsc-users] Problem with petsc-dev+openmpi In-Reply-To: <121b820.f596.13df4456f7b.Coremail.zyzhang@nuaa.edu.cn> References: <121b820.f596.13df4456f7b.Coremail.zyzhang@nuaa.edu.cn> Message-ID: Which branch of petsc-dev? Please send relavent logs to petsc-maint. Satish On Wed, 10 Apr 2013, Zhang wrote: > Hi, > > I installed petsc-dev. Everything thing's fine when compiling it. > > But problems appeared with the downloaded openmpi-1.6.3 when I make /ksp/ex2 and ran by > > > mpirun -n 2 ex2 > > Errors as follows, > > It looks like orte_init failed for some reason; your parallel process is > likely to abort. There are many reasons that a parallel process can > fail during orte_init; some of which are due to configuration or > environment problems. This failure appears to be an internal failure; > here's some additional information (which may only be relevant to an > Open MPI developer): > > orte_util_nidmap_init failed > --> Returned value Data unpack would read past end of buffer (-26) instead of ORTE_SUCCESS > -------------------------------------------------------------------------- > [ubuntu:03237] [[3831,1],0] ORTE_ERROR_LOG: Data unpack would read past end of buffer in file util/nidmap.c at line 118 > [ubuntu:03237] [[3831,1],0] ORTE_ERROR_LOG: Data unpack would read past end of buffer in file ess_env_module.c at line 174 > -------------------------------------------------------------------------- > It looks like orte_init failed for some reason; your parallel process is > likely to abort. There are many reasons that a parallel process can > fail during orte_init; some of which are due to configuration or > environment problems. This failure appears to be an internal failure; > here's some additional information (which may only be relevant to an > Open MPI developer): > > orte_ess_set_name failed > --> Returned value Data unpack would read past end of buffer (-26) instead of ORTE_SUCCESS > -------------------------------------------------------------------------- > [ubuntu:03237] [[3831,1],0] ORTE_ERROR_LOG: Data unpack would read past end of buffer in file runtime/orte_init.c at line 128 > -------------------------------------------------------------------------- > It looks like MPI_INIT failed for some reason; your parallel process is > likely to abort. 
There are many reasons that a parallel process can > fail during MPI_INIT; some of which are due to configuration or environment > problems. This failure appears to be an internal failure; here's some > additional information (which may only be relevant to an Open MPI > developer): > > ompi_mpi_init: orte_init failed > --> Returned "Data unpack would read past end of buffer" (-26) instead of "Success" (0) > -------------------------------------------------------------------------- > [ubuntu:3237] *** An error occurred in MPI_Init_thread > [ubuntu:3237] *** on a NULL communicator > [ubuntu:3237] *** Unknown error > [ubuntu:3237] *** MPI_ERRORS_ARE_FATAL: your MPI job will now abort > -------------------------------------------------------------------------- > An MPI process is aborting at a time when it cannot guarantee that all > of its peer processes in the job will be killed properly. You should > double check that everything has shut down cleanly. > > Reason: Before MPI_INIT completed > Local host: ubuntu > PID: 3237 > -------------------------------------------------------------------------- > [ubuntu:03240] [[3831,1],1] ORTE_ERROR_LOG: Data unpack would read past end of buffer in file util/nidmap.c at line 118 > [ubuntu:03240] [[3831,1],1] ORTE_ERROR_LOG: Data unpack would read past end of buffer in file ess_env_module.c at line 174 > [ubuntu:03240] [[3831,1],1] ORTE_ERROR_LOG: Data unpack would read past end of buffer in file runtime/orte_init.c at line 128 > -------------------------------------------------------------------------- > mpirun has exited due to process rank 0 with PID 3237 on > node ubuntu exiting without calling "finalize". This may > have caused other processes in the application to be > terminated by signals sent by mpirun (as reported here). > -------------------------------------------------------------------------- > [ubuntu:03236] 1 more process has sent help message help-orte-runtime.txt / orte_init:startup:internal-failure > [ubuntu:03236] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages > [ubuntu:03236] 1 more process has sent help message help-orte-runtime / orte_init:startup:internal-failure > [ubuntu:03236] 1 more process has sent help message help-mpi-runtime / mpi_init:startup:internal-failure > [ubuntu:03236] 1 more process has sent help message help-mpi-errors.txt / mpi_errors_are_fatal unknown handle > [ubuntu:03236] 1 more process has sent help message help-mpi-runtime.txt / ompi mpi abort:cannot guarantee all killed > > > Please help solve this, and many thanks > > Zhenyu > From doougsini at gmail.com Wed Apr 10 10:51:39 2013 From: doougsini at gmail.com (Seungbum Koo) Date: Wed, 10 Apr 2013 10:51:39 -0500 Subject: [petsc-users] What version of 'MPICH' and 'HDF5' does PETSc download? In-Reply-To: References: Message-ID: Oh. Thank you. I installed PETSc-3.3-p6 and it downloaded mpich2-1.4.1p1 and hdf5-1.8.8-p1. Seungbum On Wed, Apr 10, 2013 at 9:24 AM, Satish Balay wrote: > depends upon the version of petsc you have. check MPI.py, hdf.py files in > your > petsc source tree. > > Satish > > On Wed, 10 Apr 2013, Seungbum Koo wrote: > > > What version of mpich and hdf5 does petsc download when configuring with > > --download-mpich and --download-hdf5 options? > > > > Seungbum > > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From doougsini at gmail.com Wed Apr 10 16:55:41 2013 From: doougsini at gmail.com (Seungbum Koo) Date: Wed, 10 Apr 2013 16:55:41 -0500 Subject: [petsc-users] cannot find 'libz.a' when configuring Message-ID: Hi. I tried to add hdf5 when configuring. HDF5-1.8.9 is currently installed with szip-2.1 and zlib-1.2.7. It stops with message ******************************************************************************* UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log for details): ------------------------------------------------------------------------------- Compression library [libz.a or equivalent] not found ******************************************************************************* What I don't understand is that 'libz.a' file exists in '/opt/zlib-1.2.7/lib'. What should I do? Seungbum -------------- next part -------------- An HTML attachment was scrubbed... URL: From balay at mcs.anl.gov Wed Apr 10 17:21:38 2013 From: balay at mcs.anl.gov (Satish Balay) Date: Wed, 10 Apr 2013 17:21:38 -0500 (CDT) Subject: [petsc-users] cannot find 'libz.a' when configuring In-Reply-To: References: Message-ID: try using configure option LIBS="-L/opt/zlib-1.2.7/lib -lz' Satish On Wed, 10 Apr 2013, Seungbum Koo wrote: > Hi. I tried to add hdf5 when configuring. HDF5-1.8.9 is currently installed > with szip-2.1 and zlib-1.2.7. > > It stops with message > > ******************************************************************************* > UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log for > details): > ------------------------------------------------------------------------------- > Compression library [libz.a or equivalent] not found > ******************************************************************************* > > What I don't understand is that 'libz.a' file exists in > '/opt/zlib-1.2.7/lib'. What should I do? > > Seungbum > From doougsini at gmail.com Wed Apr 10 18:59:02 2013 From: doougsini at gmail.com (Seungbum Koo) Date: Wed, 10 Apr 2013 18:59:02 -0500 Subject: [petsc-users] cannot find 'libz.a' when configuring In-Reply-To: References: Message-ID: Thank you. It fixed the problem. But it stopped with other message, saying that ******************************************************************************* UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log for details): ------------------------------------------------------------------------------- --with-hdf5-dir=/opt/hdf5-1.8.9 did not work ******************************************************************************* I asked up to which version of HDF5 does PETSc support this morning because of this problem, thinking if this is a compatibility reason or some sort similar. I tried to attach the configure.log file but there is 512kb limit to this mailing list. Any suggestion will help. Thanks. Seungbum On Wed, Apr 10, 2013 at 5:21 PM, Satish Balay wrote: > try using configure option LIBS="-L/opt/zlib-1.2.7/lib -lz' > > Satish > > On Wed, 10 Apr 2013, Seungbum Koo wrote: > > > Hi. I tried to add hdf5 when configuring. HDF5-1.8.9 is currently > installed > > with szip-2.1 and zlib-1.2.7. 
> > > > It stops with message > > > > > ******************************************************************************* > > UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log for > > details): > > > ------------------------------------------------------------------------------- > > Compression library [libz.a or equivalent] not found > > > ******************************************************************************* > > > > What I don't understand is that 'libz.a' file exists in > > '/opt/zlib-1.2.7/lib'. What should I do? > > > > Seungbum > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From balay at mcs.anl.gov Wed Apr 10 19:11:23 2013 From: balay at mcs.anl.gov (Satish Balay) Date: Wed, 10 Apr 2013 19:11:23 -0500 (CDT) Subject: [petsc-users] cannot find 'libz.a' when configuring In-Reply-To: References: Message-ID: send logs to petsc-maint Satish On Wed, 10 Apr 2013, Seungbum Koo wrote: > Thank you. It fixed the problem. But it stopped with other message, saying > that > > ******************************************************************************* > UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log for > details): > ------------------------------------------------------------------------------- > --with-hdf5-dir=/opt/hdf5-1.8.9 did not work > ******************************************************************************* > > I asked up to which version of HDF5 does PETSc support this morning because > of this problem, thinking if this is a compatibility reason or some sort > similar. > > I tried to attach the configure.log file but there is 512kb limit to this > mailing list. > > Any suggestion will help. > > Thanks. > > Seungbum > > > On Wed, Apr 10, 2013 at 5:21 PM, Satish Balay wrote: > > > try using configure option LIBS="-L/opt/zlib-1.2.7/lib -lz' > > > > Satish > > > > On Wed, 10 Apr 2013, Seungbum Koo wrote: > > > > > Hi. I tried to add hdf5 when configuring. HDF5-1.8.9 is currently > > installed > > > with szip-2.1 and zlib-1.2.7. > > > > > > It stops with message > > > > > > > > ******************************************************************************* > > > UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log for > > > details): > > > > > ------------------------------------------------------------------------------- > > > Compression library [libz.a or equivalent] not found > > > > > ******************************************************************************* > > > > > > What I don't understand is that 'libz.a' file exists in > > > '/opt/zlib-1.2.7/lib'. What should I do? > > > > > > Seungbum > > > > > > > > From balay at mcs.anl.gov Wed Apr 10 19:37:31 2013 From: balay at mcs.anl.gov (Satish Balay) Date: Wed, 10 Apr 2013 19:37:31 -0500 (CDT) Subject: [petsc-users] cannot find 'libz.a' when configuring In-Reply-To: References: Message-ID: I could retrieve it from the one you attempted to send to petsc-users. And see: >>>>>> Possible ERROR while running linker: /opt/hdf5-1.8.9/lib/libhdf5.a(H5Z.o): In function `H5Zunregister': H5Z.c:(.text+0xa9b): undefined reference to `SZ_encoder_enabled' /opt/hdf5-1.8.9/lib/libhdf5.a(H5Z.o): In function `H5Z_unregister': H5Z.c:(.text+0x1872): undefined reference to `SZ_encoder_enabled' /opt/hdf5-1.8.9/lib/libhdf5.a(H5Z.o): In function `H5Zfilter_avail': H5Z.c:(.text+0x2592): undefined reference to `SZ_encoder_enabled' <<<<<<< You did mention building hdf4 with szip-2.1. 
So perhaps you sould specify hdf5 - and all the dependent libraries using --with-hdf5-include --with-hdf5-lib options [and not --with-hdf5 option] Satish On Wed, 10 Apr 2013, Satish Balay wrote: > send logs to petsc-maint > > Satish > > On Wed, 10 Apr 2013, Seungbum Koo wrote: > > > Thank you. It fixed the problem. But it stopped with other message, saying > > that > > > > ******************************************************************************* > > UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log for > > details): > > ------------------------------------------------------------------------------- > > --with-hdf5-dir=/opt/hdf5-1.8.9 did not work > > ******************************************************************************* > > > > I asked up to which version of HDF5 does PETSc support this morning because > > of this problem, thinking if this is a compatibility reason or some sort > > similar. > > > > I tried to attach the configure.log file but there is 512kb limit to this > > mailing list. > > > > Any suggestion will help. > > > > Thanks. > > > > Seungbum > > > > > > On Wed, Apr 10, 2013 at 5:21 PM, Satish Balay wrote: > > > > > try using configure option LIBS="-L/opt/zlib-1.2.7/lib -lz' > > > > > > Satish > > > > > > On Wed, 10 Apr 2013, Seungbum Koo wrote: > > > > > > > Hi. I tried to add hdf5 when configuring. HDF5-1.8.9 is currently > > > installed > > > > with szip-2.1 and zlib-1.2.7. > > > > > > > > It stops with message > > > > > > > > > > > ******************************************************************************* > > > > UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log for > > > > details): > > > > > > > ------------------------------------------------------------------------------- > > > > Compression library [libz.a or equivalent] not found > > > > > > > ******************************************************************************* > > > > > > > > What I don't understand is that 'libz.a' file exists in > > > > '/opt/zlib-1.2.7/lib'. What should I do? > > > > > > > > Seungbum > > > > > > > > > > > > > > From doougsini at gmail.com Thu Apr 11 09:30:34 2013 From: doougsini at gmail.com (Seungbum Koo) Date: Thu, 11 Apr 2013 09:30:34 -0500 Subject: [petsc-users] cannot find 'libz.a' when configuring In-Reply-To: References: Message-ID: Thank you, it worked. --with-hdf5-include=' ' and so for --with-hdf5-lib made configuration and installation successful. Seungbum On Wed, Apr 10, 2013 at 7:37 PM, Satish Balay wrote: > I could retrieve it from the one you attempted to send to petsc-users. And > see: > > >>>>>> > Possible ERROR while running linker: /opt/hdf5-1.8.9/lib/libhdf5.a(H5Z.o): > In function `H5Zunregister': > H5Z.c:(.text+0xa9b): undefined reference to `SZ_encoder_enabled' > /opt/hdf5-1.8.9/lib/libhdf5.a(H5Z.o): In function `H5Z_unregister': > H5Z.c:(.text+0x1872): undefined reference to `SZ_encoder_enabled' > /opt/hdf5-1.8.9/lib/libhdf5.a(H5Z.o): In function `H5Zfilter_avail': > H5Z.c:(.text+0x2592): undefined reference to `SZ_encoder_enabled' > <<<<<<< > > You did mention building hdf4 with szip-2.1. So perhaps you sould > specify hdf5 - and all the dependent libraries using > --with-hdf5-include --with-hdf5-lib options [and not --with-hdf5 > option] > > Satish > > > On Wed, 10 Apr 2013, Satish Balay wrote: > > > send logs to petsc-maint > > > > Satish > > > > On Wed, 10 Apr 2013, Seungbum Koo wrote: > > > > > Thank you. It fixed the problem. 
But it stopped with other message, > saying > > > that > > > > > > > ******************************************************************************* > > > UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log > for > > > details): > > > > ------------------------------------------------------------------------------- > > > --with-hdf5-dir=/opt/hdf5-1.8.9 did not work > > > > ******************************************************************************* > > > > > > I asked up to which version of HDF5 does PETSc support this morning > because > > > of this problem, thinking if this is a compatibility reason or some > sort > > > similar. > > > > > > I tried to attach the configure.log file but there is 512kb limit to > this > > > mailing list. > > > > > > Any suggestion will help. > > > > > > Thanks. > > > > > > Seungbum > > > > > > > > > On Wed, Apr 10, 2013 at 5:21 PM, Satish Balay > wrote: > > > > > > > try using configure option LIBS="-L/opt/zlib-1.2.7/lib -lz' > > > > > > > > Satish > > > > > > > > On Wed, 10 Apr 2013, Seungbum Koo wrote: > > > > > > > > > Hi. I tried to add hdf5 when configuring. HDF5-1.8.9 is currently > > > > installed > > > > > with szip-2.1 and zlib-1.2.7. > > > > > > > > > > It stops with message > > > > > > > > > > > > > > > ******************************************************************************* > > > > > UNABLE to CONFIGURE with GIVEN OPTIONS (see > configure.log for > > > > > details): > > > > > > > > > > ------------------------------------------------------------------------------- > > > > > Compression library [libz.a or equivalent] not found > > > > > > > > > > ******************************************************************************* > > > > > > > > > > What I don't understand is that 'libz.a' file exists in > > > > > '/opt/zlib-1.2.7/lib'. What should I do? > > > > > > > > > > Seungbum > > > > > > > > > > > > > > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From balay at mcs.anl.gov Thu Apr 11 09:50:31 2013 From: balay at mcs.anl.gov (Satish Balay) Date: Thu, 11 Apr 2013 09:50:31 -0500 (CDT) Subject: [petsc-users] cannot find 'libz.a' when configuring In-Reply-To: References: Message-ID: Glad its working. Specifying all hdf5 dependencies with --with-hdf5-include= and --with-hdf5-lib= options is the right thing. [except for the LIBS quirk for -lz] Satish On Thu, 11 Apr 2013, Seungbum Koo wrote: > Thank you, it worked. > > --with-hdf5-include=' ' > and so for --with-hdf5-lib > > made configuration and installation successful. > > Seungbum > > > On Wed, Apr 10, 2013 at 7:37 PM, Satish Balay wrote: > > > I could retrieve it from the one you attempted to send to petsc-users. And > > see: > > > > >>>>>> > > Possible ERROR while running linker: /opt/hdf5-1.8.9/lib/libhdf5.a(H5Z.o): > > In function `H5Zunregister': > > H5Z.c:(.text+0xa9b): undefined reference to `SZ_encoder_enabled' > > /opt/hdf5-1.8.9/lib/libhdf5.a(H5Z.o): In function `H5Z_unregister': > > H5Z.c:(.text+0x1872): undefined reference to `SZ_encoder_enabled' > > /opt/hdf5-1.8.9/lib/libhdf5.a(H5Z.o): In function `H5Zfilter_avail': > > H5Z.c:(.text+0x2592): undefined reference to `SZ_encoder_enabled' > > <<<<<<< > > > > You did mention building hdf4 with szip-2.1. 
So perhaps you sould > > specify hdf5 - and all the dependent libraries using > > --with-hdf5-include --with-hdf5-lib options [and not --with-hdf5 > > option] > > > > Satish > > > > > > On Wed, 10 Apr 2013, Satish Balay wrote: > > > > > send logs to petsc-maint > > > > > > Satish > > > > > > On Wed, 10 Apr 2013, Seungbum Koo wrote: > > > > > > > Thank you. It fixed the problem. But it stopped with other message, > > saying > > > > that > > > > > > > > > > ******************************************************************************* > > > > UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log > > for > > > > details): > > > > > > ------------------------------------------------------------------------------- > > > > --with-hdf5-dir=/opt/hdf5-1.8.9 did not work > > > > > > ******************************************************************************* > > > > > > > > I asked up to which version of HDF5 does PETSc support this morning > > because > > > > of this problem, thinking if this is a compatibility reason or some > > sort > > > > similar. > > > > > > > > I tried to attach the configure.log file but there is 512kb limit to > > this > > > > mailing list. > > > > > > > > Any suggestion will help. > > > > > > > > Thanks. > > > > > > > > Seungbum > > > > > > > > > > > > On Wed, Apr 10, 2013 at 5:21 PM, Satish Balay > > wrote: > > > > > > > > > try using configure option LIBS="-L/opt/zlib-1.2.7/lib -lz' > > > > > > > > > > Satish > > > > > > > > > > On Wed, 10 Apr 2013, Seungbum Koo wrote: > > > > > > > > > > > Hi. I tried to add hdf5 when configuring. HDF5-1.8.9 is currently > > > > > installed > > > > > > with szip-2.1 and zlib-1.2.7. > > > > > > > > > > > > It stops with message > > > > > > > > > > > > > > > > > > > ******************************************************************************* > > > > > > UNABLE to CONFIGURE with GIVEN OPTIONS (see > > configure.log for > > > > > > details): > > > > > > > > > > > > > ------------------------------------------------------------------------------- > > > > > > Compression library [libz.a or equivalent] not found > > > > > > > > > > > > > ******************************************************************************* > > > > > > > > > > > > What I don't understand is that 'libz.a' file exists in > > > > > > '/opt/zlib-1.2.7/lib'. What should I do? > > > > > > > > > > > > Seungbum > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > From zhang.wei at chalmers.se Fri Apr 12 08:22:15 2013 From: zhang.wei at chalmers.se (Zhang Wei) Date: Fri, 12 Apr 2013 13:22:15 +0000 Subject: [petsc-users] algorithm to get pure real eigen valule for general eigenvalue problem(Non-hermitian type) Message-ID: Dear Sir: I am using slepc for hydrodynamic instability analysis. From my understanding most of these unstable modes in hydrodynamic are large scale structure with low frequency, which means very small angle for the complex eigenvalue. Actually I tried a channel flow case, finally I got several Eigen value with imaginary part is exact 0. and from the Eigen modes, it gives "correct" results in one direction but not wave, since the wave number is 0. So I suppose there must be algorithm to control so. Does any one know what could influence so? 
Best Regards Wei From mike.hui.zhang at hotmail.com Fri Apr 12 09:44:31 2013 From: mike.hui.zhang at hotmail.com (Hui Zhang) Date: Fri, 12 Apr 2013 16:44:31 +0200 Subject: [petsc-users] Assembly of local Mat's to a parallel Mat Message-ID: Assembly of local Mat's to a parallel Mat formally defined by A = \sum_i Ri^T Ai Ri. My Ai's are originally defined in sub-communicators. In Petsc, I was told before there are no counterpart of Ri that can promote the mat Ai to sup-communicator where A stays. (VecScatter can only acts on Vec but not Mat) So, what is the easiest way to achieve this task? Note that Ai's are already defined in sub-communicators. From knepley at gmail.com Fri Apr 12 09:55:33 2013 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 12 Apr 2013 09:55:33 -0500 Subject: [petsc-users] Assembly of local Mat's to a parallel Mat In-Reply-To: References: Message-ID: On Fri, Apr 12, 2013 at 9:44 AM, Hui Zhang wrote: > Assembly of local Mat's to a parallel Mat formally defined by A = \sum_i > Ri^T Ai Ri. > > My Ai's are originally defined in sub-communicators. In Petsc, I was told > before there are no counterpart of Ri that can promote the mat Ai to > sup-communicator where A stays. (VecScatter can only acts on Vec but not > Mat) > > So, what is the easiest way to achieve this task? Note that Ai's are > already defined in sub-communicators. > What are you trying to do? Get the action of A, or assemble it directly? Matt -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From mike.hui.zhang at hotmail.com Fri Apr 12 10:03:03 2013 From: mike.hui.zhang at hotmail.com (Hui Zhang) Date: Fri, 12 Apr 2013 17:03:03 +0200 Subject: [petsc-users] Assembly of local Mat's to a parallel Mat In-Reply-To: References: Message-ID: On Apr 12, 2013, at 4:55 PM, Matthew Knepley wrote: > On Fri, Apr 12, 2013 at 9:44 AM, Hui Zhang wrote: > Assembly of local Mat's to a parallel Mat formally defined by A = \sum_i Ri^T Ai Ri. > > My Ai's are originally defined in sub-communicators. In Petsc, I was told before there are no counterpart of Ri that can promote the mat Ai to sup-communicator where A stays. (VecScatter can only acts on Vec but not Mat) > > So, what is the easiest way to achieve this task? Note that Ai's are already defined in sub-communicators. > > What are you trying to do? Get the action of A, or assemble it directly? Assembly it directly. I just found MatGetSubMatricesParallel. Maybe I can learn from the source codes of this function but I can not find where they are. > > Matt > > -- > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. > -- Norbert Wiener From jedbrown at mcs.anl.gov Fri Apr 12 10:22:35 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Fri, 12 Apr 2013 10:22:35 -0500 Subject: [petsc-users] Assembly of local Mat's to a parallel Mat In-Reply-To: References: Message-ID: <8738uvrcxw.fsf@mcs.anl.gov> Hui Zhang writes: > Assembly it directly. I just found MatGetSubMatricesParallel. This is not a private implementation function so it's not a good place to learn. It also doesn't do what you want. 
For your purpose, you can either loop over the entries of A_i inserting them according to the R_i or you can create a block diagonal parallel matrix A_i constructed by joining together all the diagonal blocks. Presumably R is already parallel, so then you can use MatPtAP to assemble the product. Note that you might be able to assemble the block diagonal A directly. > Maybe I can learn from the source codes of this function but I can not > find where they are. For navigating source code, see the manual section about setting up GNU Global tags or etags. PETSc functions are named to tab complete well with tags. From mike.hui.zhang at hotmail.com Fri Apr 12 11:49:12 2013 From: mike.hui.zhang at hotmail.com (Hui Zhang) Date: Fri, 12 Apr 2013 18:49:12 +0200 Subject: [petsc-users] Assembly of local Mat's to a parallel Mat In-Reply-To: <8738uvrcxw.fsf@mcs.anl.gov> References: <8738uvrcxw.fsf@mcs.anl.gov> Message-ID: On Apr 12, 2013, at 5:22 PM, Jed Brown wrote: > Hui Zhang writes: > >> Assembly it directly. I just found MatGetSubMatricesParallel. > > This is not a private implementation function so it's not a good place > to learn. It also doesn't do what you want. > > For your purpose, you can either loop over the entries of A_i inserting > them according to the R_i I can understand this method. A further question: since Ai itself is shared by many processors, MatSetValues to A should be called by only one of the processor sharing Ai. Is it right? > or you can create a block diagonal parallel > matrix A_i constructed by joining together all the diagonal blocks. Which function can do this? I only found diagonal block Mat with each block of the same size and dense. Thanks! > Presumably R is already parallel, so then you can use MatPtAP to > assemble the product. > > Note that you might be able to assemble the block diagonal A directly. > >> Maybe I can learn from the source codes of this function but I can not >> find where they are. > > For navigating source code, see the manual section about setting up GNU > Global tags or etags. PETSc functions are named to tab complete well > with tags. > From knepley at gmail.com Fri Apr 12 12:09:55 2013 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 12 Apr 2013 12:09:55 -0500 Subject: [petsc-users] Assembly of local Mat's to a parallel Mat In-Reply-To: References: <8738uvrcxw.fsf@mcs.anl.gov> Message-ID: On Fri, Apr 12, 2013 at 11:49 AM, Hui Zhang wrote: > > On Apr 12, 2013, at 5:22 PM, Jed Brown wrote: > > > Hui Zhang writes: > > > >> Assembly it directly. I just found MatGetSubMatricesParallel. > > > > This is not a private implementation function so it's not a good place > > to learn. It also doesn't do what you want. > > > > For your purpose, you can either loop over the entries of A_i inserting > > them according to the R_i > > I can understand this method. A further question: since Ai itself is > shared > by many processors, MatSetValues to A should be called by only one of the > processor sharing Ai. Is it right? > MatSetValues() is not collective. > > or you can create a block diagonal parallel > > matrix A_i constructed by joining together all the diagonal blocks. > > Which function can do this? I only found diagonal block Mat with each > block > of the same size and dense. Thanks! > This is just a simple shift in index with normal MatSetValues(). Matt > > Presumably R is already parallel, so then you can use MatPtAP to > > assemble the product. > > > > Note that you might be able to assemble the block diagonal A directly. 
> > > >> Maybe I can learn from the source codes of this function but I can not > >> find where they are. > > > > For navigating source code, see the manual section about setting up GNU > > Global tags or etags. PETSc functions are named to tab complete well > > with tags. > > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Fri Apr 12 12:10:35 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Fri, 12 Apr 2013 12:10:35 -0500 Subject: [petsc-users] Assembly of local Mat's to a parallel Mat In-Reply-To: References: <8738uvrcxw.fsf@mcs.anl.gov> Message-ID: <87hajby8s4.fsf@mcs.anl.gov> Hui Zhang writes: > On Apr 12, 2013, at 5:22 PM, Jed Brown wrote: > > I can understand this method. A further question: since Ai itself is shared > by many processors, MatSetValues to A should be called by only one of the > processor sharing Ai. Is it right? If your Ai are already shared, why can't you just start by creating the big block-diagonal system containing all the Ai along the diagonal? Then MatGetSubMatrix() will give you the part if you really need to do something separate with it. >> or you can create a block diagonal parallel >> matrix A_i constructed by joining together all the diagonal blocks. > > Which function can do this? I only found diagonal block Mat with each block > of the same size and dense. Thanks! There is no function to do it in-place, but you can just create the big matrix and loop through the small matrix inserting rows. It's usually better to start with the big matrix. From mike.hui.zhang at hotmail.com Fri Apr 12 12:44:06 2013 From: mike.hui.zhang at hotmail.com (Hui Zhang) Date: Fri, 12 Apr 2013 19:44:06 +0200 Subject: [petsc-users] Assembly of local Mat's to a parallel Mat In-Reply-To: <87hajby8s4.fsf@mcs.anl.gov> References: <8738uvrcxw.fsf@mcs.anl.gov> <87hajby8s4.fsf@mcs.anl.gov> Message-ID: On Apr 12, 2013, at 7:10 PM, Jed Brown wrote: > Hui Zhang writes: > >> On Apr 12, 2013, at 5:22 PM, Jed Brown wrote: >> >> I can understand this method. A further question: since Ai itself is shared >> by many processors, MatSetValues to A should be called by only one of the >> processor sharing Ai. Is it right? > > If your Ai are already shared, why can't you just start by creating the > big block-diagonal system containing all the Ai along the diagonal? > Then MatGetSubMatrix() will give you the part if you really need to do > something separate with it. Thanks! My difficulty lie in that Ai is a result of matrix computations in a sub-communcator (of the communicator of A). So I do not know a priori the non-zero structure of Ai. Following the instructions of you, I think I need to first MatGetRow of Ai, then MatSetValues to A. Is there a better way? > >>> or you can create a block diagonal parallel >>> matrix A_i constructed by joining together all the diagonal blocks. >> >> Which function can do this? I only found diagonal block Mat with each block >> of the same size and dense. Thanks! > > There is no function to do it in-place, but you can just create the big > matrix and loop through the small matrix inserting rows. It's usually > better to start with the big matrix. 
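[A minimal C sketch of the MatGetRow()/MatSetValues() loop proposed here; the reply that follows confirms this is the recommended approach when Ai lives on a different communicator. The function name and the rowmap/colmap arrays are illustrative only: they stand for application-provided maps from Ai's row/column indices to the global indices of A, i.e. the action of the Ri. Only processes in Ai's sub-communicator execute the loop; the final assembly of A is done collectively afterwards.]

#include <petscmat.h>

/* Sketch, not code from the thread: insert a sub-communicator matrix Ai into a
   parallel matrix A on the parent communicator.  rowmap[]/colmap[] map indices
   of Ai to global indices of A and are assumed to come from the application. */
PetscErrorCode InsertSubcommMat(Mat A, Mat Ai, const PetscInt rowmap[], const PetscInt colmap[])
{
  PetscErrorCode    ierr;
  PetscInt          i, j, ncols, rstart, rend, N, *gcols;
  const PetscInt    *cols;
  const PetscScalar *vals;

  PetscFunctionBegin;
  ierr = MatGetSize(Ai, NULL, &N);CHKERRQ(ierr);                  /* number of columns of Ai, to size a buffer */
  ierr = PetscMalloc(N*sizeof(PetscInt), &gcols);CHKERRQ(ierr);
  ierr = MatGetOwnershipRange(Ai, &rstart, &rend);CHKERRQ(ierr);  /* rows of Ai owned by this process */
  for (i = rstart; i < rend; i++) {
    ierr = MatGetRow(Ai, i, &ncols, &cols, &vals);CHKERRQ(ierr);
    for (j = 0; j < ncols; j++) gcols[j] = colmap[cols[j]];       /* translate to global columns of A */
    /* MatSetValues() is not collective, so each process only inserts the rows of Ai
       it owns; ADD_VALUES accumulates the sum A = sum_i Ri^T Ai Ri */
    ierr = MatSetValues(A, 1, &rowmap[i], ncols, gcols, vals, ADD_VALUES);CHKERRQ(ierr);
    ierr = MatRestoreRow(Ai, i, &ncols, &cols, &vals);CHKERRQ(ierr);
  }
  ierr = PetscFree(gcols);CHKERRQ(ierr);
  /* after all Ai have been inserted, call MatAssemblyBegin/End(A, MAT_FINAL_ASSEMBLY)
     collectively on A's communicator */
  PetscFunctionReturn(0);
}

[If A is a sparse (AIJ) matrix, preallocating it from the known supports of the Ri makes this insertion far cheaper.]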
> From jedbrown at mcs.anl.gov Fri Apr 12 13:37:10 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Fri, 12 Apr 2013 13:37:10 -0500 Subject: [petsc-users] Assembly of local Mat's to a parallel Mat In-Reply-To: References: <8738uvrcxw.fsf@mcs.anl.gov> <87hajby8s4.fsf@mcs.anl.gov> Message-ID: <87ehefy4rt.fsf@mcs.anl.gov> Hui Zhang writes: > Thanks! My difficulty lie in that Ai is a result of matrix computations in a > sub-communcator (of the communicator of A). So I do not know a priori > the non-zero structure of Ai. Following the instructions of you, I think > I need to first MatGetRow of Ai, then MatSetValues to A. Is there a better way? When they're on different communicators, this is the recommended way. From dharmareddy84 at gmail.com Fri Apr 12 16:50:22 2013 From: dharmareddy84 at gmail.com (Dharmendar Reddy) Date: Fri, 12 Apr 2013 16:50:22 -0500 Subject: [petsc-users] small doubt about Fortran Arrays Message-ID: Hello, If a petsc function XXX has no corresponding XXXF90, then where ever, say, PetscScalar values[] is expected, one should pass PetscScalar, pointer :: values(:) not PetscScalar, allocatable :: values(:) Right ? Can i request for VecSetValuesSectionF90 ? Thanks Reddy -- ----------------------------------------------------- Dharmendar Reddy Palle Graduate Student Microelectronics Research center, University of Texas at Austin, 10100 Burnet Road, Bldg. 160 MER 2.608F, TX 78758-4445 e-mail: dharmareddy84 at gmail.com Phone: +1-512-350-9082 United States of America. Homepage: https://webspace.utexas.edu/~dpr342 -------------- next part -------------- An HTML attachment was scrubbed... URL: From dharmareddy84 at gmail.com Fri Apr 12 16:53:32 2013 From: dharmareddy84 at gmail.com (Dharmendar Reddy) Date: Fri, 12 Apr 2013 16:53:32 -0500 Subject: [petsc-users] INSERT_BC_VALUES Message-ID: I am getting an error using INSERT_BC_VALUES ./VariationalProblemBoundProcedures.F90(467): error #6404: This name does not have a type, and must have an explicit type. [INSERT_BC_VALUES] call this%applyBoundaryConditions(dm,x,INSERT_BC_VALUES,ierr) i have included petsc.h90 in the code. I have had no issue using ADD_VALUES or INSERT_VALUES i am using petsc branch knepley/pylith thanks Reddy -- ----------------------------------------------------- Dharmendar Reddy Palle Graduate Student Microelectronics Research center, University of Texas at Austin, 10100 Burnet Road, Bldg. 160 MER 2.608F, TX 78758-4445 e-mail: dharmareddy84 at gmail.com Phone: +1-512-350-9082 United States of America. Homepage: https://webspace.utexas.edu/~dpr342 -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Fri Apr 12 18:09:27 2013 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 12 Apr 2013 18:09:27 -0500 Subject: [petsc-users] INSERT_BC_VALUES In-Reply-To: References: Message-ID: On Fri, Apr 12, 2013 at 4:53 PM, Dharmendar Reddy wrote: > I am getting an error using INSERT_BC_VALUES > > ./VariationalProblemBoundProcedures.F90(467): error #6404: This name does > not have a type, and must have an explicit type. [INSERT_BC_VALUES] > call this%applyBoundaryConditions(dm,x,INSERT_BC_VALUES,ierr) > > i have included petsc.h90 in the code. > > I have had no issue using ADD_VALUES or INSERT_VALUES > Fixed. 
> i am using petsc branch knepley/pylith > I assume you mean knepley/plex Thanks, Matt > thanks > Reddy > > > -- > ----------------------------------------------------- > Dharmendar Reddy Palle > Graduate Student > Microelectronics Research center, > University of Texas at Austin, > 10100 Burnet Road, Bldg. 160 > MER 2.608F, TX 78758-4445 > e-mail: dharmareddy84 at gmail.com > Phone: +1-512-350-9082 > United States of America. > Homepage: https://webspace.utexas.edu/~dpr342 > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From dharmareddy84 at gmail.com Fri Apr 12 18:21:43 2013 From: dharmareddy84 at gmail.com (Dharmendar Reddy) Date: Fri, 12 Apr 2013 18:21:43 -0500 Subject: [petsc-users] INSERT_BC_VALUES In-Reply-To: References: Message-ID: Hello, I am on knepley/pylith Should i be using knepley/plex ? I need to use DMPlex Submesh functionality. Thanks Reddy On Fri, Apr 12, 2013 at 6:09 PM, Matthew Knepley wrote: > On Fri, Apr 12, 2013 at 4:53 PM, Dharmendar Reddy > wrote: > >> I am getting an error using INSERT_BC_VALUES >> >> ./VariationalProblemBoundProcedures.F90(467): error #6404: This name does >> not have a type, and must have an explicit type. [INSERT_BC_VALUES] >> call this%applyBoundaryConditions(dm,x,INSERT_BC_VALUES,ierr) >> >> i have included petsc.h90 in the code. >> >> I have had no issue using ADD_VALUES or INSERT_VALUES >> > > Fixed. > > >> i am using petsc branch knepley/pylith >> > > I assume you mean knepley/plex > > Thanks, > > Matt > > >> thanks >> Reddy >> >> >> -- >> ----------------------------------------------------- >> Dharmendar Reddy Palle >> Graduate Student >> Microelectronics Research center, >> University of Texas at Austin, >> 10100 Burnet Road, Bldg. 160 >> MER 2.608F, TX 78758-4445 >> e-mail: dharmareddy84 at gmail.com >> Phone: +1-512-350-9082 >> United States of America. >> Homepage: https://webspace.utexas.edu/~dpr342 >> > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > -- ----------------------------------------------------- Dharmendar Reddy Palle Graduate Student Microelectronics Research center, University of Texas at Austin, 10100 Burnet Road, Bldg. 160 MER 2.608F, TX 78758-4445 e-mail: dharmareddy84 at gmail.com Phone: +1-512-350-9082 United States of America. Homepage: https://webspace.utexas.edu/~dpr342 -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Fri Apr 12 18:38:45 2013 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 12 Apr 2013 18:38:45 -0500 Subject: [petsc-users] small doubt about Fortran Arrays In-Reply-To: References: Message-ID: On Fri, Apr 12, 2013 at 4:50 PM, Dharmendar Reddy wrote: > Hello, > If a petsc function XXX has no corresponding XXXF90, then where > ever, say, PetscScalar values[] is expected, one should pass > > PetscScalar, pointer :: values(:) > > not > PetscScalar, allocatable :: values(:) > > Right ? > No, if its not F90, then it uses F77-style arrays. > Can i request for VecSetValuesSectionF90 ? > Added. 
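[A short Fortran sketch of the distinction made here, using the finclude header layout of that era and made-up names (touch_vec, x): a routine without an F90 variant takes an ordinary F77-style array, while the XXXF90 routines must be given a pointer array that PETSc associates with its own storage.]

      subroutine touch_vec(x, ierr)
      implicit none
#include <finclude/petscsys.h>
#include <finclude/petscvec.h>
#include <finclude/petscvec.h90>
!     sketch only: x is an existing Vec with at least two entries
      Vec                  x
      PetscErrorCode       ierr
      PetscInt             idx(2)
      PetscScalar          vals(2)
      PetscScalar, pointer :: xv(:)

!     non-F90 interface: an ordinary array is what is expected
      idx(1) = 0
      idx(2) = 1
      vals   = 1.0
      call VecSetValues(x, 2, idx, vals, INSERT_VALUES, ierr)
      call VecAssemblyBegin(x, ierr)
      call VecAssemblyEnd(x, ierr)

!     F90 interface: the dummy argument is a pointer, so the actual must be too
      call VecGetArrayF90(x, xv, ierr)
      xv(1) = 2.0*xv(1)
      call VecRestoreArrayF90(x, xv, ierr)
      end subroutine touch_vec

[An allocatable array can still be passed to the plain interfaces; it is only the F90 interfaces, whose dummy arguments are declared as pointers, that require the pointer attribute.]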
Matt > Thanks > Reddy > > -- > ----------------------------------------------------- > Dharmendar Reddy Palle > Graduate Student > Microelectronics Research center, > University of Texas at Austin, > 10100 Burnet Road, Bldg. 160 > MER 2.608F, TX 78758-4445 > e-mail: dharmareddy84 at gmail.com > Phone: +1-512-350-9082 > United States of America. > Homepage: https://webspace.utexas.edu/~dpr342 > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Fri Apr 12 18:40:17 2013 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 12 Apr 2013 18:40:17 -0500 Subject: [petsc-users] INSERT_BC_VALUES In-Reply-To: References: Message-ID: On Fri, Apr 12, 2013 at 6:21 PM, Dharmendar Reddy wrote: > Hello, > I am on knepley/pylith > Should i be using knepley/plex ? > I need to use DMPlex Submesh functionality. > Both of them are in 'next', so maybe that is the one to use. It should not be a big problem since we are about to release. Matt > Thanks > Reddy > > > On Fri, Apr 12, 2013 at 6:09 PM, Matthew Knepley wrote: > >> On Fri, Apr 12, 2013 at 4:53 PM, Dharmendar Reddy < >> dharmareddy84 at gmail.com> wrote: >> >>> I am getting an error using INSERT_BC_VALUES >>> >>> ./VariationalProblemBoundProcedures.F90(467): error #6404: This name >>> does not have a type, and must have an explicit type. [INSERT_BC_VALUES] >>> call this%applyBoundaryConditions(dm,x,INSERT_BC_VALUES,ierr) >>> >>> i have included petsc.h90 in the code. >>> >>> I have had no issue using ADD_VALUES or INSERT_VALUES >>> >> >> Fixed. >> >> >>> i am using petsc branch knepley/pylith >>> >> >> I assume you mean knepley/plex >> >> Thanks, >> >> Matt >> >> >>> thanks >>> Reddy >>> >>> >>> -- >>> ----------------------------------------------------- >>> Dharmendar Reddy Palle >>> Graduate Student >>> Microelectronics Research center, >>> University of Texas at Austin, >>> 10100 Burnet Road, Bldg. 160 >>> MER 2.608F, TX 78758-4445 >>> e-mail: dharmareddy84 at gmail.com >>> Phone: +1-512-350-9082 >>> United States of America. >>> Homepage: https://webspace.utexas.edu/~dpr342 >>> >> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> > > > > -- > ----------------------------------------------------- > Dharmendar Reddy Palle > Graduate Student > Microelectronics Research center, > University of Texas at Austin, > 10100 Burnet Road, Bldg. 160 > MER 2.608F, TX 78758-4445 > e-mail: dharmareddy84 at gmail.com > Phone: +1-512-350-9082 > United States of America. > Homepage: https://webspace.utexas.edu/~dpr342 > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Fri Apr 12 18:46:45 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Fri, 12 Apr 2013 18:46:45 -0500 Subject: [petsc-users] INSERT_BC_VALUES In-Reply-To: References: Message-ID: Let's keep it simple and always ask people to follow 'next' for the latest features unless there are special circumstances. 
On Apr 12, 2013 6:40 PM, "Matthew Knepley" wrote: > On Fri, Apr 12, 2013 at 6:21 PM, Dharmendar Reddy > wrote: > >> Hello, >> I am on knepley/pylith >> Should i be using knepley/plex ? >> I need to use DMPlex Submesh functionality. >> > > Both of them are in 'next', so maybe that is the one to use. It should not > be a big problem > since we are about to release. > > Matt > > >> Thanks >> Reddy >> >> >> On Fri, Apr 12, 2013 at 6:09 PM, Matthew Knepley wrote: >> >>> On Fri, Apr 12, 2013 at 4:53 PM, Dharmendar Reddy < >>> dharmareddy84 at gmail.com> wrote: >>> >>>> I am getting an error using INSERT_BC_VALUES >>>> >>>> ./VariationalProblemBoundProcedures.F90(467): error #6404: This name >>>> does not have a type, and must have an explicit type. [INSERT_BC_VALUES] >>>> call this%applyBoundaryConditions(dm,x,INSERT_BC_VALUES,ierr) >>>> >>>> i have included petsc.h90 in the code. >>>> >>>> I have had no issue using ADD_VALUES or INSERT_VALUES >>>> >>> >>> Fixed. >>> >>> >>>> i am using petsc branch knepley/pylith >>>> >>> >>> I assume you mean knepley/plex >>> >>> Thanks, >>> >>> Matt >>> >>> >>>> thanks >>>> Reddy >>>> >>>> >>>> -- >>>> ----------------------------------------------------- >>>> Dharmendar Reddy Palle >>>> Graduate Student >>>> Microelectronics Research center, >>>> University of Texas at Austin, >>>> 10100 Burnet Road, Bldg. 160 >>>> MER 2.608F, TX 78758-4445 >>>> e-mail: dharmareddy84 at gmail.com >>>> Phone: +1-512-350-9082 >>>> United States of America. >>>> Homepage: https://webspace.utexas.edu/~dpr342 >>>> >>> >>> >>> >>> -- >>> What most experimenters take for granted before they begin their >>> experiments is infinitely more interesting than any results to which their >>> experiments lead. >>> -- Norbert Wiener >>> >> >> >> >> -- >> ----------------------------------------------------- >> Dharmendar Reddy Palle >> Graduate Student >> Microelectronics Research center, >> University of Texas at Austin, >> 10100 Burnet Road, Bldg. 160 >> MER 2.608F, TX 78758-4445 >> e-mail: dharmareddy84 at gmail.com >> Phone: +1-512-350-9082 >> United States of America. >> Homepage: https://webspace.utexas.edu/~dpr342 >> > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sonyablade2010 at hotmail.com Sat Apr 13 10:52:47 2013 From: sonyablade2010 at hotmail.com (Sonya Blade) Date: Sat, 13 Apr 2013 16:52:47 +0100 Subject: [petsc-users] How to get the eigenvectors in Slepc Message-ID: Dear all, I experience a difficulty with retrieving the eigenvectors? obtained after eigen solution of generalized eigenvalue A*X=lambda*B*X problem.The Slepc nicely finds the eigenvalues but doesn't retrieve the? eigenvectors after calling the?EPSGetEigenvector(eps,1,vec,vec1). I also? tried the EPSGetEigenpairs function but neither of them? I attach the main program codes. Your help will be appreciated. Regards, -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... 
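[A generic sketch, not the scrubbed main.c attachment, in the spirit of the SLEPc tutorial examples: fetch one converged eigenvector after EPSSolve() and inspect its entries. A Vec cannot be indexed like a C array; its entries are reached through VecView() or VecGetArray(). The function and its arguments are illustrative; A is the first operator of the eigenproblem.]

#include <slepceps.h>

/* Sketch only, not the scrubbed attachment: print eigenvector i of a solved EPS.
   Indices are 0-based and must be below the count returned by EPSGetConverged(). */
PetscErrorCode ShowEigenvector(EPS eps, Mat A, PetscInt i)
{
  PetscErrorCode ierr;
  Vec            xr, xi;
  PetscScalar    *a;
  PetscInt       j, nconv, nlocal;

  ierr = EPSGetConverged(eps, &nconv);CHKERRQ(ierr);
  if (i >= nconv) SETERRQ(PETSC_COMM_WORLD, PETSC_ERR_ARG_OUTOFRANGE, "Index beyond converged pairs");
  ierr = MatGetVecs(A, &xr, NULL);CHKERRQ(ierr);                 /* vectors with the same layout as A */
  ierr = VecDuplicate(xr, &xi);CHKERRQ(ierr);
  ierr = EPSGetEigenvector(eps, i, xr, xi);CHKERRQ(ierr);

  ierr = VecView(xr, PETSC_VIEWER_STDOUT_WORLD);CHKERRQ(ierr);   /* quickest way to look at it */

  /* programmatic access: work on the locally owned array, never on the Vec itself */
  ierr = VecGetLocalSize(xr, &nlocal);CHKERRQ(ierr);
  ierr = VecGetArray(xr, &a);CHKERRQ(ierr);
  for (j = 0; j < nlocal; j++) {
    ierr = PetscPrintf(PETSC_COMM_SELF, " %3.8f\n", (double)PetscRealPart(a[j]));CHKERRQ(ierr);
  }
  ierr = VecRestoreArray(xr, &a);CHKERRQ(ierr);

  ierr = VecDestroy(&xr);CHKERRQ(ierr);
  ierr = VecDestroy(&xi);CHKERRQ(ierr);
  return 0;
}

[In a real-arithmetic build the imaginary part of a complex pair is returned in xi, so both vectors may need to be examined.]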
Name: main.c URL: From hzhang at mcs.anl.gov Sat Apr 13 11:07:16 2013 From: hzhang at mcs.anl.gov (Hong Zhang) Date: Sat, 13 Apr 2013 11:07:16 -0500 Subject: [petsc-users] How to get the eigenvectors in Slepc In-Reply-To: References: Message-ID: Sonya : Test slpec example first, e.g., slepc-3.3-p3/src/eps/examples/tutorials>grep GetEigenvector *.c ex12.c: ierr = EPSGetEigenvector(eps,i,X[i],PETSC_NULL);CHKERRQ(ierr); ex12.c: ierr = EPSGetEigenvectorLeft(eps,i,Y[i],PETSC_NULL);CHKERRQ(ierr); ex7.c: ierr = EPSGetEigenvector(eps,i,xr,xi);CHKERRQ(ierr); i.e., ex7.c and ex12.c call EPSGetEigenvector(). Hong Dear all, > > I experience a difficulty with retrieving the eigenvectors > obtained after eigen solution of generalized eigenvalue A*X=lambda*B*X > problem.The Slepc nicely finds the eigenvalues but doesn't retrieve the > eigenvectors after calling the EPSGetEigenvector(eps,1,vec,vec1). I also > tried the EPSGetEigenpairs function but neither of them > > I attach the main program codes. > > Your help will be appreciated. > > Regards, -------------- next part -------------- An HTML attachment was scrubbed... URL: From sonyablade2010 at hotmail.com Sat Apr 13 11:34:41 2013 From: sonyablade2010 at hotmail.com (Sonya Blade) Date: Sat, 13 Apr 2013 17:34:41 +0100 Subject: [petsc-users] How to get the eigenvectors in Slepc In-Reply-To: References: Message-ID: Thank you Hong, I already looked at those examples after not achieving the successful results I decide to share it. It seems that I don't know how to read? values from the vectors, EPSGetEigenvector(eps,1,vec,PETSC_NULL); retrieves no errors which I assume that it returns the eigenvectors to corresponding? eigenvalue(which is second in my case). I use following notation to read values from vectors, ?here I'm not sure? whether the "&vec[i] or vec[i]" ?can be used to read the values from vector.? Vec vec; for (i=0;i<9;i++) ? { ? ? PetscPrintf(MPI_COMM_WORLD," %3.8f: ",&vec[i] ); ? } Regards, From knepley at gmail.com Sat Apr 13 13:02:06 2013 From: knepley at gmail.com (Matthew Knepley) Date: Sat, 13 Apr 2013 13:02:06 -0500 Subject: [petsc-users] How to get the eigenvectors in Slepc In-Reply-To: References: Message-ID: On Sat, Apr 13, 2013 at 11:34 AM, Sonya Blade wrote: > Thank you Hong, > > I already looked at those examples after not achieving the successful > results I decide to share it. It seems that I don't know how to read > values from the vectors, EPSGetEigenvector(eps,1,vec,PETSC_NULL); retrieves > no errors which I assume that it returns the eigenvectors to corresponding > eigenvalue(which is second in my case). > > I use following notation to read values from vectors, here I'm not sure > whether the "&vec[i] or vec[i]" can be used to read the values from > vector. > > Vec vec; > > for (i=0;i<9;i++) > { > PetscPrintf(MPI_COMM_WORLD," %3.8f: ",&vec[i] ); > } > Read the Manual chapter on Vectors. Matt > Regards, -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From kassiopik at gmail.com Sat Apr 13 13:59:02 2013 From: kassiopik at gmail.com (Kassiopi Kassiopi2) Date: Sat, 13 Apr 2013 21:59:02 +0300 Subject: [petsc-users] Matrix assembly too slow Message-ID: Hello, I am trying to use PETSc in my code. My numerical scheme is BEM and requires a dense matrix. 
I use the mpidense matrix type, and each matrix entry is populated incrementally. This results in many calls to matSetValue, for every entry of the matrix. However, I don't need to get values from the matrix, until all the calculations are done. Moreover, when the matrix is created I use PETSC_DECIDE for the local rows and columns and I also do preallocation using MatMPIDenseSetPreallocation. Each process writes to specific rows of the matrix, and assuming that mpidense matrices are distributed row-wise to the processes, each process should write more or less to its own rows. Moreover, to avoid a bottleneck at the final matrix assembly, I do a matAssemblyBegin-End with MAT_FLUSH_ASSEMBLY, every time the stash size reaches a critical value (of 1 million). However, when all operations are done and matAssemblyBegin-End is called with MAT_FINAL_ASSEMBLY the whole program gets stuck there. It doesn't crash, but it doesn't gets through the assembly either. When to do 'top', the processes seem to be in sleep status. I have tried waiting for many hours, but without any development. Even though the remaining items in the stash are less than 1million, which had an acceptable time cost for MAT_FLUSH_ASSEMBLY, it seems as if MAT_FINAL_ASSEMBLY just cannot deal with it. I would expect that this would take a few seconds but definitely not hours... The matrix dimensions are 28356 x 28356. For smaller problems, i.e. ~9000 rows and colums there is no significant delay. My questions are the following: 1) I know that the general advice is to fill the matrix in large blocks, but I am trying to avoid it for now. I would expect that doing matAssemblyBegin-End with MAT_FLUSH_ASSEMBLY every now and then, would reduce the load during the final assembly. Is my assumption wrong? 2) How is MatAssemblyBegin-End different when called with MAT_FINAL_ASSEMBLY instead of MAT_FLUSH_ASSEMBLY? 3) If this is the expected behavior, and it takes so long for a 28000 x 28000 linear system, it would be impossible to scale up to millions of dofs. It seems hard to believe that the cost communicating the matrix with matAssemblyBegin-End is much bigger or even comparable to the cost of actually calculating the values with numerical integration. 4) Unfortunately I am not experienced in debugging parallel programs. Is there a way to see if the processes are blocked waiting for each other? I apologize for the long email and thank you for taking the time reading it. Best Regards, Kassiopik -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Sat Apr 13 14:23:59 2013 From: knepley at gmail.com (Matthew Knepley) Date: Sat, 13 Apr 2013 14:23:59 -0500 Subject: [petsc-users] Matrix assembly too slow In-Reply-To: References: Message-ID: On Sat, Apr 13, 2013 at 1:59 PM, Kassiopi Kassiopi2 wrote: > Hello, > > I am trying to use PETSc in my code. My numerical scheme is BEM and > requires a dense matrix. I use the mpidense matrix type, and each matrix > entry is populated incrementally. This results in many calls to > matSetValue, for every entry of the matrix. However, I don't need to get > values from the matrix, until all the calculations are done. Moreover, when > the matrix is created I use PETSC_DECIDE for the local rows and columns and > I also do preallocation using MatMPIDenseSetPreallocation. > > Each process writes to specific rows of the matrix, and assuming that > mpidense matrices are distributed row-wise to the processes, each process > should write more or less to its own rows. 
Moreover, to avoid a bottleneck > at the final matrix assembly, I do a matAssemblyBegin-End with > MAT_FLUSH_ASSEMBLY, every time the stash size reaches a critical value (of > 1 million). > How many off-process values are you writing? This seems like tremendous overkill. > However, when all operations are done and matAssemblyBegin-End is called > with MAT_FINAL_ASSEMBLY the whole program gets stuck there. It doesn't > crash, but it doesn't gets through the assembly either. When to do 'top', > the processes seem to be in sleep status. I have tried waiting for many > hours, but without any development. Even though the remaining items in the > stash are less than 1million, which had an acceptable time cost for > MAT_FLUSH_ASSEMBLY, it seems as if MAT_FINAL_ASSEMBLY just cannot deal with > it. I would expect that this would take a few seconds but definitely not > hours... > This sounds like it goes to virtual (disk) memory for the transfer, which would explain why it does not happen for smaller sizes. Flush the assembly more frequently. > The matrix dimensions are 28356 x 28356. For smaller problems, i.e. ~9000 > rows and colums there is no significant delay. > > My questions are the following: > > 1) I know that the general advice is to fill the matrix in large blocks, > but I am trying to avoid it for now. I would expect that doing > matAssemblyBegin-End with MAT_FLUSH_ASSEMBLY every now and then, would > reduce the load during the final assembly. Is my assumption wrong? > It is not often enough. > 2) How is MatAssemblyBegin-End different when called with > MAT_FINAL_ASSEMBLY instead of MAT_FLUSH_ASSEMBLY? > Lots of things are setup for sparse matrices, but it should not be different for MPIDENSE. > 3) If this is the expected behavior, and it takes so long for a 28000 x > 28000 linear system, it would be impossible to scale up to millions of > dofs. It seems hard to believe that the cost communicating the matrix with > matAssemblyBegin-End is much bigger or even comparable to the cost of > actually calculating the values with numerical integration. > That intuition is exactly wrong. On modern hardware, you can do 1000 floating point operations for each memory reference. > 4) Unfortunately I am not experienced in debugging parallel programs. Is > there a way to see if the processes are blocked waiting for each other? > gdb should be easy to use. Run with -start_in_debugger, then C-c when it seems to hang and type 'where' to get the stack trace. Matt > I apologize for the long email and thank you for taking the time reading > it. > > Best Regards, > Kassiopik > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From rupp at mcs.anl.gov Sat Apr 13 15:45:55 2013 From: rupp at mcs.anl.gov (Karl Rupp) Date: Sat, 13 Apr 2013 15:45:55 -0500 Subject: [petsc-users] Matrix assembly too slow In-Reply-To: References: Message-ID: <5169C403.5000004@mcs.anl.gov> Hi, > 3) If this is the expected behavior, and it takes so long for a 28000 x > 28000 linear system, it would be impossible to scale up to millions of > dofs. It seems hard to believe that the cost communicating the matrix > with matAssemblyBegin-End is much bigger or even comparable to the cost > of actually calculating the values with numerical integration. 
You should also keep in mind that since you are assembling a dense matrix, you have *much* higher memory requirements as compared to the frequently used H-matrix approaches for BEM. You really need to exploit the structure of the integral kernel 'to scale up to millions of dofs'. Best regards, Karli From dharmareddy84 at gmail.com Sat Apr 13 20:20:22 2013 From: dharmareddy84 at gmail.com (Dharmendar Reddy) Date: Sat, 13 Apr 2013 20:20:22 -0500 Subject: [petsc-users] DMPlexSetClosure Message-ID: Hello, I am getting an undefined reference error :: FEMModules.F90:(.text+0xba77): undefined reference to `dmplexvecsetclosuref90_' FEMModules.F90:(.text+0xbea9): undefined reference to `dmplexmatsetclosuref90_' I can see that DMPlexVecSetClosure is defined in /finclude/ftn-custom/petscdmplex.h90:159: but the name is DMPlexVecSetClosure instead of DMPlexVecSetClosureF90. And there is no DMPlexMatSetClosureF90 What should i do ? Thanks Reddy -- ----------------------------------------------------- Dharmendar Reddy Palle Graduate Student Microelectronics Research center, University of Texas at Austin, 10100 Burnet Road, Bldg. 160 MER 2.608F, TX 78758-4445 e-mail: dharmareddy84 at gmail.com Phone: +1-512-350-9082 United States of America. Homepage: https://webspace.utexas.edu/~dpr342 -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Sat Apr 13 20:22:53 2013 From: knepley at gmail.com (Matthew Knepley) Date: Sat, 13 Apr 2013 20:22:53 -0500 Subject: [petsc-users] DMPlexSetClosure In-Reply-To: References: Message-ID: On Sat, Apr 13, 2013 at 8:20 PM, Dharmendar Reddy wrote: > Hello, > I am getting an undefined reference error :: > FEMModules.F90:(.text+0xba77): undefined reference to > `dmplexvecsetclosuref90_' > FEMModules.F90:(.text+0xbea9): undefined reference to > `dmplexmatsetclosuref90_' > > I can see that DMPlexVecSetClosure is defined in > > /finclude/ftn-custom/petscdmplex.h90:159: > > but the name is DMPlexVecSetClosure instead of DMPlexVecSetClosureF90. > > And there is no DMPlexMatSetClosureF90 > > > What should i do ? > I was not consistent here with the naming. Since an F77 version was not possible, I did not add F90. That is probably wrong, however I would like to scrap the F77 version of Plex since everyone uses F90 now and the extra letters are annoying. Go ahead and use the function. Matt > Thanks > Reddy > -- > ----------------------------------------------------- > Dharmendar Reddy Palle > Graduate Student > Microelectronics Research center, > University of Texas at Austin, > 10100 Burnet Road, Bldg. 160 > MER 2.608F, TX 78758-4445 > e-mail: dharmareddy84 at gmail.com > Phone: +1-512-350-9082 > United States of America. > Homepage: https://webspace.utexas.edu/~dpr342 > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From dharmareddy84 at gmail.com Sat Apr 13 21:18:54 2013 From: dharmareddy84 at gmail.com (Dharmendar Reddy) Date: Sat, 13 Apr 2013 21:18:54 -0500 Subject: [petsc-users] DMPlexSetClosure In-Reply-To: References: Message-ID: I am getting bunch of erros in my code related to DMPlex If i use DMPlexVecSetClosure I get the following error. A pointer dummy argument may only be argument associated with a pointer. 
[FELM] call DMPlexVecSetClosure(dm,PETSC_NULL_OBJECT,F,cellId,Felm,ADD_VALUES,ierr) Felm is defined as : PetscScalar,allocatable :: Felm(:) I do a similar call to DMPlexMatSetClosure, i get no error. Now if i use DMPlexVecSetClosureF90, code compiles, but i see undefined reference error during link stage: FEMModules.F90:(.text+0xba77): undefined reference to `dmplexvecsetclosuref90_' FEMModules.F90:(.text+0xbea9): undefined reference to `dmplexmatsetclosure_' FEMModules.F90:(.text+0xbfe0): undefined reference to `dmplexgetdefaultsection_' FEMModules.F90:(.text+0xc048): undefined reference to `petscsectiongetconstraintdof_ Solver.F90:(.text+0xaa1): undefined reference to `dmsnessetjacobianlocal_' Solver.F90:(.text+0xabc): undefined reference to `dmsnessetfunctionlocal_' On Sat, Apr 13, 2013 at 8:22 PM, Matthew Knepley wrote: > On Sat, Apr 13, 2013 at 8:20 PM, Dharmendar Reddy > wrote: > >> Hello, >> I am getting an undefined reference error :: >> FEMModules.F90:(.text+0xba77): undefined reference to >> `dmplexvecsetclosuref90_' >> FEMModules.F90:(.text+0xbea9): undefined reference to >> `dmplexmatsetclosuref90_' >> >> I can see that DMPlexVecSetClosure is defined in >> >> /finclude/ftn-custom/petscdmplex.h90:159: >> >> but the name is DMPlexVecSetClosure instead of DMPlexVecSetClosureF90. >> >> And there is no DMPlexMatSetClosureF90 >> >> >> What should i do ? >> > > I was not consistent here with the naming. Since an F77 version was not > possible, I did not > add F90. That is probably wrong, however I would like to scrap the F77 > version of Plex since > everyone uses F90 now and the extra letters are annoying. Go ahead and use > the function. > > Matt > > >> Thanks >> Reddy >> -- >> ----------------------------------------------------- >> Dharmendar Reddy Palle >> Graduate Student >> Microelectronics Research center, >> University of Texas at Austin, >> 10100 Burnet Road, Bldg. 160 >> MER 2.608F, TX 78758-4445 >> e-mail: dharmareddy84 at gmail.com >> Phone: +1-512-350-9082 >> United States of America. >> Homepage: https://webspace.utexas.edu/~dpr342 >> > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > -- ----------------------------------------------------- Dharmendar Reddy Palle Graduate Student Microelectronics Research center, University of Texas at Austin, 10100 Burnet Road, Bldg. 160 MER 2.608F, TX 78758-4445 e-mail: dharmareddy84 at gmail.com Phone: +1-512-350-9082 United States of America. Homepage: https://webspace.utexas.edu/~dpr342 -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Sat Apr 13 21:32:30 2013 From: knepley at gmail.com (Matthew Knepley) Date: Sat, 13 Apr 2013 21:32:30 -0500 Subject: [petsc-users] DMPlexSetClosure In-Reply-To: References: Message-ID: On Sat, Apr 13, 2013 at 9:18 PM, Dharmendar Reddy wrote: > I am getting bunch of erros in my code related to DMPlex > > If i use DMPlexVecSetClosure I get the following error. > > A pointer dummy argument may only be argument associated with a pointer. > [FELM] > call > DMPlexVecSetClosure(dm,PETSC_NULL_OBJECT,F,cellId,Felm,ADD_VALUES,ierr) > > Felm is defined as : PetscScalar,allocatable :: Felm(:) > Did you look at the sample code? http://www.mcs.anl.gov/petsc/petsc-dev/src/dm/impls/plex/examples/tests/ex2f90.F.html You define pointers. 
You can see what function I have defined by looking at the header https://bitbucket.org/petsc/petsc/src/62a20339e027b37fab44424f1466054586f1dc85/include/finclude/ftn-custom/petscdmplex.h90?at=master and its clear from the file that DMPlexMatSetClosure() has not been defined in Fortran. Matt > I do a similar call to DMPlexMatSetClosure, i get no error. > > Now if i use DMPlexVecSetClosureF90, code compiles, but i see undefined > reference error during link stage: > > FEMModules.F90:(.text+0xba77): undefined reference to > `dmplexvecsetclosuref90_' > > FEMModules.F90:(.text+0xbea9): undefined reference to > `dmplexmatsetclosure_' > > FEMModules.F90:(.text+0xbfe0): undefined reference to > `dmplexgetdefaultsection_' > FEMModules.F90:(.text+0xc048): undefined reference to > `petscsectiongetconstraintdof_ > > Solver.F90:(.text+0xaa1): undefined reference to `dmsnessetjacobianlocal_' > Solver.F90:(.text+0xabc): undefined reference to `dmsnessetfunctionlocal_' > > > > > On Sat, Apr 13, 2013 at 8:22 PM, Matthew Knepley wrote: > >> On Sat, Apr 13, 2013 at 8:20 PM, Dharmendar Reddy < >> dharmareddy84 at gmail.com> wrote: >> >>> Hello, >>> I am getting an undefined reference error :: >>> FEMModules.F90:(.text+0xba77): undefined reference to >>> `dmplexvecsetclosuref90_' >>> FEMModules.F90:(.text+0xbea9): undefined reference to >>> `dmplexmatsetclosuref90_' >>> >>> I can see that DMPlexVecSetClosure is defined in >>> >>> /finclude/ftn-custom/petscdmplex.h90:159: >>> >>> but the name is DMPlexVecSetClosure instead of DMPlexVecSetClosureF90. >>> >>> And there is no DMPlexMatSetClosureF90 >>> >>> >>> What should i do ? >>> >> >> I was not consistent here with the naming. Since an F77 version was not >> possible, I did not >> add F90. That is probably wrong, however I would like to scrap the F77 >> version of Plex since >> everyone uses F90 now and the extra letters are annoying. Go ahead and >> use the function. >> >> Matt >> >> >>> Thanks >>> Reddy >>> -- >>> ----------------------------------------------------- >>> Dharmendar Reddy Palle >>> Graduate Student >>> Microelectronics Research center, >>> University of Texas at Austin, >>> 10100 Burnet Road, Bldg. 160 >>> MER 2.608F, TX 78758-4445 >>> e-mail: dharmareddy84 at gmail.com >>> Phone: +1-512-350-9082 >>> United States of America. >>> Homepage: https://webspace.utexas.edu/~dpr342 >>> >> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> > > > > -- > ----------------------------------------------------- > Dharmendar Reddy Palle > Graduate Student > Microelectronics Research center, > University of Texas at Austin, > 10100 Burnet Road, Bldg. 160 > MER 2.608F, TX 78758-4445 > e-mail: dharmareddy84 at gmail.com > Phone: +1-512-350-9082 > United States of America. > Homepage: https://webspace.utexas.edu/~dpr342 > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From dharmareddy84 at gmail.com Sat Apr 13 21:51:11 2013 From: dharmareddy84 at gmail.com (Dharmendar Reddy) Date: Sat, 13 Apr 2013 21:51:11 -0500 Subject: [petsc-users] DMPlexSetClosure In-Reply-To: References: Message-ID: Hello, Got it. I understand the reason for errors. 
I was using XXXSetF90 functions in my code so i was using allocatable arrays. I thought all set/getvlaues had corresponding F90 functions. I was trying to define and use things consistently in the code. I can fix the compile errors using pointers now. Now, can i request for Fortran interface for DMPlexMatSetClosure ? will you be adding Fortran interfaces to the functions listed below ? FEMModules.F90:(.text+0xbfe0): undefined reference to `dmplexgetdefaultsection_' FEMModules.F90:(.text+0xc048): undefined reference to `petscsectiongetconstraintdof_ Solver.F90:(.text+0xaa1): undefined reference to `dmsnessetjacobianlocal_' Solver.F90:(.text+0xabc): undefined reference to `dmsnessetfunctionlocal_' Thanks Reddy On Sat, Apr 13, 2013 at 9:32 PM, Matthew Knepley wrote: > On Sat, Apr 13, 2013 at 9:18 PM, Dharmendar Reddy > wrote: > >> I am getting bunch of erros in my code related to DMPlex >> >> If i use DMPlexVecSetClosure I get the following error. >> >> A pointer dummy argument may only be argument associated with a >> pointer. [FELM] >> call >> DMPlexVecSetClosure(dm,PETSC_NULL_OBJECT,F,cellId,Felm,ADD_VALUES,ierr) >> >> Felm is defined as : PetscScalar,allocatable :: Felm(:) >> > > Did you look at the sample code? > > > http://www.mcs.anl.gov/petsc/petsc-dev/src/dm/impls/plex/examples/tests/ex2f90.F.html > > You define pointers. You can see what function I have defined by looking > at the header > > > https://bitbucket.org/petsc/petsc/src/62a20339e027b37fab44424f1466054586f1dc85/include/finclude/ftn-custom/petscdmplex.h90?at=master > > and its clear from the file that DMPlexMatSetClosure() has not been > defined in Fortran. > > Matt > > >> I do a similar call to DMPlexMatSetClosure, i get no error. >> >> Now if i use DMPlexVecSetClosureF90, code compiles, but i see undefined >> reference error during link stage: >> >> FEMModules.F90:(.text+0xba77): undefined reference to >> `dmplexvecsetclosuref90_' >> >> FEMModules.F90:(.text+0xbea9): undefined reference to >> `dmplexmatsetclosure_' >> >> FEMModules.F90:(.text+0xbfe0): undefined reference to >> `dmplexgetdefaultsection_' >> FEMModules.F90:(.text+0xc048): undefined reference to >> `petscsectiongetconstraintdof_ >> >> Solver.F90:(.text+0xaa1): undefined reference to `dmsnessetjacobianlocal_' >> Solver.F90:(.text+0xabc): undefined reference to `dmsnessetfunctionlocal_' >> >> >> >> >> On Sat, Apr 13, 2013 at 8:22 PM, Matthew Knepley wrote: >> >>> On Sat, Apr 13, 2013 at 8:20 PM, Dharmendar Reddy < >>> dharmareddy84 at gmail.com> wrote: >>> >>>> Hello, >>>> I am getting an undefined reference error :: >>>> FEMModules.F90:(.text+0xba77): undefined reference to >>>> `dmplexvecsetclosuref90_' >>>> FEMModules.F90:(.text+0xbea9): undefined reference to >>>> `dmplexmatsetclosuref90_' >>>> >>>> I can see that DMPlexVecSetClosure is defined in >>>> >>>> /finclude/ftn-custom/petscdmplex.h90:159: >>>> >>>> but the name is DMPlexVecSetClosure instead of DMPlexVecSetClosureF90. >>>> >>>> And there is no DMPlexMatSetClosureF90 >>>> >>>> >>>> What should i do ? >>>> >>> >>> I was not consistent here with the naming. Since an F77 version was not >>> possible, I did not >>> add F90. That is probably wrong, however I would like to scrap the F77 >>> version of Plex since >>> everyone uses F90 now and the extra letters are annoying. Go ahead and >>> use the function. 
>>> >>> Matt >>> >>> >>>> Thanks >>>> Reddy >>>> -- >>>> ----------------------------------------------------- >>>> Dharmendar Reddy Palle >>>> Graduate Student >>>> Microelectronics Research center, >>>> University of Texas at Austin, >>>> 10100 Burnet Road, Bldg. 160 >>>> MER 2.608F, TX 78758-4445 >>>> e-mail: dharmareddy84 at gmail.com >>>> Phone: +1-512-350-9082 >>>> United States of America. >>>> Homepage: https://webspace.utexas.edu/~dpr342 >>>> >>> >>> >>> >>> -- >>> What most experimenters take for granted before they begin their >>> experiments is infinitely more interesting than any results to which their >>> experiments lead. >>> -- Norbert Wiener >>> >> >> >> >> -- >> ----------------------------------------------------- >> Dharmendar Reddy Palle >> Graduate Student >> Microelectronics Research center, >> University of Texas at Austin, >> 10100 Burnet Road, Bldg. 160 >> MER 2.608F, TX 78758-4445 >> e-mail: dharmareddy84 at gmail.com >> Phone: +1-512-350-9082 >> United States of America. >> Homepage: https://webspace.utexas.edu/~dpr342 >> > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > -- ----------------------------------------------------- Dharmendar Reddy Palle Graduate Student Microelectronics Research center, University of Texas at Austin, 10100 Burnet Road, Bldg. 160 MER 2.608F, TX 78758-4445 e-mail: dharmareddy84 at gmail.com Phone: +1-512-350-9082 United States of America. Homepage: https://webspace.utexas.edu/~dpr342 -------------- next part -------------- An HTML attachment was scrubbed... URL: From sonyablade2010 at hotmail.com Sun Apr 14 01:04:47 2013 From: sonyablade2010 at hotmail.com (Sonya Blade) Date: Sun, 14 Apr 2013 07:04:47 +0100 Subject: [petsc-users] How to get the eigenvectors in Slepc In-Reply-To: References: , Message-ID: Dear All, If I have all eigenvalues as real numbers is it possible mathematically that? I get the complex eigenvectors? Because nowhere in my solution I obtain the? complex eigenvalues but Slepc returns the complex for eigenvectors. Your enlightenment will be appreciated. Regards, From jroman at dsic.upv.es Sun Apr 14 09:40:22 2013 From: jroman at dsic.upv.es (Jose E. Roman) Date: Sun, 14 Apr 2013 16:40:22 +0200 Subject: [petsc-users] algorithm to get pure real eigen valule for general eigenvalue problem(Non-hermitian type) In-Reply-To: References: Message-ID: El 12/04/2013, a las 15:22, Zhang Wei escribi?: > Dear Sir: > I am using slepc for hydrodynamic instability analysis. From my understanding most of these unstable modes in hydrodynamic are large scale structure with low frequency, which means very small angle for the complex eigenvalue. Actually I tried a channel flow case, finally I got several Eigen value with imaginary part is exact 0. and from the Eigen modes, it gives "correct" results in one direction but not wave, since the wave number is 0. So I suppose there must be algorithm to control so. Does any one know what could influence so? > > > Best Regards > > Wei I don't understand what your problem is. Can you express it in terms of the algebraic eigenproblem? Are you getting wrong solutions for the eigenproblem? Did you check the residual of the eigenpairs? Jose From jroman at dsic.upv.es Sun Apr 14 09:44:13 2013 From: jroman at dsic.upv.es (Jose E. 
Roman) Date: Sun, 14 Apr 2013 16:44:13 +0200 Subject: [petsc-users] How to get the eigenvectors in Slepc In-Reply-To: References: , Message-ID: El 14/04/2013, a las 08:04, Sonya Blade escribi?: > Dear All, > > If I have all eigenvalues as real numbers is it possible mathematically that > I get the complex eigenvectors? Because nowhere in my solution I obtain the > complex eigenvalues but Slepc returns the complex for eigenvectors. > > Your enlightenment will be appreciated. > > Regards, If the eigenvector is complex then of course the eigenvalue is complex as well (I assume your matrix is real non-symmetric). You have to get both the real and imaginary parts of the eigenvalue. http://www.grycap.upv.es/slepc/documentation/current/docs/manualpages/EPS/EPSGetEigenpair.html An alternative is to do all the computation in complex arithmetic (configure --with-scalar-type=complex). Jose From sonyablade2010 at hotmail.com Sun Apr 14 10:11:56 2013 From: sonyablade2010 at hotmail.com (Sonya Blade) Date: Sun, 14 Apr 2013 16:11:56 +0100 Subject: [petsc-users] How to get the eigenvectors in Slepc In-Reply-To: References: , , Message-ID: >If the eigenvector is complex then of course the eigenvalue is complex as well >(I assume your matrix is real non-symmetric). You have to get both the real and >imaginary parts of the eigenvalue.>http://www.grycap.upv.es/slepc/documentation/current/docs/manualpages/EPS/EPSGetEigenpair.html>An alternative is to do all the computation in complex arithmetic (configure --with-scalar-type=complex).>Jose Hi Jose, Probably you misunderstood my question my problem is: ierr = EPSGetEigenvalue(eps,i,&eigen_r,&eigen_i);CHKERRQ(ierr);returns all eigenvalues for each iteration where there is no imaginary parts (and set to zeros in eigen_i as expected). I cross checked that with another algorithm which proves that all the eigenvalues are correct (no imaginary parts),but glitch is with trying to fetch the eigenvectors. Eigenvectors obtained from the code below are mostly filled with zeros and only the 11th row(Slepc returns 11 eigenvalue) has real and imaginary part which is not possible. for (j=0;j -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: main.c URL: -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: IN.txt URL: -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: mass.txt URL: From jroman at dsic.upv.es Sun Apr 14 10:40:30 2013 From: jroman at dsic.upv.es (Jose E. Roman) Date: Sun, 14 Apr 2013 17:40:30 +0200 Subject: [petsc-users] How to get the eigenvectors in Slepc In-Reply-To: References: , , Message-ID: El 14/04/2013, a las 17:11, Sonya Blade escribi?: > >If the eigenvector is complex then of course the eigenvalue is complex as well > >(I assume your matrix is real non-symmetric). You have to get both the real and >imaginary parts of the eigenvalue. > >http://www.grycap.upv.es/slepc/documentation/current/docs/manualpages/EPS/EPSGetEigenpair.html > >An alternative is to do all the computation in complex arithmetic (configure --with-scalar-type=complex). > >Jose > > Hi Jose, > > Probably you misunderstood my question my problem is: ierr = EPSGetEigenvalue(eps,i,&eigen_r,&eigen_i);CHKERRQ(ierr); > returns all eigenvalues for each iteration where there is no imaginary parts (and set to zeros in eigen_i as expected). 
> > I cross checked that with another algorithm which proves that all the eigenvalues are correct (no imaginary parts),but > glitch is with trying to fetch the eigenvectors. Eigenvectors obtained from the code below are mostly filled with zeros > and only the 11th row(Slepc returns 11 eigenvalue) has real and imaginary part which is not possible. > > > for (j=0;j { > ierr = EPSGetEigenvector(eps,j,&vec[0],&vec1[0]);CHKERRQ(ierr); > PetscPrintf(MPI_COMM_WORLD," %d || %4.8f || %4.8f \n",j,vec[j],vec1[j]); > } > > > I also provide you the main code and required files. > > Your help will be appreciated, > > Regards, > As Matt said, read the manual section on vectors. Page 43 "Basic Vector operations": "On occasion, the user needs to access the actual elements of the vector. The routine VecGetArray() returns a pointer to the elements local to the process: VecGetArray(Vec v,PetscScalar **array); When access to the array is no longer needed, the user should call VecRestoreArray(Vec v, PetscScalar **array); " Then have a look at any of the examples, for instance those listed here http://www.mcs.anl.gov/petsc/petsc-3.3/docs/manualpages/Vec/VecGetArray.html Please read the documentation. We cannot develop your code for you. Jose From sonyablade2010 at hotmail.com Sun Apr 14 10:53:11 2013 From: sonyablade2010 at hotmail.com (Sonya Blade) Date: Sun, 14 Apr 2013 16:53:11 +0100 Subject: [petsc-users] How to get the eigenvectors in Slepc In-Reply-To: <2A10E224-62D3-4F79-9A66-5C9C30307B4C@dsic.upv.es> References: , , , <2A10E224-62D3-4F79-9A66-5C9C30307B4C@dsic.upv.es> Message-ID: > As Matt said, read the manual section on vectors. Page 43 "Basic Vector operations": > > "On occasion, the user needs to access the actual elements of the vector. The routine VecGetArray() returns a pointer to the elements local to the process: > VecGetArray(Vec v,PetscScalar **array); > When access to the array is no longer needed, the user should call > VecRestoreArray(Vec v, PetscScalar **array); > Then have a look at any of the examples, for instance those listed here > http://www.mcs.anl.gov/petsc/petsc-3.3/docs/manualpages/Vec/VecGetArray.html > Please read the documentation. We cannot develop your code for you. > Jose Thanks in advance, I'm already reading those manuals Petsc,Slepc, online Petsc/Slepc documents are always open in my browser.But it seems it really take times to get used to it, which I think that this is what I'm suffering for the time being. Can you confirm that having all the eigenvalues as real and corresponding eigenvectors as zero and having imaginary parts is something weird. At least to confirm that I'm on a right path and this should be the point that I've to focus. Regards, -------------- next part -------------- An HTML attachment was scrubbed... URL: From jroman at dsic.upv.es Sun Apr 14 11:02:38 2013 From: jroman at dsic.upv.es (Jose E. Roman) Date: Sun, 14 Apr 2013 18:02:38 +0200 Subject: [petsc-users] How to get the eigenvectors in Slepc In-Reply-To: References: , , , <2A10E224-62D3-4F79-9A66-5C9C30307B4C@dsic.upv.es> Message-ID: <7FD06C08-8F27-497D-9C19-AEF57BCF2FDB@dsic.upv.es> El 14/04/2013, a las 17:53, Sonya Blade escribi?: > Thanks in advance, > > I'm already reading those manuals Petsc,Slepc, online Petsc/Slepc documents are always open in my browser. > But it seems it really take times to get used to it, which I think that this is what I'm suffering for the time being. 
> > Can you confirm that having all the eigenvalues as real and corresponding eigenvectors as zero and having > imaginary parts is something weird. At least to confirm that I'm on a right path and this should be the point > that I've to focus. > > Regards, Let me put it more clearly: you are not getting eigenvector entries, your printing statement is nonsense (you print a pointer as a floating point number), so you cannot say the imaginary part is nonzero. It is indeed zero, SLEPc gives the right solution, your program is wrong. Jose From zhang.wei at chalmers.se Sun Apr 14 12:51:42 2013 From: zhang.wei at chalmers.se (Zhang Wei) Date: Sun, 14 Apr 2013 17:51:42 +0000 Subject: [petsc-users] algorithm to get pure real eigen valule for general eigenvalue problem(Non-hermitian type) In-Reply-To: References: , Message-ID: Hi! Thanks for your reply. I am now dealing with non symmetric eigen problem, where I expect to get these complex pairs. I am looking for the largest magnitude eigen pairs. But I always get many pure real eigen pairs,which in my case make no sense. The thing is that these are not totally wrong.since it is a standard case I compared with others, these real eigen values are somehow close to the magnitude of "correct" results.for eigen vectors, the "correct" results can be express as "T*exp(kx)",where the T is chebyshev polynomial. And all real eigen vectors I got are extract chebyshev polynomial. Actually I already set the problem to NHEP,and can get some complex eigen pairs. Comparing with expected one those eigen values are larger in angle. On the other hand I set the tolerance to 1e-9. Thanks in advance! Yours Sincerely ------------------------ Wei Zhang Ph.D Hydrodynamic Group Dept. of Shipping and Marine Technology Chalmers University of Technology Sweden Phone:+46-31 772 2703 On 14 apr 2013, at 16:40, "Jose E. Roman" wrote: > > El 12/04/2013, a las 15:22, Zhang Wei escribi?: > >> Dear Sir: >> I am using slepc for hydrodynamic instability analysis. From my understanding most of these unstable modes in hydrodynamic are large scale structure with low frequency, which means very small angle for the complex eigenvalue. Actually I tried a channel flow case, finally I got several Eigen value with imaginary part is exact 0. and from the Eigen modes, it gives "correct" results in one direction but not wave, since the wave number is 0. So I suppose there must be algorithm to control so. Does any one know what could influence so? >> >> >> Best Regards >> >> Wei > > I don't understand what your problem is. Can you express it in terms of the algebraic eigenproblem? Are you getting wrong solutions for the eigenproblem? Did you check the residual of the eigenpairs? > > Jose > From jroman at dsic.upv.es Sun Apr 14 15:54:31 2013 From: jroman at dsic.upv.es (Jose E. Roman) Date: Sun, 14 Apr 2013 22:54:31 +0200 Subject: [petsc-users] algorithm to get pure real eigen valule for general eigenvalue problem(Non-hermitian type) In-Reply-To: References: , Message-ID: <7A3E1852-8C57-41B2-BB03-3423328C79C9@dsic.upv.es> El 14/04/2013, a las 19:51, Zhang Wei escribi?: > Hi! > Thanks for your reply. I am now dealing with non symmetric eigen problem, where I expect to get these complex pairs. I am looking for the largest magnitude eigen pairs. But I always get many pure real eigen pairs,which in my case make no sense. 
The thing is that these are not totally wrong.since it is a standard case I compared with others, these real eigen values are somehow close to the magnitude of "correct" results.for eigen vectors, the "correct" results can be express as "T*exp(kx)",where the T is chebyshev polynomial. And all real eigen vectors I got are extract chebyshev polynomial. Actually I already set the problem to NHEP,and can get some complex eigen pairs. Comparing with expected one those eigen values are larger in angle. On the other hand I set the tolerance to 1e-9. > > Thanks in advance! > Are you completely sure that your matrix is being formed correctly? I would suggest running a small example with -mat_view_matlab then load the matrix and try eigs(A) in Matlab to make sure you get the expected results. Jose From opensource.petsc at user.fastmail.fm Sun Apr 14 17:03:31 2013 From: opensource.petsc at user.fastmail.fm (Hugo Gagnon) Date: Sun, 14 Apr 2013 18:03:31 -0400 Subject: [petsc-users] Seq vs MPI convergence Message-ID: Hi, I have a problem that converges fine in sequential mode but diverges in MPI (other problems seem to converge fine for both modes). I am no expert in parallel solvers but, is this something I should expect? I'm using BiCGSTAB with BJACOBI ILU(3). Perhaps I'm overseeing some parameters that could improve my convergences? Thanks, -- Hugo Gagnon -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Sun Apr 14 17:23:15 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Sun, 14 Apr 2013 17:23:15 -0500 Subject: [petsc-users] Seq vs MPI convergence In-Reply-To: References: Message-ID: <87ehecpx9o.fsf@mcs.anl.gov> Hugo Gagnon writes: > Hi, > > I have a problem that converges fine in sequential mode but diverges > in MPI (other problems seem to converge fine for both modes). I am no > expert in parallel solvers but, is this something I should expect? > I'm using BiCGSTAB with BJACOBI ILU(3). Perhaps I'm overseeing some > parameters that could improve my convergences? The most common problem is that you assemble a different operator in parallel. So check that and check that you get the same answer when using redundant solves. -pc_type redundant -redundant_pc_type ilu From gpau at lbl.gov Sun Apr 14 23:19:10 2013 From: gpau at lbl.gov (George Pau) Date: Sun, 14 Apr 2013 21:19:10 -0700 Subject: [petsc-users] segmentation fault when using AOCreateBasicIS with v3.3.6 in Fortran Message-ID: Hi, I called the AOCreateBasicIS by call AOCreateBasicIS(is_global,PETSC_NULL_OBJECT,global_map,pierr) where I am trying to map the indices partitioned by the partitioner (is_global is obtained through MatPartitioning-related functions followed by ISPartitioningToNumbering) to the global petsc numbering. I wanted to use the natural numbering for the second argument. But I am getting a segmentation fault at #0 0x00000000006eae2d in aocreatebasicis_ (isapp=0x7fffffffda40, ispetsc=0x0, aoout=0x7fffffffdaa0, ierr=0x17c9644) at /home/gpau/tough_codes/esd-tough3/build/Linux-x86_64-MPI-EOS-eco2n-Debug/toughlib/tpls/petsc/petsc-3.3-p6-source/src/dm/ao/impls/basic/ftn-custom/zaobasicf.c:29 I didn't have this problem with v3.3.3 where PETSC_NULL is used instead of PETSC_NULL_OBJECT for the second argument Any help will be appreciated. 
Thanks, George -- George Pau Earth Sciences Division Lawrence Berkeley National Laboratory One Cyclotron, MS 74-120 Berkeley, CA 94720 (510) 486-7196 gpau at lbl.gov http://esd.lbl.gov/about/staff/georgepau/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Sun Apr 14 23:35:29 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Sun, 14 Apr 2013 23:35:29 -0500 Subject: [petsc-users] segmentation fault when using AOCreateBasicIS with v3.3.6 in Fortran In-Reply-To: References: Message-ID: <87wqs4o1gu.fsf@mcs.anl.gov> George Pau writes: > Hi, > > I called the AOCreateBasicIS by > > call AOCreateBasicIS(is_global,PETSC_NULL_OBJECT,global_map,pierr) > > where I am trying to map the indices partitioned by the partitioner > (is_global is obtained through MatPartitioning-related functions followed > by ISPartitioningToNumbering) to the global petsc numbering. I wanted to > use the natural numbering for the second argument. But I am getting a > segmentation fault at > > #0 0x00000000006eae2d in aocreatebasicis_ (isapp=0x7fffffffda40, > ispetsc=0x0, aoout=0x7fffffffdaa0, ierr=0x17c9644) at > /home/gpau/tough_codes/esd-tough3/build/Linux-x86_64-MPI-EOS-eco2n-Debug/toughlib/tpls/petsc/petsc-3.3-p6-source/src/dm/ao/impls/basic/ftn-custom/zaobasicf.c:29 Thanks, I've fixed this bug in 'maint'. You can get it with $ git clone --branch maint https://bitbucket.org/petsc/petsc or, if you already have a clone of PETSc, $ git checkout maint $ git pull > I didn't have this problem with v3.3.3 where PETSC_NULL is used instead of > PETSC_NULL_OBJECT for the second argument It was probably not correct in that case, but it didn't crash. > Any help will be appreciated. > > Thanks, > George > > > -- > George Pau > Earth Sciences Division > Lawrence Berkeley National Laboratory > One Cyclotron, MS 74-120 > Berkeley, CA 94720 > > (510) 486-7196 > gpau at lbl.gov > http://esd.lbl.gov/about/staff/georgepau/ From sonyablade2010 at hotmail.com Mon Apr 15 02:22:15 2013 From: sonyablade2010 at hotmail.com (Sonya Blade) Date: Mon, 15 Apr 2013 08:22:15 +0100 Subject: [petsc-users] How to get the eigenvectors in Slepc In-Reply-To: References: , , , , <2A10E224-62D3-4F79-9A66-5C9C30307B4C@dsic.upv.es>, Message-ID: >Let me put it more clearly: you are not getting eigenvector entries, your printing statement is? >nonsense (you >print a pointer as a floating point number), so you cannot say the imaginary part is? >nonzero. It is indeed >zero, SLEPc gives the right solution, your program is wrong. >Jose Sorry and thank you for clarifying that, One last question, I got the correct eigenvalues, now I got the? eigenvectors, but they differ from the exact solution.? For example, for the first eigenvalue(2405.247) I got the following eigenvector? set where it differ from the exact solution, what could be the possible reason of that? Regards, ? ? Row Exact Results ? 
SLEPC RESULTS 0 0.2255511 -0.014234 1 -5.2313502 0.330131 2 3.1352583 -0.197855 3 -4.4245184 0.279215 4 0.0898345 -0.005669 5 1.9278406 -0.121659 6 0.0033757 -0.000213 7 -0.7077308 0.044662 8 0.0687009 -0.004335 9 0.1684281 -0.010629 10 -2.81293611 0.177514 11 1.93270712 -0.121966 12 -0.00306213 0.000193 13 0.88278714 -0.055709 14 -0.70857415: 0.044715 15 0.03025516: -0.001909 16 -2.81094417: 0.177388 17 1.12005518: -0.070683 18 2.73596119: -0.172656 19 0.22734020: -0.014347 20 -4.42534221: 0.279267 21 2.22134222: -0.140181 22 -5.00448323: 0.315815 23 0.17399224: -0.01098 24 2.38934725: -0.150783 25 -3.75380226: 0.236889 26 0.09633427: -0.006079 27 0.48140228: -0.03038 28 -1.52250229: 0.09608 29 -0.00132830: 0.000084 30 1.16923331: -0.073786 31 0.09701232: -0.006122 32 0.00268833: -0.00017 33 1.59855934: -0.100879 34 -3.75642735: 0.237054 35 0.13951936: -0.008805 36 1.17316037: -0.074034 37 -1.32576838: 0.083664 38 -0.00203439: 0.000128 39 0.47943940: -0.030256 40 0.09694141: -0.006118 41 0.00071442: -0.000045 42 0.14814343: -0.009349 43 -5.23126844: 0.330126 44 2.33293345: -0.147223 45 3.01784046: -0.190445 46 -5.00416947: 0.315795 47 0.17591748: -0.011101 From jroman at dsic.upv.es Mon Apr 15 02:38:21 2013 From: jroman at dsic.upv.es (Jose E. Roman) Date: Mon, 15 Apr 2013 09:38:21 +0200 Subject: [petsc-users] How to get the eigenvectors in Slepc In-Reply-To: References: , , , , <2A10E224-62D3-4F79-9A66-5C9C30307B4C@dsic.upv.es>, Message-ID: El 15/04/2013, a las 09:22, Sonya Blade escribi?: >> Let me put it more clearly: you are not getting eigenvector entries, your printing statement is >> nonsense (you >print a pointer as a floating point number), so you cannot say the imaginary part is >> nonzero. It is indeed >zero, SLEPc gives the right solution, your program is wrong. >> Jose > > Sorry and thank you for clarifying that, > One last question, I got the correct eigenvalues, now I got the > eigenvectors, but they differ from the exact solution. > > For example, for the first eigenvalue(2405.247) I got the following eigenvector > set where it differ from the exact solution, what could be the possible reason of that? > > Regards, > > Row Exact Results SLEPC RESULTS > 0 0.2255511 -0.014234 > 1 -5.2313502 0.330131 > 2 3.1352583 -0.197855 > 3 -4.4245184 0.279215 > 4 0.0898345 -0.005669 > 5 1.9278406 -0.121659 > 6 0.0033757 -0.000213 > 7 -0.7077308 0.044662 > 8 0.0687009 -0.004335 > 9 0.1684281 -0.010629 > 10 -2.81293611 0.177514 > 11 1.93270712 -0.121966 > 12 -0.00306213 0.000193 > 13 0.88278714 -0.055709 > 14 -0.70857415: 0.044715 > 15 0.03025516: -0.001909 > 16 -2.81094417: 0.177388 > 17 1.12005518: -0.070683 > 18 2.73596119: -0.172656 > 19 0.22734020: -0.014347 > 20 -4.42534221: 0.279267 > 21 2.22134222: -0.140181 > 22 -5.00448323: 0.315815 > 23 0.17399224: -0.01098 > 24 2.38934725: -0.150783 > 25 -3.75380226: 0.236889 > 26 0.09633427: -0.006079 > 27 0.48140228: -0.03038 > 28 -1.52250229: 0.09608 > 29 -0.00132830: 0.000084 > 30 1.16923331: -0.073786 > 31 0.09701232: -0.006122 > 32 0.00268833: -0.00017 > 33 1.59855934: -0.100879 > 34 -3.75642735: 0.237054 > 35 0.13951936: -0.008805 > 36 1.17316037: -0.074034 > 37 -1.32576838: 0.083664 > 38 -0.00203439: 0.000128 > 39 0.47943940: -0.030256 > 40 0.09694141: -0.006118 > 41 0.00071442: -0.000045 > 42 0.14814343: -0.009349 > 43 -5.23126844: 0.330126 > 44 2.33293345: -0.147223 > 45 3.01784046: -0.190445 > 46 -5.00416947: 0.315795 > 47 0.17591748: -0.011101 Eigenvectors are not unique. 
If you normalize your "exact" solution you will see that it coincides with SLEPc's answer. If you don't know what an eigenvector is, you will have a hard time using SLEPc. Jose From sonyablade2010 at hotmail.com Mon Apr 15 04:02:17 2013 From: sonyablade2010 at hotmail.com (Sonya Blade) Date: Mon, 15 Apr 2013 10:02:17 +0100 Subject: [petsc-users] How to get the eigenvectors in Slepc In-Reply-To: References: , , , , <2A10E224-62D3-4F79-9A66-5C9C30307B4C@dsic.upv.es>, , Message-ID: Even If I have normalization at the exact solution this doesn't explain how the sign of it has changed, as a result square roots of sum of squares? can never have negative value (vector length).? Basically what I need to do, to get the Slepc results matching with the exact ones.? Do I need to call any function before Slepc normalizes them or something else? Regards, From zhang.wei at chalmers.se Mon Apr 15 04:28:47 2013 From: zhang.wei at chalmers.se (Zhang Wei) Date: Mon, 15 Apr 2013 09:28:47 +0000 Subject: [petsc-users] algorithm to get pure real eigen valule for general eigenvalue problem(Non-hermitian type) In-Reply-To: <7A3E1852-8C57-41B2-BB03-3423328C79C9@dsic.upv.es> References: , , <7A3E1852-8C57-41B2-BB03-3423328C79C9@dsic.upv.es> Message-ID: <85C55751-4131-4365-BBF0-5351D39D2E0B@chalmers.se> Hi Sorry I forgot to say it is matrix free case. So if I can export matrix like that? Or any another way to do so? Yours Sincerely ------------------------ Wei Zhang Ph.D Hydrodynamic Group Dept. of Shipping and Marine Technology Chalmers University of Technology Sweden Phone:+46-31 772 2703 On 14 apr 2013, at 22:54, "Jose E. Roman" wrote: > > El 14/04/2013, a las 19:51, Zhang Wei escribi?: > >> Hi! >> Thanks for your reply. I am now dealing with non symmetric eigen problem, where I expect to get these complex pairs. I am looking for the largest magnitude eigen pairs. But I always get many pure real eigen pairs,which in my case make no sense. The thing is that these are not totally wrong.since it is a standard case I compared with others, these real eigen values are somehow close to the magnitude of "correct" results.for eigen vectors, the "correct" results can be express as "T*exp(kx)",where the T is chebyshev polynomial. And all real eigen vectors I got are extract chebyshev polynomial. Actually I already set the problem to NHEP,and can get some complex eigen pairs. Comparing with expected one those eigen values are larger in angle. On the other hand I set the tolerance to 1e-9. >> >> Thanks in advance! >> > > Are you completely sure that your matrix is being formed correctly? I would suggest running a small example with -mat_view_matlab then load the matrix and try eigs(A) in Matlab to make sure you get the expected results. > > Jose > From jroman at dsic.upv.es Mon Apr 15 04:44:17 2013 From: jroman at dsic.upv.es (Jose E. Roman) Date: Mon, 15 Apr 2013 11:44:17 +0200 Subject: [petsc-users] algorithm to get pure real eigen valule for general eigenvalue problem(Non-hermitian type) In-Reply-To: <85C55751-4131-4365-BBF0-5351D39D2E0B@chalmers.se> References: , , <7A3E1852-8C57-41B2-BB03-3423328C79C9@dsic.upv.es> <85C55751-4131-4365-BBF0-5351D39D2E0B@chalmers.se> Message-ID: El 15/04/2013, a las 11:28, Zhang Wei escribi?: > Hi > Sorry I forgot to say it is matrix free case. So if I can export matrix like that? Or any another way to do so? > > Yours Sincerely > ------------------------ > Wei Zhang > Ph.D > Hydrodynamic Group > Dept. 
of Shipping and Marine Technology > Chalmers University of Technology > Sweden > Phone:+46-31 772 2703 You could run the SLEPc program with "-eps_type lapack", then a dense matrix will be created using matrix-vector products (you will see it if you set the -mat_view_matlab flag). This is very rudimentary and only viable for very small dimension. A better alternative would be to call MatComputeExplicitOperator() from within your code, use a binary viewer to save the resulting matrix and load it in Matlab with PetscBinaryRead.m. Jose From rupp at mcs.anl.gov Mon Apr 15 06:56:14 2013 From: rupp at mcs.anl.gov (Karl Rupp) Date: Mon, 15 Apr 2013 06:56:14 -0500 Subject: [petsc-users] How to get the eigenvectors in Slepc In-Reply-To: References: , , , , <2A10E224-62D3-4F79-9A66-5C9C30307B4C@dsic.upv.es>, , Message-ID: <516BEADE.3070004@mcs.anl.gov> Hi Sonya, I really suggest you familiarize yourself with eigenvalues and eigenvectors. For each eigenvector v, alpha*v is also an eigenvector to the same eigenvalue, where alpha can be any nonzero scalar. If the sign doesn't fit your expectations, just multiply it with -1. :-) Best regards, Karli On 04/15/2013 04:02 AM, Sonya Blade wrote: > Even If I have normalization at the exact solution this doesn't explain > how the sign of it has changed, as a result square roots of sum of squares > can never have negative value (vector length). > > Basically what I need to do, to get the Slepc results matching with the exact ones. > Do I need to call any function before Slepc normalizes them or something else? > > Regards, > From gokhalen at gmail.com Mon Apr 15 08:08:32 2013 From: gokhalen at gmail.com (Nachiket Gokhale) Date: Mon, 15 Apr 2013 09:08:32 -0400 Subject: [petsc-users] Subject: Re: How to get the eigenvectors in Slepc Message-ID: I checked the ratio of your real and imaginary part and it appears to be about -15. If your eigenvalues are real and x_r is the real eigenvector then x_r + (i * c x_r) is also an eigenvector corresponding to the same real eigenvalue. You may just discard the imaginary part. A(x_r + (i *c*x_r) ) = w(x_r + ic_r) -Nachiket -------------- next part -------------- An HTML attachment was scrubbed... URL: From mike.hui.zhang at hotmail.com Mon Apr 15 09:39:56 2013 From: mike.hui.zhang at hotmail.com (Hui Zhang) Date: Mon, 15 Apr 2013 16:39:56 +0200 Subject: [petsc-users] AOCreateBasic & AOCreateBasicIS Message-ID: Is the following correct? From my understanding before, AOCreateBasic(MPI_Comm comm,PetscInt napp,const PetscInt myapp[],const PetscInt mypetsc[],AO *aoout) should be called by all the processors in 'comm' and each involved processor provides only part of all the indices. For example, 'comm' contains two processors proc0, on proc0: napp= 3, myapp[]= {0,1,2}, mypetsc[]= {4,3,2} on proc1: napp= 2, myapp[]= {3,4}, mypetsc[]= {0,1} so that union of myapp[] of proc0 and proc1 gives 0,..,4, and union of myapp[] of proc0 and proc1 also gives 0,..,4. This seemed to work even if I fed AOApplicationToPetsc with indices on proc0 larger than 2. But now I suspect I was wrong after reading http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/AO/AOCreateBasic.html The question is whether AOCreateBasic will do union of the input indices automatically, or the AO is only for the input indices? 
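For reference, a minimal C sketch of the two-process example above, using only AOCreateBasic, AOApplicationToPetsc and AODestroy. Here rank is assumed to come from MPI_Comm_rank on PETSC_COMM_WORLD, and error checking is omitted:

  PetscInt app0[]  = {0,1,2}, petsc0[] = {4,3,2};   /* proc 0 */
  PetscInt app1[]  = {3,4},   petsc1[] = {0,1};     /* proc 1 */
  PetscInt napp    = rank ? 2 : 3;
  PetscInt idx[3]  = {0,3,4};   /* includes indices supplied by proc 1 */
  AO       ao;

  AOCreateBasic(PETSC_COMM_WORLD, napp,
                rank ? app1   : app0,
                rank ? petsc1 : petsc0, &ao);

  /* translate application indices to the PETSc ordering, in place */
  AOApplicationToPetsc(ao, 3, idx);   /* idx becomes {4,0,1} */

  AODestroy(&ao);

Whether proc 0 may translate indices 3 and 4, which were supplied by proc 1, is exactly the question above; the answer is in the reply that follows.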
From zhang.wei at chalmers.se Mon Apr 15 09:50:25 2013 From: zhang.wei at chalmers.se (Zhang Wei) Date: Mon, 15 Apr 2013 14:50:25 +0000 Subject: [petsc-users] algorithm to get pure real eigen valule for general eigenvalue problem(Non-hermitian type) In-Reply-To: References: , , <7A3E1852-8C57-41B2-BB03-3423328C79C9@dsic.upv.es> <85C55751-4131-4365-BBF0-5351D39D2E0B@chalmers.se> Message-ID: Hi Since the case is not that big(size of matrix is 9600x9600. ). I have done test in the first way, which gives me a 2.1 Gb ASCII file. The eigen values I get from matlab are : > In eigs>processEUPDinfo at 1340 In eigs at 357 >> c c = 0 0 0 0 1.9405 - 0.4733i 1.9405 + 0.4733i And what I got from slepc are : 0 1.99988 1 1.99974 2 1.99971 3 1.99913+0.0370552j 4 1.99913+-0.0370552j 5 1.99894+0.0370115j Non of them is right one. And on the other hand there are lots of small values in the matrix which are almost in order of 1e-7 and even smaller. Could it be the reason resulting such problem? -----Original Message----- From: petsc-users-bounces at mcs.anl.gov [mailto:petsc-users-bounces at mcs.anl.gov] On Behalf Of Jose E. Roman Sent: den 15 april 2013 11:44 To: PETSc users list Subject: Re: [petsc-users] algorithm to get pure real eigen valule for general eigenvalue problem(Non-hermitian type) El 15/04/2013, a las 11:28, Zhang Wei escribi?: > Hi > Sorry I forgot to say it is matrix free case. So if I can export matrix like that? Or any another way to do so? > > Yours Sincerely > ------------------------ > Wei Zhang > Ph.D > Hydrodynamic Group > Dept. of Shipping and Marine Technology Chalmers University of > Technology Sweden > Phone:+46-31 772 2703 You could run the SLEPc program with "-eps_type lapack", then a dense matrix will be created using matrix-vector products (you will see it if you set the -mat_view_matlab flag). This is very rudimentary and only viable for very small dimension. A better alternative would be to call MatComputeExplicitOperator() from within your code, use a binary viewer to save the resulting matrix and load it in Matlab with PetscBinaryRead.m. Jose From jedbrown at mcs.anl.gov Mon Apr 15 09:53:19 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Mon, 15 Apr 2013 09:53:19 -0500 Subject: [petsc-users] AOCreateBasic & AOCreateBasicIS In-Reply-To: References: Message-ID: <87obdfonfk.fsf@mcs.anl.gov> Hui Zhang writes: > Is the following correct? > > From my understanding before, > > AOCreateBasic(MPI_Comm comm,PetscInt napp,const PetscInt myapp[],const PetscInt mypetsc[],AO *aoout) > > should be called by all the processors in 'comm' and each involved processor provides only part of all the indices. > For example, 'comm' contains two processors proc0, > > on proc0: napp= 3, myapp[]= {0,1,2}, mypetsc[]= {4,3,2} > > on proc1: napp= 2, myapp[]= {3,4}, mypetsc[]= {0,1} > > so that union of myapp[] of proc0 and proc1 gives 0,..,4, and union of myapp[] of proc0 and proc1 also gives 0,..,4. > > This seemed to work even if I fed AOApplicationToPetsc with indices on proc0 larger than 2. > But now I suspect I was wrong after reading > > http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/AO/AOCreateBasic.html > > The question is whether AOCreateBasic will do union of the input indices automatically, or the AO is only for the input indices? AOCreateBasic does an allgather on the indices so it can translate any indices. 
This is not memory scalable and we don't recommend using it unless you know your problem sizes are not very large, but some people want to translate arbitrary indices non-collectively. From jroman at dsic.upv.es Mon Apr 15 10:00:54 2013 From: jroman at dsic.upv.es (Jose E. Roman) Date: Mon, 15 Apr 2013 17:00:54 +0200 Subject: [petsc-users] algorithm to get pure real eigen valule for general eigenvalue problem(Non-hermitian type) In-Reply-To: References: , , <7A3E1852-8C57-41B2-BB03-3423328C79C9@dsic.upv.es> <85C55751-4131-4365-BBF0-5351D39D2E0B@chalmers.se> Message-ID: <1ECBC813-436E-45B8-B1ED-5CA9A83A4E6D@dsic.upv.es> El 15/04/2013, a las 16:50, Zhang Wei escribi?: > Hi > Since the case is not that big(size of matrix is 9600x9600. ). I have done test in the first way, which gives me a 2.1 Gb ASCII file. The eigen values I get from matlab are : >> In eigs>processEUPDinfo at 1340 > In eigs at 357 >>> c > > c = > > 0 > 0 > 0 > 0 > 1.9405 - 0.4733i > 1.9405 + 0.4733i > > And what I got from slepc are : > > 0 1.99988 > 1 1.99974 > 2 1.99971 > 3 1.99913+0.0370552j > 4 1.99913+-0.0370552j > 5 1.99894+0.0370115j > > Non of them is right one. And on the other hand there are lots of small values in the matrix which are almost in order of 1e-7 and even smaller. > Could it be the reason resulting such problem? Note that if you are trasferring the matrix via a text file, then the matrix loaded in Matlab will differ slightly form the one being handled by SLEPc, so you can expect differences in eigenvalues. For a more equivalent computation you should transfer the matrix in binary format. Anyway, my impression is that you have a bug in the code that generates the matrix, so I would suggest checking that. Having small nonzero entries in the matrix should not be a problem for the eigensolver. Jose From mike.hui.zhang at hotmail.com Mon Apr 15 10:04:58 2013 From: mike.hui.zhang at hotmail.com (Hui Zhang) Date: Mon, 15 Apr 2013 17:04:58 +0200 Subject: [petsc-users] AOCreateBasic & AOCreateBasicIS In-Reply-To: <87obdfonfk.fsf@mcs.anl.gov> References: <87obdfonfk.fsf@mcs.anl.gov> Message-ID: On Apr 15, 2013, at 4:53 PM, Jed Brown wrote: > Hui Zhang writes: > >> Is the following correct? >> >> From my understanding before, >> >> AOCreateBasic(MPI_Comm comm,PetscInt napp,const PetscInt myapp[],const PetscInt mypetsc[],AO *aoout) >> >> should be called by all the processors in 'comm' and each involved processor provides only part of all the indices. >> For example, 'comm' contains two processors proc0, >> >> on proc0: napp= 3, myapp[]= {0,1,2}, mypetsc[]= {4,3,2} >> >> on proc1: napp= 2, myapp[]= {3,4}, mypetsc[]= {0,1} >> >> so that union of myapp[] of proc0 and proc1 gives 0,..,4, and union of myapp[] of proc0 and proc1 also gives 0,..,4. >> >> This seemed to work even if I fed AOApplicationToPetsc with indices on proc0 larger than 2. >> But now I suspect I was wrong after reading >> >> http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/AO/AOCreateBasic.html >> >> The question is whether AOCreateBasic will do union of the input indices automatically, or the AO is only for the input indices? > > AOCreateBasic does an allgather on the indices so it can translate any > indices. This is not memory scalable and we don't recommend using it > unless you know your problem sizes are not very large, but some people > want to translate arbitrary indices non-collectively. Thanks! 
Is the problem you mentioned serious when the indices to be translated on each processor include only a few ones beyond the input myapp[] of AOCreateBasic? Because I only use AO for FEM assembly so I would not translate too many beyond local ranges. Another question: do the inputs IS's to AOCreateBasicIS include all the indices, and AOCreateBasicIS would not do any gathering of indices? From knepley at gmail.com Mon Apr 15 10:11:45 2013 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 15 Apr 2013 10:11:45 -0500 Subject: [petsc-users] AOCreateBasic & AOCreateBasicIS In-Reply-To: References: <87obdfonfk.fsf@mcs.anl.gov> Message-ID: On Mon, Apr 15, 2013 at 10:04 AM, Hui Zhang wrote: > > On Apr 15, 2013, at 4:53 PM, Jed Brown wrote: > > > Hui Zhang writes: > > > >> Is the following correct? > >> > >> From my understanding before, > >> > >> AOCreateBasic(MPI_Comm comm,PetscInt napp,const PetscInt myapp[],const > PetscInt mypetsc[],AO *aoout) > >> > >> should be called by all the processors in 'comm' and each involved > processor provides only part of all the indices. > >> For example, 'comm' contains two processors proc0, > >> > >> on proc0: napp= 3, myapp[]= {0,1,2}, mypetsc[]= {4,3,2} > >> > >> on proc1: napp= 2, myapp[]= {3,4}, mypetsc[]= {0,1} > >> > >> so that union of myapp[] of proc0 and proc1 gives 0,..,4, and union of > myapp[] of proc0 and proc1 also gives 0,..,4. > >> > >> This seemed to work even if I fed AOApplicationToPetsc with indices on > proc0 larger than 2. > >> But now I suspect I was wrong after reading > >> > >> > http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/AO/AOCreateBasic.html > >> > >> The question is whether AOCreateBasic will do union of the input > indices automatically, or the AO is only for the input indices? > > > > AOCreateBasic does an allgather on the indices so it can translate any > > indices. This is not memory scalable and we don't recommend using it > > unless you know your problem sizes are not very large, but some people > > want to translate arbitrary indices non-collectively. > > Thanks! Is the problem you mentioned serious when the indices to be > translated on each processor include only a few ones beyond the input > myapp[] of AOCreateBasic? > Because I only use AO for FEM assembly so I would not translate too many > beyond local ranges. > If your problem gets large, this will be a problem. > Another question: do the inputs IS's to AOCreateBasicIS include all the > indices, and AOCreateBasicIS would not do any gathering of indices? No, its the same thing. Matt -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Mon Apr 15 10:14:55 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Mon, 15 Apr 2013 10:14:55 -0500 Subject: [petsc-users] AOCreateBasic & AOCreateBasicIS In-Reply-To: References: <87obdfonfk.fsf@mcs.anl.gov> Message-ID: <87ehebomfk.fsf@mcs.anl.gov> Hui Zhang writes: > Thanks! Is the problem you mentioned serious when the indices to be > translated on each processor include only a few ones beyond the input > myapp[] of AOCreateBasic? Because I only use AO for FEM assembly so I > would not translate too many beyond local ranges. Don't use AO for FEM assembly. Use a local-to-global mapping and MatSetValuesLocal(). That is memory scalable and simpler code. 
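A minimal sketch of that pattern for assembly; nghost, ghosted_to_global, nedof, elem_rows and elem_mat are placeholders for the application's data, and the ISLocalToGlobalMappingCreate call uses the petsc-3.3/3.4 signature (later releases also take a block-size argument):

  ISLocalToGlobalMapping l2g;

  /* ghosted_to_global[i] = global (PETSc) index of local dof i,
     including ghost dofs owned by neighbouring processes */
  ISLocalToGlobalMappingCreate(PETSC_COMM_WORLD, nghost, ghosted_to_global,
                               PETSC_COPY_VALUES, &l2g);
  MatSetLocalToGlobalMapping(A, l2g, l2g);

  /* element loop: rows/columns are given in LOCAL numbering and the
     mapping translates them, so no AO lookup is needed per element */
  MatSetValuesLocal(A, nedof, elem_rows, nedof, elem_rows,
                    elem_mat, ADD_VALUES);

  MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);
  MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);
  ISLocalToGlobalMappingDestroy(&l2g);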
> Another question: do the inputs IS's to AOCreateBasicIS include all > the indices, and AOCreateBasicIS would not do any gathering of > indices? You could create an AO on PETSC_COMM_SELF, but I don't think you should use AO. From mike.hui.zhang at hotmail.com Mon Apr 15 10:22:21 2013 From: mike.hui.zhang at hotmail.com (Hui Zhang) Date: Mon, 15 Apr 2013 17:22:21 +0200 Subject: [petsc-users] AOCreateBasic & AOCreateBasicIS In-Reply-To: References: <87obdfonfk.fsf@mcs.anl.gov> Message-ID: On Apr 15, 2013, at 5:11 PM, Matthew Knepley wrote: > On Mon, Apr 15, 2013 at 10:04 AM, Hui Zhang wrote: > > On Apr 15, 2013, at 4:53 PM, Jed Brown wrote: > > > Hui Zhang writes: > > > >> Is the following correct? > >> > >> From my understanding before, > >> > >> AOCreateBasic(MPI_Comm comm,PetscInt napp,const PetscInt myapp[],const PetscInt mypetsc[],AO *aoout) > >> > >> should be called by all the processors in 'comm' and each involved processor provides only part of all the indices. > >> For example, 'comm' contains two processors proc0, > >> > >> on proc0: napp= 3, myapp[]= {0,1,2}, mypetsc[]= {4,3,2} > >> > >> on proc1: napp= 2, myapp[]= {3,4}, mypetsc[]= {0,1} > >> > >> so that union of myapp[] of proc0 and proc1 gives 0,..,4, and union of myapp[] of proc0 and proc1 also gives 0,..,4. > >> > >> This seemed to work even if I fed AOApplicationToPetsc with indices on proc0 larger than 2. > >> But now I suspect I was wrong after reading > >> > >> http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/AO/AOCreateBasic.html > >> > >> The question is whether AOCreateBasic will do union of the input indices automatically, or the AO is only for the input indices? > > > > AOCreateBasic does an allgather on the indices so it can translate any > > indices. This is not memory scalable and we don't recommend using it > > unless you know your problem sizes are not very large, but some people > > want to translate arbitrary indices non-collectively. > > Thanks! Is the problem you mentioned serious when the indices to be translated on each processor include only a few ones beyond the input myapp[] of AOCreateBasic? > Because I only use AO for FEM assembly so I would not translate too many beyond local ranges. > > If your problem gets large, this will be a problem. Thanks! I see. I can make another local AO based on the global AO. For assembly, as I learned from you, I use ISLocalToGlobalMapping. > Another question: do the inputs IS's to AOCreateBasicIS include all the indices, and AOCreateBasicIS would not do any gathering of indices? > > No, its the same thing. Maybe my question was unclear. From the manual http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/AO/AOCreateBasicIS.html#AOCreateBasicIS AOCreateBasicIS(IS isapp,IS ispetsc,AO *aoout) is collective on IS so the parallel IS's must already conceptually contain all the indices from all the processors in the 'comm' of IS. Is this also what you meant? > > Matt > > -- > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. 
> -- Norbert Wiener From zhang.wei at chalmers.se Mon Apr 15 10:25:51 2013 From: zhang.wei at chalmers.se (Zhang Wei) Date: Mon, 15 Apr 2013 15:25:51 +0000 Subject: [petsc-users] algorithm to get pure real eigen valule for general eigenvalue problem(Non-hermitian type) In-Reply-To: <1ECBC813-436E-45B8-B1ED-5CA9A83A4E6D@dsic.upv.es> References: , , <7A3E1852-8C57-41B2-BB03-3423328C79C9@dsic.upv.es> <85C55751-4131-4365-BBF0-5351D39D2E0B@chalmers.se> <1ECBC813-436E-45B8-B1ED-5CA9A83A4E6D@dsic.upv.es> Message-ID: Hi, thanks for your reply! Here is my code: // Compute the operator matrix that defines the eigensystem, Ax=kx ierr = MatCreateShell(PETSC_COMM_WORLD,n,n,PETSC_DETERMINE,PETSC_DETERMINE,NULL,&A);CHKERRQ(ierr); ierr = MatSetFromOptions(A);CHKERRQ(ierr); ierr = MatShellSetContext(A, &pa); CHKERRQ(ierr); ierr = MatShellSetOperation(A, MATOP_MULT, (void(*)()) MatVecMult); CHKERRQ(ierr); ierr = MatGetVecs(A, PETSC_NULL,&xr); CHKERRQ(ierr); ierr = MatGetVecs(A, PETSC_NULL,&xi); CHKERRQ(ierr); ierr = MatGetVecs(A, PETSC_NULL,&iv); CHKERRQ(ierr); // Create the eigensolver and set various options ierr = EPSCreate(PETSC_COMM_WORLD,&eps);CHKERRQ(ierr); // Set operators. In this case, it is a standard eigenvalue problem ierr = EPSSetOperators(eps, A, PETSC_NULL);CHKERRQ(ierr); ierr = EPSSetProblemType(eps, EPS_NHEP);CHKERRQ(ierr); ierr = EPSSetDimensions(eps, nev, ncv, mpd);CHKERRQ(ierr); ierr = EPSSetWhichEigenpairs(eps,which);CHKERRQ(ierr); ierr = EPSSetType(eps,method.c_str());CHKERRQ(ierr); ierr = EPSSetTolerances(eps,s_tol,s_maxit);CHKERRQ(ierr); ierr = EPSSetConvergenceTest(eps,cov);CHKERRQ(ierr); ierr = EPSKrylovSchurSetRestart(eps,keep);CHKERRQ(ierr); // Set solver parameters at runtime ierr = EPSSetFromOptions(eps); CHKERRQ(ierr); // Set initial vector ierr = pa.setInitialVector(iv); ierr = EPSSetInitialSpace(eps,1,&iv);CHKERRQ(ierr); ierr = EPSView(eps, PETSC_VIEWER_STDOUT_WORLD); CHKERRQ(ierr); // Solve the eigensystem ierr = EPSSolve(eps);CHKERRQ(ierr); and pa is a class, where I define the OP, MatVecMult. It actually looks like this: PetscErrorCode MatVecMult(Mat A, Vec x, Vec y) { linearizedTimeStepper *pa; MatShellGetContext(A, &pa); return pa->MatVecMult(A, x, y); } -----Original Message----- From: petsc-users-bounces at mcs.anl.gov [mailto:petsc-users-bounces at mcs.anl.gov] On Behalf Of Jose E. Roman Sent: den 15 april 2013 17:01 To: PETSc users list Subject: Re: [petsc-users] algorithm to get pure real eigen valule for general eigenvalue problem(Non-hermitian type) El 15/04/2013, a las 16:50, Zhang Wei escribi?: > Hi > Since the case is not that big(size of matrix is 9600x9600. ). I have done test in the first way, which gives me a 2.1 Gb ASCII file. The eigen values I get from matlab are : >> In eigs>processEUPDinfo at 1340 > In eigs at 357 >>> c > > c = > > 0 > 0 > 0 > 0 > 1.9405 - 0.4733i > 1.9405 + 0.4733i > > And what I got from slepc are : > > 0 1.99988 > 1 1.99974 > 2 1.99971 > 3 1.99913+0.0370552j > 4 1.99913+-0.0370552j > 5 1.99894+0.0370115j > > Non of them is right one. And on the other hand there are lots of small values in the matrix which are almost in order of 1e-7 and even smaller. > Could it be the reason resulting such problem? Note that if you are trasferring the matrix via a text file, then the matrix loaded in Matlab will differ slightly form the one being handled by SLEPc, so you can expect differences in eigenvalues. For a more equivalent computation you should transfer the matrix in binary format. 
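As a rough sketch of that binary transfer for the shell matrix A above (only viable because the matrix is small; the file name A.bin is arbitrary):

  Mat         B;
  PetscViewer viewer;

  /* assemble an explicit copy of the shell operator, built column by
     column from matrix-vector products */
  ierr = MatComputeExplicitOperator(A,&B);CHKERRQ(ierr);

  /* write it in PETSc binary format */
  ierr = PetscViewerBinaryOpen(PETSC_COMM_WORLD,"A.bin",FILE_MODE_WRITE,&viewer);CHKERRQ(ierr);
  ierr = MatView(B,viewer);CHKERRQ(ierr);
  ierr = PetscViewerDestroy(&viewer);CHKERRQ(ierr);
  ierr = MatDestroy(&B);CHKERRQ(ierr);

The file can then be loaded in Matlab with PetscBinaryRead('A.bin') and checked against eigs().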
Anyway, my impression is that you have a bug in the code that generates the matrix, so I would suggest checking that. Having small nonzero entries in the matrix should not be a problem for the eigensolver. Jose From jedbrown at mcs.anl.gov Mon Apr 15 10:26:02 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Mon, 15 Apr 2013 10:26:02 -0500 Subject: [petsc-users] AOCreateBasic & AOCreateBasicIS In-Reply-To: References: <87obdfonfk.fsf@mcs.anl.gov> Message-ID: <8738urolx1.fsf@mcs.anl.gov> Hui Zhang writes: > Maybe my question was unclear. From the manual > > http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/AO/AOCreateBasicIS.html#AOCreateBasicIS > > AOCreateBasicIS(IS isapp,IS ispetsc,AO *aoout) > > is collective on IS so the parallel IS's must already conceptually > contain all the indices from all the processors in the 'comm' of IS. > Is this also what you meant? No, "collective" means that all processes in the communicator must call the function together. Usually (not always, documentation should explain), PETSc makes collective interfaces memory scalable so that each process provides only its local part. From mike.hui.zhang at hotmail.com Mon Apr 15 10:31:54 2013 From: mike.hui.zhang at hotmail.com (Hui Zhang) Date: Mon, 15 Apr 2013 17:31:54 +0200 Subject: [petsc-users] AOCreateBasic & AOCreateBasicIS In-Reply-To: <8738urolx1.fsf@mcs.anl.gov> References: <87obdfonfk.fsf@mcs.anl.gov> <8738urolx1.fsf@mcs.anl.gov> Message-ID: On Apr 15, 2013, at 5:26 PM, Jed Brown wrote: > Hui Zhang writes: > >> Maybe my question was unclear. From the manual >> >> http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/AO/AOCreateBasicIS.html#AOCreateBasicIS >> >> AOCreateBasicIS(IS isapp,IS ispetsc,AO *aoout) >> >> is collective on IS so the parallel IS's must already conceptually >> contain all the indices from all the processors in the 'comm' of IS. >> Is this also what you meant? > > No, "collective" means that all processes in the communicator must call > the function together. Usually (not always, documentation should > explain), PETSc makes collective interfaces memory scalable so that each > process provides only its local part. Ok, I see. Thanks very much! From jroman at dsic.upv.es Mon Apr 15 10:34:37 2013 From: jroman at dsic.upv.es (Jose E. Roman) Date: Mon, 15 Apr 2013 17:34:37 +0200 Subject: [petsc-users] algorithm to get pure real eigen valule for general eigenvalue problem(Non-hermitian type) In-Reply-To: References: , , <7A3E1852-8C57-41B2-BB03-3423328C79C9@dsic.upv.es> <85C55751-4131-4365-BBF0-5351D39D2E0B@chalmers.se> <1ECBC813-436E-45B8-B1ED-5CA9A83A4E6D@dsic.upv.es> Message-ID: El 15/04/2013, a las 17:25, Zhang Wei escribi?: > Hi, thanks for your reply! > Here is my code: > // Compute the operator matrix that defines the eigensystem, Ax=kx > ierr = MatCreateShell(PETSC_COMM_WORLD,n,n,PETSC_DETERMINE,PETSC_DETERMINE,NULL,&A);CHKERRQ(ierr); > ierr = MatSetFromOptions(A);CHKERRQ(ierr); > > ierr = MatShellSetContext(A, &pa); CHKERRQ(ierr); > ierr = MatShellSetOperation(A, MATOP_MULT, (void(*)()) MatVecMult); CHKERRQ(ierr); > > ierr = MatGetVecs(A, PETSC_NULL,&xr); CHKERRQ(ierr); > ierr = MatGetVecs(A, PETSC_NULL,&xi); CHKERRQ(ierr); > ierr = MatGetVecs(A, PETSC_NULL,&iv); CHKERRQ(ierr); > > // Create the eigensolver and set various options > ierr = EPSCreate(PETSC_COMM_WORLD,&eps);CHKERRQ(ierr); > > // Set operators. 
In this case, it is a standard eigenvalue problem > ierr = EPSSetOperators(eps, A, PETSC_NULL);CHKERRQ(ierr); > ierr = EPSSetProblemType(eps, EPS_NHEP);CHKERRQ(ierr); > ierr = EPSSetDimensions(eps, nev, ncv, mpd);CHKERRQ(ierr); > ierr = EPSSetWhichEigenpairs(eps,which);CHKERRQ(ierr); > ierr = EPSSetType(eps,method.c_str());CHKERRQ(ierr); > ierr = EPSSetTolerances(eps,s_tol,s_maxit);CHKERRQ(ierr); > ierr = EPSSetConvergenceTest(eps,cov);CHKERRQ(ierr); > ierr = EPSKrylovSchurSetRestart(eps,keep);CHKERRQ(ierr); > // Set solver parameters at runtime > ierr = EPSSetFromOptions(eps); CHKERRQ(ierr); > > // Set initial vector > ierr = pa.setInitialVector(iv); > ierr = EPSSetInitialSpace(eps,1,&iv);CHKERRQ(ierr); > > ierr = EPSView(eps, PETSC_VIEWER_STDOUT_WORLD); CHKERRQ(ierr); > > // Solve the eigensystem > ierr = EPSSolve(eps);CHKERRQ(ierr); > > and pa is a class, where I define the OP, MatVecMult. It actually looks like this: > > PetscErrorCode MatVecMult(Mat A, Vec x, Vec y) > { > linearizedTimeStepper *pa; > MatShellGetContext(A, &pa); > return pa->MatVecMult(A, x, y); > } This code is basically correct. What I was asking is whether you had tested the matrix (that is, pa->MatVecMult) in other contexts before computing eigenvalues. Jose From mike.hui.zhang at hotmail.com Mon Apr 15 10:52:41 2013 From: mike.hui.zhang at hotmail.com (Hui Zhang) Date: Mon, 15 Apr 2013 17:52:41 +0200 Subject: [petsc-users] AOApplicationToPetscIS Message-ID: I'm implementing a domain decomposition preconditioner. The dof is ordered by myapp and using AO (and LocalToGlobalMapping for assembly) to map to petsc ordering. The task I'm doing is building VecScatter's from subdomains to the global domain. So my program is Step 1. I can map subdomain petsc ordering to subdomain natural ordering. Step 2. I can also map subdomain natural ordering to global domain natural ordering. Step 3. I have an AO for mapping global domain natural ordering to petsc ordering. Since each subdomain is defined on a sub-communicator of the communicator of the global domain. My question is for AOApplicationToPetscIS(AO ao,IS is) can ao and is have different communicators? Will my program be bad for large problems? How would you do it? From knepley at gmail.com Mon Apr 15 10:55:53 2013 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 15 Apr 2013 10:55:53 -0500 Subject: [petsc-users] AOApplicationToPetscIS In-Reply-To: References: Message-ID: On Mon, Apr 15, 2013 at 10:52 AM, Hui Zhang wrote: > I'm implementing a domain decomposition preconditioner. The dof is > ordered by myapp and using AO (and LocalToGlobalMapping for assembly) to > map to petsc ordering. > The task I'm doing is building VecScatter's from subdomains to the global > domain. So my program is > I do not understand why you would need AOs for this. They are for global reordering, whereas you seem to only need a local mapping here. Matt > Step 1. I can map subdomain petsc ordering to subdomain natural ordering. > > Step 2. I can also map subdomain natural ordering to global domain > natural ordering. > > Step 3. I have an AO for mapping global domain natural ordering to petsc > ordering. > > Since each subdomain is defined on a sub-communicator of the communicator > of the global domain. My question is for > > AOApplicationToPetscIS(AO ao,IS is) > > can ao and is have different communicators? Will my program be bad for > large problems? How would you do it? 
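A minimal sketch of the subdomain-to-global scatter described in the steps above, assuming the subdomain vector xsub is created on the global communicator (with zero local length on ranks outside the subdomain) and that gidx already lists, for each locally owned entry of xsub, its position in the PETSc-ordered global vector xglob; all names are illustrative:

#include <petscvec.h>

/* Build a scatter that pushes the locally owned entries of a subdomain
   vector into the corresponding positions of the global vector. */
PetscErrorCode BuildSubToGlobalScatter(Vec xsub,Vec xglob,const PetscInt gidx[],VecScatter *scat)
{
  PetscInt       rstart,rend,nloc;
  IS             is_from,is_to;
  PetscErrorCode ierr;

  PetscFunctionBegin;
  ierr = VecGetOwnershipRange(xsub,&rstart,&rend);CHKERRQ(ierr);
  nloc = rend - rstart;
  /* the locally owned entries of the subdomain vector ... */
  ierr = ISCreateStride(PETSC_COMM_SELF,nloc,rstart,1,&is_from);CHKERRQ(ierr);
  /* ... are sent to the global positions listed in gidx */
  ierr = ISCreateGeneral(PETSC_COMM_SELF,nloc,gidx,PETSC_COPY_VALUES,&is_to);CHKERRQ(ierr);
  ierr = VecScatterCreate(xsub,is_from,xglob,is_to,scat);CHKERRQ(ierr);
  ierr = ISDestroy(&is_from);CHKERRQ(ierr);
  ierr = ISDestroy(&is_to);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}

The scatter is then applied with VecScatterBegin/VecScatterEnd using INSERT_VALUES and SCATTER_FORWARD; no AO is needed once gidx is known.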
-- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From zhang.wei at chalmers.se Mon Apr 15 11:02:03 2013 From: zhang.wei at chalmers.se (Zhang Wei) Date: Mon, 15 Apr 2013 16:02:03 +0000 Subject: [petsc-users] algorithm to get pure real eigen valule for general eigenvalue problem(Non-hermitian type) In-Reply-To: References: , , <7A3E1852-8C57-41B2-BB03-3423328C79C9@dsic.upv.es> <85C55751-4131-4365-BBF0-5351D39D2E0B@chalmers.se> <1ECBC813-436E-45B8-B1ED-5CA9A83A4E6D@dsic.upv.es> Message-ID: Hi Jose. In pa-> MatVecMult(Mat A, Vec x , Vec y). I do three things 1) copy the x as initial value for my fvm code: const PetscScalar *px; PetscScalar *py; PetscErrorCode ierr; PetscFunctionBegin; ierr = VecGetArrayRead(x,&px);CHKERRQ(ierr); ierr = VecGetArray(y,&py);CHKERRQ(ierr); if(loopID>0){ copySLEPcDataToField(&px[0]); } 2) In fvm solver, where I implement first order eular method, fourth order RK method and several others. 3) after several timestep. I do sent the field to slepc: copyFieldToSLEPcData(&py[0]); VecAssemblyBegin(y); VecAssemblyEnd(y); >From the those files I saved during the time stepping (all those methods), the fvm solver works fine for me. They gives expected things. Actually the numerical methods and accuracy in pa-> MatVecMult doesn't affect the results that much. At least it the shape of eigen vectors do not change with those factors. -----Original Message----- From: petsc-users-bounces at mcs.anl.gov [mailto:petsc-users-bounces at mcs.anl.gov] On Behalf Of Jose E. Roman Sent: den 15 april 2013 17:35 To: PETSc users list Subject: Re: [petsc-users] algorithm to get pure real eigen valule for general eigenvalue problem(Non-hermitian type) El 15/04/2013, a las 17:25, Zhang Wei escribi?: > Hi, thanks for your reply! > Here is my code: > // Compute the operator matrix that defines the eigensystem, Ax=kx > ierr = MatCreateShell(PETSC_COMM_WORLD,n,n,PETSC_DETERMINE,PETSC_DETERMINE,NULL,&A);CHKERRQ(ierr); > ierr = MatSetFromOptions(A);CHKERRQ(ierr); > > ierr = MatShellSetContext(A, &pa); CHKERRQ(ierr); > ierr = MatShellSetOperation(A, MATOP_MULT, (void(*)()) MatVecMult); > CHKERRQ(ierr); > > ierr = MatGetVecs(A, PETSC_NULL,&xr); CHKERRQ(ierr); > ierr = MatGetVecs(A, PETSC_NULL,&xi); CHKERRQ(ierr); > ierr = MatGetVecs(A, PETSC_NULL,&iv); CHKERRQ(ierr); > > // Create the eigensolver and set various options > ierr = EPSCreate(PETSC_COMM_WORLD,&eps);CHKERRQ(ierr); > > // Set operators. In this case, it is a standard eigenvalue problem > ierr = EPSSetOperators(eps, A, PETSC_NULL);CHKERRQ(ierr); > ierr = EPSSetProblemType(eps, EPS_NHEP);CHKERRQ(ierr); > ierr = EPSSetDimensions(eps, nev, ncv, mpd);CHKERRQ(ierr); > ierr = EPSSetWhichEigenpairs(eps,which);CHKERRQ(ierr); > ierr = EPSSetType(eps,method.c_str());CHKERRQ(ierr); > ierr = EPSSetTolerances(eps,s_tol,s_maxit);CHKERRQ(ierr); > ierr = EPSSetConvergenceTest(eps,cov);CHKERRQ(ierr); > ierr = EPSKrylovSchurSetRestart(eps,keep);CHKERRQ(ierr); > // Set solver parameters at runtime > ierr = EPSSetFromOptions(eps); CHKERRQ(ierr); > > // Set initial vector > ierr = pa.setInitialVector(iv); > ierr = EPSSetInitialSpace(eps,1,&iv);CHKERRQ(ierr); > > ierr = EPSView(eps, PETSC_VIEWER_STDOUT_WORLD); CHKERRQ(ierr); > > // Solve the eigensystem > ierr = EPSSolve(eps);CHKERRQ(ierr); > > and pa is a class, where I define the OP, MatVecMult. 
It actually looks like this: > > PetscErrorCode MatVecMult(Mat A, Vec x, Vec y) { > linearizedTimeStepper *pa; > MatShellGetContext(A, &pa); > return pa->MatVecMult(A, x, y); > } This code is basically correct. What I was asking is whether you had tested the matrix (that is, pa->MatVecMult) in other contexts before computing eigenvalues. Jose From mike.hui.zhang at hotmail.com Mon Apr 15 11:08:18 2013 From: mike.hui.zhang at hotmail.com (Hui Zhang) Date: Mon, 15 Apr 2013 18:08:18 +0200 Subject: [petsc-users] AOApplicationToPetscIS In-Reply-To: References: Message-ID: On Apr 15, 2013, at 5:55 PM, Matthew Knepley wrote: > On Mon, Apr 15, 2013 at 10:52 AM, Hui Zhang wrote: > I'm implementing a domain decomposition preconditioner. The dof is ordered by myapp and using AO (and LocalToGlobalMapping for assembly) to map to petsc ordering. > The task I'm doing is building VecScatter's from subdomains to the global domain. So my program is > > I do not understand why you would need AOs for this. They are for global reordering, whereas you seem to only > need a local mapping here. Thanks a lot! I see the difference now. > > Matt > > Step 1. I can map subdomain petsc ordering to subdomain natural ordering. > > Step 2. I can also map subdomain natural ordering to global domain natural ordering. > > Step 3. I have an AO for mapping global domain natural ordering to petsc ordering. > > Since each subdomain is defined on a sub-communicator of the communicator of the global domain. My question is for > > AOApplicationToPetscIS(AO ao,IS is) > > can ao and is have different communicators? Will my program be bad for large problems? How would you do it? > > > > -- > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. > -- Norbert Wiener From jroman at dsic.upv.es Mon Apr 15 11:33:45 2013 From: jroman at dsic.upv.es (Jose E. Roman) Date: Mon, 15 Apr 2013 18:33:45 +0200 Subject: [petsc-users] algorithm to get pure real eigen valule for general eigenvalue problem(Non-hermitian type) In-Reply-To: References: , , <7A3E1852-8C57-41B2-BB03-3423328C79C9@dsic.upv.es> <85C55751-4131-4365-BBF0-5351D39D2E0B@chalmers.se> <1ECBC813-436E-45B8-B1ED-5CA9A83A4E6D@dsic.upv.es> Message-ID: El 15/04/2013, a las 18:02, Zhang Wei escribi?: > Hi Jose. > In pa-> MatVecMult(Mat A, Vec x , Vec y). I do three things > 1) copy the x as initial value for my fvm code: > > const PetscScalar *px; > PetscScalar *py; > PetscErrorCode ierr; > > PetscFunctionBegin; > ierr = VecGetArrayRead(x,&px);CHKERRQ(ierr); > ierr = VecGetArray(y,&py);CHKERRQ(ierr); > > if(loopID>0){ > copySLEPcDataToField(&px[0]); > } > 2) In fvm solver, where I implement first order eular method, fourth order RK method and several others. > 3) after several timestep. I do sent the field to slepc: > > copyFieldToSLEPcData(&py[0]); > VecAssemblyBegin(y); > VecAssemblyEnd(y); > > From the those files I saved during the time stepping (all those methods), the fvm solver works fine for me. They gives expected things. > Actually the numerical methods and accuracy in pa-> MatVecMult doesn't affect the results that much. At least it the shape of eigen vectors do not change with those factors. I am not an expert in the application, but my understanding is that stability analysis must be based on eigenvalues of the linearized operator A, but if you embed the initial value solver then you will get approximations of eigenvalues of exp(t*A), and that may not be what you want. 
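For reference, the standard relation behind that remark: if the shell multiply integrates the linearized equations over a time T, the solver returns eigenvalues mu of exp(T*A), and the corresponding eigenvalues of A are lambda = log(mu)/T (with the imaginary part only determined modulo 2*pi/T). A small post-processing sketch for a real-arithmetic build, where T and the function name are illustrative:

#include <slepceps.h>
#include <math.h>

/* Map the i-th eigenvalue mu of exp(T*A), as returned by the EPS solver,
   back to an approximate eigenvalue of the linearized operator A. */
PetscErrorCode MapTimeStepperEigenvalue(EPS eps,PetscInt i,PetscReal T,PetscReal *lr,PetscReal *li)
{
  PetscScalar    kr,ki;
  PetscErrorCode ierr;

  PetscFunctionBegin;
  ierr = EPSGetEigenvalue(eps,i,&kr,&ki);CHKERRQ(ierr);
  *lr  = log(hypot((double)kr,(double)ki))/T;  /* Re(lambda) = log|mu|/T */
  *li  = atan2((double)ki,(double)kr)/T;       /* Im(lambda) = arg(mu)/T, modulo 2*pi/T */
  PetscFunctionReturn(0);
}

With this transformation the largest-magnitude eigenvalues of exp(T*A) correspond to the rightmost eigenvalues of A, which is why the time-stepper approach is commonly used for stability analysis.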
Maybe you should have a look at a related reference, e.g http://www.mech.kth.se/~shervin/pdfs/2012_arcme.pdf See also references under "Computational Fluid Dynamics" in http://www.grycap.upv.es/slepc/material/appli.htm Jose From mike.hui.zhang at hotmail.com Mon Apr 15 14:10:19 2013 From: mike.hui.zhang at hotmail.com (Hui Zhang) Date: Mon, 15 Apr 2013 21:10:19 +0200 Subject: [petsc-users] AOApplicationToPetscIS In-Reply-To: References: Message-ID: On Apr 15, 2013, at 5:55 PM, Matthew Knepley wrote: > On Mon, Apr 15, 2013 at 10:52 AM, Hui Zhang wrote: > I'm implementing a domain decomposition preconditioner. The dof is ordered by myapp and using AO (and LocalToGlobalMapping for assembly) to map to petsc ordering. > The task I'm doing is building VecScatter's from subdomains to the global domain. So my program is > > I do not understand why you would need AOs for this. They are for global reordering, whereas you seem to only > need a local mapping here. To use ISLocalToGlobalMappingCreate(MPI_Comm cm,PetscInt n,const PetscInt indices[],PetscCopyMode mode,ISLocalToGlobalMapping *mapping) In the manual page, it says "Not Collective, but communicator may have more than one process". What is the purpose of using a communicator other than SELF_COMM? Will the input indices[] be gathered in the communicator 'cm'? Now I understand why AOCreateBasic is not scalable. But I still need to use it in the beginning for construction of LocalToGlobalMapping. How did you implement the LocalToGlobalMapping for element-based decomposition? Did you avoid using any AO? > > Matt > > Step 1. I can map subdomain petsc ordering to subdomain natural ordering. > > Step 2. I can also map subdomain natural ordering to global domain natural ordering. > > Step 3. I have an AO for mapping global domain natural ordering to petsc ordering. > > Since each subdomain is defined on a sub-communicator of the communicator of the global domain. My question is for > > AOApplicationToPetscIS(AO ao,IS is) > > can ao and is have different communicators? Will my program be bad for large problems? How would you do it? > > > > -- > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. > -- Norbert Wiener From jedbrown at mcs.anl.gov Mon Apr 15 15:04:59 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Mon, 15 Apr 2013 15:04:59 -0500 Subject: [petsc-users] KSPCG solution blow up In-Reply-To: References: Message-ID: <87wqs3mufo.fsf@mcs.anl.gov> Hugo Gagnon writes: > Hi, > > I hope you don't mind me writing to you off list, I just don't think > my problem would be helpful to others anyway. I'd rather keep discussions on the list. We (PETSc developers) cannot scale to have private conversations with all users. > I'm trying to converge a problem with KSPCG, which I know for sure > that both A and b, as in Ax=b, are correct. I can successfully > converge this particular problem using our own serial in-house PCG > solver, whereas with PETSc the solution blows up at the first > iteration with error code -8. -ksp_converged_reason prints why, or use KSPConvergedReasons to get the string. KSP_DIVERGED_INDEFINITE_PC = -8, > > I've included parts of my code below. Again, I've triple checked that > I build A and b correctly. Being a beginner with PETSc and parallel > solvers in general, I'd appreciate if you could give me some simple > pointers as to why my problem is blowing up and how can I improve my > code. 
http://scicomp.stackexchange.com/questions/513/why-is-my-iterative-linear-solver-not-converging > Note that this code does work for easier problems, although I've > noticed that in general PETSc takes way more iterations to converge > than our in-house solver. Would you have an idea as to why this is > so? You are either doing a different algorithm or evaluating convergence differently. What preconditioner are you using in serial? Why aren't you using PETSc's multigrid? Send the diagnostics described in the link if you want more help. > I know I'm asking quite a bit but I've been struggling with this > problem for what I feel far too long! > > Thank you for your help, > -- > Hugo Gagnon > > !-- initialize petsc with the current communicator > PETSC_COMM_WORLD = COMM_CURRENT > call PetscInitialize(PETSC_NULL_CHARACTER,Pierr) > > !-- instantiate the lhs and solution vectors > call VecCreate(PETSC_COMM_WORLD,Pb,Pierr) > call VecSetSizes(Pb,PETSC_DECIDE,numVar,Pierr)) > call VecSetType(Pb,VECSTANDARD,Pierr) > call VecDuplicate(Pb,Psol,Pierr) > > !-- instantiate the rhs > call MatCreate(PETSC_COMM_WORLD,Paa,Pierr) > call MatSetSizes(Paa,PETSC_DECIDE,PETSC_DECIDE,numVar,numVar,Pierr) > call MatSetType(Paa,MATAIJ,Pierr) !-- csr format > > !-- preallocate memory > mnnz = 81 !-- knob > call MatSeqAIJSetPreallocation(Paa,mnnz,PETSC_NULL_INTEGER,Pierr) > call MatMPIAIJSetPreallocation(Paa,mnnz,PETSC_NULL_INTEGER,mnnz,PETSC_NULL_INTEGER,Pierr) > > !-- instantiate the linear solver context > call KSPCreate(PETSC_COMM_WORLD,Pksp,Pierr) > call KSPSetType(Pksp,KSPCG,Pierr) > > !-- use a custom monitor function > call KSPMonitorSet(Pksp,KSPMonitorPETSc,PETSC_NULL_OBJECT,PETSC_NULL_FUNCTION,Pierr) > > !-- instantiate a scatterer so that root has access to full Psol > call VecScatterCreateToZero(Psol,Psols,Psol0,Pierr) > > call MatZeroEntries(Paa,Pierr) > > *** build Psol, Pb and Paa **** > > !-- assemble (distribute) the initial solution, the lhs and the rhs > call VecAssemblyBegin(Psol,Pierr) > call VecAssemblyBegin(Pb,Pierr) > call MatAssemblyBegin(Paa,MAT_FINAL_ASSEMBLY,Pierr) > call VecAssemblyEnd(Psol,Pierr) > call VecAssemblyEnd(Pb,Pierr) > call MatAssemblyEnd(Paa,MAT_FINAL_ASSEMBLY,Pierr) > call MatSetOption(Paa,MAT_NEW_NONZERO_LOCATIONS,PETSC_FALSE,Pierr) > > !-- setup pcg, reusing the lhs as preconditioner > call KSPSetOperators(Pksp,Paa,Paa,SAME_NONZERO_PATTERN,Pierr) > call KSPSetTolerances(Pksp,tol,PETSC_DEFAULT_DOUBLE_PRECISION,PETSC_DEFAULT_DOUBLE_PRECISION,grid%pcg_its,Pierr) > > !-- setup the preconditioner with the desired fill level > call KSPGetPC(Pksp,Ppc,Pierr) > if (nProc > 1) then > call PCSetType(Ppc,PCBJACOBI,Pierr) > call KSPSetup(Pksp,Pierr) > call PCBJacobiGetSubKSP(Ppc,j,j,Psubksp,Pierr) > call KSPGetPC(Psubksp(1),Ppc,Pierr) > end if > call PCSetType(Ppc,PCILU,Pierr) > call PCFactorSetLevels(Ppc,grid%pcg_fill,Pierr) > > !-- do not zero out the solution vector > call KSPSetInitialGuessNonzero(Pksp,PETSC_TRUE,Pierr) > call KSPSetFromOptions(Pksp,Pierr) > > !-- let petsc do its magic > call KSPSolve(Pksp,Pb,Psol,Pierr) > call KSPGetConvergedReason(Pksp,Pconv,Pierr) From mike.hui.zhang at hotmail.com Mon Apr 15 15:10:12 2013 From: mike.hui.zhang at hotmail.com (Hui Zhang) Date: Mon, 15 Apr 2013 22:10:12 +0200 Subject: [petsc-users] AOApplicationToPetscIS In-Reply-To: References: Message-ID: On Apr 15, 2013, at 9:10 PM, Hui Zhang wrote: > > On Apr 15, 2013, at 5:55 PM, Matthew Knepley wrote: > >> On Mon, Apr 15, 2013 at 10:52 AM, Hui Zhang wrote: >> I'm implementing a domain 
decomposition preconditioner. The dof is ordered by myapp and using AO (and LocalToGlobalMapping for assembly) to map to petsc ordering. >> The task I'm doing is building VecScatter's from subdomains to the global domain. So my program is >> >> I do not understand why you would need AOs for this. They are for global reordering, whereas you seem to only >> need a local mapping here. > > To use > > ISLocalToGlobalMappingCreate(MPI_Comm cm,PetscInt n,const PetscInt indices[],PetscCopyMode mode,ISLocalToGlobalMapping *mapping) > > In the manual page, it says "Not Collective, but communicator may have more than one process". What is the purpose of using a communicator other than SELF_COMM? Will the input indices[] be gathered in the communicator 'cm'? > > Now I understand why AOCreateBasic is not scalable. But I still need to use it in the beginning for construction of LocalToGlobalMapping. How did you implement the LocalToGlobalMapping for element-based decomposition? Did you avoid using any AO? I think I find a way. I can use GetOwnerShipRanges, the mesh element connectivity and mesh processor connectivity. AO is too global for this-- it does not take advantage of the connectivity but a global search. > >> >> Matt >> >> Step 1. I can map subdomain petsc ordering to subdomain natural ordering. >> >> Step 2. I can also map subdomain natural ordering to global domain natural ordering. >> >> Step 3. I have an AO for mapping global domain natural ordering to petsc ordering. >> >> Since each subdomain is defined on a sub-communicator of the communicator of the global domain. My question is for >> >> AOApplicationToPetscIS(AO ao,IS is) >> >> can ao and is have different communicators? Will my program be bad for large problems? How would you do it? >> >> >> >> -- >> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. >> -- Norbert Wiener > > From mark.adams at columbia.edu Mon Apr 15 15:35:57 2013 From: mark.adams at columbia.edu (Mark F. Adams) Date: Mon, 15 Apr 2013 16:35:57 -0400 Subject: [petsc-users] KSPCG solution blow up In-Reply-To: <87wqs3mufo.fsf@mcs.anl.gov> References: <87wqs3mufo.fsf@mcs.anl.gov> Message-ID: ILU is not guaranteed to stay positive and CG requires this. PETSc's ILU is ILU(0). If your in-house PCG solver works with ILU(0) then there is probably a bug in your construction of the PETSc matrix. If your in-house code does not use ILU(0) then I would try PCJACOBI and verify that you get the same residaul history as your in-house solver (assuming that you can do Jacobi). On Apr 15, 2013, at 4:04 PM, Jed Brown wrote: > Hugo Gagnon writes: > >> Hi, >> >> I hope you don't mind me writing to you off list, I just don't think >> my problem would be helpful to others anyway. > > I'd rather keep discussions on the list. We (PETSc developers) cannot > scale to have private conversations with all users. > >> I'm trying to converge a problem with KSPCG, which I know for sure >> that both A and b, as in Ax=b, are correct. I can successfully >> converge this particular problem using our own serial in-house PCG >> solver, whereas with PETSc the solution blows up at the first >> iteration with error code -8. > > -ksp_converged_reason prints why, or use KSPConvergedReasons to get the string. > > KSP_DIVERGED_INDEFINITE_PC = -8, > >> >> I've included parts of my code below. Again, I've triple checked that >> I build A and b correctly. 
Being a beginner with PETSc and parallel >> solvers in general, I'd appreciate if you could give me some simple >> pointers as to why my problem is blowing up and how can I improve my >> code. > > http://scicomp.stackexchange.com/questions/513/why-is-my-iterative-linear-solver-not-converging > >> Note that this code does work for easier problems, although I've >> noticed that in general PETSc takes way more iterations to converge >> than our in-house solver. Would you have an idea as to why this is >> so? > > You are either doing a different algorithm or evaluating convergence > differently. What preconditioner are you using in serial? Why aren't > you using PETSc's multigrid? Send the diagnostics described in the link > if you want more help. > >> I know I'm asking quite a bit but I've been struggling with this >> problem for what I feel far too long! >> >> Thank you for your help, >> -- >> Hugo Gagnon >> >> !-- initialize petsc with the current communicator >> PETSC_COMM_WORLD = COMM_CURRENT >> call PetscInitialize(PETSC_NULL_CHARACTER,Pierr) >> >> !-- instantiate the lhs and solution vectors >> call VecCreate(PETSC_COMM_WORLD,Pb,Pierr) >> call VecSetSizes(Pb,PETSC_DECIDE,numVar,Pierr)) >> call VecSetType(Pb,VECSTANDARD,Pierr) >> call VecDuplicate(Pb,Psol,Pierr) >> >> !-- instantiate the rhs >> call MatCreate(PETSC_COMM_WORLD,Paa,Pierr) >> call MatSetSizes(Paa,PETSC_DECIDE,PETSC_DECIDE,numVar,numVar,Pierr) >> call MatSetType(Paa,MATAIJ,Pierr) !-- csr format >> >> !-- preallocate memory >> mnnz = 81 !-- knob >> call MatSeqAIJSetPreallocation(Paa,mnnz,PETSC_NULL_INTEGER,Pierr) >> call MatMPIAIJSetPreallocation(Paa,mnnz,PETSC_NULL_INTEGER,mnnz,PETSC_NULL_INTEGER,Pierr) >> >> !-- instantiate the linear solver context >> call KSPCreate(PETSC_COMM_WORLD,Pksp,Pierr) >> call KSPSetType(Pksp,KSPCG,Pierr) >> >> !-- use a custom monitor function >> call KSPMonitorSet(Pksp,KSPMonitorPETSc,PETSC_NULL_OBJECT,PETSC_NULL_FUNCTION,Pierr) >> >> !-- instantiate a scatterer so that root has access to full Psol >> call VecScatterCreateToZero(Psol,Psols,Psol0,Pierr) >> >> call MatZeroEntries(Paa,Pierr) >> >> *** build Psol, Pb and Paa **** >> >> !-- assemble (distribute) the initial solution, the lhs and the rhs >> call VecAssemblyBegin(Psol,Pierr) >> call VecAssemblyBegin(Pb,Pierr) >> call MatAssemblyBegin(Paa,MAT_FINAL_ASSEMBLY,Pierr) >> call VecAssemblyEnd(Psol,Pierr) >> call VecAssemblyEnd(Pb,Pierr) >> call MatAssemblyEnd(Paa,MAT_FINAL_ASSEMBLY,Pierr) >> call MatSetOption(Paa,MAT_NEW_NONZERO_LOCATIONS,PETSC_FALSE,Pierr) >> >> !-- setup pcg, reusing the lhs as preconditioner >> call KSPSetOperators(Pksp,Paa,Paa,SAME_NONZERO_PATTERN,Pierr) >> call KSPSetTolerances(Pksp,tol,PETSC_DEFAULT_DOUBLE_PRECISION,PETSC_DEFAULT_DOUBLE_PRECISION,grid%pcg_its,Pierr) >> >> !-- setup the preconditioner with the desired fill level >> call KSPGetPC(Pksp,Ppc,Pierr) >> if (nProc > 1) then >> call PCSetType(Ppc,PCBJACOBI,Pierr) >> call KSPSetup(Pksp,Pierr) >> call PCBJacobiGetSubKSP(Ppc,j,j,Psubksp,Pierr) >> call KSPGetPC(Psubksp(1),Ppc,Pierr) >> end if >> call PCSetType(Ppc,PCILU,Pierr) >> call PCFactorSetLevels(Ppc,grid%pcg_fill,Pierr) >> >> !-- do not zero out the solution vector >> call KSPSetInitialGuessNonzero(Pksp,PETSC_TRUE,Pierr) >> call KSPSetFromOptions(Pksp,Pierr) >> >> !-- let petsc do its magic >> call KSPSolve(Pksp,Pb,Psol,Pierr) >> call KSPGetConvergedReason(Pksp,Pconv,Pierr) > From dharmareddy84 at gmail.com Mon Apr 15 16:04:27 2013 From: dharmareddy84 at gmail.com (Dharmendar Reddy) Date: Mon, 15 Apr 2013 
16:04:27 -0500 Subject: [petsc-users] dmsnessetfunctionlocal Message-ID: Hello, I am getting undefined reference errors on using dmsnesesetfunctionlocal and dmsnessetjacobianlocal. Are fortran interfaces to these functions missing ? I had no problem using snessetfunction before. Solver.F90:(.text+0xaa1): undefined reference to `dmsnessetjacobianlocal_' Solver.F90:(.text+0xabc): undefined reference to `dmsnessetfunctionlocal_' Also, petscsectiongetconstraintdof is giving undefined refernce error FEMModules.F90:(.text+0xc048): undefined reference to `petscsectiongetconstraintdof_ Please help thanks reddy -- ----------------------------------------------------- Dharmendar Reddy Palle Graduate Student Microelectronics Research center, University of Texas at Austin, 10100 Burnet Road, Bldg. 160 MER 2.608F, TX 78758-4445 e-mail: dharmareddy84 at gmail.com Phone: +1-512-350-9082 United States of America. Homepage: https://webspace.utexas.edu/~dpr342 -------------- next part -------------- An HTML attachment was scrubbed... URL: From opensource.petsc at user.fastmail.fm Mon Apr 15 16:08:41 2013 From: opensource.petsc at user.fastmail.fm (Hugo Gagnon) Date: Mon, 15 Apr 2013 17:08:41 -0400 Subject: [petsc-users] KSPCG solution blow up In-Reply-To: References: <87wqs3mufo.fsf@mcs.anl.gov> Message-ID: For the problem I'm describing my serial in-house solver does not work with ILU(0) but works with ILU(3). I have no option to run Jacobi. When I apply the same problem to PETSc's PC solver with ILU(3) in serial I get KSP_DIVERGED_INDEFINITE_PC on the first iteration (in MPI the solution somewhat converges but very slowly). call KSPGetPC(Pksp,Ppc,Pierr) call PCSetType(Ppc,PCILU,Pierr) call PCFactorSetLevels(Ppc,3,Pierr) This effectively changes the fill level from 0 to 3, right? -- Hugo Gagnon On 2013-04-15, at 4:35 PM, Mark F. Adams wrote: > ILU is not guaranteed to stay positive and CG requires this. PETSc's ILU is ILU(0). If your in-house PCG solver works with ILU(0) then there is probably a bug in your construction of the PETSc matrix. If your in-house code does not use ILU(0) then I would try PCJACOBI and verify that you get the same residaul history as your in-house solver (assuming that you can do Jacobi). > > > On Apr 15, 2013, at 4:04 PM, Jed Brown wrote: > >> Hugo Gagnon writes: >> >>> Hi, >>> >>> I hope you don't mind me writing to you off list, I just don't think >>> my problem would be helpful to others anyway. >> >> I'd rather keep discussions on the list. We (PETSc developers) cannot >> scale to have private conversations with all users. >> >>> I'm trying to converge a problem with KSPCG, which I know for sure >>> that both A and b, as in Ax=b, are correct. I can successfully >>> converge this particular problem using our own serial in-house PCG >>> solver, whereas with PETSc the solution blows up at the first >>> iteration with error code -8. >> >> -ksp_converged_reason prints why, or use KSPConvergedReasons to get the string. >> >> KSP_DIVERGED_INDEFINITE_PC = -8, >> >>> >>> I've included parts of my code below. Again, I've triple checked that >>> I build A and b correctly. Being a beginner with PETSc and parallel >>> solvers in general, I'd appreciate if you could give me some simple >>> pointers as to why my problem is blowing up and how can I improve my >>> code. 
>> >> http://scicomp.stackexchange.com/questions/513/why-is-my-iterative-linear-solver-not-converging >> >>> Note that this code does work for easier problems, although I've >>> noticed that in general PETSc takes way more iterations to converge >>> than our in-house solver. Would you have an idea as to why this is >>> so? >> >> You are either doing a different algorithm or evaluating convergence >> differently. What preconditioner are you using in serial? Why aren't >> you using PETSc's multigrid? Send the diagnostics described in the link >> if you want more help. >> >>> I know I'm asking quite a bit but I've been struggling with this >>> problem for what I feel far too long! >>> >>> Thank you for your help, >>> -- >>> Hugo Gagnon >>> >>> !-- initialize petsc with the current communicator >>> PETSC_COMM_WORLD = COMM_CURRENT >>> call PetscInitialize(PETSC_NULL_CHARACTER,Pierr) >>> >>> !-- instantiate the lhs and solution vectors >>> call VecCreate(PETSC_COMM_WORLD,Pb,Pierr) >>> call VecSetSizes(Pb,PETSC_DECIDE,numVar,Pierr)) >>> call VecSetType(Pb,VECSTANDARD,Pierr) >>> call VecDuplicate(Pb,Psol,Pierr) >>> >>> !-- instantiate the rhs >>> call MatCreate(PETSC_COMM_WORLD,Paa,Pierr) >>> call MatSetSizes(Paa,PETSC_DECIDE,PETSC_DECIDE,numVar,numVar,Pierr) >>> call MatSetType(Paa,MATAIJ,Pierr) !-- csr format >>> >>> !-- preallocate memory >>> mnnz = 81 !-- knob >>> call MatSeqAIJSetPreallocation(Paa,mnnz,PETSC_NULL_INTEGER,Pierr) >>> call MatMPIAIJSetPreallocation(Paa,mnnz,PETSC_NULL_INTEGER,mnnz,PETSC_NULL_INTEGER,Pierr) >>> >>> !-- instantiate the linear solver context >>> call KSPCreate(PETSC_COMM_WORLD,Pksp,Pierr) >>> call KSPSetType(Pksp,KSPCG,Pierr) >>> >>> !-- use a custom monitor function >>> call KSPMonitorSet(Pksp,KSPMonitorPETSc,PETSC_NULL_OBJECT,PETSC_NULL_FUNCTION,Pierr) >>> >>> !-- instantiate a scatterer so that root has access to full Psol >>> call VecScatterCreateToZero(Psol,Psols,Psol0,Pierr) >>> >>> call MatZeroEntries(Paa,Pierr) >>> >>> *** build Psol, Pb and Paa **** >>> >>> !-- assemble (distribute) the initial solution, the lhs and the rhs >>> call VecAssemblyBegin(Psol,Pierr) >>> call VecAssemblyBegin(Pb,Pierr) >>> call MatAssemblyBegin(Paa,MAT_FINAL_ASSEMBLY,Pierr) >>> call VecAssemblyEnd(Psol,Pierr) >>> call VecAssemblyEnd(Pb,Pierr) >>> call MatAssemblyEnd(Paa,MAT_FINAL_ASSEMBLY,Pierr) >>> call MatSetOption(Paa,MAT_NEW_NONZERO_LOCATIONS,PETSC_FALSE,Pierr) >>> >>> !-- setup pcg, reusing the lhs as preconditioner >>> call KSPSetOperators(Pksp,Paa,Paa,SAME_NONZERO_PATTERN,Pierr) >>> call KSPSetTolerances(Pksp,tol,PETSC_DEFAULT_DOUBLE_PRECISION,PETSC_DEFAULT_DOUBLE_PRECISION,grid%pcg_its,Pierr) >>> >>> !-- setup the preconditioner with the desired fill level >>> call KSPGetPC(Pksp,Ppc,Pierr) >>> if (nProc > 1) then >>> call PCSetType(Ppc,PCBJACOBI,Pierr) >>> call KSPSetup(Pksp,Pierr) >>> call PCBJacobiGetSubKSP(Ppc,j,j,Psubksp,Pierr) >>> call KSPGetPC(Psubksp(1),Ppc,Pierr) >>> end if >>> call PCSetType(Ppc,PCILU,Pierr) >>> call PCFactorSetLevels(Ppc,grid%pcg_fill,Pierr) >>> >>> !-- do not zero out the solution vector >>> call KSPSetInitialGuessNonzero(Pksp,PETSC_TRUE,Pierr) >>> call KSPSetFromOptions(Pksp,Pierr) >>> >>> !-- let petsc do its magic >>> call KSPSolve(Pksp,Pb,Psol,Pierr) >>> call KSPGetConvergedReason(Pksp,Pconv,Pierr) >> > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From knepley at gmail.com Mon Apr 15 16:12:16 2013 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 15 Apr 2013 16:12:16 -0500 Subject: [petsc-users] KSPCG solution blow up In-Reply-To: References: <87wqs3mufo.fsf@mcs.anl.gov> Message-ID: On Mon, Apr 15, 2013 at 4:08 PM, Hugo Gagnon < opensource.petsc at user.fastmail.fm> wrote: > For the problem I'm describing my serial in-house solver does not work > with ILU(0) but works with ILU(3). I have no option to run Jacobi. When I > apply the same problem to PETSc's PC solver with ILU(3) in serial I > get KSP_DIVERGED_INDEFINITE_PC on the first iteration (in MPI the solution > somewhat converges but very slowly). > As Mark said, ILU(3) does not preserve either symmetry or definiteness. Matt > call KSPGetPC(Pksp,Ppc,Pierr) > call PCSetType(Ppc,PCILU,Pierr) > call PCFactorSetLevels(Ppc,3,Pierr) > > This effectively changes the fill level from 0 to 3, right? > > -- > Hugo Gagnon > > On 2013-04-15, at 4:35 PM, Mark F. Adams wrote: > > ILU is not guaranteed to stay positive and CG requires this. PETSc's ILU > is ILU(0). If your in-house PCG solver works with ILU(0) then there is > probably a bug in your construction of the PETSc matrix. If your in-house > code does not use ILU(0) then I would try PCJACOBI and verify that you get > the same residaul history as your in-house solver (assuming that you can do > Jacobi). > > > On Apr 15, 2013, at 4:04 PM, Jed Brown wrote: > > Hugo Gagnon writes: > > Hi, > > I hope you don't mind me writing to you off list, I just don't think > my problem would be helpful to others anyway. > > > I'd rather keep discussions on the list. We (PETSc developers) cannot > scale to have private conversations with all users. > > I'm trying to converge a problem with KSPCG, which I know for sure > that both A and b, as in Ax=b, are correct. I can successfully > converge this particular problem using our own serial in-house PCG > solver, whereas with PETSc the solution blows up at the first > iteration with error code -8. > > > -ksp_converged_reason prints why, or use KSPConvergedReasons to get the > string. > > KSP_DIVERGED_INDEFINITE_PC = -8, > > > I've included parts of my code below. Again, I've triple checked that > I build A and b correctly. Being a beginner with PETSc and parallel > solvers in general, I'd appreciate if you could give me some simple > pointers as to why my problem is blowing up and how can I improve my > code. > > > > http://scicomp.stackexchange.com/questions/513/why-is-my-iterative-linear-solver-not-converging > > Note that this code does work for easier problems, although I've > noticed that in general PETSc takes way more iterations to converge > than our in-house solver. Would you have an idea as to why this is > so? > > > You are either doing a different algorithm or evaluating convergence > differently. What preconditioner are you using in serial? Why aren't > you using PETSc's multigrid? Send the diagnostics described in the link > if you want more help. > > I know I'm asking quite a bit but I've been struggling with this > problem for what I feel far too long! 
> > Thank you for your help, > -- > Hugo Gagnon > > !-- initialize petsc with the current communicator > PETSC_COMM_WORLD = COMM_CURRENT > call PetscInitialize(PETSC_NULL_CHARACTER,Pierr) > > !-- instantiate the lhs and solution vectors > call VecCreate(PETSC_COMM_WORLD,Pb,Pierr) > call VecSetSizes(Pb,PETSC_DECIDE,numVar,Pierr)) > call VecSetType(Pb,VECSTANDARD,Pierr) > call VecDuplicate(Pb,Psol,Pierr) > > !-- instantiate the rhs > call MatCreate(PETSC_COMM_WORLD,Paa,Pierr) > call MatSetSizes(Paa,PETSC_DECIDE,PETSC_DECIDE,numVar,numVar,Pierr) > call MatSetType(Paa,MATAIJ,Pierr) !-- csr format > > !-- preallocate memory > mnnz = 81 !-- knob > call MatSeqAIJSetPreallocation(Paa,mnnz,PETSC_NULL_INTEGER,Pierr) > call > MatMPIAIJSetPreallocation(Paa,mnnz,PETSC_NULL_INTEGER,mnnz,PETSC_NULL_INTEGER,Pierr) > > !-- instantiate the linear solver context > call KSPCreate(PETSC_COMM_WORLD,Pksp,Pierr) > call KSPSetType(Pksp,KSPCG,Pierr) > > !-- use a custom monitor function > call > KSPMonitorSet(Pksp,KSPMonitorPETSc,PETSC_NULL_OBJECT,PETSC_NULL_FUNCTION,Pierr) > > !-- instantiate a scatterer so that root has access to full Psol > call VecScatterCreateToZero(Psol,Psols,Psol0,Pierr) > > call MatZeroEntries(Paa,Pierr) > > *** build Psol, Pb and Paa **** > > !-- assemble (distribute) the initial solution, the lhs and the rhs > call VecAssemblyBegin(Psol,Pierr) > call VecAssemblyBegin(Pb,Pierr) > call MatAssemblyBegin(Paa,MAT_FINAL_ASSEMBLY,Pierr) > call VecAssemblyEnd(Psol,Pierr) > call VecAssemblyEnd(Pb,Pierr) > call MatAssemblyEnd(Paa,MAT_FINAL_ASSEMBLY,Pierr) > call MatSetOption(Paa,MAT_NEW_NONZERO_LOCATIONS,PETSC_FALSE,Pierr) > > !-- setup pcg, reusing the lhs as preconditioner > call KSPSetOperators(Pksp,Paa,Paa,SAME_NONZERO_PATTERN,Pierr) > call > KSPSetTolerances(Pksp,tol,PETSC_DEFAULT_DOUBLE_PRECISION,PETSC_DEFAULT_DOUBLE_PRECISION,grid%pcg_its,Pierr) > > !-- setup the preconditioner with the desired fill level > call KSPGetPC(Pksp,Ppc,Pierr) > if (nProc > 1) then > call PCSetType(Ppc,PCBJACOBI,Pierr) > call KSPSetup(Pksp,Pierr) > call PCBJacobiGetSubKSP(Ppc,j,j,Psubksp,Pierr) > call KSPGetPC(Psubksp(1),Ppc,Pierr) > end if > call PCSetType(Ppc,PCILU,Pierr) > call PCFactorSetLevels(Ppc,grid%pcg_fill,Pierr) > > !-- do not zero out the solution vector > call KSPSetInitialGuessNonzero(Pksp,PETSC_TRUE,Pierr) > call KSPSetFromOptions(Pksp,Pierr) > > !-- let petsc do its magic > call KSPSolve(Pksp,Pb,Psol,Pierr) > call KSPGetConvergedReason(Pksp,Pconv,Pierr) > > > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Mon Apr 15 16:15:40 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Mon, 15 Apr 2013 16:15:40 -0500 Subject: [petsc-users] KSPCG solution blow up In-Reply-To: References: <87wqs3mufo.fsf@mcs.anl.gov> Message-ID: <87mwszmr5v.fsf@mcs.anl.gov> Hugo Gagnon writes: > For the problem I'm describing my serial in-house solver does not work > with ILU(0) but works with ILU(3). I have no option to run Jacobi. > When I apply the same problem to PETSc's PC solver with ILU(3) in > serial I get KSP_DIVERGED_INDEFINITE_PC Does your in-house ILU(3) use a different ordering? What shift scheme does it use? > on the first iteration (in MPI the solution somewhat converges but > very slowly). 
> > call KSPGetPC(Pksp,Ppc,Pierr) > call PCSetType(Ppc,PCILU,Pierr) > call PCFactorSetLevels(Ppc,3,Pierr) > > This effectively changes the fill level from 0 to 3, right? This only works in serial. Check the -ksp_view output to see what is done. You should just call KSPSetFromOptions() and use run-time options to configure the solver. You can do it from code later, but writing code is slow to figure out what works. From zhang.wei at chalmers.se Mon Apr 15 16:18:24 2013 From: zhang.wei at chalmers.se (Zhang Wei) Date: Mon, 15 Apr 2013 21:18:24 +0000 Subject: [petsc-users] algorithm to get pure real eigen valule for general eigenvalue problem(Non-hermitian type) In-Reply-To: References: , , <7A3E1852-8C57-41B2-BB03-3423328C79C9@dsic.upv.es> <85C55751-4131-4365-BBF0-5351D39D2E0B@chalmers.se> <1ECBC813-436E-45B8-B1ED-5CA9A83A4E6D@dsic.upv.es> , Message-ID: hi Thanks very much for your help! Indeed, I am dealing with linearised Navier Stokes. Actually the algorithm is already tested in a FEM code by directly using Arpack. I tried slepc with wrapped Arpack. I noticed that the shit between each iteration is controlled by the number of converged eigenvalue and a predefined portion,which is the same as restart parameter "keep" in krylovschur solver, but can't be changed. While in the FEM code, the shift value is exact one. But I can't find different in the implementation. Could it be the problem? Yours Sincerely ------------------------ Wei Zhang Ph.D Hydrodynamic Group Dept. of Shipping and Marine Technology Chalmers University of Technology Sweden Phone:+46-31 772 2703 On 15 apr 2013, at 18:33, "Jose E. Roman" wrote: > > El 15/04/2013, a las 18:02, Zhang Wei escribi?: > >> Hi Jose. >> In pa-> MatVecMult(Mat A, Vec x , Vec y). I do three things >> 1) copy the x as initial value for my fvm code: >> >> const PetscScalar *px; >> PetscScalar *py; >> PetscErrorCode ierr; >> >> PetscFunctionBegin; >> ierr = VecGetArrayRead(x,&px);CHKERRQ(ierr); >> ierr = VecGetArray(y,&py);CHKERRQ(ierr); >> >> if(loopID>0){ >> copySLEPcDataToField(&px[0]); >> } >> 2) In fvm solver, where I implement first order eular method, fourth order RK method and several others. >> 3) after several timestep. I do sent the field to slepc: >> >> copyFieldToSLEPcData(&py[0]); >> VecAssemblyBegin(y); >> VecAssemblyEnd(y); >> >> From the those files I saved during the time stepping (all those methods), the fvm solver works fine for me. They gives expected things. >> Actually the numerical methods and accuracy in pa-> MatVecMult doesn't affect the results that much. At least it the shape of eigen vectors do not change with those factors. > > I am not an expert in the application, but my understanding is that stability analysis must be based on eigenvalues of the linearized operator A, but if you embed the initial value solver then you will get approximations of eigenvalues of exp(t*A), and that may not be what you want. Maybe you should have a look at a related reference, e.g http://www.mech.kth.se/~shervin/pdfs/2012_arcme.pdf > See also references under "Computational Fluid Dynamics" in http://www.grycap.upv.es/slepc/material/appli.htm > > Jose > From mark.adams at columbia.edu Mon Apr 15 16:26:36 2013 From: mark.adams at columbia.edu (Mark F. 
Adams) Date: Mon, 15 Apr 2013 17:26:36 -0400 Subject: [petsc-users] KSPCG solution blow up In-Reply-To: <87mwszmr5v.fsf@mcs.anl.gov> References: <87wqs3mufo.fsf@mcs.anl.gov> <87mwszmr5v.fsf@mcs.anl.gov> Message-ID: <4F4C8B0C-D801-4E06-87F4-93AABB88B8B8@columbia.edu> Its probably not worth trying to verify the code with ILU(3) because the space of algorithms is large as Jed points out (e.g., ILU(3) does not fully define the solver unless they use the same node ordering, shifting strategies and whatever else your ILU is doing to make ILU not suck). It looks like you are doing 3D elasticity. Try -pc_type gamg -pc_gamg_agg_nsmooths 1 assuming you have v3.3 or higher. On Apr 15, 2013, at 5:15 PM, Jed Brown wrote: > Hugo Gagnon writes: > >> For the problem I'm describing my serial in-house solver does not work >> with ILU(0) but works with ILU(3). I have no option to run Jacobi. >> When I apply the same problem to PETSc's PC solver with ILU(3) in >> serial I get KSP_DIVERGED_INDEFINITE_PC > > Does your in-house ILU(3) use a different ordering? What shift scheme > does it use? > >> on the first iteration (in MPI the solution somewhat converges but >> very slowly). >> >> call KSPGetPC(Pksp,Ppc,Pierr) >> call PCSetType(Ppc,PCILU,Pierr) >> call PCFactorSetLevels(Ppc,3,Pierr) >> >> This effectively changes the fill level from 0 to 3, right? > > This only works in serial. Check the -ksp_view output to see what is > done. You should just call KSPSetFromOptions() and use run-time options > to configure the solver. You can do it from code later, but writing > code is slow to figure out what works. > From opensource.petsc at user.fastmail.fm Mon Apr 15 16:27:41 2013 From: opensource.petsc at user.fastmail.fm (Hugo Gagnon) Date: Mon, 15 Apr 2013 17:27:41 -0400 Subject: [petsc-users] KSPCG solution blow up In-Reply-To: <87mwszmr5v.fsf@mcs.anl.gov> References: <87wqs3mufo.fsf@mcs.anl.gov> <87mwszmr5v.fsf@mcs.anl.gov> Message-ID: All I know is that we use SPARSKIT2's iluk. I am aware that the code snippet I gave only works in serial. -- Hugo Gagnon On 2013-04-15, at 5:15 PM, Jed Brown wrote: > Hugo Gagnon writes: > >> For the problem I'm describing my serial in-house solver does not work >> with ILU(0) but works with ILU(3). I have no option to run Jacobi. >> When I apply the same problem to PETSc's PC solver with ILU(3) in >> serial I get KSP_DIVERGED_INDEFINITE_PC > > Does your in-house ILU(3) use a different ordering? What shift scheme > does it use? > >> on the first iteration (in MPI the solution somewhat converges but >> very slowly). >> >> call KSPGetPC(Pksp,Ppc,Pierr) >> call PCSetType(Ppc,PCILU,Pierr) >> call PCFactorSetLevels(Ppc,3,Pierr) >> >> This effectively changes the fill level from 0 to 3, right? > > This only works in serial. Check the -ksp_view output to see what is > done. You should just call KSPSetFromOptions() and use run-time options > to configure the solver. You can do it from code later, but writing > code is slow to figure out what works. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From opensource.petsc at user.fastmail.fm Mon Apr 15 16:40:46 2013 From: opensource.petsc at user.fastmail.fm (Hugo Gagnon) Date: Mon, 15 Apr 2013 17:40:46 -0400 Subject: [petsc-users] KSPCG solution blow up In-Reply-To: <4F4C8B0C-D801-4E06-87F4-93AABB88B8B8@columbia.edu> References: <87wqs3mufo.fsf@mcs.anl.gov> <87mwszmr5v.fsf@mcs.anl.gov> <4F4C8B0C-D801-4E06-87F4-93AABB88B8B8@columbia.edu> Message-ID: <97A38D9B-4DFA-47B8-88CB-13FAA89476B8@user.fastmail.fm> Good point, this is indeed linear elasticity. Following your suggestion I first got the following error since I now use MATBAIJ: [0]PCSetData_AGG bs=3 MM=30441 [0]PETSC ERROR: --------------------- Error Message ------------------------------------ [0]PETSC ERROR: Arguments are incompatible! [0]PETSC ERROR: MatMatMult requires A, mpibaij, to be compatible with B, mpiaij! [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: Petsc Release Version 3.3.0, Patch 6, Mon Feb 11 12:26:34 CST 2013 [0]PETSC ERROR: See docs/changes/index.html for recent updates. [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. [0]PETSC ERROR: See docs/index.html for manual pages. [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: /Users/hugo/Documents/jetstream/jetstream_x86_64 on a arch-darw named user204-27.wireless.utoronto.ca by hugo Mon Apr 15 17:31:55 2013 [0]PETSC ERROR: Libraries linked from /Users/hugo/Documents/petsc-3.3-p6/arch-darwin-c-opt/lib [0]PETSC ERROR: Configure run at Mon Feb 18 15:08:07 2013 [0]PETSC ERROR: Configure options --with-debugging=0 [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: MatMatMult() line 8617 in src/mat/interface/matrix.c [0]PETSC ERROR: PCGAMGOptprol_AGG() line 1358 in src/ksp/pc/impls/gamg/agg.c [0]PETSC ERROR: PCSetUp_GAMG() line 673 in src/ksp/pc/impls/gamg/gamg.c [0]PETSC ERROR: PCSetUp() line 832 in src/ksp/pc/interface/precon.c [0]PETSC ERROR: PCApply() line 380 in src/ksp/pc/interface/precon.c [0]PETSC ERROR: KSPSolve_CG() line 139 in src/ksp/ksp/impls/cg/cg.c [0]PETSC ERROR: KSPSolve() line 446 in src/ksp/ksp/interface/itfunc.c I tried converting the matrix to MATAIJ with: call MatConvert(Paa,MATAIJ,MAT_INITIAL_MATRIX,Paa2,Pierr) and now I have this error (with ksp_view): [0]PCSetData_AGG bs=1 MM=30441 KSPCG resid. 
tolerance target = 1.000E-10 KSPCG initial residual |res0| = 8.981E-02 KSPCG iter = 0: |res|/|res0| = 1.000E+00 KSPCG iter = 1: |res|/|res0| = 4.949E-01 KSP Object: 2 MPI processes type: cg maximum iterations=4000 tolerances: relative=1e-10, absolute=1e-50, divergence=10000 left preconditioning using nonzero initial guess using PRECONDITIONED norm type for convergence test PC Object: 2 MPI processes type: gamg MG: type is MULTIPLICATIVE, levels=2 cycles=v Cycles per PCApply=1 Using Galerkin computed coarse grid matrices Coarse grid solver -- level ------------------------------- KSP Object: (mg_coarse_) 2 MPI processes type: gmres GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement GMRES: happy breakdown tolerance 1e-30 maximum iterations=1, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using NONE norm type for convergence test PC Object: (mg_coarse_) 2 MPI processes type: bjacobi block Jacobi: number of blocks = 2 Local solve info for each block is in the following KSP and PC objects: [0] number of local blocks = 1, first local block number = 0 [0] local block number 0 KSP Object: (mg_coarse_sub_) 1 MPI processes type: preonly KSP Object: (mg_coarse_sub_) maximum iterations=10000, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000 1 MPI processes type: preonly left preconditioning maximum iterations=10000, initial guess is zero using NONE norm type for convergence test PC Object: (mg_coarse_sub_) 1 MPI processes tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using NONE norm type for convergence test PC Object: (mg_coarse_sub_) type: lu LU: out-of-place factorization 1 MPI processes type: lu tolerance for zero pivot 2.22045e-14 LU: out-of-place factorization matrix ordering: nd factor fill ratio given 5, needed 5.06305 tolerance for zero pivot 2.22045e-14 Factored matrix follows: matrix ordering: nd factor fill ratio given 5, needed 0 Matrix Object: Factored matrix follows: 1 MPI processes type: seqaij Matrix Object: 1 MPI processes rows=552, cols=552 type: seqaij package used to perform factorization: petsc rows=0, cols=0 total: nonzeros=106962, allocated nonzeros=106962 package used to perform factorization: petsc total number of mallocs used during MatSetValues calls =0 not using I-node routines linear system matrix = precond matrix: Matrix Object: 1 MPI processes total: nonzeros=1, allocated nonzeros=1 total number of mallocs used during MatSetValues calls =0 not using I-node routines linear system matrix = precond matrix: type: seqaij Matrix Object: rows=552, cols=552 1 MPI processes type: seqaij total: nonzeros=21126, allocated nonzeros=21126 rows=0, cols=0 total number of mallocs used during MatSetValues calls =0 total: nonzeros=0, allocated nonzeros=0 not using I-node routines total number of mallocs used during MatSetValues calls =0 - - - - - - - - - - - - - - - - - - not using I-node routines [1] number of local blocks = 1, first local block number = 1 [1] local block number 0 - - - - - - - - - - - - - - - - - - linear system matrix = precond matrix: Matrix Object: 2 MPI processes type: mpiaij rows=552, cols=552 total: nonzeros=21126, allocated nonzeros=21126 total number of mallocs used during MatSetValues calls =0 not using I-node (on process 0) routines Down solver (pre-smoother) on level 1 ------------------------------- KSP Object: (mg_levels_1_) 2 MPI processes type: chebyshev Chebyshev: 
eigenvalue estimates: min = 0.0508405, max = 5.88746 maximum iterations=2 tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using nonzero initial guess using NONE norm type for convergence test PC Object: (mg_levels_1_) 2 MPI processes type: jacobi linear system matrix = precond matrix: Matrix Object: 2 MPI processes type: mpiaij rows=60879, cols=60879 total: nonzeros=4509729, allocated nonzeros=4509729 total number of mallocs used during MatSetValues calls =0 using I-node (on process 0) routines: found 10147 nodes, limit used is 5 Up solver (post-smoother) same as down solver (pre-smoother) linear system matrix = precond matrix: Matrix Object: 2 MPI processes type: mpiaij rows=60879, cols=60879 total: nonzeros=4509729, allocated nonzeros=4509729 total number of mallocs used during MatSetValues calls =0 using I-node (on process 0) routines: found 10147 nodes, limit used is 5 Error in FEMesh_Mod::moveFEMeshPETSc() : KSP returned with error code = -8 -- Hugo Gagnon On 2013-04-15, at 5:26 PM, Mark F. Adams wrote: > Its probably not worth trying to verify the code with ILU(3) because the space of algorithms is large as Jed points out (e.g., ILU(3) does not fully define the solver unless they use the same node ordering, shifting strategies and whatever else your ILU is doing to make ILU not suck). It looks like you are doing 3D elasticity. Try > > -pc_type gamg > -pc_gamg_agg_nsmooths 1 > > assuming you have v3.3 or higher. > > > On Apr 15, 2013, at 5:15 PM, Jed Brown wrote: > >> Hugo Gagnon writes: >> >>> For the problem I'm describing my serial in-house solver does not work >>> with ILU(0) but works with ILU(3). I have no option to run Jacobi. >>> When I apply the same problem to PETSc's PC solver with ILU(3) in >>> serial I get KSP_DIVERGED_INDEFINITE_PC >> >> Does your in-house ILU(3) use a different ordering? What shift scheme >> does it use? >> >>> on the first iteration (in MPI the solution somewhat converges but >>> very slowly). >>> >>> call KSPGetPC(Pksp,Ppc,Pierr) >>> call PCSetType(Ppc,PCILU,Pierr) >>> call PCFactorSetLevels(Ppc,3,Pierr) >>> >>> This effectively changes the fill level from 0 to 3, right? >> >> This only works in serial. Check the -ksp_view output to see what is >> done. You should just call KSPSetFromOptions() and use run-time options >> to configure the solver. You can do it from code later, but writing >> code is slow to figure out what works. >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mark.adams at columbia.edu Mon Apr 15 17:07:41 2013 From: mark.adams at columbia.edu (Mark F. Adams) Date: Mon, 15 Apr 2013 18:07:41 -0400 Subject: [petsc-users] KSPCG solution blow up In-Reply-To: <97A38D9B-4DFA-47B8-88CB-13FAA89476B8@user.fastmail.fm> References: <87wqs3mufo.fsf@mcs.anl.gov> <87mwszmr5v.fsf@mcs.anl.gov> <4F4C8B0C-D801-4E06-87F4-93AABB88B8B8@columbia.edu> <97A38D9B-4DFA-47B8-88CB-13FAA89476B8@user.fastmail.fm> Message-ID: <3AA9AE23-2B68-41A9-8374-50D3DCBB252F@columbia.edu> You need to use AIJ for gamg. ML might work. You need to configure with ML. It is an external package. On Apr 15, 2013, at 5:40 PM, Hugo Gagnon wrote: > Good point, this is indeed linear elasticity. Following your suggestion I first got the following error since I now use MATBAIJ: > > [0]PCSetData_AGG bs=3 MM=30441 > [0]PETSC ERROR: --------------------- Error Message ------------------------------------ > [0]PETSC ERROR: Arguments are incompatible! 
> [0]PETSC ERROR: MatMatMult requires A, mpibaij, to be compatible with B, mpiaij! > [0]PETSC ERROR: ------------------------------------------------------------------------ > [0]PETSC ERROR: Petsc Release Version 3.3.0, Patch 6, Mon Feb 11 12:26:34 CST 2013 > [0]PETSC ERROR: See docs/changes/index.html for recent updates. > [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. > [0]PETSC ERROR: See docs/index.html for manual pages. > [0]PETSC ERROR: ------------------------------------------------------------------------ > [0]PETSC ERROR: /Users/hugo/Documents/jetstream/jetstream_x86_64 on a arch-darw named user204-27.wireless.utoronto.ca by hugo Mon Apr 15 17:31:55 2013 > [0]PETSC ERROR: Libraries linked from /Users/hugo/Documents/petsc-3.3-p6/arch-darwin-c-opt/lib > [0]PETSC ERROR: Configure run at Mon Feb 18 15:08:07 2013 > [0]PETSC ERROR: Configure options --with-debugging=0 > [0]PETSC ERROR: ------------------------------------------------------------------------ > [0]PETSC ERROR: MatMatMult() line 8617 in src/mat/interface/matrix.c > [0]PETSC ERROR: PCGAMGOptprol_AGG() line 1358 in src/ksp/pc/impls/gamg/agg.c > [0]PETSC ERROR: PCSetUp_GAMG() line 673 in src/ksp/pc/impls/gamg/gamg.c > [0]PETSC ERROR: PCSetUp() line 832 in src/ksp/pc/interface/precon.c > [0]PETSC ERROR: PCApply() line 380 in src/ksp/pc/interface/precon.c > [0]PETSC ERROR: KSPSolve_CG() line 139 in src/ksp/ksp/impls/cg/cg.c > [0]PETSC ERROR: KSPSolve() line 446 in src/ksp/ksp/interface/itfunc.c > > I tried converting the matrix to MATAIJ with: > > call MatConvert(Paa,MATAIJ,MAT_INITIAL_MATRIX,Paa2,Pierr) > > and now I have this error (with ksp_view): > > [0]PCSetData_AGG bs=1 MM=30441 > KSPCG resid. tolerance target = 1.000E-10 > KSPCG initial residual |res0| = 8.981E-02 > KSPCG iter = 0: |res|/|res0| = 1.000E+00 > KSPCG iter = 1: |res|/|res0| = 4.949E-01 > KSP Object: 2 MPI processes > type: cg > maximum iterations=4000 > tolerances: relative=1e-10, absolute=1e-50, divergence=10000 > left preconditioning > using nonzero initial guess > using PRECONDITIONED norm type for convergence test > PC Object: 2 MPI processes > type: gamg > MG: type is MULTIPLICATIVE, levels=2 cycles=v > Cycles per PCApply=1 > Using Galerkin computed coarse grid matrices > Coarse grid solver -- level ------------------------------- > KSP Object: (mg_coarse_) 2 MPI processes > type: gmres > GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement > GMRES: happy breakdown tolerance 1e-30 > maximum iterations=1, initial guess is zero > tolerances: relative=1e-05, absolute=1e-50, divergence=10000 > left preconditioning > using NONE norm type for convergence test > PC Object: (mg_coarse_) 2 MPI processes > type: bjacobi > block Jacobi: number of blocks = 2 > Local solve info for each block is in the following KSP and PC objects: > [0] number of local blocks = 1, first local block number = 0 > [0] local block number 0 > KSP Object: (mg_coarse_sub_) 1 MPI processes > type: preonly > KSP Object: (mg_coarse_sub_) maximum iterations=10000, initial guess is zero > tolerances: relative=1e-05, absolute=1e-50, divergence=10000 > 1 MPI processes > type: preonly > left preconditioning > maximum iterations=10000, initial guess is zero > using NONE norm type for convergence test > PC Object: (mg_coarse_sub_) 1 MPI processes > tolerances: relative=1e-05, absolute=1e-50, divergence=10000 > left preconditioning > using NONE norm type for convergence test > PC Object: (mg_coarse_sub_) type: lu > LU: 
out-of-place factorization > 1 MPI processes > type: lu > tolerance for zero pivot 2.22045e-14 > LU: out-of-place factorization > matrix ordering: nd > factor fill ratio given 5, needed 5.06305 > tolerance for zero pivot 2.22045e-14 > Factored matrix follows: > matrix ordering: nd > factor fill ratio given 5, needed 0 > Matrix Object: Factored matrix follows: > 1 MPI processes > type: seqaij > Matrix Object: 1 MPI processes > rows=552, cols=552 > type: seqaij > package used to perform factorization: petsc > rows=0, cols=0 > total: nonzeros=106962, allocated nonzeros=106962 > package used to perform factorization: petsc > total number of mallocs used during MatSetValues calls =0 > not using I-node routines > linear system matrix = precond matrix: > Matrix Object: 1 MPI processes > total: nonzeros=1, allocated nonzeros=1 > total number of mallocs used during MatSetValues calls =0 > not using I-node routines > linear system matrix = precond matrix: > type: seqaij > Matrix Object: rows=552, cols=552 > 1 MPI processes > type: seqaij > total: nonzeros=21126, allocated nonzeros=21126 > rows=0, cols=0 > total number of mallocs used during MatSetValues calls =0 > total: nonzeros=0, allocated nonzeros=0 > not using I-node routines > total number of mallocs used during MatSetValues calls =0 > - - - - - - - - - - - - - - - - - - > not using I-node routines > [1] number of local blocks = 1, first local block number = 1 > [1] local block number 0 > - - - - - - - - - - - - - - - - - - > linear system matrix = precond matrix: > Matrix Object: 2 MPI processes > type: mpiaij > rows=552, cols=552 > total: nonzeros=21126, allocated nonzeros=21126 > total number of mallocs used during MatSetValues calls =0 > not using I-node (on process 0) routines > Down solver (pre-smoother) on level 1 ------------------------------- > KSP Object: (mg_levels_1_) 2 MPI processes > type: chebyshev > Chebyshev: eigenvalue estimates: min = 0.0508405, max = 5.88746 > maximum iterations=2 > tolerances: relative=1e-05, absolute=1e-50, divergence=10000 > left preconditioning > using nonzero initial guess > using NONE norm type for convergence test > PC Object: (mg_levels_1_) 2 MPI processes > type: jacobi > linear system matrix = precond matrix: > Matrix Object: 2 MPI processes > type: mpiaij > rows=60879, cols=60879 > total: nonzeros=4509729, allocated nonzeros=4509729 > total number of mallocs used during MatSetValues calls =0 > using I-node (on process 0) routines: found 10147 nodes, limit used is 5 > Up solver (post-smoother) same as down solver (pre-smoother) > linear system matrix = precond matrix: > Matrix Object: 2 MPI processes > type: mpiaij > rows=60879, cols=60879 > total: nonzeros=4509729, allocated nonzeros=4509729 > total number of mallocs used during MatSetValues calls =0 > using I-node (on process 0) routines: found 10147 nodes, limit used is 5 > Error in FEMesh_Mod::moveFEMeshPETSc() : KSP returned with error code = -8 > > -- > Hugo Gagnon > > On 2013-04-15, at 5:26 PM, Mark F. Adams wrote: > >> Its probably not worth trying to verify the code with ILU(3) because the space of algorithms is large as Jed points out (e.g., ILU(3) does not fully define the solver unless they use the same node ordering, shifting strategies and whatever else your ILU is doing to make ILU not suck). It looks like you are doing 3D elasticity. Try >> >> -pc_type gamg >> -pc_gamg_agg_nsmooths 1 >> >> assuming you have v3.3 or higher. 
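To make the two suggestions in this thread concrete (assemble, or MatConvert as above, the operator as AIJ, then pick the solver at run time with KSPSetFromOptions), here is a minimal Fortran sketch. It reuses the P-prefixed names from the thread (Paa, Pksp, Pb, Px, Pierr); the subroutine name and everything not shown, such as matrix assembly and vector creation, are assumed, and this is not Hugo's actual code:

      subroutine solve_mesh_motion(Paa, Pb, Px, Pierr)
      implicit none
#include <finclude/petscsys.h>
#include <finclude/petscvec.h>
#include <finclude/petscmat.h>
#include <finclude/petscksp.h>
#include <finclude/petscpc.h>
      Mat            Paa        ! operator already assembled (or converted) as MPIAIJ
      Vec            Pb, Px     ! right-hand side and solution
      KSP            Pksp
      PetscErrorCode Pierr

      call KSPCreate(PETSC_COMM_WORLD, Pksp, Pierr)
      call KSPSetOperators(Pksp, Paa, Paa, SAME_NONZERO_PATTERN, Pierr)
      ! Leave every solver choice to the command line, for example
      !   -ksp_type cg -pc_type gamg -pc_gamg_agg_nsmooths 1 -ksp_view
      call KSPSetFromOptions(Pksp, Pierr)
      call KSPSolve(Pksp, Pb, Px, Pierr)
      call KSPDestroy(Pksp, Pierr)
      end subroutine solve_mesh_motion

With something like this in place, -pc_type gamg -pc_gamg_agg_nsmooths 1 (or -pc_type ml, if PETSc was configured with ML) can be tried from the command line without recompiling.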
>> >> >> On Apr 15, 2013, at 5:15 PM, Jed Brown wrote: >> >>> Hugo Gagnon writes: >>> >>>> For the problem I'm describing my serial in-house solver does not work >>>> with ILU(0) but works with ILU(3). I have no option to run Jacobi. >>>> When I apply the same problem to PETSc's PC solver with ILU(3) in >>>> serial I get KSP_DIVERGED_INDEFINITE_PC >>> >>> Does your in-house ILU(3) use a different ordering? What shift scheme >>> does it use? >>> >>>> on the first iteration (in MPI the solution somewhat converges but >>>> very slowly). >>>> >>>> call KSPGetPC(Pksp,Ppc,Pierr) >>>> call PCSetType(Ppc,PCILU,Pierr) >>>> call PCFactorSetLevels(Ppc,3,Pierr) >>>> >>>> This effectively changes the fill level from 0 to 3, right? >>> >>> This only works in serial. Check the -ksp_view output to see what is >>> done. You should just call KSPSetFromOptions() and use run-time options >>> to configure the solver. You can do it from code later, but writing >>> code is slow to figure out what works. >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Mon Apr 15 18:56:27 2013 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 15 Apr 2013 18:56:27 -0500 Subject: [petsc-users] dmsnessetfunctionlocal In-Reply-To: References: Message-ID: On Mon, Apr 15, 2013 at 4:04 PM, Dharmendar Reddy wrote: > Hello, > I am getting undefined reference errors on using > dmsnesesetfunctionlocal and dmsnessetjacobianlocal. Are fortran interfaces > to these functions missing ? I had no problem using snessetfunction before. > > Solver.F90:(.text+0xaa1): undefined reference to `dmsnessetjacobianlocal_' > Solver.F90:(.text+0xabc): undefined reference to `dmsnessetfunctionlocal_' > I just pushed these to 'next'. I have no test, so let me know if they work. Thanks, Matt > Also, petscsectiongetconstraintdof is giving undefined refernce error > > FEMModules.F90:(.text+0xc048): undefined reference to > `petscsectiongetconstraintdof_ > > Please help > > thanks > reddy > > > -- > ----------------------------------------------------- > Dharmendar Reddy Palle > Graduate Student > Microelectronics Research center, > University of Texas at Austin, > 10100 Burnet Road, Bldg. 160 > MER 2.608F, TX 78758-4445 > e-mail: dharmareddy84 at gmail.com > Phone: +1-512-350-9082 > United States of America. > Homepage: https://webspace.utexas.edu/~dpr342 > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Mon Apr 15 19:08:53 2013 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 15 Apr 2013 19:08:53 -0500 Subject: [petsc-users] dmsnessetfunctionlocal In-Reply-To: References: Message-ID: On Mon, Apr 15, 2013 at 4:04 PM, Dharmendar Reddy wrote: > Hello, > I am getting undefined reference errors on using > dmsnesesetfunctionlocal and dmsnessetjacobianlocal. Are fortran interfaces > to these functions missing ? I had no problem using snessetfunction before. > > > > Solver.F90:(.text+0xaa1): undefined reference to `dmsnessetjacobianlocal_' > Solver.F90:(.text+0xabc): undefined reference to `dmsnessetfunctionlocal_' > > Also, petscsectiongetconstraintdof is giving undefined refernce error > > FEMModules.F90:(.text+0xc048): undefined reference to > `petscsectiongetconstraintdof_ > Fixed. 
Matt > Please help > > thanks > reddy > > > -- > ----------------------------------------------------- > Dharmendar Reddy Palle > Graduate Student > Microelectronics Research center, > University of Texas at Austin, > 10100 Burnet Road, Bldg. 160 > MER 2.608F, TX 78758-4445 > e-mail: dharmareddy84 at gmail.com > Phone: +1-512-350-9082 > United States of America. > Homepage: https://webspace.utexas.edu/~dpr342 > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Mon Apr 15 19:20:23 2013 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 15 Apr 2013 19:20:23 -0500 Subject: [petsc-users] DMPlexSetClosure In-Reply-To: References: Message-ID: On Sat, Apr 13, 2013 at 9:51 PM, Dharmendar Reddy wrote: > Hello, > Got it. I understand the reason for errors. I was using XXXSetF90 > functions in my code so i was using allocatable arrays. I thought all > set/getvlaues had corresponding F90 functions. I was trying to define and > use things consistently in the code. > I can fix the compile errors using pointers now. > > Now, can i request for Fortran interface for DMPlexMatSetClosure ? > Pushed. > will you be adding Fortran interfaces to the functions listed below ? > > FEMModules.F90:(.text+0xbfe0): undefined reference to > `dmplexgetdefaultsection_' > This is just DMGetDefaultSection(). Matt > FEMModules.F90:(.text+0xc048): undefined reference to > `petscsectiongetconstraintdof_ > > Solver.F90:(.text+0xaa1): undefined reference to `dmsnessetjacobianlocal_' > Solver.F90:(.text+0xabc): undefined reference to `dmsnessetfunctionlocal_' > > Thanks > Reddy > > > On Sat, Apr 13, 2013 at 9:32 PM, Matthew Knepley wrote: > >> On Sat, Apr 13, 2013 at 9:18 PM, Dharmendar Reddy < >> dharmareddy84 at gmail.com> wrote: >> >>> I am getting bunch of erros in my code related to DMPlex >>> >>> If i use DMPlexVecSetClosure I get the following error. >>> >>> A pointer dummy argument may only be argument associated with a >>> pointer. [FELM] >>> call >>> DMPlexVecSetClosure(dm,PETSC_NULL_OBJECT,F,cellId,Felm,ADD_VALUES,ierr) >>> >>> Felm is defined as : PetscScalar,allocatable :: Felm(:) >>> >> >> Did you look at the sample code? >> >> >> http://www.mcs.anl.gov/petsc/petsc-dev/src/dm/impls/plex/examples/tests/ex2f90.F.html >> >> You define pointers. You can see what function I have defined by looking >> at the header >> >> >> https://bitbucket.org/petsc/petsc/src/62a20339e027b37fab44424f1466054586f1dc85/include/finclude/ftn-custom/petscdmplex.h90?at=master >> >> and its clear from the file that DMPlexMatSetClosure() has not been >> defined in Fortran. >> >> Matt >> >> >>> I do a similar call to DMPlexMatSetClosure, i get no error. 
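To make the fix quoted above concrete: the compile error ("A pointer dummy argument may only be argument associated with a pointer") goes away once Felm is declared as a pointer rather than an allocatable, which is what the F90 interface in petscdmplex.h90 expects. The following is only a sketch of a fragment inside a routine that already has the PETSc includes; dm, F, cellId, PETSC_NULL_OBJECT and the call itself come from the quoted code, while nDofCell and the fill-in are assumed:

      PetscScalar, pointer :: Felm(:)       ! element vector: pointer, not allocatable
      PetscInt             :: cellId
      PetscErrorCode       :: ierr
      PetscInt, parameter  :: nDofCell = 4  ! assumed element dof count

      allocate(Felm(nDofCell))              ! pointer arrays can be allocated too
      Felm = 0.0                            ! fill with the element contribution here
      call DMPlexVecSetClosure(dm, PETSC_NULL_OBJECT, F, cellId, Felm, &
                               ADD_VALUES, ierr)
      deallocate(Felm)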
>>> >>> Now if i use DMPlexVecSetClosureF90, code compiles, but i see undefined >>> reference error during link stage: >>> >>> FEMModules.F90:(.text+0xba77): undefined reference to >>> `dmplexvecsetclosuref90_' >>> >>> FEMModules.F90:(.text+0xbea9): undefined reference to >>> `dmplexmatsetclosure_' >>> >>> FEMModules.F90:(.text+0xbfe0): undefined reference to >>> `dmplexgetdefaultsection_' >>> FEMModules.F90:(.text+0xc048): undefined reference to >>> `petscsectiongetconstraintdof_ >>> >>> Solver.F90:(.text+0xaa1): undefined reference to >>> `dmsnessetjacobianlocal_' >>> Solver.F90:(.text+0xabc): undefined reference to >>> `dmsnessetfunctionlocal_' >>> >>> >>> >>> >>> On Sat, Apr 13, 2013 at 8:22 PM, Matthew Knepley wrote: >>> >>>> On Sat, Apr 13, 2013 at 8:20 PM, Dharmendar Reddy < >>>> dharmareddy84 at gmail.com> wrote: >>>> >>>>> Hello, >>>>> I am getting an undefined reference error :: >>>>> FEMModules.F90:(.text+0xba77): undefined reference to >>>>> `dmplexvecsetclosuref90_' >>>>> FEMModules.F90:(.text+0xbea9): undefined reference to >>>>> `dmplexmatsetclosuref90_' >>>>> >>>>> I can see that DMPlexVecSetClosure is defined in >>>>> >>>>> /finclude/ftn-custom/petscdmplex.h90:159: >>>>> >>>>> but the name is DMPlexVecSetClosure instead of DMPlexVecSetClosureF90. >>>>> >>>>> And there is no DMPlexMatSetClosureF90 >>>>> >>>>> >>>>> What should i do ? >>>>> >>>> >>>> I was not consistent here with the naming. Since an F77 version was not >>>> possible, I did not >>>> add F90. That is probably wrong, however I would like to scrap the F77 >>>> version of Plex since >>>> everyone uses F90 now and the extra letters are annoying. Go ahead and >>>> use the function. >>>> >>>> Matt >>>> >>>> >>>>> Thanks >>>>> Reddy >>>>> -- >>>>> ----------------------------------------------------- >>>>> Dharmendar Reddy Palle >>>>> Graduate Student >>>>> Microelectronics Research center, >>>>> University of Texas at Austin, >>>>> 10100 Burnet Road, Bldg. 160 >>>>> MER 2.608F, TX 78758-4445 >>>>> e-mail: dharmareddy84 at gmail.com >>>>> Phone: +1-512-350-9082 >>>>> United States of America. >>>>> Homepage: https://webspace.utexas.edu/~dpr342 >>>>> >>>> >>>> >>>> >>>> -- >>>> What most experimenters take for granted before they begin their >>>> experiments is infinitely more interesting than any results to which their >>>> experiments lead. >>>> -- Norbert Wiener >>>> >>> >>> >>> >>> -- >>> ----------------------------------------------------- >>> Dharmendar Reddy Palle >>> Graduate Student >>> Microelectronics Research center, >>> University of Texas at Austin, >>> 10100 Burnet Road, Bldg. 160 >>> MER 2.608F, TX 78758-4445 >>> e-mail: dharmareddy84 at gmail.com >>> Phone: +1-512-350-9082 >>> United States of America. >>> Homepage: https://webspace.utexas.edu/~dpr342 >>> >> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> > > > > -- > ----------------------------------------------------- > Dharmendar Reddy Palle > Graduate Student > Microelectronics Research center, > University of Texas at Austin, > 10100 Burnet Road, Bldg. 160 > MER 2.608F, TX 78758-4445 > e-mail: dharmareddy84 at gmail.com > Phone: +1-512-350-9082 > United States of America. > Homepage: https://webspace.utexas.edu/~dpr342 > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. 
-- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From dharmareddy84 at gmail.com Mon Apr 15 19:36:31 2013 From: dharmareddy84 at gmail.com (Dharmendar Reddy) Date: Mon, 15 Apr 2013 19:36:31 -0500 Subject: [petsc-users] DMPlexSetClosure In-Reply-To: References: Message-ID: I did realize that after some digging. I fixed that part. Thanks for the updates to dmsneslocal. If i am on the branch next, do i get access to all the changes you push to knepley/plex ? On Mon, Apr 15, 2013 at 7:20 PM, Matthew Knepley wrote: > On Sat, Apr 13, 2013 at 9:51 PM, Dharmendar Reddy > wrote: > >> Hello, >> Got it. I understand the reason for errors. I was using >> XXXSetF90 functions in my code so i was using allocatable arrays. I thought >> all set/getvlaues had corresponding F90 functions. I was trying to define >> and use things consistently in the code. >> I can fix the compile errors using pointers now. >> >> Now, can i request for Fortran interface for DMPlexMatSetClosure ? >> > > Pushed. > > > >> will you be adding Fortran interfaces to the functions listed below ? >> >> FEMModules.F90:(.text+0xbfe0): undefined reference to >> `dmplexgetdefaultsection_' >> > > This is just DMGetDefaultSection(). > > Matt > > >> FEMModules.F90:(.text+0xc048): undefined reference to >> `petscsectiongetconstraintdof_ >> >> Solver.F90:(.text+0xaa1): undefined reference to `dmsnessetjacobianlocal_' >> Solver.F90:(.text+0xabc): undefined reference to `dmsnessetfunctionlocal_' >> >> Thanks >> Reddy >> >> >> On Sat, Apr 13, 2013 at 9:32 PM, Matthew Knepley wrote: >> >>> On Sat, Apr 13, 2013 at 9:18 PM, Dharmendar Reddy < >>> dharmareddy84 at gmail.com> wrote: >>> >>>> I am getting bunch of erros in my code related to DMPlex >>>> >>>> If i use DMPlexVecSetClosure I get the following error. >>>> >>>> A pointer dummy argument may only be argument associated with a >>>> pointer. [FELM] >>>> call >>>> DMPlexVecSetClosure(dm,PETSC_NULL_OBJECT,F,cellId,Felm,ADD_VALUES,ierr) >>>> >>>> Felm is defined as : PetscScalar,allocatable :: Felm(:) >>>> >>> >>> Did you look at the sample code? >>> >>> >>> http://www.mcs.anl.gov/petsc/petsc-dev/src/dm/impls/plex/examples/tests/ex2f90.F.html >>> >>> You define pointers. You can see what function I have defined by looking >>> at the header >>> >>> >>> https://bitbucket.org/petsc/petsc/src/62a20339e027b37fab44424f1466054586f1dc85/include/finclude/ftn-custom/petscdmplex.h90?at=master >>> >>> and its clear from the file that DMPlexMatSetClosure() has not been >>> defined in Fortran. >>> >>> Matt >>> >>> >>>> I do a similar call to DMPlexMatSetClosure, i get no error. 
>>>> >>>> Now if i use DMPlexVecSetClosureF90, code compiles, but i see undefined >>>> reference error during link stage: >>>> >>>> FEMModules.F90:(.text+0xba77): undefined reference to >>>> `dmplexvecsetclosuref90_' >>>> >>>> FEMModules.F90:(.text+0xbea9): undefined reference to >>>> `dmplexmatsetclosure_' >>>> >>>> FEMModules.F90:(.text+0xbfe0): undefined reference to >>>> `dmplexgetdefaultsection_' >>>> FEMModules.F90:(.text+0xc048): undefined reference to >>>> `petscsectiongetconstraintdof_ >>>> >>>> Solver.F90:(.text+0xaa1): undefined reference to >>>> `dmsnessetjacobianlocal_' >>>> Solver.F90:(.text+0xabc): undefined reference to >>>> `dmsnessetfunctionlocal_' >>>> >>>> >>>> >>>> >>>> On Sat, Apr 13, 2013 at 8:22 PM, Matthew Knepley wrote: >>>> >>>>> On Sat, Apr 13, 2013 at 8:20 PM, Dharmendar Reddy < >>>>> dharmareddy84 at gmail.com> wrote: >>>>> >>>>>> Hello, >>>>>> I am getting an undefined reference error :: >>>>>> FEMModules.F90:(.text+0xba77): undefined reference to >>>>>> `dmplexvecsetclosuref90_' >>>>>> FEMModules.F90:(.text+0xbea9): undefined reference to >>>>>> `dmplexmatsetclosuref90_' >>>>>> >>>>>> I can see that DMPlexVecSetClosure is defined in >>>>>> >>>>>> /finclude/ftn-custom/petscdmplex.h90:159: >>>>>> >>>>>> but the name is DMPlexVecSetClosure instead of DMPlexVecSetClosureF90. >>>>>> >>>>>> And there is no DMPlexMatSetClosureF90 >>>>>> >>>>>> >>>>>> What should i do ? >>>>>> >>>>> >>>>> I was not consistent here with the naming. Since an F77 version was >>>>> not possible, I did not >>>>> add F90. That is probably wrong, however I would like to scrap the F77 >>>>> version of Plex since >>>>> everyone uses F90 now and the extra letters are annoying. Go ahead and >>>>> use the function. >>>>> >>>>> Matt >>>>> >>>>> >>>>>> Thanks >>>>>> Reddy >>>>>> -- >>>>>> ----------------------------------------------------- >>>>>> Dharmendar Reddy Palle >>>>>> Graduate Student >>>>>> Microelectronics Research center, >>>>>> University of Texas at Austin, >>>>>> 10100 Burnet Road, Bldg. 160 >>>>>> MER 2.608F, TX 78758-4445 >>>>>> e-mail: dharmareddy84 at gmail.com >>>>>> Phone: +1-512-350-9082 >>>>>> United States of America. >>>>>> Homepage: https://webspace.utexas.edu/~dpr342 >>>>>> >>>>> >>>>> >>>>> >>>>> -- >>>>> What most experimenters take for granted before they begin their >>>>> experiments is infinitely more interesting than any results to which their >>>>> experiments lead. >>>>> -- Norbert Wiener >>>>> >>>> >>>> >>>> >>>> -- >>>> ----------------------------------------------------- >>>> Dharmendar Reddy Palle >>>> Graduate Student >>>> Microelectronics Research center, >>>> University of Texas at Austin, >>>> 10100 Burnet Road, Bldg. 160 >>>> MER 2.608F, TX 78758-4445 >>>> e-mail: dharmareddy84 at gmail.com >>>> Phone: +1-512-350-9082 >>>> United States of America. >>>> Homepage: https://webspace.utexas.edu/~dpr342 >>>> >>> >>> >>> >>> -- >>> What most experimenters take for granted before they begin their >>> experiments is infinitely more interesting than any results to which their >>> experiments lead. >>> -- Norbert Wiener >>> >> >> >> >> -- >> ----------------------------------------------------- >> Dharmendar Reddy Palle >> Graduate Student >> Microelectronics Research center, >> University of Texas at Austin, >> 10100 Burnet Road, Bldg. 160 >> MER 2.608F, TX 78758-4445 >> e-mail: dharmareddy84 at gmail.com >> Phone: +1-512-350-9082 >> United States of America. 
>> Homepage: https://webspace.utexas.edu/~dpr342 >> > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > -- ----------------------------------------------------- Dharmendar Reddy Palle Graduate Student Microelectronics Research center, University of Texas at Austin, 10100 Burnet Road, Bldg. 160 MER 2.608F, TX 78758-4445 e-mail: dharmareddy84 at gmail.com Phone: +1-512-350-9082 United States of America. Homepage: https://webspace.utexas.edu/~dpr342 -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Mon Apr 15 19:41:06 2013 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 15 Apr 2013 19:41:06 -0500 Subject: [petsc-users] DMPlexSetClosure In-Reply-To: References: Message-ID: On Mon, Apr 15, 2013 at 7:36 PM, Dharmendar Reddy wrote: > I did realize that after some digging. I fixed that part. Thanks for the > updates to dmsneslocal. > > If i am on the branch next, do i get access to all the changes you push to > knepley/plex ? > Yes, once I merge it into next. Matt > > On Mon, Apr 15, 2013 at 7:20 PM, Matthew Knepley wrote: > >> On Sat, Apr 13, 2013 at 9:51 PM, Dharmendar Reddy < >> dharmareddy84 at gmail.com> wrote: >> >>> Hello, >>> Got it. I understand the reason for errors. I was using >>> XXXSetF90 functions in my code so i was using allocatable arrays. I thought >>> all set/getvlaues had corresponding F90 functions. I was trying to define >>> and use things consistently in the code. >>> I can fix the compile errors using pointers now. >>> >>> Now, can i request for Fortran interface for DMPlexMatSetClosure ? >>> >> >> Pushed. >> >> >> >>> will you be adding Fortran interfaces to the functions listed below ? >>> >>> FEMModules.F90:(.text+0xbfe0): undefined reference to >>> `dmplexgetdefaultsection_' >>> >> >> This is just DMGetDefaultSection(). >> >> Matt >> >> >>> FEMModules.F90:(.text+0xc048): undefined reference to >>> `petscsectiongetconstraintdof_ >>> >>> Solver.F90:(.text+0xaa1): undefined reference to >>> `dmsnessetjacobianlocal_' >>> Solver.F90:(.text+0xabc): undefined reference to >>> `dmsnessetfunctionlocal_' >>> >>> Thanks >>> Reddy >>> >>> >>> On Sat, Apr 13, 2013 at 9:32 PM, Matthew Knepley wrote: >>> >>>> On Sat, Apr 13, 2013 at 9:18 PM, Dharmendar Reddy < >>>> dharmareddy84 at gmail.com> wrote: >>>> >>>>> I am getting bunch of erros in my code related to DMPlex >>>>> >>>>> If i use DMPlexVecSetClosure I get the following error. >>>>> >>>>> A pointer dummy argument may only be argument associated with a >>>>> pointer. [FELM] >>>>> call >>>>> DMPlexVecSetClosure(dm,PETSC_NULL_OBJECT,F,cellId,Felm,ADD_VALUES,ierr) >>>>> >>>>> Felm is defined as : PetscScalar,allocatable :: Felm(:) >>>>> >>>> >>>> Did you look at the sample code? >>>> >>>> >>>> http://www.mcs.anl.gov/petsc/petsc-dev/src/dm/impls/plex/examples/tests/ex2f90.F.html >>>> >>>> You define pointers. You can see what function I have defined by >>>> looking at the header >>>> >>>> >>>> https://bitbucket.org/petsc/petsc/src/62a20339e027b37fab44424f1466054586f1dc85/include/finclude/ftn-custom/petscdmplex.h90?at=master >>>> >>>> and its clear from the file that DMPlexMatSetClosure() has not been >>>> defined in Fortran. >>>> >>>> Matt >>>> >>>> >>>>> I do a similar call to DMPlexMatSetClosure, i get no error. 
>>>>> >>>>> Now if i use DMPlexVecSetClosureF90, code compiles, but i see >>>>> undefined reference error during link stage: >>>>> >>>>> FEMModules.F90:(.text+0xba77): undefined reference to >>>>> `dmplexvecsetclosuref90_' >>>>> >>>>> FEMModules.F90:(.text+0xbea9): undefined reference to >>>>> `dmplexmatsetclosure_' >>>>> >>>>> FEMModules.F90:(.text+0xbfe0): undefined reference to >>>>> `dmplexgetdefaultsection_' >>>>> FEMModules.F90:(.text+0xc048): undefined reference to >>>>> `petscsectiongetconstraintdof_ >>>>> >>>>> Solver.F90:(.text+0xaa1): undefined reference to >>>>> `dmsnessetjacobianlocal_' >>>>> Solver.F90:(.text+0xabc): undefined reference to >>>>> `dmsnessetfunctionlocal_' >>>>> >>>>> >>>>> >>>>> >>>>> On Sat, Apr 13, 2013 at 8:22 PM, Matthew Knepley wrote: >>>>> >>>>>> On Sat, Apr 13, 2013 at 8:20 PM, Dharmendar Reddy < >>>>>> dharmareddy84 at gmail.com> wrote: >>>>>> >>>>>>> Hello, >>>>>>> I am getting an undefined reference error :: >>>>>>> FEMModules.F90:(.text+0xba77): undefined reference to >>>>>>> `dmplexvecsetclosuref90_' >>>>>>> FEMModules.F90:(.text+0xbea9): undefined reference to >>>>>>> `dmplexmatsetclosuref90_' >>>>>>> >>>>>>> I can see that DMPlexVecSetClosure is defined in >>>>>>> >>>>>>> /finclude/ftn-custom/petscdmplex.h90:159: >>>>>>> >>>>>>> but the name is DMPlexVecSetClosure instead of >>>>>>> DMPlexVecSetClosureF90. >>>>>>> >>>>>>> And there is no DMPlexMatSetClosureF90 >>>>>>> >>>>>>> >>>>>>> What should i do ? >>>>>>> >>>>>> >>>>>> I was not consistent here with the naming. Since an F77 version was >>>>>> not possible, I did not >>>>>> add F90. That is probably wrong, however I would like to scrap the >>>>>> F77 version of Plex since >>>>>> everyone uses F90 now and the extra letters are annoying. Go ahead >>>>>> and use the function. >>>>>> >>>>>> Matt >>>>>> >>>>>> >>>>>>> Thanks >>>>>>> Reddy >>>>>>> -- >>>>>>> ----------------------------------------------------- >>>>>>> Dharmendar Reddy Palle >>>>>>> Graduate Student >>>>>>> Microelectronics Research center, >>>>>>> University of Texas at Austin, >>>>>>> 10100 Burnet Road, Bldg. 160 >>>>>>> MER 2.608F, TX 78758-4445 >>>>>>> e-mail: dharmareddy84 at gmail.com >>>>>>> Phone: +1-512-350-9082 >>>>>>> United States of America. >>>>>>> Homepage: https://webspace.utexas.edu/~dpr342 >>>>>>> >>>>>> >>>>>> >>>>>> >>>>>> -- >>>>>> What most experimenters take for granted before they begin their >>>>>> experiments is infinitely more interesting than any results to which their >>>>>> experiments lead. >>>>>> -- Norbert Wiener >>>>>> >>>>> >>>>> >>>>> >>>>> -- >>>>> ----------------------------------------------------- >>>>> Dharmendar Reddy Palle >>>>> Graduate Student >>>>> Microelectronics Research center, >>>>> University of Texas at Austin, >>>>> 10100 Burnet Road, Bldg. 160 >>>>> MER 2.608F, TX 78758-4445 >>>>> e-mail: dharmareddy84 at gmail.com >>>>> Phone: +1-512-350-9082 >>>>> United States of America. >>>>> Homepage: https://webspace.utexas.edu/~dpr342 >>>>> >>>> >>>> >>>> >>>> -- >>>> What most experimenters take for granted before they begin their >>>> experiments is infinitely more interesting than any results to which their >>>> experiments lead. >>>> -- Norbert Wiener >>>> >>> >>> >>> >>> -- >>> ----------------------------------------------------- >>> Dharmendar Reddy Palle >>> Graduate Student >>> Microelectronics Research center, >>> University of Texas at Austin, >>> 10100 Burnet Road, Bldg. 
160 >>> MER 2.608F, TX 78758-4445 >>> e-mail: dharmareddy84 at gmail.com >>> Phone: +1-512-350-9082 >>> United States of America. >>> Homepage: https://webspace.utexas.edu/~dpr342 >>> >> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> > > > > -- > ----------------------------------------------------- > Dharmendar Reddy Palle > Graduate Student > Microelectronics Research center, > University of Texas at Austin, > 10100 Burnet Road, Bldg. 160 > MER 2.608F, TX 78758-4445 > e-mail: dharmareddy84 at gmail.com > Phone: +1-512-350-9082 > United States of America. > Homepage: https://webspace.utexas.edu/~dpr342 > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From dharmareddy84 at gmail.com Mon Apr 15 21:01:08 2013 From: dharmareddy84 at gmail.com (Dharmendar Reddy) Date: Mon, 15 Apr 2013 21:01:08 -0500 Subject: [petsc-users] dmsnessetfunctionlocal In-Reply-To: References: Message-ID: Hello, Still getting an undefined reference error. I have attached a test code to reproduce the link error. Thanks Reddy On Mon, Apr 15, 2013 at 6:56 PM, Matthew Knepley wrote: > On Mon, Apr 15, 2013 at 4:04 PM, Dharmendar Reddy > wrote: > >> Hello, >> I am getting undefined reference errors on using >> dmsnesesetfunctionlocal and dmsnessetjacobianlocal. Are fortran interfaces >> to these functions missing ? I had no problem using snessetfunction before. >> >> Solver.F90:(.text+0xaa1): undefined reference to `dmsnessetjacobianlocal_' >> Solver.F90:(.text+0xabc): undefined reference to `dmsnessetfunctionlocal_' >> > > I just pushed these to 'next'. I have no test, so let me know if they work. > > Thanks, > > Matt > > >> Also, petscsectiongetconstraintdof is giving undefined refernce error >> >> FEMModules.F90:(.text+0xc048): undefined reference to >> `petscsectiongetconstraintdof_ >> >> Please help >> >> thanks >> reddy >> >> >> -- >> ----------------------------------------------------- >> Dharmendar Reddy Palle >> Graduate Student >> Microelectronics Research center, >> University of Texas at Austin, >> 10100 Burnet Road, Bldg. 160 >> MER 2.608F, TX 78758-4445 >> e-mail: dharmareddy84 at gmail.com >> Phone: +1-512-350-9082 >> United States of America. >> Homepage: https://webspace.utexas.edu/~dpr342 >> > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > -- ----------------------------------------------------- Dharmendar Reddy Palle Graduate Student Microelectronics Research center, University of Texas at Austin, 10100 Burnet Road, Bldg. 160 MER 2.608F, TX 78758-4445 e-mail: dharmareddy84 at gmail.com Phone: +1-512-350-9082 United States of America. Homepage: https://webspace.utexas.edu/~dpr342 -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: testDMSnesLocal.F90 Type: application/octet-stream Size: 1127 bytes Desc: not available URL: From jedbrown at mcs.anl.gov Mon Apr 15 21:23:17 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Mon, 15 Apr 2013 21:23:17 -0500 Subject: [petsc-users] dmsnessetfunctionlocal In-Reply-To: References: Message-ID: <87txn7kycq.fsf@mcs.anl.gov> Dharmendar Reddy writes: > Hello, > Still getting an undefined reference error. I have attached a test > code to reproduce the link error. Thanks for the test code. The fix is in 'next' now (file just wasn't being compiled). This provides Fortran bindings for DMSNESSetFunctionLocal (!) and DMSNESSetJacobianLocal, which are what your code is written for. I'll add DMSNESSetFunction/DMSNESSetJacobian as well. From jedbrown at mcs.anl.gov Mon Apr 15 21:45:31 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Mon, 15 Apr 2013 21:45:31 -0500 Subject: [petsc-users] dmsnessetfunctionlocal In-Reply-To: <87txn7kycq.fsf@mcs.anl.gov> References: <87txn7kycq.fsf@mcs.anl.gov> Message-ID: <87r4ibkxbo.fsf@mcs.anl.gov> Jed Brown writes: > Thanks for the test code. The fix is in 'next' now (file just wasn't > being compiled). This provides Fortran bindings for > DMSNESSetFunctionLocal (!) and DMSNESSetJacobianLocal, which are what > your code is written for. I'll add DMSNESSetFunction/DMSNESSetJacobian > as well. Pushed to 'next'. It's in 'jed/dmsneslocal-fortran' if anyone needs to merge it. From jroman at dsic.upv.es Tue Apr 16 01:54:17 2013 From: jroman at dsic.upv.es (Jose E. Roman) Date: Tue, 16 Apr 2013 08:54:17 +0200 Subject: [petsc-users] algorithm to get pure real eigen valule for general eigenvalue problem(Non-hermitian type) In-Reply-To: References: , , <7A3E1852-8C57-41B2-BB03-3423328C79C9@dsic.upv.es> <85C55751-4131-4365-BBF0-5351D39D2E0B@chalmers.se> <1ECBC813-436E-45B8-B1ED-5CA9A83A4E6D@dsic.upv.es> , Message-ID: El 15/04/2013, a las 23:18, Zhang Wei escribi?: > hi > Thanks very much for your help! > Indeed, I am dealing with linearised Navier Stokes. Actually the algorithm is already tested in a FEM code by directly using Arpack. I tried slepc with wrapped Arpack. I noticed that the shit between each iteration is controlled by the number of converged eigenvalue and a predefined portion,which is the same as restart parameter "keep" in krylovschur solver, but can't be changed. While in the FEM code, the shift value is exact one. But I can't find different in the implementation. Could it be the problem? > > Yours Sincerely > ------------------------ > Wei Zhang > Ph.D > Hydrodynamic Group > Dept. of Shipping and Marine Technology > Chalmers University of Technology > Sweden > Phone:+46-31 772 2703 Which shift? Do you mean doing shift-and-invert? This has nothing to do with the restart parameter. Jose From dharmareddy84 at gmail.com Tue Apr 16 02:52:08 2013 From: dharmareddy84 at gmail.com (Dharmendar Reddy) Date: Tue, 16 Apr 2013 02:52:08 -0500 Subject: [petsc-users] Segfault in DMPlexCreateSectionF90 Message-ID: Hello, I am getting a segfault in DMPlexcreateSectionF90 ...I am not sure if the problem is in parameters i pass or it is coming from petsc...can you help me fix this ? Here is the output from debugger: I print the variables passed to function before call and they seem consistent with the format required by the DMPlexCreateSectionF90 call... 
Breakpoint 1.1, FEMLIB_M::setupdefualtsection_dofmap (this=0x2781e28) at /home1/00924/R eddy135/projects/utgds/src/fem/VariationalProblemBoundProcedures.F90:197 197 call DMPlexCreateSectionF90(this%mdata%Meshdm,this%tdim, this%numField, & (idb) s 198 this%numComp,this%cellDofMap, this%numBC, this%bcFieldId, & (idb) s 199 this%bcPointIS,section,ierr) (idb) print this%tdim $25 = 1 (idb) print this%numField $26 = 1 (idb) print this%numcomp $27 = 1 (idb) print this%cellDofMap $28 = {1, 0} (idb) print this%numBC $29 = 2 (idb) c Continuing. Breakpoint 1.2, FEMLIB_M::setupdefualtsection_dofmap (this=0x2781e28) at /home1/00924/R eddy135/projects/utgds/src/fem/VariationalProblemBoundProcedures.F90:197 197 call DMPlexCreateSectionF90(this%mdata%Meshdm,this%tdim, this%numField, & (idb) c Continuing. Breakpoint 1.3, FEMLIB_M::setupdefualtsection_dofmap (this=0x2781e28) at /home1/00924/R eddy135/projects/utgds/src/fem/VariationalProblemBoundProcedures.F90:197 197 call DMPlexCreateSectionF90(this%mdata%Meshdm,this%tdim, this%numField, & (idb) c Continuing. Now i see a segfault in inside the petsc call Program received signal SIGSEGV DMPlexCreateSectionInitial (dm=0x281cfa0, dim=1, numFields=1, numComp=0x43ce7cf20000000 1, numDof=0x1, section=0x7fffa07c8d28) at /home1/00924/Reddy135/LocalApps/petsc/src/dm/ impls/plex/plex.c:5892 5892 for (f = 0; f < numFields; ++f) numDofTot[d] += numDof[f*(dim+1)+d]; why is the adress of numDof 0x1 ? some thing wrong in the FortranAddress conversion code in zplexf90 ? here is the backtrace #0 0x00002b79e6cec09a in DMPlexCreateSectionInitial (dm=0x281cfa0, dim=1, numFields=1, numComp=0x43ce7cf200000001, numDof=0x1, section=0x7fffa07c8d28) at /home1/00924/Reddy1 35/LocalApps/petsc/src/dm/impls/plex/plex.c:5892 #1 0x00002b79e6cf1038 in DMPlexCreateSection (dm=0x281cfa0, dim=1, numFields=1, numCom p=0x43ce7cf200000001, numDof=0x1, numBC=2, bcField=0xc420000000000000, bcPoints=0xc4200 00002a66e21, section=0x7fffa07c8d28) at /home1/00924/Reddy135/LocalApps/petsc/src/dm/im pls/plex/plex.c:6148 #2 0x00002b79e6d9e571 in dmplexcreatesectionf90_ (dm=0x259bda8, dim=0x2781e8c, numFiel ds=0x2781e90, ptrC=0x25ed9e0, ptrD=0x28adf00, numBC=0x27820a0, ptrF=0x25eda00, ptrP=0x2 878900, section=0x7fffa07c8d28, __ierr=0x7fffa07c8c98) at /home1/00924/Reddy135/LocalAp ps/petsc/src/dm/impls/plex/f90-custom/zplexf90.c:221 -- ----------------------------------------------------- Dharmendar Reddy Palle Graduate Student Microelectronics Research center, University of Texas at Austin, 10100 Burnet Road, Bldg. 160 MER 2.608F, TX 78758-4445 e-mail: dharmareddy84 at gmail.com Phone: +1-512-350-9082 United States of America. Homepage: https://webspace.utexas.edu/~dpr342 -------------- next part -------------- An HTML attachment was scrubbed... URL: From dharmareddy84 at gmail.com Tue Apr 16 03:36:46 2013 From: dharmareddy84 at gmail.com (Dharmendar Reddy) Date: Tue, 16 Apr 2013 03:36:46 -0500 Subject: [petsc-users] Segfault in DMPlexCreateSectionF90 In-Reply-To: References: Message-ID: Hello, I used the DMPlexCreateSection with Fortran pointers instead of allocatable arrays, the code goes past the error in previous error but now i get error about the BcPoints IS . Now i printed all the in put data before passing to the DMPlexCreateSection, the IS is in a valid state. However, i get a error realted to IS. am i doing something wrong here ? This is the data i pass to the DMPlexCreateSection. Data printed from code before calling the createsection. 
You can see that the number of boundary conditions is 2, i have allocated the BcPoints array with index lower bound 0 and upper bound 1. Calling ISgetLocalSize and ISView gives no error. dim 1 numField 1 numComp pointer 1 cell DofMap pointer 1 0 num BC 2 bcField 0 0 Check BCPoints Index set for BcPointsIS[ 0 ] Number of local points 1 Number of indices in set 1 0 1 -----------end IS View---------- Check BCPoints Index set for BcPointsIS[ 1 ] Number of local points 1 Number of indices in set 1 0 12 -----------end IS View---------- But the call do ISViewGetLocalSize inside the DMPlexCreateSection is saying object is in wrong state... [0]PETSC ERROR: --------------------- Error Message ------------------------------------ [0]PETSC ERROR: Invalid argument! [0]PETSC ERROR: Wrong type of object: Parameter # 1! [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: Petsc Development GIT revision: b6da085e934eddcf71be97425c4be8a7ff05e85d GIT Date: 2013-04-15 22:47:22 -0500 [0]PETSC ERROR: See docs/changes/index.html for recent updates. [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. [0]PETSC ERROR: See docs/index.html for manual pages. [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: ./PoisTest on a mpi_rScalar_Debug named login3.stampede.tacc.utexas.edu by Reddy135 Tue Apr 16 03:29:09 2013 [0]PETSC ERROR: Libraries linked from /home1/00924/Reddy135/LocalApps/petsc/mpi_rScalar_De bug/lib [0]PETSC ERROR: Configure run at Tue Apr 16 01:38:29 2013 [0]PETSC ERROR: Configure options --download-blacs=1 --download-ctetgen=1 --download-met is=1 --download-mumps=1 --download-parmetis=1 --download-scalapack=1 --download-superlu_di st=1 --download-triangle=1 --download-umfpack=1 --with-blas-lapack-dir=/opt/apps/intel/13/ composer_xe_2013.2.146/mkl/lib/intel64/ --with-debugging=1 --with-mpi-dir=/opt/apps/intel1 3/mvapich2/1.9/ --with-petsc-arch=mpi_rScalar_Debug --with-petsc-dir=/home1/00924/Reddy135 /LocalApps/petsc PETSC_ARCH=mpi_rScalar_Debug [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: ISGetLocalSize() line 322 in /home1/00924/Reddy135/LocalApps/petsc/src/vec /is/is/interface/index.c [0]PETSC ERROR: DMPlexCreateSectionBCDof() line 5937 in /home1/00924/Reddy135/LocalApps/pe tsc/src/dm/impls/plex/plex.c [0]PETSC ERROR: DMPlexCreateSection() line 6149 in /home1/00924/Reddy135/LocalApps/petsc/s rc/dm/impls/plex/plex.c On Tue, Apr 16, 2013 at 2:52 AM, Dharmendar Reddy wrote: > Hello, > I am getting a segfault in DMPlexcreateSectionF90 ...I am not > sure if the problem is in parameters i pass or it is coming from > petsc...can you help me fix this ? > Here is the output from debugger: > I print the variables passed to function before call and they seem > consistent with the format required by the DMPlexCreateSectionF90 call... > > Breakpoint 1.1, FEMLIB_M::setupdefualtsection_dofmap (this=0x2781e28) at > /home1/00924/R > eddy135/projects/utgds/src/fem/VariationalProblemBoundProcedures.F90:197 > 197 call DMPlexCreateSectionF90(this%mdata%Meshdm,this%tdim, > this%numField, & > (idb) s > 198 this%numComp,this%cellDofMap, this%numBC, this%bcFieldId, & > (idb) s > 199 this%bcPointIS,section,ierr) > (idb) print this%tdim > $25 = 1 > (idb) print this%numField > $26 = 1 > (idb) print this%numcomp > $27 = 1 > (idb) print this%cellDofMap > $28 = {1, 0} > (idb) print this%numBC > $29 = 2 > (idb) c > Continuing. 
> > Breakpoint 1.2, FEMLIB_M::setupdefualtsection_dofmap (this=0x2781e28) at > /home1/00924/R > eddy135/projects/utgds/src/fem/VariationalProblemBoundProcedures.F90:197 > 197 call DMPlexCreateSectionF90(this%mdata%Meshdm,this%tdim, > this%numField, & > (idb) c > Continuing. > > Breakpoint 1.3, FEMLIB_M::setupdefualtsection_dofmap (this=0x2781e28) at > /home1/00924/R > eddy135/projects/utgds/src/fem/VariationalProblemBoundProcedures.F90:197 > 197 call DMPlexCreateSectionF90(this%mdata%Meshdm,this%tdim, > this%numField, & > (idb) c > Continuing. > > Now i see a segfault in inside the petsc call > > Program received signal SIGSEGV > DMPlexCreateSectionInitial (dm=0x281cfa0, dim=1, numFields=1, > numComp=0x43ce7cf20000000 > 1, numDof=0x1, section=0x7fffa07c8d28) at > /home1/00924/Reddy135/LocalApps/petsc/src/dm/ > impls/plex/plex.c:5892 > 5892 for (f = 0; f < numFields; ++f) numDofTot[d] += > numDof[f*(dim+1)+d]; > > why is the adress of numDof 0x1 ? some thing wrong in the FortranAddress > conversion code in zplexf90 ? > > here is the backtrace > > #0 0x00002b79e6cec09a in DMPlexCreateSectionInitial (dm=0x281cfa0, dim=1, > numFields=1, > numComp=0x43ce7cf200000001, numDof=0x1, section=0x7fffa07c8d28) at > /home1/00924/Reddy1 > 35/LocalApps/petsc/src/dm/impls/plex/plex.c:5892 > #1 0x00002b79e6cf1038 in DMPlexCreateSection (dm=0x281cfa0, dim=1, > numFields=1, numCom > p=0x43ce7cf200000001, numDof=0x1, numBC=2, bcField=0xc420000000000000, > bcPoints=0xc4200 > 00002a66e21, section=0x7fffa07c8d28) at > /home1/00924/Reddy135/LocalApps/petsc/src/dm/im > pls/plex/plex.c:6148 > #2 0x00002b79e6d9e571 in dmplexcreatesectionf90_ (dm=0x259bda8, > dim=0x2781e8c, numFiel > ds=0x2781e90, ptrC=0x25ed9e0, ptrD=0x28adf00, numBC=0x27820a0, > ptrF=0x25eda00, ptrP=0x2 > 878900, section=0x7fffa07c8d28, __ierr=0x7fffa07c8c98) at > /home1/00924/Reddy135/LocalAp > ps/petsc/src/dm/impls/plex/f90-custom/zplexf90.c:221 > > -- > ----------------------------------------------------- > Dharmendar Reddy Palle > Graduate Student > Microelectronics Research center, > University of Texas at Austin, > 10100 Burnet Road, Bldg. 160 > MER 2.608F, TX 78758-4445 > e-mail: dharmareddy84 at gmail.com > Phone: +1-512-350-9082 > United States of America. > Homepage: https://webspace.utexas.edu/~dpr342 > -- ----------------------------------------------------- Dharmendar Reddy Palle Graduate Student Microelectronics Research center, University of Texas at Austin, 10100 Burnet Road, Bldg. 160 MER 2.608F, TX 78758-4445 e-mail: dharmareddy84 at gmail.com Phone: +1-512-350-9082 United States of America. Homepage: https://webspace.utexas.edu/~dpr342 -------------- next part -------------- An HTML attachment was scrubbed... URL: From sonyablade2010 at hotmail.com Tue Apr 16 05:25:07 2013 From: sonyablade2010 at hotmail.com (Sonya Blade) Date: Tue, 16 Apr 2013 11:25:07 +0100 Subject: [petsc-users] Segfault in DMPlexCreateSectionF90 Message-ID: Dear Dharmendar, Can you share with me which fortran version you are using? Is it same as is? in C configuring the Petsc or Slepc with fortran ? 
Regards, From knepley at gmail.com Tue Apr 16 07:10:20 2013 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 16 Apr 2013 07:10:20 -0500 Subject: [petsc-users] Segfault in DMPlexCreateSectionF90 In-Reply-To: References: Message-ID: On Tue, Apr 16, 2013 at 3:36 AM, Dharmendar Reddy wrote: > Hello, > I used the DMPlexCreateSection with Fortran pointers instead of > allocatable arrays, the code goes past the error in previous error but now > i get error about the BcPoints IS . Now i printed all the in put data > before passing to the DMPlexCreateSection, the IS is in a valid state. > However, i get a error realted to IS. am i doing something wrong here ? > There is an example of this: src/dm/impls/plex/examples/tutorials/ex1f90.F The problem is that I made this a traditional F77 interface. I will change it to F90. The problem is that I cannot be completely consistent since F77 cannot pass back arrays, and naming all those F90 makes the interface look horrible. Also, Fortran has no way of telling you that you are passing the wrong type. Matt > This is the data i pass to the DMPlexCreateSection. Data printed from code > before calling the createsection. You can see that the number of boundary > conditions is 2, i have allocated the BcPoints array with index lower bound > 0 and upper bound 1. Calling ISgetLocalSize and ISView gives no error. > > dim 1 > numField 1 > numComp pointer 1 > cell DofMap pointer 1 0 > num BC 2 > bcField 0 0 > Check BCPoints Index set for BcPointsIS[ 0 ] > Number of local points 1 > Number of indices in set 1 > 0 1 > -----------end IS View---------- > Check BCPoints Index set for BcPointsIS[ 1 ] > Number of local points 1 > Number of indices in set 1 > 0 12 > -----------end IS View---------- > > But the call do ISViewGetLocalSize inside the DMPlexCreateSection is > saying object is in wrong state... > > [0]PETSC ERROR: --------------------- Error Message > ------------------------------------ > [0]PETSC ERROR: Invalid argument! > [0]PETSC ERROR: Wrong type of object: Parameter # 1! > [0]PETSC ERROR: > ------------------------------------------------------------------------ > [0]PETSC ERROR: Petsc Development GIT revision: > b6da085e934eddcf71be97425c4be8a7ff05e85d > GIT Date: 2013-04-15 22:47:22 -0500 > [0]PETSC ERROR: See docs/changes/index.html for recent updates. > [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. > [0]PETSC ERROR: See docs/index.html for manual pages. 
> [0]PETSC ERROR: > ------------------------------------------------------------------------ > [0]PETSC ERROR: ./PoisTest on a mpi_rScalar_Debug named > login3.stampede.tacc.utexas.edu by > Reddy135 Tue Apr 16 03:29:09 2013 > [0]PETSC ERROR: Libraries linked from > /home1/00924/Reddy135/LocalApps/petsc/mpi_rScalar_De > bug/lib > [0]PETSC ERROR: Configure run at Tue Apr 16 01:38:29 2013 > [0]PETSC ERROR: Configure options --download-blacs=1 > --download-ctetgen=1 --download-met > is=1 --download-mumps=1 --download-parmetis=1 --download-scalapack=1 > --download-superlu_di > st=1 --download-triangle=1 --download-umfpack=1 > --with-blas-lapack-dir=/opt/apps/intel/13/ > composer_xe_2013.2.146/mkl/lib/intel64/ --with-debugging=1 > --with-mpi-dir=/opt/apps/intel1 > 3/mvapich2/1.9/ --with-petsc-arch=mpi_rScalar_Debug > --with-petsc-dir=/home1/00924/Reddy135 > /LocalApps/petsc PETSC_ARCH=mpi_rScalar_Debug > [0]PETSC ERROR: > ------------------------------------------------------------------------ > [0]PETSC ERROR: ISGetLocalSize() line 322 in > /home1/00924/Reddy135/LocalApps/petsc/src/vec > /is/is/interface/index.c > [0]PETSC ERROR: DMPlexCreateSectionBCDof() line 5937 in > /home1/00924/Reddy135/LocalApps/pe > tsc/src/dm/impls/plex/plex.c > [0]PETSC ERROR: DMPlexCreateSection() line 6149 in > /home1/00924/Reddy135/LocalApps/petsc/s > rc/dm/impls/plex/plex.c > > > On Tue, Apr 16, 2013 at 2:52 AM, Dharmendar Reddy > wrote: > >> Hello, >> I am getting a segfault in DMPlexcreateSectionF90 ...I am not >> sure if the problem is in parameters i pass or it is coming from >> petsc...can you help me fix this ? >> Here is the output from debugger: >> I print the variables passed to function before call and they seem >> consistent with the format required by the DMPlexCreateSectionF90 call... >> >> Breakpoint 1.1, FEMLIB_M::setupdefualtsection_dofmap (this=0x2781e28) at >> /home1/00924/R >> eddy135/projects/utgds/src/fem/VariationalProblemBoundProcedures.F90:197 >> 197 call DMPlexCreateSectionF90(this%mdata%Meshdm,this%tdim, >> this%numField, & >> (idb) s >> 198 this%numComp,this%cellDofMap, this%numBC, this%bcFieldId, >> & >> (idb) s >> 199 this%bcPointIS,section,ierr) >> (idb) print this%tdim >> $25 = 1 >> (idb) print this%numField >> $26 = 1 >> (idb) print this%numcomp >> $27 = 1 >> (idb) print this%cellDofMap >> $28 = {1, 0} >> (idb) print this%numBC >> $29 = 2 >> (idb) c >> Continuing. >> >> Breakpoint 1.2, FEMLIB_M::setupdefualtsection_dofmap (this=0x2781e28) at >> /home1/00924/R >> eddy135/projects/utgds/src/fem/VariationalProblemBoundProcedures.F90:197 >> 197 call DMPlexCreateSectionF90(this%mdata%Meshdm,this%tdim, >> this%numField, & >> (idb) c >> Continuing. >> >> Breakpoint 1.3, FEMLIB_M::setupdefualtsection_dofmap (this=0x2781e28) at >> /home1/00924/R >> eddy135/projects/utgds/src/fem/VariationalProblemBoundProcedures.F90:197 >> 197 call DMPlexCreateSectionF90(this%mdata%Meshdm,this%tdim, >> this%numField, & >> (idb) c >> Continuing. >> >> Now i see a segfault in inside the petsc call >> >> Program received signal SIGSEGV >> DMPlexCreateSectionInitial (dm=0x281cfa0, dim=1, numFields=1, >> numComp=0x43ce7cf20000000 >> 1, numDof=0x1, section=0x7fffa07c8d28) at >> /home1/00924/Reddy135/LocalApps/petsc/src/dm/ >> impls/plex/plex.c:5892 >> 5892 for (f = 0; f < numFields; ++f) numDofTot[d] += >> numDof[f*(dim+1)+d]; >> >> why is the adress of numDof 0x1 ? some thing wrong in the FortranAddress >> conversion code in zplexf90 ? 
>> >> here is the backtrace >> >> #0 0x00002b79e6cec09a in DMPlexCreateSectionInitial (dm=0x281cfa0, >> dim=1, numFields=1, >> numComp=0x43ce7cf200000001, numDof=0x1, section=0x7fffa07c8d28) at >> /home1/00924/Reddy1 >> 35/LocalApps/petsc/src/dm/impls/plex/plex.c:5892 >> #1 0x00002b79e6cf1038 in DMPlexCreateSection (dm=0x281cfa0, dim=1, >> numFields=1, numCom >> p=0x43ce7cf200000001, numDof=0x1, numBC=2, bcField=0xc420000000000000, >> bcPoints=0xc4200 >> 00002a66e21, section=0x7fffa07c8d28) at >> /home1/00924/Reddy135/LocalApps/petsc/src/dm/im >> pls/plex/plex.c:6148 >> #2 0x00002b79e6d9e571 in dmplexcreatesectionf90_ (dm=0x259bda8, >> dim=0x2781e8c, numFiel >> ds=0x2781e90, ptrC=0x25ed9e0, ptrD=0x28adf00, numBC=0x27820a0, >> ptrF=0x25eda00, ptrP=0x2 >> 878900, section=0x7fffa07c8d28, __ierr=0x7fffa07c8c98) at >> /home1/00924/Reddy135/LocalAp >> ps/petsc/src/dm/impls/plex/f90-custom/zplexf90.c:221 >> >> -- >> ----------------------------------------------------- >> Dharmendar Reddy Palle >> Graduate Student >> Microelectronics Research center, >> University of Texas at Austin, >> 10100 Burnet Road, Bldg. 160 >> MER 2.608F, TX 78758-4445 >> e-mail: dharmareddy84 at gmail.com >> Phone: +1-512-350-9082 >> United States of America. >> Homepage: https://webspace.utexas.edu/~dpr342 >> > > > > -- > ----------------------------------------------------- > Dharmendar Reddy Palle > Graduate Student > Microelectronics Research center, > University of Texas at Austin, > 10100 Burnet Road, Bldg. 160 > MER 2.608F, TX 78758-4445 > e-mail: dharmareddy84 at gmail.com > Phone: +1-512-350-9082 > United States of America. > Homepage: https://webspace.utexas.edu/~dpr342 > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From zhang.wei at chalmers.se Tue Apr 16 08:05:27 2013 From: zhang.wei at chalmers.se (Zhang Wei) Date: Tue, 16 Apr 2013 13:05:27 +0000 Subject: [petsc-users] =?gb2312?b?tPC4tDogIGFsZ29yaXRobSB0bwlnZXQJcHVyZQly?= =?gb2312?b?ZWFsCWVpZ2VuCXZhbHVsZQlmb3IJZ2VuZXJhbAllaWdlbnZhbHVlCXByb2Js?= =?gb2312?b?ZW0oTm9uLWhlcm1pdGlhbiB0eXBlKQ==?= In-Reply-To: References: , , <7A3E1852-8C57-41B2-BB03-3423328C79C9@dsic.upv.es> <85C55751-4131-4365-BBF0-5351D39D2E0B@chalmers.se> <1ECBC813-436E-45B8-B1ED-5CA9A83A4E6D@dsic.upv.es> , , Message-ID: Hi sorry for didn't properly describe the problem. and you are right, I am talking about restart. I mean I read the source code for wrapping the arpack in slepc. it seems no difference in the implementation as in this FEM code I mentioned before. but it works different from the output point of view. actully there is a parameter as 'keep' extact the same as in kryloyschur solver in wrapped arpack in slepc, when we use this FEM codes, it only throw out the first vector in krylov space. how can I let the arpack work as that in slepc? ________________________________________ ???: petsc-users-bounces at mcs.anl.gov [petsc-users-bounces at mcs.anl.gov] ?? Jose E. Roman [jroman at dsic.upv.es] ????: 2013?4?16? 8:54 ???: PETSc users list ??: Re: [petsc-users] algorithm to get pure real eigen valule for general eigenvalue problem(Non-hermitian type) El 15/04/2013, a las 23:18, Zhang Wei escribi?: > hi > Thanks very much for your help! > Indeed, I am dealing with linearised Navier Stokes. Actually the algorithm is already tested in a FEM code by directly using Arpack. 
I tried slepc with wrapped Arpack. I noticed that the shit between each iteration is controlled by the number of converged eigenvalue and a predefined portion,which is the same as restart parameter "keep" in krylovschur solver, but can't be changed. While in the FEM code, the shift value is exact one. But I can't find different in the implementation. Could it be the problem? > > Yours Sincerely > ------------------------ > Wei Zhang > Ph.D > Hydrodynamic Group > Dept. of Shipping and Marine Technology > Chalmers University of Technology > Sweden > Phone:+46-31 772 2703 Which shift? Do you mean doing shift-and-invert? This has nothing to do with the restart parameter. Jose From mikolaj.szydlarski at cea.fr Tue Apr 16 04:05:07 2013 From: mikolaj.szydlarski at cea.fr (Mikolaj) Date: Tue, 16 Apr 2013 11:05:07 +0200 Subject: [petsc-users] PCView only for one block ? Message-ID: <516D1443.4030406@cea.fr> Hello, I am using a block preconditioners and I wonder if it is possible to print PC data structure for sub-solvers limited only to the one block. By default PCView 'pollute' std output with the information about every single sub-setup from each MPI process ( e.g. below ) and I haven't found so far a solution for printing it only from one processor. call PCView(pc,PETSC_VIEWER_STDOUT_WORLD,ierr) after setup of sub-solvers: PC Object: 16 MPI processes type: bjacobi block Jacobi: number of blocks = 16 Local solve info for each block is in the following KSP and PC objects: [0] number of local blocks = 1, first local block number = 0 [0] local block number 0 And then 16 x { KSP Object: (sub_) 1 MPI processes type: preonly maximum iterations=10000, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using NONE norm type for convergence test PC Object: (sub_) 1 MPI processes type: ilu ILU: out-of-place factorization 1 level of fill tolerance for zero pivot 2.22045e-14 } Thank you in advance for any hint, best regards, Mikolaj p.s. My petsc version: petsc-3.3-p5 and piece of code for printing: From knepley at gmail.com Tue Apr 16 09:09:53 2013 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 16 Apr 2013 09:09:53 -0500 Subject: [petsc-users] PCView only for one block ? In-Reply-To: <516D1443.4030406@cea.fr> References: <516D1443.4030406@cea.fr> Message-ID: On Tue, Apr 16, 2013 at 4:05 AM, Mikolaj wrote: > Hello, > > I am using a block preconditioners and I wonder if it is possible to print > PC data structure for sub-solvers limited only to the one block. By default > PCView 'pollute' std output with the information about every single > sub-setup from each MPI process ( e.g. below ) and I haven't found so far a > solution for printing it only from one processor. > Pull out the subpc you want and call PCView. There is nothing from the command line. 
Matt > call PCView(pc,PETSC_VIEWER_STDOUT_**WORLD,ierr) after setup of > sub-solvers: > > PC Object: 16 MPI processes > type: bjacobi > block Jacobi: number of blocks = 16 > Local solve info for each block is in the following KSP and PC objects: > [0] number of local blocks = 1, first local block number = 0 > [0] local block number 0 > > And then 16 x > > { > > KSP Object: (sub_) 1 MPI processes > type: preonly > maximum iterations=10000, initial guess is zero > tolerances: relative=1e-05, absolute=1e-50, divergence=10000 > left preconditioning > using NONE norm type for convergence test > PC Object: (sub_) 1 MPI processes > type: ilu > ILU: out-of-place factorization > 1 level of fill > tolerance for zero pivot 2.22045e-14 > > } > > Thank you in advance for any hint, > > best regards, > > Mikolaj > > p.s. My petsc version: petsc-3.3-p5 and piece of code for printing: > > > > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From mikolaj.szydlarski at cea.fr Tue Apr 16 09:29:34 2013 From: mikolaj.szydlarski at cea.fr (Mikolaj) Date: Tue, 16 Apr 2013 16:29:34 +0200 Subject: [petsc-users] PCView only for one block ? In-Reply-To: References: <516D1443.4030406@cea.fr> Message-ID: <516D604E.6090206@cea.fr> Thank you Matt. I have done what you suggested i.e., call PCBJacobiGetSubKSP(pc,nlocal,PETSC_NULL,subksp,ierr) call KSPView(subksp(1),PETSC_VIEWER_STDOUT_WORLD,ierr) and it works perfectly. Mikolaj. > On Tue, Apr 16, 2013 at 4:05 AM, Mikolaj > wrote: > > Hello, > > I am using a block preconditioners and I wonder if it is possible > to print PC data structure for sub-solvers limited only to the one > block. By default PCView 'pollute' std output with the information > about every single sub-setup from each MPI process ( e.g. below ) > and I haven't found so far a solution for printing it only from > one processor. > > > Pull out the subpc you want and call PCView. There is nothing from the > command line. > > Matt > > call PCView(pc,PETSC_VIEWER_STDOUT_WORLD,ierr) after setup of > sub-solvers: > > PC Object: 16 MPI processes > type: bjacobi > block Jacobi: number of blocks = 16 > Local solve info for each block is in the following KSP and PC > objects: > [0] number of local blocks = 1, first local block number = 0 > [0] local block number 0 > > And then 16 x > > { > > KSP Object: (sub_) 1 MPI processes > type: preonly > maximum iterations=10000, initial guess is zero > tolerances: relative=1e-05, absolute=1e-50, divergence=10000 > left preconditioning > using NONE norm type for convergence test > PC Object: (sub_) 1 MPI processes > type: ilu > ILU: out-of-place factorization > 1 level of fill > tolerance for zero pivot 2.22045e-14 > > } > > Thank you in advance for any hint, > > best regards, > > Mikolaj > > p.s. My petsc version: petsc-3.3-p5 and piece of code for printing: > > > > > > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which > their experiments lead. > -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Tue Apr 16 09:36:53 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Tue, 16 Apr 2013 09:36:53 -0500 Subject: [petsc-users] PCView only for one block ? 
In-Reply-To: References: <516D1443.4030406@cea.fr> Message-ID: <8761zmleyi.fsf@mcs.anl.gov> Matthew Knepley writes: > Pull out the subpc you want and call PCView. There is nothing from the > command line. I would be in favor of changing the default command line behavior. The current behavior makes -ksp_view output hopeless for large runs. I think -_view should always produce scalable output by default. From benjamin.kirk-1 at nasa.gov Tue Apr 16 10:14:02 2013 From: benjamin.kirk-1 at nasa.gov (Kirk, Benjamin (JSC-EG311)) Date: Tue, 16 Apr 2013 10:14:02 -0500 Subject: [petsc-users] MatType BAIJ and Jacobi preconditioner Message-ID: <8D77EF3A-F4BD-4C4D-916E-94DD79CABEDB@nasa.gov> I've got a BAIJ matrix with block size 5 for a compressible flow application. I am curious, when using -pc_type Jacobi what is actually applied as the "diagonal" in the preconditioner? That is, for block matrix K, I assume the Jacobi preconditioner extracts the diagonal values K_ii, but is this the scalar diagonal, or the 5x5 block-diagonal matrix? Thanks, -Ben From jedbrown at mcs.anl.gov Tue Apr 16 10:18:13 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Tue, 16 Apr 2013 10:18:13 -0500 Subject: [petsc-users] MatType BAIJ and Jacobi preconditioner In-Reply-To: <8D77EF3A-F4BD-4C4D-916E-94DD79CABEDB@nasa.gov> References: <8D77EF3A-F4BD-4C4D-916E-94DD79CABEDB@nasa.gov> Message-ID: <87vc7mjyh6.fsf@mcs.anl.gov> "Kirk, Benjamin (JSC-EG311)" writes: > I've got a BAIJ matrix with block size 5 for a compressible flow > application. > > I am curious, when using -pc_type Jacobi what is actually applied as > the "diagonal" in the preconditioner? > > That is, for block matrix K, I assume the Jacobi preconditioner > extracts the diagonal values K_ii, but is this the scalar diagonal, or > the 5x5 block-diagonal matrix? -pc_type jacobi : scalar diagonal -pc_type pbjacobi: point-block diagonal (the 5x5 blocks) From mike.hui.zhang at hotmail.com Tue Apr 16 10:44:03 2013 From: mike.hui.zhang at hotmail.com (Hui Zhang) Date: Tue, 16 Apr 2013 17:44:03 +0200 Subject: [petsc-users] VecGetOwnershipRanges Message-ID: VecGetOwnershipRanges(Vec x,const PetscInt *ranges[]) From the html manual page, range -array of length size+1 with the start and end+1 for each process Is it PetscInt ranges[size+1][2] or PetscInt ranges[size+1]? The function calls PetscLayoutGetRanges(PetscLayout map,const PetscInt *range[]) for which the manual page says range -start of each processors range of indices (the final entry is one more then the last index on the last process) I'm sorry that I'm confused. From balay at mcs.anl.gov Tue Apr 16 10:48:22 2013 From: balay at mcs.anl.gov (Satish Balay) Date: Tue, 16 Apr 2013 10:48:22 -0500 (CDT) Subject: [petsc-users] VecGetOwnershipRanges In-Reply-To: References: Message-ID: On Tue, 16 Apr 2013, Hui Zhang wrote: > VecGetOwnershipRanges(Vec x,const PetscInt *ranges[]) > > From the html manual page, > > range -array of length size+1 with the start and end+1 for each process > > Is it PetscInt ranges[size+1][2] or PetscInt ranges[size+1]? Its ranges[size+1] If you a vec of len 10 - distrbuted across 3 procs - say proc-0 3 proc-1 3 proc-2 4 Then you have ranges[3+1] = {0,3,6,10} Satish > > The function calls > > PetscLayoutGetRanges(PetscLayout map,const PetscInt *range[]) > > for which the manual page says > > range -start of each processors range of indices (the final entry is one more then the last index on the last process) > > I'm sorry that I'm confused. 
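To make the layout concrete, here is a small C sketch (untested; the helper name PrintOwnershipRanges is made up) that prints each process's range from the size+1 array described above:

#include <petscvec.h>

/* Print every process's ownership range; ranges[p]..ranges[p+1]-1
   belongs to process p. */
PetscErrorCode PrintOwnershipRanges(Vec x)
{
  PetscErrorCode  ierr;
  MPI_Comm        comm;
  PetscMPIInt     size, rank, p;
  const PetscInt *ranges;

  ierr = PetscObjectGetComm((PetscObject)x, &comm);CHKERRQ(ierr);
  ierr = MPI_Comm_size(comm, &size);CHKERRQ(ierr);
  ierr = MPI_Comm_rank(comm, &rank);CHKERRQ(ierr);
  ierr = VecGetOwnershipRanges(x, &ranges);CHKERRQ(ierr);
  if (!rank) {
    for (p = 0; p < size; p++) {
      ierr = PetscPrintf(PETSC_COMM_SELF, "process %d owns entries %D..%D\n",
                         (int)p, ranges[p], ranges[p+1]-1);CHKERRQ(ierr);
    }
  }
  return 0;
}

For the length-10 vector split 3/3/4 in the example above, this would print 0..2, 3..5, and 6..9, matching the {0,3,6,10} array.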
From mike.hui.zhang at hotmail.com Tue Apr 16 10:59:57 2013 From: mike.hui.zhang at hotmail.com (Hui Zhang) Date: Tue, 16 Apr 2013 17:59:57 +0200 Subject: [petsc-users] VecGetOwnershipRanges In-Reply-To: References: Message-ID: On Apr 16, 2013, at 5:48 PM, Satish Balay wrote: > On Tue, 16 Apr 2013, Hui Zhang wrote: > >> VecGetOwnershipRanges(Vec x,const PetscInt *ranges[]) >> >> From the html manual page, >> >> range -array of length size+1 with the start and end+1 for each process >> >> Is it PetscInt ranges[size+1][2] or PetscInt ranges[size+1]? > > Its ranges[size+1] > > If you a vec of len 10 - distrbuted across 3 procs - say > proc-0 3 > proc-1 3 > proc-2 4 > > Then you have ranges[3+1] = {0,3,6,10} > > Satish Thanks a lot! > >> >> The function calls >> >> PetscLayoutGetRanges(PetscLayout map,const PetscInt *range[]) >> >> for which the manual page says >> >> range -start of each processors range of indices (the final entry is one more then the last index on the last process) >> >> I'm sorry that I'm confused. > > From pengxwang at hotmail.com Tue Apr 16 11:04:07 2013 From: pengxwang at hotmail.com (Roc Wang) Date: Tue, 16 Apr 2013 11:04:07 -0500 Subject: [petsc-users] errors of ~/petsc-3.2-p7/src/ksp/ksp/examples/tutorials/ex50.c Message-ID: Hello, I am trying to run the example code ~/petsc-3.2-p7/src/ksp/ksp/examples/tutorials/ex50.c I changed the extension .c to .cpp and used g++ compiler. But, there were some errors when I compiled the code: ex50.cpp:58: error: cannot convert ?PetscInt*? to ?PetscLogStage*? for argument ?2? to ?PetscErrorCode PetscLogStageRegister(const char*, PetscLogStage*)? ex50.cpp:59: error: cannot convert ?PetscInt*? to ?PetscLogStage*? for argument ?2? to ?PetscErrorCode PetscLogStageRegister(const char*, PetscLogStage*)? ex50.cpp:326: error: cannot convert ?PetscInt*? to ?int*? for argument ?3? to ?int MPI_Get_count(MPI_Status*, MPI_Datatype, int*)? 270 326 ierr = MPI_Get_count(&status, MPIU_SCALAR, &inn); CHKERRQ(ierr); 327 nn = inn; After I made the following modifications, the code was compiled successfully: 1 Change the data type of state[3] from PetscInt to PetscLogStage in line 49; PetscLogStage stages[3]; 2 Define a new variable inn as datatype of int in line 270; 3 Change the line 326 to: ierr = MPI_Get_count(&status, MPIU_SCALAR, &inn); CHKERRQ(ierr); 4 After line 326, add: nn = inn; I am wondering if the modification by me is correct and are the bugs of the example code? The version of PETSc I am using is 3.2-p7, the compiler is MPI plus g++. Is it because I use a c++ compiler for a c code? Thanks. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Tue Apr 16 11:15:37 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Tue, 16 Apr 2013 11:15:37 -0500 Subject: [petsc-users] errors of ~/petsc-3.2-p7/src/ksp/ksp/examples/tutorials/ex50.c In-Reply-To: References: Message-ID: <87ppxujvti.fsf@mcs.anl.gov> Roc Wang writes: > ~/petsc-3.2-p7/src/ksp/ksp/examples/tutorials/ex50.c Please don't use such old code. That example uses DMMG which was deprecated then and has since been removed. 
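For anyone hitting the same errors when building a tutorial with a C++ compiler, the poster's fixes amount to matching the declared argument types exactly. A short illustrative C sketch (untested; the helper and stage names are made up, and 'status' is assumed to come from a completed receive of MPIU_SCALAR data):

#include <petscsys.h>

/* Illustrative only: PetscLogStageRegister() takes a PetscLogStage*,
   and MPI_Get_count() takes a plain int*, not a PetscInt*. */
PetscErrorCode TypeFixExamples(MPI_Status *status, PetscInt *nn)
{
  PetscErrorCode ierr;
  PetscLogStage  stages[3];  /* was declared as PetscInt in the old example */
  int            inn;        /* plain int for MPI_Get_count() */

  ierr = PetscLogStageRegister("Setup",  &stages[0]);CHKERRQ(ierr);
  ierr = PetscLogStageRegister("Solve",  &stages[1]);CHKERRQ(ierr);
  ierr = PetscLogStageRegister("Output", &stages[2]);CHKERRQ(ierr);
  ierr = MPI_Get_count(status, MPIU_SCALAR, &inn);CHKERRQ(ierr);
  *nn  = (PetscInt)inn;      /* copy into the PetscInt the caller uses */
  return 0;
}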
From bsmith at mcs.anl.gov Tue Apr 16 12:05:36 2013 From: bsmith at mcs.anl.gov (Barry Smith) Date: Tue, 16 Apr 2013 12:05:36 -0500 Subject: [petsc-users] Seq vs MPI convergence In-Reply-To: <87ehecpx9o.fsf@mcs.anl.gov> References: <87ehecpx9o.fsf@mcs.anl.gov> Message-ID: On Apr 14, 2013, at 5:23 PM, Jed Brown wrote: > Hugo Gagnon writes: > >> Hi, >> >> I have a problem that converges fine in sequential mode but diverges >> in MPI (other problems seem to converge fine for both modes). I am no >> expert in parallel solvers but, is this something I should expect? >> I'm using BiCGSTAB with BJACOBI ILU(3). Perhaps I'm overseeing some >> parameters that could improve my convergences? > > The most common problem is that you assemble a different operator in > parallel. So check that and check that you get the same answer when > using redundant solves. > > -pc_type redundant -redundant_pc_type ilu See also: http://www.mcs.anl.gov/petsc/documentation/faq.html#differentiterations and http://www.mcs.anl.gov/petsc/documentation/faq.html#kspdiverged From Shuangshuang.Jin at pnnl.gov Tue Apr 16 13:17:19 2013 From: Shuangshuang.Jin at pnnl.gov (Jin, Shuangshuang) Date: Tue, 16 Apr 2013 11:17:19 -0700 Subject: [petsc-users] KSPsolver Message-ID: <6778DE83AB681D49BFC2CD850610FEB1018FC933E211@EMAIL04.pnl.gov> Hello, everyone, I'm trying to run the following example: http://www.mcs.anl.gov/petsc/petsc-current/src/ksp/ksp/examples/tutorials/ex12.c.html It is said that this example solves a linear system in parallel with KSP. However, when I ran it on my clusters, I always got the same copy running multiple times. I printed the Istart and Iend numbers right after the MatGetOwnershipRange, and noticed that the index on each processor are all the same, which is [0, 56]. The following is the printout from the runs: [d3m956 at olympus ksp]$ mpiexec -n 1 ./ex12 0, 56 Norm of error 2.10144e-06 iterations 14 [d3m956 at olympus ksp]$ mpiexec -n 2 ./ex12 0, 56 0, 56 Norm of error 2.10144e-06 iterations 14 Norm of error 2.10144e-06 iterations 14 [d3m956 at olympus ksp]$ mpiexec -n 4 ./ex12 0, 56 0, 56 0, 56 0, 56 Norm of error 2.10144e-06 iterations 14 Norm of error 2.10144e-06 iterations 14 Norm of error 2.10144e-06 iterations 14 Norm of error 2.10144e-06 iterations 14 Can anyone explain to me why I ran into this issue? Anything I missed? Thanks, Shuangshuang -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From hzhang at mcs.anl.gov Tue Apr 16 13:25:56 2013 From: hzhang at mcs.anl.gov (Hong Zhang) Date: Tue, 16 Apr 2013 13:25:56 -0500 Subject: [petsc-users] KSPsolver In-Reply-To: <6778DE83AB681D49BFC2CD850610FEB1018FC933E211@EMAIL04.pnl.gov> References: <6778DE83AB681D49BFC2CD850610FEB1018FC933E211@EMAIL04.pnl.gov> Message-ID: What do you get by running mpiexec -n 2 ./ex12 -ksp_view I get mpiexec -n 2 ./ex12 -ksp_view KSP Object: 2 MPI processes type: gmres GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement GMRES: happy breakdown tolerance 1e-30 maximum iterations=10000, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using PRECONDITIONED norm type for convergence test GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement GMRES: happy breakdown tolerance 1e-30 PC Object: 2 MPI processes type: ourjacobi linear system matrix = precond matrix: Matrix Object: 2 MPI processes type: mpiaij rows=56, cols=56 total: nonzeros=250, allocated nonzeros=560 total number of mallocs used during MatSetValues calls =0 not using I-node (on process 0) routines Norm of error 2.10144e-06 iterations 14 Hong On Tue, Apr 16, 2013 at 1:17 PM, Jin, Shuangshuang < Shuangshuang.Jin at pnnl.gov> wrote: > Hello, everyone, I?m trying to run the following example: * > http://www.mcs.anl.gov/petsc/petsc-current/src/ksp/ksp/examples/tutorials/ex12.c.html > * > > It is said that this example solves a linear system in parallel with KSP. > > However, when I ran it on my clusters, I always got the same copy running > multiple times. I printed the Istart and Iend numbers right after the > MatGetOwnershipRange, and noticed that the index on each processor are all > the same, which is [0, 56]. > > The following is the printout from the runs: > > [d3m956 at olympus ksp]$ mpiexec -n 1 ./ex12 > 0, 56 > Norm of error 2.10144e-06 iterations 14 > [d3m956 at olympus ksp]$ mpiexec -n 2 ./ex12 > 0, 56 > 0, 56 > Norm of error 2.10144e-06 iterations 14 > Norm of error 2.10144e-06 iterations 14 > [d3m956 at olympus ksp]$ mpiexec -n 4 ./ex12 > 0, 56 > 0, 56 > 0, 56 > 0, 56 > Norm of error 2.10144e-06 iterations 14 > Norm of error 2.10144e-06 iterations 14 > Norm of error 2.10144e-06 iterations 14 > Norm of error 2.10144e-06 iterations 14 > > Can anyone explain to me why I ran into this issue? Anything I missed? > > Thanks, > Shuangshuang > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From Shuangshuang.Jin at pnnl.gov Tue Apr 16 13:29:00 2013 From: Shuangshuang.Jin at pnnl.gov (Jin, Shuangshuang) Date: Tue, 16 Apr 2013 11:29:00 -0700 Subject: [petsc-users] KSPsolver In-Reply-To: References: <6778DE83AB681D49BFC2CD850610FEB1018FC933E211@EMAIL04.pnl.gov> Message-ID: <6778DE83AB681D49BFC2CD850610FEB1018FC933E21D@EMAIL04.pnl.gov> Hi, by running mpiexec -n 2 ./ex12 -ksp_view, I got: [d3m956 at olympus ksp]$ mpiexec -n 2 ./ex12 -ksp_view 0, 56 0, 56 KSP Object: 1 MPI processes type: gmres GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement GMRES: happy breakdown tolerance 1e-30 maximum iterations=10000, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using PRECONDITIONED norm type for convergence test PC Object: 1 MPI processes type: ourjacobi linear system matrix = precond matrix: KSP Object: 1 MPI processes type: gmres GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement GMRES: happy breakdown tolerance 1e-30 maximum iterations=10000, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using PRECONDITIONED norm type for convergence test PC Object: 1 MPI processes type: ourjacobi linear system matrix = precond matrix: Matrix Object: 1 MPI processes type: seqaij rows=56, cols=56 total: nonzeros=250, allocated nonzeros=280 total number of mallocs used during MatSetValues calls =0 not using I-node routines Norm of error 2.10144e-06 iterations 14 Matrix Object: 1 MPI processes type: seqaij rows=56, cols=56 total: nonzeros=250, allocated nonzeros=280 total number of mallocs used during MatSetValues calls =0 not using I-node routines Norm of error 2.10144e-06 iterations 14 Thanks, Shuangshuang From: petsc-users-bounces at mcs.anl.gov [mailto:petsc-users-bounces at mcs.anl.gov] On Behalf Of Hong Zhang Sent: Tuesday, April 16, 2013 11:26 AM To: PETSc users list Subject: Re: [petsc-users] KSPsolver What do you get by running mpiexec -n 2 ./ex12 -ksp_view I get mpiexec -n 2 ./ex12 -ksp_view KSP Object: 2 MPI processes type: gmres GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement GMRES: happy breakdown tolerance 1e-30 maximum iterations=10000, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using PRECONDITIONED norm type for convergence test GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement GMRES: happy breakdown tolerance 1e-30 PC Object: 2 MPI processes type: ourjacobi linear system matrix = precond matrix: Matrix Object: 2 MPI processes type: mpiaij rows=56, cols=56 total: nonzeros=250, allocated nonzeros=560 total number of mallocs used during MatSetValues calls =0 not using I-node (on process 0) routines Norm of error 2.10144e-06 iterations 14 Hong On Tue, Apr 16, 2013 at 1:17 PM, Jin, Shuangshuang > wrote: Hello, everyone, I'm trying to run the following example: http://www.mcs.anl.gov/petsc/petsc-current/src/ksp/ksp/examples/tutorials/ex12.c.html It is said that this example solves a linear system in parallel with KSP. However, when I ran it on my clusters, I always got the same copy running multiple times. I printed the Istart and Iend numbers right after the MatGetOwnershipRange, and noticed that the index on each processor are all the same, which is [0, 56]. 
The following is the printout from the runs: [d3m956 at olympus ksp]$ mpiexec -n 1 ./ex12 0, 56 Norm of error 2.10144e-06 iterations 14 [d3m956 at olympus ksp]$ mpiexec -n 2 ./ex12 0, 56 0, 56 Norm of error 2.10144e-06 iterations 14 Norm of error 2.10144e-06 iterations 14 [d3m956 at olympus ksp]$ mpiexec -n 4 ./ex12 0, 56 0, 56 0, 56 0, 56 Norm of error 2.10144e-06 iterations 14 Norm of error 2.10144e-06 iterations 14 Norm of error 2.10144e-06 iterations 14 Norm of error 2.10144e-06 iterations 14 Can anyone explain to me why I ran into this issue? Anything I missed? Thanks, Shuangshuang -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Tue Apr 16 13:29:34 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Tue, 16 Apr 2013 13:29:34 -0500 Subject: [petsc-users] KSPsolver In-Reply-To: <6778DE83AB681D49BFC2CD850610FEB1018FC933E211@EMAIL04.pnl.gov> References: <6778DE83AB681D49BFC2CD850610FEB1018FC933E211@EMAIL04.pnl.gov> Message-ID: <87bo9ejpm9.fsf@mcs.anl.gov> "Jin, Shuangshuang" writes: > Hello, everyone, I'm trying to run the following example: > http://www.mcs.anl.gov/petsc/petsc-current/src/ksp/ksp/examples/tutorials/ex12.c.html > > It is said that this example solves a linear system in parallel with > KSP. > > However, when I ran it on my clusters, I always got the same copy > running multiple times. I printed the Istart and Iend numbers right > after the MatGetOwnershipRange, and noticed that the index on each > processor are all the same, which is [0, 56]. Your mpiexec does not match the MPI library you compiled with. From stali at geology.wisc.edu Tue Apr 16 13:43:38 2013 From: stali at geology.wisc.edu (Tabrez Ali) Date: Tue, 16 Apr 2013 13:43:38 -0500 Subject: [petsc-users] KSPsolver In-Reply-To: <87bo9ejpm9.fsf@mcs.anl.gov> References: <6778DE83AB681D49BFC2CD850610FEB1018FC933E211@EMAIL04.pnl.gov> <87bo9ejpm9.fsf@mcs.anl.gov> Message-ID: <9C981B50-1A8F-4E70-8C3C-6E3B3DB7E0BC@geology.wisc.edu> Shuangshuang If you do want to keep multiple versions of MPI/Compilers/Libs etc. then use the environment modules package. It is a simple one time install that will make your life easy. http://modules.sourceforge.net/ T On Apr 16, 2013, at 1:29 PM, Jed Brown wrote: > "Jin, Shuangshuang" writes: > >> Hello, everyone, I'm trying to run the following example: >> http://www.mcs.anl.gov/petsc/petsc-current/src/ksp/ksp/examples/tutorials/ex12.c.html >> >> It is said that this example solves a linear system in parallel with >> KSP. >> >> However, when I ran it on my clusters, I always got the same copy >> running multiple times. I printed the Istart and Iend numbers right >> after the MatGetOwnershipRange, and noticed that the index on each >> processor are all the same, which is [0, 56]. > > Your mpiexec does not match the MPI library you compiled with. From jroman at dsic.upv.es Tue Apr 16 13:46:24 2013 From: jroman at dsic.upv.es (Jose E. Roman) Date: Tue, 16 Apr 2013 20:46:24 +0200 Subject: [petsc-users] =?utf-8?b?562U5aSNOiAgYWxnb3JpdGhtIHRvCWdldAlwdXJl?= =?utf-8?q?=09real=09eigen=09valule=09for=09general=09eigenvalue=09problem?= =?utf-8?q?=28Non-hermitian_type=29?= In-Reply-To: References: , , <7A3E1852-8C57-41B2-BB03-3423328C79C9@dsic.upv.es> <85C55751-4131-4365-BBF0-5351D39D2E0B@chalmers.se> <1ECBC813-436E-45B8-B1ED-5CA9A83A4E6D@dsic.upv.es> , , Message-ID: <4A28F304-5E62-4D07-91D3-2695C0F534C0@dsic.upv.es> El 16/04/2013, a las 15:05, Zhang Wei escribi?: > Hi > sorry for didn't properly describe the problem. 
and you are right, I am talking about restart. I mean I read the source code for wrapping the arpack in slepc. it seems no difference in the implementation as in this FEM code I mentioned before. but it works different from the output point of view. actully there is a parameter as 'keep' extact the same as in kryloyschur solver in wrapped arpack in slepc, when we use this FEM codes, it only throw out the first vector in krylov space. how can I let the arpack work as that in slepc? There is no 'keep' parameter in SLEPc's wrapper to ARPACK. I don't know what you are talking about. Jose From dharmareddy84 at gmail.com Tue Apr 16 15:27:47 2013 From: dharmareddy84 at gmail.com (Dharmendar Reddy) Date: Tue, 16 Apr 2013 15:27:47 -0500 Subject: [petsc-users] Segfault in DMPlexCreateSectionF90 In-Reply-To: References: Message-ID: Thanks. Our server is down today, will test it later tonight. On Tue, Apr 16, 2013 at 7:10 AM, Matthew Knepley wrote: > On Tue, Apr 16, 2013 at 3:36 AM, Dharmendar Reddy > wrote: > >> Hello, >> I used the DMPlexCreateSection with Fortran pointers instead of >> allocatable arrays, the code goes past the error in previous error but now >> i get error about the BcPoints IS . Now i printed all the in put data >> before passing to the DMPlexCreateSection, the IS is in a valid state. >> However, i get a error realted to IS. am i doing something wrong here ? >> > > There is an example of this: src/dm/impls/plex/examples/tutorials/ex1f90.F > > The problem is that I made this a traditional F77 interface. I will change > it to F90. The problem is that > I cannot be completely consistent since F77 cannot pass back arrays, and > naming all those F90 makes > the interface look horrible. Also, Fortran has no way of telling you that > you are passing the wrong type. > > Matt > > >> This is the data i pass to the DMPlexCreateSection. Data printed from >> code before calling the createsection. You can see that the number of >> boundary conditions is 2, i have allocated the BcPoints array with index >> lower bound 0 and upper bound 1. Calling ISgetLocalSize and ISView gives no >> error. >> >> dim 1 >> numField 1 >> numComp pointer 1 >> cell DofMap pointer 1 0 >> num BC 2 >> bcField 0 0 >> Check BCPoints Index set for BcPointsIS[ 0 ] >> Number of local points 1 >> Number of indices in set 1 >> 0 1 >> -----------end IS View---------- >> Check BCPoints Index set for BcPointsIS[ 1 ] >> Number of local points 1 >> Number of indices in set 1 >> 0 12 >> -----------end IS View---------- >> >> But the call do ISViewGetLocalSize inside the DMPlexCreateSection is >> saying object is in wrong state... >> >> [0]PETSC ERROR: --------------------- Error Message >> ------------------------------------ >> [0]PETSC ERROR: Invalid argument! >> [0]PETSC ERROR: Wrong type of object: Parameter # 1! >> [0]PETSC ERROR: >> ------------------------------------------------------------------------ >> [0]PETSC ERROR: Petsc Development GIT revision: >> b6da085e934eddcf71be97425c4be8a7ff05e85d >> GIT Date: 2013-04-15 22:47:22 -0500 >> [0]PETSC ERROR: See docs/changes/index.html for recent updates. >> [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. >> [0]PETSC ERROR: See docs/index.html for manual pages. 
>> [0]PETSC ERROR: >> ------------------------------------------------------------------------ >> [0]PETSC ERROR: ./PoisTest on a mpi_rScalar_Debug named >> login3.stampede.tacc.utexas.edu by >> Reddy135 Tue Apr 16 03:29:09 2013 >> [0]PETSC ERROR: Libraries linked from >> /home1/00924/Reddy135/LocalApps/petsc/mpi_rScalar_De >> bug/lib >> [0]PETSC ERROR: Configure run at Tue Apr 16 01:38:29 2013 >> [0]PETSC ERROR: Configure options --download-blacs=1 >> --download-ctetgen=1 --download-met >> is=1 --download-mumps=1 --download-parmetis=1 --download-scalapack=1 >> --download-superlu_di >> st=1 --download-triangle=1 --download-umfpack=1 >> --with-blas-lapack-dir=/opt/apps/intel/13/ >> composer_xe_2013.2.146/mkl/lib/intel64/ --with-debugging=1 >> --with-mpi-dir=/opt/apps/intel1 >> 3/mvapich2/1.9/ --with-petsc-arch=mpi_rScalar_Debug >> --with-petsc-dir=/home1/00924/Reddy135 >> /LocalApps/petsc PETSC_ARCH=mpi_rScalar_Debug >> [0]PETSC ERROR: >> ------------------------------------------------------------------------ >> [0]PETSC ERROR: ISGetLocalSize() line 322 in >> /home1/00924/Reddy135/LocalApps/petsc/src/vec >> /is/is/interface/index.c >> [0]PETSC ERROR: DMPlexCreateSectionBCDof() line 5937 in >> /home1/00924/Reddy135/LocalApps/pe >> tsc/src/dm/impls/plex/plex.c >> [0]PETSC ERROR: DMPlexCreateSection() line 6149 in >> /home1/00924/Reddy135/LocalApps/petsc/s >> rc/dm/impls/plex/plex.c >> >> >> On Tue, Apr 16, 2013 at 2:52 AM, Dharmendar Reddy < >> dharmareddy84 at gmail.com> wrote: >> >>> Hello, >>> I am getting a segfault in DMPlexcreateSectionF90 ...I am not >>> sure if the problem is in parameters i pass or it is coming from >>> petsc...can you help me fix this ? >>> Here is the output from debugger: >>> I print the variables passed to function before call and they seem >>> consistent with the format required by the DMPlexCreateSectionF90 call... >>> >>> Breakpoint 1.1, FEMLIB_M::setupdefualtsection_dofmap (this=0x2781e28) at >>> /home1/00924/R >>> eddy135/projects/utgds/src/fem/VariationalProblemBoundProcedures.F90:197 >>> 197 call DMPlexCreateSectionF90(this%mdata%Meshdm,this%tdim, >>> this%numField, & >>> (idb) s >>> 198 this%numComp,this%cellDofMap, this%numBC, >>> this%bcFieldId, & >>> (idb) s >>> 199 this%bcPointIS,section,ierr) >>> (idb) print this%tdim >>> $25 = 1 >>> (idb) print this%numField >>> $26 = 1 >>> (idb) print this%numcomp >>> $27 = 1 >>> (idb) print this%cellDofMap >>> $28 = {1, 0} >>> (idb) print this%numBC >>> $29 = 2 >>> (idb) c >>> Continuing. >>> >>> Breakpoint 1.2, FEMLIB_M::setupdefualtsection_dofmap (this=0x2781e28) at >>> /home1/00924/R >>> eddy135/projects/utgds/src/fem/VariationalProblemBoundProcedures.F90:197 >>> 197 call DMPlexCreateSectionF90(this%mdata%Meshdm,this%tdim, >>> this%numField, & >>> (idb) c >>> Continuing. >>> >>> Breakpoint 1.3, FEMLIB_M::setupdefualtsection_dofmap (this=0x2781e28) at >>> /home1/00924/R >>> eddy135/projects/utgds/src/fem/VariationalProblemBoundProcedures.F90:197 >>> 197 call DMPlexCreateSectionF90(this%mdata%Meshdm,this%tdim, >>> this%numField, & >>> (idb) c >>> Continuing. >>> >>> Now i see a segfault in inside the petsc call >>> >>> Program received signal SIGSEGV >>> DMPlexCreateSectionInitial (dm=0x281cfa0, dim=1, numFields=1, >>> numComp=0x43ce7cf20000000 >>> 1, numDof=0x1, section=0x7fffa07c8d28) at >>> /home1/00924/Reddy135/LocalApps/petsc/src/dm/ >>> impls/plex/plex.c:5892 >>> 5892 for (f = 0; f < numFields; ++f) numDofTot[d] += >>> numDof[f*(dim+1)+d]; >>> >>> why is the adress of numDof 0x1 ? 
some thing wrong in the FortranAddress >>> conversion code in zplexf90 ? >>> >>> here is the backtrace >>> >>> #0 0x00002b79e6cec09a in DMPlexCreateSectionInitial (dm=0x281cfa0, >>> dim=1, numFields=1, >>> numComp=0x43ce7cf200000001, numDof=0x1, section=0x7fffa07c8d28) at >>> /home1/00924/Reddy1 >>> 35/LocalApps/petsc/src/dm/impls/plex/plex.c:5892 >>> #1 0x00002b79e6cf1038 in DMPlexCreateSection (dm=0x281cfa0, dim=1, >>> numFields=1, numCom >>> p=0x43ce7cf200000001, numDof=0x1, numBC=2, bcField=0xc420000000000000, >>> bcPoints=0xc4200 >>> 00002a66e21, section=0x7fffa07c8d28) at >>> /home1/00924/Reddy135/LocalApps/petsc/src/dm/im >>> pls/plex/plex.c:6148 >>> #2 0x00002b79e6d9e571 in dmplexcreatesectionf90_ (dm=0x259bda8, >>> dim=0x2781e8c, numFiel >>> ds=0x2781e90, ptrC=0x25ed9e0, ptrD=0x28adf00, numBC=0x27820a0, >>> ptrF=0x25eda00, ptrP=0x2 >>> 878900, section=0x7fffa07c8d28, __ierr=0x7fffa07c8c98) at >>> /home1/00924/Reddy135/LocalAp >>> ps/petsc/src/dm/impls/plex/f90-custom/zplexf90.c:221 >>> >>> -- >>> ----------------------------------------------------- >>> Dharmendar Reddy Palle >>> Graduate Student >>> Microelectronics Research center, >>> University of Texas at Austin, >>> 10100 Burnet Road, Bldg. 160 >>> MER 2.608F, TX 78758-4445 >>> e-mail: dharmareddy84 at gmail.com >>> Phone: +1-512-350-9082 >>> United States of America. >>> Homepage: https://webspace.utexas.edu/~dpr342 >>> >> >> >> >> -- >> ----------------------------------------------------- >> Dharmendar Reddy Palle >> Graduate Student >> Microelectronics Research center, >> University of Texas at Austin, >> 10100 Burnet Road, Bldg. 160 >> MER 2.608F, TX 78758-4445 >> e-mail: dharmareddy84 at gmail.com >> Phone: +1-512-350-9082 >> United States of America. >> Homepage: https://webspace.utexas.edu/~dpr342 >> > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > -- ----------------------------------------------------- Dharmendar Reddy Palle Graduate Student Microelectronics Research center, University of Texas at Austin, 10100 Burnet Road, Bldg. 160 MER 2.608F, TX 78758-4445 e-mail: dharmareddy84 at gmail.com Phone: +1-512-350-9082 United States of America. Homepage: https://webspace.utexas.edu/~dpr342 -------------- next part -------------- An HTML attachment was scrubbed... URL: From dharmareddy84 at gmail.com Tue Apr 16 15:38:58 2013 From: dharmareddy84 at gmail.com (Dharmendar Reddy) Date: Tue, 16 Apr 2013 15:38:58 -0500 Subject: [petsc-users] Segfault in DMPlexCreateSectionF90 In-Reply-To: References: Message-ID: I do not do any thing special in configure stage for Fortran. So if you have a Fortran code and call petsc from with in, it should work. Just use same compilers for bulding petsc and your code. There a number of Fortran based examples in petsc which i very often refer to. Most of my code is Fortran 2003 compliant and to your second question, i don't clearly understand it but i think yes. On Tue, Apr 16, 2013 at 5:25 AM, Sonya Blade wrote: > Dear Dharmendar, > > Can you share with me which fortran version you are using? Is it same as > is > in C configuring the Petsc or Slepc with fortran ? > > Regards, -- ----------------------------------------------------- Dharmendar Reddy Palle Graduate Student Microelectronics Research center, University of Texas at Austin, 10100 Burnet Road, Bldg. 
160 MER 2.608F, TX 78758-4445 e-mail: dharmareddy84 at gmail.com Phone: +1-512-350-9082 United States of America. Homepage: https://webspace.utexas.edu/~dpr342 -------------- next part -------------- An HTML attachment was scrubbed... URL: From Shuangshuang.Jin at pnnl.gov Tue Apr 16 18:21:31 2013 From: Shuangshuang.Jin at pnnl.gov (Jin, Shuangshuang) Date: Tue, 16 Apr 2013 16:21:31 -0700 Subject: [petsc-users] ksp for AX=B system Message-ID: <6778DE83AB681D49BFC2CD850610FEB1018FC933E31C@EMAIL04.pnl.gov> Hi, petsc developers, I have another question regarding solving the AX=B linear systems. I know that we can use PETSc ksp solver to solve an Ax=b linear system in parallel, where A is a square matrix, and b is a column vector for example. What about solving AX=B in parallel, where A is still n*n, and B is a n*m matrix? If I solve each column of B one by one, such as: for (i = 0; i < m; i++) Callkspsolver(A, xi, bi); // user defined wrapper function to call PETSc ksp solver Then for solving each individual Axi = bi, it's parallel. However, if m is big, the sequential outside loop is quite inefficient. What is the best approach to parallelize the outside loop as well to speed up the overall computation? Thanks, Shuangshuang -------------- next part -------------- An HTML attachment was scrubbed... URL: From ztdep at yahoo.com.cn Tue Apr 16 19:12:38 2013 From: ztdep at yahoo.com.cn (ztdep at yahoo.com.cn) Date: Wed, 17 Apr 2013 08:12:38 +0800 (CST) Subject: [petsc-users] =?utf-8?b?5Zue5aSN77yaICBIb3cgdG8gZ2V0IHRoZSBlaWdl?= =?utf-8?q?nvectors_in_Slepc?= In-Reply-To: References: , Message-ID: <1366157558.80453.YahooMailNeo@web15104.mail.cnb.yahoo.com> Dear Roman: ? ?Could you please tell me how to ask questions in mailing list. i have put one question to?petsc-users at mcs.anl.gov, but i get nothing feedback ? ? >________________________________ > ???? Jose E. Roman >???? PETSc users list >????? 2013?4?14?, ???, 10:44 ?? >??: Re: [petsc-users] How to get the eigenvectors in Slepc > > > >El 14/04/2013, a las 08:04, Sonya Blade escribi?: > >> Dear All, >> >> If I have all eigenvalues as real numbers is it possible mathematically that >> I get the complex eigenvectors? Because nowhere in my solution I obtain the >> complex eigenvalues but Slepc returns the complex for eigenvectors. >> >> Your enlightenment will be appreciated. >> >> Regards, ??? ??? ??? ? ??? ??? ? > >If the eigenvector is complex then of course the eigenvalue is complex as well (I assume your matrix is real non-symmetric). You have to get both the real and imaginary parts of the eigenvalue. >http://www.grycap.upv.es/slepc/documentation/current/docs/manualpages/EPS/EPSGetEigenpair.html > >An alternative is to do all the computation in complex arithmetic (configure --with-scalar-type=complex). > >Jose > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Tue Apr 16 19:15:40 2013 From: bsmith at mcs.anl.gov (Barry Smith) Date: Tue, 16 Apr 2013 19:15:40 -0500 Subject: [petsc-users] ksp for AX=B system In-Reply-To: <6778DE83AB681D49BFC2CD850610FEB1018FC933E31C@EMAIL04.pnl.gov> References: <6778DE83AB681D49BFC2CD850610FEB1018FC933E31C@EMAIL04.pnl.gov> Message-ID: <42A940ED-9634-4AAC-A8FB-2C6EAD3EEF6B@mcs.anl.gov> Shuangshuang, How large is n and m? PETSc does not have any built-in "multiple right hand side" iterative solvers. Generally if m is small, m < 10-20 we recommend just solving each one in a loop as you suggest below. 
If m is large and n is not "too large" we recommend using a direct solver and using MatMatSolve() to solve all the right hand sides "together". If n is not "too large" and m is very large it would also be reasonable to solve in parallel different "sets" of right hand sides where each "set" of right hand sides is solve in parallel using a direct solver with MatMatSolve(), we don't have specific "canned" code set up to do this but it is pretty straightforward. Also where is your matrix coming from? A PDE where there are good known preconditioners to solve it (like multigrid) or some other type of problem without good preconditioners? Once we know the type of problem and some idea of n and m we can make more specific recommendations. Barry On Apr 16, 2013, at 6:21 PM, "Jin, Shuangshuang" wrote: > Hi, petsc developers, I have another question regarding solving the AX=B linear systems. > > I know that we can use PETSc ksp solver to solve an Ax=b linear system in parallel, where A is a square matrix, and b is a column vector for example. > > What about solving AX=B in parallel, where A is still n*n, and B is a n*m matrix? > > If I solve each column of B one by one, such as: > for (i = 0; i < m; i++) > Callkspsolver(A, xi, bi); // user defined wrapper function to call PETSc ksp solver > > Then for solving each individual Axi = bi, it?s parallel. However, if m is big, the sequential outside loop is quite inefficient. > > What is the best approach to parallelize the outside loop as well to speed up the overall computation? > > Thanks, > Shuangshuang > From Shuangshuang.Jin at pnnl.gov Tue Apr 16 19:38:29 2013 From: Shuangshuang.Jin at pnnl.gov (Jin, Shuangshuang) Date: Tue, 16 Apr 2013 17:38:29 -0700 Subject: [petsc-users] ksp for AX=B system In-Reply-To: <42A940ED-9634-4AAC-A8FB-2C6EAD3EEF6B@mcs.anl.gov> References: <6778DE83AB681D49BFC2CD850610FEB1018FC933E31C@EMAIL04.pnl.gov> <42A940ED-9634-4AAC-A8FB-2C6EAD3EEF6B@mcs.anl.gov> Message-ID: <6778DE83AB681D49BFC2CD850610FEB1018FC933E32C@EMAIL04.pnl.gov> Hi, Barry, thanks for your prompt reply. We have various size problems, from (n=9, m=3), (n=1081, m = 288), to (n=16072, m=2361) or even larger ultimately. Usually the dimension of the square matrix A, n is much larger than the column dimension of B, m. As you said, I'm using the loop to deal with the small (n=9, m=3) case. However, for bigger problem, I do hope there's a better approach. This is a power flow problem. When we parallelized it in OpenMP previously, we just parallelized the outside loop, and used a direct solver to solve it. We are now switching to MPI and like to use PETSc ksp solver to solve it in parallel. However, I don't know what's the best ksp solver I should use here? Direct solver or try a preconditioner? I appreciate very much for your recommendations. Thanks, Shuangshuang -----Original Message----- From: petsc-users-bounces at mcs.anl.gov [mailto:petsc-users-bounces at mcs.anl.gov] On Behalf Of Barry Smith Sent: Tuesday, April 16, 2013 5:16 PM To: PETSc users list Subject: Re: [petsc-users] ksp for AX=B system Shuangshuang, How large is n and m? PETSc does not have any built-in "multiple right hand side" iterative solvers. Generally if m is small, m < 10-20 we recommend just solving each one in a loop as you suggest below. If m is large and n is not "too large" we recommend using a direct solver and using MatMatSolve() to solve all the right hand sides "together". 
If n is not "too large" and m is very large it would also be reasonable to solve in parallel different "sets" of right hand sides where each "set" of right hand sides is solve in parallel using a direct solver with MatMatSolve(), we don't have specific "canned" code set up to do this but it is pretty straightforward. Also where is your matrix coming from? A PDE where there are good known preconditioners to solve it (like multigrid) or some other type of problem without good preconditioners? Once we know the type of problem and some idea of n and m we can make more specific recommendations. Barry On Apr 16, 2013, at 6:21 PM, "Jin, Shuangshuang" wrote: > Hi, petsc developers, I have another question regarding solving the AX=B linear systems. > > I know that we can use PETSc ksp solver to solve an Ax=b linear system in parallel, where A is a square matrix, and b is a column vector for example. > > What about solving AX=B in parallel, where A is still n*n, and B is a n*m matrix? > > If I solve each column of B one by one, such as: > for (i = 0; i < m; i++) > Callkspsolver(A, xi, bi); // user defined wrapper function to call PETSc ksp solver > > Then for solving each individual Axi = bi, it's parallel. However, if m is big, the sequential outside loop is quite inefficient. > > What is the best approach to parallelize the outside loop as well to speed up the overall computation? > > Thanks, > Shuangshuang > From bsmith at mcs.anl.gov Tue Apr 16 19:49:58 2013 From: bsmith at mcs.anl.gov (Barry Smith) Date: Tue, 16 Apr 2013 19:49:58 -0500 Subject: [petsc-users] ksp for AX=B system In-Reply-To: <6778DE83AB681D49BFC2CD850610FEB1018FC933E32C@EMAIL04.pnl.gov> References: <6778DE83AB681D49BFC2CD850610FEB1018FC933E31C@EMAIL04.pnl.gov> <42A940ED-9634-4AAC-A8FB-2C6EAD3EEF6B@mcs.anl.gov> <6778DE83AB681D49BFC2CD850610FEB1018FC933E32C@EMAIL04.pnl.gov> Message-ID: <58CD76F6-ECC5-4DE5-A93F-1075179054B9@mcs.anl.gov> Shuangshuang This is what I was expecting, thanks for the confirmation. For these size problems you definitely want to use a direct solver (often parallel but not for smaller matrices) and solve multiple right hand sides. This means you actually will not use the KSP solver that is standard for most PETSc work, instead you will work directly with the MatGetFactor(), MatGetOrdering(), MatLUFactorSymbolic(), MatLUFactorNumeric(), MatMatSolve() paradigm where the A matrix is stored as an MATAIJ matrix and the B (multiple right hand side) as a MATDENSE matrix. An example that displays this paradigm is src/mat/examples/tests/ex125.c Once you have something running of interest to you we would like to work with you to improve the performance, we have some "tricks" we haven't yet implemented to make these solvers much faster than they will be by default. Barry On Apr 16, 2013, at 7:38 PM, "Jin, Shuangshuang" wrote: > Hi, Barry, thanks for your prompt reply. > > We have various size problems, from (n=9, m=3), (n=1081, m = 288), to (n=16072, m=2361) or even larger ultimately. > > Usually the dimension of the square matrix A, n is much larger than the column dimension of B, m. > > As you said, I'm using the loop to deal with the small (n=9, m=3) case. However, for bigger problem, I do hope there's a better approach. > > This is a power flow problem. When we parallelized it in OpenMP previously, we just parallelized the outside loop, and used a direct solver to solve it. > > We are now switching to MPI and like to use PETSc ksp solver to solve it in parallel. 
However, I don't know what's the best ksp solver I should use here? Direct solver or try a preconditioner? > > I appreciate very much for your recommendations. > > Thanks, > Shuangshuang > > > -----Original Message----- > From: petsc-users-bounces at mcs.anl.gov [mailto:petsc-users-bounces at mcs.anl.gov] On Behalf Of Barry Smith > Sent: Tuesday, April 16, 2013 5:16 PM > To: PETSc users list > Subject: Re: [petsc-users] ksp for AX=B system > > > Shuangshuang, > > How large is n and m? PETSc does not have any built-in "multiple right hand side" iterative solvers. Generally if m is small, m < 10-20 we recommend just solving each one in a loop as you suggest below. If m is large and n is not "too large" we recommend using a direct solver and using MatMatSolve() to solve all the right hand sides "together". If n is not "too large" and m is very large it would also be reasonable to solve in parallel different "sets" of right hand sides where each "set" of right hand sides is solve in parallel using a direct solver with MatMatSolve(), we don't have specific "canned" code set up to do this but it is pretty straightforward. > > Also where is your matrix coming from? A PDE where there are good known preconditioners to solve it (like multigrid) or some other type of problem without good preconditioners? > > Once we know the type of problem and some idea of n and m we can make more specific recommendations. > > Barry > > > On Apr 16, 2013, at 6:21 PM, "Jin, Shuangshuang" wrote: > >> Hi, petsc developers, I have another question regarding solving the AX=B linear systems. >> >> I know that we can use PETSc ksp solver to solve an Ax=b linear system in parallel, where A is a square matrix, and b is a column vector for example. >> >> What about solving AX=B in parallel, where A is still n*n, and B is a n*m matrix? >> >> If I solve each column of B one by one, such as: >> for (i = 0; i < m; i++) >> Callkspsolver(A, xi, bi); // user defined wrapper function to call PETSc ksp solver >> >> Then for solving each individual Axi = bi, it's parallel. However, if m is big, the sequential outside loop is quite inefficient. >> >> What is the best approach to parallelize the outside loop as well to speed up the overall computation? >> >> Thanks, >> Shuangshuang >> > From jroman at dsic.upv.es Wed Apr 17 01:49:56 2013 From: jroman at dsic.upv.es (Jose E. Roman) Date: Wed, 17 Apr 2013 08:49:56 +0200 Subject: [petsc-users] =?utf-8?b?5Zue5aSN77yaICBIb3cgdG8gZ2V0IHRoZSBlaWdl?= =?utf-8?q?nvectors_in_Slepc?= In-Reply-To: <1366157558.80453.YahooMailNeo@web15104.mail.cnb.yahoo.com> References: , <1366157558.80453.YahooMailNeo@web15104.mail.cnb.yahoo.com> Message-ID: El 17/04/2013, a las 02:12, ztdep at yahoo.com.cn escribi?: > Dear Roman: > Could you please tell me how to ask questions in mailing list. i have put one question to petsc-users at mcs.anl.gov, but i get nothing feedback > Do you mean your question on March 3? Matt replied, here is the answer: https://lists.mcs.anl.gov/mailman/htdig/petsc-users/2013-March/016530.html Answers are sent to the list, so you must be suscribed to receive the answer. Jose From dominik at itis.ethz.ch Wed Apr 17 06:41:57 2013 From: dominik at itis.ethz.ch (Dominik Szczerba) Date: Wed, 17 Apr 2013 13:41:57 +0200 Subject: [petsc-users] Crash when using valgrind Message-ID: I have been successfully using valgrind for a long long time with petsc but now suddenly it refuses to work. E.g. 
calling up a properly functioning program causes a crash: mpiexec -n 2 valgrind --tool=memcheck -q --num-callers=20 MySolver cr_libinit.c:183 cri_init: sigaction() failed: Invalid argument cr_libinit.c:183 cri_init: sigaction() failed: Invalid argument ===================================================================================== = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES = EXIT CODE: 134 = CLEANING UP REMAINING PROCESSES = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES ===================================================================================== APPLICATION TERMINATED WITH THE EXIT STRING: Aborted (signal 6) Same problem if I run without mpiexec at all, just on one proc. Google found just one and only related page on valgrind pages but I was not able to conclude much. Did anyone else experience the same problem? Thanks, Dominik -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Wed Apr 17 08:43:13 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Wed, 17 Apr 2013 08:43:13 -0500 Subject: [petsc-users] Crash when using valgrind In-Reply-To: References: Message-ID: Can you get a stack trace? Does this happen on a different machine? Should we turn off signal handling when running in valgrind? On Apr 17, 2013 6:42 AM, "Dominik Szczerba" wrote: > I have been successfully using valgrind for a long long time with petsc > but now suddenly it refuses to work. E.g. calling up a properly functioning > program causes a crash: > > mpiexec -n 2 valgrind --tool=memcheck -q --num-callers=20 MySolver > > cr_libinit.c:183 cri_init: sigaction() failed: Invalid argument > cr_libinit.c:183 cri_init: sigaction() failed: Invalid argument > > > ===================================================================================== > = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES > = EXIT CODE: 134 > = CLEANING UP REMAINING PROCESSES > = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES > > ===================================================================================== > APPLICATION TERMINATED WITH THE EXIT STRING: Aborted (signal 6) > > Same problem if I run without mpiexec at all, just on one proc. > > Google found just one and only related page on valgrind pages but I was > not able to conclude much. Did anyone else experience the same problem? > > Thanks, > Dominik > -------------- next part -------------- An HTML attachment was scrubbed... URL: From alexei.matveev at gmail.com Wed Apr 17 08:30:31 2013 From: alexei.matveev at gmail.com (Alexei Matveev) Date: Wed, 17 Apr 2013 15:30:31 +0200 Subject: [petsc-users] maintaining application compatibility with 3.1 and 3.2 Message-ID: Hi, Everyone, I just upgraded a cluster box to Debian Wheezy that comes with PETSC 3.2 and noted quite a few changes in the interface related to the DM/DA/DMDA. I am still somewhat confused about the distinction of the three. My question is --- does anyone have experience maintaining a common application source for 3.2 (e.g. Wheezy) and, say, 3.1 (e.g. Ubuntu 12.04)? So far I was able to do so for 2.3 (Lenny) and 3.1. However the changes in 3.2 seem to be quite significant so that I decided to first ask for your experience and advice. Do you maybe have any tips or pointers on how to maintain such a compatibility? Alexei -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From knepley at gmail.com Wed Apr 17 09:19:11 2013 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 17 Apr 2013 10:19:11 -0400 Subject: [petsc-users] maintaining application compatibility with 3.1 and 3.2 In-Reply-To: References: Message-ID: On Wed, Apr 17, 2013 at 9:30 AM, Alexei Matveev wrote: > > Hi, Everyone, > > I just upgraded a cluster box to Debian Wheezy that comes with PETSC 3.2 > and noted quite a few changes in the interface related to the DM/DA/DMDA. > DMDA is the renamed DA. DM is a superinterface that always existed, but had not done much. The DM is intended to provide more structure to data than the simple linear space of Vec, e.g. structured grid (DMDA), unstructured grid (DMPlex), problem composition (DMComposite), etc. Matt > I am still somewhat confused about the distinction of the three. > > My question is --- does anyone have experience maintaining a common > application source for 3.2 (e.g. Wheezy) and, say, 3.1 (e.g. Ubuntu 12.04)? > > So far I was able to do so for 2.3 (Lenny) and 3.1. However the changes > in 3.2 seem to be quite significant so that I decided to first ask for your > experience and advice. > > Do you maybe have any tips or pointers on how to maintain such a > compatibility? > > Alexei > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From benjamin.kirk-1 at nasa.gov Wed Apr 17 09:20:31 2013 From: benjamin.kirk-1 at nasa.gov (Kirk, Benjamin (JSC-EG311)) Date: Wed, 17 Apr 2013 09:20:31 -0500 Subject: [petsc-users] maintaining application compatibility with 3.1 and 3.2 In-Reply-To: References: Message-ID: <6B9E2C49-A1D4-4FEE-9839-27787A876C52@nasa.gov> On Apr 17, 2013, at 9:12 AM, "Alexei Matveev" wrote: > Do you maybe have any tips or pointers on how to maintain such a > compatibility? We have a collection of macros that we use in the libMesh project to aid with this - see in particular petsc_macro.h at http://libmesh.sf.net. We test daily with petsc 3.0 to 3.3 using this approach. Ill get you the precise link if you desire later... -Ben From dominik at itis.ethz.ch Wed Apr 17 10:07:44 2013 From: dominik at itis.ethz.ch (Dominik Szczerba) Date: Wed, 17 Apr 2013 17:07:44 +0200 Subject: [petsc-users] Crash when using valgrind In-Reply-To: References: Message-ID: On Wed, Apr 17, 2013 at 3:43 PM, Jed Brown wrote: > Can you get a stack trace? Does this happen on a different machine? > > Stack trace of what exactly? I do not seem to be able to run gdb with valgrind...? gdb valgrind --tool=memcheck -q --num-callers=20 MySolver gdb: unrecognized option '--tool=memcheck' I have no other machine to check at the moment, but am trying to set one up. Thanks Dominik -------------- next part -------------- An HTML attachment was scrubbed... URL: From benjamin.kirk-1 at nasa.gov Wed Apr 17 10:12:46 2013 From: benjamin.kirk-1 at nasa.gov (Kirk, Benjamin (JSC-EG311)) Date: Wed, 17 Apr 2013 10:12:46 -0500 Subject: [petsc-users] maintaining application compatibility with 3.1 and 3.2 In-Reply-To: <6B9E2C49-A1D4-4FEE-9839-27787A876C52@nasa.gov> References: <6B9E2C49-A1D4-4FEE-9839-27787A876C52@nasa.gov> Message-ID: On Apr 17, 2013, at 9:20 AM, "Kirk, Benjamin (JSC-EG311)" wrote: > On Apr 17, 2013, at 9:12 AM, "Alexei Matveev" wrote: > >> Do you maybe have any tips or pointers on how to maintain such a >> compatibility? 
> > We have a collection of macros that we use in the libMesh project to aid with this - see in particular petsc_macro.h at http://libmesh.sf.net. > > We test daily with petsc 3.0 to 3.3 using this approach. > > Ill get you the precise link if you desire later? See the macro definition here: http://libmesh.sourceforge.net/doxygen/petsc__macro_8h_source.php For example, we define the macro LibMeshVecDestroy() as a wrapper to the version-specific PETSc implementation. -Ben From Shuangshuang.Jin at pnnl.gov Wed Apr 17 10:50:25 2013 From: Shuangshuang.Jin at pnnl.gov (Jin, Shuangshuang) Date: Wed, 17 Apr 2013 08:50:25 -0700 Subject: [petsc-users] ksp for AX=B system In-Reply-To: <58CD76F6-ECC5-4DE5-A93F-1075179054B9@mcs.anl.gov> References: <6778DE83AB681D49BFC2CD850610FEB1018FC933E31C@EMAIL04.pnl.gov> <42A940ED-9634-4AAC-A8FB-2C6EAD3EEF6B@mcs.anl.gov> <6778DE83AB681D49BFC2CD850610FEB1018FC933E32C@EMAIL04.pnl.gov> <58CD76F6-ECC5-4DE5-A93F-1075179054B9@mcs.anl.gov> Message-ID: <6778DE83AB681D49BFC2CD850610FEB1018FC933E37B@EMAIL04.pnl.gov> Thank you, Barry, I'll take a look at ex125.c first and try to implement it in our code. I'll get back to you for further help to improve the performance. Thanks, Shuangshuang -----Original Message----- From: petsc-users-bounces at mcs.anl.gov [mailto:petsc-users-bounces at mcs.anl.gov] On Behalf Of Barry Smith Sent: Tuesday, April 16, 2013 5:50 PM To: PETSc users list Subject: Re: [petsc-users] ksp for AX=B system Shuangshuang This is what I was expecting, thanks for the confirmation. For these size problems you definitely want to use a direct solver (often parallel but not for smaller matrices) and solve multiple right hand sides. This means you actually will not use the KSP solver that is standard for most PETSc work, instead you will work directly with the MatGetFactor(), MatGetOrdering(), MatLUFactorSymbolic(), MatLUFactorNumeric(), MatMatSolve() paradigm where the A matrix is stored as an MATAIJ matrix and the B (multiple right hand side) as a MATDENSE matrix. An example that displays this paradigm is src/mat/examples/tests/ex125.c Once you have something running of interest to you we would like to work with you to improve the performance, we have some "tricks" we haven't yet implemented to make these solvers much faster than they will be by default. Barry On Apr 16, 2013, at 7:38 PM, "Jin, Shuangshuang" wrote: > Hi, Barry, thanks for your prompt reply. > > We have various size problems, from (n=9, m=3), (n=1081, m = 288), to (n=16072, m=2361) or even larger ultimately. > > Usually the dimension of the square matrix A, n is much larger than the column dimension of B, m. > > As you said, I'm using the loop to deal with the small (n=9, m=3) case. However, for bigger problem, I do hope there's a better approach. > > This is a power flow problem. When we parallelized it in OpenMP previously, we just parallelized the outside loop, and used a direct solver to solve it. > > We are now switching to MPI and like to use PETSc ksp solver to solve it in parallel. However, I don't know what's the best ksp solver I should use here? Direct solver or try a preconditioner? > > I appreciate very much for your recommendations. > > Thanks, > Shuangshuang > > > -----Original Message----- > From: petsc-users-bounces at mcs.anl.gov [mailto:petsc-users-bounces at mcs.anl.gov] On Behalf Of Barry Smith > Sent: Tuesday, April 16, 2013 5:16 PM > To: PETSc users list > Subject: Re: [petsc-users] ksp for AX=B system > > > Shuangshuang, > > How large is n and m? 
PETSc does not have any built-in "multiple right hand side" iterative solvers. Generally if m is small, m < 10-20 we recommend just solving each one in a loop as you suggest below. If m is large and n is not "too large" we recommend using a direct solver and using MatMatSolve() to solve all the right hand sides "together". If n is not "too large" and m is very large it would also be reasonable to solve in parallel different "sets" of right hand sides where each "set" of right hand sides is solve in parallel using a direct solver with MatMatSolve(), we don't have specific "canned" code set up to do this but it is pretty straightforward. > > Also where is your matrix coming from? A PDE where there are good known preconditioners to solve it (like multigrid) or some other type of problem without good preconditioners? > > Once we know the type of problem and some idea of n and m we can make more specific recommendations. > > Barry > > > On Apr 16, 2013, at 6:21 PM, "Jin, Shuangshuang" wrote: > >> Hi, petsc developers, I have another question regarding solving the AX=B linear systems. >> >> I know that we can use PETSc ksp solver to solve an Ax=b linear system in parallel, where A is a square matrix, and b is a column vector for example. >> >> What about solving AX=B in parallel, where A is still n*n, and B is a n*m matrix? >> >> If I solve each column of B one by one, such as: >> for (i = 0; i < m; i++) >> Callkspsolver(A, xi, bi); // user defined wrapper function to call PETSc ksp solver >> >> Then for solving each individual Axi = bi, it's parallel. However, if m is big, the sequential outside loop is quite inefficient. >> >> What is the best approach to parallelize the outside loop as well to speed up the overall computation? >> >> Thanks, >> Shuangshuang >> > From pengxwang at hotmail.com Wed Apr 17 10:52:55 2013 From: pengxwang at hotmail.com (Roc Wang) Date: Wed, 17 Apr 2013 10:52:55 -0500 Subject: [petsc-users] errors when install petsc-3.3-p6 Message-ID: Hi, I am installing the last version of the petsc. Previously, I have petsc-3.2-p7 and it works well. After install the package, I run the test by typing: make PETSC_DIR=/home/pwang/soft/petsc-3.3-p6 PETSC_ARCH=arch-linux2-c-debug test A lots of error information come out, the following shows the first several lines. Which package or library I missed? Thanks. 
mpicc -o ex19.o -c -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -g3 -fno-inline -O0 -I/home/pwang/soft/petsc-3.3-p6/include -I/home/pwang/soft/petsc-3.3-p6/arch-linux2-c-debug/include -I/usr/include/mpich2-x86_64 -D__INSDIR__=src/snes/examples/tutorials/ ex19.c mpicc -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -g3 -fno-inline -O0 -o ex19 ex19.o -L/home/pwang/soft/petsc-3.3-p6/arch-linux2-c-debug/lib -lpetsc -lX11 -lpthread -Wl,-rpath,/home/pwang/soft/petsc-3.3-p6/arch-linux2-c-debug/lib -lflapack -lfblas -lm -L/usr/lib64/mpich2/lib -L/usr/lib/gcc/x86_64-redhat-linux/4.4.6 -lmpichf90 -lgfortran -lm -Wl,-rpath,/usr/lib64/mpich2/lib -lm -ldl -lmpich -lopa -lpthread -lrt -lgcc_s -ldl /home/pwang/soft/petsc-3.3-p6/arch-linux2-c-debug/lib/libmpich.a(initthread.o): In function `PMPI_Init_thread': /home/pwang/soft/petsc-3.3-p6/externalpackages/mpich2-1.4.1p1/src/mpi/init/initthread.c:535: undefined reference to `MPL_env2bool' /home/pwang/soft/petsc-3.3-p6/arch-linux2-c-debug/lib/libmpich.a(param_vals.o): In function `MPIR_Param_init_params': /home/pwang/soft/petsc-3.3-p6/externalpackages/mpich2-1.4.1p1/src/util/param/param_vals.c:249: undefined reference to `MPL_env2int' /home/pwang/soft/petsc-3.3-p6/externalpackages/mpich2-1.4.1p1/src/util/param/param_vals.c:251: undefined reference to `MPL_env2int' /home/pwang/soft/petsc-3.3-p6/externalpackages/mpich2-1.4.1p1/src/util/param/param_vals.c:254: undefined reference to `MPL_env2int' /home/pwang/soft/petsc-3.3-p6/externalpackages/mpich2-1.4.1p1/src/util/param/param_vals.c:256: undefined reference to `MPL_env2int' /home/pwang/soft/petsc-3.3-p6/externalpackages/mpich2-1.4.1p1/src/util/param/param_vals.c:259: undefined reference to `MPL_env2int' /home/pwang/soft/petsc-3.3-p6/arch-linux2-c-debug/lib/libmpich.a(param_vals.o):/home/pwang/soft/petsc-3.3-p6/externalpackages/mpich2-1.4.1p1/src/util/param/param_vals.c:261: more undefined references to `MPL_env2int' follow ... -------------- next part -------------- An HTML attachment was scrubbed... URL: From mike.hui.zhang at hotmail.com Wed Apr 17 11:02:19 2013 From: mike.hui.zhang at hotmail.com (Hui Zhang) Date: Wed, 17 Apr 2013 18:02:19 +0200 Subject: [petsc-users] Read PetscInt in binary file from Matlab Message-ID: Is there something similar to the Matlab function PetscBinaryRead for PetscInt? The binary file is obtained by PetscIntView. From balay at mcs.anl.gov Wed Apr 17 11:58:18 2013 From: balay at mcs.anl.gov (Satish Balay) Date: Wed, 17 Apr 2013 11:58:18 -0500 (CDT) Subject: [petsc-users] errors when install petsc-3.3-p6 In-Reply-To: References: Message-ID: Can you sen the relavent logs to petsc-maint? Satish On Wed, 17 Apr 2013, Roc Wang wrote: > Hi, I am installing the last version of the petsc. Previously, I have petsc-3.2-p7 and it works well. > > After install the package, I run the test by typing: > > make PETSC_DIR=/home/pwang/soft/petsc-3.3-p6 PETSC_ARCH=arch-linux2-c-debug test > > A lots of error information come out, the following shows the first several lines. Which package or library I missed? Thanks. 
> > mpicc -o ex19.o -c -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -g3 -fno-inline -O0 -I/home/pwang/soft/petsc-3.3-p6/include -I/home/pwang/soft/petsc-3.3-p6/arch-linux2-c-debug/include -I/usr/include/mpich2-x86_64 -D__INSDIR__=src/snes/examples/tutorials/ ex19.c > mpicc -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -g3 -fno-inline -O0 -o ex19 ex19.o -L/home/pwang/soft/petsc-3.3-p6/arch-linux2-c-debug/lib -lpetsc -lX11 -lpthread -Wl,-rpath,/home/pwang/soft/petsc-3.3-p6/arch-linux2-c-debug/lib -lflapack -lfblas -lm -L/usr/lib64/mpich2/lib -L/usr/lib/gcc/x86_64-redhat-linux/4.4.6 -lmpichf90 -lgfortran -lm -Wl,-rpath,/usr/lib64/mpich2/lib -lm -ldl -lmpich -lopa -lpthread -lrt -lgcc_s -ldl > /home/pwang/soft/petsc-3.3-p6/arch-linux2-c-debug/lib/libmpich.a(initthread.o): In function `PMPI_Init_thread': > /home/pwang/soft/petsc-3.3-p6/externalpackages/mpich2-1.4.1p1/src/mpi/init/initthread.c:535: undefined reference to `MPL_env2bool' > /home/pwang/soft/petsc-3.3-p6/arch-linux2-c-debug/lib/libmpich.a(param_vals.o): In function `MPIR_Param_init_params': > /home/pwang/soft/petsc-3.3-p6/externalpackages/mpich2-1.4.1p1/src/util/param/param_vals.c:249: undefined reference to `MPL_env2int' > /home/pwang/soft/petsc-3.3-p6/externalpackages/mpich2-1.4.1p1/src/util/param/param_vals.c:251: undefined reference to `MPL_env2int' > /home/pwang/soft/petsc-3.3-p6/externalpackages/mpich2-1.4.1p1/src/util/param/param_vals.c:254: undefined reference to `MPL_env2int' > /home/pwang/soft/petsc-3.3-p6/externalpackages/mpich2-1.4.1p1/src/util/param/param_vals.c:256: undefined reference to `MPL_env2int' > /home/pwang/soft/petsc-3.3-p6/externalpackages/mpich2-1.4.1p1/src/util/param/param_vals.c:259: undefined reference to `MPL_env2int' > /home/pwang/soft/petsc-3.3-p6/arch-linux2-c-debug/lib/libmpich.a(param_vals.o):/home/pwang/soft/petsc-3.3-p6/externalpackages/mpich2-1.4.1p1/src/util/param/param_vals.c:261: more undefined references to `MPL_env2int' follow > ... > > > From mpovolot at purdue.edu Wed Apr 17 12:23:04 2013 From: mpovolot at purdue.edu (Michael Povolotskyi) Date: Wed, 17 Apr 2013 13:23:04 -0400 Subject: [petsc-users] MatDenseGetArray question Message-ID: <516EDA78.9020304@purdue.edu> Dear Petsc developers, does the function MatDenseGetArray allocate additional memory or just returns a pointer to existing memory ? Thank you, Michael. -- Michael Povolotskyi, PhD Research Assistant Professor Network for Computational Nanotechnology 207 S Martin Jischke Drive Purdue University, DLR, room 441-10 West Lafayette, Indiana 47907 phone: +1-765-494-9396 fax: +1-765-496-6026 From knepley at gmail.com Wed Apr 17 12:37:15 2013 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 17 Apr 2013 13:37:15 -0400 Subject: [petsc-users] MatDenseGetArray question In-Reply-To: <516EDA78.9020304@purdue.edu> References: <516EDA78.9020304@purdue.edu> Message-ID: On Wed, Apr 17, 2013 at 1:23 PM, Michael Povolotskyi wrote: > Dear Petsc developers, > does the function MatDenseGetArray allocate additional memory or just > returns a pointer to existing memory ? > Returns a pointer. Matt > Thank you, > Michael. 
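To make "returns a pointer" concrete: the array handed back is the matrix's own storage (column-major for a sequential dense matrix), so values written through it change the matrix directly, and the pointer must be restored before the matrix is used elsewhere. A minimal sketch, not code from this thread (error checking omitted; older releases spell the pair MatGetArray()/MatRestoreArray()):

#include <petscmat.h>

int main(int argc, char **argv)
{
  Mat          A;
  PetscScalar *a;
  PetscInt     i, j, m = 4, n = 3;

  PetscInitialize(&argc, &argv, PETSC_NULL, PETSC_NULL);
  MatCreateSeqDense(PETSC_COMM_SELF, m, n, PETSC_NULL, &A);

  MatDenseGetArray(A, &a);              /* no copy: pointer into A's own storage */
  for (j = 0; j < n; j++)               /* column-major: entry (i,j) is a[i + j*m] */
    for (i = 0; i < m; i++) a[i + j*m] = (PetscScalar)(i + 10*j);
  MatDenseRestoreArray(A, &a);          /* hand the pointer back before using A */

  MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);
  MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);
  MatView(A, PETSC_VIEWER_STDOUT_SELF); /* prints the values written above */

  MatDestroy(&A);
  PetscFinalize();
  return 0;
}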
> > -- > Michael Povolotskyi, PhD > Research Assistant Professor > Network for Computational Nanotechnology > 207 S Martin Jischke Drive > Purdue University, DLR, room 441-10 > West Lafayette, Indiana 47907 > > phone: +1-765-494-9396 > fax: +1-765-496-6026 > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From jbakosi at lanl.gov Wed Apr 17 11:26:47 2013 From: jbakosi at lanl.gov (Jozsef Bakosi) Date: Wed, 17 Apr 2013 10:26:47 -0600 Subject: [petsc-users] [MEF-QUAR] Re: Any changes in ML usage between 3.1-p8 -> 3.3-p6? In-Reply-To: <20130405143415.GH17937@karman> References: <20130405143415.GH17937@karman> Message-ID: <20130417162647.GC23247@karman> On 04.05.2013 08:34, Jozsef Bakosi wrote: > Hi folks, > > In switching from 3.1-p8 to 3.3-p6, keeping the same ML ml-6.2.tar.gz, I get > indefinite preconditioner with the newer PETSc version. Has there been anything > substantial changed around how PCs are handled, e.g. in the defaults? > > I know this request is pretty general, I would just like to know where to start > looking, where changes in PETSc might be clobbering the (supposedly same) > behavior of ML. > Alright, here is a little more information about what we see. Running the same setup/solve using ML (using the same ML and application source code) and switching from PETSc 3.1-p8 to 3.3-p6 appears to work differently, in some cases, resulting in divergence compared to the old version. I attach the output from KSPView() called after KSPSetup() for the 3.1-p8 (old.out) and for the 3.3-p6 (new.out), both running on 4 MPI ranks. A diff reveals some notable differences: * using (PRECONDITIONED -> NONE) norm type for convergence test * (using -> not using) I-node routines * tolerance for zero pivot (1e-12 -> 2.22045e-14) for PPE_mg_levels_[12]_sub_ (stayed the same for PPE_mg_coarse_redundant_) So we are wondering what might have changed in the PETSc defaults around how PCs, in particular ML, is used. 
Thanks, and please let me know if I can give you more information, Thanks, Jozsef -------------- next part -------------- KSP Object:(PPE_) type: cg maximum iterations=10000 tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using nonzero initial guess using UNPRECONDITIONED norm type for convergence test PC Object:(PPE_) type: ml MG: type is MULTIPLICATIVE, levels=3 cycles=v Cycles per PCApply=1 Coarse grid solver -- level 0 presmooths=1 postsmooths=1 ----- KSP Object:(PPE_mg_coarse_) type: preonly maximum iterations=1, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using PRECONDITIONED norm type for convergence test PC Object:(PPE_mg_coarse_) type: redundant Redundant preconditioner: First (color=0) of 4 PCs follows KSP Object:(PPE_mg_coarse_redundant_) type: preonly maximum iterations=10000, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using PRECONDITIONED norm type for convergence test PC Object:(PPE_mg_coarse_redundant_) type: lu LU: out-of-place factorization tolerance for zero pivot 2.22045e-16 matrix ordering: nd factor fill ratio given 5, needed 3.80946 Factored matrix follows: Matrix Object: type=seqaij, rows=338, cols=338 package used to perform factorization: petsc total: nonzeros=49302, allocated nonzeros=49302 using I-node routines: found 273 nodes, limit used is 5 linear system matrix = precond matrix: Matrix Object: type=seqaij, rows=338, cols=338 total: nonzeros=12942, allocated nonzeros=26026 not using I-node routines KSP Object:(PPE_mg_coarse_redundant_) type: preonly maximum iterations=10000, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using PRECONDITIONED norm type for convergence test PC Object:(PPE_mg_coarse_redundant_) type: lu LU: out-of-place factorization tolerance for zero pivot 2.22045e-16 matrix ordering: nd factor fill ratio given 5, needed 3.80946 Factored matrix follows: Matrix Object: type=seqaij, rows=338, cols=338 package used to perform factorization: petsc total: nonzeros=49302, allocated nonzeros=49302 using I-node routines: found 273 nodes, limit used is 5 linear system matrix = precond matrix: Matrix Object: type=seqaij, rows=338, cols=338 total: nonzeros=12942, allocated nonzeros=26026 not using I-node routines KSP Object:(PPE_mg_coarse_redundant_) type: preonly maximum iterations=10000, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using PRECONDITIONED norm type for convergence test PC Object:(PPE_mg_coarse_redundant_) type: lu LU: out-of-place factorization tolerance for zero pivot 2.22045e-16 matrix ordering: nd factor fill ratio given 5, needed 3.80946 Factored matrix follows: Matrix Object: type=seqaij, rows=338, cols=338 package used to perform factorization: petsc total: nonzeros=49302, allocated nonzeros=49302 using I-node routines: found 273 nodes, limit used is 5 linear system matrix = precond matrix: Matrix Object: type=seqaij, rows=338, cols=338 total: nonzeros=12942, allocated nonzeros=26026 not using I-node routines linear system matrix = precond matrix: Matrix Object: type=mpiaij, rows=338, cols=338 total: nonzeros=12942, allocated nonzeros=12942 not using I-node (on process 0) routines Down solver (pre-smoother) on level 1 smooths=1 -------------------- KSP Object:(PPE_mg_levels_1_) type: richardson Richardson: damping factor=0.9 maximum iterations=1 tolerances: 
relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using nonzero initial guess using NO norm type for convergence test PC Object:(PPE_mg_levels_1_) type: bjacobi block Jacobi: number of blocks = 4 Local solve is same for all blocks, in the following KSP and PC objects: KSP Object:(PPE_mg_levels_1_sub_) type: preonly maximum iterations=10000, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using NO norm type for convergence test PC Object:(PPE_mg_levels_1_sub_) type: icc 0 levels of fill tolerance for zero pivot 1e-12 using Manteuffel shift matrix ordering: natural linear system matrix = precond matrix: Matrix Object: type=seqaij, rows=2265, cols=2265 total: nonzeros=60131, allocated nonzeros=60131 not using I-node routines linear system matrix = precond matrix: Matrix Object: type=mpiaij, rows=9124, cols=9124 total: nonzeros=267508, allocated nonzeros=267508 not using I-node (on process 0) routines Up solver (post-smoother) same as down solver (pre-smoother) Down solver (pre-smoother) on level 2 smooths=1 -------------------- KSP Object:(PPE_mg_levels_2_) type: richardson Richardson: damping factor=0.9 maximum iterations=1 tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using nonzero initial guess using NO norm type for convergence test PC Object:(PPE_mg_levels_2_) type: bjacobi block Jacobi: number of blocks = 4 Local solve is same for all blocks, in the following KSP and PC objects: KSP Object:(PPE_mg_levels_2_sub_) type: preonly maximum iterations=10000, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using NO norm type for convergence test PC Object:(PPE_mg_levels_2_sub_) type: icc 0 levels of fill tolerance for zero pivot 1e-12 using Manteuffel shift matrix ordering: natural linear system matrix = precond matrix: Matrix Object: type=seqaij, rows=59045, cols=59045 total: nonzeros=1504413, allocated nonzeros=1594215 not using I-node routines linear system matrix = precond matrix: Matrix Object: type=mpiaij, rows=236600, cols=236600 total: nonzeros=6183334, allocated nonzeros=12776400 not using I-node (on process 0) routines Up solver (post-smoother) same as down solver (pre-smoother) linear system matrix = precond matrix: Matrix Object: type=mpiaij, rows=236600, cols=236600 total: nonzeros=6183334, allocated nonzeros=12776400 not using I-node (on process 0) routines -------------- next part -------------- KSP Object:(PPE_) 4 MPI processes type: cg maximum iterations=10000 tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using nonzero initial guess using UNPRECONDITIONED norm type for convergence test PC Object:(PPE_) 4 MPI processes type: ml MG: type is MULTIPLICATIVE, levels=3 cycles=v Cycles per PCApply=1 Using Galerkin computed coarse grid matrices Coarse grid solver -- level ------------------------------- KSP Object: (PPE_mg_coarse_) 4 MPI processes type: preonly maximum iterations=1, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using NONE norm type for convergence test PC Object: (PPE_mg_coarse_) 4 MPI processes type: redundant Redundant preconditioner: First (color=0) of 4 PCs follows KSP Object: (PPE_mg_coarse_redundant_) 1 MPI processes type: preonly maximum iterations=10000, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using NONE norm type for convergence test PC 
Object: (PPE_mg_coarse_redundant_) 1 MPI processes type: lu LU: out-of-place factorization tolerance for zero pivot 2.22045e-16 matrix ordering: nd factor fill ratio given 5, needed 3.80946 Factored matrix follows: Matrix Object: 1 MPI processes type: seqaij rows=338, cols=338 package used to perform factorization: petsc total: nonzeros=49302, allocated nonzeros=49302 total number of mallocs used during MatSetValues calls =0 not using I-node routines linear system matrix = precond matrix: Matrix Object: 1 MPI processes type: seqaij rows=338, cols=338 total: nonzeros=12942, allocated nonzeros=26026 total number of mallocs used during MatSetValues calls =0 not using I-node routines linear system matrix = precond matrix: Matrix Object: 4 MPI processes type: mpiaij rows=338, cols=338 total: nonzeros=12942, allocated nonzeros=12942 total number of mallocs used during MatSetValues calls =0 not using I-node (on process 0) routines Down solver (pre-smoother) on level 1 ------------------------------- KSP Object: (PPE_mg_levels_1_) 4 MPI processes type: richardson Richardson: damping factor=0.9 maximum iterations=1, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using NONE norm type for convergence test PC Object: (PPE_mg_levels_1_) 4 MPI processes type: bjacobi block Jacobi: number of blocks = 4 Local solve is same for all blocks, in the following KSP and PC objects: KSP Object: (PPE_mg_levels_1_sub_) 1 MPI processes type: preonly maximum iterations=10000, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using NONE norm type for convergence test PC Object: (PPE_mg_levels_1_sub_) 1 MPI processes type: icc 0 levels of fill tolerance for zero pivot 2.22045e-14 using Manteuffel shift matrix ordering: natural linear system matrix = precond matrix: Matrix Object: 1 MPI processes type: seqaij rows=2265, cols=2265 total: nonzeros=60131, allocated nonzeros=60131 total number of mallocs used during MatSetValues calls =0 not using I-node routines linear system matrix = precond matrix: Matrix Object: 4 MPI processes type: mpiaij rows=9124, cols=9124 total: nonzeros=267508, allocated nonzeros=267508 total number of mallocs used during MatSetValues calls =0 not using I-node (on process 0) routines Up solver (post-smoother) on level 1 ------------------------------- KSP Object: (PPE_mg_levels_1_) 4 MPI processes type: richardson Richardson: damping factor=0.9 maximum iterations=1 tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using nonzero initial guess using NONE norm type for convergence test PC Object: (PPE_mg_levels_1_) 4 MPI processes type: bjacobi block Jacobi: number of blocks = 4 Local solve is same for all blocks, in the following KSP and PC objects: KSP Object: (PPE_mg_levels_1_sub_) 1 MPI processes type: preonly maximum iterations=10000, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using NONE norm type for convergence test PC Object: (PPE_mg_levels_1_sub_) 1 MPI processes type: icc 0 levels of fill tolerance for zero pivot 2.22045e-14 using Manteuffel shift matrix ordering: natural linear system matrix = precond matrix: Matrix Object: 1 MPI processes type: seqaij rows=2265, cols=2265 total: nonzeros=60131, allocated nonzeros=60131 total number of mallocs used during MatSetValues calls =0 not using I-node routines linear system matrix = precond matrix: Matrix Object: 4 MPI processes type: mpiaij 
rows=9124, cols=9124 total: nonzeros=267508, allocated nonzeros=267508 total number of mallocs used during MatSetValues calls =0 not using I-node (on process 0) routines Down solver (pre-smoother) on level 2 ------------------------------- KSP Object: (PPE_mg_levels_2_) 4 MPI processes type: richardson Richardson: damping factor=0.9 maximum iterations=1, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using NONE norm type for convergence test PC Object: (PPE_mg_levels_2_) 4 MPI processes type: bjacobi block Jacobi: number of blocks = 4 Local solve is same for all blocks, in the following KSP and PC objects: KSP Object: (PPE_mg_levels_2_sub_) 1 MPI processes type: preonly maximum iterations=10000, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using NONE norm type for convergence test PC Object: (PPE_mg_levels_2_sub_) 1 MPI processes type: icc 0 levels of fill tolerance for zero pivot 2.22045e-14 using Manteuffel shift matrix ordering: natural linear system matrix = precond matrix: Matrix Object: 1 MPI processes type: seqaij rows=59045, cols=59045 total: nonzeros=1504413, allocated nonzeros=1594215 total number of mallocs used during MatSetValues calls =0 not using I-node routines linear system matrix = precond matrix: Matrix Object: 4 MPI processes type: mpiaij rows=236600, cols=236600 total: nonzeros=6183334, allocated nonzeros=12776400 total number of mallocs used during MatSetValues calls =0 not using I-node (on process 0) routines Up solver (post-smoother) on level 2 ------------------------------- KSP Object: (PPE_mg_levels_2_) 4 MPI processes type: richardson Richardson: damping factor=0.9 maximum iterations=1 tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using nonzero initial guess using NONE norm type for convergence test PC Object: (PPE_mg_levels_2_) 4 MPI processes type: bjacobi block Jacobi: number of blocks = 4 Local solve is same for all blocks, in the following KSP and PC objects: KSP Object: (PPE_mg_levels_2_sub_) 1 MPI processes type: preonly maximum iterations=10000, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using NONE norm type for convergence test PC Object: (PPE_mg_levels_2_sub_) 1 MPI processes type: icc 0 levels of fill tolerance for zero pivot 2.22045e-14 using Manteuffel shift matrix ordering: natural linear system matrix = precond matrix: Matrix Object: 1 MPI processes type: seqaij rows=59045, cols=59045 total: nonzeros=1504413, allocated nonzeros=1594215 total number of mallocs used during MatSetValues calls =0 not using I-node routines linear system matrix = precond matrix: Matrix Object: 4 MPI processes type: mpiaij rows=236600, cols=236600 total: nonzeros=6183334, allocated nonzeros=12776400 total number of mallocs used during MatSetValues calls =0 not using I-node (on process 0) routines linear system matrix = precond matrix: Matrix Object: 4 MPI processes type: mpiaij rows=236600, cols=236600 total: nonzeros=6183334, allocated nonzeros=12776400 total number of mallocs used during MatSetValues calls =0 not using I-node (on process 0) routines From Shuangshuang.Jin at pnnl.gov Wed Apr 17 13:53:42 2013 From: Shuangshuang.Jin at pnnl.gov (Jin, Shuangshuang) Date: Wed, 17 Apr 2013 11:53:42 -0700 Subject: [petsc-users] ksp for AX=B system In-Reply-To: <58CD76F6-ECC5-4DE5-A93F-1075179054B9@mcs.anl.gov> References: 
<6778DE83AB681D49BFC2CD850610FEB1018FC933E31C@EMAIL04.pnl.gov> <42A940ED-9634-4AAC-A8FB-2C6EAD3EEF6B@mcs.anl.gov> <6778DE83AB681D49BFC2CD850610FEB1018FC933E32C@EMAIL04.pnl.gov> <58CD76F6-ECC5-4DE5-A93F-1075179054B9@mcs.anl.gov> Message-ID: <6778DE83AB681D49BFC2CD850610FEB1018FC933E40B@EMAIL04.pnl.gov> Hello, Barry, I got a new question here. I have integrated the AX=B solver into my code using the example from ex125.c. The only major difference between my code and the example is the input matrix. In the example ex125.c, the A matrix is created by loading the data from a binary file. In my code, I passed in the computed A matrix and the B matrix as arguments to a user defined solvingAXB() function: static PetscErrorCode solvingAXB(Mat A, Mat B, PetscInt n, PetscInt nrhs, Mat X, const int me); I noticed that A has to be stored as an MATAIJ matrix and the B as a MATDENSE matrix. So I converted their storage types inside the solvingAXB as below (They were MPIAIJ type before): ierr = MatConvert(A, MATAIJ, MAT_REUSE_MATRIX, &A); // A has to be MATAIJ for built-in PETSc LU! MatView(A, PETSC_VIEWER_STDOUT_WORLD); ierr = MatConvert(B, MATDENSE, MAT_REUSE_MATRIX, &B); // B has to be a SeqDense matrix! MatView(B, PETSC_VIEWER_STDOUT_WORLD); With this implementation, I can run the code with expected results when 1 processor is used. However, when I use 2 processors, I get a run time error which I guess is still somehow related to the matrix format but cannot figure out how to fix it. Could you please take a look at the following error message? [d3m956 at olympus ss08]$ mpiexec -n 2 dynSim -i 3g9b.txt Matrix Object: 1 MPI processes type: mpiaij row 0: (0, 0 - 16.4474 i) (3, 0 + 17.3611 i) row 1: (1, 0 - 8.34725 i) (7, 0 + 16 i) row 2: (2, 0 - 5.51572 i) (5, 0 + 17.0648 i) row 3: (0, 0 + 17.3611 i) (3, 0 - 39.9954 i) (4, 0 + 10.8696 i) (8, 0 + 11.7647 i) row 4: (3, 0 + 10.8696 i) (4, 0.934 - 17.0633 i) (5, 0 + 5.88235 i) row 5: (2, 0 + 17.0648 i) (4, 0 + 5.88235 i) (5, 0 - 32.8678 i) (6, 0 + 9.92063 i) row 6: (5, 0 + 9.92063 i) (6, 1.03854 - 24.173 i) (7, 0 + 13.8889 i) row 7: (1, 0 + 16 i) (6, 0 + 13.8889 i) (7, 0 - 36.1001 i) (8, 0 + 6.21118 i) row 8: (3, 0 + 11.7647 i) (7, 0 + 6.21118 i) (8, 1.33901 - 18.5115 i) Matrix Object: 1 MPI processes type: mpidense 0.0000000000000000e+00 + -1.6447368421052630e+01i 0.0000000000000000e+00 + 0.0000000000000000e+00i 0.0000000000000000e+00 + 0.0000000000000000e+00i 0.0000000000000000e+00 + 0.0000000000000000e+00i 0.0000000000000000e+00 + -8.3472454090150254e+00i 0.0000000000000000e+00 + 0.0000000000000000e+00i 0.0000000000000000e+00 + 0.0000000000000000e+00i 0.0000000000000000e+00 + 0.0000000000000000e+00i 0.0000000000000000e+00 + -5.5157198014340878e+00i 0.0000000000000000e+00 + 0.0000000000000000e+00i 0.0000000000000000e+00 + 0.0000000000000000e+00i 0.0000000000000000e+00 + 0.0000000000000000e+00i 0.0000000000000000e+00 + 0.0000000000000000e+00i 0.0000000000000000e+00 + 0.0000000000000000e+00i 0.0000000000000000e+00 + 0.0000000000000000e+00i 0.0000000000000000e+00 + 0.0000000000000000e+00i 0.0000000000000000e+00 + 0.0000000000000000e+00i 0.0000000000000000e+00 + 0.0000000000000000e+00i 0.0000000000000000e+00 + 0.0000000000000000e+00i 0.0000000000000000e+00 + 0.0000000000000000e+00i 0.0000000000000000e+00 + 0.0000000000000000e+00i 0.0000000000000000e+00 + 0.0000000000000000e+00i 0.0000000000000000e+00 + 0.0000000000000000e+00i 0.0000000000000000e+00 + 0.0000000000000000e+00i 0.0000000000000000e+00 + 0.0000000000000000e+00i 0.0000000000000000e+00 + 
0.0000000000000000e+00i 0.0000000000000000e+00 + 0.0000000000000000e+00i PETSC LU: [0]PETSC ERROR: --------------------- Error Message ------------------------------------ [0]PETSC ERROR: No support for this operation for this object type! [0]PETSC ERROR: Matrix format mpiaij does not have a built-in PETSc LU! [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: Petsc Development HG revision: 6e0ddc6e9b6d8a9d8eb4c0ede0105827a6b58dfb HG Date: Mon Mar 11 22:54:30 2013 -0500 [0]PETSC ERROR: See docs/changes/index.html for recent updates. [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. [0]PETSC ERROR: See docs/index.html for manual pages. [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: dynSim on a arch-complex named olympus.local by d3m956 Wed Apr 17 11:38:12 2013 [0]PETSC ERROR: Libraries linked from /pic/projects/ds/petsc-dev/arch-complex/lib [0]PETSC ERROR: Configure run at Tue Mar 12 14:32:37 2013 [0]PETSC ERROR: Configure options --with-scalar-type=complex --with-clanguage=C++ PETSC_ARCH=arch-complex --with-fortran-kernels=generic [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: MatGetFactor() line 3949 in src/mat/interface/matrix.c [0]PETSC ERROR: solvingAXB() line 880 in "unknowndirectory/"reducedYBus.C End! [1]PETSC ERROR: ------------------------------------------------------------------------ [1]PETSC ERROR: Caught signal number 15 Terminate: Somet process (or the batch system) has told this process to end [1]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger [1]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind[1]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors [1]PETSC ERROR: likely location of problem given in stack below [1]PETSC ERROR: --------------------- Stack Frames ------------------------------------ [1]PETSC ERROR: Note: The EXACT line numbers in the stack are not available, [1]PETSC ERROR: INSTEAD the line number of the start of the function [1]PETSC ERROR: is given. [1]PETSC ERROR: [1] PetscSleep line 35 src/sys/utils/psleep.c [1]PETSC ERROR: [1] PetscTraceBackErrorHandler line 172 src/sys/error/errtrace.c [1]PETSC ERROR: [1] PetscError line 361 src/sys/error/err.c [1]PETSC ERROR: [1] MatGetFactor line 3932 src/mat/interface/matrix.c [1]PETSC ERROR: --------------------- Error Message ------------------------------------ [1]PETSC ERROR: Signal received! [1]PETSC ERROR: ------------------------------------------------------------------------ [1]PETSC ERROR: Petsc Development HG revision: 6e0ddc6e9b6d8a9d8eb4c0ede0105827a6b58dfb HG Date: Mon Mar 11 22:54:30 2013 -0500 [1]PETSC ERROR: See docs/changes/index.html for recent updates. [1]PETSC ERROR: See docs/faq.html for hints about trouble shooting. [1]PETSC ERROR: See docs/index.html for manual pages. 
[1]PETSC ERROR: ------------------------------------------------------------------------ [1]PETSC ERROR: dynSim on a arch-complex named olympus.local by d3m956 Wed Apr 17 11:38:12 2013 [1]PETSC ERROR: Libraries linked from /pic/projects/ds/petsc-dev/arch-complex/lib [1]PETSC ERROR: Configure run at Tue Mar 12 14:32:37 2013 [1]PETSC ERROR: Configure options --with-scalar-type=complex --with-clanguage=C++ PETSC_ARCH=arch-complex --with-fortran-kernels=generic [1]PETSC ERROR: ------------------------------------------------------------------------ [1]PETSC ERROR: User provided function() line 0 in unknown directory unknown file -------------------------------------------------------------------------- Thanks, Shuangshuang -----Original Message----- From: petsc-users-bounces at mcs.anl.gov [mailto:petsc-users-bounces at mcs.anl.gov] On Behalf Of Barry Smith Sent: Tuesday, April 16, 2013 5:50 PM To: PETSc users list Subject: Re: [petsc-users] ksp for AX=B system Shuangshuang This is what I was expecting, thanks for the confirmation. For these size problems you definitely want to use a direct solver (often parallel but not for smaller matrices) and solve multiple right hand sides. This means you actually will not use the KSP solver that is standard for most PETSc work, instead you will work directly with the MatGetFactor(), MatGetOrdering(), MatLUFactorSymbolic(), MatLUFactorNumeric(), MatMatSolve() paradigm where the A matrix is stored as an MATAIJ matrix and the B (multiple right hand side) as a MATDENSE matrix. An example that displays this paradigm is src/mat/examples/tests/ex125.c Once you have something running of interest to you we would like to work with you to improve the performance, we have some "tricks" we haven't yet implemented to make these solvers much faster than they will be by default. Barry On Apr 16, 2013, at 7:38 PM, "Jin, Shuangshuang" wrote: > Hi, Barry, thanks for your prompt reply. > > We have various size problems, from (n=9, m=3), (n=1081, m = 288), to (n=16072, m=2361) or even larger ultimately. > > Usually the dimension of the square matrix A, n is much larger than the column dimension of B, m. > > As you said, I'm using the loop to deal with the small (n=9, m=3) case. However, for bigger problem, I do hope there's a better approach. > > This is a power flow problem. When we parallelized it in OpenMP previously, we just parallelized the outside loop, and used a direct solver to solve it. > > We are now switching to MPI and like to use PETSc ksp solver to solve it in parallel. However, I don't know what's the best ksp solver I should use here? Direct solver or try a preconditioner? > > I appreciate very much for your recommendations. > > Thanks, > Shuangshuang > > From jbakosi at lanl.gov Wed Apr 17 14:00:51 2013 From: jbakosi at lanl.gov (Jozsef Bakosi) Date: Wed, 17 Apr 2013 13:00:51 -0600 Subject: [petsc-users] Any changes in ML usage between 3.1-p8 -> 3.3-p6? In-Reply-To: <20130417162647.GC23247@karman> References: <20130405143415.GH17937@karman> <20130417162647.GC23247@karman> Message-ID: <20130417190051.GA2495@karman> On 04.17.2013 10:26, Jozsef Bakosi wrote: > On 04.05.2013 08:34, Jozsef Bakosi wrote: > > Hi folks, > > > > In switching from 3.1-p8 to 3.3-p6, keeping the same ML ml-6.2.tar.gz, I get > > indefinite preconditioner with the newer PETSc version. Has there been anything > > substantial changed around how PCs are handled, e.g. in the defaults? 
> > > > I know this request is pretty general, I would just like to know where to start > > looking, where changes in PETSc might be clobbering the (supposedly same) > > behavior of ML. > > > > Alright, here is a little more information about what we see. Running the same > setup/solve using ML (using the same ML and application source code) and > switching from PETSc 3.1-p8 to 3.3-p6 appears to work differently, in some > cases, resulting in divergence compared to the old version. > > I attach the output from KSPView() called after KSPSetup() for the 3.1-p8 > (old.out) and for the 3.3-p6 (new.out), both running on 4 MPI ranks. > > A diff reveals some notable differences: > > * using (PRECONDITIONED -> NONE) norm type for convergence test > > * (using -> not using) I-node routines > > * tolerance for zero pivot (1e-12 -> 2.22045e-14) for PPE_mg_levels_[12]_sub_ > (stayed the same for PPE_mg_coarse_redundant_) > > So we are wondering what might have changed in the PETSc defaults around how > PCs, in particular ML, is used. > > Thanks, and please let me know if I can give you more information, > > Thanks, > Jozsef Another piece of info: The error we get is indefinite preconditioner. Interestingly the same setup produces an OK preconditioner using 1,2,3, and 5 ranks -- indefinite only on 4. ps: Please CC me as I'm not on the email list. Thanks, J From knepley at gmail.com Wed Apr 17 14:05:21 2013 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 17 Apr 2013 15:05:21 -0400 Subject: [petsc-users] [MEF-QUAR] Re: Any changes in ML usage between 3.1-p8 -> 3.3-p6? In-Reply-To: <20130417162647.GC23247@karman> References: <20130405143415.GH17937@karman> <20130417162647.GC23247@karman> Message-ID: On Wed, Apr 17, 2013 at 12:26 PM, Jozsef Bakosi wrote: > On 04.05.2013 08:34, Jozsef Bakosi wrote: > > Hi folks, > > > > In switching from 3.1-p8 to 3.3-p6, keeping the same ML ml-6.2.tar.gz, I > get > > indefinite preconditioner with the newer PETSc version. Has there been > anything > > substantial changed around how PCs are handled, e.g. in the defaults? > > > > I know this request is pretty general, I would just like to know where > to start > > looking, where changes in PETSc might be clobbering the (supposedly same) > > behavior of ML. > > > > Alright, here is a little more information about what we see. Running the > same > setup/solve using ML (using the same ML and application source code) and > switching from PETSc 3.1-p8 to 3.3-p6 appears to work differently, in some > cases, resulting in divergence compared to the old version. > > I attach the output from KSPView() called after KSPSetup() for the 3.1-p8 > (old.out) and for the 3.3-p6 (new.out), both running on 4 MPI ranks. > > A diff reveals some notable differences: > > * using (PRECONDITIONED -> NONE) norm type for convergence test > > * (using -> not using) I-node routines > > * tolerance for zero pivot (1e-12 -> 2.22045e-14) for > PPE_mg_levels_[12]_sub_ > (stayed the same for PPE_mg_coarse_redundant_) > > So we are wondering what might have changed in the PETSc defaults around > how > PCs, in particular ML, is used. > > Thanks, and please let me know if I can give you more information, > 1) Please do not give error reports without the full error message 2) If you get "Indefinite PC" (I am guessing from using CG) it is because the preconditioner really is indefinite (or possible non-symmetric). We improved the checking for this in one of those releases. AMG does not guarantee an SPD preconditioner so why persist in trying to use CG? 
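A sketch of acting on that advice, not code from this exchange: switch the outer Krylov method to one that does not assume an SPD preconditioner and leave the last word to the options database (the helper name is made up; "ksp" is assumed to be the application's PPE_ solver, configured elsewhere, before KSPSolve()).

#include <petscksp.h>

PetscErrorCode UseNonSymmetricKrylov(KSP ksp)
{
  PetscErrorCode ierr;

  PetscFunctionBegin;
  ierr = KSPSetType(ksp, KSPFGMRES);CHKERRQ(ierr); /* no SPD requirement, unlike CG */
  ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);     /* still overridable at run time,
                                                      e.g. -PPE_ksp_type gmres */
  PetscFunctionReturn(0);
}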
Matt > Thanks, > Jozsef > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From mpovolot at purdue.edu Wed Apr 17 11:35:47 2013 From: mpovolot at purdue.edu (Michael Povolotskyi) Date: Wed, 17 Apr 2013 12:35:47 -0400 Subject: [petsc-users] [MEF-QUAR] MatGetArray question Message-ID: <516ECF63.40902@purdue.edu> Dear Petsc developers, Does the function MatGetArray allocates additional memory, or it simply returns a pointer to the existing memory? I'm talking about an MPI dense matrix. Thank you. Michael. -- Michael Povolotskyi, PhD Research Assistant Professor Network for Computational Nanotechnology 207 S Martin Jischke Drive Purdue University, DLR, room 441-10 West Lafayette, Indiana 47907 phone: +1-765-494-9396 fax: +1-765-496-6026 From mark.adams at columbia.edu Wed Apr 17 14:25:04 2013 From: mark.adams at columbia.edu (Mark F. Adams) Date: Wed, 17 Apr 2013 15:25:04 -0400 Subject: [petsc-users] [MEF-QUAR] Re: Any changes in ML usage between 3.1-p8 -> 3.3-p6? In-Reply-To: References: <20130405143415.GH17937@karman> <20130417162647.GC23247@karman> Message-ID: > > 2) If you get "Indefinite PC" (I am guessing from using CG) it is because the preconditioner > really is indefinite (or possible non-symmetric). We improved the checking for this in one > of those releases. > > AMG does not guarantee an SPD preconditioner so why persist in trying to use CG? > AMG is positive if everything is working correctly. Are these problems only semidefinite? Singular systems can give erratic behavior. From mpovolot at purdue.edu Wed Apr 17 14:25:09 2013 From: mpovolot at purdue.edu (Michael Povolotskyi) Date: Wed, 17 Apr 2013 15:25:09 -0400 Subject: [petsc-users] MatDenseGetArray question In-Reply-To: References: <516EDA78.9020304@purdue.edu> Message-ID: <516EF715.2080700@purdue.edu> On 04/17/2013 01:37 PM, Matthew Knepley wrote: > On Wed, Apr 17, 2013 at 1:23 PM, Michael Povolotskyi > > wrote: > > Dear Petsc developers, > does the function MatDenseGetArray allocate additional memory or > just returns a pointer to existing memory ? > > > Returns a pointer. > > Matt Thank you, one more detail about MatDenseGetArray is this statement correct: if a dense matrix is serial, the data is stored in memory in a column-wise order, but if a dense matrix is distributed the data is stored in a row-wise order? thank you, Michael. -------------- next part -------------- An HTML attachment was scrubbed... URL: From balay at mcs.anl.gov Wed Apr 17 14:27:36 2013 From: balay at mcs.anl.gov (Satish Balay) Date: Wed, 17 Apr 2013 14:27:36 -0500 (CDT) Subject: [petsc-users] MatDenseGetArray question In-Reply-To: <516EF715.2080700@purdue.edu> References: <516EDA78.9020304@purdue.edu> <516EF715.2080700@purdue.edu> Message-ID: On Wed, 17 Apr 2013, Michael Povolotskyi wrote: > On 04/17/2013 01:37 PM, Matthew Knepley wrote: > > On Wed, Apr 17, 2013 at 1:23 PM, Michael Povolotskyi > > wrote: > > > > Dear Petsc developers, > > does the function MatDenseGetArray allocate additional memory or > > just returns a pointer to existing memory ? > > > > > > Returns a pointer. > > > > Matt > Thank you, one more detail about MatDenseGetArray > is this statement correct: > if a dense matrix is serial, the data is stored in memory in a column-wise > order, but if a dense matrix is distributed the data is stored in a row-wise > order? 
nope - the rows are distributed across procs - but within the processor - the data is stored column-wise Satish From jedbrown at mcs.anl.gov Wed Apr 17 14:32:21 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Wed, 17 Apr 2013 14:32:21 -0500 Subject: [petsc-users] MatDenseGetArray question In-Reply-To: <516EF715.2080700@purdue.edu> References: <516EDA78.9020304@purdue.edu> <516EF715.2080700@purdue.edu> Message-ID: <87li8heywq.fsf@mcs.anl.gov> Michael Povolotskyi writes: > Thank you, one more detail about MatDenseGetArray > is this statement correct: > if a dense matrix is serial, the data is stored in memory in a > column-wise order, Yes. > but if a dense matrix is distributed the data is stored in a row-wise > order? No, it's still column-major storage, but each process only owns a range of rows. So the distributed memory partition is by row. If you have large dense matrices, then chances are that you should switch from MATDENSE to MATELEMENTAL anyway. From jbakosi at lanl.gov Wed Apr 17 15:26:41 2013 From: jbakosi at lanl.gov (Jozsef Bakosi) Date: Wed, 17 Apr 2013 14:26:41 -0600 Subject: [petsc-users] Any changes in ML usage between 3.1-p8 -> 3.3-p6? In-Reply-To: <20130417190051.GA2495@karman> References: <20130405143415.GH17937@karman> <20130417162647.GC23247@karman> <20130417190051.GA2495@karman> Message-ID: <20130417202640.GC2495@karman> > Mark F. Adams mark.adams at columbia.edu > Wed Apr 17 14:25:04 CDT 2013 > > 2) If you get "Indefinite PC" (I am guessing from using CG) it is because the > preconditioner > really is indefinite (or possible non-symmetric). We improved the checking > for this in one > of those releases. > > AMG does not guarantee an SPD preconditioner so why persist in trying to use > CG? > > > AMG is positive if everything is working correctly. > > Are these problems only semidefinite? Singular systems can give erratic > behavior. It is a Laplace operator from Galerkin finite elements. And the PC is fine on ranks 1, 2, 3, and 5 -- indefinite only on 4. I think we can safely say that the same PC should be positive on 4 as well. Can you guys please CC jbakosi at lanl.gov? Thanks, J From balay at mcs.anl.gov Wed Apr 17 15:32:14 2013 From: balay at mcs.anl.gov (Satish Balay) Date: Wed, 17 Apr 2013 15:32:14 -0500 (CDT) Subject: [petsc-users] Any changes in ML usage between 3.1-p8 -> 3.3-p6? In-Reply-To: <20130417202640.GC2495@karman> References: <20130405143415.GH17937@karman> <20130417162647.GC23247@karman> <20130417190051.GA2495@karman> <20130417202640.GC2495@karman> Message-ID: On Wed, 17 Apr 2013, Jozsef Bakosi wrote: > Can you guys please CC jbakosi at lanl.gov? Thanks, J Mailing lists are setup that way. The default is: subscribe to participate, and reply-to: list. So cc:ing automatically doesn't work. I suspect you are subscribed [since you are able to post] - but disabled receiving messages in your subscription config. You can always use petsc-maint - and not participate in the petsc-users mailing list. Satish From knepley at gmail.com Wed Apr 17 15:38:57 2013 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 17 Apr 2013 16:38:57 -0400 Subject: [petsc-users] Any changes in ML usage between 3.1-p8 -> 3.3-p6? In-Reply-To: <20130417202640.GC2495@karman> References: <20130405143415.GH17937@karman> <20130417162647.GC23247@karman> <20130417190051.GA2495@karman> <20130417202640.GC2495@karman> Message-ID: On Wed, Apr 17, 2013 at 4:26 PM, Jozsef Bakosi wrote: > > > Mark F. 
Adams mark.adams at columbia.edu > > Wed Apr 17 14:25:04 CDT 2013 > > > > 2) If you get "Indefinite PC" (I am guessing from using CG) it is > because the > > preconditioner > > really is indefinite (or possible non-symmetric). We improved the > checking > > for this in one > > of those releases. > > > > AMG does not guarantee an SPD preconditioner so why persist in trying to > use > > CG? > > > > > > AMG is positive if everything is working correctly. > > > > Are these problems only semidefinite? Singular systems can give erratic > > behavior. > > It is a Laplace operator from Galerkin finite elements. And the PC is fine > on > ranks 1, 2, 3, and 5 -- indefinite only on 4. I think we can safely say > that the > same PC should be positive on 4 as well. > Why is it safe? Because it sounds plausible? Mathematics is replete with things that sound plausible and are false. Are there proofs that suggest this? Is there computational evidence? Why would I believe you? Matt > Can you guys please CC jbakosi at lanl.gov? Thanks, J > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From mark.adams at columbia.edu Wed Apr 17 15:45:34 2013 From: mark.adams at columbia.edu (Mark F. Adams) Date: Wed, 17 Apr 2013 16:45:34 -0400 Subject: [petsc-users] Any changes in ML usage between 3.1-p8 -> 3.3-p6? In-Reply-To: <20130417202640.GC2495@karman> References: <20130405143415.GH17937@karman> <20130417162647.GC23247@karman> <20130417190051.GA2495@karman> <20130417202640.GC2495@karman> Message-ID: On Apr 17, 2013, at 4:26 PM, Jozsef Bakosi wrote: > >> Mark F. Adams mark.adams at columbia.edu >> Wed Apr 17 14:25:04 CDT 2013 >> >> 2) If you get "Indefinite PC" (I am guessing from using CG) it is because the >> preconditioner >> really is indefinite (or possible non-symmetric). We improved the checking >> for this in one >> of those releases. >> >> AMG does not guarantee an SPD preconditioner so why persist in trying to use >> CG? >> >> >> AMG is positive if everything is working correctly. >> >> Are these problems only semidefinite? Singular systems can give erratic >> behavior. > > It is a Laplace operator from Galerkin finite elements. And the PC is fine on > ranks 1, 2, 3, and 5 -- indefinite only on 4. I think we can safely say that the > same PC should be positive on 4 as well. > > Can you guys please CC jbakosi at lanl.gov? Thanks, J > I assume that this is not a Neumann problem ? you can try -pc_type gamg and -pc_type hypre. And PETSc is testing for essentially < 0 which is a numerically finicky thing to do. These solvers are not bitwise identical wrt number of processors so getting different results for this test is not unreasonable. From bsmith at mcs.anl.gov Wed Apr 17 15:48:46 2013 From: bsmith at mcs.anl.gov (Barry Smith) Date: Wed, 17 Apr 2013 15:48:46 -0500 Subject: [petsc-users] Read PetscInt in binary file from Matlab In-Reply-To: References: Message-ID: On Apr 17, 2013, at 11:02 AM, Hui Zhang wrote: > Is there something similar to the Matlab function PetscBinaryRead for PetscInt? The binary file is obtained by PetscIntView. If you look at the source for PetscIntView() you will see that it merely writes the raw integers directly to the file (with no header information). 
MATLAB provides routines for reading raw binary files, if you look at the source code for PetscBinaryRead.m you will see it merely opens the binary file with fopen(filename,rw,'ieee-be')) and then reads the data with fread(). So you can cook up a simple MATLAB script to read in the integers from the file. Barry From bsmith at mcs.anl.gov Wed Apr 17 16:14:18 2013 From: bsmith at mcs.anl.gov (Barry Smith) Date: Wed, 17 Apr 2013 16:14:18 -0500 Subject: [petsc-users] [MEF-QUAR] MatGetArray question In-Reply-To: <516ECF63.40902@purdue.edu> References: <516ECF63.40902@purdue.edu> Message-ID: On Apr 17, 2013, at 11:35 AM, Michael Povolotskyi wrote: > Dear Petsc developers, > Does the function MatGetArray allocates additional memory, or it simply returns a pointer to the existing memory? > I'm talking about an MPI dense matrix. Return a pointer to the existing memory. Barry > > Thank you. > Michael. > > -- > Michael Povolotskyi, PhD > Research Assistant Professor > Network for Computational Nanotechnology > 207 S Martin Jischke Drive > Purdue University, DLR, room 441-10 > West Lafayette, Indiana 47907 > > phone: +1-765-494-9396 > fax: +1-765-496-6026 > From jedbrown at mcs.anl.gov Wed Apr 17 16:19:48 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Wed, 17 Apr 2013 16:19:48 -0500 Subject: [petsc-users] Mailing list reply-to munging (was Any changes in ML usage between 3.1-p8 -> 3.3-p6?) In-Reply-To: References: <20130405143415.GH17937@karman> <20130417162647.GC23247@karman> <20130417190051.GA2495@karman> <20130417202640.GC2495@karman> Message-ID: <8738uog8i3.fsf@mcs.anl.gov> Satish Balay writes: > On Wed, 17 Apr 2013, Jozsef Bakosi wrote: > >> Can you guys please CC jbakosi at lanl.gov? Thanks, J > > Mailing lists are setup that way. The default is: subscribe to > participate, and reply-to: list. So cc:ing automatically doesn't work. If everyone used mailers that did group replies correctly, then we would always preserve Cc's in list discussions and the list would not munge the Reply-to header. Then people could subscribe and turn off list mail, or they could filter all mail to the list that didn't directly Cc them. This is a great way to manage high-volume mailing lists. You can even allow anonymous posting to the mailing list, which is what the Git list and many other open source/technical lists do. This is ruined by munging Reply-to because many/most mailers drop the >From address in a group-reply when Reply-to is set. http://www.unicom.com/pw/reply-to-harmful.html http://woozle.org/~neale/papers/reply-to-still-harmful.html The problem is that an awful lot of mailers/users don't automatically do group replies to mailing list messages, causing the list Cc to be dropped. We have this problem with petsc-maint in that several emails per day are reminding people to keep petsc-maint Cc'd in the reply. Personally, I would rather turn off Reply-to munging and use a canned reply instructing users to resend their email to the list with all Cc's included (i.e., use "reply-all" when replying to the list). Almost all mailers can be configured to make this the default. I think this change would cause more people to ask questions on the mailing list where it becomes searchable than on petsc-maint where the reply helps only one person. 
Satish argued the other way when we discussed this a few years ago: http://lists.mcs.anl.gov/pipermail/petsc-dev/2010-March/002489.html From balay at mcs.anl.gov Wed Apr 17 16:27:27 2013 From: balay at mcs.anl.gov (Satish Balay) Date: Wed, 17 Apr 2013 16:27:27 -0500 (CDT) Subject: [petsc-users] Mailing list reply-to munging (was Any changes in ML usage between 3.1-p8 -> 3.3-p6?) In-Reply-To: <8738uog8i3.fsf@mcs.anl.gov> References: <20130405143415.GH17937@karman> <20130417162647.GC23247@karman> <20130417190051.GA2495@karman> <20130417202640.GC2495@karman> <8738uog8i3.fsf@mcs.anl.gov> Message-ID: On Wed, 17 Apr 2013, Jed Brown wrote: > Satish Balay writes: > > > On Wed, 17 Apr 2013, Jozsef Bakosi wrote: > > > >> Can you guys please CC jbakosi at lanl.gov? Thanks, J > > > > Mailing lists are setup that way. The default is: subscribe to > > participate, and reply-to: list. So cc:ing automatically doesn't work. > > If everyone used mailers that did group replies correctly, then we would > always preserve Cc's in list discussions and the list would not munge > the Reply-to header. Then people could subscribe and turn off list > mail, or they could filter all mail to the list that didn't directly Cc > them. This is a great way to manage high-volume mailing lists. You can > even allow anonymous posting to the mailing list, which is what the Git > list and many other open source/technical lists do. > > This is ruined by munging Reply-to because many/most mailers drop the > From address in a group-reply when Reply-to is set. > > http://www.unicom.com/pw/reply-to-harmful.html > > http://woozle.org/~neale/papers/reply-to-still-harmful.html > > The problem is that an awful lot of mailers/users don't automatically do > group replies to mailing list messages, causing the list Cc to be > dropped. We have this problem with petsc-maint in that several emails > per day are reminding people to keep petsc-maint Cc'd in the reply. > > Personally, I would rather turn off Reply-to munging and use a canned > reply instructing users to resend their email to the list with all Cc's > included (i.e., use "reply-all" when replying to the list). Almost all > mailers can be configured to make this the default. > > I think this change would cause more people to ask questions on the > mailing list where it becomes searchable than on petsc-maint where the > reply helps only one person. > > > Satish argued the other way when we discussed this a few years ago: > > http://lists.mcs.anl.gov/pipermail/petsc-dev/2010-March/002489.html Yes - and I still stand by that argument. Its best to otimize for 'majority usage' pattern. I know you deal with personal replies for petsc-maint stuff. But I set up my mailer to automatically set Reply-to:petsc-maint for all petsc-maint traffic [and modify it manually for the 1% usage case where thats not appropriate] And wrt anonymous posts [without subscribing] - that was a receipie for spam. [however good the spam filters are] - so you'll have to account for that crap aswell. We get plenty of that on petsc-maint - but now with petsc-users mailing list - that spam gets distributed to all list subscribers. 
Satish From bsmith at mcs.anl.gov Wed Apr 17 16:35:27 2013 From: bsmith at mcs.anl.gov (Barry Smith) Date: Wed, 17 Apr 2013 16:35:27 -0500 Subject: [petsc-users] maintaining application compatibility with 3.1 and 3.2 In-Reply-To: References: Message-ID: <3DCB1B18-F192-4488-97BF-5AD5FD9D26AE@mcs.anl.gov> Alexei, It is our intention that PETSc be easy enough for anyone to install that rather than making your application work with different versions one simply install the PETSc version one needs. In addition we recommend updating applications to work with the latest release within a couple of months after each release. If you ever have trouble with installs please send configure.log and make.log to petsc-maint at mcs.anl.gov Barry On Apr 17, 2013, at 8:30 AM, Alexei Matveev wrote: > > Hi, Everyone, > > I just upgraded a cluster box to Debian Wheezy that comes with PETSC 3.2 > and noted quite a few changes in the interface related to the DM/DA/DMDA. > I am still somewhat confused about the distinction of the three. > > My question is --- does anyone have experience maintaining a common > application source for 3.2 (e.g. Wheezy) and, say, 3.1 (e.g. Ubuntu 12.04)? > > So far I was able to do so for 2.3 (Lenny) and 3.1. However the changes > in 3.2 seem to be quite significant so that I decided to first ask for your > experience and advice. > > Do you maybe have any tips or pointers on how to maintain such a > compatibility? > > Alexei From bsmith at mcs.anl.gov Wed Apr 17 16:48:45 2013 From: bsmith at mcs.anl.gov (Barry Smith) Date: Wed, 17 Apr 2013 16:48:45 -0500 Subject: [petsc-users] ksp for AX=B system In-Reply-To: <6778DE83AB681D49BFC2CD850610FEB1018FC933E40B@EMAIL04.pnl.gov> References: <6778DE83AB681D49BFC2CD850610FEB1018FC933E31C@EMAIL04.pnl.gov> <42A940ED-9634-4AAC-A8FB-2C6EAD3EEF6B@mcs.anl.gov> <6778DE83AB681D49BFC2CD850610FEB1018FC933E32C@EMAIL04.pnl.gov> <58CD76F6-ECC5-4DE5-A93F-1075179054B9@mcs.anl.gov> <6778DE83AB681D49BFC2CD850610FEB1018FC933E40B@EMAIL04.pnl.gov> Message-ID: <5C4000FF-5FCA-45C5-9024-C9908527D61B@mcs.anl.gov> On Apr 17, 2013, at 1:53 PM, "Jin, Shuangshuang" wrote: > Hello, Barry, I got a new question here. > > I have integrated the AX=B solver into my code using the example from ex125.c. The only major difference between my code and the example is the input matrix. > > In the example ex125.c, the A matrix is created by loading the data from a binary file. In my code, I passed in the computed A matrix and the B matrix as arguments to a user defined solvingAXB() function: > > static PetscErrorCode solvingAXB(Mat A, Mat B, PetscInt n, PetscInt nrhs, Mat X, const int me); > > I noticed that A has to be stored as an MATAIJ matrix and the B as a MATDENSE matrix. So I converted their storage types inside the solvingAXB as below (They were MPIAIJ type before): Note that MATAIJ is MATMPIAIJ when rank > 1 and MATSEQAIJ when rank == 1. > > ierr = MatConvert(A, MATAIJ, MAT_REUSE_MATRIX, &A); // A has to be MATAIJ for built-in PETSc LU! > MatView(A, PETSC_VIEWER_STDOUT_WORLD); > > ierr = MatConvert(B, MATDENSE, MAT_REUSE_MATRIX, &B); // B has to be a SeqDense matrix! > MatView(B, PETSC_VIEWER_STDOUT_WORLD); > > With this implementation, I can run the code with expected results when 1 processor is used. > > However, when I use 2 processors, I get a run time error which I guess is still somehow related to the matrix format but cannot figure out how to fix it. Could you please take a look at the following error message? 
PETSc doesn't have a parallel LU solver, you need to install PETSc with SuperLU_DIST or MUMPS (preferably both) and then pass in one of those names when you call MatGetFactor(). Barry > > [d3m956 at olympus ss08]$ mpiexec -n 2 dynSim -i 3g9b.txt > Matrix Object: 1 MPI processes > type: mpiaij > row 0: (0, 0 - 16.4474 i) (3, 0 + 17.3611 i) > row 1: (1, 0 - 8.34725 i) (7, 0 + 16 i) > row 2: (2, 0 - 5.51572 i) (5, 0 + 17.0648 i) > row 3: (0, 0 + 17.3611 i) (3, 0 - 39.9954 i) (4, 0 + 10.8696 i) (8, 0 + 11.7647 i) > row 4: (3, 0 + 10.8696 i) (4, 0.934 - 17.0633 i) (5, 0 + 5.88235 i) > row 5: (2, 0 + 17.0648 i) (4, 0 + 5.88235 i) (5, 0 - 32.8678 i) (6, 0 + 9.92063 i) > row 6: (5, 0 + 9.92063 i) (6, 1.03854 - 24.173 i) (7, 0 + 13.8889 i) > row 7: (1, 0 + 16 i) (6, 0 + 13.8889 i) (7, 0 - 36.1001 i) (8, 0 + 6.21118 i) > row 8: (3, 0 + 11.7647 i) (7, 0 + 6.21118 i) (8, 1.33901 - 18.5115 i) > Matrix Object: 1 MPI processes > type: mpidense > 0.0000000000000000e+00 + -1.6447368421052630e+01i 0.0000000000000000e+00 + 0.0000000000000000e+00i 0.0000000000000000e+00 + 0.0000000000000000e+00i > 0.0000000000000000e+00 + 0.0000000000000000e+00i 0.0000000000000000e+00 + -8.3472454090150254e+00i 0.0000000000000000e+00 + 0.0000000000000000e+00i > 0.0000000000000000e+00 + 0.0000000000000000e+00i 0.0000000000000000e+00 + 0.0000000000000000e+00i 0.0000000000000000e+00 + -5.5157198014340878e+00i > 0.0000000000000000e+00 + 0.0000000000000000e+00i 0.0000000000000000e+00 + 0.0000000000000000e+00i 0.0000000000000000e+00 + 0.0000000000000000e+00i > 0.0000000000000000e+00 + 0.0000000000000000e+00i 0.0000000000000000e+00 + 0.0000000000000000e+00i 0.0000000000000000e+00 + 0.0000000000000000e+00i > 0.0000000000000000e+00 + 0.0000000000000000e+00i 0.0000000000000000e+00 + 0.0000000000000000e+00i 0.0000000000000000e+00 + 0.0000000000000000e+00i > 0.0000000000000000e+00 + 0.0000000000000000e+00i 0.0000000000000000e+00 + 0.0000000000000000e+00i 0.0000000000000000e+00 + 0.0000000000000000e+00i > 0.0000000000000000e+00 + 0.0000000000000000e+00i 0.0000000000000000e+00 + 0.0000000000000000e+00i 0.0000000000000000e+00 + 0.0000000000000000e+00i > 0.0000000000000000e+00 + 0.0000000000000000e+00i 0.0000000000000000e+00 + 0.0000000000000000e+00i 0.0000000000000000e+00 + 0.0000000000000000e+00i > PETSC LU: > [0]PETSC ERROR: --------------------- Error Message ------------------------------------ > [0]PETSC ERROR: No support for this operation for this object type! > [0]PETSC ERROR: Matrix format mpiaij does not have a built-in PETSc LU! > [0]PETSC ERROR: ------------------------------------------------------------------------ > [0]PETSC ERROR: Petsc Development HG revision: 6e0ddc6e9b6d8a9d8eb4c0ede0105827a6b58dfb HG Date: Mon Mar 11 22:54:30 2013 -0500 > [0]PETSC ERROR: See docs/changes/index.html for recent updates. > [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. > [0]PETSC ERROR: See docs/index.html for manual pages. 
> [0]PETSC ERROR: ------------------------------------------------------------------------ > [0]PETSC ERROR: dynSim on a arch-complex named olympus.local by d3m956 Wed Apr 17 11:38:12 2013 > [0]PETSC ERROR: Libraries linked from /pic/projects/ds/petsc-dev/arch-complex/lib > [0]PETSC ERROR: Configure run at Tue Mar 12 14:32:37 2013 > [0]PETSC ERROR: Configure options --with-scalar-type=complex --with-clanguage=C++ PETSC_ARCH=arch-complex --with-fortran-kernels=generic > [0]PETSC ERROR: ------------------------------------------------------------------------ > [0]PETSC ERROR: MatGetFactor() line 3949 in src/mat/interface/matrix.c > [0]PETSC ERROR: solvingAXB() line 880 in "unknowndirectory/"reducedYBus.C > End! > [1]PETSC ERROR: ------------------------------------------------------------------------ > [1]PETSC ERROR: Caught signal number 15 Terminate: Somet process (or the batch system) has told this process to end > [1]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger > [1]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind[1]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors > [1]PETSC ERROR: likely location of problem given in stack below > [1]PETSC ERROR: --------------------- Stack Frames ------------------------------------ > [1]PETSC ERROR: Note: The EXACT line numbers in the stack are not available, > [1]PETSC ERROR: INSTEAD the line number of the start of the function > [1]PETSC ERROR: is given. > [1]PETSC ERROR: [1] PetscSleep line 35 src/sys/utils/psleep.c > [1]PETSC ERROR: [1] PetscTraceBackErrorHandler line 172 src/sys/error/errtrace.c > [1]PETSC ERROR: [1] PetscError line 361 src/sys/error/err.c > [1]PETSC ERROR: [1] MatGetFactor line 3932 src/mat/interface/matrix.c > [1]PETSC ERROR: --------------------- Error Message ------------------------------------ > [1]PETSC ERROR: Signal received! > [1]PETSC ERROR: ------------------------------------------------------------------------ > [1]PETSC ERROR: Petsc Development HG revision: 6e0ddc6e9b6d8a9d8eb4c0ede0105827a6b58dfb HG Date: Mon Mar 11 22:54:30 2013 -0500 > [1]PETSC ERROR: See docs/changes/index.html for recent updates. > [1]PETSC ERROR: See docs/faq.html for hints about trouble shooting. > [1]PETSC ERROR: See docs/index.html for manual pages. > [1]PETSC ERROR: ------------------------------------------------------------------------ > [1]PETSC ERROR: dynSim on a arch-complex named olympus.local by d3m956 Wed Apr 17 11:38:12 2013 > [1]PETSC ERROR: Libraries linked from /pic/projects/ds/petsc-dev/arch-complex/lib > [1]PETSC ERROR: Configure run at Tue Mar 12 14:32:37 2013 > [1]PETSC ERROR: Configure options --with-scalar-type=complex --with-clanguage=C++ PETSC_ARCH=arch-complex --with-fortran-kernels=generic > [1]PETSC ERROR: ------------------------------------------------------------------------ > [1]PETSC ERROR: User provided function() line 0 in unknown directory unknown file > -------------------------------------------------------------------------- > > Thanks, > Shuangshuang > > > -----Original Message----- > From: petsc-users-bounces at mcs.anl.gov [mailto:petsc-users-bounces at mcs.anl.gov] On Behalf Of Barry Smith > Sent: Tuesday, April 16, 2013 5:50 PM > To: PETSc users list > Subject: Re: [petsc-users] ksp for AX=B system > > > Shuangshuang > > This is what I was expecting, thanks for the confirmation. 
For these size problems you definitely want to use a direct solver (often parallel but not for smaller matrices) and solve multiple right hand sides. This means you actually will not use the KSP solver that is standard for most PETSc work, instead you will work directly with the MatGetFactor(), MatGetOrdering(), MatLUFactorSymbolic(), MatLUFactorNumeric(), MatMatSolve() paradigm where the A matrix is stored as an MATAIJ matrix and the B (multiple right hand side) as a MATDENSE matrix. > > An example that displays this paradigm is src/mat/examples/tests/ex125.c > > Once you have something running of interest to you we would like to work with you to improve the performance, we have some "tricks" we haven't yet implemented to make these solvers much faster than they will be by default. > > Barry > > > > On Apr 16, 2013, at 7:38 PM, "Jin, Shuangshuang" wrote: > >> Hi, Barry, thanks for your prompt reply. >> >> We have various size problems, from (n=9, m=3), (n=1081, m = 288), to (n=16072, m=2361) or even larger ultimately. >> >> Usually the dimension of the square matrix A, n is much larger than the column dimension of B, m. >> >> As you said, I'm using the loop to deal with the small (n=9, m=3) case. However, for bigger problem, I do hope there's a better approach. >> >> This is a power flow problem. When we parallelized it in OpenMP previously, we just parallelized the outside loop, and used a direct solver to solve it. >> >> We are now switching to MPI and like to use PETSc ksp solver to solve it in parallel. However, I don't know what's the best ksp solver I should use here? Direct solver or try a preconditioner? >> >> I appreciate very much for your recommendations. >> >> Thanks, >> Shuangshuang >> >> From jedbrown at mcs.anl.gov Wed Apr 17 16:51:07 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Wed, 17 Apr 2013 16:51:07 -0500 Subject: [petsc-users] Mailing list reply-to munging (was Any changes in ML usage between 3.1-p8 -> 3.3-p6?) In-Reply-To: References: <20130405143415.GH17937@karman> <20130417162647.GC23247@karman> <20130417190051.GA2495@karman> <20130417202640.GC2495@karman> <8738uog8i3.fsf@mcs.anl.gov> Message-ID: <87mwsweshg.fsf@mcs.anl.gov> Satish Balay writes: > I know you deal with personal replies for petsc-maint stuff. But I set > up my mailer to automatically set Reply-to:petsc-maint for all > petsc-maint traffic [and modify it manually for the 1% usage case > where thats not appropriate] If I only set Reply-to, then Gmail's "reply" does not reply correctly This seems buggy, but it's common and they never fix bugs so it doesn't help. Your headers set both: From: Satish Balay Reply-To: petsc-maint Several of us sending email as petsc-maint mixes up a lot of address books and Gmail will not allow me to send mail that way because Matt already claimed it and Gmail won't let two users send via the same address. So I would have to configure my outgoing smtp via mcs.anl.gov for those messages. (Not a problem, except when on networks that I have to proxy just to reach mcs.anl.gov.) > And wrt anonymous posts [without subscribing] - that was a receipie > for spam. [however good the spam filters are] - so you'll have to > account for that crap aswell. We get plenty of that on petsc-maint - > but now with petsc-users mailing list - that spam gets distributed to > all list subscribers. Yeah, it happens occasionally, but it's easier to filter with the original headers intact. 
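Returning to the AX=B thread above: a minimal sketch of the MatGetFactor() / MatGetOrdering() / MatLUFactorSymbolic() / MatLUFactorNumeric() / MatMatSolve() sequence Barry describes (and that src/mat/examples/tests/ex125.c exercises) could look like the following. The function name is made up for illustration; A is assumed to be an assembled MATAIJ matrix, and B and X are MATDENSE matrices with matching layouts (X can be created with MatDuplicate(B, MAT_DO_NOT_COPY_VALUES, &X)).

  #include <petscmat.h>

  /* Sketch only (untested): factor an assembled MATAIJ matrix A with an external
     package and solve for all right-hand sides stored as columns of the MATDENSE
     matrix B, writing the solutions into the MATDENSE matrix X. */
  PetscErrorCode SolveManyRHS(Mat A, Mat B, Mat X)
  {
    Mat            F;             /* holds the LU factors */
    IS             isrow, iscol;  /* orderings; external packages may ignore them */
    MatFactorInfo  info;
    PetscErrorCode ierr;

    PetscFunctionBegin;
    ierr = MatFactorInfoInitialize(&info);CHKERRQ(ierr);
    ierr = MatGetOrdering(A, MATORDERINGND, &isrow, &iscol);CHKERRQ(ierr);
    /* "superlu_dist" here; MATSOLVERMUMPS works the same way if MUMPS is installed */
    ierr = MatGetFactor(A, MATSOLVERSUPERLU_DIST, MAT_FACTOR_LU, &F);CHKERRQ(ierr);
    ierr = MatLUFactorSymbolic(F, A, isrow, iscol, &info);CHKERRQ(ierr);
    ierr = MatLUFactorNumeric(F, A, &info);CHKERRQ(ierr);
    ierr = MatMatSolve(F, B, X);CHKERRQ(ierr);    /* one call solves every column of B */
    ierr = MatDestroy(&F);CHKERRQ(ierr);
    ierr = ISDestroy(&isrow);CHKERRQ(ierr);
    ierr = ISDestroy(&iscol);CHKERRQ(ierr);
    PetscFunctionReturn(0);
  }

Keeping the numeric factorization around and calling MatMatSolve() again is also an option when the same A is reused with new right-hand sides; if a particular package lacks MatMatSolve() support in a given version, running ex125.c is a quick way to find out.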
From jedbrown at mcs.anl.gov Wed Apr 17 17:03:07 2013
From: jedbrown at mcs.anl.gov (Jed Brown)
Date: Wed, 17 Apr 2013 17:03:07 -0500
Subject: [petsc-users] Crash when using valgrind
In-Reply-To: References: Message-ID: <87k3o0erxg.fsf@mcs.anl.gov>

Dominik Szczerba writes: > On Wed, Apr 17, 2013 at 3:43 PM, Jed Brown wrote: > >> Can you get a stack trace? Does this happen on a different machine? >> >> Stack trace of what exactly? I do not seem to be able to run gdb with > valgrind...? > > gdb valgrind --tool=memcheck -q --num-callers=20 MySolver > gdb: unrecognized option '--tool=memcheck'

Sometimes it helps to use 'valgrind --db-attach=yes'. What happens when you pass -no_signal_handler to the PETSc program?

From balay at mcs.anl.gov Wed Apr 17 17:03:58 2013
From: balay at mcs.anl.gov (Satish Balay)
Date: Wed, 17 Apr 2013 17:03:58 -0500 (CDT)
Subject: [petsc-users] Mailing list reply-to munging (was Any changes in ML usage between 3.1-p8 -> 3.3-p6?)
In-Reply-To: <87mwsweshg.fsf@mcs.anl.gov> References: <20130405143415.GH17937@karman> <20130417162647.GC23247@karman> <20130417190051.GA2495@karman> <20130417202640.GC2495@karman> <8738uog8i3.fsf@mcs.anl.gov> <87mwsweshg.fsf@mcs.anl.gov> Message-ID:

On Wed, 17 Apr 2013, Jed Brown wrote: > Satish Balay writes: > > > I know you deal with personal replies for petsc-maint stuff. But I set > > up my mailer to automatically set Reply-to:petsc-maint for all > > petsc-maint traffic [and modify it manually for the 1% usage case > > where thats not appropriate] > > If I only set Reply-to, then Gmail's "reply" does not reply correctly > This seems buggy, but it's common and they never fix bugs so it doesn't > help. > > Your headers set both: > > From: Satish Balay

If From is a problem [and messes up anyone's mailboxes] - I can change that. I felt it was best to deal with it as petsc-maint completely.

> Reply-To: petsc-maint > > Several of us sending email as petsc-maint mixes up a lot of address > books and Gmail will not allow me to send mail that way because Matt > already claimed it and Gmail won't let two users send via the same > address.

No such problem from pine [even though most of you think it's an antique tool for current times]

> So I would have to configure my outgoing smtp via mcs.anl.gov > for those messages. (Not a problem, except when on networks that I have > to proxy just to reach mcs.anl.gov.)

Any SMTP server should be fine [but I guess if one doesn't work - all won't work]. I usually tunnel imap/smtp over ssh [even though that is not required for smtp]. But you do go to places with blocked ssh - so that doesn't help.

Satish

> > > And wrt anonymous posts [without subscribing] - that was a receipie > > for spam. [however good the spam filters are] - so you'll have to > > account for that crap aswell. We get plenty of that on petsc-maint - > > but now with petsc-users mailing list - that spam gets distributed to > > all list subscribers. > > Yeah, it happens occasionally, but it's easier to filter with the > original headers intact. >

From jbakosi at lanl.gov Wed Apr 17 17:21:03 2013
From: jbakosi at lanl.gov (Jozsef Bakosi)
Date: Wed, 17 Apr 2013 16:21:03 -0600
Subject: [petsc-users] Any changes in ML usage between 3.1-p8 -> 3.3-p6?
In-Reply-To: <20130417202640.GC2495@karman> References: <20130405143415.GH17937@karman> <20130417162647.GC23247@karman> <20130417190051.GA2495@karman> <20130417202640.GC2495@karman> Message-ID: <20130417222103.GG2495@karman> > On 04.17.2013 15:38, Matthew Knepley wrote: > > > On 04.17.2013 14:26, Jozsef Bakosi wrote: > > > > > Mark F. Adams mark.adams at columbia.edu > > > Wed Apr 17 14:25:04 CDT 2013 > > > > > > 2) If you get "Indefinite PC" (I am guessing from using CG) it is because the > > > preconditioner > > > really is indefinite (or possible non-symmetric). We improved the checking > > > for this in one > > > of those releases. > > > > > > AMG does not guarantee an SPD preconditioner so why persist in trying to use > > > CG? > > > > > > > > > AMG is positive if everything is working correctly. > > > > > > Are these problems only semidefinite? Singular systems can give erratic > > > behavior. > > > > It is a Laplace operator from Galerkin finite elements. And the PC is fine on > > ranks 1, 2, 3, and 5 -- indefinite only on 4. I think we can safely say that the > > same PC should be positive on 4 as well. > > Why is it safe? Because it sounds plausible? Mathematics is replete with things > that sound plausible and are false. Are there proofs that suggest this? Is there > computational evidence? Why would I believe you? Okay, so here is some additional information: I tried both old and new PETSc versions again, but now only taking 2 iterations (both with 4 CPUs) and checked the residuals. I get the same exact PC from ML in both cases, however, the residuals are different after both iterations: Please do a diff on the attached files and you can verify that the ML diagnostics are exactly the same: same max eigenvalues, nodes aggregated, etc, while the norm coming out of the solver at the end at both iterations are different. We reproduced the same exact behavior on two different linux platforms. Once again: same application source code, same ML source code, different PETSc: 3.1-p8 vs. 3.3-p6. -------------- next part -------------- Entering ML_Gen_MGHierarchy_UsingAggregation ************************************************************** * ML Aggregation information * ============================================================== ML_Aggregate : ordering = natural. ML_Aggregate : min nodes/aggr = 2 ML_Aggregate : max neigh selected = 0 ML_Aggregate : attach scheme = MAXLINK ML_Aggregate : strong threshold = 0.000000e+00 ML_Aggregate : P damping factor = 1.333333e+00 ML_Aggregate : number of PDEs = 1 ML_Aggregate : number of null vec = 1 ML_Aggregate : smoother drop tol = 0.000000e+00 ML_Aggregate : max coarse size = 250 ML_Aggregate : max no. of levels = 10 ************************************************************** ML_Aggregate_Coarsen (level 0) begins ML_Aggregate_CoarsenUncoupled : current level = 0 ML_Aggregate_CoarsenUncoupled : current eps = 0.000000e+00 Aggregation(UVB) : Total nonzeros = 6183334 (Nrows=236600) Aggregation(UC) : Phase 0 - no. 
of bdry pts = 0 Aggregation(UC) : Phase 1 - nodes aggregated = 216030 (236600) Aggregation(UC) : Phase 1 - total aggregates = 9124 Aggregation(UC_Phase2_3) : Phase 1 - nodes aggregated = 216030 Aggregation(UC_Phase2_3) : Phase 1 - total aggregates = 9124 Aggregation(UC_Phase2_3) : Phase 2a- additional aggregates = 0 Aggregation(UC_Phase2_3) : Phase 2 - total aggregates = 9124 Aggregation(UC_Phase2_3) : Phase 2 - boundary nodes = 0 Aggregation(UC_Phase2_3) : Phase 3 - leftovers = 0 and singletons = 0 Gen_Prolongator (level 0) : Max eigenvalue = 4.2404e+00 Prolongator/Restriction smoother (level 0) : damping factor #1 = 3.1444e-01 Prolongator/Restriction smoother (level 0) : ( = 1.3333e+00 / 4.2404e+00) ML_Aggregate_Coarsen (level 1) begins ML_Aggregate_CoarsenUncoupled : current level = 1 ML_Aggregate_CoarsenUncoupled : current eps = 0.000000e+00 Aggregation(UVB) : Total nonzeros = 267508 (Nrows=9124) Aggregation(UC) : Phase 0 - no. of bdry pts = 0 Aggregation(UC) : Phase 1 - nodes aggregated = 5944 (9124) Aggregation(UC) : Phase 1 - total aggregates = 293 Aggregation(UC_Phase2_3) : Phase 1 - nodes aggregated = 5944 Aggregation(UC_Phase2_3) : Phase 1 - total aggregates = 293 Aggregation(UC_Phase2_3) : Phase 2a- additional aggregates = 45 Aggregation(UC_Phase2_3) : Phase 2 - total aggregates = 338 Aggregation(UC_Phase2_3) : Phase 2 - boundary nodes = 0 Aggregation(UC_Phase2_3) : Phase 3 - leftovers = 0 and singletons = 0 Gen_Prolongator (level 1) : Max eigenvalue = 1.9726e+00 Prolongator/Restriction smoother (level 1) : damping factor #1 = 6.7594e-01 Prolongator/Restriction smoother (level 1) : ( = 1.3333e+00 / 1.9726e+00) ML_Aggregate_Coarsen (level 2) begins ML_Aggregate_CoarsenUncoupled : current level = 2 ML_Aggregate_CoarsenUncoupled : current eps = 0.000000e+00 Aggregation(UVB) : Total nonzeros = 12942 (Nrows=338) Aggregation(UC) : Phase 0 - no. of bdry pts = 0 Aggregation(UC) : Phase 1 - nodes aggregated = 174 (338) Aggregation(UC) : Phase 1 - total aggregates = 10 Aggregation(UC_Phase2_3) : Phase 1 - nodes aggregated = 174 Aggregation(UC_Phase2_3) : Phase 1 - total aggregates = 10 Aggregation(UC_Phase2_3) : Phase 2a- additional aggregates = 5 Aggregation(UC_Phase2_3) : Phase 2 - total aggregates = 15 Aggregation(UC_Phase2_3) : Phase 2 - boundary nodes = 0 Aggregation(UC_Phase2_3) : Phase 3 - leftovers = 0 and singletons = 0 Gen_Prolongator (level 2) : Max eigenvalue = 1.4886e+00 Prolongator/Restriction smoother (level 2) : damping factor #1 = 8.9569e-01 Prolongator/Restriction smoother (level 2) : ( = 1.3333e+00 / 1.4886e+00) Smoothed Aggregation : operator complexity = 1.045392e+00. 
KSP Object:(PPE_) type: cg maximum iterations=10000 tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using nonzero initial guess using UNPRECONDITIONED norm type for convergence test PC Object:(PPE_) type: ml MG: type is MULTIPLICATIVE, levels=4 cycles=v Cycles per PCApply=1 Coarse grid solver -- level 0 presmooths=1 postsmooths=1 ----- KSP Object:(PPE_mg_coarse_) type: preonly maximum iterations=1, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using PRECONDITIONED norm type for convergence test PC Object:(PPE_mg_coarse_) type: redundant Redundant preconditioner: First (color=0) of 4 PCs follows KSP Object:(PPE_mg_coarse_redundant_) type: preonly maximum iterations=10000, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using PRECONDITIONED norm type for convergence test PC Object:(PPE_mg_coarse_redundant_) type: lu LU: out-of-place factorization tolerance for zero pivot 2.22045e-16 matrix ordering: nd factor fill ratio given 5, needed 1 Factored matrix follows: Matrix Object: type=seqaij, rows=15, cols=15 package used to perform factorization: petsc total: nonzeros=225, allocated nonzeros=225 using I-node routines: found 3 nodes, limit used is 5 linear system matrix = precond matrix: Matrix Object: type=seqaij, rows=15, cols=15 total: nonzeros=225, allocated nonzeros=225 using I-node routines: found 3 nodes, limit used is 5 KSP Object:(PPE_mg_coarse_redundant_) type: preonly maximum iterations=10000, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using PRECONDITIONED norm type for convergence test PC Object:(PPE_mg_coarse_redundant_) type: lu LU: out-of-place factorization tolerance for zero pivot 2.22045e-16 matrix ordering: nd factor fill ratio given 5, needed 1 Factored matrix follows: Matrix Object: type=seqaij, rows=15, cols=15 package used to perform factorization: petsc total: nonzeros=225, allocated nonzeros=225 using I-node routines: found 3 nodes, limit used is 5 linear system matrix = precond matrix: Matrix Object: type=seqaij, rows=15, cols=15 total: nonzeros=225, allocated nonzeros=225 using I-node routines: found 3 nodes, limit used is 5 KSP Object:(PPE_mg_coarse_redundant_) type: preonly maximum iterations=10000, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using PRECONDITIONED norm type for convergence test PC Object:(PPE_mg_coarse_redundant_) type: lu LU: out-of-place factorization tolerance for zero pivot 2.22045e-16 matrix ordering: nd factor fill ratio given 5, needed 1 Factored matrix follows: Matrix Object: type=seqaij, rows=15, cols=15 package used to perform factorization: petsc total: nonzeros=225, allocated nonzeros=225 using I-node routines: found 3 nodes, limit used is 5 linear system matrix = precond matrix: Matrix Object: type=seqaij, rows=15, cols=15 total: nonzeros=225, allocated nonzeros=225 using I-node routines: found 3 nodes, limit used is 5 linear system matrix = precond matrix: Matrix Object: type=mpiaij, rows=15, cols=15 total: nonzeros=225, allocated nonzeros=225 using I-node (on process 0) routines: found 1 nodes, limit used is 5 Down solver (pre-smoother) on level 1 smooths=1 -------------------- KSP Object:(PPE_mg_levels_1_) type: richardson Richardson: damping factor=0.9 maximum iterations=1 tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using nonzero 
initial guess using NO norm type for convergence test PC Object:(PPE_mg_levels_1_) type: bjacobi block Jacobi: number of blocks = 4 Local solve is same for all blocks, in the following KSP and PC objects: KSP Object:(PPE_mg_levels_1_sub_) type: preonly maximum iterations=10000, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using NO norm type for convergence test PC Object:(PPE_mg_levels_1_sub_) type: icc 0 levels of fill tolerance for zero pivot 1e-12 using Manteuffel shift matrix ordering: natural linear system matrix = precond matrix: Matrix Object: type=seqaij, rows=81, cols=81 total: nonzeros=2091, allocated nonzeros=2091 not using I-node routines linear system matrix = precond matrix: Matrix Object: type=mpiaij, rows=338, cols=338 total: nonzeros=12942, allocated nonzeros=12942 not using I-node (on process 0) routines Up solver (post-smoother) same as down solver (pre-smoother) Down solver (pre-smoother) on level 2 smooths=1 -------------------- KSP Object:(PPE_mg_levels_2_) type: richardson Richardson: damping factor=0.9 maximum iterations=1 tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using nonzero initial guess using NO norm type for convergence test PC Object:(PPE_mg_levels_2_) type: bjacobi block Jacobi: number of blocks = 4 Local solve is same for all blocks, in the following KSP and PC objects: KSP Object:(PPE_mg_levels_2_sub_) type: preonly maximum iterations=10000, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using NO norm type for convergence test PC Object:(PPE_mg_levels_2_sub_) type: icc 0 levels of fill tolerance for zero pivot 1e-12 using Manteuffel shift matrix ordering: natural linear system matrix = precond matrix: Matrix Object: type=seqaij, rows=2265, cols=2265 total: nonzeros=60131, allocated nonzeros=60131 not using I-node routines linear system matrix = precond matrix: Matrix Object: type=mpiaij, rows=9124, cols=9124 total: nonzeros=267508, allocated nonzeros=267508 not using I-node (on process 0) routines Up solver (post-smoother) same as down solver (pre-smoother) Down solver (pre-smoother) on level 3 smooths=1 -------------------- KSP Object:(PPE_mg_levels_3_) type: richardson Richardson: damping factor=0.9 maximum iterations=1 tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using nonzero initial guess using NO norm type for convergence test PC Object:(PPE_mg_levels_3_) type: bjacobi block Jacobi: number of blocks = 4 Local solve is same for all blocks, in the following KSP and PC objects: KSP Object:(PPE_mg_levels_3_sub_) type: preonly maximum iterations=10000, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using NO norm type for convergence test PC Object:(PPE_mg_levels_3_sub_) type: icc 0 levels of fill tolerance for zero pivot 1e-12 using Manteuffel shift matrix ordering: natural linear system matrix = precond matrix: Matrix Object: type=seqaij, rows=59045, cols=59045 total: nonzeros=1504413, allocated nonzeros=1594215 not using I-node routines linear system matrix = precond matrix: Matrix Object: type=mpiaij, rows=236600, cols=236600 total: nonzeros=6183334, allocated nonzeros=12776400 not using I-node (on process 0) routines Up solver (post-smoother) same as down solver (pre-smoother) linear system matrix = precond matrix: Matrix Object: type=mpiaij, rows=236600, cols=236600 total: nonzeros=6183334, allocated 
nonzeros=12776400 not using I-node (on process 0) routines solver: ||b|| = 5.9221e-03 solver: ||x(0)|| = 0.0000e+00 solver: Iteration: 1 ||r|| = 3.7007e-03 ||x(i+1)-x(i)|| = 1.2896e+01 ||x(i+1)|| = 1.2896e+01 solver: Iteration: 2 ||r|| = 2.8927e-03 ||x(i+1)-x(i)|| = 2.8520e+00 ||x(i+1)|| = 1.2444e+01 -------------- next part -------------- ML_Gen_MultiLevelHierarchy (level 0) : Gen Restriction and Prolongator ML_Aggregate_Coarsen (level 0) begins ML_Aggregate_CoarsenUncoupled : current level = 0 ML_Aggregate_CoarsenUncoupled : current eps = 0.000000e+00 Aggregation(UVB) : Total nonzeros = 6183334 (Nrows=236600) Aggregation(UC) : Phase 0 - no. of bdry pts = 0 Aggregation(UC) : Phase 1 - nodes aggregated = 216030 (236600) Aggregation(UC) : Phase 1 - total aggregates = 9124 Aggregation(UC_Phase2_3) : Phase 1 - nodes aggregated = 216030 Aggregation(UC_Phase2_3) : Phase 1 - total aggregates = 9124 Aggregation(UC_Phase2_3) : Phase 2a- additional aggregates = 0 Aggregation(UC_Phase2_3) : Phase 2 - total aggregates = 9124 Aggregation(UC_Phase2_3) : Phase 2 - boundary nodes = 0 Aggregation(UC_Phase2_3) : Phase 3 - leftovers = 0 and singletons = 0 Gen_Prolongator (level 0) : Max eigenvalue = 4.2404e+00 Prolongator/Restriction smoother (level 0) : damping factor #1 = 3.1444e-01 Prolongator/Restriction smoother (level 0) : ( = 1.3333e+00 / 4.2404e+00) ML_Gen_MultiLevelHierarchy (level 1) : Gen Restriction and Prolongator ML_Aggregate_Coarsen (level 1) begins ML_Aggregate_CoarsenUncoupled : current level = 1 ML_Aggregate_CoarsenUncoupled : current eps = 0.000000e+00 Aggregation(UVB) : Total nonzeros = 267508 (Nrows=9124) Aggregation(UC) : Phase 0 - no. of bdry pts = 0 Aggregation(UC) : Phase 1 - nodes aggregated = 5944 (9124) Aggregation(UC) : Phase 1 - total aggregates = 293 Aggregation(UC_Phase2_3) : Phase 1 - nodes aggregated = 5944 Aggregation(UC_Phase2_3) : Phase 1 - total aggregates = 293 Aggregation(UC_Phase2_3) : Phase 2a- additional aggregates = 45 Aggregation(UC_Phase2_3) : Phase 2 - total aggregates = 338 Aggregation(UC_Phase2_3) : Phase 2 - boundary nodes = 0 Aggregation(UC_Phase2_3) : Phase 3 - leftovers = 0 and singletons = 0 Gen_Prolongator (level 1) : Max eigenvalue = 1.9726e+00 Prolongator/Restriction smoother (level 1) : damping factor #1 = 6.7594e-01 Prolongator/Restriction smoother (level 1) : ( = 1.3333e+00 / 1.9726e+00) ML_Gen_MultiLevelHierarchy (level 2) : Gen Restriction and Prolongator ML_Aggregate_Coarsen (level 2) begins ML_Aggregate_CoarsenUncoupled : current level = 2 ML_Aggregate_CoarsenUncoupled : current eps = 0.000000e+00 Aggregation(UVB) : Total nonzeros = 12942 (Nrows=338) Aggregation(UC) : Phase 0 - no. of bdry pts = 0 Aggregation(UC) : Phase 1 - nodes aggregated = 174 (338) Aggregation(UC) : Phase 1 - total aggregates = 10 Aggregation(UC_Phase2_3) : Phase 1 - nodes aggregated = 174 Aggregation(UC_Phase2_3) : Phase 1 - total aggregates = 10 Aggregation(UC_Phase2_3) : Phase 2a- additional aggregates = 5 Aggregation(UC_Phase2_3) : Phase 2 - total aggregates = 15 Aggregation(UC_Phase2_3) : Phase 2 - boundary nodes = 0 Aggregation(UC_Phase2_3) : Phase 3 - leftovers = 0 and singletons = 0 Gen_Prolongator (level 2) : Max eigenvalue = 1.4886e+00 Prolongator/Restriction smoother (level 2) : damping factor #1 = 8.9569e-01 Prolongator/Restriction smoother (level 2) : ( = 1.3333e+00 / 1.4886e+00) Smoothed Aggregation : operator complexity = 1.045392e+00. 
KSP Object:(PPE_) 4 MPI processes type: cg maximum iterations=10000 tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using nonzero initial guess using UNPRECONDITIONED norm type for convergence test PC Object:(PPE_) 4 MPI processes type: ml MG: type is MULTIPLICATIVE, levels=4 cycles=v Cycles per PCApply=1 Using Galerkin computed coarse grid matrices Coarse grid solver -- level ------------------------------- KSP Object: (PPE_mg_coarse_) 4 MPI processes type: preonly maximum iterations=1, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using NONE norm type for convergence test PC Object: (PPE_mg_coarse_) 4 MPI processes type: redundant Redundant preconditioner: First (color=0) of 4 PCs follows KSP Object: (PPE_mg_coarse_redundant_) 1 MPI processes type: preonly maximum iterations=10000, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using NONE norm type for convergence test PC Object: (PPE_mg_coarse_redundant_) 1 MPI processes type: lu LU: out-of-place factorization tolerance for zero pivot 2.22045e-16 matrix ordering: nd factor fill ratio given 5, needed 1 Factored matrix follows: Matrix Object: 1 MPI processes type: seqaij rows=15, cols=15 package used to perform factorization: petsc total: nonzeros=225, allocated nonzeros=225 total number of mallocs used during MatSetValues calls =0 using I-node routines: found 3 nodes, limit used is 5 linear system matrix = precond matrix: Matrix Object: 1 MPI processes type: seqaij rows=15, cols=15 total: nonzeros=225, allocated nonzeros=225 total number of mallocs used during MatSetValues calls =0 using I-node routines: found 3 nodes, limit used is 5 linear system matrix = precond matrix: Matrix Object: 4 MPI processes type: mpiaij rows=15, cols=15 total: nonzeros=225, allocated nonzeros=225 total number of mallocs used during MatSetValues calls =0 using I-node (on process 0) routines: found 1 nodes, limit used is 5 Down solver (pre-smoother) on level 1 ------------------------------- KSP Object: (PPE_mg_levels_1_) 4 MPI processes type: richardson Richardson: damping factor=0.9 maximum iterations=1, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using NONE norm type for convergence test PC Object: (PPE_mg_levels_1_) 4 MPI processes type: bjacobi block Jacobi: number of blocks = 4 Local solve is same for all blocks, in the following KSP and PC objects: KSP Object: (PPE_mg_levels_1_sub_) 1 MPI processes type: preonly maximum iterations=10000, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using NONE norm type for convergence test PC Object: (PPE_mg_levels_1_sub_) 1 MPI processes type: icc 0 levels of fill tolerance for zero pivot 2.22045e-14 using Manteuffel shift matrix ordering: natural linear system matrix = precond matrix: Matrix Object: 1 MPI processes type: seqaij rows=81, cols=81 total: nonzeros=2091, allocated nonzeros=2091 total number of mallocs used during MatSetValues calls =0 not using I-node routines linear system matrix = precond matrix: Matrix Object: 4 MPI processes type: mpiaij rows=338, cols=338 total: nonzeros=12942, allocated nonzeros=12942 total number of mallocs used during MatSetValues calls =0 not using I-node (on process 0) routines Up solver (post-smoother) on level 1 ------------------------------- KSP Object: (PPE_mg_levels_1_) 4 MPI processes type: richardson 
Richardson: damping factor=0.9 maximum iterations=1 tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using nonzero initial guess using NONE norm type for convergence test PC Object: (PPE_mg_levels_1_) 4 MPI processes type: bjacobi block Jacobi: number of blocks = 4 Local solve is same for all blocks, in the following KSP and PC objects: KSP Object: (PPE_mg_levels_1_sub_) 1 MPI processes type: preonly maximum iterations=10000, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using NONE norm type for convergence test PC Object: (PPE_mg_levels_1_sub_) 1 MPI processes type: icc 0 levels of fill tolerance for zero pivot 2.22045e-14 using Manteuffel shift matrix ordering: natural linear system matrix = precond matrix: Matrix Object: 1 MPI processes type: seqaij rows=81, cols=81 total: nonzeros=2091, allocated nonzeros=2091 total number of mallocs used during MatSetValues calls =0 not using I-node routines linear system matrix = precond matrix: Matrix Object: 4 MPI processes type: mpiaij rows=338, cols=338 total: nonzeros=12942, allocated nonzeros=12942 total number of mallocs used during MatSetValues calls =0 not using I-node (on process 0) routines Down solver (pre-smoother) on level 2 ------------------------------- KSP Object: (PPE_mg_levels_2_) 4 MPI processes type: richardson Richardson: damping factor=0.9 maximum iterations=1, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using NONE norm type for convergence test PC Object: (PPE_mg_levels_2_) 4 MPI processes type: bjacobi block Jacobi: number of blocks = 4 Local solve is same for all blocks, in the following KSP and PC objects: KSP Object: (PPE_mg_levels_2_sub_) 1 MPI processes type: preonly maximum iterations=10000, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using NONE norm type for convergence test PC Object: (PPE_mg_levels_2_sub_) 1 MPI processes type: icc 0 levels of fill tolerance for zero pivot 2.22045e-14 using Manteuffel shift matrix ordering: natural linear system matrix = precond matrix: Matrix Object: 1 MPI processes type: seqaij rows=2265, cols=2265 total: nonzeros=60131, allocated nonzeros=60131 total number of mallocs used during MatSetValues calls =0 not using I-node routines linear system matrix = precond matrix: Matrix Object: 4 MPI processes type: mpiaij rows=9124, cols=9124 total: nonzeros=267508, allocated nonzeros=267508 total number of mallocs used during MatSetValues calls =0 not using I-node (on process 0) routines Up solver (post-smoother) on level 2 ------------------------------- KSP Object: (PPE_mg_levels_2_) 4 MPI processes type: richardson Richardson: damping factor=0.9 maximum iterations=1 tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using nonzero initial guess using NONE norm type for convergence test PC Object: (PPE_mg_levels_2_) 4 MPI processes type: bjacobi block Jacobi: number of blocks = 4 Local solve is same for all blocks, in the following KSP and PC objects: KSP Object: (PPE_mg_levels_2_sub_) 1 MPI processes type: preonly maximum iterations=10000, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using NONE norm type for convergence test PC Object: (PPE_mg_levels_2_sub_) 1 MPI processes type: icc 0 levels of fill tolerance for zero pivot 2.22045e-14 using Manteuffel shift matrix ordering: natural linear 
system matrix = precond matrix: Matrix Object: 1 MPI processes type: seqaij rows=2265, cols=2265 total: nonzeros=60131, allocated nonzeros=60131 total number of mallocs used during MatSetValues calls =0 not using I-node routines linear system matrix = precond matrix: Matrix Object: 4 MPI processes type: mpiaij rows=9124, cols=9124 total: nonzeros=267508, allocated nonzeros=267508 total number of mallocs used during MatSetValues calls =0 not using I-node (on process 0) routines Down solver (pre-smoother) on level 3 ------------------------------- KSP Object: (PPE_mg_levels_3_) 4 MPI processes type: richardson Richardson: damping factor=0.9 maximum iterations=1, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using NONE norm type for convergence test PC Object: (PPE_mg_levels_3_) 4 MPI processes type: bjacobi block Jacobi: number of blocks = 4 Local solve is same for all blocks, in the following KSP and PC objects: KSP Object: (PPE_mg_levels_3_sub_) 1 MPI processes type: preonly maximum iterations=10000, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using NONE norm type for convergence test PC Object: (PPE_mg_levels_3_sub_) 1 MPI processes type: icc 0 levels of fill tolerance for zero pivot 2.22045e-14 using Manteuffel shift matrix ordering: natural linear system matrix = precond matrix: Matrix Object: 1 MPI processes type: seqaij rows=59045, cols=59045 total: nonzeros=1504413, allocated nonzeros=1594215 total number of mallocs used during MatSetValues calls =0 not using I-node routines linear system matrix = precond matrix: Matrix Object: 4 MPI processes type: mpiaij rows=236600, cols=236600 total: nonzeros=6183334, allocated nonzeros=12776400 total number of mallocs used during MatSetValues calls =0 not using I-node (on process 0) routines Up solver (post-smoother) on level 3 ------------------------------- KSP Object: (PPE_mg_levels_3_) 4 MPI processes type: richardson Richardson: damping factor=0.9 maximum iterations=1 tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using nonzero initial guess using NONE norm type for convergence test PC Object: (PPE_mg_levels_3_) 4 MPI processes type: bjacobi block Jacobi: number of blocks = 4 Local solve is same for all blocks, in the following KSP and PC objects: KSP Object: (PPE_mg_levels_3_sub_) 1 MPI processes type: preonly maximum iterations=10000, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using NONE norm type for convergence test PC Object: (PPE_mg_levels_3_sub_) 1 MPI processes type: icc 0 levels of fill tolerance for zero pivot 2.22045e-14 using Manteuffel shift matrix ordering: natural linear system matrix = precond matrix: Matrix Object: 1 MPI processes type: seqaij rows=59045, cols=59045 total: nonzeros=1504413, allocated nonzeros=1594215 total number of mallocs used during MatSetValues calls =0 not using I-node routines linear system matrix = precond matrix: Matrix Object: 4 MPI processes type: mpiaij rows=236600, cols=236600 total: nonzeros=6183334, allocated nonzeros=12776400 total number of mallocs used during MatSetValues calls =0 not using I-node (on process 0) routines linear system matrix = precond matrix: Matrix Object: 4 MPI processes type: mpiaij rows=236600, cols=236600 total: nonzeros=6183334, allocated nonzeros=12776400 total number of mallocs used during MatSetValues calls =0 not using I-node (on process 0) routines 
solver: ||b|| = 5.9221e-03 solver: ||x(0)|| = 0.0000e+00 solver: Iteration: 1 ||r|| = 2.2256e-03 ||x(i+1)-x(i)|| = 4.7864e+00 ||x(i+1)|| = 4.7864e+00 solver: Iteration: 2 ||r|| = 1.4156e-03 ||x(i+1)-x(i)|| = 5.6720e+00 ||x(i+1)|| = 1.0227e+01 From jedbrown at mcs.anl.gov Wed Apr 17 17:25:12 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Wed, 17 Apr 2013 17:25:12 -0500 Subject: [petsc-users] Mailing list reply-to munging (was Any changes in ML usage between 3.1-p8 -> 3.3-p6?) In-Reply-To: References: <20130405143415.GH17937@karman> <20130417162647.GC23247@karman> <20130417190051.GA2495@karman> <20130417202640.GC2495@karman> <8738uog8i3.fsf@mcs.anl.gov> <87mwsweshg.fsf@mcs.anl.gov> Message-ID: <87ehe8eqwn.fsf@mcs.anl.gov> Satish Balay writes: > If from is a problem [and messesup anyones mailboxes] - I can change > that. I felt it was best to deal with it as petsc-maint completely. This will cause the problem I mentioned because Reply-to is not strictly respected either. >> Several of us sending email as petsc-maint mixes up a lot of address >> books and Gmail will not allow me to send mail that way because Matt >> already claimed it and Gmail won't let two users send via the same >> address. > > No such problem from pine [even though most of you think its an > antique tool for current times] The problem is server-side. I use Notmuch for most list mail (excluding messages sent from my phone) so I can write whatever headers I want, but smtp.gmail.com insists on only sending mail from verified addresses. This is to limit outgoing spam and spoofed message headers. The logic they have implemented only allows one gmail account to claim a particular outgoing address. > Any smtp server should be fine. [but I guess if one doesn't work - all > won't work]. I usually tunnel imap/smtp over ssh [eventhough is not > required for smtp]. But you do go to places with blocked ssh - so that > doesn't help. If I send it via a private host, it's hard to avoid being (occasionally) blocked as spam, especially when the server does not match the email address. Sending via Argonne's server fixes that problem, but then I have to be able to access their server to send email. But this is drifting off-topic. The question is whether it's better to munge Reply-to for petsc-users and petsc-dev, which boils down to: Is it feasible to adopt mailing list etiquette of using "reply-all" or must we stick with the current mode of munging Reply-to? The former has many benefits, including making more email discussion searchable. From dharmareddy84 at gmail.com Wed Apr 17 17:30:22 2013 From: dharmareddy84 at gmail.com (Dharmendar Reddy) Date: Wed, 17 Apr 2013 17:30:22 -0500 Subject: [petsc-users] snes test fails Message-ID: Hello, I am solving a one dimensional (linear) Poisson equation. I have setup a snes problem using DM object. I am using Dirichlet boundary conditons at both ends of the one dimensional domain. For the test case i use the same value for potential at both ends. I am using DMplex object for the mesh and dof lay out. If i run the solver, i get a constant potential profile as expected. However, If i run the solver with snes_type test It is giving an error. what could i have done wrong ? Testing hand-coded Jacobian, if the ratio is O(1.e-8), the hand-coded Jacobian is probably correct. Run with -snes_test_display to show difference of hand-coded and finite difference Jacobian. 
Norm of matrix ratio 1.0913e-10 difference 5.85042e-10 (user-defined state) Norm of matrix ratio 0.5 difference 5.36098 (constant state -1.0) Norm of matrix ratio 0.666667 difference 10.722 (constant state 1.0) [0]PETSC ERROR: --------------------- Error Message ----------------------------------- - [0]PETSC ERROR: Object is in wrong state! [0]PETSC ERROR: SNESTest aborts after Jacobian test! [0]PETSC ERROR: ----------------------------------------------------------------------- - [0]PETSC ERROR: Petsc Development GIT revision: e0030536e6573667cee5340eb367e8213e67d68 9 GIT Date: 2013-04-16 21:48:15 -0500 [0]PETSC ERROR: See docs/changes/index.html for recent updates. [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. [0]PETSC ERROR: See docs/index.html for manual pages. [0]PETSC ERROR: ----------------------------------------------------------------------- - [0]PETSC ERROR: ./PoisTest on a mpi_rScalar_Debug named login2.stampede.tacc.utexas.edu by Reddy135 Wed Apr 17 17:22:02 2013 [0]PETSC ERROR: Libraries linked from /home1/00924/Reddy135/LocalApps/petsc/mpi_rScalar _Debug/lib [0]PETSC ERROR: Configure run at Tue Apr 16 22:30:58 2013 [0]PETSC ERROR: Configure options --download-blacs=1 --download-ctetgen=1 --download- metis=1 --download-mumps=1 --download-parmetis=1 --download-scalapack=1 --download-supe rlu_dist=1 --download-triangle=1 --download-umfpack=1 --with-blas-lapack-dir=/opt/apps/ intel/13/composer_xe_2013.2.146/mkl/lib/intel64/ --with-debugging=1 --with-mpi-dir=/opt /apps/intel13/mvapich2/1.9/ --with-petsc-arch=mpi_rScalar_Debug --with-petsc-dir=/home1 /00924/Reddy135/LocalApps/petsc PETSC_ARCH=mpi_rScalar_Debug [0]PETSC ERROR: ----------------------------------------------------------------------- - [0]PETSC ERROR: SNESSolve_Test() line 127 in /home1/00924/Reddy135/LocalApps/petsc/src/ snes/impls/test/snestest.c [0]PETSC ERROR: SNESSolve() line 3755 in /home1/00924/Reddy135/LocalApps/petsc/src/snes /interface/snes.c SNESDivergedReason 0 Exiting solve -- ----------------------------------------------------- Dharmendar Reddy Palle Graduate Student Microelectronics Research center, University of Texas at Austin, 10100 Burnet Road, Bldg. 160 MER 2.608F, TX 78758-4445 e-mail: dharmareddy84 at gmail.com Phone: +1-512-350-9082 United States of America. Homepage: https://webspace.utexas.edu/~dpr342 -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Wed Apr 17 17:36:24 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Wed, 17 Apr 2013 17:36:24 -0500 Subject: [petsc-users] snes test fails In-Reply-To: References: Message-ID: <8761zkeqdz.fsf@mcs.anl.gov> Dharmendar Reddy writes: > Hello, > I am solving a one dimensional (linear) Poisson equation. I have > setup a snes problem using DM object. I am using Dirichlet boundary > conditons at both ends of the one dimensional domain. For the test case i > use the same value for potential at both ends. > > I am using DMplex object for the mesh and dof lay out. > > If i run the solver, i get a constant potential profile as expected. > However, If i run the solver with snes_type test > It is giving an error. what could i have done wrong ? > Testing hand-coded Jacobian, if the ratio is > O(1.e-8), the hand-coded Jacobian is probably correct. > Run with -snes_test_display to show difference > of hand-coded and finite difference Jacobian. 
> Norm of matrix ratio 1.0913e-10 difference 5.85042e-10 (user-defined state) > Norm of matrix ratio 0.5 difference 5.36098 (constant state -1.0) > Norm of matrix ratio 0.666667 difference 10.722 (constant state 1.0) > [0]PETSC ERROR: --------------------- Error Message > ----------------------------------- > - > [0]PETSC ERROR: Object is in wrong state! > [0]PETSC ERROR: SNESTest aborts after Jacobian test! This is what '-snes_type test' is supposed to do. So you're fine and your Jacobian is fine at the initial state, but not at the constant value 1.0 or -1.0. (That's okay if those are non-physical states, otherwise your Jacobian evaluation is incorrect.) > [0]PETSC ERROR: > ----------------------------------------------------------------------- > - > [0]PETSC ERROR: Petsc Development GIT revision: > e0030536e6573667cee5340eb367e8213e67d68 > 9 GIT Date: 2013-04-16 21:48:15 -0500 > [0]PETSC ERROR: See docs/changes/index.html for recent updates. > [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. > [0]PETSC ERROR: See docs/index.html for manual pages. > [0]PETSC ERROR: > ----------------------------------------------------------------------- > - > [0]PETSC ERROR: ./PoisTest on a mpi_rScalar_Debug named > login2.stampede.tacc.utexas.edu > by Reddy135 Wed Apr 17 17:22:02 2013 > [0]PETSC ERROR: Libraries linked from > /home1/00924/Reddy135/LocalApps/petsc/mpi_rScalar > _Debug/lib > [0]PETSC ERROR: Configure run at Tue Apr 16 22:30:58 2013 > [0]PETSC ERROR: Configure options --download-blacs=1 --download-ctetgen=1 > --download- > metis=1 --download-mumps=1 --download-parmetis=1 --download-scalapack=1 > --download-supe > rlu_dist=1 --download-triangle=1 --download-umfpack=1 > --with-blas-lapack-dir=/opt/apps/ > intel/13/composer_xe_2013.2.146/mkl/lib/intel64/ --with-debugging=1 > --with-mpi-dir=/opt > /apps/intel13/mvapich2/1.9/ --with-petsc-arch=mpi_rScalar_Debug > --with-petsc-dir=/home1 > /00924/Reddy135/LocalApps/petsc PETSC_ARCH=mpi_rScalar_Debug > [0]PETSC ERROR: > ----------------------------------------------------------------------- > - > [0]PETSC ERROR: SNESSolve_Test() line 127 in > /home1/00924/Reddy135/LocalApps/petsc/src/ > snes/impls/test/snestest.c > [0]PETSC ERROR: SNESSolve() line 3755 in > /home1/00924/Reddy135/LocalApps/petsc/src/snes > /interface/snes.c > SNESDivergedReason 0 > Exiting solve > -- > ----------------------------------------------------- > Dharmendar Reddy Palle > Graduate Student > Microelectronics Research center, > University of Texas at Austin, > 10100 Burnet Road, Bldg. 160 > MER 2.608F, TX 78758-4445 > e-mail: dharmareddy84 at gmail.com > Phone: +1-512-350-9082 > United States of America. > Homepage: https://webspace.utexas.edu/~dpr342 From mark.adams at columbia.edu Wed Apr 17 17:51:02 2013 From: mark.adams at columbia.edu (Mark F. Adams) Date: Wed, 17 Apr 2013 18:51:02 -0400 Subject: [petsc-users] Any changes in ML usage between 3.1-p8 -> 3.3-p6? In-Reply-To: <20130417222103.GG2495@karman> References: <20130405143415.GH17937@karman> <20130417162647.GC23247@karman> <20130417190051.GA2495@karman> <20130417202640.GC2495@karman> <20130417222103.GG2495@karman> Message-ID: I see you are using icc. Perhaps our icc changed a bit between versions. These results look like both solves are working and the old does a little better (after two iterations). Try using jacobi instead of icc. On Apr 17, 2013, at 6:21 PM, Jozsef Bakosi wrote: >> On 04.17.2013 15:38, Matthew Knepley wrote: >> >>> On 04.17.2013 14:26, Jozsef Bakosi wrote: >>> >>>> Mark F. 
Adams mark.adams at columbia.edu >>>> Wed Apr 17 14:25:04 CDT 2013 >>>> >>>> 2) If you get "Indefinite PC" (I am guessing from using CG) it is because the >>>> preconditioner >>>> really is indefinite (or possible non-symmetric). We improved the checking >>>> for this in one >>>> of those releases. >>>> >>>> AMG does not guarantee an SPD preconditioner so why persist in trying to use >>>> CG? >>>> >>>> >>>> AMG is positive if everything is working correctly. >>>> >>>> Are these problems only semidefinite? Singular systems can give erratic >>>> behavior. >>> >>> It is a Laplace operator from Galerkin finite elements. And the PC is fine on >>> ranks 1, 2, 3, and 5 -- indefinite only on 4. I think we can safely say that the >>> same PC should be positive on 4 as well. >> >> Why is it safe? Because it sounds plausible? Mathematics is replete with things >> that sound plausible and are false. Are there proofs that suggest this? Is there >> computational evidence? Why would I believe you? > > Okay, so here is some additional information: > > I tried both old and new PETSc versions again, but now only taking 2 iterations > (both with 4 CPUs) and checked the residuals. I get the same exact PC from ML in > both cases, however, the residuals are different after both iterations: > > Please do a diff on the attached files and you can verify that the ML > diagnostics are exactly the same: same max eigenvalues, nodes aggregated, etc, > while the norm coming out of the solver at the end at both iterations are > different. > > We reproduced the same exact behavior on two different linux platforms. > > Once again: same application source code, same ML source code, different PETSc: > 3.1-p8 vs. 3.3-p6. > From balay at mcs.anl.gov Wed Apr 17 17:51:40 2013 From: balay at mcs.anl.gov (Satish Balay) Date: Wed, 17 Apr 2013 17:51:40 -0500 (CDT) Subject: [petsc-users] Mailing list reply-to munging (was Any changes in ML usage between 3.1-p8 -> 3.3-p6?) In-Reply-To: <87ehe8eqwn.fsf@mcs.anl.gov> References: <20130405143415.GH17937@karman> <20130417162647.GC23247@karman> <20130417190051.GA2495@karman> <20130417202640.GC2495@karman> <8738uog8i3.fsf@mcs.anl.gov> <87mwsweshg.fsf@mcs.anl.gov> <87ehe8eqwn.fsf@mcs.anl.gov> Message-ID: On Wed, 17 Apr 2013, Jed Brown wrote: > But this is drifting off-topic. The question is whether it's better to > munge Reply-to for petsc-users and petsc-dev, which boils down to: > > Is it feasible to adopt mailing list etiquette of using "reply-all" or > must we stick with the current mode of munging Reply-to? > > The former has many benefits, including making more email discussion > searchable. This benefit is a bit dubious - as you'll get some migration of petsc-maint traffic to petsc-users - but then you loose all the 'reply-to-individual' emails from the archives [yeah - reply-to-reply emails with cc:list added get archived - perhaps with broken threads]. For myself - I can fixup my client side config for mailing lists to be similar to petsc-maint. So this change is up to you and Barry - who deal with these personal e-mails [which I guess you are already used to - and are ok with] And then there is spam - which you say can be dealt with filters. Is this client side or server side? Side note: if its client side - then I would expect users could be doing the same for current mode - and not have to do the 'subscribe' but set config to 'not recieve e-mails' stuff. satish From mark.adams at columbia.edu Wed Apr 17 18:22:36 2013 From: mark.adams at columbia.edu (Mark F. 
Adams) Date: Wed, 17 Apr 2013 19:22:36 -0400 Subject: [petsc-users] Fwd: Any changes in ML usage between 3.1-p8 -> 3.3-p6? References: Message-ID: <04CAAFD0-E94A-4D15-A2D3-5C4A677109A4@columbia.edu> Begin forwarded message: > From: "Christon, Mark A" > Subject: Re: [petsc-users] Any changes in ML usage between 3.1-p8 -> 3.3-p6? > Date: April 17, 2013 7:06:11 PM EDT > To: "Mark F. Adams" , "Bakosi, Jozsef" > > Hi Mark, > > Yes, looks like the new version does a little better after 2 iterations, but at the 8th iteration, the residuals increase:( > > I suspect this is why PETSc is whining about an indefinite preconditioner. > > Something definitely changes as we've had about 6-8 regression tests start failing that have been running flawlessly with ML + PETSc 3.1-p8 for almost two years. > > If we can understand what changed, we probably have a fighting chance of correcting it ? assuming it's some solver setting for PETSc that we're not currently using. > > - Mark > > -- > Mark A. Christon > Computational Physics Group (CCS-2) > Computer, Computational and Statistical Sciences Division > Los Alamos National Laboratory > MS D413, P.O. Box 1663 > Los Alamos, NM 87545 > > E-mail: christon at lanl.gov > Phone: (505) 663-5124 > Mobile: (505) 695-5649 (voice mail) > > International Journal for Numerical Methods in Fluids > > From: "Mark F. Adams" > Date: Wed, 17 Apr 2013 18:51:02 -0400 > To: PETSc users list > Cc: "Mark A. Christon" > Subject: Re: [petsc-users] Any changes in ML usage between 3.1-p8 -> 3.3-p6? > >> I see you are using icc. Perhaps our icc changed a bit between versions. These results look like both solves are working and the old does a little better (after two iterations). >> >> Try using jacobi instead of icc. >> >> >> On Apr 17, 2013, at 6:21 PM, Jozsef Bakosi wrote: >> >>>> On 04.17.2013 15:38, Matthew Knepley wrote: >>>>> On 04.17.2013 14:26, Jozsef Bakosi wrote: >>>>>> Mark F. Adams mark.adams at columbia.edu >>>>>> Wed Apr 17 14:25:04 CDT 2013 >>>>>> 2) If you get "Indefinite PC" (I am guessing from using CG) it is because the >>>>>> preconditioner >>>>>> really is indefinite (or possible non-symmetric). We improved the checking >>>>>> for this in one >>>>>> of those releases. >>>>>> AMG does not guarantee an SPD preconditioner so why persist in trying to use >>>>>> CG? >>>>>> AMG is positive if everything is working correctly. >>>>>> Are these problems only semidefinite? Singular systems can give erratic >>>>>> behavior. >>>>> It is a Laplace operator from Galerkin finite elements. And the PC is fine on >>>>> ranks 1, 2, 3, and 5 -- indefinite only on 4. I think we can safely say that the >>>>> same PC should be positive on 4 as well. >>>> Why is it safe? Because it sounds plausible? Mathematics is replete with things >>>> that sound plausible and are false. Are there proofs that suggest this? Is there >>>> computational evidence? Why would I believe you? >>> Okay, so here is some additional information: >>> I tried both old and new PETSc versions again, but now only taking 2 iterations >>> (both with 4 CPUs) and checked the residuals. I get the same exact PC from ML in >>> both cases, however, the residuals are different after both iterations: >>> Please do a diff on the attached files and you can verify that the ML >>> diagnostics are exactly the same: same max eigenvalues, nodes aggregated, etc, >>> while the norm coming out of the solver at the end at both iterations are >>> different. >>> We reproduced the same exact behavior on two different linux platforms. 
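One concrete way to try the "jacobi instead of icc" suggestion quoted above, without touching the application, is through runtime options on the ML smoother levels. The PPE_ prefixes below are copied from the -ksp_view output attached earlier in this thread, so treat the exact spellings as an untested sketch for that particular setup:

  -PPE_mg_levels_1_sub_pc_type jacobi
  -PPE_mg_levels_2_sub_pc_type jacobi
  -PPE_mg_levels_3_sub_pc_type jacobi

Replacing the whole block-Jacobi/ICC smoother with point Jacobi (e.g. -PPE_mg_levels_1_pc_type jacobi, and likewise on the other levels) is another variant of the same experiment.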
>>> Once again: same application source code, same ML source code, different PETSc: >>> 3.1-p8 vs. 3.3-p6. >>> >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From dharmareddy84 at gmail.com Wed Apr 17 18:25:48 2013 From: dharmareddy84 at gmail.com (Dharmendar Reddy) Date: Wed, 17 Apr 2013 18:25:48 -0500 Subject: [petsc-users] snes test fails In-Reply-To: References: <8761zkeqdz.fsf@mcs.anl.gov> Message-ID: Sorry, I do not know how the reply went to your id. I replied the usual way. I am solving : div(grad(V)) = 0 for x in [0, L] and V(0) = V1 and V(L) = V1 I get a solution V(x) = V1 after solving with intial guess V(X) = 0 for x in (0, L) Now, what is the user defined ? is it V(X) = 0 ? or V(x) = V1 ? Is the user defined state is V(X) = V1 then it means the Jacobin is tested after doing a solve first ? But then, for a linear Poisson problem, The Jacobin should be independent of V(X). I seem to get correct solution but snes_type test (fails ?) can you give me some pointers on how to debug this ? Thanks On Wed, Apr 17, 2013 at 5:55 PM, Dharmendar Reddy wrote: > I am confused. > > I am solving : div(grad(V)) = 0 for x in [0, L] and V(0) = V1 and V(L) > = V1 > > I get a solution V(x) = V1 after solving with intial guess V(X) = 0 for x > in (0, L) > > Now, what is the user defined ? is it V(X) = 0 ? or V(x) = V1 ? > > Is the user defined state is V(X) = V1 then it means the Jacobin is tested > after doing a solve first ? > > But then, for a linear Poisson problem, The Jacobin should be independent > of V(X). I seem to get correct solution but snes_type test (fails ?) can > you give me some pointers on how to debug this ? > > Thanks > Reddy > > > > > On Wed, Apr 17, 2013 at 5:36 PM, Jed Brown wrote: > >> Dharmendar Reddy writes: >> >> > Hello, >> > I am solving a one dimensional (linear) Poisson equation. I >> have >> > setup a snes problem using DM object. I am using Dirichlet boundary >> > conditons at both ends of the one dimensional domain. For the test case >> i >> > use the same value for potential at both ends. >> > >> > I am using DMplex object for the mesh and dof lay out. >> > >> > If i run the solver, i get a constant potential profile as expected. >> > However, If i run the solver with snes_type test >> > It is giving an error. what could i have done wrong ? >> > Testing hand-coded Jacobian, if the ratio is >> > O(1.e-8), the hand-coded Jacobian is probably correct. >> > Run with -snes_test_display to show difference >> > of hand-coded and finite difference Jacobian. >> > Norm of matrix ratio 1.0913e-10 difference 5.85042e-10 (user-defined >> state) >> > Norm of matrix ratio 0.5 difference 5.36098 (constant state -1.0) >> > Norm of matrix ratio 0.666667 difference 10.722 (constant state 1.0) >> > [0]PETSC ERROR: --------------------- Error Message >> > ----------------------------------- >> > - >> > [0]PETSC ERROR: Object is in wrong state! >> > [0]PETSC ERROR: SNESTest aborts after Jacobian test! >> >> This is what '-snes_type test' is supposed to do. So you're fine and >> your Jacobian is fine at the initial state, but not at the constant >> value 1.0 or -1.0. (That's okay if those are non-physical states, >> otherwise your Jacobian evaluation is incorrect.) 
>> >> > [0]PETSC ERROR: >> > ----------------------------------------------------------------------- >> > - >> > [0]PETSC ERROR: Petsc Development GIT revision: >> > e0030536e6573667cee5340eb367e8213e67d68 >> > 9 GIT Date: 2013-04-16 21:48:15 -0500 >> > [0]PETSC ERROR: See docs/changes/index.html for recent updates. >> > [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. >> > [0]PETSC ERROR: See docs/index.html for manual pages. >> > [0]PETSC ERROR: >> > ----------------------------------------------------------------------- >> > - >> > [0]PETSC ERROR: ./PoisTest on a mpi_rScalar_Debug named >> > login2.stampede.tacc.utexas.edu >> > by Reddy135 Wed Apr 17 17:22:02 2013 >> > [0]PETSC ERROR: Libraries linked from >> > /home1/00924/Reddy135/LocalApps/petsc/mpi_rScalar >> > _Debug/lib >> > [0]PETSC ERROR: Configure run at Tue Apr 16 22:30:58 2013 >> > [0]PETSC ERROR: Configure options --download-blacs=1 >> --download-ctetgen=1 >> > --download- >> > metis=1 --download-mumps=1 --download-parmetis=1 --download-scalapack=1 >> > --download-supe >> > rlu_dist=1 --download-triangle=1 --download-umfpack=1 >> > --with-blas-lapack-dir=/opt/apps/ >> > intel/13/composer_xe_2013.2.146/mkl/lib/intel64/ --with-debugging=1 >> > --with-mpi-dir=/opt >> > /apps/intel13/mvapich2/1.9/ --with-petsc-arch=mpi_rScalar_Debug >> > --with-petsc-dir=/home1 >> > /00924/Reddy135/LocalApps/petsc PETSC_ARCH=mpi_rScalar_Debug >> > [0]PETSC ERROR: >> > ----------------------------------------------------------------------- >> > - >> > [0]PETSC ERROR: SNESSolve_Test() line 127 in >> > /home1/00924/Reddy135/LocalApps/petsc/src/ >> > snes/impls/test/snestest.c >> > [0]PETSC ERROR: SNESSolve() line 3755 in >> > /home1/00924/Reddy135/LocalApps/petsc/src/snes >> > /interface/snes.c >> > SNESDivergedReason 0 >> > Exiting solve >> > -- >> > ----------------------------------------------------- >> > Dharmendar Reddy Palle >> > Graduate Student >> > Microelectronics Research center, >> > University of Texas at Austin, >> > 10100 Burnet Road, Bldg. 160 >> > MER 2.608F, TX 78758-4445 >> > e-mail: dharmareddy84 at gmail.com >> > Phone: +1-512-350-9082 >> > United States of America. >> > Homepage: https://webspace.utexas.edu/~dpr342 >> > > > > -- > ----------------------------------------------------- > Dharmendar Reddy Palle > Graduate Student > Microelectronics Research center, > University of Texas at Austin, > 10100 Burnet Road, Bldg. 160 > MER 2.608F, TX 78758-4445 > e-mail: dharmareddy84 at gmail.com > Phone: +1-512-350-9082 > United States of America. > Homepage: https://webspace.utexas.edu/~dpr342 > -- ----------------------------------------------------- Dharmendar Reddy Palle Graduate Student Microelectronics Research center, University of Texas at Austin, 10100 Burnet Road, Bldg. 160 MER 2.608F, TX 78758-4445 e-mail: dharmareddy84 at gmail.com Phone: +1-512-350-9082 United States of America. Homepage: https://webspace.utexas.edu/~dpr342 -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Wed Apr 17 18:31:58 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Wed, 17 Apr 2013 18:31:58 -0500 Subject: [petsc-users] snes test fails In-Reply-To: References: <8761zkeqdz.fsf@mcs.anl.gov> Message-ID: On Apr 17, 2013 6:25 PM, "Dharmendar Reddy" wrote: > > Sorry, I do not know how the reply went to your id. I replied the usual way. 
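For reference, the point that the Jacobian should not depend on the state can be made concrete for this 1-D problem: the hand-coded Jacobian of the discrete Poisson residual with Dirichlet ends is a constant tridiagonal matrix with identity rows at the boundaries, and its assembly never needs to read the current solution. The routine below is purely illustrative (hypothetical name, uniform spacing h, serial natural ordering), not the poster's code:

  #include <petscmat.h>

  /* Illustrative sketch: assemble the state-independent Jacobian of the 1-D
     Poisson residual on n nodes with spacing h; Dirichlet rows at both ends. */
  PetscErrorCode AssemblePoissonJacobian1D(Mat J, PetscInt n, PetscReal h)
  {
    PetscErrorCode ierr;
    PetscInt       i, col[3];
    PetscScalar    one = 1.0, v[3];

    PetscFunctionBegin;
    for (i = 0; i < n; i++) {
      if (i == 0 || i == n-1) {
        /* Dirichlet rows, F_i = V_i - V1: identity entry, no dependence on V */
        ierr = MatSetValues(J, 1, &i, 1, &i, &one, INSERT_VALUES);CHKERRQ(ierr);
      } else {
        /* interior rows, F_i = (-V_{i-1} + 2 V_i - V_{i+1}) / h^2 */
        col[0] = i-1; col[1] = i; col[2] = i+1;
        v[0] = -1.0/(h*h); v[1] = 2.0/(h*h); v[2] = -1.0/(h*h);
        ierr = MatSetValues(J, 1, &i, 3, col, v, INSERT_VALUES);CHKERRQ(ierr);
      }
    }
    ierr = MatAssemblyBegin(J, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
    ierr = MatAssemblyEnd(J, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
    PetscFunctionReturn(0);
  }

If the actual FormJacobian reads the state vector anywhere, or treats the boundary rows differently from the residual, that is one common way to end up with the kind of mismatch -snes_type test reports at the constant states.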
> > > > I am solving : div(grad(V)) = 0 for x in [0, L] and V(0) = V1 and V(L) = V1 > > I get a solution V(x) = V1 after solving with intial guess V(X) = 0 for x in (0, L) > > Now, what is the user defined ? is it V(X) = 0 ? or V(x) = V1 ? Yes, it is the initial guess. SNES Test does not actually solve the problem. > > Is the user defined state is V(X) = V1 then it means the Jacobin is tested after doing a solve first ? > > But then, for a linear Poisson problem, The Jacobin should be independent of V(X). I seem to get correct solution but snes_type test (fails ?) can you give me some pointers on how to debug this ? How do you define the function evaluation? Is the Jacobian code really independent of the state vector? (It should be.) > > Thanks > > > On Wed, Apr 17, 2013 at 5:55 PM, Dharmendar Reddy wrote: >> >> I am confused. >> >> I am solving : div(grad(V)) = 0 for x in [0, L] and V(0) = V1 and V(L) = V1 >> >> I get a solution V(x) = V1 after solving with intial guess V(X) = 0 for x in (0, L) >> >> Now, what is the user defined ? is it V(X) = 0 ? or V(x) = V1 ? >> >> Is the user defined state is V(X) = V1 then it means the Jacobin is tested after doing a solve first ? >> >> But then, for a linear Poisson problem, The Jacobin should be independent of V(X). I seem to get correct solution but snes_type test (fails ?) can you give me some pointers on how to debug this ? >> >> Thanks >> Reddy >> >> >> >> >> On Wed, Apr 17, 2013 at 5:36 PM, Jed Brown wrote: >>> >>> Dharmendar Reddy writes: >>> >>> > Hello, >>> > I am solving a one dimensional (linear) Poisson equation. I have >>> > setup a snes problem using DM object. I am using Dirichlet boundary >>> > conditons at both ends of the one dimensional domain. For the test case i >>> > use the same value for potential at both ends. >>> > >>> > I am using DMplex object for the mesh and dof lay out. >>> > >>> > If i run the solver, i get a constant potential profile as expected. >>> > However, If i run the solver with snes_type test >>> > It is giving an error. what could i have done wrong ? >>> > Testing hand-coded Jacobian, if the ratio is >>> > O(1.e-8), the hand-coded Jacobian is probably correct. >>> > Run with -snes_test_display to show difference >>> > of hand-coded and finite difference Jacobian. >>> > Norm of matrix ratio 1.0913e-10 difference 5.85042e-10 (user-defined state) >>> > Norm of matrix ratio 0.5 difference 5.36098 (constant state -1.0) >>> > Norm of matrix ratio 0.666667 difference 10.722 (constant state 1.0) >>> > [0]PETSC ERROR: --------------------- Error Message >>> > ----------------------------------- >>> > - >>> > [0]PETSC ERROR: Object is in wrong state! >>> > [0]PETSC ERROR: SNESTest aborts after Jacobian test! >>> >>> This is what '-snes_type test' is supposed to do. So you're fine and >>> your Jacobian is fine at the initial state, but not at the constant >>> value 1.0 or -1.0. (That's okay if those are non-physical states, >>> otherwise your Jacobian evaluation is incorrect.) >>> >>> > [0]PETSC ERROR: >>> > ----------------------------------------------------------------------- >>> > - >>> > [0]PETSC ERROR: Petsc Development GIT revision: >>> > e0030536e6573667cee5340eb367e8213e67d68 >>> > 9 GIT Date: 2013-04-16 21:48:15 -0500 >>> > [0]PETSC ERROR: See docs/changes/index.html for recent updates. >>> > [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. >>> > [0]PETSC ERROR: See docs/index.html for manual pages. 
>>> > [0]PETSC ERROR: >>> > ----------------------------------------------------------------------- >>> > - >>> > [0]PETSC ERROR: ./PoisTest on a mpi_rScalar_Debug named >>> > login2.stampede.tacc.utexas.edu >>> > by Reddy135 Wed Apr 17 17:22:02 2013 >>> > [0]PETSC ERROR: Libraries linked from >>> > /home1/00924/Reddy135/LocalApps/petsc/mpi_rScalar >>> > _Debug/lib >>> > [0]PETSC ERROR: Configure run at Tue Apr 16 22:30:58 2013 >>> > [0]PETSC ERROR: Configure options --download-blacs=1 --download-ctetgen=1 >>> > --download- >>> > metis=1 --download-mumps=1 --download-parmetis=1 --download-scalapack=1 >>> > --download-supe >>> > rlu_dist=1 --download-triangle=1 --download-umfpack=1 >>> > --with-blas-lapack-dir=/opt/apps/ >>> > intel/13/composer_xe_2013.2.146/mkl/lib/intel64/ --with-debugging=1 >>> > --with-mpi-dir=/opt >>> > /apps/intel13/mvapich2/1.9/ --with-petsc-arch=mpi_rScalar_Debug >>> > --with-petsc-dir=/home1 >>> > /00924/Reddy135/LocalApps/petsc PETSC_ARCH=mpi_rScalar_Debug >>> > [0]PETSC ERROR: >>> > ----------------------------------------------------------------------- >>> > - >>> > [0]PETSC ERROR: SNESSolve_Test() line 127 in >>> > /home1/00924/Reddy135/LocalApps/petsc/src/ >>> > snes/impls/test/snestest.c >>> > [0]PETSC ERROR: SNESSolve() line 3755 in >>> > /home1/00924/Reddy135/LocalApps/petsc/src/snes >>> > /interface/snes.c >>> > SNESDivergedReason 0 >>> > Exiting solve >>> > -- >>> > ----------------------------------------------------- >>> > Dharmendar Reddy Palle >>> > Graduate Student >>> > Microelectronics Research center, >>> > University of Texas at Austin, >>> > 10100 Burnet Road, Bldg. 160 >>> > MER 2.608F, TX 78758-4445 >>> > e-mail: dharmareddy84 at gmail.com >>> > Phone: +1-512-350-9082 >>> > United States of America. >>> > Homepage: https://webspace.utexas.edu/~dpr342 >> >> >> >> >> -- >> ----------------------------------------------------- >> Dharmendar Reddy Palle >> Graduate Student >> Microelectronics Research center, >> University of Texas at Austin, >> 10100 Burnet Road, Bldg. 160 >> MER 2.608F, TX 78758-4445 >> e-mail: dharmareddy84 at gmail.com >> Phone: +1-512-350-9082 >> United States of America. >> Homepage: https://webspace.utexas.edu/~dpr342 > > > > > -- > ----------------------------------------------------- > Dharmendar Reddy Palle > Graduate Student > Microelectronics Research center, > University of Texas at Austin, > 10100 Burnet Road, Bldg. 160 > MER 2.608F, TX 78758-4445 > e-mail: dharmareddy84 at gmail.com > Phone: +1-512-350-9082 > United States of America. > Homepage: https://webspace.utexas.edu/~dpr342 -------------- next part -------------- An HTML attachment was scrubbed... URL: From mark.adams at columbia.edu Wed Apr 17 18:42:47 2013 From: mark.adams at columbia.edu (Mark F. Adams) Date: Wed, 17 Apr 2013 19:42:47 -0400 Subject: [petsc-users] Fwd: Any changes in ML usage between 3.1-p8 -> 3.3-p6? In-Reply-To: <04CAAFD0-E94A-4D15-A2D3-5C4A677109A4@columbia.edu> References: <04CAAFD0-E94A-4D15-A2D3-5C4A677109A4@columbia.edu> Message-ID: <33803CA8-5CE4-416D-9F0C-C99D37475904@columbia.edu> In looking at the logs for icc it looks like Hong has done a little messing around with the shifting tolerance: - ((PC_Factor*)icc)->info.shiftamount = 1.e-12; - ((PC_Factor*)icc)->info.zeropivot = 1.e-12; + ((PC_Factor*)icc)->info.shiftamount = 100.0*PETSC_MACHINE_EPSILON; + ((PC_Factor*)icc)->info.zeropivot = 100.0*PETSC_MACHINE_EPSILON; This looks like it would lower the shifting and drop tolerance. You might set these back to 1e-12. 
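For example, a minimal way to try that (just a sketch; the exact option prefix is an assumption, since with ML the icc PC sits on a sub-PC of the smoother, so the prefix on your run may differ, or you can make the equivalent PCFactorSetZeroPivot()/PCFactorSetShiftAmount() calls on that sub-PC):

    -pc_factor_zeropivot 1.e-12 -pc_factor_shift_amount 1.e-12

or, in code, with pc standing for whichever (sub)PC is doing the ICC factorization:

    PCFactorSetZeroPivot(pc, 1.e-12);    /* restore old default zero-pivot tolerance */
    PCFactorSetShiftAmount(pc, 1.e-12);  /* restore old default shift amount */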
http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/PC/PCFactorSetZeroPivot.html http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/PC/PCFactorSetShiftAmount.html BTW, using an indefinite preconditioner, that has to be fixed with is-this-a-small-number kind of code, on a warm and fluffy Laplacian is not recommended. As I said before I would just use jacobi -- god gave you an easy problem. Exploit it. On Apr 17, 2013, at 7:22 PM, "Mark F. Adams" wrote: > > > Begin forwarded message: > >> From: "Christon, Mark A" >> Subject: Re: [petsc-users] Any changes in ML usage between 3.1-p8 -> 3.3-p6? >> Date: April 17, 2013 7:06:11 PM EDT >> To: "Mark F. Adams" , "Bakosi, Jozsef" >> >> Hi Mark, >> >> Yes, looks like the new version does a little better after 2 iterations, but at the 8th iteration, the residuals increase:( >> >> I suspect this is why PETSc is whining about an indefinite preconditioner. >> >> Something definitely changes as we've had about 6-8 regression tests start failing that have been running flawlessly with ML + PETSc 3.1-p8 for almost two years. >> >> If we can understand what changed, we probably have a fighting chance of correcting it ? assuming it's some solver setting for PETSc that we're not currently using. >> >> - Mark >> >> -- >> Mark A. Christon >> Computational Physics Group (CCS-2) >> Computer, Computational and Statistical Sciences Division >> Los Alamos National Laboratory >> MS D413, P.O. Box 1663 >> Los Alamos, NM 87545 >> >> E-mail: christon at lanl.gov >> Phone: (505) 663-5124 >> Mobile: (505) 695-5649 (voice mail) >> >> International Journal for Numerical Methods in Fluids >> >> From: "Mark F. Adams" >> Date: Wed, 17 Apr 2013 18:51:02 -0400 >> To: PETSc users list >> Cc: "Mark A. Christon" >> Subject: Re: [petsc-users] Any changes in ML usage between 3.1-p8 -> 3.3-p6? >> >>> I see you are using icc. Perhaps our icc changed a bit between versions. These results look like both solves are working and the old does a little better (after two iterations). >>> >>> Try using jacobi instead of icc. >>> >>> >>> On Apr 17, 2013, at 6:21 PM, Jozsef Bakosi wrote: >>> >>>>> On 04.17.2013 15:38, Matthew Knepley wrote: >>>>>> On 04.17.2013 14:26, Jozsef Bakosi wrote: >>>>>>> Mark F. Adams mark.adams at columbia.edu >>>>>>> Wed Apr 17 14:25:04 CDT 2013 >>>>>>> 2) If you get "Indefinite PC" (I am guessing from using CG) it is because the >>>>>>> preconditioner >>>>>>> really is indefinite (or possible non-symmetric). We improved the checking >>>>>>> for this in one >>>>>>> of those releases. >>>>>>> AMG does not guarantee an SPD preconditioner so why persist in trying to use >>>>>>> CG? >>>>>>> AMG is positive if everything is working correctly. >>>>>>> Are these problems only semidefinite? Singular systems can give erratic >>>>>>> behavior. >>>>>> It is a Laplace operator from Galerkin finite elements. And the PC is fine on >>>>>> ranks 1, 2, 3, and 5 -- indefinite only on 4. I think we can safely say that the >>>>>> same PC should be positive on 4 as well. >>>>> Why is it safe? Because it sounds plausible? Mathematics is replete with things >>>>> that sound plausible and are false. Are there proofs that suggest this? Is there >>>>> computational evidence? Why would I believe you? >>>> Okay, so here is some additional information: >>>> I tried both old and new PETSc versions again, but now only taking 2 iterations >>>> (both with 4 CPUs) and checked the residuals. 
I get the same exact PC from ML in >>>> both cases, however, the residuals are different after both iterations: >>>> Please do a diff on the attached files and you can verify that the ML >>>> diagnostics are exactly the same: same max eigenvalues, nodes aggregated, etc, >>>> while the norm coming out of the solver at the end at both iterations are >>>> different. >>>> We reproduced the same exact behavior on two different linux platforms. >>>> Once again: same application source code, same ML source code, different PETSc: >>>> 3.1-p8 vs. 3.3-p6. >>>> >>> >>> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dharmareddy84 at gmail.com Wed Apr 17 19:03:29 2013 From: dharmareddy84 at gmail.com (Dharmendar Reddy) Date: Wed, 17 Apr 2013 19:03:29 -0500 Subject: [petsc-users] snes test fails In-Reply-To: References: <8761zkeqdz.fsf@mcs.anl.gov> Message-ID: I am using DMSNESSetLocal . I have attached the solution of the solve with boundary conditions V(0) = 0 and V(L) = 1.0 using normal solve and senes_fd I printed the and V_at grid point id. The grid points id are not sorted in increasing x but You can see that the solution is as expected. I have 431 grid points and delV is expected to be 1/431 ~ = 0.0023 . I have used the subroutines i pass for set function and set jacobian earlier and i had no issues. Now, restructured the code to use DMPlex object. I may have messed up in process but looking at the obtained solution it doesn't seem to be the case. In my earlier code i used to mark the boundary nodes with negative indices and had VEC and MAt objects set to ignore negative indieces during assembly. I am not sure how this is done for DMPlexSetClosure I insert the boundary values into the local vector before assembly of the function and jacobain On Wed, Apr 17, 2013 at 6:31 PM, Jed Brown wrote: > > On Apr 17, 2013 6:25 PM, "Dharmendar Reddy" > wrote: > > > > Sorry, I do not know how the reply went to your id. I replied the usual > way. > > > > > > > > I am solving : div(grad(V)) = 0 for x in [0, L] and V(0) = V1 and > V(L) = V1 > > > > I get a solution V(x) = V1 after solving with intial guess V(X) = 0 for > x in (0, L) > > > > Now, what is the user defined ? is it V(X) = 0 ? or V(x) = V1 ? > > Yes, it is the initial guess. SNES Test does not actually solve the > problem. > > > > > Is the user defined state is V(X) = V1 then it means the Jacobin is > tested after doing a solve first ? > > > > But then, for a linear Poisson problem, The Jacobin should be > independent of V(X). I seem to get correct solution but snes_type test > (fails ?) can you give me some pointers on how to debug this ? > > How do you define the function evaluation? Is the Jacobian code really > independent of the state vector? (It should be.) > > > > > Thanks > > > > > > On Wed, Apr 17, 2013 at 5:55 PM, Dharmendar Reddy < > dharmareddy84 at gmail.com> wrote: > >> > >> I am confused. > >> > >> I am solving : div(grad(V)) = 0 for x in [0, L] and V(0) = V1 and > V(L) = V1 > >> > >> I get a solution V(x) = V1 after solving with intial guess V(X) = 0 for > x in (0, L) > >> > >> Now, what is the user defined ? is it V(X) = 0 ? or V(x) = V1 ? > >> > >> Is the user defined state is V(X) = V1 then it means the Jacobin is > tested after doing a solve first ? > >> > >> But then, for a linear Poisson problem, The Jacobin should be > independent of V(X). I seem to get correct solution but snes_type test > (fails ?) can you give me some pointers on how to debug this ? 
> >> > >> Thanks > >> Reddy > >> > >> > >> > >> > >> On Wed, Apr 17, 2013 at 5:36 PM, Jed Brown > wrote: > >>> > >>> Dharmendar Reddy writes: > >>> > >>> > Hello, > >>> > I am solving a one dimensional (linear) Poisson equation. I > have > >>> > setup a snes problem using DM object. I am using Dirichlet boundary > >>> > conditons at both ends of the one dimensional domain. For the test > case i > >>> > use the same value for potential at both ends. > >>> > > >>> > I am using DMplex object for the mesh and dof lay out. > >>> > > >>> > If i run the solver, i get a constant potential profile as expected. > >>> > However, If i run the solver with snes_type test > >>> > It is giving an error. what could i have done wrong ? > >>> > Testing hand-coded Jacobian, if the ratio is > >>> > O(1.e-8), the hand-coded Jacobian is probably correct. > >>> > Run with -snes_test_display to show difference > >>> > of hand-coded and finite difference Jacobian. > >>> > Norm of matrix ratio 1.0913e-10 difference 5.85042e-10 (user-defined > state) > >>> > Norm of matrix ratio 0.5 difference 5.36098 (constant state -1.0) > >>> > Norm of matrix ratio 0.666667 difference 10.722 (constant state 1.0) > >>> > [0]PETSC ERROR: --------------------- Error Message > >>> > ----------------------------------- > >>> > - > >>> > [0]PETSC ERROR: Object is in wrong state! > >>> > [0]PETSC ERROR: SNESTest aborts after Jacobian test! > >>> > >>> This is what '-snes_type test' is supposed to do. So you're fine and > >>> your Jacobian is fine at the initial state, but not at the constant > >>> value 1.0 or -1.0. (That's okay if those are non-physical states, > >>> otherwise your Jacobian evaluation is incorrect.) > >>> > >>> > [0]PETSC ERROR: > >>> > > ----------------------------------------------------------------------- > >>> > - > >>> > [0]PETSC ERROR: Petsc Development GIT revision: > >>> > e0030536e6573667cee5340eb367e8213e67d68 > >>> > 9 GIT Date: 2013-04-16 21:48:15 -0500 > >>> > [0]PETSC ERROR: See docs/changes/index.html for recent updates. > >>> > [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. > >>> > [0]PETSC ERROR: See docs/index.html for manual pages. 
> >>> > [0]PETSC ERROR: > >>> > > ----------------------------------------------------------------------- > >>> > - > >>> > [0]PETSC ERROR: ./PoisTest on a mpi_rScalar_Debug named > >>> > login2.stampede.tacc.utexas.edu > >>> > by Reddy135 Wed Apr 17 17:22:02 2013 > >>> > [0]PETSC ERROR: Libraries linked from > >>> > /home1/00924/Reddy135/LocalApps/petsc/mpi_rScalar > >>> > _Debug/lib > >>> > [0]PETSC ERROR: Configure run at Tue Apr 16 22:30:58 2013 > >>> > [0]PETSC ERROR: Configure options --download-blacs=1 > --download-ctetgen=1 > >>> > --download- > >>> > metis=1 --download-mumps=1 --download-parmetis=1 > --download-scalapack=1 > >>> > --download-supe > >>> > rlu_dist=1 --download-triangle=1 --download-umfpack=1 > >>> > --with-blas-lapack-dir=/opt/apps/ > >>> > intel/13/composer_xe_2013.2.146/mkl/lib/intel64/ --with-debugging=1 > >>> > --with-mpi-dir=/opt > >>> > /apps/intel13/mvapich2/1.9/ --with-petsc-arch=mpi_rScalar_Debug > >>> > --with-petsc-dir=/home1 > >>> > /00924/Reddy135/LocalApps/petsc PETSC_ARCH=mpi_rScalar_Debug > >>> > [0]PETSC ERROR: > >>> > > ----------------------------------------------------------------------- > >>> > - > >>> > [0]PETSC ERROR: SNESSolve_Test() line 127 in > >>> > /home1/00924/Reddy135/LocalApps/petsc/src/ > >>> > snes/impls/test/snestest.c > >>> > [0]PETSC ERROR: SNESSolve() line 3755 in > >>> > /home1/00924/Reddy135/LocalApps/petsc/src/snes > >>> > /interface/snes.c > >>> > SNESDivergedReason 0 > >>> > Exiting solve > >>> > -- > >>> > ----------------------------------------------------- > >>> > Dharmendar Reddy Palle > >>> > Graduate Student > >>> > Microelectronics Research center, > >>> > University of Texas at Austin, > >>> > 10100 Burnet Road, Bldg. 160 > >>> > MER 2.608F, TX 78758-4445 > >>> > e-mail: dharmareddy84 at gmail.com > >>> > Phone: +1-512-350-9082 > >>> > United States of America. > >>> > Homepage: https://webspace.utexas.edu/~dpr342 > >> > >> > >> > >> > >> -- > >> ----------------------------------------------------- > >> Dharmendar Reddy Palle > >> Graduate Student > >> Microelectronics Research center, > >> University of Texas at Austin, > >> 10100 Burnet Road, Bldg. 160 > >> MER 2.608F, TX 78758-4445 > >> e-mail: dharmareddy84 at gmail.com > >> Phone: +1-512-350-9082 > >> United States of America. > >> Homepage: https://webspace.utexas.edu/~dpr342 > > > > > > > > > > -- > > ----------------------------------------------------- > > Dharmendar Reddy Palle > > Graduate Student > > Microelectronics Research center, > > University of Texas at Austin, > > 10100 Burnet Road, Bldg. 160 > > MER 2.608F, TX 78758-4445 > > e-mail: dharmareddy84 at gmail.com > > Phone: +1-512-350-9082 > > United States of America. > > Homepage: https://webspace.utexas.edu/~dpr342 > -- ----------------------------------------------------- Dharmendar Reddy Palle Graduate Student Microelectronics Research center, University of Texas at Austin, 10100 Burnet Road, Bldg. 160 MER 2.608F, TX 78758-4445 e-mail: dharmareddy84 at gmail.com Phone: +1-512-350-9082 United States of America. Homepage: https://webspace.utexas.edu/~dpr342 -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: phi.res Type: application/octet-stream Size: 16943 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: phi_fd.res Type: application/octet-stream Size: 17026 bytes Desc: not available URL: From dharmareddy84 at gmail.com Wed Apr 17 19:19:17 2013 From: dharmareddy84 at gmail.com (Dharmendar Reddy) Date: Wed, 17 Apr 2013 19:19:17 -0500 Subject: [petsc-users] snes test fails In-Reply-To: <8761zkeqdz.fsf@mcs.anl.gov> References: <8761zkeqdz.fsf@mcs.anl.gov> Message-ID: Hello, Found the bug in my code. I forgot the call MatZeroEntries on the Jacobin. I think the linear problem would make only one call to compute jacobain and i was getting correct answer. But the test was making three consecutive calls where it obviously was failing. Thanks Reddy On Wed, Apr 17, 2013 at 5:36 PM, Jed Brown wrote: > Dharmendar Reddy writes: > > > Hello, > > I am solving a one dimensional (linear) Poisson equation. I have > > setup a snes problem using DM object. I am using Dirichlet boundary > > conditons at both ends of the one dimensional domain. For the test case i > > use the same value for potential at both ends. > > > > I am using DMplex object for the mesh and dof lay out. > > > > If i run the solver, i get a constant potential profile as expected. > > However, If i run the solver with snes_type test > > It is giving an error. what could i have done wrong ? > > Testing hand-coded Jacobian, if the ratio is > > O(1.e-8), the hand-coded Jacobian is probably correct. > > Run with -snes_test_display to show difference > > of hand-coded and finite difference Jacobian. > > Norm of matrix ratio 1.0913e-10 difference 5.85042e-10 (user-defined > state) > > Norm of matrix ratio 0.5 difference 5.36098 (constant state -1.0) > > Norm of matrix ratio 0.666667 difference 10.722 (constant state 1.0) > > [0]PETSC ERROR: --------------------- Error Message > > ----------------------------------- > > - > > [0]PETSC ERROR: Object is in wrong state! > > [0]PETSC ERROR: SNESTest aborts after Jacobian test! > > This is what '-snes_type test' is supposed to do. So you're fine and > your Jacobian is fine at the initial state, but not at the constant > value 1.0 or -1.0. (That's okay if those are non-physical states, > otherwise your Jacobian evaluation is incorrect.) > > > [0]PETSC ERROR: > > ----------------------------------------------------------------------- > > - > > [0]PETSC ERROR: Petsc Development GIT revision: > > e0030536e6573667cee5340eb367e8213e67d68 > > 9 GIT Date: 2013-04-16 21:48:15 -0500 > > [0]PETSC ERROR: See docs/changes/index.html for recent updates. > > [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. > > [0]PETSC ERROR: See docs/index.html for manual pages. 
> > [0]PETSC ERROR: > > ----------------------------------------------------------------------- > > - > > [0]PETSC ERROR: ./PoisTest on a mpi_rScalar_Debug named > > login2.stampede.tacc.utexas.edu > > by Reddy135 Wed Apr 17 17:22:02 2013 > > [0]PETSC ERROR: Libraries linked from > > /home1/00924/Reddy135/LocalApps/petsc/mpi_rScalar > > _Debug/lib > > [0]PETSC ERROR: Configure run at Tue Apr 16 22:30:58 2013 > > [0]PETSC ERROR: Configure options --download-blacs=1 > --download-ctetgen=1 > > --download- > > metis=1 --download-mumps=1 --download-parmetis=1 --download-scalapack=1 > > --download-supe > > rlu_dist=1 --download-triangle=1 --download-umfpack=1 > > --with-blas-lapack-dir=/opt/apps/ > > intel/13/composer_xe_2013.2.146/mkl/lib/intel64/ --with-debugging=1 > > --with-mpi-dir=/opt > > /apps/intel13/mvapich2/1.9/ --with-petsc-arch=mpi_rScalar_Debug > > --with-petsc-dir=/home1 > > /00924/Reddy135/LocalApps/petsc PETSC_ARCH=mpi_rScalar_Debug > > [0]PETSC ERROR: > > ----------------------------------------------------------------------- > > - > > [0]PETSC ERROR: SNESSolve_Test() line 127 in > > /home1/00924/Reddy135/LocalApps/petsc/src/ > > snes/impls/test/snestest.c > > [0]PETSC ERROR: SNESSolve() line 3755 in > > /home1/00924/Reddy135/LocalApps/petsc/src/snes > > /interface/snes.c > > SNESDivergedReason 0 > > Exiting solve > > -- > > ----------------------------------------------------- > > Dharmendar Reddy Palle > > Graduate Student > > Microelectronics Research center, > > University of Texas at Austin, > > 10100 Burnet Road, Bldg. 160 > > MER 2.608F, TX 78758-4445 > > e-mail: dharmareddy84 at gmail.com > > Phone: +1-512-350-9082 > > United States of America. > > Homepage: https://webspace.utexas.edu/~dpr342 > -- ----------------------------------------------------- Dharmendar Reddy Palle Graduate Student Microelectronics Research center, University of Texas at Austin, 10100 Burnet Road, Bldg. 160 MER 2.608F, TX 78758-4445 e-mail: dharmareddy84 at gmail.com Phone: +1-512-350-9082 United States of America. Homepage: https://webspace.utexas.edu/~dpr342 -------------- next part -------------- An HTML attachment was scrubbed... URL: From Shuangshuang.Jin at pnnl.gov Wed Apr 17 19:27:32 2013 From: Shuangshuang.Jin at pnnl.gov (Jin, Shuangshuang) Date: Wed, 17 Apr 2013 17:27:32 -0700 Subject: [petsc-users] ksp for AX=B system In-Reply-To: <5C4000FF-5FCA-45C5-9024-C9908527D61B@mcs.anl.gov> References: <6778DE83AB681D49BFC2CD850610FEB1018FC933E31C@EMAIL04.pnl.gov> <42A940ED-9634-4AAC-A8FB-2C6EAD3EEF6B@mcs.anl.gov> <6778DE83AB681D49BFC2CD850610FEB1018FC933E32C@EMAIL04.pnl.gov> <58CD76F6-ECC5-4DE5-A93F-1075179054B9@mcs.anl.gov> <6778DE83AB681D49BFC2CD850610FEB1018FC933E40B@EMAIL04.pnl.gov> <5C4000FF-5FCA-45C5-9024-C9908527D61B@mcs.anl.gov> Message-ID: <6778DE83AB681D49BFC2CD850610FEB1018FC933E4C6@EMAIL04.pnl.gov> Ok, I spent a long time trying to install SuperLU_DIST and MUMPS, but so far with no luck. It took a long time for every ./configure I made. And it kept on complaining about "UNABLE to CONFIGURE with GIVEN OPTIONS" each time for this and that package. 
And the latest one I got was: [d3m956 at olympus petsc-dev.4.17.13]$ ./configure --with-scalar-type=complex --with-clanguage=C++ PETSC_ARCH=arch-complex --with-fortran-kernels=generic --download-superlu_dist --download-mumps --with-scalapack-dir=/share/apps/modules/Modules/3.2.8/modulefiles/development/tools/scalapack/ --download-parmetis --download-metis --with-CMake-dir=/share/apps/modules/Modules/3.2.8/modulefiles/development/tools/cmake =============================================================================== Configuring PETSc to compile on your system =============================================================================== TESTING: configureLibrary from PETSc.packages.metis(config/BuildSystem/config/package.py:464) ******************************************************************************* UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log for details): ------------------------------------------------------------------------------- CMake > 2.5 is needed to build METIS ******************************************************************************* Can you see anything wrong with my configuration command? Thanks, Shuangshuang -----Original Message----- From: petsc-users-bounces at mcs.anl.gov [mailto:petsc-users-bounces at mcs.anl.gov] On Behalf Of Barry Smith Sent: Wednesday, April 17, 2013 2:49 PM To: PETSc users list Subject: Re: [petsc-users] ksp for AX=B system On Apr 17, 2013, at 1:53 PM, "Jin, Shuangshuang" wrote: > Hello, Barry, I got a new question here. > > I have integrated the AX=B solver into my code using the example from ex125.c. The only major difference between my code and the example is the input matrix. > > In the example ex125.c, the A matrix is created by loading the data from a binary file. In my code, I passed in the computed A matrix and the B matrix as arguments to a user defined solvingAXB() function: > > static PetscErrorCode solvingAXB(Mat A, Mat B, PetscInt n, PetscInt > nrhs, Mat X, const int me); > > I noticed that A has to be stored as an MATAIJ matrix and the B as a MATDENSE matrix. So I converted their storage types inside the solvingAXB as below (They were MPIAIJ type before): Note that MATAIJ is MATMPIAIJ when rank > 1 and MATSEQAIJ when rank == 1. > > ierr = MatConvert(A, MATAIJ, MAT_REUSE_MATRIX, &A); // A has to be MATAIJ for built-in PETSc LU! > MatView(A, PETSC_VIEWER_STDOUT_WORLD); > > ierr = MatConvert(B, MATDENSE, MAT_REUSE_MATRIX, &B); // B has to be a SeqDense matrix! > MatView(B, PETSC_VIEWER_STDOUT_WORLD); > > With this implementation, I can run the code with expected results when 1 processor is used. > > However, when I use 2 processors, I get a run time error which I guess is still somehow related to the matrix format but cannot figure out how to fix it. Could you please take a look at the following error message? PETSc doesn't have a parallel LU solver, you need to install PETSc with SuperLU_DIST or MUMPS (preferably both) and then pass in one of those names when you call MatGetFactor(). 
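Schematically, the factor/solve sequence from ex125.c then looks like the sketch below (a sketch only, not a drop-in implementation: error checking and cleanup are omitted, MATSOLVERMUMPS can be substituted for MATSOLVERSUPERLU_DIST, A and B are assumed to be your AIJ matrix and dense right-hand-side matrix, and X is created here the same size as B; see ex125.c for the full, tested version):

    Mat           F, X;
    IS            perm, iperm;
    MatFactorInfo info;

    MatFactorInfoInitialize(&info);
    MatGetFactor(A, MATSOLVERSUPERLU_DIST, MAT_FACTOR_LU, &F);  /* external parallel LU */
    MatGetOrdering(A, MATORDERINGND, &perm, &iperm);            /* ordering may be ignored by the external package */
    MatLUFactorSymbolic(F, A, perm, iperm, &info);
    MatLUFactorNumeric(F, A, &info);
    MatDuplicate(B, MAT_DO_NOT_COPY_VALUES, &X);                /* X gets one solution column per right-hand side */
    MatMatSolve(F, B, X);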
Barry > > [d3m956 at olympus ss08]$ mpiexec -n 2 dynSim -i 3g9b.txt Matrix Object: > 1 MPI processes > type: mpiaij > row 0: (0, 0 - 16.4474 i) (3, 0 + 17.3611 i) row 1: (1, 0 - 8.34725 i) > (7, 0 + 16 i) row 2: (2, 0 - 5.51572 i) (5, 0 + 17.0648 i) row 3: (0, > 0 + 17.3611 i) (3, 0 - 39.9954 i) (4, 0 + 10.8696 i) (8, 0 + 11.7647 > i) row 4: (3, 0 + 10.8696 i) (4, 0.934 - 17.0633 i) (5, 0 + 5.88235 i) > row 5: (2, 0 + 17.0648 i) (4, 0 + 5.88235 i) (5, 0 - 32.8678 i) (6, 0 > + 9.92063 i) row 6: (5, 0 + 9.92063 i) (6, 1.03854 - 24.173 i) (7, 0 + > 13.8889 i) row 7: (1, 0 + 16 i) (6, 0 + 13.8889 i) (7, 0 - 36.1001 i) > (8, 0 + 6.21118 i) row 8: (3, 0 + 11.7647 i) (7, 0 + 6.21118 i) (8, > 1.33901 - 18.5115 i) Matrix Object: 1 MPI processes > type: mpidense > 0.0000000000000000e+00 + -1.6447368421052630e+01i > 0.0000000000000000e+00 + 0.0000000000000000e+00i > 0.0000000000000000e+00 + 0.0000000000000000e+00i > 0.0000000000000000e+00 + 0.0000000000000000e+00i > 0.0000000000000000e+00 + -8.3472454090150254e+00i > 0.0000000000000000e+00 + 0.0000000000000000e+00i > 0.0000000000000000e+00 + 0.0000000000000000e+00i > 0.0000000000000000e+00 + 0.0000000000000000e+00i > 0.0000000000000000e+00 + -5.5157198014340878e+00i > 0.0000000000000000e+00 + 0.0000000000000000e+00i > 0.0000000000000000e+00 + 0.0000000000000000e+00i > 0.0000000000000000e+00 + 0.0000000000000000e+00i > 0.0000000000000000e+00 + 0.0000000000000000e+00i > 0.0000000000000000e+00 + 0.0000000000000000e+00i > 0.0000000000000000e+00 + 0.0000000000000000e+00i > 0.0000000000000000e+00 + 0.0000000000000000e+00i > 0.0000000000000000e+00 + 0.0000000000000000e+00i > 0.0000000000000000e+00 + 0.0000000000000000e+00i > 0.0000000000000000e+00 + 0.0000000000000000e+00i > 0.0000000000000000e+00 + 0.0000000000000000e+00i > 0.0000000000000000e+00 + 0.0000000000000000e+00i > 0.0000000000000000e+00 + 0.0000000000000000e+00i > 0.0000000000000000e+00 + 0.0000000000000000e+00i > 0.0000000000000000e+00 + 0.0000000000000000e+00i > 0.0000000000000000e+00 + 0.0000000000000000e+00i > 0.0000000000000000e+00 + 0.0000000000000000e+00i 0.0000000000000000e+00 + 0.0000000000000000e+00i PETSC LU: > [0]PETSC ERROR: --------------------- Error Message > ------------------------------------ > [0]PETSC ERROR: No support for this operation for this object type! > [0]PETSC ERROR: Matrix format mpiaij does not have a built-in PETSc LU! > [0]PETSC ERROR: > ---------------------------------------------------------------------- > -- [0]PETSC ERROR: Petsc Development HG revision: > 6e0ddc6e9b6d8a9d8eb4c0ede0105827a6b58dfb HG Date: Mon Mar 11 22:54:30 > 2013 -0500 [0]PETSC ERROR: See docs/changes/index.html for recent updates. > [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. > [0]PETSC ERROR: See docs/index.html for manual pages. > [0]PETSC ERROR: > ---------------------------------------------------------------------- > -- [0]PETSC ERROR: dynSim on a arch-complex named olympus.local by > d3m956 Wed Apr 17 11:38:12 2013 [0]PETSC ERROR: Libraries linked from > /pic/projects/ds/petsc-dev/arch-complex/lib > [0]PETSC ERROR: Configure run at Tue Mar 12 14:32:37 2013 [0]PETSC > ERROR: Configure options --with-scalar-type=complex > --with-clanguage=C++ PETSC_ARCH=arch-complex > --with-fortran-kernels=generic [0]PETSC ERROR: > ---------------------------------------------------------------------- > -- [0]PETSC ERROR: MatGetFactor() line 3949 in > src/mat/interface/matrix.c [0]PETSC ERROR: solvingAXB() line 880 in > "unknowndirectory/"reducedYBus.C End! 
> [1]PETSC ERROR: > ---------------------------------------------------------------------- > -- [1]PETSC ERROR: Caught signal number 15 Terminate: Somet process > (or the batch system) has told this process to end [1]PETSC ERROR: Try > option -start_in_debugger or -on_error_attach_debugger [1]PETSC ERROR: > or see > http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind[1]PETSC > ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to > find memory corruption errors [1]PETSC ERROR: likely location of > problem given in stack below [1]PETSC ERROR: --------------------- > Stack Frames ------------------------------------ > [1]PETSC ERROR: Note: The EXACT line numbers in the stack are not available, > [1]PETSC ERROR: INSTEAD the line number of the start of the function > [1]PETSC ERROR: is given. > [1]PETSC ERROR: [1] PetscSleep line 35 src/sys/utils/psleep.c [1]PETSC > ERROR: [1] PetscTraceBackErrorHandler line 172 > src/sys/error/errtrace.c [1]PETSC ERROR: [1] PetscError line 361 > src/sys/error/err.c [1]PETSC ERROR: [1] MatGetFactor line 3932 > src/mat/interface/matrix.c [1]PETSC ERROR: --------------------- Error > Message ------------------------------------ > [1]PETSC ERROR: Signal received! > [1]PETSC ERROR: > ---------------------------------------------------------------------- > -- [1]PETSC ERROR: Petsc Development HG revision: > 6e0ddc6e9b6d8a9d8eb4c0ede0105827a6b58dfb HG Date: Mon Mar 11 22:54:30 > 2013 -0500 [1]PETSC ERROR: See docs/changes/index.html for recent updates. > [1]PETSC ERROR: See docs/faq.html for hints about trouble shooting. > [1]PETSC ERROR: See docs/index.html for manual pages. > [1]PETSC ERROR: > ---------------------------------------------------------------------- > -- [1]PETSC ERROR: dynSim on a arch-complex named olympus.local by > d3m956 Wed Apr 17 11:38:12 2013 [1]PETSC ERROR: Libraries linked from > /pic/projects/ds/petsc-dev/arch-complex/lib > [1]PETSC ERROR: Configure run at Tue Mar 12 14:32:37 2013 [1]PETSC > ERROR: Configure options --with-scalar-type=complex > --with-clanguage=C++ PETSC_ARCH=arch-complex > --with-fortran-kernels=generic [1]PETSC ERROR: > ---------------------------------------------------------------------- > -- [1]PETSC ERROR: User provided function() line 0 in unknown > directory unknown file > ---------------------------------------------------------------------- > ---- > > Thanks, > Shuangshuang > > > -----Original Message----- > From: petsc-users-bounces at mcs.anl.gov > [mailto:petsc-users-bounces at mcs.anl.gov] On Behalf Of Barry Smith > Sent: Tuesday, April 16, 2013 5:50 PM > To: PETSc users list > Subject: Re: [petsc-users] ksp for AX=B system > > > Shuangshuang > > This is what I was expecting, thanks for the confirmation. For these size problems you definitely want to use a direct solver (often parallel but not for smaller matrices) and solve multiple right hand sides. This means you actually will not use the KSP solver that is standard for most PETSc work, instead you will work directly with the MatGetFactor(), MatGetOrdering(), MatLUFactorSymbolic(), MatLUFactorNumeric(), MatMatSolve() paradigm where the A matrix is stored as an MATAIJ matrix and the B (multiple right hand side) as a MATDENSE matrix. > > An example that displays this paradigm is > src/mat/examples/tests/ex125.c > > Once you have something running of interest to you we would like to work with you to improve the performance, we have some "tricks" we haven't yet implemented to make these solvers much faster than they will be by default. 
> > Barry > > > > On Apr 16, 2013, at 7:38 PM, "Jin, Shuangshuang" wrote: > >> Hi, Barry, thanks for your prompt reply. >> >> We have various size problems, from (n=9, m=3), (n=1081, m = 288), to (n=16072, m=2361) or even larger ultimately. >> >> Usually the dimension of the square matrix A, n is much larger than the column dimension of B, m. >> >> As you said, I'm using the loop to deal with the small (n=9, m=3) case. However, for bigger problem, I do hope there's a better approach. >> >> This is a power flow problem. When we parallelized it in OpenMP previously, we just parallelized the outside loop, and used a direct solver to solve it. >> >> We are now switching to MPI and like to use PETSc ksp solver to solve it in parallel. However, I don't know what's the best ksp solver I should use here? Direct solver or try a preconditioner? >> >> I appreciate very much for your recommendations. >> >> Thanks, >> Shuangshuang >> >> From balay at mcs.anl.gov Wed Apr 17 19:32:51 2013 From: balay at mcs.anl.gov (Satish Balay) Date: Wed, 17 Apr 2013 19:32:51 -0500 (CDT) Subject: [petsc-users] ksp for AX=B system In-Reply-To: <6778DE83AB681D49BFC2CD850610FEB1018FC933E4C6@EMAIL04.pnl.gov> References: <6778DE83AB681D49BFC2CD850610FEB1018FC933E31C@EMAIL04.pnl.gov> <42A940ED-9634-4AAC-A8FB-2C6EAD3EEF6B@mcs.anl.gov> <6778DE83AB681D49BFC2CD850610FEB1018FC933E32C@EMAIL04.pnl.gov> <58CD76F6-ECC5-4DE5-A93F-1075179054B9@mcs.anl.gov> <6778DE83AB681D49BFC2CD850610FEB1018FC933E40B@EMAIL04.pnl.gov> <5C4000FF-5FCA-45C5-9024-C9908527D61B@mcs.anl.gov> <6778DE83AB681D49BFC2CD850610FEB1018FC933E4C6@EMAIL04.pnl.gov> Message-ID: There is no --with-CMake-dir option. If its in your PATH it should be picked up. Or you can use --with-cmake=/path/to/cmake-binary Or use --download-cmake Also suggest using --download-scalapack If you are still having trouble installing petsc - send the relavent logs to petsc-maint Satish On Wed, 17 Apr 2013, Jin, Shuangshuang wrote: > Ok, I spent a long time trying to install SuperLU_DIST and MUMPS, but so far with no luck. It took a long time for every ./configure I made. And it kept on complaining about "UNABLE to CONFIGURE with GIVEN OPTIONS" each time for this and that package. > > And the latest one I got was: > > [d3m956 at olympus petsc-dev.4.17.13]$ ./configure --with-scalar-type=complex --with-clanguage=C++ PETSC_ARCH=arch-complex --with-fortran-kernels=generic --download-superlu_dist --download-mumps --with-scalapack-dir=/share/apps/modules/Modules/3.2.8/modulefiles/development/tools/scalapack/ --download-parmetis --download-metis --with-CMake-dir=/share/apps/modules/Modules/3.2.8/modulefiles/development/tools/cmake > =============================================================================== > Configuring PETSc to compile on your system > =============================================================================== > TESTING: configureLibrary from PETSc.packages.metis(config/BuildSystem/config/package.py:464) ******************************************************************************* > UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log for details): > ------------------------------------------------------------------------------- > CMake > 2.5 is needed to build METIS > ******************************************************************************* > > Can you see anything wrong with my configuration command? 
> > Thanks, > Shuangshuang > > > -----Original Message----- > From: petsc-users-bounces at mcs.anl.gov [mailto:petsc-users-bounces at mcs.anl.gov] On Behalf Of Barry Smith > Sent: Wednesday, April 17, 2013 2:49 PM > To: PETSc users list > Subject: Re: [petsc-users] ksp for AX=B system > > > On Apr 17, 2013, at 1:53 PM, "Jin, Shuangshuang" wrote: > > > Hello, Barry, I got a new question here. > > > > I have integrated the AX=B solver into my code using the example from ex125.c. The only major difference between my code and the example is the input matrix. > > > > In the example ex125.c, the A matrix is created by loading the data from a binary file. In my code, I passed in the computed A matrix and the B matrix as arguments to a user defined solvingAXB() function: > > > > static PetscErrorCode solvingAXB(Mat A, Mat B, PetscInt n, PetscInt > > nrhs, Mat X, const int me); > > > > I noticed that A has to be stored as an MATAIJ matrix and the B as a MATDENSE matrix. So I converted their storage types inside the solvingAXB as below (They were MPIAIJ type before): > > Note that MATAIJ is MATMPIAIJ when rank > 1 and MATSEQAIJ when rank == 1. > > > > > ierr = MatConvert(A, MATAIJ, MAT_REUSE_MATRIX, &A); // A has to be MATAIJ for built-in PETSc LU! > > MatView(A, PETSC_VIEWER_STDOUT_WORLD); > > > > ierr = MatConvert(B, MATDENSE, MAT_REUSE_MATRIX, &B); // B has to be a SeqDense matrix! > > MatView(B, PETSC_VIEWER_STDOUT_WORLD); > > > > With this implementation, I can run the code with expected results when 1 processor is used. > > > > However, when I use 2 processors, I get a run time error which I guess is still somehow related to the matrix format but cannot figure out how to fix it. Could you please take a look at the following error message? > > PETSc doesn't have a parallel LU solver, you need to install PETSc with SuperLU_DIST or MUMPS (preferably both) and then pass in one of those names when you call MatGetFactor(). 
> > Barry > > > > > [d3m956 at olympus ss08]$ mpiexec -n 2 dynSim -i 3g9b.txt Matrix Object: > > 1 MPI processes > > type: mpiaij > > row 0: (0, 0 - 16.4474 i) (3, 0 + 17.3611 i) row 1: (1, 0 - 8.34725 i) > > (7, 0 + 16 i) row 2: (2, 0 - 5.51572 i) (5, 0 + 17.0648 i) row 3: (0, > > 0 + 17.3611 i) (3, 0 - 39.9954 i) (4, 0 + 10.8696 i) (8, 0 + 11.7647 > > i) row 4: (3, 0 + 10.8696 i) (4, 0.934 - 17.0633 i) (5, 0 + 5.88235 i) > > row 5: (2, 0 + 17.0648 i) (4, 0 + 5.88235 i) (5, 0 - 32.8678 i) (6, 0 > > + 9.92063 i) row 6: (5, 0 + 9.92063 i) (6, 1.03854 - 24.173 i) (7, 0 + > > 13.8889 i) row 7: (1, 0 + 16 i) (6, 0 + 13.8889 i) (7, 0 - 36.1001 i) > > (8, 0 + 6.21118 i) row 8: (3, 0 + 11.7647 i) (7, 0 + 6.21118 i) (8, > > 1.33901 - 18.5115 i) Matrix Object: 1 MPI processes > > type: mpidense > > 0.0000000000000000e+00 + -1.6447368421052630e+01i > > 0.0000000000000000e+00 + 0.0000000000000000e+00i > > 0.0000000000000000e+00 + 0.0000000000000000e+00i > > 0.0000000000000000e+00 + 0.0000000000000000e+00i > > 0.0000000000000000e+00 + -8.3472454090150254e+00i > > 0.0000000000000000e+00 + 0.0000000000000000e+00i > > 0.0000000000000000e+00 + 0.0000000000000000e+00i > > 0.0000000000000000e+00 + 0.0000000000000000e+00i > > 0.0000000000000000e+00 + -5.5157198014340878e+00i > > 0.0000000000000000e+00 + 0.0000000000000000e+00i > > 0.0000000000000000e+00 + 0.0000000000000000e+00i > > 0.0000000000000000e+00 + 0.0000000000000000e+00i > > 0.0000000000000000e+00 + 0.0000000000000000e+00i > > 0.0000000000000000e+00 + 0.0000000000000000e+00i > > 0.0000000000000000e+00 + 0.0000000000000000e+00i > > 0.0000000000000000e+00 + 0.0000000000000000e+00i > > 0.0000000000000000e+00 + 0.0000000000000000e+00i > > 0.0000000000000000e+00 + 0.0000000000000000e+00i > > 0.0000000000000000e+00 + 0.0000000000000000e+00i > > 0.0000000000000000e+00 + 0.0000000000000000e+00i > > 0.0000000000000000e+00 + 0.0000000000000000e+00i > > 0.0000000000000000e+00 + 0.0000000000000000e+00i > > 0.0000000000000000e+00 + 0.0000000000000000e+00i > > 0.0000000000000000e+00 + 0.0000000000000000e+00i > > 0.0000000000000000e+00 + 0.0000000000000000e+00i > > 0.0000000000000000e+00 + 0.0000000000000000e+00i 0.0000000000000000e+00 + 0.0000000000000000e+00i PETSC LU: > > [0]PETSC ERROR: --------------------- Error Message > > ------------------------------------ > > [0]PETSC ERROR: No support for this operation for this object type! > > [0]PETSC ERROR: Matrix format mpiaij does not have a built-in PETSc LU! > > [0]PETSC ERROR: > > ---------------------------------------------------------------------- > > -- [0]PETSC ERROR: Petsc Development HG revision: > > 6e0ddc6e9b6d8a9d8eb4c0ede0105827a6b58dfb HG Date: Mon Mar 11 22:54:30 > > 2013 -0500 [0]PETSC ERROR: See docs/changes/index.html for recent updates. > > [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. > > [0]PETSC ERROR: See docs/index.html for manual pages. 
> > [0]PETSC ERROR: > > ---------------------------------------------------------------------- > > -- [0]PETSC ERROR: dynSim on a arch-complex named olympus.local by > > d3m956 Wed Apr 17 11:38:12 2013 [0]PETSC ERROR: Libraries linked from > > /pic/projects/ds/petsc-dev/arch-complex/lib > > [0]PETSC ERROR: Configure run at Tue Mar 12 14:32:37 2013 [0]PETSC > > ERROR: Configure options --with-scalar-type=complex > > --with-clanguage=C++ PETSC_ARCH=arch-complex > > --with-fortran-kernels=generic [0]PETSC ERROR: > > ---------------------------------------------------------------------- > > -- [0]PETSC ERROR: MatGetFactor() line 3949 in > > src/mat/interface/matrix.c [0]PETSC ERROR: solvingAXB() line 880 in > > "unknowndirectory/"reducedYBus.C End! > > [1]PETSC ERROR: > > ---------------------------------------------------------------------- > > -- [1]PETSC ERROR: Caught signal number 15 Terminate: Somet process > > (or the batch system) has told this process to end [1]PETSC ERROR: Try > > option -start_in_debugger or -on_error_attach_debugger [1]PETSC ERROR: > > or see > > http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind[1]PETSC > > ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to > > find memory corruption errors [1]PETSC ERROR: likely location of > > problem given in stack below [1]PETSC ERROR: --------------------- > > Stack Frames ------------------------------------ > > [1]PETSC ERROR: Note: The EXACT line numbers in the stack are not available, > > [1]PETSC ERROR: INSTEAD the line number of the start of the function > > [1]PETSC ERROR: is given. > > [1]PETSC ERROR: [1] PetscSleep line 35 src/sys/utils/psleep.c [1]PETSC > > ERROR: [1] PetscTraceBackErrorHandler line 172 > > src/sys/error/errtrace.c [1]PETSC ERROR: [1] PetscError line 361 > > src/sys/error/err.c [1]PETSC ERROR: [1] MatGetFactor line 3932 > > src/mat/interface/matrix.c [1]PETSC ERROR: --------------------- Error > > Message ------------------------------------ > > [1]PETSC ERROR: Signal received! > > [1]PETSC ERROR: > > ---------------------------------------------------------------------- > > -- [1]PETSC ERROR: Petsc Development HG revision: > > 6e0ddc6e9b6d8a9d8eb4c0ede0105827a6b58dfb HG Date: Mon Mar 11 22:54:30 > > 2013 -0500 [1]PETSC ERROR: See docs/changes/index.html for recent updates. > > [1]PETSC ERROR: See docs/faq.html for hints about trouble shooting. > > [1]PETSC ERROR: See docs/index.html for manual pages. 
> > [1]PETSC ERROR: > > ---------------------------------------------------------------------- > > -- [1]PETSC ERROR: dynSim on a arch-complex named olympus.local by > > d3m956 Wed Apr 17 11:38:12 2013 [1]PETSC ERROR: Libraries linked from > > /pic/projects/ds/petsc-dev/arch-complex/lib > > [1]PETSC ERROR: Configure run at Tue Mar 12 14:32:37 2013 [1]PETSC > > ERROR: Configure options --with-scalar-type=complex > > --with-clanguage=C++ PETSC_ARCH=arch-complex > > --with-fortran-kernels=generic [1]PETSC ERROR: > > ---------------------------------------------------------------------- > > -- [1]PETSC ERROR: User provided function() line 0 in unknown > > directory unknown file > > ---------------------------------------------------------------------- > > ---- > > > > Thanks, > > Shuangshuang > > > > > > -----Original Message----- > > From: petsc-users-bounces at mcs.anl.gov > > [mailto:petsc-users-bounces at mcs.anl.gov] On Behalf Of Barry Smith > > Sent: Tuesday, April 16, 2013 5:50 PM > > To: PETSc users list > > Subject: Re: [petsc-users] ksp for AX=B system > > > > > > Shuangshuang > > > > This is what I was expecting, thanks for the confirmation. For these size problems you definitely want to use a direct solver (often parallel but not for smaller matrices) and solve multiple right hand sides. This means you actually will not use the KSP solver that is standard for most PETSc work, instead you will work directly with the MatGetFactor(), MatGetOrdering(), MatLUFactorSymbolic(), MatLUFactorNumeric(), MatMatSolve() paradigm where the A matrix is stored as an MATAIJ matrix and the B (multiple right hand side) as a MATDENSE matrix. > > > > An example that displays this paradigm is > > src/mat/examples/tests/ex125.c > > > > Once you have something running of interest to you we would like to work with you to improve the performance, we have some "tricks" we haven't yet implemented to make these solvers much faster than they will be by default. > > > > Barry > > > > > > > > On Apr 16, 2013, at 7:38 PM, "Jin, Shuangshuang" wrote: > > > >> Hi, Barry, thanks for your prompt reply. > >> > >> We have various size problems, from (n=9, m=3), (n=1081, m = 288), to (n=16072, m=2361) or even larger ultimately. > >> > >> Usually the dimension of the square matrix A, n is much larger than the column dimension of B, m. > >> > >> As you said, I'm using the loop to deal with the small (n=9, m=3) case. However, for bigger problem, I do hope there's a better approach. > >> > >> This is a power flow problem. When we parallelized it in OpenMP previously, we just parallelized the outside loop, and used a direct solver to solve it. > >> > >> We are now switching to MPI and like to use PETSc ksp solver to solve it in parallel. However, I don't know what's the best ksp solver I should use here? Direct solver or try a preconditioner? > >> > >> I appreciate very much for your recommendations. > >> > >> Thanks, > >> Shuangshuang > >> > >> > > From dharmareddy84 at gmail.com Wed Apr 17 19:34:43 2013 From: dharmareddy84 at gmail.com (Dharmendar Reddy) Date: Wed, 17 Apr 2013 19:34:43 -0500 Subject: [petsc-users] DMPlex Filed data to VTK file Message-ID: Hello, I think i finally was able to go thorugh the process of solving a poisson problem using DMPlex object. Now i need to visualize the data. 
I was looking at the following lines from example 62:

733:   if (user.runType == RUN_FULL) {
734:     PetscViewer viewer;
735:     Vec         uLocal;
737:     PetscViewerCreate(PETSC_COMM_WORLD, &viewer);
738:     PetscViewerSetType(viewer, PETSCVIEWERVTK);
739:     PetscViewerSetFormat(viewer, PETSC_VIEWER_ASCII_VTK);
740:     PetscViewerFileSetName(viewer, "ex62_sol.vtk");
742:     DMGetLocalVector(user.dm, &uLocal);
743:     DMGlobalToLocalBegin(user.dm, u, INSERT_VALUES, uLocal);
744:     DMGlobalToLocalEnd(user.dm, u, INSERT_VALUES, uLocal);
746:     PetscObjectReference((PetscObject) user.dm);  /* Needed because viewer destroys the DM */
747:     PetscObjectReference((PetscObject) uLocal);   /* Needed because viewer destroys the Vec */
748:     PetscViewerVTKAddField(viewer, (PetscObject) user.dm, DMPlexVTKWriteAll, PETSC_VTK_POINT_FIELD, (PetscObject) uLocal);
749:     DMRestoreLocalVector(user.dm, &uLocal);
750:     PetscViewerDestroy(&viewer);
751:   }

Can I do the same or something similar from Fortran? Some calls seem to have type casting. Otherwise, can you give me some idea of how to get the solution vector, including the boundary values? As of now, I access the solution using the solution vector passed to the snes object, but that does not give values at boundary nodes. Thanks Reddy -- ----------------------------------------------------- Dharmendar Reddy Palle Graduate Student Microelectronics Research center, University of Texas at Austin, 10100 Burnet Road, Bldg. 160 MER 2.608F, TX 78758-4445 e-mail: dharmareddy84 at gmail.com Phone: +1-512-350-9082 United States of America. Homepage: https://webspace.utexas.edu/~dpr342 -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Wed Apr 17 19:41:38 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Wed, 17 Apr 2013 19:41:38 -0500 Subject: [petsc-users] snes test fails In-Reply-To: References: <8761zkeqdz.fsf@mcs.anl.gov> Message-ID: <87wqs0d60t.fsf@mcs.anl.gov> Dharmendar Reddy writes: > Hello, > Found the bug in my code. I forgot the call MatZeroEntries on the > Jacobin. Great. This turns out to be a common accident that we could make SNESTEST check for. From jedbrown at mcs.anl.gov Wed Apr 17 20:20:18 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Wed, 17 Apr 2013 20:20:18 -0500 Subject: [petsc-users] Mailing list reply-to munging (was Any changes in ML usage between 3.1-p8 -> 3.3-p6?) In-Reply-To: References: <20130405143415.GH17937@karman> <20130417162647.GC23247@karman> <20130417190051.GA2495@karman> <20130417202640.GC2495@karman> <8738uog8i3.fsf@mcs.anl.gov> <87mwsweshg.fsf@mcs.anl.gov> <87ehe8eqwn.fsf@mcs.anl.gov> Message-ID: <87txn4d48d.fsf@mcs.anl.gov> Satish Balay writes: > This benefit is a bit dubious - as you'll get some migration of > petsc-maint traffic to petsc-users - but then you loose all the > 'reply-to-individual' emails from the archives [yeah - reply-to-reply > emails with cc:list added get archived - perhaps with broken threads]. Thus the canned response: "Please resend your last message with all Cc's intact so that I can reply to it on the list." Having a consistent convention between petsc-users/petsc-dev and petsc-maint would be fine by me [1]. > And then there is spam - which you say can be dealt with filters. Is > this client side or server side? Preserving unmunged headers makes existing spam filters more accurate. For example, petsc-maint is considered to be an important address in my mails, making it less likely to label mail setting "Reply-to: petsc-maint" as spam.
This is one of many criteria and I don't know how significant it is, but anecdotally, petsc-maint spam is almost never detected by gmail's spam filter, while git at vger.kernel.org spam is usually detected. And header munging could be turned off without enabling anonymous posting. Maybe we can provide a one-click subscribe-without-delivery? [1] petsc-maint could become a mailing list with private delivery, but anonymous posting, fixing minor annoyances like RT delivering mails a second time to original recipients, and setting Message-ID matching In-Reply-To. From jedbrown at mcs.anl.gov Wed Apr 17 20:36:58 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Wed, 17 Apr 2013 20:36:58 -0500 Subject: [petsc-users] DMPlex Filed data to VTK file In-Reply-To: References: Message-ID: <87mwswd3gl.fsf@mcs.anl.gov> Dharmendar Reddy writes: > Hello, > I think i finally was able to go thorugh the process of solving a > poisson problem using DMPlex object. Now i need to visualize the data. > > I was looking at following lines from exmple 62: You don't need any of that crazy nonsense. (That terrible code didn't get cleaned up after we fixed DMPlex viewing.) This is how we do the same thing in src/ts/examples/tutorials/ex11.c: ierr = PetscViewerCreate(PetscObjectComm((PetscObject)dm), viewer);CHKERRQ(ierr); ierr = PetscViewerSetType(*viewer, PETSCVIEWERVTK);CHKERRQ(ierr); ierr = PetscViewerFileSetName(*viewer, filename);CHKERRQ(ierr); From dharmareddy84 at gmail.com Wed Apr 17 20:47:22 2013 From: dharmareddy84 at gmail.com (Dharmendar Reddy) Date: Wed, 17 Apr 2013 20:47:22 -0500 Subject: [petsc-users] DMPlex Filed data to VTK file In-Reply-To: <87mwswd3gl.fsf@mcs.anl.gov> References: <87mwswd3gl.fsf@mcs.anl.gov> Message-ID: I am using Fortran. Can i make this call from fortran ? PetscViewerCreate(PetscObjectComm((PetscObject)dm), viewer) On Wed, Apr 17, 2013 at 8:36 PM, Jed Brown wrote: > Dharmendar Reddy writes: > > > Hello, > > I think i finally was able to go thorugh the process of solving > a > > poisson problem using DMPlex object. Now i need to visualize the data. > > > > I was looking at following lines from exmple 62: > > You don't need any of that crazy nonsense. (That terrible code didn't > get cleaned up after we fixed DMPlex viewing.) This is how we do the > same thing in src/ts/examples/tutorials/ex11.c: > > ierr = PetscViewerCreate(PetscObjectComm((PetscObject)dm), > viewer);CHKERRQ(ierr); > ierr = PetscViewerSetType(*viewer, PETSCVIEWERVTK);CHKERRQ(ierr); > ierr = PetscViewerFileSetName(*viewer, filename);CHKERRQ(ierr); > -- ----------------------------------------------------- Dharmendar Reddy Palle Graduate Student Microelectronics Research center, University of Texas at Austin, 10100 Burnet Road, Bldg. 160 MER 2.608F, TX 78758-4445 e-mail: dharmareddy84 at gmail.com Phone: +1-512-350-9082 United States of America. Homepage: https://webspace.utexas.edu/~dpr342 -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Wed Apr 17 20:54:08 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Wed, 17 Apr 2013 20:54:08 -0500 Subject: [petsc-users] DMPlex Filed data to VTK file In-Reply-To: References: <87mwswd3gl.fsf@mcs.anl.gov> Message-ID: <87fvyod2nz.fsf@mcs.anl.gov> Dharmendar Reddy writes: > I am using Fortran. Can i make this call from fortran ? 
> PetscViewerCreate(PetscObjectComm((PetscObject)dm), viewer) MPI_Comm comm call PetscObjectGetComm(dm, comm, ierr) call PetscViewerCreate(comm, viewer, ierr) or use PETSC_COMM_WORLD if you never use subcommunicators. From balay at mcs.anl.gov Wed Apr 17 22:17:31 2013 From: balay at mcs.anl.gov (Satish Balay) Date: Wed, 17 Apr 2013 22:17:31 -0500 (CDT) Subject: [petsc-users] Mailing list reply-to munging (was Any changes in ML usage between 3.1-p8 -> 3.3-p6?) In-Reply-To: <87txn4d48d.fsf@mcs.anl.gov> References: <20130405143415.GH17937@karman> <20130417162647.GC23247@karman> <20130417190051.GA2495@karman> <20130417202640.GC2495@karman> <8738uog8i3.fsf@mcs.anl.gov> <87mwsweshg.fsf@mcs.anl.gov> <87ehe8eqwn.fsf@mcs.anl.gov> <87txn4d48d.fsf@mcs.anl.gov> Message-ID: On Wed, 17 Apr 2013, Jed Brown wrote: > Satish Balay writes: > > > This benefit is a bit dubious - as you'll get some migration of > > petsc-maint traffic to petsc-users - but then you loose all the > > 'reply-to-individual' emails from the archives [yeah - reply-to-reply > > emails with cc:list added get archived - perhaps with broken threads]. > > Thus the canned response: "Please resend your last message with all Cc's > intact so that I can reply to it on the list." > > Having a consistent convention between petsc-users/petsc-dev and > petsc-maint would be fine by me [1]. > > > And then there is spam - which you say can be dealt with filters. Is > > this client side or server side? > > Preserving unmunged headers makes existing spam filters more accurate. > For example, petsc-maint is considered to be an important address in my > mails, making it less likely to label mail setting "Reply-to: > petsc-maint" as spam. This is one of many criteria and I don't know how > significant it is, but anecdotally, petsc-maint spam is almost never > detected by gmail's spam filter, while git at vger.kernel.org spam is > usually detected. So it is a client side filtering. Curently there is no spam on the mailing lists - as it goes in for moderator approval. If we switch everyone will get spam - and users filters would have to take care of things. I guess gmail does it one way - but not everyone is on gmail. And then - if gmail spam fails because of "Reply-to: petsc-maint" - then thats a useless spam filter. RT doesn't have to set that field. Any spamer can do that trivially. > And header munging could be turned off without enabling anonymous > posting. yes thats possible. With that - we'll be trading off 'enabling users to subscribe-without-delivery' [who can easily use filters to prevent mailing list traffic flooding their mailbox] - at the cost of everyone remembering to 'reply-all' all the time. > Maybe we can provide a one-click subscribe-without-delivery? I don't know. Will have to check with systems. For one - there are quiet a few posts to petsc-users without subscribing first. These mails go into moderation. I approve/subscribe the post so that they get the replies - and participate in the followup emails. I still don't see the benefits of changing the mailing lists [except for it being similar to git at vger.kernel.org, and sure - less time spent moderating]. The current situation isn't perfect. But changing appears to just switch one set of issues with others.. Satish > > > [1] petsc-maint could become a mailing list with private delivery, but > anonymous posting, fixing minor annoyances like RT delivering mails > a second time to original recipients, and setting Message-ID > matching In-Reply-To. 
> From jedbrown at mcs.anl.gov Wed Apr 17 22:59:07 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Wed, 17 Apr 2013 22:59:07 -0500 Subject: [petsc-users] Mailing list reply-to munging (was Any changes in ML usage between 3.1-p8 -> 3.3-p6?) In-Reply-To: References: <20130405143415.GH17937@karman> <20130417162647.GC23247@karman> <20130417190051.GA2495@karman> <20130417202640.GC2495@karman> <8738uog8i3.fsf@mcs.anl.gov> <87mwsweshg.fsf@mcs.anl.gov> <87ehe8eqwn.fsf@mcs.anl.gov> <87txn4d48d.fsf@mcs.anl.gov> Message-ID: <871ua8cwvo.fsf@mcs.anl.gov> Satish Balay writes: > So it is a client side filtering. Curently there is no spam on the > mailing lists - as it goes in for moderator approval. If we switch > everyone will get spam - and users filters would have to take care of > things. I guess gmail does it one way - but not everyone is on gmail. Everyone with an email address gets sent spam so they must have some filtering mechanism in place. And no-subscription list traffic is not necessary; we could keep the current moderation system. > And then - if gmail spam fails because of "Reply-to: petsc-maint" - > then thats a useless spam filter. RT doesn't have to set that > field. Any spamer can do that trivially. Yes, but they have to have done their homework to know that petsc-maint has some significance to me. If they did that much, they would always send me email spoofed to look like it came from you, Barry, and my girlfriend. And every spam message to the Git list would be Reply-to: Linus, etc. But that doesn't happen, and even >> And header munging could be turned off without enabling anonymous >> posting. > > yes thats possible. With that - we'll be trading off 'enabling users > to subscribe-without-delivery' [who can easily use filters to prevent > mailing list traffic flooding their mailbox] - at the cost of everyone > remembering to 'reply-all' all the time. John Doe sends email to petsc-users and the mailing list rewrites Reply-To back to the list. Now any user hits reply-all and their mailer gives them a message that replies *only* to petsc-users, dropping the original author. This is a problem, and only a few mailers have a "when >From and Reply-To do not agree, assume this is mailing list munging and disregard the intent of the Reply-To field (RFC 2822) by also replying to the address found in From" feature. In other words, any mailer that interprets the Reply-To field as its intended "instead of" semantics rather than "in addition to" will drop the original author, meaning lost replies for people that are not subscribed or have delivery disabled. Perhaps a middle ground would be to have the list copy the From header over to Reply-to (if it doesn't already exist) and then _add_ the list address to Reply-to. That still isn't quite right when cross-posting, but it would allow us to advertise "subscribe with delivery off and ask questions on the list" or even "mail the list without subscribing" instead of "always write petsc-maint if you can't be bothered to filter the high-volume list". From ztdepyahoo at 163.com Thu Apr 18 00:40:32 2013 From: ztdepyahoo at 163.com (=?GBK?B?tqHAz8qm?=) Date: Thu, 18 Apr 2013 13:40:32 +0800 (CST) Subject: [petsc-users] Does Petsc has Finite volume code example Message-ID: <1921287a.137.13e1ba891f2.Coremail.ztdepyahoo@163.com> Does Petsc has Finite volume code example with Ksp -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From balay at mcs.anl.gov Thu Apr 18 01:20:36 2013 From: balay at mcs.anl.gov (Satish Balay) Date: Thu, 18 Apr 2013 01:20:36 -0500 (CDT) Subject: [petsc-users] Mailing list reply-to munging (was Any changes in ML usage between 3.1-p8 -> 3.3-p6?) In-Reply-To: <871ua8cwvo.fsf@mcs.anl.gov> References: <20130405143415.GH17937@karman> <20130417162647.GC23247@karman> <20130417190051.GA2495@karman> <20130417202640.GC2495@karman> <8738uog8i3.fsf@mcs.anl.gov> <87mwsweshg.fsf@mcs.anl.gov> <87ehe8eqwn.fsf@mcs.anl.gov> <87txn4d48d.fsf@mcs.anl.gov> <871ua8cwvo.fsf@mcs.anl.gov> Message-ID: On Wed, 17 Apr 2013, Jed Brown wrote: > John Doe sends email to petsc-users and the mailing list rewrites > Reply-To back to the list. Now any user hits reply-all and their mailer > gives them a message that replies *only* to petsc-users, dropping the > original author. This is a problem, Its a problem only if the author is not subscribed. > and only a few mailers have a "when > From and Reply-To do not agree, assume this is mailing list munging and > disregard the intent of the Reply-To field (RFC 2822) by also replying > to the address found in From" feature. > In other words, any mailer that interprets the Reply-To field as its > intended "instead of" semantics rather than "in addition to" will drop > the original author, meaning lost replies for people that are not > subscribed or have delivery disabled. Or remove option 'subscribe-but-do-not-deliver' for our usage of 'Reply-To: list' > Perhaps a middle ground would be to have the list copy the From header > over to Reply-to (if it doesn't already exist) and then _add_ the list > address to Reply-to. That still isn't quite right when cross-posting, > but it would allow us to advertise "subscribe with delivery off and ask > questions on the list" or even "mail the list without subscribing" > instead of "always write petsc-maint if you can't be bothered to filter > the high-volume list". Earlier in the thread you've supported: reminder emails to folks doing 'reply' instead of 'reply-all:' as an acceptable thing. [and this happens a few times a day]. But here a reply of 'use petsc-maint' instead of subscribe-but-do-not-deliver with petsc-users' is suggested not good. [which happens so infrequently - except for configure.log sutff]. And I fail to see how 'e-mail petsc-maint without subscribing is not good - whereas 'email petsc-users without subscribing is a great feature'. [yeah you get archives on petsc-users - but I don't think uses are as much concerened about that.] And I'll submit - its easier for most folks to send email to petsc-maint instead of figuring out 'subscribe-but-donot-deliver stuff on petsc-users'. [Yeah 'expert' mailing list users might expect "subscribe with delivery" workflow to work.] Perhaps the problem here is - I view petsc-users and petsc-dev as public mailing lists - and primary purpose of public mailing lists is all to all communication mechanism. [so subscription/ reply-to make sense to me.] And petsc-maint as the longstanding non-subscribe/support or any type of conversation e-mail to-petsc-developers. But most use petsc-users [and some view it] as a support e-mail adress [with searchable archives]. If thats what it it - then no-subscribe-post or subscribe-but-do-not-deliver stuff would be the primary thing - and recommending that would make sense. And then we should be accepting build logs on it as well - and not worry about flooding users mailboxes iwth them. [compressed as openmpi list recommends] [what about petsc-dev? 
some use it as reaching petsc-developers - not petsc development discussions. And what about petsc-maint? redirect to petsc-users and have petsc-developers an non-ambiguous place for non-public e-mails to petsc-developers?] Satish From dharmareddy84 at gmail.com Thu Apr 18 02:00:51 2013 From: dharmareddy84 at gmail.com (Dharmendar Reddy) Date: Thu, 18 Apr 2013 02:00:51 -0500 Subject: [petsc-users] [Fortran] VTK viewer error Message-ID: Hello, I am getting an erro when i try to use vtk viewer. I make the following set of calls. call PetscObjectGetComm(this%meshdm, comm, ierr) call PetscViewerCreate(comm, viewer, ierr) ! This line gives a compile error, as PETSCVIEWERVTK is not defined for FORTRAN ! !call PetscViewerSetType(viewer, PETSCVIEWERVTK) call PetscViewerSetFormat(viewer,PETSC_VIEWER_ASCII_VTK,ierr) call PetscViewerFileSetName(viewer, trim(filename),ierr) !call VecView(X, PETSC_VIEWER_DEFAULT ,ierr) call DMView(this%meshdm,ierr) call VecView(X,viewer,ierr) call PetscViewerDestroy(viewer,ierr) What am i doing wrong here ? [0]PETSC ERROR: --------------------- Error Message ------------------------------------ [0]PETSC ERROR: Out of memory. This could be due to allocating [0]PETSC ERROR: too large an object or bleeding by not properly [0]PETSC ERROR: destroying unneeded objects. [0]PETSC ERROR: Memory allocated 582624 Memory used by process 30969856 [0]PETSC ERROR: Try running with -malloc_dump or -malloc_log for info. [0]PETSC ERROR: Memory requested 18446744073296916480! [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: Petsc Development GIT revision: e0030536e6573667cee5340eb367e8213e67d689 GIT Date: 2013-04-16 21:48:15 -0500 [0]PETSC ERROR: See docs/changes/index.html for recent updates. [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. [0]PETSC ERROR: See docs/index.html for manual pages. 
[0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: ./PoisTest on a mpi_rScalar_Debug named login4.stampede.tacc.utexas.edu by Reddy135 Thu Apr 18 01:47:22 2013 [0]PETSC ERROR: Libraries linked from /home1/00924/Reddy135/LocalApps/petsc/mpi_rScalar_Debug/lib [0]PETSC ERROR: Configure run at Tue Apr 16 22:30:58 2013 [0]PETSC ERROR: Configure options --download-blacs=1 --download-ctetgen=1 --download-metis=1 --download-mumps=1 --download-parmetis=1 --download-scalapack=1 --download-superlu_dist=1 --download-triangle=1 --download-umfpack=1 --with-blas-lapack-dir=/opt/apps/intel/13/composer_xe_2013.2.146/mkl/lib/intel64/ --with-debugging=1 --with-mpi-dir=/opt/apps/intel13/mvapich2/1.9/ --with-petsc-arch=mpi_rScalar_Debug --with-petsc-dir=/home1/00924/Reddy135/LocalApps/petsc PETSC_ARCH=mpi_rScalar_Debug [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: PetscMallocAlign() line 46 in /home1/00924/Reddy135/LocalApps/petsc/src/sys/memory/mal.c [0]PETSC ERROR: PetscTrMallocDefault() line 189 in /home1/00924/Reddy135/LocalApps/petsc/src/sys/memory/mtr.c [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range [0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger [0]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind [0]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors [0]PETSC ERROR: likely location of problem given in stack below [0]PETSC ERROR: --------------------- Stack Frames ------------------------------------ [0]PETSC ERROR: Note: The EXACT line numbers in the stack are not available, [0]PETSC ERROR: INSTEAD the line number of the start of the function [0]PETSC ERROR: is given. [0]PETSC ERROR: [0] PetscObjectTypeCompare line 140 /home1/00924/Reddy135/LocalApps/petsc/src/sys/objects/destroy.c [0]PETSC ERROR: [0] DMView line 546 /home1/00924/Reddy135/LocalApps/petsc/src/dm/interface/dm.c [0]PETSC ERROR: [0] PetscTrMallocDefault line 181 /home1/00924/Reddy135/LocalApps/petsc/src/sys/memory/mtr.c [0]PETSC ERROR: --------------------- Error Message ------------------------------------ [0]PETSC ERROR: Signal received! [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: Petsc Development GIT revision: e0030536e6573667cee5340eb367e8213e67d689 GIT Date: 2013-04-16 21:48:15 -0500 [0]PETSC ERROR: See docs/changes/index.html for recent updates. [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. [0]PETSC ERROR: See docs/index.html for manual pages. 
[0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: ./PoisTest on a mpi_rScalar_Debug named login4.stampede.tacc.utexas.edu by Reddy135 Thu Apr 18 01:47:22 2013 [0]PETSC ERROR: Libraries linked from /home1/00924/Reddy135/LocalApps/petsc/mpi_rScalar_Debug/lib [0]PETSC ERROR: Configure run at Tue Apr 16 22:30:58 2013 [0]PETSC ERROR: Configure options --download-blacs=1 --download-ctetgen=1 --download-metis=1 --download-mumps=1 --download-parmetis=1 --download-scalapack=1 --download-superlu_dist=1 --download-triangle=1 --download-umfpack=1 --with-blas-lapack-dir=/opt/apps/intel/13/composer_xe_2013.2.146/mkl/lib/intel64/ --with-debugging=1 --with-mpi-dir=/opt/apps/intel13/mvapich2/1.9/ --with-petsc-arch=mpi_rScalar_Debug --with-petsc-dir=/home1/00924/Reddy135/LocalApps/petsc PETSC_ARCH=mpi_rScalar_Debug [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: User provided function() line 0 in unknown directory unknown file [unset]: aborting job: application called MPI_Abort(MPI_COMM_WORLD, 59) - process 0 -- ----------------------------------------------------- Dharmendar Reddy Palle Graduate Student Microelectronics Research center, University of Texas at Austin, 10100 Burnet Road, Bldg. 160 MER 2.608F, TX 78758-4445 e-mail: dharmareddy84 at gmail.com Phone: +1-512-350-9082 United States of America. Homepage: https://webspace.utexas.edu/~dpr342 -------------- next part -------------- An HTML attachment was scrubbed... URL: From dominik at itis.ethz.ch Thu Apr 18 04:12:38 2013 From: dominik at itis.ethz.ch (Dominik Szczerba) Date: Thu, 18 Apr 2013 11:12:38 +0200 Subject: [petsc-users] Crash when using valgrind In-Reply-To: <87k3o0erxg.fsf@mcs.anl.gov> References: <87k3o0erxg.fsf@mcs.anl.gov> Message-ID: > > Sometimes it helps to use 'valgrind --db-attach=yes'. valgrind --db-attach=yes --tool=memcheck -q --num-callers=20 MySolver bt #0 0x0000000007cf2425 in __GI_raise (sig=) at ../nptl/sysdeps/unix/sysv/linux/raise.c:64 #1 0x0000000007cf5b8b in __GI_abort () at abort.c:91 #2 0x000000000987c98d in ?? () from /usr/lib/libcr.so.0 #3 0x000000000400f306 in call_init (l=, argc=2, argv=0x7ff000138, env=0x7ff000150) at dl-init.c:85 #4 0x000000000400f3df in call_init (env=, argv=, argc=, l=) at dl-init.c:52 #5 _dl_init (main_map=0x42242c8, argc=2, argv=0x7ff000138, env=0x7ff000150) at dl-init.c:134 #6 0x00000000040016ea in _dl_start_user () from /lib64/ld-linux-x86-64.so.2 #7 0x0000000000000002 in ?? () #8 0x00000007ff0003d7 in ?? () #9 0x00000007ff000412 in ?? () #10 0x0000000000000000 in ?? () > > What happens when you pass -no_signal_handler to the PETSc program? > valgrind --tool=memcheck -q --num-callers=20 MySolver -no_signal_handler No change, i.e: cr_libinit.c:183 cri_init: sigaction() failed: Invalid argument Aborted (core dumped) Any more ideas? Many thanks Dominik -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Thu Apr 18 06:04:24 2013 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 18 Apr 2013 07:04:24 -0400 Subject: [petsc-users] Does Petsc has Finite volume code example In-Reply-To: <1921287a.137.13e1ba891f2.Coremail.ztdepyahoo@163.com> References: <1921287a.137.13e1ba891f2.Coremail.ztdepyahoo@163.com> Message-ID: On Thu, Apr 18, 2013 at 1:40 AM, ??? wrote: > Does Petsc has Finite volume code example with Ksp > TS ex11. 
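A sketch of how one might build and inspect the example referred to here, assuming a standard PETSc source tree of that era with PETSC_DIR and PETSC_ARCH set; the exact run-time options depend on the example, so -help is used to list them. The more basic KSP-level finite volume example mentioned later in this digest, src/ksp/ksp/examples/tutorials/ex32.c, is built the same way from its own directory.

      cd $PETSC_DIR/src/ts/examples/tutorials
      make ex11
      ./ex11 -help | less    # lists the example's command-line options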
Matt -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From alexei.matveev+petsc at gmail.com Thu Apr 18 07:08:26 2013 From: alexei.matveev+petsc at gmail.com (Alexei Matveev) Date: Thu, 18 Apr 2013 14:08:26 +0200 Subject: [petsc-users] maintaining application compatibility with 3.1 and 3.2 In-Reply-To: <3DCB1B18-F192-4488-97BF-5AD5FD9D26AE@mcs.anl.gov> References: <3DCB1B18-F192-4488-97BF-5AD5FD9D26AE@mcs.anl.gov> Message-ID: Hi, All, Thanks for your comments. On 17 April 2013 23:35, Barry Smith wrote: > > It is our intention that PETSc be easy enough for anyone to install > that rather than making your application work with different versions one > simply install the PETSc version one needs. In addition we recommend > updating applications to work with the latest release within a couple of > months after each release. > I understand that. With rapid development it is unavoidable to break API occasionally. I managed to compile and run the code with the quick hack below and somewhat more. Thanks, Kirk! Need yet to find out why the tests fail. Do I understand it correctly that there is no way currently to load a distributed Vec from the file without knowing and setting its dimensions first? Like I was doing here: +# if PETSC_VERSION < 30200 VecLoad (viewer, VECMPI, &vec); /* creates it */ +#else + /* FIXME: how to make it distributed? */ + VecCreate (PETSC_COMM_WORLD, &vec); + VecLoad (vec, viewer); +#endif BTW, I found myself calling (DM)DAGetInfo() on the array descriptor DM/DA quite often to get the shape of the 3d grid. I noticed that now that the signature of the *GetInfo() changed. Is there an official way to get that shape info by enquiring the Vec itself? Alexei -#include "petscda.h" /* Vec, Mat, DA, ... */ +#include "petscdmda.h" /* Vec, Mat, DA, ... */ #include "petscdmmg.h" /* KSP, ... */ +#define PETSC_VERSION (PETSC_VERSION_MAJOR * 10000 + PETSC_VERSION_MINOR * 100) + +/* FIXME: PETSC 3.2 */ +#if PETSC_VERSION >= 30200 +typedef DM DA; +typedef PetscBool PetscTruth; +# define VecDestroy(x) (VecDestroy)(&(x)) +# define VecScatterDestroy(x) (VecScatterDestroy)(&(x)) +# define MatDestroy(x) (MatDestroy)(&(x)) +# define ISDestroy(x) (ISDestroy)(&(x)) +# define KSPDestroy(x) (KSPDestroy)(&(x)) +# define SNESDestroy(x) (SNESDestroy)(&(x)) +# define PetscViewerDestroy(x) (PetscViewerDestroy)(&(x)) +# define PCDestroy(x) (PCDestroy)(&(x)) +# define DADestroy(x) (DMDestroy)(&(x)) +# define DAGetCorners DMDAGetCorners +# define DACreate3d DMDACreate3d +# define DACreateGlobalVector DMCreateGlobalVector +# define DAGetGlobalVector DMGetGlobalVector +# define DAGetInfo DMDAGetInfo +# define DAGetMatrix DMGetMatrix +# define DARestoreGlobalVector DMRestoreGlobalVector +# define DAVecGetArray DMDAVecGetArray +# define DAVecRestoreArray DMDAVecRestoreArray +# define VecLoadIntoVector(viewer, vec) VecLoad (vec, viewer) +# define DA_STENCIL_STAR DMDA_STENCIL_STAR +# define DA_XYZPERIODIC DMDA_XYZPERIODIC +#endif -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From knepley at gmail.com Thu Apr 18 07:13:20 2013 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 18 Apr 2013 08:13:20 -0400 Subject: [petsc-users] maintaining application compatibility with 3.1 and 3.2 In-Reply-To: References: <3DCB1B18-F192-4488-97BF-5AD5FD9D26AE@mcs.anl.gov> Message-ID: On Thu, Apr 18, 2013 at 8:08 AM, Alexei Matveev < alexei.matveev+petsc at gmail.com> wrote: > > > Hi, All, > > Thanks for your comments. > > On 17 April 2013 23:35, Barry Smith wrote: > >> >> It is our intention that PETSc be easy enough for anyone to >> install that rather than making your application work with different >> versions one simply install the PETSc version one needs. In addition we >> recommend updating applications to work with the latest release within a >> couple of months after each release. >> > > I understand that. With rapid development it is unavoidable to break > API occasionally. > > I managed to compile and run the code with the quick hack below and > somewhat > more. Thanks, Kirk! Need yet to find out why the tests fail. > > Do I understand it correctly that there is no way currently to load a > distributed > Vec from the file without knowing and setting its dimensions first? > I am not sure what you mean here. The point is that prescribing the layout of a Vec in a file is clearly wrong. We would like to be able to save from one parallel configuration and load on another (maybe serial). Thus, the loader prescribes the layout. Matt > Like I was doing here: > > +# if PETSC_VERSION < 30200 > VecLoad (viewer, VECMPI, &vec); /* creates it */ > +#else > + /* FIXME: how to make it distributed? */ > + VecCreate (PETSC_COMM_WORLD, &vec); > + VecLoad (vec, viewer); > +#endif > > BTW, I found myself calling (DM)DAGetInfo() on the array descriptor DM/DA > quite often to get the shape of the 3d grid. I noticed that now that the > signature > of the *GetInfo() changed. Is there an official way to get that shape info > by enquiring > the Vec itself? > > Alexei > > -#include "petscda.h" /* Vec, Mat, DA, ... */ > +#include "petscdmda.h" /* Vec, Mat, DA, ... */ > #include "petscdmmg.h" /* KSP, ... */ > > +#define PETSC_VERSION (PETSC_VERSION_MAJOR * 10000 + PETSC_VERSION_MINOR > * 100) > + > +/* FIXME: PETSC 3.2 */ > +#if PETSC_VERSION >= 30200 > +typedef DM DA; > +typedef PetscBool PetscTruth; > +# define VecDestroy(x) (VecDestroy)(&(x)) > +# define VecScatterDestroy(x) (VecScatterDestroy)(&(x)) > +# define MatDestroy(x) (MatDestroy)(&(x)) > +# define ISDestroy(x) (ISDestroy)(&(x)) > +# define KSPDestroy(x) (KSPDestroy)(&(x)) > +# define SNESDestroy(x) (SNESDestroy)(&(x)) > +# define PetscViewerDestroy(x) (PetscViewerDestroy)(&(x)) > +# define PCDestroy(x) (PCDestroy)(&(x)) > +# define DADestroy(x) (DMDestroy)(&(x)) > +# define DAGetCorners DMDAGetCorners > +# define DACreate3d DMDACreate3d > +# define DACreateGlobalVector DMCreateGlobalVector > +# define DAGetGlobalVector DMGetGlobalVector > +# define DAGetInfo DMDAGetInfo > +# define DAGetMatrix DMGetMatrix > +# define DARestoreGlobalVector DMRestoreGlobalVector > +# define DAVecGetArray DMDAVecGetArray > +# define DAVecRestoreArray DMDAVecRestoreArray > +# define VecLoadIntoVector(viewer, vec) VecLoad (vec, viewer) > +# define DA_STENCIL_STAR DMDA_STENCIL_STAR > +# define DA_XYZPERIODIC DMDA_XYZPERIODIC > +#endif > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. 
-- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From alexei.matveev+petsc at gmail.com Thu Apr 18 07:26:41 2013 From: alexei.matveev+petsc at gmail.com (Alexei Matveev) Date: Thu, 18 Apr 2013 14:26:41 +0200 Subject: [petsc-users] maintaining application compatibility with 3.1 and 3.2 In-Reply-To: References: <3DCB1B18-F192-4488-97BF-5AD5FD9D26AE@mcs.anl.gov> Message-ID: > Do I understand it correctly that there is no way currently to load a >> distributed >> Vec from the file without knowing and setting its dimensions first? >> > > I am not sure what you mean here. The point is that prescribing the layout > of a Vec > in a file is clearly wrong. We would like to be able to save from one > parallel configuration > and load on another (maybe serial). Thus, the loader prescribes the layout. > > I see. You understood me correctly. I used that feature to to quickly load a Vec from a file and dump it in ACSII or to do simple arithmetics. Admittedly that was usually done in serial runs. So the comment here is actually wrong, since you say it is not going to be supported by any viewer/loader pair? /* This one is supposed to save enough meta-info (such as distribution pattern, dimensions) to recover the vector from scratch: */ void vec_save (const char file[], const Vec vec) { PetscViewer viewer; PetscViewerBinaryOpen (PETSC_COMM_WORLD, file, FILE_MODE_WRITE, &viewer); VecView (vec, viewer); PetscViewerDestroy (viewer); } Sorry for spamming the channel, Alexei. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Thu Apr 18 08:19:41 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Thu, 18 Apr 2013 08:19:41 -0500 Subject: [petsc-users] Mailing list reply-to munging (was Any changes in ML usage between 3.1-p8 -> 3.3-p6?) In-Reply-To: References: <20130405143415.GH17937@karman> <20130417162647.GC23247@karman> <20130417190051.GA2495@karman> <20130417202640.GC2495@karman> <8738uog8i3.fsf@mcs.anl.gov> <87mwsweshg.fsf@mcs.anl.gov> <87ehe8eqwn.fsf@mcs.anl.gov> <87txn4d48d.fsf@mcs.anl.gov> <871ua8cwvo.fsf@mcs.anl.gov> Message-ID: <87mwswascy.fsf@mcs.anl.gov> Satish Balay writes: > On Wed, 17 Apr 2013, Jed Brown wrote: > >> John Doe sends email to petsc-users and the mailing list rewrites >> Reply-To back to the list. Now any user hits reply-all and their mailer >> gives them a message that replies *only* to petsc-users, dropping the >> original author. This is a problem, > > Its a problem only if the author is not subscribed. If they are not subscribed OR if they have turned off delivery. Even with delivery turned on, they cannot reliably filter using "petsc-users AND NOT to:me" because their address will be chronically dropped. This makes the list volume more burdensome. > Or remove option 'subscribe-but-do-not-deliver' for our usage of > 'Reply-To: list' That is back to the current model where (I think) many people ask questions on petsc-maint just because it's more effort/noise to be subscribed to petsc-users with delivery turned on. >> Perhaps a middle ground would be to have the list copy the From header >> over to Reply-to (if it doesn't already exist) and then _add_ the list >> address to Reply-to. That still isn't quite right when cross-posting, >> but it would allow us to advertise "subscribe with delivery off and ask >> questions on the list" or even "mail the list without subscribing" >> instead of "always write petsc-maint if you can't be bothered to filter >> the high-volume list". 
> > Earlier in the thread you've supported: reminder emails to folks doing > 'reply' instead of 'reply-all:' as an acceptable thing. [and this > happens a few times a day]. But here a reply of 'use petsc-maint' > instead of subscribe-but-do-not-deliver with petsc-users' is suggested > not good. [which happens so infrequently - except for configure.log > sutff]. I think almost nobody uses subscribe-without-delivery to petsc-users/petsc-dev because it's useless with the current reply-to munging. I reply to the other point below. > And I fail to see how 'e-mail petsc-maint without subscribing is not > good - whereas 'email petsc-users without subscribing is a great > feature'. [yeah you get archives on petsc-users - but I don't think > uses are as much concerened about that.] Each time someone resolves their problem by searching and finding an answer in the archives is one less time we have to repeat ourselves. The lists are indexed by the search engines and they do come up in searches. When a subject has already been discussed, linking a user to that thread is much faster than retyping the argument and it encourages them to try searching before asking. My perception is that a lot of questions come up more than once on petsc-maint. We can only link them to the archives if it has already been discussed on petsc-users, and with so many discussions on petsc-maint, it's hard for us to keep track of whether the topic has been discussed. > And I'll submit - its easier for most folks to send email to > petsc-maint instead of figuring out 'subscribe-but-donot-deliver stuff > on petsc-users'. [Yeah 'expert' mailing list users might expect > "subscribe with delivery" workflow to work.] Which is why we would encourage them to write petsc-users, either via an easy subscribe-without-delivery, or by having their original message only go to a few of us, where a reply from any of us automatically subscribes them without delivery. If the list interpreted any mail from a subscribed user as subscribing the Cc's without delivery, we could also move discussions from petsc-maint to petsc-users/petsc-dev any time the discussion does not need to be kept private. > Perhaps the problem here is - I view petsc-users and petsc-dev as > public mailing lists - and primary purpose of public mailing lists is > all to all communication mechanism. [so subscription/ reply-to make > sense to me.] And petsc-maint as the longstanding > non-subscribe/support or any type of conversation e-mail > to-petsc-developers. I've always thought of petsc-maint as the intentionally _private_ help venue. If the conversation does not have a good reason to be private, then I'd rather see it on a public (searchable) list. > But most use petsc-users [and some view it] as a support e-mail adress > [with searchable archives]. If thats what it it - then > no-subscribe-post or subscribe-but-do-not-deliver stuff would be the > primary thing - and recommending that would make sense. And then we > should be accepting build logs on it as well - and not worry about > flooding users mailboxes iwth them. [compressed as openmpi list > recommends] I wonder if we can do either (a) selective delivery of attachments greater than some small threshold and/or (b) create a [config] topic that people can unsubscribe from. (Maybe leave unsubscribed by default.) http://www.gnu.org/software/mailman/mailman-member/node30.html > [what about petsc-dev? some use it as reaching petsc-developers - not > petsc development discussions. I don't think that's a problem. 
> And what about petsc-maint? redirect to petsc-users and have > petsc-developers an non-ambiguous place for non-public e-mails to > petsc-developers?] How about converting the petsc-maint address to a mailing list that allows anonymous posting, but that has private delivery. We don't use RT numbers anyway. Then any time the discussion clearly doesn't need to be private, we just move it to petsc-users or petsc-dev. Workable? From jedbrown at mcs.anl.gov Thu Apr 18 08:25:05 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Thu, 18 Apr 2013 08:25:05 -0500 Subject: [petsc-users] [Fortran] VTK viewer error In-Reply-To: References: Message-ID: <87k3o0as3y.fsf@mcs.anl.gov> Dharmendar Reddy writes: > ! This line gives a compile error, as PETSCVIEWERVTK is not defined for > FORTRAN > ! !call PetscViewerSetType(viewer, PETSCVIEWERVTK) Oops, it didn't have a Fortran interface. This is pushed: commit b584eeb14d1f56b4d67c33ffc07f5f2c4fb64bad Author: Jed Brown Date: Thu Apr 18 08:22:56 2013 -0500 Viewer: add PETSCVIEWERVTK for Fortran diff --git a/include/finclude/petscviewerdef.h b/include/finclude/petscviewerdef.h index 06a808d..79996e3 100644 --- a/include/finclude/petscviewerdef.h +++ b/include/finclude/petscviewerdef.h @@ -22,6 +22,7 @@ #define PETSCVIEWERMATHEMATICA 'mathematica' #define PETSCVIEWERNETCDF 'netcdf' #define PETSCVIEWERHDF5 'hdf5' +#define PETSCVIEWERVTK 'vtk' #define PETSCVIEWERMATLAB 'matlab' #define PETSCVIEWERAMS 'ams' From jedbrown at mcs.anl.gov Thu Apr 18 08:29:16 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Thu, 18 Apr 2013 08:29:16 -0500 Subject: [petsc-users] Crash when using valgrind In-Reply-To: References: <87k3o0erxg.fsf@mcs.anl.gov> Message-ID: <87haj4arwz.fsf@mcs.anl.gov> Dominik Szczerba writes: >> What happens when you pass -no_signal_handler to the PETSc program? >> > > valgrind --tool=memcheck -q --num-callers=20 MySolver -no_signal_handler > > No change, i.e: > > cr_libinit.c:183 cri_init: sigaction() failed: Invalid argument > Aborted (core dumped) Can you run simple BLCR-using programs in Valgrind? You might have to do your debugging in a non-BLCR build. From jedbrown at mcs.anl.gov Thu Apr 18 08:31:46 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Thu, 18 Apr 2013 08:31:46 -0500 Subject: [petsc-users] Does Petsc has Finite volume code example In-Reply-To: References: <1921287a.137.13e1ba891f2.Coremail.ztdepyahoo@163.com> Message-ID: <87ehe8arst.fsf@mcs.anl.gov> Matthew Knepley writes: > On Thu, Apr 18, 2013 at 1:40 AM, ??? wrote: > >> Does Petsc has Finite volume code example with Ksp >> > > TS ex11. Much more basic finite volume that uses the KSP interface: src/ksp/ksp/examples/tutorials/ex32.c Solves 2D inhomogeneous Laplacian using multigrid. From john.mousel at gmail.com Thu Apr 18 08:49:22 2013 From: john.mousel at gmail.com (John Mousel) Date: Thu, 18 Apr 2013 08:49:22 -0500 Subject: [petsc-users] VecGetArray2d from Fortran Message-ID: Can VecGetArray2d be used from Fortran? The Fortran related documentation is a bit sparse for those functions. I use a lot of VecGetArrayF90 calls, but I'm switching to an interlaced variable format, so it would be convenient to have access to a multi-dimensional array format. Best Regards, John -------------- next part -------------- An HTML attachment was scrubbed... URL: From mark.adams at columbia.edu Thu Apr 18 09:12:51 2013 From: mark.adams at columbia.edu (Mark F. Adams) Date: Thu, 18 Apr 2013 10:12:51 -0400 Subject: [petsc-users] Any changes in ML usage between 3.1-p8 -> 3.3-p6? 
In-Reply-To: References: Message-ID: <839879AE-43B9-40DA-94F5-28DABAB7C645@columbia.edu> Note, you need to damp jacobi for a smoother with something like this: -mg_levels_ksp_type chebyshev -mg_levels_ksp_chebyshev_estimate_eigenvalues 0,0.1,0,1.05 On Apr 18, 2013, at 9:49 AM, "Christon, Mark A" wrote: > HI Mark, > > Thanks for the information. We thought something had changed and could see it the effect, but couldn't quite pin it down. > > To be clear our pressure-Poisson equation is a warm and fluffy Laplacian, but typically quite stiff from a spectral point of view when dealing with boundary-layer meshes and complex geometry ? our norm. This is the first-order computational cost in our flow solver, so hit's like the recent change are very problematic, and particularly so when they break a number of regression tests that run nightly across multiple platflorms. > > So, unfortunately, while I'd like to use something as Jacobi, it's completely ineffective in for the operator (and RHS) in question. > > Thanks. > > - Mark > > -- > Mark A. Christon > Computational Physics Group (CCS-2) > Computer, Computational and Statistical Sciences Division > Los Alamos National Laboratory > MS D413, P.O. Box 1663 > Los Alamos, NM 87545 > > E-mail: christon at lanl.gov > Phone: (505) 663-5124 > Mobile: (505) 695-5649 (voice mail) > > International Journal for Numerical Methods in Fluids > > From: "Mark F. Adams" > Reply-To: PETSc users list > Date: Wed, 17 Apr 2013 19:42:47 -0400 > To: PETSc users list > Subject: Re: [petsc-users] Fwd: Any changes in ML usage between 3.1-p8 -> 3.3-p6? > >> In looking at the logs for icc it looks like Hong has done a little messing around with the shifting tolerance: >> >> - ((PC_Factor*)icc)->info.shiftamount = 1.e-12; >> - ((PC_Factor*)icc)->info.zeropivot = 1.e-12; >> + ((PC_Factor*)icc)->info.shiftamount = 100.0*PETSC_MACHINE_EPSILON; >> + ((PC_Factor*)icc)->info.zeropivot = 100.0*PETSC_MACHINE_EPSILON; >> >> >> This looks like it would lower the shifting and drop tolerance. You might set these back to 1e-12. >> >> http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/PC/PCFactorSetZeroPivot.html >> >> http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/PC/PCFactorSetShiftAmount.html >> >> BTW, using an indefinite preconditioner, that has to be fixed with is-this-a-small-number kind of code, on a warm and fluffy Laplacian is not recommended. As I said before I would just use jacobi -- god gave you an easy problem. Exploit it. >> >> On Apr 17, 2013, at 7:22 PM, "Mark F. Adams" wrote: >> >>> >>> >>> Begin forwarded message: >>> >>>> From: "Christon, Mark A" >>>> Subject: Re: [petsc-users] Any changes in ML usage between 3.1-p8 -> 3.3-p6? >>>> Date: April 17, 2013 7:06:11 PM EDT >>>> To: "Mark F. Adams" , "Bakosi, Jozsef" >>>> >>>> Hi Mark, >>>> >>>> Yes, looks like the new version does a little better after 2 iterations, but at the 8th iteration, the residuals increase:( >>>> >>>> I suspect this is why PETSc is whining about an indefinite preconditioner. >>>> >>>> Something definitely changes as we've had about 6-8 regression tests start failing that have been running flawlessly with ML + PETSc 3.1-p8 for almost two years. >>>> >>>> If we can understand what changed, we probably have a fighting chance of correcting it ? assuming it's some solver setting for PETSc that we're not currently using. >>>> >>>> - Mark >>>> >>>> -- >>>> Mark A. 
Christon >>>> Computational Physics Group (CCS-2) >>>> Computer, Computational and Statistical Sciences Division >>>> Los Alamos National Laboratory >>>> MS D413, P.O. Box 1663 >>>> Los Alamos, NM 87545 >>>> >>>> E-mail: christon at lanl.gov >>>> Phone: (505) 663-5124 >>>> Mobile: (505) 695-5649 (voice mail) >>>> >>>> International Journal for Numerical Methods in Fluids >>>> >>>> From: "Mark F. Adams" >>>> Date: Wed, 17 Apr 2013 18:51:02 -0400 >>>> To: PETSc users list >>>> Cc: "Mark A. Christon" >>>> Subject: Re: [petsc-users] Any changes in ML usage between 3.1-p8 -> 3.3-p6? >>>> >>>>> I see you are using icc. Perhaps our icc changed a bit between versions. These results look like both solves are working and the old does a little better (after two iterations). >>>>> >>>>> Try using jacobi instead of icc. >>>>> >>>>> >>>>> On Apr 17, 2013, at 6:21 PM, Jozsef Bakosi wrote: >>>>> >>>>>>> On 04.17.2013 15:38, Matthew Knepley wrote: >>>>>>>> On 04.17.2013 14:26, Jozsef Bakosi wrote: >>>>>>>>> Mark F. Adams mark.adams at columbia.edu >>>>>>>>> Wed Apr 17 14:25:04 CDT 2013 >>>>>>>>> 2) If you get "Indefinite PC" (I am guessing from using CG) it is because the >>>>>>>>> preconditioner >>>>>>>>> really is indefinite (or possible non-symmetric). We improved the checking >>>>>>>>> for this in one >>>>>>>>> of those releases. >>>>>>>>> AMG does not guarantee an SPD preconditioner so why persist in trying to use >>>>>>>>> CG? >>>>>>>>> AMG is positive if everything is working correctly. >>>>>>>>> Are these problems only semidefinite? Singular systems can give erratic >>>>>>>>> behavior. >>>>>>>> It is a Laplace operator from Galerkin finite elements. And the PC is fine on >>>>>>>> ranks 1, 2, 3, and 5 -- indefinite only on 4. I think we can safely say that the >>>>>>>> same PC should be positive on 4 as well. >>>>>>> Why is it safe? Because it sounds plausible? Mathematics is replete with things >>>>>>> that sound plausible and are false. Are there proofs that suggest this? Is there >>>>>>> computational evidence? Why would I believe you? >>>>>> Okay, so here is some additional information: >>>>>> I tried both old and new PETSc versions again, but now only taking 2 iterations >>>>>> (both with 4 CPUs) and checked the residuals. I get the same exact PC from ML in >>>>>> both cases, however, the residuals are different after both iterations: >>>>>> Please do a diff on the attached files and you can verify that the ML >>>>>> diagnostics are exactly the same: same max eigenvalues, nodes aggregated, etc, >>>>>> while the norm coming out of the solver at the end at both iterations are >>>>>> different. >>>>>> We reproduced the same exact behavior on two different linux platforms. >>>>>> Once again: same application source code, same ML source code, different PETSc: >>>>>> 3.1-p8 vs. 3.3-p6. >>>>>> >>>>> >>>>> >>> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From christon at lanl.gov Thu Apr 18 08:49:15 2013 From: christon at lanl.gov (Christon, Mark A) Date: Thu, 18 Apr 2013 13:49:15 +0000 Subject: [petsc-users] Fwd: Any changes in ML usage between 3.1-p8 -> 3.3-p6? In-Reply-To: <33803CA8-5CE4-416D-9F0C-C99D37475904@columbia.edu> Message-ID: HI Mark, Thanks for the information. We thought something had changed and could see it the effect, but couldn't quite pin it down. To be clear our pressure-Poisson equation is a warm and fluffy Laplacian, but typically quite stiff from a spectral point of view when dealing with boundary-layer meshes and complex geometry ? 
our norm. This is the first-order computational cost in our flow solver, so hit's like the recent change are very problematic, and particularly so when they break a number of regression tests that run nightly across multiple platflorms. So, unfortunately, while I'd like to use something as Jacobi, it's completely ineffective in for the operator (and RHS) in question. Thanks. - Mark -- Mark A. Christon Computational Physics Group (CCS-2) Computer, Computational and Statistical Sciences Division Los Alamos National Laboratory MS D413, P.O. Box 1663 Los Alamos, NM 87545 E-mail: christon at lanl.gov Phone: (505) 663-5124 Mobile: (505) 695-5649 (voice mail) International Journal for Numerical Methods in Fluids From: "Mark F. Adams" > Reply-To: PETSc users list > Date: Wed, 17 Apr 2013 19:42:47 -0400 To: PETSc users list > Subject: Re: [petsc-users] Fwd: Any changes in ML usage between 3.1-p8 -> 3.3-p6? In looking at the logs for icc it looks like Hong has done a little messing around with the shifting tolerance: - ((PC_Factor*)icc)->info.shiftamount = 1.e-12; - ((PC_Factor*)icc)->info.zeropivot = 1.e-12; + ((PC_Factor*)icc)->info.shiftamount = 100.0*PETSC_MACHINE_EPSILON; + ((PC_Factor*)icc)->info.zeropivot = 100.0*PETSC_MACHINE_EPSILON; This looks like it would lower the shifting and drop tolerance. You might set these back to 1e-12. http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/PC/PCFactorSetZeroPivot.html http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/PC/PCFactorSetShiftAmount.html BTW, using an indefinite preconditioner, that has to be fixed with is-this-a-small-number kind of code, on a warm and fluffy Laplacian is not recommended. As I said before I would just use jacobi -- god gave you an easy problem. Exploit it. On Apr 17, 2013, at 7:22 PM, "Mark F. Adams" > wrote: Begin forwarded message: From: "Christon, Mark A" > Subject: Re: [petsc-users] Any changes in ML usage between 3.1-p8 -> 3.3-p6? Date: April 17, 2013 7:06:11 PM EDT To: "Mark F. Adams" >, "Bakosi, Jozsef" > Hi Mark, Yes, looks like the new version does a little better after 2 iterations, but at the 8th iteration, the residuals increase:( I suspect this is why PETSc is whining about an indefinite preconditioner. Something definitely changes as we've had about 6-8 regression tests start failing that have been running flawlessly with ML + PETSc 3.1-p8 for almost two years. If we can understand what changed, we probably have a fighting chance of correcting it ? assuming it's some solver setting for PETSc that we're not currently using. - Mark -- Mark A. Christon Computational Physics Group (CCS-2) Computer, Computational and Statistical Sciences Division Los Alamos National Laboratory MS D413, P.O. Box 1663 Los Alamos, NM 87545 E-mail: christon at lanl.gov Phone: (505) 663-5124 Mobile: (505) 695-5649 (voice mail) International Journal for Numerical Methods in Fluids From: "Mark F. Adams" > Date: Wed, 17 Apr 2013 18:51:02 -0400 To: PETSc users list > Cc: "Mark A. Christon" > Subject: Re: [petsc-users] Any changes in ML usage between 3.1-p8 -> 3.3-p6? I see you are using icc. Perhaps our icc changed a bit between versions. These results look like both solves are working and the old does a little better (after two iterations). Try using jacobi instead of icc. On Apr 17, 2013, at 6:21 PM, Jozsef Bakosi > wrote: On 04.17.2013 15:38, Matthew Knepley wrote: On 04.17.2013 14:26, Jozsef Bakosi wrote: Mark F. 
Adams mark.adams at columbia.edu Wed Apr 17 14:25:04 CDT 2013 2) If you get "Indefinite PC" (I am guessing from using CG) it is because the preconditioner really is indefinite (or possible non-symmetric). We improved the checking for this in one of those releases. AMG does not guarantee an SPD preconditioner so why persist in trying to use CG? AMG is positive if everything is working correctly. Are these problems only semidefinite? Singular systems can give erratic behavior. It is a Laplace operator from Galerkin finite elements. And the PC is fine on ranks 1, 2, 3, and 5 -- indefinite only on 4. I think we can safely say that the same PC should be positive on 4 as well. Why is it safe? Because it sounds plausible? Mathematics is replete with things that sound plausible and are false. Are there proofs that suggest this? Is there computational evidence? Why would I believe you? Okay, so here is some additional information: I tried both old and new PETSc versions again, but now only taking 2 iterations (both with 4 CPUs) and checked the residuals. I get the same exact PC from ML in both cases, however, the residuals are different after both iterations: Please do a diff on the attached files and you can verify that the ML diagnostics are exactly the same: same max eigenvalues, nodes aggregated, etc, while the norm coming out of the solver at the end at both iterations are different. We reproduced the same exact behavior on two different linux platforms. Once again: same application source code, same ML source code, different PETSc: 3.1-p8 vs. 3.3-p6. -------------- next part -------------- An HTML attachment was scrubbed... URL: From balay at mcs.anl.gov Thu Apr 18 10:09:54 2013 From: balay at mcs.anl.gov (Satish Balay) Date: Thu, 18 Apr 2013 10:09:54 -0500 (CDT) Subject: [petsc-users] Mailing list reply-to munging (was Any changes in ML usage between 3.1-p8 -> 3.3-p6?) In-Reply-To: <87mwswascy.fsf@mcs.anl.gov> References: <20130405143415.GH17937@karman> <20130417162647.GC23247@karman> <20130417190051.GA2495@karman> <20130417202640.GC2495@karman> <8738uog8i3.fsf@mcs.anl.gov> <87mwsweshg.fsf@mcs.anl.gov> <87ehe8eqwn.fsf@mcs.anl.gov> <87txn4d48d.fsf@mcs.anl.gov> <871ua8cwvo.fsf@mcs.anl.gov> <87mwswascy.fsf@mcs.anl.gov> Message-ID: On Thu, 18 Apr 2013, Jed Brown wrote: > Satish Balay writes: > > > On Wed, 17 Apr 2013, Jed Brown wrote: > > > >> John Doe sends email to petsc-users and the mailing list rewrites > >> Reply-To back to the list. Now any user hits reply-all and their mailer > >> gives them a message that replies *only* to petsc-users, dropping the > >> original author. This is a problem, > > > > Its a problem only if the author is not subscribed. > > If they are not subscribed OR if they have turned off delivery. As mentioned this is a mailing list. And that 'minority' usage is possible with alternative workflow. subscribe and use a filter. > Even with delivery turned on, they cannot reliably filter using > "petsc-users AND NOT to:me" because their address will be > chronically dropped. This makes the list volume more burdensome. Again 'minority usage. Since one would not care about following list except for 'when they post' - They would filter list traffic into a different folder - and look at that folder only when they post to that list. As I claimed the usage is possible [for the minority use case]. Its insisting that the 'exact workflow' as with 'non-reply-to: lists' should be supported is not what I accept. 
> > > Or remove option 'subscribe-but-do-not-deliver' for our usage of > > 'Reply-To: list' > > That is back to the current model Whih I think is fine - and optimized for majority usage. And change has extra costs [which you are ignoring. > where (I think) many people ask questions on petsc-maint just > because it's more effort/noise to be subscribed to petsc-users with > delivery turned on. using petsc-maint is fine. But here you are suggesting using petsc-maint should be discouraged. > >> Perhaps a middle ground would be to have the list copy the From header > >> over to Reply-to (if it doesn't already exist) and then _add_ the list > >> address to Reply-to. That still isn't quite right when cross-posting, > >> but it would allow us to advertise "subscribe with delivery off and ask > >> questions on the list" or even "mail the list without subscribing" > >> instead of "always write petsc-maint if you can't be bothered to filter > >> the high-volume list". > > > > Earlier in the thread you've supported: reminder emails to folks doing > > 'reply' instead of 'reply-all:' as an acceptable thing. [and this > > happens a few times a day]. But here a reply of 'use petsc-maint' > > instead of subscribe-but-do-not-deliver with petsc-users' is suggested > > not good. [which happens so infrequently - except for configure.log > > sutff]. > > I think almost nobody uses subscribe-without-delivery to > petsc-users/petsc-dev because it's useless with the current reply-to > munging. I reply to the other point below. I doubt most users know about subscribe-without-delivery option of mailing lists. And I think most users think petsc-users as not a mailing list - but as petsc-maint. > > And I fail to see how 'e-mail petsc-maint without subscribing is not > > good - whereas 'email petsc-users without subscribing is a great > > feature'. [yeah you get archives on petsc-users - but I don't think > > uses are as much concerened about that.] > > Each time someone resolves their problem by searching and finding an > answer in the archives is one less time we have to repeat ourselves. > The lists are indexed by the search engines and they do come up in > searches. When a subject has already been discussed, linking a user to > that thread is much faster than retyping the argument and it encourages > them to try searching before asking. My perception is that a lot of > questions come up more than once on petsc-maint. We can only link them > to the archives if it has already been discussed on petsc-users, and > with so many discussions on petsc-maint, it's hard for us to keep track > of whether the topic has been discussed. I don't object to more archiving of issues. > > And I'll submit - its easier for most folks to send email to > > petsc-maint instead of figuring out 'subscribe-but-donot-deliver stuff > > on petsc-users'. [Yeah 'expert' mailing list users might expect > > "subscribe with delivery" workflow to work.] > > Which is why we would encourage them to write petsc-users, either via an > easy subscribe-without-delivery, or by having their original message > only go to a few of us, where a reply from any of us automatically > subscribes them without delivery. I already do the second part with the current mailing lists. [plenty of users post without subscribing every day - which goes into moderation. I appprove/subscribe that post.] 
> If the list interpreted any mail from a subscribed user as subscribing > the Cc's without delivery, we could also move discussions from > petsc-maint to petsc-users/petsc-dev any time the discussion does not > need to be kept private. I agree this usage is not supported currently. [but I don't know if that automatic-cc-subscribe-as-without-delivery is possible] > > Perhaps the problem here is - I view petsc-users and petsc-dev as > > public mailing lists - and primary purpose of public mailing lists is > > all to all communication mechanism. [so subscription/ reply-to make > > sense to me.] And petsc-maint as the longstanding > > non-subscribe/support or any type of conversation e-mail > > to-petsc-developers. > > I've always thought of petsc-maint as the intentionally _private_ help > venue. If the conversation does not have a good reason to be private, > then I'd rather see it on a public (searchable) list. the whole argument is more archives and email-without subscribing. I don't buy the stuff about "subscribe with delivery" or reply-to is breaking stuff. And the cost is more replies going to individuals. And some extra spam. And huge logs to subscribers. [and advertise petsc-users as support list - not mailing list]. > > But most use petsc-users [and some view it] as a support e-mail adress > > [with searchable archives]. If thats what it it - then > > no-subscribe-post or subscribe-but-do-not-deliver stuff would be the > > primary thing - and recommending that would make sense. And then we > > should be accepting build logs on it as well - and not worry about > > flooding users mailboxes iwth them. [compressed as openmpi list > > recommends] > > I wonder if we can do either (a) selective delivery of attachments > greater than some small threshold and/or I don't know if there is an option for that. Currently all moderators get such emails. > (b) create a [config] topic that people can unsubscribe from. > (Maybe leave unsubscribed by default.) > > http://www.gnu.org/software/mailman/mailman-member/node30.html But the user has to set the correct topic in the subject line when they post? Again transfering decision from 'use petsc-users vs petsc maint' to use subject: 'installation' vs 'bugreport' vs 'general'. > > [what about petsc-dev? some use it as reaching petsc-developers - not > > petsc development discussions. > > I don't think that's a problem. > > > And what about petsc-maint? redirect to petsc-users and have > > petsc-developers an non-ambiguous place for non-public e-mails to > > petsc-developers?] > > How about converting the petsc-maint address to a mailing list that > allows anonymous posting, but that has private delivery. We don't use > RT numbers anyway. Then any time the discussion clearly doesn't need to > be private, we just move it to petsc-users or petsc-dev. Workable? I guess petsc-maint doesn't matter anymore - as everyone should be using petsc-users. I'll remove the limits on petsc-users and petsc-dev and add you as admin so you can set things up as you see fit. 
Satish From knepley at gmail.com Thu Apr 18 10:20:10 2013 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 18 Apr 2013 11:20:10 -0400 Subject: [petsc-users] maintaining application compatibility with 3.1 and 3.2 In-Reply-To: References: <3DCB1B18-F192-4488-97BF-5AD5FD9D26AE@mcs.anl.gov> Message-ID: On Thu, Apr 18, 2013 at 8:26 AM, Alexei Matveev < alexei.matveev+petsc at gmail.com> wrote: > > Do I understand it correctly that there is no way currently to load a >>> distributed >>> Vec from the file without knowing and setting its dimensions first? >>> >> >> I am not sure what you mean here. The point is that prescribing the >> layout of a Vec >> in a file is clearly wrong. We would like to be able to save from one >> parallel configuration >> and load on another (maybe serial). Thus, the loader prescribes the >> layout. >> >> > I see. You understood me correctly. I used that feature to to quickly load > a Vec from a file and dump it in ACSII or to do simple arithmetics. > Admittedly > that was usually done in serial runs. > > So the comment here is actually wrong, since you say it is not going to be > supported by any viewer/loader pair? > No, I have to be more specific. PETSc saves the Vec in a uniform format, but also saves information "out of band" in a .info file. If the input Vec has not been already partitioned, the plan was to use the .info file to recreate the original layout. Matt > /* This one is supposed to save enough meta-info (such as distribution > pattern, dimensions) to recover the vector from scratch: */ > void vec_save (const char file[], const Vec vec) > { > PetscViewer viewer; > > PetscViewerBinaryOpen (PETSC_COMM_WORLD, file, FILE_MODE_WRITE, &viewer); > VecView (vec, viewer); > PetscViewerDestroy (viewer); > } > > Sorry for spamming the channel, > > Alexei. > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Thu Apr 18 10:34:46 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Thu, 18 Apr 2013 10:34:46 -0500 Subject: [petsc-users] Fwd: Any changes in ML usage between 3.1-p8 -> 3.3-p6? In-Reply-To: References: Message-ID: <8761zjc0o9.fsf@mcs.anl.gov> "Christon, Mark A" writes: > HI Mark, > > Thanks for the information. We thought something had changed and > could see it the effect, but couldn't quite pin it down. So did you try running your old tests with -mg_levels_sub_pc_factor_zeropivot 1e-12 -mg_levels_sub_pc_factor_shift_amount 1e-12 I would expect this to only make a small difference. > To be clear our pressure-Poisson equation is a warm and fluffy > Laplacian, but typically quite stiff from a spectral point of view > when dealing with boundary-layer meshes and complex geometry ? our > norm. This is the first-order computational cost in our flow solver, > so hit's like the recent change are very problematic, and particularly > so when they break a number of regression tests that run nightly > across multiple platflorms. > > So, unfortunately, while I'd like to use something as Jacobi, it's > completely ineffective in for the operator (and RHS) in question. As Mark says, use -mg_levels_ksp_type chebyshev -mg_levels_pc_type jacobi (possibly with more explicit spectrum options). 
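As a concrete illustration, a hypothetical command line that combines the options suggested in this thread might look as follows; the executable name and process count are placeholders, the eigenvalue bounds are the ones Mark quoted above rather than values tuned for this problem, and -ksp_monitor_true_residual is only added to watch the convergence behaviour under discussion.

      mpiexec -n 4 ./flow_solver \
        -ksp_type cg -pc_type ml \
        -mg_levels_ksp_type chebyshev \
        -mg_levels_ksp_chebyshev_estimate_eigenvalues 0,0.1,0,1.05 \
        -mg_levels_pc_type jacobi \
        -ksp_monitor_true_residual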
See Mark's paper http://www.columbia.edu/~ma2325/adams_poly.pdf From bsmith at mcs.anl.gov Thu Apr 18 11:04:54 2013 From: bsmith at mcs.anl.gov (Barry Smith) Date: Thu, 18 Apr 2013 11:04:54 -0500 Subject: [petsc-users] maintaining application compatibility with 3.1 and 3.2 In-Reply-To: References: <3DCB1B18-F192-4488-97BF-5AD5FD9D26AE@mcs.anl.gov> Message-ID: <4124EA10-CB36-4F85-AB11-8ABEAD57DAA4@mcs.anl.gov> On Apr 18, 2013, at 7:08 AM, Alexei Matveev wrote: > > > Hi, All, > > Thanks for your comments. > > On 17 April 2013 23:35, Barry Smith wrote: > > It is our intention that PETSc be easy enough for anyone to install that rather than making your application work with different versions one simply install the PETSc version one needs. In addition we recommend updating applications to work with the latest release within a couple of months after each release. > > I understand that. With rapid development it is unavoidable to break > API occasionally. > > I managed to compile and run the code with the quick hack below and somewhat > more. Thanks, Kirk! Need yet to find out why the tests fail. > > Do I understand it correctly that there is no way currently to load a distributed > Vec from the file without knowing and setting its dimensions first? > Like I was doing here: > > +# if PETSC_VERSION < 30200 > VecLoad (viewer, VECMPI, &vec); /* creates it */ > +#else > + /* FIXME: how to make it distributed? */ > + VecCreate (PETSC_COMM_WORLD, &vec); > + VecLoad (vec, viewer); > +#endif Use VecCreate(), VecSetType(vec,VECSTANDARD); <-- indicates seq on one process and MPI on several VecLoad(). If that does not work then please send all error output to petsc-maint. > > BTW, I found myself calling (DM)DAGetInfo() on the array descriptor DM/DA > quite often to get the shape of the 3d grid. I noticed that now that the signature > of the *GetInfo() changed. Is there an official way to get that shape info by enquiring > the Vec itself? We now have in petsc-dev VecGetDM() that will give you the DM/DA associated with the vector, if it has one, from that you can get the shape info. Barry > > Alexei > > -#include "petscda.h" /* Vec, Mat, DA, ... */ > +#include "petscdmda.h" /* Vec, Mat, DA, ... */ > #include "petscdmmg.h" /* KSP, ... 
*/ > > +#define PETSC_VERSION (PETSC_VERSION_MAJOR * 10000 + PETSC_VERSION_MINOR * 100) > + > +/* FIXME: PETSC 3.2 */ > +#if PETSC_VERSION >= 30200 > +typedef DM DA; > +typedef PetscBool PetscTruth; > +# define VecDestroy(x) (VecDestroy)(&(x)) > +# define VecScatterDestroy(x) (VecScatterDestroy)(&(x)) > +# define MatDestroy(x) (MatDestroy)(&(x)) > +# define ISDestroy(x) (ISDestroy)(&(x)) > +# define KSPDestroy(x) (KSPDestroy)(&(x)) > +# define SNESDestroy(x) (SNESDestroy)(&(x)) > +# define PetscViewerDestroy(x) (PetscViewerDestroy)(&(x)) > +# define PCDestroy(x) (PCDestroy)(&(x)) > +# define DADestroy(x) (DMDestroy)(&(x)) > +# define DAGetCorners DMDAGetCorners > +# define DACreate3d DMDACreate3d > +# define DACreateGlobalVector DMCreateGlobalVector > +# define DAGetGlobalVector DMGetGlobalVector > +# define DAGetInfo DMDAGetInfo > +# define DAGetMatrix DMGetMatrix > +# define DARestoreGlobalVector DMRestoreGlobalVector > +# define DAVecGetArray DMDAVecGetArray > +# define DAVecRestoreArray DMDAVecRestoreArray > +# define VecLoadIntoVector(viewer, vec) VecLoad (vec, viewer) > +# define DA_STENCIL_STAR DMDA_STENCIL_STAR > +# define DA_XYZPERIODIC DMDA_XYZPERIODIC > +#endif From knepley at gmail.com Thu Apr 18 11:06:45 2013 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 18 Apr 2013 12:06:45 -0400 Subject: [petsc-users] [Fortran] VTK viewer error In-Reply-To: <87k3o0as3y.fsf@mcs.anl.gov> References: <87k3o0as3y.fsf@mcs.anl.gov> Message-ID: On Thu, Apr 18, 2013 at 9:25 AM, Jed Brown wrote: > Dharmendar Reddy writes: > > > ! This line gives a compile error, as PETSCVIEWERVTK is not defined for > > FORTRAN > > ! !call PetscViewerSetType(viewer, PETSCVIEWERVTK) > I have pushed a Fortran example of this now: src/dm/impls/plex/example/tutorials/ex1f90 Thanks, Matt > Oops, it didn't have a Fortran interface. This is pushed: > > commit b584eeb14d1f56b4d67c33ffc07f5f2c4fb64bad > Author: Jed Brown > Date: Thu Apr 18 08:22:56 2013 -0500 > > Viewer: add PETSCVIEWERVTK for Fortran > > diff --git a/include/finclude/petscviewerdef.h > b/include/finclude/petscviewerdef.h > index 06a808d..79996e3 100644 > --- a/include/finclude/petscviewerdef.h > +++ b/include/finclude/petscviewerdef.h > @@ -22,6 +22,7 @@ > #define PETSCVIEWERMATHEMATICA 'mathematica' > #define PETSCVIEWERNETCDF 'netcdf' > #define PETSCVIEWERHDF5 'hdf5' > +#define PETSCVIEWERVTK 'vtk' > #define PETSCVIEWERMATLAB 'matlab' > #define PETSCVIEWERAMS 'ams' > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Thu Apr 18 11:20:58 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Thu, 18 Apr 2013 11:20:58 -0500 Subject: [petsc-users] Mailing list reply-to munging (was Any changes in ML usage between 3.1-p8 -> 3.3-p6?) In-Reply-To: References: <20130405143415.GH17937@karman> <20130417162647.GC23247@karman> <20130417190051.GA2495@karman> <20130417202640.GC2495@karman> <8738uog8i3.fsf@mcs.anl.gov> <87mwsweshg.fsf@mcs.anl.gov> <87ehe8eqwn.fsf@mcs.anl.gov> <87txn4d48d.fsf@mcs.anl.gov> <871ua8cwvo.fsf@mcs.anl.gov> <87mwswascy.fsf@mcs.anl.gov> Message-ID: <8738unbyj9.fsf@mcs.anl.gov> Satish Balay writes: > Again 'minority usage. 
Since one would not care about following list > except for 'when they post' - They would filter list traffic into a > different folder - and look at that folder only when they post to that > list. The problem is that you have to monitor _everything_ and you have to draw a line after sending a message when you decide to stop paying attention (changing your filters or not "checking that folder"). With reply-all convention, you just email and rest assured that all messages relevant to you will continue to Cc you. > using petsc-maint is fine. But here you are suggesting using > petsc-maint should be discouraged. Yes, the reason is that our effort does not scale well and has no historical value when it happens on petsc-maint. On an archived and searchable mailing list, we can refer to old discussions and it's more open in that people who are not "core" developers can participate. > I doubt most users know about subscribe-without-delivery option of > mailing lists. And I think most users think petsc-users as not a > mailing list - but as petsc-maint. Hmm, I would think that most users know petsc-users is a mailing list. > I agree this usage is not supported currently. [but I don't know if > that automatic-cc-subscribe-as-without-delivery is possible] Does the list configuration have an API? If so, we could have a bot monitoring petsc-users email and subscribing (without delivery) addresses that are Cc'd in approved messages? > the whole argument is more archives and email-without subscribing. I > don't buy the stuff about "subscribe with delivery" or reply-to is > breaking stuff. What part don't you buy? If someone writes to the list, "reply-all" from another list subscriber goes only to the list. That means they can't distinguish mail that they are interested in from all the other stuff on the list. I hypothesize that a lot of people write petsc-maint because they don't like the firehose implied by using petsc-users. Turning off munging fixes the firehose problem. The reason to prefer petsc-users when possible is searchability/archives. > And the cost is more replies going to individuals. We already have this on petsc-maint, but asking for the author to resend (which teaches them) is more justifiable on an archived list because it provides understandable value. > And some extra spam. When you approve a message, are you whitelisting the thread or the author? If the author, it's equivalent to subscribe-without-delivery. Maybe that is good enough? > And huge logs to subscribers. [and advertise petsc-users as support > list - not mailing list]. Scrubbing large attachments is fine as long as we can deliver them to people who opt in, or at least those who are currently on petsc-maint. > I don't know if there is an option for that. Currently all moderators > get such emails. There is a difference between list "moderator" and "admin", right? Can the current petsc-maint group be labeled as "moderator" so that we get the attachments? > But the user has to set the correct topic in the subject line when > they post? Again transfering decision from 'use petsc-users vs petsc > maint' to use subject: 'installation' vs 'bugreport' vs 'general'. We can just add [installation] to the subject line when we reply so that users don't see the reply threads for untagged messages. The main disadvantage would be that it would look like we weren't replying to those messages. This may not be worthwhile. 
From ling.zou at inl.gov Thu Apr 18 11:21:53 2013 From: ling.zou at inl.gov (Zou (Non-US), Ling) Date: Thu, 18 Apr 2013 10:21:53 -0600 Subject: [petsc-users] snes/examples/tutorials/ex3 x-window plot issue Message-ID: Hi, All I tried snes/examples/tutorials/ex3 on my Mac OS 10.7.5, with X11 (XQuartz 2.6.5 (xorg-server 1.10.6)) installed. Everything seems fine, as I got output from the terminal as: ====================== atol=1e-50, rtol=1e-08, stol=1e-08, maxit=50, maxf=10000 iter = 0,SNES Function norm 5.41468 iter = 1,SNES Function norm 0.295258 iter = 2,SNES Function norm 0.000450229 iter = 3,SNES Function norm 1.38967e-09 Number of SNES iterations = 3 Norm of error 1.49751e-10 Iterations 3 ====================== I suppose it is right. The only thing which bothers me a little bit is the x-window plot. When I ran the code, I could see a x-window plot flashing on my screen and it's gone forever. I don't know if it is supposed to be like this, or I should see an x-window plot staying on my screen. Anyway, this is not a big issue and I guess it is most likely not a PETSc problem. My X11 should be fine too, as when I do gnuplot, the x-window shows stuffs correctly. However, I'd appreciate if anyone could give a hint how to resolve it. Best, Ling -------------- next part -------------- An HTML attachment was scrubbed... URL: From cjm2176 at columbia.edu Thu Apr 18 11:22:59 2013 From: cjm2176 at columbia.edu (Colin McAuliffe) Date: Thu, 18 Apr 2013 12:22:59 -0400 Subject: [petsc-users] GMRES convergence Message-ID: <20130418122259.6ii74jvf3sw8sgcc@cubmail.cc.columbia.edu> Hello all, I am testing out a few linear solvers on a small (1052 by 1052) matrix. Using unpreconditioned GMRES with -ksp_gmres_restart 10000, GMRES still takes many times more than 1052 iterations to converge. Shouldn't it be the case that for a n by n matrix, n iterations of GMRES will give a full factorization of the matrix? Thanks -- Colin McAuliffe PhD Candidate Columbia University Department of Civil Engineering and Engineering Mechanics From jedbrown at mcs.anl.gov Thu Apr 18 11:37:50 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Thu, 18 Apr 2013 11:37:50 -0500 Subject: [petsc-users] [Fortran] VTK viewer error In-Reply-To: References: <87k3o0as3y.fsf@mcs.anl.gov> Message-ID: <87zjwvaj6p.fsf@mcs.anl.gov> Matthew Knepley writes: > On Thu, Apr 18, 2013 at 9:25 AM, Jed Brown wrote: > >> Dharmendar Reddy writes: >> >> > ! This line gives a compile error, as PETSCVIEWERVTK is not defined for >> > FORTRAN >> > ! !call PetscViewerSetType(viewer, PETSCVIEWERVTK) >> > > I have pushed a Fortran example of this now: > > src/dm/impls/plex/example/tutorials/ex1f90 Relative to Matt's example, you can use the following for binary-appended XML, which is fast and works in parallel. diff --git i/src/dm/impls/plex/examples/tutorials/ex1f90.F w/src/dm/impls/plex/examples/tutorials/ex1f90.F index d6954e6..570ad9f 100644 --- i/src/dm/impls/plex/examples/tutorials/ex1f90.F +++ w/src/dm/impls/plex/examples/tutorials/ex1f90.F @@ -82,9 +82,7 @@ CHKERRQ(ierr) call PetscViewerSetType(viewer, PETSCVIEWERVTK, ierr) CHKERRQ(ierr) - call PetscViewerSetFormat(viewer, PETSC_VIEWER_ASCII_VTK, ierr) - CHKERRQ(ierr) - call PetscViewerFileSetName(viewer, 'sol.vtk', ierr) + call PetscViewerFileSetName(viewer, 'sol.vtu', ierr) CHKERRQ(ierr) call VecView(u, viewer, ierr) CHKERRQ(ierr) The example also leaks memory. (Matt is fixing that.) 
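For readers working in C rather than Fortran, the VTU output path shown in the diff above is roughly the following sketch; error handling is abbreviated and the Vec u is assumed to come from a DM that the VTK viewer knows how to write:

    PetscViewer    viewer;
    PetscErrorCode ierr;

    ierr = PetscViewerCreate(PETSC_COMM_WORLD,&viewer);CHKERRQ(ierr);
    ierr = PetscViewerSetType(viewer,PETSCVIEWERVTK);CHKERRQ(ierr);
    ierr = PetscViewerFileSetName(viewer,"sol.vtu");CHKERRQ(ierr); /* the .vtu suffix selects binary-appended XML */
    ierr = VecView(u,viewer);CHKERRQ(ierr);
    ierr = PetscViewerDestroy(&viewer);CHKERRQ(ierr);

As in the Fortran diff, no PetscViewerSetFormat call is needed; the file name extension selects the VTK flavor here.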
From jedbrown at mcs.anl.gov Thu Apr 18 11:44:32 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Thu, 18 Apr 2013 11:44:32 -0500 Subject: [petsc-users] snes/examples/tutorials/ex3 x-window plot issue In-Reply-To: References: Message-ID: <87wqrzaivj.fsf@mcs.anl.gov> "Zou (Non-US), Ling" writes: > I suppose it is right. The only thing which bothers me a little bit is the > x-window plot. When I ran the code, I could see a x-window plot flashing on > my screen and it's gone forever. I don't know if it is supposed to be like > this, or I should see an x-window plot staying on my screen. What options are you running with? You should be able to do something like -snes_monitor_solution -draw_pause 1 It's probably flickering because it redraws once per iteration and that problem is tiny so the iterations go very fast. From jedbrown at mcs.anl.gov Thu Apr 18 11:46:24 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Thu, 18 Apr 2013 11:46:24 -0500 Subject: [petsc-users] GMRES convergence In-Reply-To: <20130418122259.6ii74jvf3sw8sgcc@cubmail.cc.columbia.edu> References: <20130418122259.6ii74jvf3sw8sgcc@cubmail.cc.columbia.edu> Message-ID: <87txn3aisf.fsf@mcs.anl.gov> Colin McAuliffe writes: > Hello all, > > I am testing out a few linear solvers on a small (1052 by 1052) > matrix. Using unpreconditioned GMRES with -ksp_gmres_restart 10000, > GMRES still takes many times more than 1052 iterations to converge. > Shouldn't it be the case that for a n by n matrix, n iterations of > GMRES will give a full factorization of the matrix? Yes, _in exact arithmetic_. This option will make orthogonalization more stable: -ksp_gmres_modifiedgramschmidt: Modified Gram-Schmidt (slow,more stable) (KSPGMRESSetOrthogonalization) the default is -ksp_gmres_classicalgramschmidt: Classical (unmodified) Gram-Schmidt (fast) (KSPGMRESSetOrthogonalization) From balay at mcs.anl.gov Thu Apr 18 12:00:11 2013 From: balay at mcs.anl.gov (Satish Balay) Date: Thu, 18 Apr 2013 12:00:11 -0500 (CDT) Subject: [petsc-users] Mailing list reply-to munging (was Any changes in ML usage between 3.1-p8 -> 3.3-p6?) In-Reply-To: <8738unbyj9.fsf@mcs.anl.gov> References: <20130405143415.GH17937@karman> <20130417162647.GC23247@karman> <20130417190051.GA2495@karman> <20130417202640.GC2495@karman> <8738uog8i3.fsf@mcs.anl.gov> <87mwsweshg.fsf@mcs.anl.gov> <87ehe8eqwn.fsf@mcs.anl.gov> <87txn4d48d.fsf@mcs.anl.gov> <871ua8cwvo.fsf@mcs.anl.gov> <87mwswascy.fsf@mcs.anl.gov> <8738unbyj9.fsf@mcs.anl.gov> Message-ID: On Thu, 18 Apr 2013, Jed Brown wrote: > Satish Balay writes: > > > Again 'minority usage. Since one would not care about following list > > except for 'when they post' - They would filter list traffic into a > > different folder - and look at that folder only when they post to that > > list. > > The problem is that you have to monitor _everything_ and you have to > draw a line after sending a message when you decide to stop paying > attention (changing your filters or not "checking that folder"). With > reply-all convention, you just email and rest assured that all messages > relevant to you will continue to Cc you. Sure - there is cost for 'minority users. But to save that - you propose a cost to majority users [i.e everyone should conciously use 'reply-all'] > Does the list configuration have an API? If so, we could have a bot > monitoring petsc-users email and subscribing (without delivery) > addresses that are Cc'd in approved messages? you can check that. > > the whole argument is more archives and email-without subscribing. 
I > > don't buy the stuff about "subscribe with delivery" or reply-to is > > breaking stuff. > > What part don't you buy? If someone writes to the list, "reply-all" > from another list subscriber goes only to the list. That means they > can't distinguish mail that they are interested in from all the other > stuff on the list. see above > I hypothesize that a lot of people write petsc-maint > because they don't like the firehose implied by using petsc-users. > Turning off munging fixes the firehose problem. I submit that most petsc-maint users are familiar with petsc-maint - and continue to use it. New users do get confused between petsc-users & petsc-maint > When you approve a message, are you whitelisting the thread or the > author? If the author, it's equivalent to subscribe-without-delivery. > Maybe that is good enough? I've been doing subscribe-with-delivery. > Scrubbing large attachments is fine as long as we can deliver them to > people who opt in, or at least those who are currently on petsc-maint. Not sure if its possible to scrub just the attachments. If thats possible you can now configure mailman to do that. > > I don't know if there is an option for that. Currently all moderators > > get such emails. > > There is a difference between list "moderator" and "admin", right? Can > the current petsc-maint group be labeled as "moderator" so that we get > the attachments? Sure - you can set that up now. Satish From jedbrown at mcs.anl.gov Thu Apr 18 12:12:45 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Thu, 18 Apr 2013 12:12:45 -0500 Subject: [petsc-users] snes/examples/tutorials/ex3 x-window plot issue In-Reply-To: References: <87wqrzaivj.fsf@mcs.anl.gov> Message-ID: <87mwsvahki.fsf@mcs.anl.gov> [Please group-reply to the list.] "Zou (Non-US), Ling" writes: > Thanks Jed. I didn't use any options when I ran it. The options you > proposed works pretty well and I could see the x-window plot staying there > for a while and disappeared after the code run is finished. Any option to > let the x-window stay there even after the code run is finished? '-draw_pause -1' will pause after each redraw and you can click it to continue. From jedbrown at mcs.anl.gov Thu Apr 18 12:55:59 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Thu, 18 Apr 2013 12:55:59 -0500 Subject: [petsc-users] PETSC Memory leakage In-Reply-To: References: Message-ID: <87fvynafkg.fsf@mcs.anl.gov> Foad Hassaninejadfarahani writes: > Dear Jed; > > > > Hi; > > > > I am PhD student at the University of Manitoba working on a CFD code. > > I tried to use PETSC direct solver (SUPER LU) to solve the system of > equations. > > After a long time effort, the code is working properly and gives some good > results. The only issue that came up recently is a bout memory leakage. > > I found that after iterations RES memory is increasing which leads to the > server crash. Run a short simulation (like two steps) with the option -malloc_dump (or better, '-objects_dump' in petsc-dev). It will show the memory leaks. > I tried bunch of cures which I found on the web, but none of them worked. > > I am using MatDestroy and VecDestroy at the end of PETSC part in the code to > free all the allocated memories. This is not helpful. > > I also tried iterative solvers like Richardson and other direct solver like > spooles and the same problem occurred. > > I would appreciate if you could help me in this regard. 
> > > > With Best Regards; > > Foad > > From jedbrown at mcs.anl.gov Thu Apr 18 14:06:42 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Thu, 18 Apr 2013 14:06:42 -0500 Subject: [petsc-users] Mailing list reply-to munging (was Any changes in ML usage between 3.1-p8 -> 3.3-p6?) In-Reply-To: References: <20130405143415.GH17937@karman> <20130417162647.GC23247@karman> <20130417190051.GA2495@karman> <20130417202640.GC2495@karman> <8738uog8i3.fsf@mcs.anl.gov> <87mwsweshg.fsf@mcs.anl.gov> <87ehe8eqwn.fsf@mcs.anl.gov> <87txn4d48d.fsf@mcs.anl.gov> <871ua8cwvo.fsf@mcs.anl.gov> <87mwswascy.fsf@mcs.anl.gov> <8738unbyj9.fsf@mcs.anl.gov> Message-ID: <87ehe7acal.fsf@mcs.anl.gov> Satish Balay writes: > Sure - there is cost for 'minority users. But to save that - you > propose a cost to majority users [i.e everyone should conciously use > 'reply-all'] That has always been standard mailing list etiquette. >> Does the list configuration have an API? If so, we could have a bot >> monitoring petsc-users email and subscribing (without delivery) >> addresses that are Cc'd in approved messages? > > you can check that. Hmm, I couldn't find it. > Not sure if its possible to scrub just the attachments. If thats > possible you can now configure mailman to do that. It looks like we can do it, but not from the web interface: http://wiki.list.org/pages/viewpage.action?pageId=7602227 One option would be to apply a good compression like lzma/xz to configure.log attachments. That brings a 5MB log file down to 160 kB. PETSc could compress configure.log up-front, but then you have to use xzless to look at it and not many people know to do that. >> > I don't know if there is an option for that. Currently all moderators >> > get such emails. >> >> There is a difference between list "moderator" and "admin", right? Can >> the current petsc-maint group be labeled as "moderator" so that we get >> the attachments? > > Sure - you can set that up now. Okay, I set up two topics: * installation * methods The former currently matches 'configure.log|make.log' and the latter matches the following which should be almost all generally "interesting" threads. method |converge |diverge |solver |solving |performance |usage |poisson |stokes |elasticity |compressible |flow |preconditioner |krylov |domain Let's see how well this classifies. If it is accurate, we can mention that people are welcome to unsubscribe from [installation], and possibly change the defaults. From bsmith at mcs.anl.gov Thu Apr 18 14:16:37 2013 From: bsmith at mcs.anl.gov (Barry Smith) Date: Thu, 18 Apr 2013 14:16:37 -0500 Subject: [petsc-users] GMRES convergence In-Reply-To: <20130418122259.6ii74jvf3sw8sgcc@cubmail.cc.columbia.edu> References: <20130418122259.6ii74jvf3sw8sgcc@cubmail.cc.columbia.edu> Message-ID: <33C71067-4D24-416E-B2DE-591A6A964108@mcs.anl.gov> On Apr 18, 2013, at 11:22 AM, Colin McAuliffe wrote: > Hello all, > > I am testing out a few linear solvers on a small (1052 by 1052) matrix. Using unpreconditioned GMRES with -ksp_gmres_restart 10000, GMRES still takes many times more than 1052 iterations to converge. Shouldn't it be the case that for a n by n matrix, n iterations of GMRES will give a full factorization of the matrix? Also run with -ksp_view and make sure that it is actually using that 10,000 restart value. (The restart size will be printed with other information about the solver). You can also run with -ksp_monitor_singular_value this will you give you some idea of the conditioning of the matrix, which likely is huge. 
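Putting the suggestions in this thread together, a diagnostic run for the 1052 by 1052 test case might look like the following; the executable name is a placeholder:

    ./mysolver -ksp_type gmres -pc_type none -ksp_gmres_restart 1100 \
        -ksp_gmres_modifiedgramschmidt -ksp_monitor_singular_value \
        -ksp_converged_reason -ksp_view

The -ksp_view output confirms the restart value actually used, modified Gram-Schmidt reduces the loss of orthogonality that keeps GMRES from terminating in at most n steps in floating point, and the singular value monitor gives a rough estimate of the conditioning.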
Barry > Thanks > > -- > Colin McAuliffe > PhD Candidate > Columbia University > Department of Civil Engineering and Engineering Mechanics From jedbrown at mcs.anl.gov Thu Apr 18 19:43:43 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Thu, 18 Apr 2013 19:43:43 -0500 Subject: [petsc-users] [Fortran] VTK viewer error In-Reply-To: References: <87k3o0as3y.fsf@mcs.anl.gov> <87zjwvaj6p.fsf@mcs.anl.gov> Message-ID: <8738un73k0.fsf@mcs.anl.gov> Please always use "reply-all" so that your messages go to the list. This is standard mailing list etiquette. It is important to preserve threading for people who find this discussion later and so that we do not waste our time re-answering the same questions that have already been answered in private side-conversations. You'll likely get an answer faster that way too. Dharmendar Reddy writes: > Thanks. It works now. And i like the binary-appended XML. Few questions > though. > > I see that the data in vtu has Field Names liek this : > Vec_0xyyyyyyyy_0Point0 > > It would be nice to have the data with Field Names set in then default > section of DMPlex. call PetscObjectSetName(U, 'myfield', ierr) > Also, I do not see the Boundary values, How do I get that data in to the > vector before viewing? Did you remove the boundary nodes when setting DMPlex's section? > I need to added to the viewer some auxiliary data which depends on the > solution of the snes problem, If do the following will that do ? If the 'auxdm' is different from the original DM, you currently have to write it to a different file or have a single big "visualization DM" that contains all the fields you ever want to look at (including boundary values as appropriate). This may be the simplest solution for you and for us. > DMGetGlobalVector(dm,u,ierr) > > DMGetGlobalVector(auxdm,v,ierr) > > > > v = f(u) ! compute the aux data... > > PetscViewerCreate(PETSC_COMM_WORLD, viewer, ierr); CHKERRQ(ierr) > PetscViewerSetType(viewer, PETSCVIEWERVTK, ierr); CHKERRQ(ierr) > PetscViewerFileSetName(viewer, 'sol.vtu', ierr) > > VecView(u,viewer,ierr) > VecView(v,viewer,ierr) > > > > > On Thu, Apr 18, 2013 at 11:37 AM, Jed Brown wrote: > >> Matthew Knepley writes: >> >> > On Thu, Apr 18, 2013 at 9:25 AM, Jed Brown wrote: >> > >> >> Dharmendar Reddy writes: >> >> >> >> > ! This line gives a compile error, as PETSCVIEWERVTK is not defined >> for >> >> > FORTRAN >> >> > ! !call PetscViewerSetType(viewer, PETSCVIEWERVTK) >> >> >> > >> > I have pushed a Fortran example of this now: >> > >> > src/dm/impls/plex/example/tutorials/ex1f90 >> >> Relative to Matt's example, you can use the following for >> binary-appended XML, which is fast and works in parallel. >> >> diff --git i/src/dm/impls/plex/examples/tutorials/ex1f90.F >> w/src/dm/impls/plex/examples/tutorials/ex1f90.F >> index d6954e6..570ad9f 100644 >> --- i/src/dm/impls/plex/examples/tutorials/ex1f90.F >> +++ w/src/dm/impls/plex/examples/tutorials/ex1f90.F >> @@ -82,9 +82,7 @@ >> CHKERRQ(ierr) >> call PetscViewerSetType(viewer, PETSCVIEWERVTK, ierr) >> CHKERRQ(ierr) >> - call PetscViewerSetFormat(viewer, PETSC_VIEWER_ASCII_VTK, ierr) >> - CHKERRQ(ierr) >> - call PetscViewerFileSetName(viewer, 'sol.vtk', ierr) >> + call PetscViewerFileSetName(viewer, 'sol.vtu', ierr) >> CHKERRQ(ierr) >> call VecView(u, viewer, ierr) >> CHKERRQ(ierr) >> >> >> The example also leaks memory. (Matt is fixing that.) 
>> > > > > -- > ----------------------------------------------------- > Dharmendar Reddy Palle > Graduate Student > Microelectronics Research center, > University of Texas at Austin, > 10100 Burnet Road, Bldg. 160 > MER 2.608F, TX 78758-4445 > e-mail: dharmareddy84 at gmail.com > Phone: +1-512-350-9082 > United States of America. > Homepage: https://webspace.utexas.edu/~dpr342 From dharmareddy84 at gmail.com Thu Apr 18 20:17:14 2013 From: dharmareddy84 at gmail.com (Dharmendar Reddy) Date: Thu, 18 Apr 2013 20:17:14 -0500 Subject: [petsc-users] [Fortran] VTK viewer error In-Reply-To: <8738un73k0.fsf@mcs.anl.gov> References: <87k3o0as3y.fsf@mcs.anl.gov> <87zjwvaj6p.fsf@mcs.anl.gov> <8738un73k0.fsf@mcs.anl.gov> Message-ID: Sorry Again. I always use to use reply and the mail went to petsc-users. Only since last few days it is not going to petsc-users. I will make sure i reply-all from now on. On Thu, Apr 18, 2013 at 7:43 PM, Jed Brown wrote: > Please always use "reply-all" so that your messages go to the list. > This is standard mailing list etiquette. It is important to preserve > threading for people who find this discussion later and so that we do > not waste our time re-answering the same questions that have already > been answered in private side-conversations. You'll likely get an > answer faster that way too. > > Dharmendar Reddy writes: > > > Thanks. It works now. And i like the binary-appended XML. Few questions > > though. > > > > I see that the data in vtu has Field Names liek this : > > Vec_0xyyyyyyyy_0Point0 > > > > It would be nice to have the data with Field Names set in then default > > section of DMPlex. > > call PetscObjectSetName(U, 'myfield', ierr) > > This procedure may not work if U has multiple fields per mesh node right ? The field names are already set into Default section of the dm. I set the field layout and boundary points using DMPlexCreateSection and DMSetDefualtSection Now, my test problem has 567 nodes. Node 1 and 567 have Dirichlet BC. When i do DMGetGlobalVector it has size 565, the solution vector of the snes also has size 565. The solution vector written vtu file has 567 values with zero value at the boundary nodes. Am i missing a step here? Usgin Vecsetvaluessection i apply bc inside the subroutines passed to dmsnessetfunction/jacobain. The bc values are propagated to the solver. > > Also, I do not see the Boundary values, How do I get that data in to the > > vector before viewing? > > Did you remove the boundary nodes when setting DMPlex's section? > > > I need to added to the viewer some auxiliary data which depends on the > > solution of the snes problem, If do the following will that do ? > > If the 'auxdm' is different from the original DM, you currently have to > write it to a different file or have a single big "visualization DM" > that contains all the fields you ever want to look at (including > boundary values as appropriate). This may be the simplest solution for > you and for us. > > auxdm is the clone of actual dm. I think i got how to handle this. > > DMGetGlobalVector(dm,u,ierr) > > > > DMGetGlobalVector(auxdm,v,ierr) > > > > > > > > v = f(u) ! compute the aux data... 
> > > > PetscViewerCreate(PETSC_COMM_WORLD, viewer, ierr); CHKERRQ(ierr) > > PetscViewerSetType(viewer, PETSCVIEWERVTK, ierr); CHKERRQ(ierr) > > PetscViewerFileSetName(viewer, 'sol.vtu', ierr) > > > > VecView(u,viewer,ierr) > > VecView(v,viewer,ierr) > > > > > > > > > > On Thu, Apr 18, 2013 at 11:37 AM, Jed Brown > wrote: > > > >> Matthew Knepley writes: > >> > >> > On Thu, Apr 18, 2013 at 9:25 AM, Jed Brown > wrote: > >> > > >> >> Dharmendar Reddy writes: > >> >> > >> >> > ! This line gives a compile error, as PETSCVIEWERVTK is not defined > >> for > >> >> > FORTRAN > >> >> > ! !call PetscViewerSetType(viewer, PETSCVIEWERVTK) > >> >> > >> > > >> > I have pushed a Fortran example of this now: > >> > > >> > src/dm/impls/plex/example/tutorials/ex1f90 > >> > >> Relative to Matt's example, you can use the following for > >> binary-appended XML, which is fast and works in parallel. > >> > >> diff --git i/src/dm/impls/plex/examples/tutorials/ex1f90.F > >> w/src/dm/impls/plex/examples/tutorials/ex1f90.F > >> index d6954e6..570ad9f 100644 > >> --- i/src/dm/impls/plex/examples/tutorials/ex1f90.F > >> +++ w/src/dm/impls/plex/examples/tutorials/ex1f90.F > >> @@ -82,9 +82,7 @@ > >> CHKERRQ(ierr) > >> call PetscViewerSetType(viewer, PETSCVIEWERVTK, ierr) > >> CHKERRQ(ierr) > >> - call PetscViewerSetFormat(viewer, PETSC_VIEWER_ASCII_VTK, ierr) > >> - CHKERRQ(ierr) > >> - call PetscViewerFileSetName(viewer, 'sol.vtk', ierr) > >> + call PetscViewerFileSetName(viewer, 'sol.vtu', ierr) > >> CHKERRQ(ierr) > >> call VecView(u, viewer, ierr) > >> CHKERRQ(ierr) > >> > >> > >> The example also leaks memory. (Matt is fixing that.) > >> > > > > > > > > -- > > ----------------------------------------------------- > > Dharmendar Reddy Palle > > Graduate Student > > Microelectronics Research center, > > University of Texas at Austin, > > 10100 Burnet Road, Bldg. 160 > > MER 2.608F, TX 78758-4445 > > e-mail: dharmareddy84 at gmail.com > > Phone: +1-512-350-9082 > > United States of America. > > Homepage: https://webspace.utexas.edu/~dpr342 > -- ----------------------------------------------------- Dharmendar Reddy Palle Graduate Student Microelectronics Research center, University of Texas at Austin, 10100 Burnet Road, Bldg. 160 MER 2.608F, TX 78758-4445 e-mail: dharmareddy84 at gmail.com Phone: +1-512-350-9082 United States of America. Homepage: https://webspace.utexas.edu/~dpr342 -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Thu Apr 18 20:44:19 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Thu, 18 Apr 2013 20:44:19 -0500 Subject: [petsc-users] [Fortran] VTK viewer error In-Reply-To: References: <87k3o0as3y.fsf@mcs.anl.gov> <87zjwvaj6p.fsf@mcs.anl.gov> <8738un73k0.fsf@mcs.anl.gov> Message-ID: <87obdb5m6k.fsf@mcs.anl.gov> Dharmendar Reddy writes: > Sorry Again. I always use to use reply and the mail went to petsc-users. > Only since last few days it is not going to petsc-users. I will make sure i > reply-all from now on. We just changed the list default to make it easier for people to filter email and to have discussions on petsc-users without necessarily being overwhelmed by all list traffic. For example, "subscribe without delivery" works properly now. Discussed at length starting here: http://lists.mcs.anl.gov/pipermail/petsc-users/2013-April/017067.html >> This procedure may not work if U has multiple fields per mesh node > right ? The field names are already set into Default section of the dm. 
Yes, the name is a combination of the object name (the automatic name was "Vec_0xyyyyyyyy"), the field name you give to DMPlex, and the location (point/cell). > I set the field layout and boundary points using DMPlexCreateSection and > DMSetDefualtSection > Now, my test problem has 567 nodes. Node 1 and 567 have Dirichlet BC. > When i do DMGetGlobalVector it has size 565, the solution vector of the > snes also has size 565. The solution vector written vtu file has 567 > values with zero value at the boundary nodes. Am i missing a step here? > > Usgin Vecsetvaluessection i apply bc inside the subroutines passed to > dmsnessetfunction/jacobain. The bc values are propagated to the solver. I think Matt envisions running DMGlobalToLocalBegin/End yourself, filling in those values, and then VecView on the local Vec. I'm not thrilled with using local vectors to represent global state, but it is the first thing the implementation does when you give it a global vector, so it's the right thing for you to do. From dharmareddy84 at gmail.com Thu Apr 18 21:29:19 2013 From: dharmareddy84 at gmail.com (Dharmendar Reddy) Date: Thu, 18 Apr 2013 21:29:19 -0500 Subject: [petsc-users] [Fortran] VTK viewer error In-Reply-To: <87obdb5m6k.fsf@mcs.anl.gov> References: <87k3o0as3y.fsf@mcs.anl.gov> <87zjwvaj6p.fsf@mcs.anl.gov> <8738un73k0.fsf@mcs.anl.gov> <87obdb5m6k.fsf@mcs.anl.gov> Message-ID: On Thu, Apr 18, 2013 at 8:44 PM, Jed Brown wrote: > Dharmendar Reddy writes: > > > Sorry Again. I always use to use reply and the mail went to petsc-users. > > Only since last few days it is not going to petsc-users. I will make > sure i > > reply-all from now on. > > We just changed the list default to make it easier for people to filter > email and to have discussions on petsc-users without necessarily being > overwhelmed by all list traffic. For example, "subscribe without > delivery" works properly now. Discussed at length starting here: > > http://lists.mcs.anl.gov/pipermail/petsc-users/2013-April/017067.html > > >> This procedure may not work if U has multiple fields per mesh node > > right ? The field names are already set into Default section of the dm. > > Yes, the name is a combination of the object name (the automatic name > was "Vec_0xyyyyyyyy"), the field name you give to DMPlex, and the > location (point/cell). > > it tired with PetscObjectSetName(u,'myfield', ierr) Now the data in vtu file is calle myfieldPoint0 So the pattern is .. ? where the field Name is the one set using PetscObjectSetName ? But why is it not using the name set using PetscSectionSetFieldName ? If i have a few fields per point then i can manage with the field Ids but usually the aux data i look at has about 10 to 12 fields. It would make things easier if i can refer to them using Names in the paraview. As the viewer is accessing the DM object, why not get the user set field names in the default section ? > > I set the field layout and boundary points using DMPlexCreateSection > and > > DMSetDefualtSection > > Now, my test problem has 567 nodes. Node 1 and 567 have Dirichlet BC. > > When i do DMGetGlobalVector it has size 565, the solution vector of the > > snes also has size 565. The solution vector written vtu file has 567 > > values with zero value at the boundary nodes. Am i missing a step here? > > > > Usgin Vecsetvaluessection i apply bc inside the subroutines passed to > > dmsnessetfunction/jacobain. The bc values are propagated to the solver. 
> > I think Matt envisions running DMGlobalToLocalBegin/End yourself, > filling in those values, and then VecView on the local Vec. I'm not > thrilled with using local vectors to represent global state, but it is > the first thing the implementation does when you give it a global > vector, so it's the right thing for you to do. > -- ----------------------------------------------------- Dharmendar Reddy Palle Graduate Student Microelectronics Research center, University of Texas at Austin, 10100 Burnet Road, Bldg. 160 MER 2.608F, TX 78758-4445 e-mail: dharmareddy84 at gmail.com Phone: +1-512-350-9082 United States of America. Homepage: https://webspace.utexas.edu/~dpr342 -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Fri Apr 19 00:54:18 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Fri, 19 Apr 2013 00:54:18 -0500 Subject: [petsc-users] [Fortran] VTK viewer error In-Reply-To: References: <87k3o0as3y.fsf@mcs.anl.gov> <87zjwvaj6p.fsf@mcs.anl.gov> <8738un73k0.fsf@mcs.anl.gov> <87obdb5m6k.fsf@mcs.anl.gov> Message-ID: <87bo9b5alx.fsf@mcs.anl.gov> Dharmendar Reddy writes: > Now the data in vtu file is calle > > myfieldPoint0 So the pattern is .. ? > where the field Name is the one set using PetscObjectSetName ? > > But why is it not using the name set using PetscSectionSetFieldName ? The DMDA code (that I wrote before the DMPlex stuff) used field information, but for whatever reason, I didn't have it available when writing the DMPlex VTU code. Maybe it was just that the "field" interface is more complicated in DMPlex. > If i have a few fields per point then i can manage with the field Ids but > usually the aux data i look at has about 10 to 12 fields. It would make > things easier if i can refer to them using Names in the paraview. As the > viewer is accessing the DM object, why not get the user set field names in > the default section ? This is now in 'next'. I've only tested it for a cell-centered problem. Let me know if it works for you. commit 1cfafdd3620b0d8e55f5b21e1f3de1941197d333 Author: Jed Brown Date: Fri Apr 19 00:40:19 2013 -0500 DMPlex: teach VTU output about field names and components Schema is "${ObjectName}${FieldName}.${Component}". For example SolutionDensity.0 SolutionMomentum.0 SolutionMomentum.1 SolutionMomentum.2 SolutionEnergy.0 src/dm/impls/plex/plexvtu.c | 43 ++++++++++++++++++++++++----------- 1 file changed, 30 insertions(+), 13 deletions(-) From dharmareddy84 at gmail.com Fri Apr 19 01:48:05 2013 From: dharmareddy84 at gmail.com (Dharmendar Reddy) Date: Fri, 19 Apr 2013 01:48:05 -0500 Subject: [petsc-users] [Fortran] VTK viewer error In-Reply-To: <87bo9b5alx.fsf@mcs.anl.gov> References: <87k3o0as3y.fsf@mcs.anl.gov> <87zjwvaj6p.fsf@mcs.anl.gov> <8738un73k0.fsf@mcs.anl.gov> <87obdb5m6k.fsf@mcs.anl.gov> <87bo9b5alx.fsf@mcs.anl.gov> Message-ID: On Fri, Apr 19, 2013 at 12:54 AM, Jed Brown wrote: > Dharmendar Reddy writes: > > > Now the data in vtu file is calle > > > > myfieldPoint0 So the pattern is .. ? > > where the field Name is the one set using PetscObjectSetName ? > > > > But why is it not using the name set using PetscSectionSetFieldName ? > > The DMDA code (that I wrote before the DMPlex stuff) used field > information, but for whatever reason, I didn't have it available when > writing the DMPlex VTU code. Maybe it was just that the "field" > interface is more complicated in DMPlex. 
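For concreteness, the two naming hooks being discussed look roughly as follows in C; the DM, Vec, field number, and name strings are illustrative placeholders, and DMGetDefaultSection is assumed to return the section attached earlier with DMSetDefaultSection:

    PetscSection   section;
    PetscErrorCode ierr;

    ierr = DMGetDefaultSection(dm,&section);CHKERRQ(ierr);
    ierr = PetscSectionSetFieldName(section,0,"Potential");CHKERRQ(ierr); /* per-field name in the section */
    ierr = PetscObjectSetName((PetscObject)u,"Solution");CHKERRQ(ierr);   /* name of the Vec being viewed */

With the commit referenced below, the VTU arrays are then labeled from both of these names rather than from the automatically generated Vec_0x... identifier.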
> > > If i have a few fields per point then i can manage with the field Ids > but > > usually the aux data i look at has about 10 to 12 fields. It would make > > things easier if i can refer to them using Names in the paraview. As > the > > viewer is accessing the DM object, why not get the user set field names > in > > the default section ? > > This is now in 'next'. I've only tested it for a cell-centered problem. > Let me know if it works for you. > > commit 1cfafdd3620b0d8e55f5b21e1f3de1941197d333 > Author: Jed Brown > Date: Fri Apr 19 00:40:19 2013 -0500 > > DMPlex: teach VTU output about field names and components > > Schema is "${ObjectName}${FieldName}.${Component}". For example > I like the above schema, lets call it schmea1. In the earlier ${ObjectName}.${FielId}, i find the Cell | Point part of name not useful assuming the data is visualized in say preview as it the data is shown as cell or point data once the data is loaded. What do u think about the schema ${ObjectName}${FieldName}${FielId}.${Component} . Kind of redundant, but ObjectName and FieldName are optional variables whereas FieldId and Component always exist. If the user did provide the Names one can use them. Otherwise, i am assuming that ObjectName is a default name generated by petsc and FieldName seems to be set to Unnamed? If the FieldId is not used and user sets just the ObjectName and not FieldName then the names will clash for multiple fields if one were to use schmea1. > SolutionDensity.0 > SolutionMomentum.0 > SolutionMomentum.1 > SolutionMomentum.2 > SolutionEnergy.0 > > src/dm/impls/plex/plexvtu.c | 43 ++++++++++++++++++++++++----------- > 1 file changed, 30 insertions(+), 13 deletions(-) > -- ----------------------------------------------------- Dharmendar Reddy Palle Graduate Student Microelectronics Research center, University of Texas at Austin, 10100 Burnet Road, Bldg. 160 MER 2.608F, TX 78758-4445 e-mail: dharmareddy84 at gmail.com Phone: +1-512-350-9082 United States of America. Homepage: https://webspace.utexas.edu/~dpr342 -------------- next part -------------- An HTML attachment was scrubbed... URL: From dharmareddy84 at gmail.com Fri Apr 19 02:00:56 2013 From: dharmareddy84 at gmail.com (Dharmendar Reddy) Date: Fri, 19 Apr 2013 02:00:56 -0500 Subject: [petsc-users] [Fortran] VTK viewer error In-Reply-To: References: <87k3o0as3y.fsf@mcs.anl.gov> <87zjwvaj6p.fsf@mcs.anl.gov> <8738un73k0.fsf@mcs.anl.gov> <87obdb5m6k.fsf@mcs.anl.gov> <87bo9b5alx.fsf@mcs.anl.gov> Message-ID: On Fri, Apr 19, 2013 at 1:48 AM, Dharmendar Reddy wrote: > > On Fri, Apr 19, 2013 at 12:54 AM, Jed Brown wrote: > >> Dharmendar Reddy writes: >> >> > Now the data in vtu file is calle >> > >> > myfieldPoint0 So the pattern is .. >> ? >> > where the field Name is the one set using PetscObjectSetName ? >> > >> > But why is it not using the name set using PetscSectionSetFieldName ? >> >> The DMDA code (that I wrote before the DMPlex stuff) used field >> information, but for whatever reason, I didn't have it available when >> writing the DMPlex VTU code. Maybe it was just that the "field" >> interface is more complicated in DMPlex. >> >> > If i have a few fields per point then i can manage with the field Ids >> but >> > usually the aux data i look at has about 10 to 12 fields. It would make >> > things easier if i can refer to them using Names in the paraview. As >> the >> > viewer is accessing the DM object, why not get the user set field names >> in >> > the default section ? >> >> This is now in 'next'. 
I've only tested it for a cell-centered problem. >> Let me know if it works for you. >> >> commit 1cfafdd3620b0d8e55f5b21e1f3de1941197d333 >> Author: Jed Brown >> Date: Fri Apr 19 00:40:19 2013 -0500 >> >> DMPlex: teach VTU output about field names and components >> >> Schema is "${ObjectName}${FieldName}.${Component}". For example >> > > I like the above schema, lets call it schmea1. In the earlier > ${ObjectName}.${FielId}, i find the Cell | Point part of name > not useful assuming the data is visualized in say preview as it the data is > shown as cell or point data once the data is loaded. > What do u think about the schema > ${ObjectName}${FieldName}${FielId}.${Component} . Kind of redundant, but > ObjectName and FieldName are optional variables whereas FieldId and > Component always exist. > If the user did provide the Names one can use them. Otherwise, i am > assuming that ObjectName is a default name generated by petsc and FieldName > seems to be set to Unnamed? If the FieldId is not used and user sets just > the ObjectName and not FieldName then the names will clash for multiple > fields if one were to use schmea1. > > > I guess you can ignore what i said above. Looks like you are doing that in plex vtu 225: ierr = PetscSNPrintf(buf,sizeof(buf),"CellField%D",field);CHKERRQ(ierr); 259: ierr = PetscSNPrintf(buf,sizeof(buf),"PointField%D",field);CHKERRQ(ierr); for undefined filenames, fielname = buf. I will try this with my code. Thanks > >> SolutionDensity.0 >> SolutionMomentum.0 >> SolutionMomentum.1 >> SolutionMomentum.2 >> SolutionEnergy.0 >> >> src/dm/impls/plex/plexvtu.c | 43 ++++++++++++++++++++++++----------- >> 1 file changed, 30 insertions(+), 13 deletions(-) >> > > > > -- > ----------------------------------------------------- > Dharmendar Reddy Palle > Graduate Student > Microelectronics Research center, > University of Texas at Austin, > 10100 Burnet Road, Bldg. 160 > MER 2.608F, TX 78758-4445 > e-mail: dharmareddy84 at gmail.com > Phone: +1-512-350-9082 > United States of America. > Homepage: https://webspace.utexas.edu/~dpr342 > -- ----------------------------------------------------- Dharmendar Reddy Palle Graduate Student Microelectronics Research center, University of Texas at Austin, 10100 Burnet Road, Bldg. 160 MER 2.608F, TX 78758-4445 e-mail: dharmareddy84 at gmail.com Phone: +1-512-350-9082 United States of America. Homepage: https://webspace.utexas.edu/~dpr342 -------------- next part -------------- An HTML attachment was scrubbed... URL: From dominik at itis.ethz.ch Fri Apr 19 06:36:00 2013 From: dominik at itis.ethz.ch (Dominik Szczerba) Date: Fri, 19 Apr 2013 13:36:00 +0200 Subject: [petsc-users] Crash when using valgrind In-Reply-To: <87haj4arwz.fsf@mcs.anl.gov> References: <87k3o0erxg.fsf@mcs.anl.gov> <87haj4arwz.fsf@mcs.anl.gov> Message-ID: On Thu, Apr 18, 2013 at 3:29 PM, Jed Brown wrote: > Dominik Szczerba writes: > >>> What happens when you pass -no_signal_handler to the PETSc program? >>> >> >> valgrind --tool=memcheck -q --num-callers=20 MySolver -no_signal_handler >> >> No change, i.e: >> >> cr_libinit.c:183 cri_init: sigaction() failed: Invalid argument >> Aborted (core dumped) > > Can you run simple BLCR-using programs in Valgrind? You might have to > do your debugging in a non-BLCR build. I now found that it seems to be an issue with the system mpich2 that I am using. Thanks for the pointer. 
Dominik From mike.hui.zhang at hotmail.com Fri Apr 19 10:15:19 2013 From: mike.hui.zhang at hotmail.com (Hui Zhang) Date: Fri, 19 Apr 2013 17:15:19 +0200 Subject: [petsc-users] VecScatter from comm to subcomm: should I aggregate? Message-ID: I'm doing something like gasm.c. I can construct one VecScatte from comm to each subcomm. But I can also construct VecScatter's from comm to the processors in subcomm. Which mode do you think is good? Why? Thanks for teaching me! From gokhalen at gmail.com Fri Apr 19 10:46:21 2013 From: gokhalen at gmail.com (Nachiket Gokhale) Date: Fri, 19 Apr 2013 11:46:21 -0400 Subject: [petsc-users] real and imaginary part of a number In-Reply-To: References: Message-ID: Unless I am missing something, I still can't seem to find PetscRealPart in http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Sys/index.html Perhaps you added it elsewhere? -Nachiket On Thu, Dec 6, 2012 at 1:35 PM, Jed Brown wrote: > PetscRealPart() and PetscImaginaryPart() > > It looks like none of the math functions have man pages. > > > On Thu, Dec 6, 2012 at 10:33 AM, Nachiket Gokhale wrote: > >> Does petsc provide functions to get real and imaginary parts of a number? >> I couldn't seem to find any functions in >> >> http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Sys/index.html >> >> or in the vec collective either. >> >> Cheers, >> >> -Nachiket >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Fri Apr 19 11:32:56 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Fri, 19 Apr 2013 11:32:56 -0500 Subject: [petsc-users] real and imaginary part of a number In-Reply-To: References: Message-ID: <87ip3i4h1j.fsf@mcs.anl.gov> Nachiket Gokhale writes: > Unless I am missing something, I still can't seem to find PetscRealPart in > > http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Sys/index.html > > Perhaps you added it elsewhere? No, I have not added these man pages. The functions do not currently have Fortran bindings, but they are supported in C. Look at include/petscmath.h for details. From jedbrown at mcs.anl.gov Fri Apr 19 14:59:57 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Fri, 19 Apr 2013 14:59:57 -0500 Subject: [petsc-users] [Fortran] VTK viewer error In-Reply-To: References: <87k3o0as3y.fsf@mcs.anl.gov> <87zjwvaj6p.fsf@mcs.anl.gov> <8738un73k0.fsf@mcs.anl.gov> <87obdb5m6k.fsf@mcs.anl.gov> <87bo9b5alx.fsf@mcs.anl.gov> Message-ID: <87mwsu2sw2.fsf@mcs.anl.gov> Dharmendar Reddy writes: > I like the above schema, lets call it schmea1. In the earlier > ${ObjectName}.${FielId}, i find the Cell | Point part of name > not useful assuming the data is visualized in say preview as it the data is > shown as cell or point data once the data is loaded. Yes, but if we don't have names all, then we just have Field0, Field1, Field2, etc. Since cell and vertex data can be written separately, this could result in Field0 on cells being a completely different quantity from Field0 on vertices. Naming them PointField0 and CellField0 helps to disambiguate. It's not perfect and I have to problem changing. (Better to change now than to annoy users down the road.) > What do u think about the schema > ${ObjectName}${FieldName}${FielId}.${Component} . Kind of redundant, but > ObjectName and FieldName are optional variables whereas FieldId and > Component always exist. > If the user did provide the Names one can use them. 
Otherwise, i am > assuming that ObjectName is a default name generated by petsc and FieldName > seems to be set to Unnamed? If the FieldId is not used and user sets just > the ObjectName and not FieldName then the names will clash for multiple > fields if one were to use schmea1. I did something like this, and from your other message, it sounds like you think it's okay. Let me know if you'd like something different. From dharmareddy84 at gmail.com Fri Apr 19 18:36:35 2013 From: dharmareddy84 at gmail.com (Dharmendar Reddy) Date: Fri, 19 Apr 2013 18:36:35 -0500 Subject: [petsc-users] Eigenvalue Problem, Dirichlet BC and DMPlex Message-ID: Hello, I need to assemble the operators for an eigenvalue problem. Consider for example a 1D Schrodinger equation -div(grad(psi)) = E psi for x in [0, L] and psi(0) = 0 and psi(L) = 0 I understand how to get the 1D mesh into a DM object. I was thinking to create the default section using 1 scalar field per node add boundary points. Use DMCreateMatrix to get the operators. Now if i do element by element assembly, Do i need to use matsetclosure or masetvalues? how are the elements with boundary nodes handled ? Are the values for boundary nodes ignored ? I will pass the assembled operators to slepc to solve the eigenvalue problem. Thanks Reddy -- ----------------------------------------------------- Dharmendar Reddy Palle Graduate Student Microelectronics Research center, University of Texas at Austin, 10100 Burnet Road, Bldg. 160 MER 2.608F, TX 78758-4445 e-mail: dharmareddy84 at gmail.com Phone: +1-512-350-9082 United States of America. Homepage: https://webspace.utexas.edu/~dpr342 -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Fri Apr 19 20:57:01 2013 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 19 Apr 2013 21:57:01 -0400 Subject: [petsc-users] Eigenvalue Problem, Dirichlet BC and DMPlex In-Reply-To: References: Message-ID: On Fri, Apr 19, 2013 at 7:36 PM, Dharmendar Reddy wrote: > Hello, > I need to assemble the operators for an eigenvalue problem. > Consider for example a 1D Schrodinger equation > > -div(grad(psi)) = E psi for x in [0, L] and psi(0) = 0 and psi(L) = 0 > > I understand how to get the 1D mesh into a DM object. > > I was thinking to create the default section using 1 scalar field per node > add boundary points. > > Use DMCreateMatrix to get the operators. > > Now if i do element by element assembly, > Do i need to use matsetclosure or masetvalues? > MatSetClosure() just calls GetTransitiveClosure()+SectionGetOffset() to translate points to indices, and then calls MatSetValues(). In some sense, its a convenience method. > how are the elements with boundary nodes handled ? > You choose. If you want boundary dofs eliminated, use the mechanism in the PetscSection. Otherwise, use the linear algebra tools like MatZeroRowsColumns(). > Are the values for boundary nodes ignored ? > See above. Matt > I will pass the assembled operators to slepc to solve the eigenvalue > problem. > > Thanks > Reddy > > > > -- > ----------------------------------------------------- > Dharmendar Reddy Palle > Graduate Student > Microelectronics Research center, > University of Texas at Austin, > 10100 Burnet Road, Bldg. 160 > MER 2.608F, TX 78758-4445 > e-mail: dharmareddy84 at gmail.com > Phone: +1-512-350-9082 > United States of America. 
> Homepage: https://webspace.utexas.edu/~dpr342 > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From opensource.petsc at user.fastmail.fm Sat Apr 20 20:53:09 2013 From: opensource.petsc at user.fastmail.fm (Hugo Gagnon) Date: Sat, 20 Apr 2013 21:53:09 -0400 Subject: [petsc-users] Increasing ILU robustness Message-ID: <1DDFD00E-0D08-4CB7-AD89-8288154CC3AD@user.fastmail.fm> Hi, I'm getting a KSP_DIVERGED_INDEFINITE_PC error using CG with ILU. I tried increasing the number of levels of fill and also tried other options described in http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/PC/PCILU.html but without any luck. Are there some other preconditioner options that might work? The solution converges nicely with petsc's cg + lu and sparskit2's iluk on an independent cg serial solver. Thanks, -- Hugo Gagnon -------------- next part -------------- An HTML attachment was scrubbed... URL: From mike.hui.zhang at hotmail.com Sun Apr 21 06:23:25 2013 From: mike.hui.zhang at hotmail.com (Hui Zhang) Date: Sun, 21 Apr 2013 13:23:25 +0200 Subject: [petsc-users] VecScatter from comm to subcomm: should I aggregate? In-Reply-To: References: Message-ID: On Apr 19, 2013, at 5:15 PM, Hui Zhang wrote: > I'm doing something like gasm.c. I can construct one VecScatte from comm to each subcomm. But I can also construct VecScatter's from comm to the processors in subcomm. Which mode do you think is good? Why? Thanks for teaching me! After reading the source codes, I think I mis-understood VecScatterCreate. There is no direct support for Scatter between comm and subcomm unless one of them is Seq (MPI_Comm_size == 1). Anyway, gasm.c provides a very good example for the 'MPI to MPI' (the same comm) Scatter. And asm.c provides that for 'MPI to Seq'. Learned a lot! Thanks for petsc's source codes! From jedbrown at mcs.anl.gov Sun Apr 21 09:58:24 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Sun, 21 Apr 2013 09:58:24 -0500 Subject: [petsc-users] Increasing ILU robustness In-Reply-To: <1DDFD00E-0D08-4CB7-AD89-8288154CC3AD@user.fastmail.fm> References: <1DDFD00E-0D08-4CB7-AD89-8288154CC3AD@user.fastmail.fm> Message-ID: <87mwssgcbz.fsf@mcs.anl.gov> Hugo Gagnon writes: > Hi, > > I'm getting a KSP_DIVERGED_INDEFINITE_PC error using CG with ILU. I > tried increasing the number of levels of fill and also tried other > options described in > http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/PC/PCILU.html > but without any luck. Are there some other preconditioner options > that might work? What kind of problem are you solving? How does this work? -pc_type gamg -pc_gamg_agg_nsmooths 1 From opensource.petsc at user.fastmail.fm Sun Apr 21 10:38:44 2013 From: opensource.petsc at user.fastmail.fm (Hugo Gagnon) Date: Sun, 21 Apr 2013 11:38:44 -0400 Subject: [petsc-users] Increasing ILU robustness In-Reply-To: <87mwssgcbz.fsf@mcs.anl.gov> References: <1DDFD00E-0D08-4CB7-AD89-8288154CC3AD@user.fastmail.fm> <87mwssgcbz.fsf@mcs.anl.gov> Message-ID: <9AF08C22-2C1A-4ACA-935A-C1D52B145AC9@user.fastmail.fm> Linear elasticity, which yields symmetric positive definite matrices. So I guess I could reformulate my question to: what is the solver/preconditioner combination that is "best" suited for this kind of problem? 
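As an illustration of the GAMG route suggested above, smoothed aggregation for elasticity usually needs the displacement block size and the rigid-body modes attached to the matrix; a minimal sketch, with the assembled matrix A and the nodal coordinate Vec coords as placeholders, is:

    MatNullSpace   nearnull;
    PetscErrorCode ierr;

    ierr = MatSetBlockSize(A,3);CHKERRQ(ierr);                      /* 3 displacement dofs per node in 3D */
    ierr = MatNullSpaceCreateRigidBody(coords,&nearnull);CHKERRQ(ierr);
    ierr = MatSetNearNullSpace(A,nearnull);CHKERRQ(ierr);
    ierr = MatNullSpaceDestroy(&nearnull);CHKERRQ(ierr);

combined on the command line with -ksp_type cg -pc_type gamg -pc_gamg_agg_nsmooths 1. Where exactly MatSetBlockSize may be called depends on how the matrix is created, so treat the ordering here as a sketch rather than a recipe.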
I tried Anton suggestion and gave BCGS a shot but although it does seem to work it converges very slowly. Using the gamg preconditioner blows up: [0]PCSetData_AGG bs=1 MM=9120 KSP resid. tolerance target = 1.000E-10 KSP initial residual |res0| = 1.443E-01 KSP iter = 0: |res|/|res0| = 1.000E+00 KSP iter = 1: |res|/|res0| = 4.861E-01 KSP Object: 6 MPI processes type: cg maximum iterations=10000 tolerances: relative=1e-10, absolute=1e-50, divergence=10000 left preconditioning using nonzero initial guess using PRECONDITIONED norm type for convergence test PC Object: 6 MPI processes type: gamg MG: type is MULTIPLICATIVE, levels=2 cycles=v Cycles per PCApply=1 Using Galerkin computed coarse grid matrices Coarse grid solver -- level ------------------------------- KSP Object: (mg_coarse_) 6 MPI processes type: gmres GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement GMRES: happy breakdown tolerance 1e-30 maximum iterations=1, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using NONE norm type for convergence test PC Object: (mg_coarse_) 6 MPI processes type: bjacobi block Jacobi: number of blocks = 6 Local solve info for each block is in the following KSP and PC objects: [0] number of local blocks = 1, first local block number = 0 [0] local block number 0 KSP Object: KSP Object: (mg_coarse_sub_) 1 MPI processes type: preonly KSP Object: (mg_coarse_sub_) 1 MPI processes KSP Object: (mg_coarse_sub_) 1 MPI processes KSP Object: (mg_coarse_sub_) 1 MPI processes KSP Object: (mg_coarse_sub_) 1 MPI processes type: preonly maximum iterations=10000, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using NONE norm type for convergence test type: preonly maximum iterations=10000, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning type: preonly maximum iterations=10000, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning type: preonly maximum iterations=10000, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning maximum iterations=10000, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning (mg_coarse_sub_) 1 MPI processes type: preonly maximum iterations=10000, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using NONE norm type for convergence test PC Object: (mg_coarse_sub_) 1 MPI processes using NONE norm type for convergence test PC Object: (mg_coarse_sub_) 1 MPI processes using NONE norm type for convergence test PC Object: (mg_coarse_sub_) 1 MPI processes using NONE norm type for convergence test PC Object: (mg_coarse_sub_) 1 MPI processes type: lu using NONE norm type for convergence test PC Object: (mg_coarse_sub_) 1 MPI processes type: lu PC Object: (mg_coarse_sub_) 1 MPI processes type: lu LU: out-of-place factorization type: lu LU: out-of-place factorization tolerance for zero pivot 2.22045e-14 matrix ordering: nd type: lu LU: out-of-place factorization tolerance for zero pivot 2.22045e-14 matrix ordering: nd factor fill ratio given 5, needed 0 Factored matrix follows: Matrix Object: type: lu LU: out-of-place factorization tolerance for zero pivot 2.22045e-14 matrix ordering: nd factor fill ratio given 5, needed 0 Factored matrix follows: Matrix Object: LU: out-of-place 
factorization tolerance for zero pivot 2.22045e-14 matrix ordering: nd factor fill ratio given 5, needed 0 Factored matrix follows: Matrix Object: LU: out-of-place factorization tolerance for zero pivot 2.22045e-14 matrix ordering: nd factor fill ratio given 5, needed 0 Factored matrix follows: Matrix Object: tolerance for zero pivot 2.22045e-14 matrix ordering: nd factor fill ratio given 5, needed 0 Factored matrix follows: Matrix Object: 1 MPI processes factor fill ratio given 5, needed 4.41555 Factored matrix follows: Matrix Object: 1 MPI processes type: seqaij rows=447, cols=447 package used to perform factorization: petsc total: nonzeros=75113, allocated nonzeros=75113 total number of mallocs used during MatSetValues calls =0 not using I-node routines linear system matrix = precond matrix: Matrix Object: 1 MPI processes type: seqaij rows=447, cols=447 total: nonzeros=17011, allocated nonzeros=17011 total number of mallocs used during MatSetValues calls =0 not using I-node routines - - - - - - - - - - - - - - - - - - 1 MPI processes type: seqaij rows=0, cols=0 package used to perform factorization: petsc total: nonzeros=1, allocated nonzeros=1 total number of mallocs used during MatSetValues calls =0 not using I-node routines linear system matrix = precond matrix: Matrix Object: 1 MPI processes type: seqaij rows=0, cols=0 total: nonzeros=0, allocated nonzeros=0 total number of mallocs used during MatSetValues calls =0 not using I-node routines 1 MPI processes type: seqaij rows=0, cols=0 package used to perform factorization: petsc total: nonzeros=1, allocated nonzeros=1 total number of mallocs used during MatSetValues calls =0 not using I-node routines linear system matrix = precond matrix: Matrix Object: 1 MPI processes type: seqaij rows=0, cols=0 total: nonzeros=0, allocated nonzeros=0 total number of mallocs used during MatSetValues calls =0 not using I-node routines 1 MPI processes type: seqaij rows=0, cols=0 package used to perform factorization: petsc total: nonzeros=1, allocated nonzeros=1 total number of mallocs used during MatSetValues calls =0 not using I-node routines linear system matrix = precond matrix: Matrix Object: 1 MPI processes type: seqaij rows=0, cols=0 total: nonzeros=0, allocated nonzeros=0 total number of mallocs used during MatSetValues calls =0 not using I-node routines 1 MPI processes type: seqaij rows=0, cols=0 package used to perform factorization: petsc total: nonzeros=1, allocated nonzeros=1 total number of mallocs used during MatSetValues calls =0 not using I-node routines linear system matrix = precond matrix: Matrix Object: 1 MPI processes type: seqaij rows=0, cols=0 total: nonzeros=0, allocated nonzeros=0 total number of mallocs used during MatSetValues calls =0 not using I-node routines type: seqaij rows=0, cols=0 package used to perform factorization: petsc total: nonzeros=1, allocated nonzeros=1 total number of mallocs used during MatSetValues calls =0 not using I-node routines linear system matrix = precond matrix: Matrix Object: 1 MPI processes type: seqaij rows=0, cols=0 total: nonzeros=0, allocated nonzeros=0 total number of mallocs used during MatSetValues calls =0 not using I-node routines [1] number of local blocks = 1, first local block number = 1 [1] local block number 0 - - - - - - - - - - - - - - - - - - [2] number of local blocks = 1, first local block number = 2 [2] local block number 0 - - - - - - - - - - - - - - - - - - [3] number of local blocks = 1, first local block number = 3 [3] local block number 0 - - - - - - - - - - - - - - 
- - - - [4] number of local blocks = 1, first local block number = 4 [4] local block number 0 - - - - - - - - - - - - - - - - - - [5] number of local blocks = 1, first local block number = 5 [5] local block number 0 - - - - - - - - - - - - - - - - - - linear system matrix = precond matrix: Matrix Object: 6 MPI processes type: mpiaij rows=447, cols=447 total: nonzeros=17011, allocated nonzeros=17011 total number of mallocs used during MatSetValues calls =0 not using I-node (on process 0) routines Down solver (pre-smoother) on level 1 ------------------------------- KSP Object: (mg_levels_1_) 6 MPI processes type: chebyshev Chebyshev: eigenvalue estimates: min = 0.0358458, max = 4.60675 maximum iterations=2 tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using nonzero initial guess using NONE norm type for convergence test PC Object: (mg_levels_1_) 6 MPI processes type: jacobi linear system matrix = precond matrix: Matrix Object: 6 MPI processes type: mpiaij rows=54711, cols=54711 total: nonzeros=4086585, allocated nonzeros=4086585 total number of mallocs used during MatSetValues calls =0 using I-node (on process 0) routines: found 3040 nodes, limit used is 5 Up solver (post-smoother) same as down solver (pre-smoother) linear system matrix = precond matrix: Matrix Object: 6 MPI processes type: mpiaij rows=54711, cols=54711 total: nonzeros=4086585, allocated nonzeros=4086585 total number of mallocs used during MatSetValues calls =0 using I-node (on process 0) routines: found 3040 nodes, limit used is 5 Error in FEMesh_Mod::moveFEMeshPETSc() : KSP returned with error code = -8 -- Hugo Gagnon On 2013-04-21, at 10:58 AM, Jed Brown wrote: > Hugo Gagnon writes: > >> Hi, >> >> I'm getting a KSP_DIVERGED_INDEFINITE_PC error using CG with ILU. I >> tried increasing the number of levels of fill and also tried other >> options described in >> http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/PC/PCILU.html >> but without any luck. Are there some other preconditioner options >> that might work? > > What kind of problem are you solving? How does this work? > > -pc_type gamg -pc_gamg_agg_nsmooths 1 -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Sun Apr 21 10:56:20 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Sun, 21 Apr 2013 10:56:20 -0500 Subject: [petsc-users] Increasing ILU robustness In-Reply-To: <9AF08C22-2C1A-4ACA-935A-C1D52B145AC9@user.fastmail.fm> References: <1DDFD00E-0D08-4CB7-AD89-8288154CC3AD@user.fastmail.fm> <87mwssgcbz.fsf@mcs.anl.gov> <9AF08C22-2C1A-4ACA-935A-C1D52B145AC9@user.fastmail.fm> Message-ID: <87k3nvho7v.fsf@mcs.anl.gov> Hugo Gagnon writes: > Linear elasticity, Smoothed aggregation is a good choice. > which yields symmetric positive definite matrices. Note that ILU can produce negative pivots for SPD matrices. See Kershaw 1978 or src/ksp/pc/examples/tutorials/ex1.c for a 4x4 example. > So I guess I could reformulate my question to: what is the > solver/preconditioner combination that is "best" suited for this kind > of problem? I tried Anton suggestion and gave BCGS a shot but > although it does seem to work it converges very slowly. Using the > gamg preconditioner blows up: What do you mean "blows up"? Does this problem have sufficient boundary conditions to be non-singular? For elasticity, you should set the "near null space" (rigid body modes) using MatSetNearNullSpace (and perhaps MatNullSpaceCreateRigidBody). Once this is set, you can compare PCGAMG to PCML. 
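A minimal sketch in C of what this suggestion might look like, assuming a 3-D elasticity problem whose interlaced (x, y, z) nodal coordinates are already available in an application array; the routine name SetElasticityNearNullSpace, the xyz array, and nlocal_nodes are illustrative placeholders, not code from this thread or from a PETSc example:

#include <petscksp.h>

/* Attach the six rigid-body modes (3 translations + 3 rotations) to the
   stiffness matrix A as its near null space, so that smoothed-aggregation
   AMG (GAMG or ML) can build good coarse spaces from them. */
PetscErrorCode SetElasticityNearNullSpace(Mat A, const PetscScalar *xyz, PetscInt nlocal_nodes)
{
  Vec            coords;
  MatNullSpace   nsp;
  PetscScalar   *c;
  PetscInt       i;
  PetscErrorCode ierr;

  PetscFunctionBegin;
  /* Coordinate vector with block size 3: (x,y,z) for each locally owned node. */
  ierr = VecCreate(PETSC_COMM_WORLD, &coords);CHKERRQ(ierr);
  ierr = VecSetSizes(coords, 3*nlocal_nodes, PETSC_DECIDE);CHKERRQ(ierr);
  ierr = VecSetBlockSize(coords, 3);CHKERRQ(ierr);
  ierr = VecSetType(coords, VECSTANDARD);CHKERRQ(ierr);
  ierr = VecGetArray(coords, &c);CHKERRQ(ierr);
  for (i = 0; i < 3*nlocal_nodes; i++) c[i] = xyz[i];
  ierr = VecRestoreArray(coords, &c);CHKERRQ(ierr);

  /* Build the rigid-body modes from the coordinates and hand them to A. */
  ierr = MatNullSpaceCreateRigidBody(coords, &nsp);CHKERRQ(ierr);
  ierr = MatSetNearNullSpace(A, nsp);CHKERRQ(ierr);
  /* Note: the matrix block size (dofs per node, here 3) also matters for AMG;
     it is normally set when the matrix is created and preallocated. */
  ierr = MatNullSpaceDestroy(&nsp);CHKERRQ(ierr);
  ierr = VecDestroy(&coords);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}

With the near null space attached, -pc_type gamg -pc_gamg_agg_nsmooths 1 and -pc_type ml can then be compared as suggested above.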
From mark.adams at columbia.edu Sun Apr 21 15:20:54 2013 From: mark.adams at columbia.edu (Mark F. Adams) Date: Sun, 21 Apr 2013 16:20:54 -0400 Subject: [petsc-users] Increasing ILU robustness In-Reply-To: <9AF08C22-2C1A-4ACA-935A-C1D52B145AC9@user.fastmail.fm> References: <1DDFD00E-0D08-4CB7-AD89-8288154CC3AD@user.fastmail.fm> <87mwssgcbz.fsf@mcs.anl.gov> <9AF08C22-2C1A-4ACA-935A-C1D52B145AC9@user.fastmail.fm> Message-ID: You need to set the block size (3 or 6 in your case) for AMG. You also want to give ML and GAMG the (near) null space or the six rigid body modes in your case. For convince GAMG lets you give us the nodal coordinates and we will figure it out for you. But it should work to some degree w/o the null space (it can figure out the 3 translational modes) On Apr 21, 2013, at 11:38 AM, Hugo Gagnon wrote: > Linear elasticity, which yields symmetric positive definite matrices. So I guess I could reformulate my question to: what is the solver/preconditioner combination that is "best" suited for this kind of problem? I tried Anton suggestion and gave BCGS a shot but although it does seem to work it converges very slowly. Using the gamg preconditioner blows up: > > [0]PCSetData_AGG bs=1 MM=9120 > KSP resid. tolerance target = 1.000E-10 > KSP initial residual |res0| = 1.443E-01 > KSP iter = 0: |res|/|res0| = 1.000E+00 > KSP iter = 1: |res|/|res0| = 4.861E-01 > KSP Object: 6 MPI processes > type: cg > maximum iterations=10000 > tolerances: relative=1e-10, absolute=1e-50, divergence=10000 > left preconditioning > using nonzero initial guess > using PRECONDITIONED norm type for convergence test > PC Object: 6 MPI processes > type: gamg > MG: type is MULTIPLICATIVE, levels=2 cycles=v > Cycles per PCApply=1 > Using Galerkin computed coarse grid matrices > Coarse grid solver -- level ------------------------------- > KSP Object: (mg_coarse_) 6 MPI processes > type: gmres > GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement > GMRES: happy breakdown tolerance 1e-30 > maximum iterations=1, initial guess is zero > tolerances: relative=1e-05, absolute=1e-50, divergence=10000 > left preconditioning > using NONE norm type for convergence test > PC Object: (mg_coarse_) 6 MPI processes > type: bjacobi > block Jacobi: number of blocks = 6 > Local solve info for each block is in the following KSP and PC objects: > [0] number of local blocks = 1, first local block number = 0 > [0] local block number 0 > KSP Object: KSP Object: (mg_coarse_sub_) 1 MPI processes > type: preonly > KSP Object: (mg_coarse_sub_) 1 MPI processes > KSP Object: (mg_coarse_sub_) 1 MPI processes > KSP Object: (mg_coarse_sub_) 1 MPI processes > KSP Object: (mg_coarse_sub_) 1 MPI processes > type: preonly > maximum iterations=10000, initial guess is zero > tolerances: relative=1e-05, absolute=1e-50, divergence=10000 > left preconditioning > using NONE norm type for convergence test > type: preonly > maximum iterations=10000, initial guess is zero > tolerances: relative=1e-05, absolute=1e-50, divergence=10000 > left preconditioning > type: preonly > maximum iterations=10000, initial guess is zero > tolerances: relative=1e-05, absolute=1e-50, divergence=10000 > left preconditioning > type: preonly > maximum iterations=10000, initial guess is zero > tolerances: relative=1e-05, absolute=1e-50, divergence=10000 > left preconditioning > maximum iterations=10000, initial guess is zero > tolerances: relative=1e-05, absolute=1e-50, divergence=10000 > left preconditioning > 
(mg_coarse_sub_) 1 MPI processes > type: preonly > maximum iterations=10000, initial guess is zero > tolerances: relative=1e-05, absolute=1e-50, divergence=10000 > left preconditioning > using NONE norm type for convergence test > PC Object: (mg_coarse_sub_) 1 MPI processes > using NONE norm type for convergence test > PC Object: (mg_coarse_sub_) 1 MPI processes > using NONE norm type for convergence test > PC Object: (mg_coarse_sub_) 1 MPI processes > using NONE norm type for convergence test > PC Object: (mg_coarse_sub_) 1 MPI processes > type: lu > using NONE norm type for convergence test > PC Object: (mg_coarse_sub_) 1 MPI processes > type: lu > PC Object: (mg_coarse_sub_) 1 MPI processes > type: lu > LU: out-of-place factorization > type: lu > LU: out-of-place factorization > tolerance for zero pivot 2.22045e-14 > matrix ordering: nd > type: lu > LU: out-of-place factorization > tolerance for zero pivot 2.22045e-14 > matrix ordering: nd > factor fill ratio given 5, needed 0 > Factored matrix follows: > Matrix Object: type: lu > LU: out-of-place factorization > tolerance for zero pivot 2.22045e-14 > matrix ordering: nd > factor fill ratio given 5, needed 0 > Factored matrix follows: > Matrix Object: LU: out-of-place factorization > tolerance for zero pivot 2.22045e-14 > matrix ordering: nd > factor fill ratio given 5, needed 0 > Factored matrix follows: > Matrix Object: LU: out-of-place factorization > tolerance for zero pivot 2.22045e-14 > matrix ordering: nd > factor fill ratio given 5, needed 0 > Factored matrix follows: > Matrix Object: tolerance for zero pivot 2.22045e-14 > matrix ordering: nd > factor fill ratio given 5, needed 0 > Factored matrix follows: > Matrix Object: 1 MPI processes > factor fill ratio given 5, needed 4.41555 > Factored matrix follows: > Matrix Object: 1 MPI processes > type: seqaij > rows=447, cols=447 > package used to perform factorization: petsc > total: nonzeros=75113, allocated nonzeros=75113 > total number of mallocs used during MatSetValues calls =0 > not using I-node routines > linear system matrix = precond matrix: > Matrix Object: 1 MPI processes > type: seqaij > rows=447, cols=447 > total: nonzeros=17011, allocated nonzeros=17011 > total number of mallocs used during MatSetValues calls =0 > not using I-node routines > - - - - - - - - - - - - - - - - - - > 1 MPI processes > type: seqaij > rows=0, cols=0 > package used to perform factorization: petsc > total: nonzeros=1, allocated nonzeros=1 > total number of mallocs used during MatSetValues calls =0 > not using I-node routines > linear system matrix = precond matrix: > Matrix Object: 1 MPI processes > type: seqaij > rows=0, cols=0 > total: nonzeros=0, allocated nonzeros=0 > total number of mallocs used during MatSetValues calls =0 > not using I-node routines > 1 MPI processes > type: seqaij > rows=0, cols=0 > package used to perform factorization: petsc > total: nonzeros=1, allocated nonzeros=1 > total number of mallocs used during MatSetValues calls =0 > not using I-node routines > linear system matrix = precond matrix: > Matrix Object: 1 MPI processes > type: seqaij > rows=0, cols=0 > total: nonzeros=0, allocated nonzeros=0 > total number of mallocs used during MatSetValues calls =0 > not using I-node routines > 1 MPI processes > type: seqaij > rows=0, cols=0 > package used to perform factorization: petsc > total: nonzeros=1, allocated nonzeros=1 > total number of mallocs used during MatSetValues calls =0 > not using I-node routines > linear system matrix = precond matrix: > Matrix Object: 1 
MPI processes > type: seqaij > rows=0, cols=0 > total: nonzeros=0, allocated nonzeros=0 > total number of mallocs used during MatSetValues calls =0 > not using I-node routines > 1 MPI processes > type: seqaij > rows=0, cols=0 > package used to perform factorization: petsc > total: nonzeros=1, allocated nonzeros=1 > total number of mallocs used during MatSetValues calls =0 > not using I-node routines > linear system matrix = precond matrix: > Matrix Object: 1 MPI processes > type: seqaij > rows=0, cols=0 > total: nonzeros=0, allocated nonzeros=0 > total number of mallocs used during MatSetValues calls =0 > not using I-node routines > type: seqaij > rows=0, cols=0 > package used to perform factorization: petsc > total: nonzeros=1, allocated nonzeros=1 > total number of mallocs used during MatSetValues calls =0 > not using I-node routines > linear system matrix = precond matrix: > Matrix Object: 1 MPI processes > type: seqaij > rows=0, cols=0 > total: nonzeros=0, allocated nonzeros=0 > total number of mallocs used during MatSetValues calls =0 > not using I-node routines > [1] number of local blocks = 1, first local block number = 1 > [1] local block number 0 > - - - - - - - - - - - - - - - - - - > [2] number of local blocks = 1, first local block number = 2 > [2] local block number 0 > - - - - - - - - - - - - - - - - - - > [3] number of local blocks = 1, first local block number = 3 > [3] local block number 0 > - - - - - - - - - - - - - - - - - - > [4] number of local blocks = 1, first local block number = 4 > [4] local block number 0 > - - - - - - - - - - - - - - - - - - > [5] number of local blocks = 1, first local block number = 5 > [5] local block number 0 > - - - - - - - - - - - - - - - - - - > linear system matrix = precond matrix: > Matrix Object: 6 MPI processes > type: mpiaij > rows=447, cols=447 > total: nonzeros=17011, allocated nonzeros=17011 > total number of mallocs used during MatSetValues calls =0 > not using I-node (on process 0) routines > Down solver (pre-smoother) on level 1 ------------------------------- > KSP Object: (mg_levels_1_) 6 MPI processes > type: chebyshev > Chebyshev: eigenvalue estimates: min = 0.0358458, max = 4.60675 > maximum iterations=2 > tolerances: relative=1e-05, absolute=1e-50, divergence=10000 > left preconditioning > using nonzero initial guess > using NONE norm type for convergence test > PC Object: (mg_levels_1_) 6 MPI processes > type: jacobi > linear system matrix = precond matrix: > Matrix Object: 6 MPI processes > type: mpiaij > rows=54711, cols=54711 > total: nonzeros=4086585, allocated nonzeros=4086585 > total number of mallocs used during MatSetValues calls =0 > using I-node (on process 0) routines: found 3040 nodes, limit used is 5 > Up solver (post-smoother) same as down solver (pre-smoother) > linear system matrix = precond matrix: > Matrix Object: 6 MPI processes > type: mpiaij > rows=54711, cols=54711 > total: nonzeros=4086585, allocated nonzeros=4086585 > total number of mallocs used during MatSetValues calls =0 > using I-node (on process 0) routines: found 3040 nodes, limit used is 5 > Error in FEMesh_Mod::moveFEMeshPETSc() : KSP returned with error code = -8 > > -- > Hugo Gagnon > > On 2013-04-21, at 10:58 AM, Jed Brown wrote: > >> Hugo Gagnon writes: >> >>> Hi, >>> >>> I'm getting a KSP_DIVERGED_INDEFINITE_PC error using CG with ILU. 
I >>> tried increasing the number of levels of fill and also tried other >>> options described in >>> http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/PC/PCILU.html >>> but without any luck. Are there some other preconditioner options >>> that might work? >> >> What kind of problem are you solving? How does this work? >> >> -pc_type gamg -pc_gamg_agg_nsmooths 1 > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dharmareddy84 at gmail.com Sun Apr 21 22:48:07 2013 From: dharmareddy84 at gmail.com (Dharmendar Reddy) Date: Sun, 21 Apr 2013 22:48:07 -0500 Subject: [petsc-users] DMPlex submesh and Boundary mesh Message-ID: Hello, I see that i can extract submesh from a DM object using DMPlexCreateSubmesh(DM dm, const char vertexLabel[], DM *subdm) Can i request for an interface where i can extract DMPlexCreateSubmesh(DM dm, const char vertexLabel[], PetscInt value, DM *subdm) I have to create a lot of subdms typically few hundred. I can always create required number of unique labels, but i was wondering if i can group them with single label but different values of strata. How do i extract boundary nodes of a given mesh ? For example i have a triangular mesh on square boundary, I need to mark the nodes on the square boundary. Thanks Reddy -- ----------------------------------------------------- Dharmendar Reddy Palle Graduate Student Microelectronics Research center, University of Texas at Austin, 10100 Burnet Road, Bldg. 160 MER 2.608F, TX 78758-4445 e-mail: dharmareddy84 at gmail.com Phone: +1-512-350-9082 United States of America. Homepage: https://webspace.utexas.edu/~dpr342 -------------- next part -------------- An HTML attachment was scrubbed... URL: From dharmareddy84 at gmail.com Mon Apr 22 00:44:28 2013 From: dharmareddy84 at gmail.com (Dharmendar Reddy) Date: Mon, 22 Apr 2013 00:44:28 -0500 Subject: [petsc-users] DMPlex submesh and Boundary mesh In-Reply-To: References: Message-ID: Looks like there is not fortran binding for this function, can you please add it ? Thanks Reddy On Sun, Apr 21, 2013 at 10:48 PM, Dharmendar Reddy wrote: > Hello, > I see that i can extract submesh from a DM object using > > DMPlexCreateSubmesh(DM dm, const char vertexLabel[], DM *subdm) > > > Can i request for an interface where i can extract > > DMPlexCreateSubmesh(DM dm, const char vertexLabel[], PetscInt value, DM *subdm) > > I have to create a lot of subdms typically few hundred. I can always > create required number of unique labels, but i was wondering if i can group > them with single label but different values of strata. > > How do i extract boundary nodes of a given mesh ? For example i have a > triangular mesh on square boundary, I need to mark the nodes on the square > boundary. > > Thanks > Reddy > -- > ----------------------------------------------------- > Dharmendar Reddy Palle > Graduate Student > Microelectronics Research center, > University of Texas at Austin, > 10100 Burnet Road, Bldg. 160 > MER 2.608F, TX 78758-4445 > e-mail: dharmareddy84 at gmail.com > Phone: +1-512-350-9082 > United States of America. > Homepage: https://webspace.utexas.edu/~dpr342 > -- ----------------------------------------------------- Dharmendar Reddy Palle Graduate Student Microelectronics Research center, University of Texas at Austin, 10100 Burnet Road, Bldg. 160 MER 2.608F, TX 78758-4445 e-mail: dharmareddy84 at gmail.com Phone: +1-512-350-9082 United States of America. 
Homepage: https://webspace.utexas.edu/~dpr342 -------------- next part -------------- An HTML attachment was scrubbed... URL: From d.scott at ed.ac.uk Mon Apr 22 06:56:57 2013 From: d.scott at ed.ac.uk (David Scott) Date: Mon, 22 Apr 2013 12:56:57 +0100 Subject: [petsc-users] Advice Being Sought Message-ID: <51752589.4080001@ed.ac.uk> Hello, I am working on a fluid-mechanical code to solve the two-phase Navier?Stokes equations with levelset interface capturing. I have been asked to replace the pressure calculation which uses the SOR and Jacobi iterative schemes with a Krylov subspace method. I have done this and the code is working but as I have never used PETSc before I would like to know if improvements to my code, or the run time parameters that I am using, could be made. I am using GMRES with a Block Jacobi pre-conditioner. I have tried Conjugate Gradient with a Block Jacobi pre-conditioner but it diverges. If I use GMRES for the first few thousand time steps and then swap to CG it does converge but the speed of execution is somewhat reduced. I have attached relevant excerpts from the code. Yours sincerely, David Scott -- Dr. D. M. Scott Applications Consultant Edinburgh Parallel Computing Centre Tel. 0131 650 5921 The University of Edinburgh is a charitable body, registered in Scotland, with registration number SC005336. -------------- next part -------------- program mainprogram use petsc use pressure_solver implicit none #include "finclude/petscdef.h" ! Many declarations deleted. PetscInt :: dof,stencil_width parameter (dof = 1, stencil_width = 1) DM :: da KSP :: ksp PetscInt :: petsc_y_ranges(0:num_procs_x-1) PetscReal :: rtol, abstol, dtol PetscInt :: maxits, its KSPType :: solver_type PC :: pc PCType :: pc_type PCSide :: pc_side double precision :: div_grad_p, residual MatNullSpace :: nullspace PetscReal :: rnorm ! **************************************************************************************** ! MPI stuff call PetscInitialize(PETSC_NULL_CHARACTER,ierr) call MPI_Comm_size(PETSC_COMM_WORLD,num_procs,ierr) call MPI_Comm_rank(PETSC_COMM_WORLD,my_id,ierr) if(my_id==0)then write(*,*) 'num_procs=', num_procs if(num_procs_x*num_procs_y*num_procs_z/=num_procs)then write(*,*) 'Error 1: domain decomposition inconsistent with number of processors, exiting' stop end if if(num_procs_z/=1)then write(*,*) 'Error 2: domain decomposition inconsistent with number of processors, exiting' stop end if if((mod(maxl-1,num_procs_x)/=0).or.(mod(maxm-1,num_procs_y)/=0))then write(*,*) 'Error 3: number of processors must evenly divide (grid dimension-1), exiting' stop end if end if dims(1) = num_procs_x dims(2) = num_procs_y dims(3) = num_procs_z periodic(1) = .false. periodic(2) = .true. periodic(3) = .false. reorder = .false. 
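! The calls below reproduce the existing 2-D Cartesian domain decomposition and then
! hand DMDACreate3d matching ownership ranges (petsc_y_ranges), so that the PETSc DMDA
! decomposition lines up with the one used by the rest of the code. The periodic second
! dimension here corresponds to the first (periodic) DMDA dimension, because the first
! two dimensions are swapped for PETSc (see the comments in pressure_solver).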
Call mpi_cart_create(PETSC_COMM_WORLD,Ndim,dims,periodic,reorder,comm2d_quasiperiodic,ierr) Call mpi_comm_rank( comm2d_quasiperiodic,my_id,ierr) Call mpi_cart_coords(comm2d_quasiperiodic,my_id,Ndim,coords,ierr) Call get_mpi_neighbours(neighbours,comm2d_quasiperiodic) call mpi_decomp_2d(sx,ex,sy,ey,n_local_x,n_local_y,maxl,maxm,coords,dims,Ndim) petsc_y_ranges = n_local_x petsc_y_ranges(0) = n_local_x + 1 petsc_y_ranges(num_procs_x-1) = n_local_x + 1 call DMDACreate3d(PETSC_COMM_WORLD, & DMDA_BOUNDARY_PERIODIC, & DMDA_BOUNDARY_NONE, & DMDA_BOUNDARY_NONE, & DMDA_STENCIL_BOX, & global_dim_x, global_dim_y+2, global_dim_z+2, & num_procs_y, num_procs_x, num_procs_z, & dof, stencil_width, & PETSC_NULL_INTEGER, & petsc_y_ranges, & PETSC_NULL_INTEGER, & da, ierr) call KSPCreate(PETSC_COMM_WORLD, ksp, ierr) call KSPSetFromOptions(ksp, ierr) call DMSetInitialGuess(da, copy_pressure_in_to_petsc, ierr) call KSPSetComputeRHS(ksp, compute_rhs, PETSC_NULL_OBJECT, ierr) call KSPSetComputeOperators(ksp, compute_matrix, PETSC_NULL_OBJECT, ierr) call KSPSetDM(ksp, da, ierr) call MatNullSpaceCreate(PETSC_COMM_WORLD, PETSC_TRUE, 0, PETSC_NULL_OBJECT, nullspace, ierr) call KSPSetNullSpace(ksp, nullspace, ierr) call MatNullSpaceDestroy(nullspace, ierr) call calculate_indices(da, sx, sy, ex, ey, n_local_x, n_local_y, my_id, ierr) ! Initialisation code deleted. ! Time loop do iteration=1,n_timesteps ! Velocity calculations (including RHS_p) deleted. call KSPSolve(ksp, PETSC_NULL_OBJECT, PETSC_NULL_OBJECT, ierr) if (my_id==60) then call output_converged_reason(ksp) call KSPGetResidualNorm(ksp, rnorm, ierr) write(*, *) 'approximate, preconditioned, residual norm', rnorm call KSPGetIterationNumber(ksp, its, ierr) write(*, *) 'iterations =', its end if call copy_pressure_out_of_petsc(ksp, ierr) ! Some more velocity stuff deleted. end do call PetscFinalize(ierr) end program mainprogram -------------- next part -------------- module pressure_solver use petsc implicit none #include "finclude/petscdef.h" PetscInt :: maxl, maxm, maxn parameter (maxl = 481, maxm = 153, maxn = 153) ! Note that the first two dimensions are swapped so that as far as PETSc is concerned ! it is the first dimension that is periodic. PetscInt :: global_dim_x, global_dim_y, global_dim_z parameter (global_dim_x = maxm-1, global_dim_y = maxl-1, global_dim_z = maxn-1) PetscInt :: x_start, y_start, z_start, x_width, y_width, z_width PetscInt :: x_max, y_max, z_max PetscInt :: x_start_ghost, y_start_ghost, z_start_ghost PetscInt :: x_width_ghost, y_width_ghost, z_width_ghost PetscInt :: x_max_ghost, y_max_ghost, z_max_ghost PetscScalar, allocatable, dimension(:, :, :) :: pres PetscScalar :: dx, dy, dz, dt PetscScalar, allocatable, dimension(:, :, :) :: RHS_p, u3 PetscScalar :: u_inlet(0:global_dim_z-1) contains subroutine calculate_indices(da, sx, sy, ex, ey, n_local_x, n_local_y, rank, ierr) implicit none DM :: da PetscInt :: sx, sy, ex, ey, n_local_x, n_local_y, rank PetscInt :: ierr call DMDAGetCorners(da, x_start, y_start, z_start, x_width, y_width, z_width, ierr) x_max = x_start + x_width - 1 y_max = y_start + y_width - 1 z_max = z_start + z_width - 1 if (sx==1 .AND. ((sx-1).NE.y_start .OR. ex/=y_max)) then write(*, *) 'ID1', rank, 'sx-1, y_start, ex, y_max, n_local_x, y_width', sx-1, y_start, ex, y_max, n_local_x, y_width end if if (ex==(maxl-1) .AND. (sx.NE.y_start .OR. (ex+1)/=y_max)) then write(*, *) 'ID2', rank, 'sx, y_start, ex+1, y_max, n_local_x, y_width', sx, y_start, ex+1, y_max, n_local_x, y_width end if if ((sx/=1 .AND. 
ex/=(maxl-1)) .AND. (sx/=y_start .OR. ex/=y_max)) then write(*, *) 'ID3', rank, 'sx, y_start, ex, y_max, n_local_x, y_width', sx, y_start, ex, y_max, n_local_x, y_width end if if (sy/=(x_start+1) .OR. ey/=(x_max+1)) then write(*, *) 'ID4', rank, 'sy, x_start, ey, x_max, n_local_y, x_width', sy, x_start, ey, x_max, n_local_y, x_width end if call DMDAGetGhostCorners(da, x_start_ghost, y_start_ghost, z_start_ghost, & x_width_ghost, y_width_ghost, z_width_ghost, ierr) x_max_ghost = x_start_ghost + x_width_ghost - 1 y_max_ghost = y_start_ghost + y_width_ghost - 1 z_max_ghost = z_start_ghost + z_width_ghost - 1 if ((sx-1)/=y_start_ghost .OR. (ex+1)/=y_max_ghost) then write(*, *) 'GHOST1', rank, 'sx-1, y_start_ghost, ex+1, y_max_ghost', & sx-1, y_start_ghost, ex+1, y_max_ghost end if if ((sy-1)/=(x_start_ghost+1) .OR. (ey+1)/=(x_max_ghost+1)) then write(*, *) 'GHOST2', rank, 'sy-1, x_start_ghost+1, ey+1, x_max_ghost+1', & sy-1, x_start_ghost+1, ey+1, x_max_ghost+1 end if if (z_start/=z_start_ghost .OR. z_width/=z_width_ghost) then write(*, *) 'GHOST3', rank, 'z_start, z_start_ghost, z_width, z_width_ghost', & z_start, z_start_ghost, z_width, z_width_ghost end if end subroutine calculate_indices subroutine copy_pressure_in_to_petsc(dm, x, ierr) implicit none ! This routine copies the data generated by the other parts of the TPLS code into ! locations where they can be accessed by PETSc. ! The global pressure array in the non-PETSc code is pres_global(0:maxl, 0:maxm, 0:maxn). ! This includes a bounday layer on all sides even though the second dimension is periodic ! as the periodic structure is implemented by copying data one interior face of the cuboid ! to the opposite face in the boundary layer. ! So the interior is (1:maxl-1, 1:maxm-1, 1:maxn-1). ! The indexing of the data in the PETSc code is different for two reasons. ! * PETSc lays out data on processes differently from the way that it is done in the ! non-PETSc code which necessitates swapping of the first two dimensions. ! * PETSc is instructed to impose the periodic boundary condition behind the scenes ! (through specifying DMDA_BOUNDARY_PERIODIC for the appropriate dimesnion). The ! ghost points that are required to do this are not visible in the global vector, ! but they do appear in the local vectors. ! Consequently the interior in the PETSc code is (0:maxm-2, 1:maxl-1, 1:maxn-1) ! which may written as (0:global_dim_x-1, 1:global_dim_y, 1:global_dim_z) ! Including the explicit boundaries we have ! (0:global_dim_x-1, 0:global_dim_y+1, 0:global_dim_z+1) or (0:maxm-2, 0:maxl, 0:maxn) ! Including the ghost points supplied by PETSc we have ! (-1:global_dim_x, 0:global_dim_y+1, 0:global_dim_z+1) or (-1:maxm-1, 0:maxl, 0:maxn) ! Let a pressure datum have coordinates (i1, j1, k1) in the non-PETSc code and ! (i2, j2, k2) in the PETSc code, then ! i2 = j1 - 1 ! j2 = i1 ! k2 = k1, or ! i1 = j2 ! j1 = i2+1 ! k1 = k2. DM, intent(inout) :: dm Vec, intent(inout) :: x ! x is a global vector. PetscErrorCode, intent(inout) :: ierr PetscScalar, pointer, dimension(:, :, :) :: x_3da PetscInt :: i, j, k call DMDAVecGetArrayF90(dm, x, x_3da, ierr) do k = z_start, z_max do j = y_start, y_max do i = x_start, x_max x_3da(i, j, k) = pres(j, i+1, k) end do end do end do call DMDAVecRestoreArrayF90(dm, x, x_3da, ierr) end subroutine copy_pressure_in_to_petsc subroutine compute_rhs(ksp, b, dummy, ierr) implicit none KSP, intent(inout) :: ksp Vec, intent(inout) :: b ! b is a global vector. 
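! dummy is the user-context argument registered with KSPSetComputeRHS
! (PETSC_NULL_OBJECT in the main program); it is not used here.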
integer, intent(inout) :: dummy(*) PetscErrorCode, intent(inout) :: ierr PetscScalar, pointer, dimension(:, :, :) :: b_3da PetscInt :: i, j, k DM :: dm call KSPGetDM(ksp, dm, ierr) call DMDAVecGetArrayF90(dm, b, b_3da, ierr) do k = z_start, z_max do j = y_start, y_max do i = x_start, x_max b_3da(i, j, k) = -RHS_p(j, i+1, k) end do end do end do if (y_start==0) then do k = 1, global_dim_z do i = x_start, x_max b_3da(i, 0, k) = (dx/dt)*(u_inlet(k-1)-u3(0, i ,k-1))/dy end do end do end if call DMDAVecRestoreArrayF90(dm, b, b_3da, ierr) end subroutine compute_rhs subroutine compute_matrix(ksp, A, B, str, dummy, ierr) implicit none KSP, intent(inout) :: ksp Mat, intent(inout) :: A, B MatStructure, intent(inout) :: str integer, intent(inout) :: dummy(*) PetscErrorCode, intent(inout) :: ierr PetscInt :: i, j, k PetscScalar :: v(7) MatStencil :: row(4), col(4, 7) do k = z_start, z_max do j = y_start, y_max do i = x_start, x_max row(MatStencil_i) = i row(MatStencil_j) = j row(MatStencil_k) = k ! Deal with the edges of the cuboid. ! The edges with i=0 and i=(global_dim_x-1) are taken care of by the periodic boundary condition. if ((j==0 .AND. k==0) .OR. & (j==(global_dim_y+1) .AND. k==0)) then v(1) = 1/dz col(MatStencil_i, 1) = i col(MatStencil_j, 1) = j col(MatStencil_k, 1) = k v(2) = -1/dz col(MatStencil_i, 2) = i col(MatStencil_j, 2) = j col(MatStencil_k, 2) = k+1 call MatSetValuesStencil(B, 1, row, 2, col, v, INSERT_VALUES, ierr) else if ((j==0 .AND. k==(global_dim_z+1)) .OR. & (j==(global_dim_y+1) .AND. k==(global_dim_z+1))) then v(1) = -1/dz col(MatStencil_i, 1) = i col(MatStencil_j, 1) = j col(MatStencil_k, 1) = k v(2) = 1/dz col(MatStencil_i, 2) = i col(MatStencil_j, 2) = j col(MatStencil_k, 2) = k-1 call MatSetValuesStencil(B, 1, row, 2, col, v, INSERT_VALUES, ierr) ! Deal with the faces of the cuboid, excluding the edges and the corners. ! The faces i=0 and i=(global_dim_x-1) are taken care of by the periodic boundary condition. else if (j==0) then ! Von Neumann boundary condition on y=0 boundary. v(1) = 1/dy col(MatStencil_i, 1) = i col(MatStencil_j, 1) = j col(MatStencil_k, 1) = k v(2) = -1/dy col(MatStencil_i, 2) = i col(MatStencil_j, 2) = j+1 col(MatStencil_k, 2) = k call MatSetValuesStencil(B, 1, row, 2, col, v, INSERT_VALUES, ierr) else if (j==(global_dim_y+1)) then ! Von Neumann boundary condition on y=(global_dim_y+1) boundary. v(1) = -1/dy col(MatStencil_i, 1) = i col(MatStencil_j, 1) = j col(MatStencil_k, 1) = k v(2) = 1/dy col(MatStencil_i, 2) = i col(MatStencil_j, 2) = j-1 col(MatStencil_k, 2) = k call MatSetValuesStencil(B, 1, row, 2, col, v, INSERT_VALUES, ierr) else if (k==0) then ! Von Neumann boundary condition on z=0 boundary. v(1) = 1/dz col(MatStencil_i, 1) = i col(MatStencil_j, 1) = j col(MatStencil_k, 1) = k v(2) = -1/dz col(MatStencil_i, 2) = i col(MatStencil_j, 2) = j col(MatStencil_k, 2) = k+1 call MatSetValuesStencil(B, 1, row, 2, col, v, INSERT_VALUES, ierr) else if (k==(global_dim_z+1)) then ! Von Neumann boundary conditions on z=(global_dim_z+1) boundary. v(1) = -1/dz col(MatStencil_i, 1) = i col(MatStencil_j, 1) = j col(MatStencil_k, 1) = k v(2) = 1/dz col(MatStencil_i, 2) = i col(MatStencil_j, 2) = j col(MatStencil_k, 2) = k-1 call MatSetValuesStencil(B, 1, row, 2, col, v, INSERT_VALUES, ierr) else ! Deal with the interior. ! Laplacian in 3D. 
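! Seven-point stencil for the (negative) Laplacian: -1/h**2 on each of the six
! neighbouring points and 2*(1/dx**2 + 1/dy**2 + 1/dz**2) on the diagonal. Together
! with the Neumann/periodic boundary rows above, the operator has the constant null
! space that the main program registers via MatNullSpaceCreate/KSPSetNullSpace.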
v(1) = -1/dz**2 col(MatStencil_i, 1) = i col(MatStencil_j, 1) = j col(MatStencil_k, 1) = k-1 v(2) = -1/dy**2 col(MatStencil_i, 2) = i col(MatStencil_j, 2) = j-1 col(MatStencil_k, 2) = k v(3) = -1/dx**2 col(MatStencil_i, 3) = i-1 col(MatStencil_j, 3) = j col(MatStencil_k, 3) = k v(4) = 2*(1/dx**2+1/dy**2+1/dz**2) col(MatStencil_i, 4) = i col(MatStencil_j, 4) = j col(MatStencil_k, 4) = k v(5) = -1/dx**2 col(MatStencil_i, 5) = i+1 col(MatStencil_j, 5) = j col(MatStencil_k, 5) = k v(6) = -1/dy**2 col(MatStencil_i, 6) = i col(MatStencil_j, 6) = j+1 col(MatStencil_k, 6) = k v(7) = -1/dz**2 col(MatStencil_i, 7) = i col(MatStencil_j, 7) = j col(MatStencil_k, 7) = k+1 call MatSetValuesStencil(B, 1, row, 7, col, v, INSERT_VALUES, ierr) end if end do end do end do call MatAssemblyBegin(B, MAT_FINAL_ASSEMBLY, ierr) call MatAssemblyEnd(B, MAT_FINAL_ASSEMBLY, ierr) str = SAME_NONZERO_PATTERN if (A.ne.B) then call MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY, ierr) call MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY, ierr) endif end subroutine compute_matrix subroutine copy_pressure_out_of_petsc(ksp, ierr) implicit none KSP, intent(in) :: ksp PetscErrorCode, intent(out) :: ierr DM :: dm Vec :: global_vector Vec :: local_vector PetscScalar, pointer, dimension(:, :, :) :: local_vector_3da, global_vector_3da PetscInt :: i, j, k call KSPGetDM(ksp, dm, ierr) call KSPGetSolution(ksp, global_vector, ierr) call DMCreateLocalVector(dm, local_vector, ierr) call DMGlobalToLocalBegin(dm, global_vector, INSERT_VALUES, local_vector, ierr) call DMGlobalToLocalEnd(dm, global_vector, INSERT_VALUES, local_vector, ierr) call DMDAVecGetArrayF90(dm, local_vector, local_vector_3da, ierr) call DMDAVecGetArrayF90(dm, global_vector, global_vector_3da, ierr) do k = z_start_ghost, z_max_ghost do j = y_start_ghost, y_max_ghost do i = x_start_ghost, x_max_ghost pres(j, i+1, k) = local_vector_3da(i, j, k) end do end do end do call DMDAVecRestoreArrayF90(dm, global_vector, global_vector_3da, ierr) call DMDAVecRestoreArrayF90(dm, local_vector, local_vector_3da, ierr) ! The following call is required to prevent a memory leak. call VecDestroy(local_vector, ierr) end subroutine copy_pressure_out_of_petsc end module pressure_solver From knepley at gmail.com Mon Apr 22 07:12:48 2013 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 22 Apr 2013 08:12:48 -0400 Subject: [petsc-users] Advice Being Sought In-Reply-To: <51752589.4080001@ed.ac.uk> References: <51752589.4080001@ed.ac.uk> Message-ID: On Mon, Apr 22, 2013 at 7:56 AM, David Scott wrote: > Hello, > > I am working on a fluid-mechanical code to solve the two-phase > Navier?Stokes equations with levelset interface capturing. I have been > asked to replace the pressure calculation which uses the SOR and Jacobi > iterative schemes with a Krylov subspace method. I have done this and the > code is working but as I have never used PETSc before I would like to know > if improvements to my code, or the run time parameters that I am using, > could be made. > > I am using GMRES with a Block Jacobi pre-conditioner. I have tried > Conjugate Gradient with a Block Jacobi pre-conditioner but it diverges. If > I use GMRES for the first few thousand time steps and then swap to CG it > does converge but the speed of execution is somewhat reduced. > Krylov methods do not work with preconditioners. You have a Poisson problem, so as abundantly documented in the literature, you should use multigrid. 
The easiest thing to try is -pc_type gamg -pc_gamg_agg_nsmooths 1 Thanks, Matt > I have attached relevant excerpts from the code. > > Yours sincerely, > > David Scott > -- > Dr. D. M. Scott > Applications Consultant > Edinburgh Parallel Computing Centre > Tel. 0131 650 5921 > > The University of Edinburgh is a charitable body, registered in > Scotland, with registration number SC005336. > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From mark.adams at columbia.edu Mon Apr 22 07:16:26 2013 From: mark.adams at columbia.edu (Mark F. Adams) Date: Mon, 22 Apr 2013 08:16:26 -0400 Subject: [petsc-users] Advice Being Sought In-Reply-To: <51752589.4080001@ed.ac.uk> References: <51752589.4080001@ed.ac.uk> Message-ID: <8BDE3638-84C7-4CDE-871D-A58DBD9C7A8B@columbia.edu> So it look alike your operator is not symmetric but is a scalar Laplacian with constant coefficients (?) You should using AMG. '-pc_type hypre' if you are configured with hypre and '-pc_type gamg' if not. On Apr 22, 2013, at 7:56 AM, David Scott wrote: > Hello, > > I am working on a fluid-mechanical code to solve the two-phase Navier?Stokes equations with levelset interface capturing. I have been asked to replace the pressure calculation which uses the SOR and Jacobi iterative schemes with a Krylov subspace method. I have done this and the code is working but as I have never used PETSc before I would like to know if improvements to my code, or the run time parameters that I am using, could be made. > > I am using GMRES with a Block Jacobi pre-conditioner. I have tried Conjugate Gradient with a Block Jacobi pre-conditioner but it diverges. If I use GMRES for the first few thousand time steps and then swap to CG it does converge but the speed of execution is somewhat reduced. > > I have attached relevant excerpts from the code. > > Yours sincerely, > > David Scott > -- > Dr. D. M. Scott > Applications Consultant > Edinburgh Parallel Computing Centre > Tel. 0131 650 5921 > > The University of Edinburgh is a charitable body, registered in > Scotland, with registration number SC005336. > From netz at essert.name Mon Apr 22 08:14:03 2013 From: netz at essert.name (Jan Essert) Date: Mon, 22 Apr 2013 15:14:03 +0200 Subject: [petsc-users] Assembling a matrix from submatrices Message-ID: <598ed300da430733b12bd79040f9c683.squirrel@jan.essert.name> Dear list, I would like to construct a large, sparse matrix M out of four submatrices. (A B) (C D) These submatrices are obtained as results of MatMatMult multiplications of other sparse matrices. I have tried the following procedure for all four submatrices (here only for A) Mat A; MatGetLocalSubMatrix(M, rows1, cols1, MAT_INITIAL_MATRIX, &A); MatMatMult(A1, A2, MAT_INITIAL_MATRIX, PETSC_DEFAULT, &A); MatRestoreLocalSubMatrix(M, rows1, cols1, &A); These lines use appropriate ISs rows1 and cols1 which contain the first rows and columns of M that correspond to the submatrix A. This, however results in an empty matrix M. What am I doing wrong? If this is not the right approach, how can I do it better? Manually copying the values from the results of the multiplications into M takes forever, unfortunately.. Thanks for your help! 
Jan From knepley at gmail.com Mon Apr 22 09:28:34 2013 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 22 Apr 2013 10:28:34 -0400 Subject: [petsc-users] Assembling a matrix from submatrices In-Reply-To: <598ed300da430733b12bd79040f9c683.squirrel@jan.essert.name> References: <598ed300da430733b12bd79040f9c683.squirrel@jan.essert.name> Message-ID: On Mon, Apr 22, 2013 at 9:14 AM, Jan Essert wrote: > Dear list, > > I would like to construct a large, sparse matrix M out of four submatrices. > > (A B) > (C D) > > These submatrices are obtained as results of MatMatMult multiplications of > other sparse matrices. > I have tried the following procedure for all four submatrices (here only > for A) > > Mat A; > MatGetLocalSubMatrix(M, rows1, cols1, MAT_INITIAL_MATRIX, &A); > MatMatMult(A1, A2, MAT_INITIAL_MATRIX, PETSC_DEFAULT, &A); > MatRestoreLocalSubMatrix(M, rows1, cols1, &A); > MAT_INITIAL_MATRIX creates the matrix. You need MAT_REUSE_MATRIX. Matt > These lines use appropriate ISs rows1 and cols1 which contain the first > rows and columns of M that correspond to the submatrix A. > > This, however, results in an empty matrix M. > > What am I doing wrong? > If this is not the right approach, how can I do it better? > Manually copying the values from the results of the multiplications into M > takes forever, unfortunately. > > Thanks for your help! > Jan > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From tahar.amari at polytechnique.edu Mon Apr 22 09:36:16 2013 From: tahar.amari at polytechnique.edu (Tahar Amari) Date: Mon, 22 Apr 2013 16:36:16 +0200 Subject: [petsc-users] Advice Being Sought In-Reply-To: References: <51752589.4080001@ed.ac.uk> Message-ID: <65D953DE-B112-4035-99ED-7ADD2B2F6E73@polytechnique.edu> Hello Matt, I am not sure I understand your point. I guess you mean that inside PETSc only a version of GMRES working with preconditioning is not yet implemented. Am I right? Tahar On 22 Apr 2013, at 14:12, Matthew Knepley wrote: > On Mon, Apr 22, 2013 at 7:56 AM, David Scott wrote: > Hello, > > I am working on a fluid-mechanical code to solve the two-phase Navier-Stokes equations with levelset interface capturing. I have been asked to replace the pressure calculation which uses the SOR and Jacobi iterative schemes with a Krylov subspace method. I have done this and the code is working but as I have never used PETSc before I would like to know if improvements to my code, or the run time parameters that I am using, could be made. > > I am using GMRES with a Block Jacobi pre-conditioner. I have tried Conjugate Gradient with a Block Jacobi pre-conditioner but it diverges. If I use GMRES for the first few thousand time steps and then swap to CG it does converge but the speed of execution is somewhat reduced. > > Krylov methods do not work with preconditioners. You have a Poisson problem, so as abundantly documented in the literature, you should use multigrid. The easiest thing to try is > > -pc_type gamg -pc_gamg_agg_nsmooths 1 > > Thanks, > > Matt > > I have attached relevant excerpts from the code. >
> > > > -- > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. > -- Norbert Wiener -------------------------------------------- T. Amari Centre de Physique Theorique Ecole Polytechnique 91128 Palaiseau Cedex France tel : 33 1 69 33 42 52 fax: 33 1 69 33 49 49 email: URL : http://www.cpht.polytechnique.fr/cpht/amari -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Mon Apr 22 10:07:47 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Mon, 22 Apr 2013 09:07:47 -0600 Subject: [petsc-users] Assembling a matrix from submatrices In-Reply-To: References: <598ed300da430733b12bd79040f9c683.squirrel@jan.essert.name> Message-ID: <87wqrufvss.fsf@mcs.anl.gov> Matthew Knepley writes: > On Mon, Apr 22, 2013 at 9:14 AM, Jan Essert wrote: > >> Dear list, >> >> I would like to construct a large, sparse matrix M out of four submatrices. >> >> (A B) >> (C D) >> >> These submatrices are obtained as results of MatMatMult multiplications of >> other sparse matrices. >> I have tried the following procedure for all four submatrices (here only >> for A) >> >> Mat A; >> MatGetLocalSubMatrix(M, rows1, cols1, MAT_INITIAL_MATRIX, &A); >> MatMatMult(A1, A2, MAT_INITIAL_MATRIX, PETSC_DEFAULT, &A); >> MatRestoreLocalSubMatrix(M, rows1, cols1, &A); >> > > MAT_INITIAL_MATRIX creates the matrix. You need MAT_RESUE_MATRIX. This won't work either because MatGetLocalSubMatrix() does not work that way and because MAT_REUSE_MATRIX requires that you have already used that matrix for the given operation. You can create a MATNEST for the coupled matrix, but if you want it fully assembled, you'll have to loop over the rows and call MatSetValues. We have a partial implementation of MatConvert_Nest_AIJ that, once completed, will provide a nicer interface for this. From jedbrown at mcs.anl.gov Mon Apr 22 10:14:47 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Mon, 22 Apr 2013 09:14:47 -0600 Subject: [petsc-users] Advice Being Sought In-Reply-To: <65D953DE-B112-4035-99ED-7ADD2B2F6E73@polytechnique.edu> References: <51752589.4080001@ed.ac.uk> <65D953DE-B112-4035-99ED-7ADD2B2F6E73@polytechnique.edu> Message-ID: <87txmyfvh4.fsf@mcs.anl.gov> Tahar Amari writes: > I am not sure I understand your point. I guess you mean that inside > petsc only a version of Gmres working with preconditioning is not yet > implemanted , Am I right ? I have no idea what Matt intended to say, but all PETSc Krylov methods use preconditioners. However, as Matt and Mark said, multigrid is much better than block Jacobi as a preconditioner for these problems. From tahar.amari at polytechnique.edu Mon Apr 22 10:18:06 2013 From: tahar.amari at polytechnique.edu (Tahar Amari) Date: Mon, 22 Apr 2013 17:18:06 +0200 Subject: [petsc-users] Advice Being Sought In-Reply-To: <87txmyfvh4.fsf@mcs.anl.gov> References: <51752589.4080001@ed.ac.uk> <65D953DE-B112-4035-99ED-7ADD2B2F6E73@polytechnique.edu> <87txmyfvh4.fsf@mcs.anl.gov> Message-ID: <9D78C293-9548-4188-9770-862E98409BA6@polytechnique.edu> Great Thanks for clarifying . Le 22 avr. 2013 ? 17:14, Jed Brown a ?crit : > Tahar Amari writes: > >> I am not sure I understand your point. I guess you mean that inside >> petsc only a version of Gmres working with preconditioning is not yet >> implemanted , Am I right ? > > I have no idea what Matt intended to say, but all PETSc Krylov methods > use preconditioners. 
However, as Matt and Mark said, multigrid is much > better than block Jacobi as a preconditioner for these problems. -------------- next part -------------- An HTML attachment was scrubbed... URL: From d.scott at ed.ac.uk Mon Apr 22 11:26:04 2013 From: d.scott at ed.ac.uk (David Scott) Date: Mon, 22 Apr 2013 17:26:04 +0100 Subject: [petsc-users] Advice Being Sought In-Reply-To: References: <51752589.4080001@ed.ac.uk> Message-ID: <5175649C.6060906@ed.ac.uk> Thanks for the suggestion. I had tried '-pc_type gamg -pc_gamg_agg_nsmooths 1' with an earlier version of the code without success. I have tried it again but I get NaN's after only 90 time steps whereas with block Jacobi it runs quite happily for 36,000 time steps and produces physically sensible results. David On 22/04/2013 13:12, Matthew Knepley wrote: > On Mon, Apr 22, 2013 at 7:56 AM, David Scott > wrote: > > Hello, > > I am working on a fluid-mechanical code to solve the two-phase > Navier?Stokes equations with levelset interface capturing. I have > been asked to replace the pressure calculation which uses the SOR > and Jacobi iterative schemes with a Krylov subspace method. I have > done this and the code is working but as I have never used PETSc > before I would like to know if improvements to my code, or the run > time parameters that I am using, could be made. > > I am using GMRES with a Block Jacobi pre-conditioner. I have tried > Conjugate Gradient with a Block Jacobi pre-conditioner but it > diverges. If I use GMRES for the first few thousand time steps and > then swap to CG it does converge but the speed of execution is > somewhat reduced. > > > Krylov methods do not work with preconditioners. You have a Poisson > problem, so as abundantly documented in the literature, you should use > multigrid. The easiest thing to try is > > -pc_type gamg -pc_gamg_agg_nsmooths 1 > > Thanks, > > Matt > > I have attached relevant excerpts from the code. > > Yours sincerely, > > David Scott > -- > Dr. D. M. Scott > Applications Consultant > Edinburgh Parallel Computing Centre > Tel. 0131 650 5921 > > The University of Edinburgh is a charitable body, registered in > Scotland, with registration number SC005336. > > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which > their experiments lead. > -- Norbert Wiener -- Dr. D. M. Scott Applications Consultant Edinburgh Parallel Computing Centre Tel. 0131 650 5921 The University of Edinburgh is a charitable body, registered in Scotland, with registration number SC005336. From knepley at gmail.com Mon Apr 22 12:51:39 2013 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 22 Apr 2013 13:51:39 -0400 Subject: [petsc-users] DMPlex submesh and Boundary mesh In-Reply-To: References: Message-ID: On Mon, Apr 22, 2013 at 1:44 AM, Dharmendar Reddy wrote: > > Looks like there is not fortran binding for this function, can you please > add it ? > Change made and Fortran binding added. Thanks, Matt > Thanks > Reddy > > > On Sun, Apr 21, 2013 at 10:48 PM, Dharmendar Reddy < > dharmareddy84 at gmail.com> wrote: > >> Hello, >> I see that i can extract submesh from a DM object using >> >> DMPlexCreateSubmesh(DM dm, const char vertexLabel[], DM *subdm) >> >> >> Can i request for an interface where i can extract >> >> DMPlexCreateSubmesh(DM dm, const char vertexLabel[], PetscInt value, DM *subdm) >> >> I have to create a lot of subdms typically few hundred. 
I can always >> create required number of unique labels, but i was wondering if i can group >> them with single label but different values of strata. >> >> How do i extract boundary nodes of a given mesh ? For example i have a >> triangular mesh on square boundary, I need to mark the nodes on the square >> boundary. >> >> Thanks >> Reddy >> -- >> ----------------------------------------------------- >> Dharmendar Reddy Palle >> Graduate Student >> Microelectronics Research center, >> University of Texas at Austin, >> 10100 Burnet Road, Bldg. 160 >> MER 2.608F, TX 78758-4445 >> e-mail: dharmareddy84 at gmail.com >> Phone: +1-512-350-9082 >> United States of America. >> Homepage: https://webspace.utexas.edu/~dpr342 >> > > > > -- > ----------------------------------------------------- > Dharmendar Reddy Palle > Graduate Student > Microelectronics Research center, > University of Texas at Austin, > 10100 Burnet Road, Bldg. 160 > MER 2.608F, TX 78758-4445 > e-mail: dharmareddy84 at gmail.com > Phone: +1-512-350-9082 > United States of America. > Homepage: https://webspace.utexas.edu/~dpr342 > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From dharmareddy84 at gmail.com Mon Apr 22 18:04:40 2013 From: dharmareddy84 at gmail.com (Dharmendar Reddy) Date: Mon, 22 Apr 2013 18:04:40 -0500 Subject: [petsc-users] DMPlex submesh and Boundary mesh In-Reply-To: References: Message-ID: Thanks. The code works. Now, i need to access the map from points in subdm to points in dm. I need to use this function right ? DMPlexCreateSubpointIS(DM dm, IS *subpointIS) Fortra binding please.. Also, i was thinking it may be of use to have interface like this.. DMPlexCreateSubpointIS(DM dm, PetscInt pointDimInSubdm, IS *subpointIS) this way i can have map from say (dim)-cells in subdm to corresponding (dim)-cells in dm if a subdm is a lower dimensional mesh. Of course i can use the first interface by checking the dim of the point in subpointIs before using. thanks Reddy On Mon, Apr 22, 2013 at 12:51 PM, Matthew Knepley wrote: > On Mon, Apr 22, 2013 at 1:44 AM, Dharmendar Reddy > wrote: > >> >> Looks like there is not fortran binding for this function, can you please >> add it ? >> > > Change made and Fortran binding added. > > Thanks, > > Matt > > >> Thanks >> Reddy >> >> >> On Sun, Apr 21, 2013 at 10:48 PM, Dharmendar Reddy < >> dharmareddy84 at gmail.com> wrote: >> >>> Hello, >>> I see that i can extract submesh from a DM object using >>> >>> DMPlexCreateSubmesh(DM dm, const char vertexLabel[], DM *subdm) >>> >>> >>> Can i request for an interface where i can extract >>> >>> DMPlexCreateSubmesh(DM dm, const char vertexLabel[], PetscInt value, DM *subdm) >>> >>> I have to create a lot of subdms typically few hundred. I can always >>> create required number of unique labels, but i was wondering if i can group >>> them with single label but different values of strata. >>> >>> How do i extract boundary nodes of a given mesh ? For example i have a >>> triangular mesh on square boundary, I need to mark the nodes on the square >>> boundary. >>> >>> Thanks >>> Reddy >>> -- >>> ----------------------------------------------------- >>> Dharmendar Reddy Palle >>> Graduate Student >>> Microelectronics Research center, >>> University of Texas at Austin, >>> 10100 Burnet Road, Bldg. 
160 >>> MER 2.608F, TX 78758-4445 >>> e-mail: dharmareddy84 at gmail.com >>> Phone: +1-512-350-9082 >>> United States of America. >>> Homepage: https://webspace.utexas.edu/~dpr342 >>> >> >> >> >> -- >> ----------------------------------------------------- >> Dharmendar Reddy Palle >> Graduate Student >> Microelectronics Research center, >> University of Texas at Austin, >> 10100 Burnet Road, Bldg. 160 >> MER 2.608F, TX 78758-4445 >> e-mail: dharmareddy84 at gmail.com >> Phone: +1-512-350-9082 >> United States of America. >> Homepage: https://webspace.utexas.edu/~dpr342 >> > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > -- ----------------------------------------------------- Dharmendar Reddy Palle Graduate Student Microelectronics Research center, University of Texas at Austin, 10100 Burnet Road, Bldg. 160 MER 2.608F, TX 78758-4445 e-mail: dharmareddy84 at gmail.com Phone: +1-512-350-9082 United States of America. Homepage: https://webspace.utexas.edu/~dpr342 -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Mon Apr 22 20:36:02 2013 From: bsmith at mcs.anl.gov (Barry Smith) Date: Mon, 22 Apr 2013 20:36:02 -0500 Subject: [petsc-users] Advice Being Sought In-Reply-To: <5175649C.6060906@ed.ac.uk> References: <51752589.4080001@ed.ac.uk> <5175649C.6060906@ed.ac.uk> Message-ID: On Apr 22, 2013, at 11:26 AM, David Scott wrote: > Thanks for the suggestion. > > I had tried '-pc_type gamg -pc_gamg_agg_nsmooths 1' with an earlier version of the code without success. I have tried it again but I get NaN's after only 90 time steps whereas with block Jacobi it runs quite happily for 36,000 time steps and produces physically sensible results. David, We would be very interested in determining what is "going wrong" with the solver here since we hope to make it robust. Would it be possible for you to use a MatView() and VecView() on the matrix and the right hand side with a binary viewer when it "goes bad" and send us the resulting file? Barry We'd run the gamg solver on your matrix and track down what is happening. > > David > > On 22/04/2013 13:12, Matthew Knepley wrote: >> On Mon, Apr 22, 2013 at 7:56 AM, David Scott > > wrote: >> >> Hello, >> >> I am working on a fluid-mechanical code to solve the two-phase >> Navier?Stokes equations with levelset interface capturing. I have >> been asked to replace the pressure calculation which uses the SOR >> and Jacobi iterative schemes with a Krylov subspace method. I have >> done this and the code is working but as I have never used PETSc >> before I would like to know if improvements to my code, or the run >> time parameters that I am using, could be made. >> >> I am using GMRES with a Block Jacobi pre-conditioner. I have tried >> Conjugate Gradient with a Block Jacobi pre-conditioner but it >> diverges. If I use GMRES for the first few thousand time steps and >> then swap to CG it does converge but the speed of execution is >> somewhat reduced. >> >> >> Krylov methods do not work with preconditioners. You have a Poisson >> problem, so as abundantly documented in the literature, you should use >> multigrid. The easiest thing to try is >> >> -pc_type gamg -pc_gamg_agg_nsmooths 1 >> >> Thanks, >> >> Matt >> >> I have attached relevant excerpts from the code. >> >> Yours sincerely, >> >> David Scott >> -- >> Dr. D. M. 
Scott >> Applications Consultant >> Edinburgh Parallel Computing Centre >> Tel. 0131 650 5921 >> >> The University of Edinburgh is a charitable body, registered in >> Scotland, with registration number SC005336. >> >> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which >> their experiments lead. >> -- Norbert Wiener > > > -- > Dr. D. M. Scott > Applications Consultant > Edinburgh Parallel Computing Centre > Tel. 0131 650 5921 > > The University of Edinburgh is a charitable body, registered in > Scotland, with registration number SC005336. From ztdepyahoo at 163.com Mon Apr 22 22:54:38 2013 From: ztdepyahoo at 163.com (=?GBK?B?tqHAz8qm?=) Date: Tue, 23 Apr 2013 11:54:38 +0800 (CST) Subject: [petsc-users] Does Petsc has diagonal preconditioner for Bicgstab algorithm Message-ID: -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Mon Apr 22 22:58:27 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Mon, 22 Apr 2013 21:58:27 -0600 Subject: [petsc-users] Does Petsc has diagonal preconditioner for Bicgstab algorithm In-Reply-To: References: Message-ID: <87mwspew4c.fsf@mcs.anl.gov> -pc_type jacobi http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/PC/PCJACOBI.html From netz at essert.name Tue Apr 23 01:54:49 2013 From: netz at essert.name (Jan Essert) Date: Tue, 23 Apr 2013 08:54:49 +0200 Subject: [petsc-users] Assembling a matrix from submatrices In-Reply-To: <87wqrufvss.fsf@mcs.anl.gov> References: <598ed300da430733b12bd79040f9c683.squirrel@jan.essert.name> <87wqrufvss.fsf@mcs.anl.gov> Message-ID: Dear Jed, > This won't work either because MatGetLocalSubMatrix() does not work that > way and because MAT_REUSE_MATRIX requires that you have already used > that matrix for the given operation. > > You can create a MATNEST for the coupled matrix, but if you want it > fully assembled, you'll have to loop over the rows and call > MatSetValues. We have a partial implementation of MatConvert_Nest_AIJ > that, once completed, will provide a nicer interface for this. Thank you, looping over the rows worked fine. I was just confused since the other method looked as if it was the right way to do this. Sorry for bothering you with this.. Jan From mark.adams at columbia.edu Tue Apr 23 03:39:21 2013 From: mark.adams at columbia.edu (Mark F. Adams) Date: Tue, 23 Apr 2013 04:39:21 -0400 Subject: [petsc-users] Advice Being Sought In-Reply-To: <5175649C.6060906@ed.ac.uk> References: <51752589.4080001@ed.ac.uk> <5175649C.6060906@ed.ac.uk> Message-ID: On Apr 22, 2013, at 12:26 PM, David Scott wrote: > Thanks for the suggestion. > > I had tried '-pc_type gamg -pc_gamg_agg_nsmooths 1' with an earlier version of the code without success. I have tried it again but I get NaN's after only 90 time steps whereas with block Jacobi it runs quite happily for 36,000 time steps and produces physically sensible results. > I have had good success with the robustness of hypre. So you might want to try that. It sounds like these pressure solves are changing over time. Using -pc_gamg_reuse_interpolation false with GAMG will make it more robust to that. > David > > On 22/04/2013 13:12, Matthew Knepley wrote: >> On Mon, Apr 22, 2013 at 7:56 AM, David Scott > > wrote: >> >> Hello, >> >> I am working on a fluid-mechanical code to solve the two-phase >> Navier?Stokes equations with levelset interface capturing. 
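A minimal command-line sketch of the two alternatives Mark suggests above for the pressure solve; the executable name and process count are illustrative assumptions, and the hypre variant presumes PETSc was configured with --download-hypre:

  # smoothed-aggregation AMG, rebuilding the interpolation for each new matrix
  mpiexec -n 4 ./pressure_solver -ksp_type gmres -pc_type gamg \
      -pc_gamg_agg_nsmooths 1 -pc_gamg_reuse_interpolation false

  # BoomerAMG from hypre as a possibly more robust alternative
  mpiexec -n 4 ./pressure_solver -ksp_type gmres -pc_type hypre -pc_hypre_type boomeramg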
I have >> been asked to replace the pressure calculation which uses the SOR >> and Jacobi iterative schemes with a Krylov subspace method. I have >> done this and the code is working but as I have never used PETSc >> before I would like to know if improvements to my code, or the run >> time parameters that I am using, could be made. >> >> I am using GMRES with a Block Jacobi pre-conditioner. I have tried >> Conjugate Gradient with a Block Jacobi pre-conditioner but it >> diverges. If I use GMRES for the first few thousand time steps and >> then swap to CG it does converge but the speed of execution is >> somewhat reduced. >> >> >> Krylov methods do not work with preconditioners. You have a Poisson >> problem, so as abundantly documented in the literature, you should use >> multigrid. The easiest thing to try is >> >> -pc_type gamg -pc_gamg_agg_nsmooths 1 >> >> Thanks, >> >> Matt >> >> I have attached relevant excerpts from the code. >> >> Yours sincerely, >> >> David Scott >> -- >> Dr. D. M. Scott >> Applications Consultant >> Edinburgh Parallel Computing Centre >> Tel. 0131 650 5921 >> >> The University of Edinburgh is a charitable body, registered in >> Scotland, with registration number SC005336. >> >> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which >> their experiments lead. >> -- Norbert Wiener > > > -- > Dr. D. M. Scott > Applications Consultant > Edinburgh Parallel Computing Centre > Tel. 0131 650 5921 > > The University of Edinburgh is a charitable body, registered in > Scotland, with registration number SC005336. > From paeanball at gmail.com Tue Apr 23 04:20:41 2013 From: paeanball at gmail.com (Bao Kai) Date: Tue, 23 Apr 2013 12:20:41 +0300 Subject: [petsc-users] Downloding petsc-dev with git does not work. Message-ID: Hi, I failed in downloading petsc-dev just now. The following is the message given. petsc-dev]$ git clone https://bitbucket.org/petsc/petsc.git Initialized empty Git repository in /home/baok/software/petsc-dev/petsc/.git/ fatal: https://bitbucket.org/petsc/petsc.git/info/refs download error - The requested URL returned error: 403 I can visit the webpage https://bitbucket.org/petsc/. Could anyone tell me what is wrong? Thanks. Kai From garnet.vaz at gmail.com Tue Apr 23 05:02:31 2013 From: garnet.vaz at gmail.com (Garnet Vaz) Date: Tue, 23 Apr 2013 03:02:31 -0700 Subject: [petsc-users] Downloding petsc-dev with git does not work. In-Reply-To: References: Message-ID: This is from a previous mail from Satish which works for me: [to download] hg clone https://bitbucket.org/petsc/petsc-hg [to get updates] cd petsc-hg && hg pull -u And ignore BuildSystem [its no longer a separate repo for petsc purpose] Hope this helps. On Tue, Apr 23, 2013 at 2:20 AM, Bao Kai wrote: > Hi, > > I failed in downloading petsc-dev just now. The following is the > message given. > > petsc-dev]$ git clone https://bitbucket.org/petsc/petsc.git > Initialized empty Git repository in > /home/baok/software/petsc-dev/petsc/.git/ > fatal: https://bitbucket.org/petsc/petsc.git/info/refs download error > - The requested URL returned error: 403 > > I can visit the webpage https://bitbucket.org/petsc/. > > Could anyone tell me what is wrong? > > Thanks. > > > Kai > -- Regards, Garnet -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jedbrown at mcs.anl.gov Tue Apr 23 06:15:32 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Tue, 23 Apr 2013 05:15:32 -0600 Subject: [petsc-users] Downloding petsc-dev with git does not work. In-Reply-To: References: Message-ID: <87haixebvv.fsf@mcs.anl.gov> Bao Kai writes: > Hi, > > I failed in downloading petsc-dev just now. The following is the > message given. > > petsc-dev]$ git clone https://bitbucket.org/petsc/petsc.git > Initialized empty Git repository in /home/baok/software/petsc-dev/petsc/.git/ > fatal: https://bitbucket.org/petsc/petsc.git/info/refs download error > - The requested URL returned error: 403 > > I can visit the webpage https://bitbucket.org/petsc/. > > Could anyone tell me what is wrong? You have a very old version of Git that doesn't understand "smart" HTTP. Either upgrade Git (recommended and easy), use SSH to access the server (must have a bitbucket account and have uploaded your ssh public key) $ git clone git at bitbucket.org/petsc/petsc.git or clone from the mirror that supports "git://" protocol: $ git clone git://github.com/petsc/petsc.git From jedbrown at mcs.anl.gov Tue Apr 23 06:21:42 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Tue, 23 Apr 2013 05:21:42 -0600 Subject: [petsc-users] Assembling a matrix from submatrices In-Reply-To: References: <598ed300da430733b12bd79040f9c683.squirrel@jan.essert.name> <87wqrufvss.fsf@mcs.anl.gov> Message-ID: <87ehe1ebll.fsf@mcs.anl.gov> Jan Essert writes: > Thank you, looping over the rows worked fine. > I was just confused since the other method looked as if it was the right > way to do this. > > Sorry for bothering you with this.. No worries, it's fine to ask and it's good to know when people need this functionality because it helps us decide what to spend time implementing. From ztdepyahoo at 163.com Tue Apr 23 07:49:46 2013 From: ztdepyahoo at 163.com (=?GBK?B?tqHAz8qm?=) Date: Tue, 23 Apr 2013 20:49:46 +0800 (CST) Subject: [petsc-users] How to set a minimum iteration number for Bicgstab solver Message-ID: <60b0c2ec.e7bc.13e36f1584b.Coremail.ztdepyahoo@163.com> -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Tue Apr 23 08:45:07 2013 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 23 Apr 2013 09:45:07 -0400 Subject: [petsc-users] How to set a minimum iteration number for Bicgstab solver In-Reply-To: <60b0c2ec.e7bc.13e36f1584b.Coremail.ztdepyahoo@163.com> References: <60b0c2ec.e7bc.13e36f1584b.Coremail.ztdepyahoo@163.com> Message-ID: We do not have a minimum number of iterations. Its not clear what purpose that would serve. Matt On Tue, Apr 23, 2013 at 8:49 AM, ??? wrote: > > > > > > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From jbakosi at lanl.gov Tue Apr 23 13:57:56 2013 From: jbakosi at lanl.gov (Jozsef Bakosi) Date: Tue, 23 Apr 2013 12:57:56 -0600 Subject: [petsc-users] Fwd: Any changes in ML usage between 3.1-p8 -> 3.3-p6? In-Reply-To: <33803CA8-5CE4-416D-9F0C-C99D37475904@columbia.edu> References: <04CAAFD0-E94A-4D15-A2D3-5C4A677109A4@columbia.edu> <33803CA8-5CE4-416D-9F0C-C99D37475904@columbia.edu> Message-ID: <20130423185756.GA2835@karman> Hi Mark and Jed, Thanks for your suggestions. I tried setting the zero pivot and shift as well, and those do not seem to help. See the attached output. 
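Regarding the minimum-iteration question answered above: there is no such option, but the same effect can be had with a user-supplied convergence test. A rough sketch, assuming the petsc-3.3-era names (KSPDefaultConverged and friends; newer releases spell them KSPConvergedDefault*); the wrapper itself is purely hypothetical:

  #include <petscksp.h>

  /* hypothetical wrapper: defer to the default test, but never declare
     convergence before a chosen minimum number of iterations */
  typedef struct {
    PetscInt minits;   /* smallest iteration at which convergence may be declared */
    void    *defctx;   /* context for the default convergence test */
  } MinItsCtx;

  static PetscErrorCode MinItsConverged(KSP ksp,PetscInt n,PetscReal rnorm,KSPConvergedReason *reason,void *ctx)
  {
    MinItsCtx      *m = (MinItsCtx*)ctx;
    PetscErrorCode  ierr;

    PetscFunctionBegin;
    ierr = KSPDefaultConverged(ksp,n,rnorm,reason,m->defctx);CHKERRQ(ierr);
    if (*reason > 0 && n < m->minits) *reason = KSP_CONVERGED_ITERATING; /* keep going */
    PetscFunctionReturn(0);
  }

  /* usage sketch:
       MinItsCtx m; m.minits = 5;
       KSPDefaultConvergedCreate(&m.defctx);
       KSPSetConvergenceTest(ksp,MinItsConverged,&m,NULL);  */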
> On 04.18.2013 10:34, Jed Brown wrote: > > As Mark says, use > > -mg_levels_ksp_type chebyshev -mg_levels_pc_type jacobi As Mark Christon said earlier, this is not really an option for us because it is a solution to problem that is different than what we are trying to solve. The problem is not that we would like to have this particular problem converge, as BCGS or Hypre handles this problem just fine. What we are trying to track down is what causes the change in PETSc's behavior going between these two versions, while keeping the application and ML source code the same, as that might point out an error on our end, e.g. using an undocumented option, or relying on a default that has changed, etc. The zeropivot and the shift, pointing to their diff was a good catch, but it seems like there might be something else as well that causes a different behavior. Thanks, Jozsef -------------- next part -------------- Phase 0 - no. of bdry pts = 0 Aggregation(UC) : Phase 1 - nodes aggregated = 5944 (9124) Aggregation(UC) : Phase 1 - total aggregates = 293 Aggregation(UC_Phase2_3) : Phase 1 - nodes aggregated = 5944 Aggregation(UC_Phase2_3) : Phase 1 - total aggregates = 293 Aggregation(UC_Phase2_3) : Phase 2a- additional aggregates = 45 Aggregation(UC_Phase2_3) : Phase 2 - total aggregates = 338 Aggregation(UC_Phase2_3) : Phase 2 - boundary nodes = 0 Aggregation(UC_Phase2_3) : Phase 3 - leftovers = 0 and singletons = 0 Gen_Prolongator (level 1) : Max eigenvalue = 1.9726e+00 Prolongator/Restriction smoother (level 1) : damping factor #1 = 6.7594e-01 Prolongator/Restriction smoother (level 1) : ( = 1.3333e+00 / 1.9726e+00) Smoothed Aggregation : operator complexity = 1.045356e+00. KSP Object:(PPE_) 4 MPI processes type: cg maximum iterations=10000 tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using nonzero initial guess using UNPRECONDITIONED norm type for convergence test PC Object:(PPE_) 4 MPI processes type: ml MG: type is MULTIPLICATIVE, levels=3 cycles=v Cycles per PCApply=1 Using Galerkin computed coarse grid matrices Coarse grid solver -- level ------------------------------- KSP Object: (PPE_mg_coarse_) 4 MPI processes type: preonly maximum iterations=1, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using NONE norm type for convergence test PC Object: (PPE_mg_coarse_) 4 MPI processes type: redundant Redundant preconditioner: First (color=0) of 4 PCs follows KSP Object: (PPE_mg_coarse_redundant_) 1 MPI processes type: preonly maximum iterations=10000, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using NONE norm type for convergence test PC Object: (PPE_mg_coarse_redundant_) 1 MPI processes type: lu LU: out-of-place factorization tolerance for zero pivot 1e-12 matrix ordering: nd factor fill ratio given 5, needed 3.80946 Factored matrix follows: Matrix Object: 1 MPI processes type: seqaij rows=338, cols=338 package used to perform factorization: petsc total: nonzeros=49302, allocated nonzeros=49302 total number of mallocs used during MatSetValues calls =0 not using I-node routines linear system matrix = precond matrix: Matrix Object: 1 MPI processes type: seqaij rows=338, cols=338 total: nonzeros=12942, allocated nonzeros=26026 total number of mallocs used during MatSetValues calls =0 not using I-node routines linear system matrix = precond matrix: Matrix Object: 4 MPI processes type: mpiaij rows=338, cols=338 total: nonzeros=12942, allocated 
nonzeros=12942 total number of mallocs used during MatSetValues calls =0 not using I-node (on process 0) routines Down solver (pre-smoother) on level 1 ------------------------------- KSP Object: (PPE_mg_levels_1_) 4 MPI processes type: richardson Richardson: damping factor=0.9 maximum iterations=1, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using NONE norm type for convergence test PC Object: (PPE_mg_levels_1_) 4 MPI processes type: bjacobi block Jacobi: number of blocks = 4 Local solve is same for all blocks, in the following KSP and PC objects: KSP Object: (PPE_mg_levels_1_sub_) 1 MPI processes type: preonly maximum iterations=10000, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using NONE norm type for convergence test PC Object: (PPE_mg_levels_1_sub_) 1 MPI processes type: icc 0 levels of fill tolerance for zero pivot 1e-12 using Manteuffel shift matrix ordering: natural linear system matrix = precond matrix: Matrix Object: 1 MPI processes type: seqaij rows=2265, cols=2265 total: nonzeros=60131, allocated nonzeros=60131 total number of mallocs used during MatSetValues calls =0 not using I-node routines linear system matrix = precond matrix: Matrix Object: 4 MPI processes type: mpiaij rows=9124, cols=9124 total: nonzeros=267508, allocated nonzeros=267508 total number of mallocs used during MatSetValues calls =0 not using I-node (on process 0) routines Up solver (post-smoother) on level 1 ------------------------------- KSP Object: (PPE_mg_levels_1_) 4 MPI processes type: richardson Richardson: damping factor=0.9 maximum iterations=1 tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using nonzero initial guess using NONE norm type for convergence test PC Object: (PPE_mg_levels_1_) 4 MPI processes type: bjacobi block Jacobi: number of blocks = 4 Local solve is same for all blocks, in the following KSP and PC objects: KSP Object: (PPE_mg_levels_1_sub_) 1 MPI processes type: preonly maximum iterations=10000, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using NONE norm type for convergence test PC Object: (PPE_mg_levels_1_sub_) 1 MPI processes type: icc 0 levels of fill tolerance for zero pivot 1e-12 using Manteuffel shift matrix ordering: natural linear system matrix = precond matrix: Matrix Object: 1 MPI processes type: seqaij rows=2265, cols=2265 total: nonzeros=60131, allocated nonzeros=60131 total number of mallocs used during MatSetValues calls =0 not using I-node routines linear system matrix = precond matrix: Matrix Object: 4 MPI processes type: mpiaij rows=9124, cols=9124 total: nonzeros=267508, allocated nonzeros=267508 total number of mallocs used during MatSetValues calls =0 not using I-node (on process 0) routines Down solver (pre-smoother) on level 2 ------------------------------- KSP Object: (PPE_mg_levels_2_) 4 MPI processes type: richardson Richardson: damping factor=0.9 maximum iterations=1, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using NONE norm type for convergence test PC Object: (PPE_mg_levels_2_) 4 MPI processes type: bjacobi block Jacobi: number of blocks = 4 Local solve is same for all blocks, in the following KSP and PC objects: KSP Object: (PPE_mg_levels_2_sub_) 1 MPI processes type: preonly maximum iterations=10000, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, 
divergence=10000 left preconditioning using NONE norm type for convergence test PC Object: (PPE_mg_levels_2_sub_) 1 MPI processes type: icc 0 levels of fill tolerance for zero pivot 1e-12 using Manteuffel shift matrix ordering: natural linear system matrix = precond matrix: Matrix Object: 1 MPI processes type: seqaij rows=59045, cols=59045 total: nonzeros=1504413, allocated nonzeros=1594215 total number of mallocs used during MatSetValues calls =0 not using I-node routines linear system matrix = precond matrix: Matrix Object: 4 MPI processes type: mpiaij rows=236600, cols=236600 total: nonzeros=6183334, allocated nonzeros=12776400 total number of mallocs used during MatSetValues calls =0 not using I-node (on process 0) routines Up solver (post-smoother) on level 2 ------------------------------- KSP Object: (PPE_mg_levels_2_) 4 MPI processes type: richardson Richardson: damping factor=0.9 maximum iterations=1 tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using nonzero initial guess using NONE norm type for convergence test PC Object: (PPE_mg_levels_2_) 4 MPI processes type: bjacobi block Jacobi: number of blocks = 4 Local solve is same for all blocks, in the following KSP and PC objects: KSP Object: (PPE_mg_levels_2_sub_) 1 MPI processes type: preonly maximum iterations=10000, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using NONE norm type for convergence test PC Object: (PPE_mg_levels_2_sub_) 1 MPI processes type: icc 0 levels of fill tolerance for zero pivot 1e-12 using Manteuffel shift matrix ordering: natural linear system matrix = precond matrix: Matrix Object: 1 MPI processes type: seqaij rows=59045, cols=59045 total: nonzeros=1504413, allocated nonzeros=1594215 total number of mallocs used during MatSetValues calls =0 not using I-node routines linear system matrix = precond matrix: Matrix Object: 4 MPI processes type: mpiaij rows=236600, cols=236600 total: nonzeros=6183334, allocated nonzeros=12776400 total number of mallocs used during MatSetValues calls =0 not using I-node (on process 0) routines linear system matrix = precond matrix: Matrix Object: 4 MPI processes type: mpiaij rows=236600, cols=236600 total: nonzeros=6183334, allocated nonzeros=12776400 total number of mallocs used during MatSetValues calls =0 not using I-node (on process 0) routines solver: ||b|| = 5.9221e-03 solver: ||x(0)|| = 0.0000e+00 solver: Iteration: 1 ||r|| = 2.2899e-03 ||x(i+1)-x(i)|| = 5.0933e+00 ||x(i+1)|| = 5.0933e+00 solver: Iteration: 2 ||r|| = 1.3937e-03 ||x(i+1)-x(i)|| = 5.6534e+00 ||x(i+1)|| = 1.0532e+01 solver: Iteration: 3 ||r|| = 8.6624e-04 ||x(i+1)-x(i)|| = 2.6100e+00 ||x(i+1)|| = 1.2788e+01 solver: Iteration: 4 ||r|| = 5.8704e-04 ||x(i+1)-x(i)|| = 9.9167e-01 ||x(i+1)|| = 1.3473e+01 solver: Iteration: 5 ||r|| = 5.7145e-04 ||x(i+1)-x(i)|| = 3.3801e-01 ||x(i+1)|| = 1.3619e+01 solver: Iteration: 6 ||r|| = 1.0164e-03 ||x(i+1)-x(i)|| = 1.1916e-01 ||x(i+1)|| = 1.3629e+01 solver: Iteration: 7 ||r|| = 1.1305e-03 ||x(i+1)-x(i)|| = 5.4793e-03 ||x(i+1)|| = 1.3628e+01 ... 
#PETSc Option Table entries: -Mom_ksp_gmres_classicalgramschmidt -Mom_pc_asm_overlap 1 -Mom_pc_type asm -Mom_sub_pc_factor_levels 0 -Mom_sub_pc_type ilu -PPE_mg_coarse_redundant_pc_factor_zeropivot 1e-12 -PPE_mg_levels_ksp_norm_type none -PPE_mg_levels_ksp_richardson_scale 0.90 -PPE_mg_levels_ksp_type richardson -PPE_mg_levels_pc_type bjacobi -PPE_mg_levels_sub_ksp_norm_type none -PPE_mg_levels_sub_ksp_type preonly -PPE_mg_levels_sub_pc_factor_shift_amount 1e-12 -PPE_mg_levels_sub_pc_factor_zeropivot 1e-12 -PPE_mg_levels_sub_pc_type icc -PPE_pc_mg_cycle_type V -PPE_pc_mg_smoothdown 1 -PPE_pc_mg_smoothup 1 -PPE_pc_ml_PrintLevel 10 -PPE_pc_ml_maxCoarseSize 1000 -PPE_pc_ml_maxNlevels 10 -c ldc3d_iles.cntl -i ldc3d.exo -options_left -options_table #End of PETSc Option Table entries There are 2 unused database options. They are: Option left: name:-c value: ldc3d_iles.cntl Option left: name:-i value: ldc3d.exo From mark.adams at columbia.edu Tue Apr 23 14:44:30 2013 From: mark.adams at columbia.edu (Mark F. Adams) Date: Tue, 23 Apr 2013 15:44:30 -0400 Subject: [petsc-users] Fwd: Any changes in ML usage between 3.1-p8 -> 3.3-p6? In-Reply-To: <20130423185756.GA2835@karman> References: <04CAAFD0-E94A-4D15-A2D3-5C4A677109A4@columbia.edu> <33803CA8-5CE4-416D-9F0C-C99D37475904@columbia.edu> <20130423185756.GA2835@karman> Message-ID: The only thing that I can think of is a change in icc or ml.c (the interface code). I have looked over the logs for clues and do not see any more red flags. So I would look at the logs: https://bitbucket.org/petsc/petsc/history-node/cee652cbe889/src/ksp/pc/impls/ml/ml.c?page=2 and the similar one for icc (the history button when you have a file open). See if you can see anything that could change the semantics. Some of these changes are required to work with the rest of PETSc so doing a bisection search might be hard. git does have a bisection utility (http://www.youtube.com/watch?v=X_pQfoaRuhA) that can help in doing this. You could probably revert to the old version of these two files, add any of the changes back in that are required for syntax (e.g., PetscTruth --> PetscBool) and get the code to compile and see if you get your old semantics back. If so then add changes back (bisect might help to speed this up) and search for the change that broke you. On Apr 23, 2013, at 2:57 PM, Jozsef Bakosi wrote: > Hi Mark and Jed, > > Thanks for your suggestions. I tried setting the zero pivot and shift as well, > and those do not seem to help. See the attached output. > >> On 04.18.2013 10:34, Jed Brown wrote: >> >> As Mark says, use >> >> -mg_levels_ksp_type chebyshev -mg_levels_pc_type jacobi > > As Mark Christon said earlier, this is not really an option for us because it is > a solution to problem that is different than what we are trying to solve. > > The problem is not that we would like to have this particular problem converge, > as BCGS or Hypre handles this problem just fine. What we are trying to track > down is what causes the change in PETSc's behavior going between these two > versions, while keeping the application and ML source code the same, as that > might point out an error on our end, e.g. using an undocumented option, or > relying on a default that has changed, etc. The zeropivot and the shift, > pointing to their diff was a good catch, but it seems like there might be > something else as well that causes a different behavior. 
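A rough sketch of the bisection workflow Mark refers to above; the tag names and the test step are assumptions, not taken from the thread:

  git bisect start
  git bisect bad  v3.3      # release where the ML behaviour changed
  git bisect good v3.1      # release with the old behaviour
  # git now checks out a candidate commit: rebuild PETSc, rerun the ML
  # test case, report the outcome, and repeat until the culprit is found
  git bisect good           # or: git bisect bad
  git bisect reset          # return to the original branch when finished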
> > Thanks, > Jozsef > From choi240 at purdue.edu Tue Apr 23 15:27:06 2013 From: choi240 at purdue.edu (Joon hee Choi) Date: Tue, 23 Apr 2013 16:27:06 -0400 (EDT) Subject: [petsc-users] Transpose of Block Matrix with aij type In-Reply-To: <1840317682.88629.1366748126302.JavaMail.root@mailhub028.itcs.purdue.edu> Message-ID: <1850082456.88680.1366748826575.JavaMail.root@mailhub028.itcs.purdue.edu> Hello, I tried to get transpose of block matrix(just with aij type), but the result was not a block matrix. For example, A = 1 2 3 4 | 4 3 2 1 2 3 4 5 | 5 4 3 2 3 4 5 6 | 6 5 4 3 AT(expected) = 1 2 3 4 2 3 4 5 3 4 5 6 ------- 4 3 2 1 5 4 3 2 6 5 4 3 AT(result) = 1 2 3 2 3 4 3 4 5 4 5 6 4 5 6 3 4 5 2 3 4 1 2 3 If someone knows about this problem, please let me know it. Thank you From rupp at mcs.anl.gov Tue Apr 23 16:13:20 2013 From: rupp at mcs.anl.gov (Karl Rupp) Date: Tue, 23 Apr 2013 16:13:20 -0500 Subject: [petsc-users] Transpose of Block Matrix with aij type In-Reply-To: <1850082456.88680.1366748826575.JavaMail.root@mailhub028.itcs.purdue.edu> References: <1850082456.88680.1366748826575.JavaMail.root@mailhub028.itcs.purdue.edu> Message-ID: <5176F970.9000003@mcs.anl.gov> Hi, why would you expect that the transpose of a 3x8 matrix is not a 8x3-matrix? Best regards, Karli On 04/23/2013 03:27 PM, Joon hee Choi wrote: > Hello, > > I tried to get transpose of block matrix(just with aij type), but the result was not a block matrix. For example, > > A = > 1 2 3 4 | 4 3 2 1 > 2 3 4 5 | 5 4 3 2 > 3 4 5 6 | 6 5 4 3 > > > AT(expected) = > 1 2 3 4 > 2 3 4 5 > 3 4 5 6 > ------- > 4 3 2 1 > 5 4 3 2 > 6 5 4 3 > > > AT(result) = > 1 2 3 > 2 3 4 > 3 4 5 > 4 5 6 > 4 5 6 > 3 4 5 > 2 3 4 > 1 2 3 > > If someone knows about this problem, please let me know it. > > Thank you > From choi240 at purdue.edu Tue Apr 23 17:17:07 2013 From: choi240 at purdue.edu (Choi240) Date: Tue, 23 Apr 2013 18:17:07 -0400 (EDT) Subject: [petsc-users] Transpose of Block Matrix with aij type Message-ID: <93r1c9dffcqd829e95l7gnq9.1366755287590@email.android.com> Hi, I have to compute multiplication between two block matrices. It should be as follows: B1 | B2 ? ? ? ? A1 ? ? ? ?B1A1+B2A2 ---------- ? * ? --- ? = ? --------------- B3 | B4 ? ? ? ? A2 ? ? ? ?B3A1+B4A2 However, I just have A = ?[A1 A2]. So, I need to get A^T. Is there a way I can get the transpose of this block matrix with the MatTranspose()? Or do I have to use another function such as MatGetSubMatrices()? Thank you, Joon -------- Original message -------- Subject: Re: [petsc-users] Transpose of Block Matrix with aij type From: Karl Rupp To: petsc-users at mcs.anl.gov CC: choi240 at purdue.edu Hi, why would you expect that the transpose of a 3x8 matrix is not a 8x3-matrix? Best regards, Karli On 04/23/2013 03:27 PM, Joon hee Choi wrote: > Hello, > > I tried to get transpose of block matrix(just with aij type), but the result was not a block matrix. For example, > > A = > 1 2 3 4 | 4 3 2 1 > 2 3 4 5 | 5 4 3 2 > 3 4 5 6 | 6 5 4 3 > > > AT(expected) = > 1 2 3 4 > 2 3 4 5 > 3 4 5 6 > ------- > 4 3 2 1 > 5 4 3 2 > 6 5 4 3 > > > AT(result) = > 1 2 3 > 2 3 4 > 3 4 5 > 4 5 6 > 4 5 6 > 3 4 5 > 2 3 4 > 1 2 3 > > If someone knows about this problem, please let me know it. > > Thank you > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From rupp at mcs.anl.gov Tue Apr 23 17:23:00 2013 From: rupp at mcs.anl.gov (Karl Rupp) Date: Tue, 23 Apr 2013 17:23:00 -0500 Subject: [petsc-users] Transpose of Block Matrix with aij type In-Reply-To: <93r1c9dffcqd829e95l7gnq9.1366755287590@email.android.com> References: <93r1c9dffcqd829e95l7gnq9.1366755287590@email.android.com> Message-ID: <517709C4.70405@mcs.anl.gov> Hi, is there a good reason for setting up A = [A1 A2] instead of A = [A1; A2] in the first place? Best regards, Karli On 04/23/2013 05:17 PM, Choi240 wrote: > Hi, > > I have to compute multiplication between two block matrices. It should > be as follows: > > B1 | B2 A1 B1A1+B2A2 > ---------- * --- = --------------- > B3 | B4 A2 B3A1+B4A2 > > However, I just have A = [A1 A2]. So, I need to get A^T. Is there a way > I can get the transpose of this block matrix with the MatTranspose()? Or > do I have to use another function such as MatGetSubMatrices()? > > Thank you, > Joon > > > > -------- Original message -------- > Subject: Re: [petsc-users] Transpose of Block Matrix with aij type > From: Karl Rupp > To: petsc-users at mcs.anl.gov > CC: choi240 at purdue.edu > > > Hi, > > why would you expect that the transpose of a 3x8 matrix is not a 8x3-matrix? > > Best regards, > Karli > > > On 04/23/2013 03:27 PM, Joon hee Choi wrote: > > Hello, > > > > I tried to get transpose of block matrix(just with aij type), but the > result was not a block matrix. For example, > > > > A = > > 1 2 3 4 | 4 3 2 1 > > 2 3 4 5 | 5 4 3 2 > > 3 4 5 6 | 6 5 4 3 > > > > > > AT(expected) = > > 1 2 3 4 > > 2 3 4 5 > > 3 4 5 6 > > ------- > > 4 3 2 1 > > 5 4 3 2 > > 6 5 4 3 > > > > > > AT(result) = > > 1 2 3 > > 2 3 4 > > 3 4 5 > > 4 5 6 > > 4 5 6 > > 3 4 5 > > 2 3 4 > > 1 2 3 > > > > If someone knows about this problem, please let me know it. > > > > Thank you > > > From rupp at mcs.anl.gov Tue Apr 23 17:24:38 2013 From: rupp at mcs.anl.gov (Karl Rupp) Date: Tue, 23 Apr 2013 17:24:38 -0500 Subject: [petsc-users] Transpose of Block Matrix with aij type In-Reply-To: <93r1c9dffcqd829e95l7gnq9.1366755287590@email.android.com> References: <93r1c9dffcqd829e95l7gnq9.1366755287590@email.android.com> Message-ID: <51770A26.7070405@mcs.anl.gov> Hi again, if you have control over the structure of B, what about computing [A1 A2] * [B1 B3; B2 B4] instead? Best regards, Karli On 04/23/2013 05:17 PM, Choi240 wrote: > Hi, > > I have to compute multiplication between two block matrices. It should > be as follows: > > B1 | B2 A1 B1A1+B2A2 > ---------- * --- = --------------- > B3 | B4 A2 B3A1+B4A2 > > However, I just have A = [A1 A2]. So, I need to get A^T. Is there a way > I can get the transpose of this block matrix with the MatTranspose()? Or > do I have to use another function such as MatGetSubMatrices()? > > Thank you, > Joon > > > > -------- Original message -------- > Subject: Re: [petsc-users] Transpose of Block Matrix with aij type > From: Karl Rupp > To: petsc-users at mcs.anl.gov > CC: choi240 at purdue.edu > > > Hi, > > why would you expect that the transpose of a 3x8 matrix is not a 8x3-matrix? > > Best regards, > Karli > > > On 04/23/2013 03:27 PM, Joon hee Choi wrote: > > Hello, > > > > I tried to get transpose of block matrix(just with aij type), but the > result was not a block matrix. 
For example, > > > > A = > > 1 2 3 4 | 4 3 2 1 > > 2 3 4 5 | 5 4 3 2 > > 3 4 5 6 | 6 5 4 3 > > > > > > AT(expected) = > > 1 2 3 4 > > 2 3 4 5 > > 3 4 5 6 > > ------- > > 4 3 2 1 > > 5 4 3 2 > > 6 5 4 3 > > > > > > AT(result) = > > 1 2 3 > > 2 3 4 > > 3 4 5 > > 4 5 6 > > 4 5 6 > > 3 4 5 > > 2 3 4 > > 1 2 3 > > > > If someone knows about this problem, please let me know it. > > > > Thank you > > > From choi240 at purdue.edu Tue Apr 23 18:05:41 2013 From: choi240 at purdue.edu (Choi240) Date: Tue, 23 Apr 2013 19:05:41 -0400 (EDT) Subject: [petsc-users] Transpose of Block Matrix with aij type Message-ID: <4d3980bkm1tw4nogt5utgdpw.1366758338051@email.android.com> Hi, Thank you for your reply. But A1,A2 are not square(3x4). B1~B4 are 3x3 matrices. So A1*B1 is not impossible. Best regards, Joon -------- Original message -------- Subject: Re: [petsc-users] Transpose of Block Matrix with aij type From: Karl Rupp To: Choi240 CC: petsc-users at mcs.anl.gov Hi again, if you have control over the structure of B, what about computing [A1 A2] * [B1 B3; B2 B4] instead? Best regards, Karli On 04/23/2013 05:17 PM, Choi240 wrote: > Hi, > > I have to compute multiplication between two block matrices. It should > be as follows: > > B1 | B2???????? A1??????? B1A1+B2A2 > ----------?? *?? ---?? =?? --------------- > B3 | B4???????? A2??????? B3A1+B4A2 > > However, I just have A =? [A1 A2]. So, I need to get A^T. Is there a way > I can get the transpose of this block matrix with the MatTranspose()? Or > do I have to use another function such as MatGetSubMatrices()? > > Thank you, > Joon > > > > -------- Original message -------- > Subject: Re: [petsc-users] Transpose of Block Matrix with aij type > From: Karl Rupp > To: petsc-users at mcs.anl.gov > CC: choi240 at purdue.edu > > > Hi, > > why would you expect that the transpose of a 3x8 matrix is not a 8x3-matrix? > > Best regards, > Karli > > > On 04/23/2013 03:27 PM, Joon hee Choi wrote: >? > Hello, >? > >? > I tried to get transpose of block matrix(just with aij type), but the > result was not a block matrix. For example, >? > >? > A = >? > 1 2 3 4 | 4 3 2 1 >? > 2 3 4 5 | 5 4 3 2 >? > 3 4 5 6 | 6 5 4 3 >? > >? > >? > AT(expected) = >? > 1 2 3 4 >? > 2 3 4 5 >? > 3 4 5 6 >? > ------- >? > 4 3 2 1 >? > 5 4 3 2 >? > 6 5 4 3 >? > >? > >? > AT(result) = >? > 1 2 3 >? > 2 3 4 >? > 3 4 5 >? > 4 5 6 >? > 4 5 6 >? > 3 4 5 >? > 2 3 4 >? > 1 2 3 >? > >? > If someone knows about this problem, please let me know it. >? > >? > Thank you >? > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rupp at mcs.anl.gov Tue Apr 23 20:39:56 2013 From: rupp at mcs.anl.gov (Karl Rupp) Date: Tue, 23 Apr 2013 20:39:56 -0500 Subject: [petsc-users] Transpose of Block Matrix with aij type In-Reply-To: <4d3980bkm1tw4nogt5utgdpw.1366758338051@email.android.com> References: <4d3980bkm1tw4nogt5utgdpw.1366758338051@email.android.com> Message-ID: <517737EC.7020308@mcs.anl.gov> Hi Joon, sorry, that was a pretty bad idea (wouldn't even work with square matrices in general) I'm afraid you'll have to set up new matrices A, B with the respective entries. For performance reasons you better use a dense format since your matrices are so small. Best regards, Karli On 04/23/2013 06:05 PM, Choi240 wrote: > Hi, > > Thank you for your reply. But A1,A2 are not square(3x4). B1~B4 are 3x3 > matrices. So A1*B1 is not impossible. 
> > Best regards, > Joon > > > > > -------- Original message -------- > Subject: Re: [petsc-users] Transpose of Block Matrix with aij type > From: Karl Rupp > To: Choi240 > CC: petsc-users at mcs.anl.gov > > > Hi again, > > if you have control over the structure of B, what about computing > [A1 A2] * [B1 B3; B2 B4] > instead? > > Best regards, > Karli > > > On 04/23/2013 05:17 PM, Choi240 wrote: > > Hi, > > > > I have to compute multiplication between two block matrices. It should > > be as follows: > > > > B1 | B2 A1 B1A1+B2A2 > > ---------- * --- = --------------- > > B3 | B4 A2 B3A1+B4A2 > > > > However, I just have A = [A1 A2]. So, I need to get A^T. Is there a way > > I can get the transpose of this block matrix with the MatTranspose()? Or > > do I have to use another function such as MatGetSubMatrices()? > > > > Thank you, > > Joon > > > > > > > > -------- Original message -------- > > Subject: Re: [petsc-users] Transpose of Block Matrix with aij type > > From: Karl Rupp > > To: petsc-users at mcs.anl.gov > > CC: choi240 at purdue.edu > > > > > > Hi, > > > > why would you expect that the transpose of a 3x8 matrix is not a > 8x3-matrix? > > > > Best regards, > > Karli > > > > > > On 04/23/2013 03:27 PM, Joon hee Choi wrote: > > > Hello, > > > > > > I tried to get transpose of block matrix(just with aij type), but the > > result was not a block matrix. For example, > > > > > > A = > > > 1 2 3 4 | 4 3 2 1 > > > 2 3 4 5 | 5 4 3 2 > > > 3 4 5 6 | 6 5 4 3 > > > > > > > > > AT(expected) = > > > 1 2 3 4 > > > 2 3 4 5 > > > 3 4 5 6 > > > ------- > > > 4 3 2 1 > > > 5 4 3 2 > > > 6 5 4 3 > > > > > > > > > AT(result) = > > > 1 2 3 > > > 2 3 4 > > > 3 4 5 > > > 4 5 6 > > > 4 5 6 > > > 3 4 5 > > > 2 3 4 > > > 1 2 3 > > > > > > If someone knows about this problem, please let me know it. > > > > > > Thank you > > > > > >
From d.scott at ed.ac.uk Wed Apr 24 06:15:56 2013 From: d.scott at ed.ac.uk (David Scott) Date: Wed, 24 Apr 2013 12:15:56 +0100 Subject: [petsc-users] Advice Being Sought In-Reply-To: References: <51752589.4080001@ed.ac.uk> <5175649C.6060906@ed.ac.uk> Message-ID: <5177BEEC.1000602@ed.ac.uk> Hello Barry, On 23/04/2013 02:36, Barry Smith wrote: > David, > > We would be very interested in determining what is "going wrong" with the solver here since we hope to make it robust.
Would it be possible for you to use a MatView() and VecView() on the matrix and the right hand side with a binary viewer when it "goes bad" and send us the resulting file? > > Barry > > We'd run the gamg solver on your matrix and track down what is happening. > > I shall try to do what you ask but I shall try to do it for a smaller domain (2 million rather than 11 million points) than I have been working on recently. I am uncertain what I should do precisely. With reference to the code that I attached to a previous message should I insert something like PetscViewer viewer call MatView(B, viewer, ierr) call PetscViewerBinaryOpen(PETSC_COMM_WORLD, 'BinaryMatrix', & FILE_MODE_WRITE, viewer, ierr) call PetscViewerDestroy(viewer, ierr) in compute_matrix after B has been assembled (and similar code in compute_rhs). David -- Dr. D. M. Scott Applications Consultant Edinburgh Parallel Computing Centre Tel. 0131 650 5921 The University of Edinburgh is a charitable body, registered in Scotland, with registration number SC005336. From jedbrown at mcs.anl.gov Wed Apr 24 07:11:23 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Wed, 24 Apr 2013 06:11:23 -0600 Subject: [petsc-users] Advice Being Sought In-Reply-To: <5177BEEC.1000602@ed.ac.uk> References: <51752589.4080001@ed.ac.uk> <5175649C.6060906@ed.ac.uk> <5177BEEC.1000602@ed.ac.uk> Message-ID: <877gjscems.fsf@mcs.anl.gov> David Scott writes: > I am uncertain what I should do precisely. With reference to the code > that I attached to a previous message should I insert something like > PetscViewer viewer > call MatView(B, viewer, ierr) > call PetscViewerBinaryOpen(PETSC_COMM_WORLD, 'BinaryMatrix', & > FILE_MODE_WRITE, viewer, ierr) You can't use the viewer before you open the viewer, so you'll have to swap the two lines above. > call PetscViewerDestroy(viewer, ierr) > in compute_matrix after B has been assembled (and similar code in > compute_rhs). You can just check for error/non-convergence when you call KSPSolve and write the matrix and vector at that time. You can put them in the same file: call PetscViewerBinaryOpen(PETSC_COMM_WORLD, 'DifficultSystem', & FILE_MODE_WRITE, viewer, ierr) call MatView(B, viewer, ierr) ! use KSPGetOperators() if needed call VecView(X, viewer, ierr) ! use KSPGetRhs() if needed call PetscViewerDestroy(viewer, ierr) From choi240 at purdue.edu Wed Apr 24 10:45:44 2013 From: choi240 at purdue.edu (Joon hee Choi) Date: Wed, 24 Apr 2013 11:45:44 -0400 (EDT) Subject: [petsc-users] Transpose of Block Matrix with aij type In-Reply-To: <517737EC.7020308@mcs.anl.gov> Message-ID: <2035581605.90911.1366818344058.JavaMail.root@mailhub028.itcs.purdue.edu> Hi Karli, Thank you. Actually, B is very huge(more than 10^10) and sparse(about 1% dense). 3x3 and 3x4 matrices were an example. I think I have to set up A again. Anyway, thank you again. Best regards, Joon ----- ?? ??? ----- ?? ??: "Karl Rupp" ?? ??: "Choi240" ??: petsc-users at mcs.anl.gov ?? ??: 2013? 4? 23?, ??? ?? 9:39:56 ??: Re: [petsc-users] Transpose of Block Matrix with aij type Hi Joon, sorry, that was a pretty bad idea (wouldn't even work with square matrices in general) I'm afraid you'll have to set up new matrices A, B with the respective entries. For performance reasons you better use a dense format since your matrices are so small. Best regards, Karli On 04/23/2013 06:05 PM, Choi240 wrote: > Hi, > > Thank you for your reply. But A1,A2 are not square(3x4). B1~B4 are 3x3 > matrices. So A1*B1 is not impossible. 
> > Best regards, > Joon > > > > > -------- Original message -------- > Subject: Re: [petsc-users] Transpose of Block Matrix with aij type > From: Karl Rupp > To: Choi240 > CC: petsc-users at mcs.anl.gov > > > Hi again, > > if you have control over the structure of B, what about computing > [A1 A2] * [B1 B3; B2 B4] > instead? > > Best regards, > Karli > > > On 04/23/2013 05:17 PM, Choi240 wrote: > > Hi, > > > > I have to compute multiplication between two block matrices. It should > > be as follows: > > > > B1 | B2 A1 B1A1+B2A2 > > ---------- * --- = --------------- > > B3 | B4 A2 B3A1+B4A2 > > > > However, I just have A = [A1 A2]. So, I need to get A^T. Is there a way > > I can get the transpose of this block matrix with the MatTranspose()? Or > > do I have to use another function such as MatGetSubMatrices()? > > > > Thank you, > > Joon > > > > > > > > -------- Original message -------- > > Subject: Re: [petsc-users] Transpose of Block Matrix with aij type > > From: Karl Rupp > > To: petsc-users at mcs.anl.gov > > CC: choi240 at purdue.edu > > > > > > Hi, > > > > why would you expect that the transpose of a 3x8 matrix is not a > 8x3-matrix? > > > > Best regards, > > Karli > > > > > > On 04/23/2013 03:27 PM, Joon hee Choi wrote: > > > Hello, > > > > > > I tried to get transpose of block matrix(just with aij type), but the > > result was not a block matrix. For example, > > > > > > A = > > > 1 2 3 4 | 4 3 2 1 > > > 2 3 4 5 | 5 4 3 2 > > > 3 4 5 6 | 6 5 4 3 > > > > > > > > > AT(expected) = > > > 1 2 3 4 > > > 2 3 4 5 > > > 3 4 5 6 > > > ------- > > > 4 3 2 1 > > > 5 4 3 2 > > > 6 5 4 3 > > > > > > > > > AT(result) = > > > 1 2 3 > > > 2 3 4 > > > 3 4 5 > > > 4 5 6 > > > 4 5 6 > > > 3 4 5 > > > 2 3 4 > > > 1 2 3 > > > > > > If someone knows about this problem, please let me know it. > > > > > > Thank you > > > > > > From mike.hui.zhang at hotmail.com Wed Apr 24 10:58:20 2013 From: mike.hui.zhang at hotmail.com (Hui Zhang) Date: Wed, 24 Apr 2013 17:58:20 +0200 Subject: [petsc-users] VecGetArray and use it for VecCreateSeqWithArray Message-ID: Vec u,v; VecCreateMPIAIJ(..,&u); VecGetArray(u,&a); VecCreateSeqWithArray(...,a,&v); Is this safe? I want the VecSeq v to read and write local array of u. This is for VecScatterCreate from a global Vec to the local Seq v, and then I hope that u get updated as well automatically. Thanks in advance! From mike.hui.zhang at hotmail.com Wed Apr 24 11:02:23 2013 From: mike.hui.zhang at hotmail.com (Hui Zhang) Date: Wed, 24 Apr 2013 18:02:23 +0200 Subject: [petsc-users] VecGetArray and use it for VecCreateSeqWithArray In-Reply-To: References: Message-ID: On Apr 24, 2013, at 5:58 PM, Hui Zhang wrote: > Vec u,v; > > VecCreateMPIAIJ(..,&u); > VecGetArray(u,&a); > > VecCreateSeqWithArray(...,a,&v); > > Is this safe? I want the VecSeq v to read and write local array of u. > This is for VecScatterCreate from a global Vec to the local Seq v, Note that the global Vec to scatter is COMM_WORLD, but the communicator of u is smaller. > and then I hope that u get updated as well automatically. > > Thanks in advance! 
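A small sketch of the pattern being asked about, assuming VecCreateMPI was meant (VecCreateMPIAIJ in the question looks like a slip, since that spelling belongs to matrices), with N standing in for a global size chosen elsewhere; the block-size argument of 1 matches the 3.3-era VecCreateSeqWithArray signature:

  Vec          u, v;
  PetscScalar *a;
  PetscInt     nlocal;

  VecCreateMPI(PETSC_COMM_WORLD,PETSC_DECIDE,N,&u);
  VecGetLocalSize(u,&nlocal);
  VecGetArray(u,&a);
  /* v shares u's local storage, so anything scattered into v lands in u */
  VecCreateSeqWithArray(PETSC_COMM_SELF,1,nlocal,a,&v);
  /* ... build the VecScatter with v as its target and scatter here ... */
  VecDestroy(&v);
  VecRestoreArray(u,&a);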
> From rupp at mcs.anl.gov Wed Apr 24 11:07:11 2013 From: rupp at mcs.anl.gov (Karl Rupp) Date: Wed, 24 Apr 2013 11:07:11 -0500 Subject: [petsc-users] Transpose of Block Matrix with aij type In-Reply-To: <2035581605.90911.1366818344058.JavaMail.root@mailhub028.itcs.purdue.edu> References: <2035581605.90911.1366818344058.JavaMail.root@mailhub028.itcs.purdue.edu> Message-ID: <5178032F.1010809@mcs.anl.gov> Hi Joon, you're welcome - another idea: if you have just six matrices A1, A2, and B1-B4, it's worthwhile to compute the result blocks B1A1 + B2A2 and B3A1 + B4A2 directly into temporary matrices using standard operations and compose the result into a single matrix. This way you can avoid the more expensive copies of A1, A2, B1-B4. Best regards, Karli On 04/24/2013 10:45 AM, Joon hee Choi wrote: > Hi Karli, > > Thank you. Actually, B is very huge(more than 10^10) and sparse(about 1% dense). 3x3 and 3x4 matrices were an example. I think I have to set up A again. Anyway, thank you again. > > Best regards, > Joon > > > ----- ?? ??? ----- > ?? ??: "Karl Rupp" > ?? ??: "Choi240" > ??: petsc-users at mcs.anl.gov > ?? ??: 2013? 4? 23?, ??? ?? 9:39:56 > ??: Re: [petsc-users] Transpose of Block Matrix with aij type > > Hi Joon, > > sorry, that was a pretty bad idea (wouldn't even work with square > matrices in general) > > I'm afraid you'll have to set up new matrices A, B with the respective > entries. For performance reasons you better use a dense format since > your matrices are so small. > > Best regards, > Karli > > > On 04/23/2013 06:05 PM, Choi240 wrote: >> Hi, >> >> Thank you for your reply. But A1,A2 are not square(3x4). B1~B4 are 3x3 >> matrices. So A1*B1 is not impossible. >> >> Best regards, >> Joon >> >> >> >> >> -------- Original message -------- >> Subject: Re: [petsc-users] Transpose of Block Matrix with aij type >> From: Karl Rupp >> To: Choi240 >> CC: petsc-users at mcs.anl.gov >> >> >> Hi again, >> >> if you have control over the structure of B, what about computing >> [A1 A2] * [B1 B3; B2 B4] >> instead? >> >> Best regards, >> Karli >> >> >> On 04/23/2013 05:17 PM, Choi240 wrote: >> > Hi, >> > >> > I have to compute multiplication between two block matrices. It should >> > be as follows: >> > >> > B1 | B2 A1 B1A1+B2A2 >> > ---------- * --- = --------------- >> > B3 | B4 A2 B3A1+B4A2 >> > >> > However, I just have A = [A1 A2]. So, I need to get A^T. Is there a way >> > I can get the transpose of this block matrix with the MatTranspose()? Or >> > do I have to use another function such as MatGetSubMatrices()? >> > >> > Thank you, >> > Joon >> > >> > >> > >> > -------- Original message -------- >> > Subject: Re: [petsc-users] Transpose of Block Matrix with aij type >> > From: Karl Rupp >> > To: petsc-users at mcs.anl.gov >> > CC: choi240 at purdue.edu >> > >> > >> > Hi, >> > >> > why would you expect that the transpose of a 3x8 matrix is not a >> 8x3-matrix? >> > >> > Best regards, >> > Karli >> > >> > >> > On 04/23/2013 03:27 PM, Joon hee Choi wrote: >> > > Hello, >> > > >> > > I tried to get transpose of block matrix(just with aij type), but the >> > result was not a block matrix. 
For example, >> > > >> > > A = >> > > 1 2 3 4 | 4 3 2 1 >> > > 2 3 4 5 | 5 4 3 2 >> > > 3 4 5 6 | 6 5 4 3 >> > > >> > > >> > > AT(expected) = >> > > 1 2 3 4 >> > > 2 3 4 5 >> > > 3 4 5 6 >> > > ------- >> > > 4 3 2 1 >> > > 5 4 3 2 >> > > 6 5 4 3 >> > > >> > > >> > > AT(result) = >> > > 1 2 3 >> > > 2 3 4 >> > > 3 4 5 >> > > 4 5 6 >> > > 4 5 6 >> > > 3 4 5 >> > > 2 3 4 >> > > 1 2 3 >> > > >> > > If someone knows about this problem, please let me know it. >> > > >> > > Thank you >> > > >> > >> > From knepley at gmail.com Wed Apr 24 11:21:39 2013 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 24 Apr 2013 12:21:39 -0400 Subject: [petsc-users] VecGetArray and use it for VecCreateSeqWithArray In-Reply-To: References: Message-ID: On Wed, Apr 24, 2013 at 11:58 AM, Hui Zhang wrote: > Vec u,v; > > VecCreateMPIAIJ(..,&u); > VecGetArray(u,&a); > > VecCreateSeqWithArray(...,a,&v); > > Is this safe? I want the VecSeq v to read and write local array of u. > Yes, this is fine. Matt > This is for VecScatterCreate from a global Vec to the local Seq v, > and then I hope that u get updated as well automatically. > > Thanks in advance! > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From mike.hui.zhang at hotmail.com Wed Apr 24 11:24:18 2013 From: mike.hui.zhang at hotmail.com (Hui Zhang) Date: Wed, 24 Apr 2013 18:24:18 +0200 Subject: [petsc-users] VecGetArray and use it for VecCreateSeqWithArray In-Reply-To: References: Message-ID: On Apr 24, 2013, at 6:21 PM, Matthew Knepley wrote: > On Wed, Apr 24, 2013 at 11:58 AM, Hui Zhang wrote: > Vec u,v; > > VecCreateMPIAIJ(..,&u); > VecGetArray(u,&a); > > VecCreateSeqWithArray(...,a,&v); > > Is this safe? I want the VecSeq v to read and write local array of u. > > Yes, this is fine. Nice, thanks! > > Matt > > This is for VecScatterCreate from a global Vec to the local Seq v, > and then I hope that u get updated as well automatically. > > Thanks in advance! > > > > -- > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. > -- Norbert Wiener From choi240 at purdue.edu Wed Apr 24 11:24:50 2013 From: choi240 at purdue.edu (Joon hee Choi) Date: Wed, 24 Apr 2013 12:24:50 -0400 (EDT) Subject: [petsc-users] Transpose of Block Matrix with aij type In-Reply-To: <5178032F.1010809@mcs.anl.gov> Message-ID: <1614499041.91064.1366820690560.JavaMail.root@mailhub028.itcs.purdue.edu> Hi Karli, It may be a good idea to compute the result blocks directly. What function do I need to use for getting each block? I tried to use MatGetSubMatrices(Mat mat,PetscInt n,const IS irow[],const IS icol[],MatReuse scall,Mat *submat[]), but I didn't know how I got irow[] and icol[] from block matrix. Best regards, Joon ----- ?? ??? ----- ?? ??: "Karl Rupp" ?? ??: "Joon hee Choi" ??: petsc-users at mcs.anl.gov ?? ??: 2013? 4? 24?, ??? ?? 12:07:11 ??: Re: [petsc-users] Transpose of Block Matrix with aij type Hi Joon, you're welcome - another idea: if you have just six matrices A1, A2, and B1-B4, it's worthwhile to compute the result blocks B1A1 + B2A2 and B3A1 + B4A2 directly into temporary matrices using standard operations and compose the result into a single matrix. This way you can avoid the more expensive copies of A1, A2, B1-B4. 
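A short sketch of that block-wise product using standard operations, assuming B1, B2, A1, A2 are already assembled AIJ matrices with compatible layouts:

  Mat C1, T;
  MatMatMult(B1,A1,MAT_INITIAL_MATRIX,PETSC_DEFAULT,&C1);  /* C1 = B1*A1 */
  MatMatMult(B2,A2,MAT_INITIAL_MATRIX,PETSC_DEFAULT,&T);   /* T  = B2*A2 */
  MatAXPY(C1,1.0,T,DIFFERENT_NONZERO_PATTERN);             /* C1 = C1 + T */
  MatDestroy(&T);
  /* the second result block follows the same pattern with B3 and B4 */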
Best regards, Karli On 04/24/2013 10:45 AM, Joon hee Choi wrote: > Hi Karli, > > Thank you. Actually, B is very huge(more than 10^10) and sparse(about 1% dense). 3x3 and 3x4 matrices were an example. I think I have to set up A again. Anyway, thank you again. > > Best regards, > Joon > > > ----- ?? ??? ----- > ?? ??: "Karl Rupp" > ?? ??: "Choi240" > ??: petsc-users at mcs.anl.gov > ?? ??: 2013? 4? 23?, ??? ?? 9:39:56 > ??: Re: [petsc-users] Transpose of Block Matrix with aij type > > Hi Joon, > > sorry, that was a pretty bad idea (wouldn't even work with square > matrices in general) > > I'm afraid you'll have to set up new matrices A, B with the respective > entries. For performance reasons you better use a dense format since > your matrices are so small. > > Best regards, > Karli > > > On 04/23/2013 06:05 PM, Choi240 wrote: >> Hi, >> >> Thank you for your reply. But A1,A2 are not square(3x4). B1~B4 are 3x3 >> matrices. So A1*B1 is not impossible. >> >> Best regards, >> Joon >> >> >> >> >> -------- Original message -------- >> Subject: Re: [petsc-users] Transpose of Block Matrix with aij type >> From: Karl Rupp >> To: Choi240 >> CC: petsc-users at mcs.anl.gov >> >> >> Hi again, >> >> if you have control over the structure of B, what about computing >> [A1 A2] * [B1 B3; B2 B4] >> instead? >> >> Best regards, >> Karli >> >> >> On 04/23/2013 05:17 PM, Choi240 wrote: >> > Hi, >> > >> > I have to compute multiplication between two block matrices. It should >> > be as follows: >> > >> > B1 | B2 A1 B1A1+B2A2 >> > ---------- * --- = --------------- >> > B3 | B4 A2 B3A1+B4A2 >> > >> > However, I just have A = [A1 A2]. So, I need to get A^T. Is there a way >> > I can get the transpose of this block matrix with the MatTranspose()? Or >> > do I have to use another function such as MatGetSubMatrices()? >> > >> > Thank you, >> > Joon >> > >> > >> > >> > -------- Original message -------- >> > Subject: Re: [petsc-users] Transpose of Block Matrix with aij type >> > From: Karl Rupp >> > To: petsc-users at mcs.anl.gov >> > CC: choi240 at purdue.edu >> > >> > >> > Hi, >> > >> > why would you expect that the transpose of a 3x8 matrix is not a >> 8x3-matrix? >> > >> > Best regards, >> > Karli >> > >> > >> > On 04/23/2013 03:27 PM, Joon hee Choi wrote: >> > > Hello, >> > > >> > > I tried to get transpose of block matrix(just with aij type), but the >> > result was not a block matrix. For example, >> > > >> > > A = >> > > 1 2 3 4 | 4 3 2 1 >> > > 2 3 4 5 | 5 4 3 2 >> > > 3 4 5 6 | 6 5 4 3 >> > > >> > > >> > > AT(expected) = >> > > 1 2 3 4 >> > > 2 3 4 5 >> > > 3 4 5 6 >> > > ------- >> > > 4 3 2 1 >> > > 5 4 3 2 >> > > 6 5 4 3 >> > > >> > > >> > > AT(result) = >> > > 1 2 3 >> > > 2 3 4 >> > > 3 4 5 >> > > 4 5 6 >> > > 4 5 6 >> > > 3 4 5 >> > > 2 3 4 >> > > 1 2 3 >> > > >> > > If someone knows about this problem, please let me know it. >> > > >> > > Thank you >> > > >> > >> > From rupp at mcs.anl.gov Wed Apr 24 16:18:07 2013 From: rupp at mcs.anl.gov (Karl Rupp) Date: Wed, 24 Apr 2013 16:18:07 -0500 Subject: [petsc-users] Transpose of Block Matrix with aij type In-Reply-To: <1614499041.91064.1366820690560.JavaMail.root@mailhub028.itcs.purdue.edu> References: <1614499041.91064.1366820690560.JavaMail.root@mailhub028.itcs.purdue.edu> Message-ID: <51784C0F.20307@mcs.anl.gov> Hi Joon, yes, MatGetSubMatrices should do the job - even though it copies the values. Just create the respective index sets for irow and icol directly. Have a look at at src/mat/examples/tests/ex51.c [1] on how to extract the matrices. 
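A minimal sketch of that extraction for the leftmost column block, assuming an assembled AIJ matrix A whose first n1 columns form the block of interest (A and n1 are placeholders):

  IS        isrow, iscol;
  Mat      *subA;
  PetscInt  rstart, rend;

  MatGetOwnershipRange(A,&rstart,&rend);
  /* each process asks for its own rows and for columns 0 .. n1-1 */
  ISCreateStride(PETSC_COMM_SELF,rend-rstart,rstart,1,&isrow);
  ISCreateStride(PETSC_COMM_SELF,n1,0,1,&iscol);
  MatGetSubMatrices(A,1,&isrow,&iscol,MAT_INITIAL_MATRIX,&subA);
  /* subA[0] now holds this process's rows of the block as a SeqAIJ matrix */
  ISDestroy(&isrow);
  ISDestroy(&iscol);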
Keep in mind that you need to specify all indices within a block (cf. MatGetSubMatrices man page [2]) Best regards, Karli [1] http://www.mcs.anl.gov/petsc/petsc-dev/src/mat/examples/tests/ex51.c.html [2] http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatGetSubMatrices.html On 04/24/2013 11:24 AM, Joon hee Choi wrote: > Hi Karli, > > It may be a good idea to compute the result blocks directly. What function do I need to use for getting each block? I tried to use MatGetSubMatrices(Mat mat,PetscInt n,const IS irow[],const IS icol[],MatReuse scall,Mat *submat[]), but I didn't know how I got irow[] and icol[] from block matrix. > > Best regards, > Joon > > ----- ?? ??? ----- > ?? ??: "Karl Rupp" > ?? ??: "Joon hee Choi" > ??: petsc-users at mcs.anl.gov > ?? ??: 2013? 4? 24?, ??? ?? 12:07:11 > ??: Re: [petsc-users] Transpose of Block Matrix with aij type > > Hi Joon, > > you're welcome - another idea: if you have just six matrices A1, A2, and > B1-B4, it's worthwhile to compute the result blocks B1A1 + B2A2 and B3A1 > + B4A2 directly into temporary matrices using standard operations and > compose the result into a single matrix. This way you can avoid the more > expensive copies of A1, A2, B1-B4. > > Best regards, > Karli > > > On 04/24/2013 10:45 AM, Joon hee Choi wrote: >> Hi Karli, >> >> Thank you. Actually, B is very huge(more than 10^10) and sparse(about 1% dense). 3x3 and 3x4 matrices were an example. I think I have to set up A again. Anyway, thank you again. >> >> Best regards, >> Joon >> >> >> ----- ?? ??? ----- >> ?? ??: "Karl Rupp" >> ?? ??: "Choi240" >> ??: petsc-users at mcs.anl.gov >> ?? ??: 2013? 4? 23?, ??? ?? 9:39:56 >> ??: Re: [petsc-users] Transpose of Block Matrix with aij type >> >> Hi Joon, >> >> sorry, that was a pretty bad idea (wouldn't even work with square >> matrices in general) >> >> I'm afraid you'll have to set up new matrices A, B with the respective >> entries. For performance reasons you better use a dense format since >> your matrices are so small. >> >> Best regards, >> Karli >> >> >> On 04/23/2013 06:05 PM, Choi240 wrote: >>> Hi, >>> >>> Thank you for your reply. But A1,A2 are not square(3x4). B1~B4 are 3x3 >>> matrices. So A1*B1 is not impossible. >>> >>> Best regards, >>> Joon >>> >>> >>> >>> >>> -------- Original message -------- >>> Subject: Re: [petsc-users] Transpose of Block Matrix with aij type >>> From: Karl Rupp >>> To: Choi240 >>> CC: petsc-users at mcs.anl.gov >>> >>> >>> Hi again, >>> >>> if you have control over the structure of B, what about computing >>> [A1 A2] * [B1 B3; B2 B4] >>> instead? >>> >>> Best regards, >>> Karli >>> >>> >>> On 04/23/2013 05:17 PM, Choi240 wrote: >>> > Hi, >>> > >>> > I have to compute multiplication between two block matrices. It should >>> > be as follows: >>> > >>> > B1 | B2 A1 B1A1+B2A2 >>> > ---------- * --- = --------------- >>> > B3 | B4 A2 B3A1+B4A2 >>> > >>> > However, I just have A = [A1 A2]. So, I need to get A^T. Is there a way >>> > I can get the transpose of this block matrix with the MatTranspose()? Or >>> > do I have to use another function such as MatGetSubMatrices()? >>> > >>> > Thank you, >>> > Joon >>> > >>> > >>> > >>> > -------- Original message -------- >>> > Subject: Re: [petsc-users] Transpose of Block Matrix with aij type >>> > From: Karl Rupp >>> > To: petsc-users at mcs.anl.gov >>> > CC: choi240 at purdue.edu >>> > >>> > >>> > Hi, >>> > >>> > why would you expect that the transpose of a 3x8 matrix is not a >>> 8x3-matrix? 
>>> > >>> > Best regards, >>> > Karli >>> > >>> > >>> > On 04/23/2013 03:27 PM, Joon hee Choi wrote: >>> > > Hello, >>> > > >>> > > I tried to get transpose of block matrix(just with aij type), but the >>> > result was not a block matrix. For example, >>> > > >>> > > A = >>> > > 1 2 3 4 | 4 3 2 1 >>> > > 2 3 4 5 | 5 4 3 2 >>> > > 3 4 5 6 | 6 5 4 3 >>> > > >>> > > >>> > > AT(expected) = >>> > > 1 2 3 4 >>> > > 2 3 4 5 >>> > > 3 4 5 6 >>> > > ------- >>> > > 4 3 2 1 >>> > > 5 4 3 2 >>> > > 6 5 4 3 >>> > > >>> > > >>> > > AT(result) = >>> > > 1 2 3 >>> > > 2 3 4 >>> > > 3 4 5 >>> > > 4 5 6 >>> > > 4 5 6 >>> > > 3 4 5 >>> > > 2 3 4 >>> > > 1 2 3 >>> > > >>> > > If someone knows about this problem, please let me know it. >>> > > >>> > > Thank you >>> > > >>> > >>> >> > From jedbrown at mcs.anl.gov Wed Apr 24 16:18:38 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Wed, 24 Apr 2013 16:18:38 -0500 Subject: [petsc-users] Transpose of Block Matrix with aij type In-Reply-To: <2035581605.90911.1366818344058.JavaMail.root@mailhub028.itcs.purdue.edu> References: <517737EC.7020308@mcs.anl.gov> <2035581605.90911.1366818344058.JavaMail.root@mailhub028.itcs.purdue.edu> Message-ID: On Wed, Apr 24, 2013 at 10:45 AM, Joon hee Choi wrote: > Actually, B is very huge(more than 10^10) and sparse(about 1% dense). (10^{10})^2 * 1% = 10^{18} which is probably not a quantity of memory that you have available. So either the matrix is much smaller than 10^{10} or it's much less than 1% dense. -------------- next part -------------- An HTML attachment was scrubbed... URL: From gpau at lbl.gov Wed Apr 24 21:28:15 2013 From: gpau at lbl.gov (George Pau) Date: Wed, 24 Apr 2013 19:28:15 -0700 Subject: [petsc-users] macport and petsc Message-ID: Hi, I am having some trouble configuring petsc on my mac. I have an updated macport distribution of openmpi. But, the configure fails with the following error: Incomplete LAPACK install? Perhaps lapack package is installed - but lapack-dev/lapack-devel is required. I can't find a lapack-dev on macport as well. It occurs even though with --with-debugging=0. Thanks, George -- George Pau Earth Sciences Division Lawrence Berkeley National Laboratory One Cyclotron, MS 74-120 Berkeley, CA 94720 (510) 486-7196 gpau at lbl.gov http://esd.lbl.gov/about/staff/georgepau/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Wed Apr 24 21:36:32 2013 From: bsmith at mcs.anl.gov (Barry Smith) Date: Wed, 24 Apr 2013 21:36:32 -0500 Subject: [petsc-users] macport and petsc In-Reply-To: References: Message-ID: <5FDCBDB4-AF7E-4D55-81F8-5088715ABD60@mcs.anl.gov> George, Send configure.log to petsc-maint at mcs.anl.gov Barry On Apr 24, 2013, at 9:28 PM, George Pau wrote: > Hi, > > I am having some trouble configuring petsc on my mac. I have an updated macport distribution of openmpi. But, the configure fails with the following error: > > Incomplete LAPACK install? Perhaps lapack package is installed - but lapack-dev/lapack-devel is required. > > I can't find a lapack-dev on macport as well. It occurs even though with --with-debugging=0. 
> > Thanks, > George > > > -- > George Pau > Earth Sciences Division > Lawrence Berkeley National Laboratory > One Cyclotron, MS 74-120 > Berkeley, CA 94720 > > (510) 486-7196 > gpau at lbl.gov > http://esd.lbl.gov/about/staff/georgepau/ From balay at mcs.anl.gov Wed Apr 24 22:00:52 2013 From: balay at mcs.anl.gov (Satish Balay) Date: Wed, 24 Apr 2013 22:00:52 -0500 (CDT) Subject: [petsc-users] macport and petsc In-Reply-To: <5FDCBDB4-AF7E-4D55-81F8-5088715ABD60@mcs.anl.gov> References: <5FDCBDB4-AF7E-4D55-81F8-5088715ABD60@mcs.anl.gov> Message-ID: compressed logs to petsc-users/petsc-dev is acceptable now. [not sure if autocompression of attachments can be setup with mailmain mailing lists] macports has some conflicts with the default blas/lapack - so its best to avoid it. [you can use --download-openmpi to get non-macports mpi] [or use macports to install petsc aswell. I believe if you use scienceports to install PETSc - it might do the right thing with blas/lapack] Satish On Wed, 24 Apr 2013, Barry Smith wrote: > > George, > > Send configure.log to petsc-maint at mcs.anl.gov > > Barry > > On Apr 24, 2013, at 9:28 PM, George Pau wrote: > > > Hi, > > > > I am having some trouble configuring petsc on my mac. I have an updated macport distribution of openmpi. But, the configure fails with the following error: > > > > Incomplete LAPACK install? Perhaps lapack package is installed - but lapack-dev/lapack-devel is required. > > > > I can't find a lapack-dev on macport as well. It occurs even though with --with-debugging=0. > > > > Thanks, > > George > > > > > > -- > > George Pau > > Earth Sciences Division > > Lawrence Berkeley National Laboratory > > One Cyclotron, MS 74-120 > > Berkeley, CA 94720 > > > > (510) 486-7196 > > gpau at lbl.gov > > http://esd.lbl.gov/about/staff/georgepau/ > > From bsmith at mcs.anl.gov Wed Apr 24 22:05:12 2013 From: bsmith at mcs.anl.gov (Barry Smith) Date: Wed, 24 Apr 2013 22:05:12 -0500 Subject: [petsc-users] macport and petsc In-Reply-To: References: <5FDCBDB4-AF7E-4D55-81F8-5088715ABD60@mcs.anl.gov> Message-ID: <3A42728B-D373-419D-A0F1-E54D4C9DCFB6@mcs.anl.gov> On Apr 24, 2013, at 10:00 PM, Satish Balay wrote: > compressed logs to petsc-users/petsc-dev is acceptable now. > > [not sure if autocompression of attachments can be setup with mailmain > mailing lists] > > macports has some conflicts with the default blas/lapack - so its best > to avoid it. [you can use --download-openmpi to get non-macports mpi] Satish, We should fix the PETSc BLAS/LAPACK configuration (BuildSystem) to not "have conflicts with macports blas/lapack" by, for example, to use the Apple BLAS/LAPACK if the macports one fail. That is, we shouldn't tell people not to use macports, we should have a configuration system robust enough to handle macports problems. Is this a problem we can reproduce? and hence debug? and hence fix? Barry > > [or use macports to install petsc aswell. I believe if you use > scienceports to install PETSc - it might do the right thing with > blas/lapack] > > Satish > > On Wed, 24 Apr 2013, Barry Smith wrote: > >> >> George, >> >> Send configure.log to petsc-maint at mcs.anl.gov >> >> Barry >> >> On Apr 24, 2013, at 9:28 PM, George Pau wrote: >> >>> Hi, >>> >>> I am having some trouble configuring petsc on my mac. I have an updated macport distribution of openmpi. But, the configure fails with the following error: >>> >>> Incomplete LAPACK install? Perhaps lapack package is installed - but lapack-dev/lapack-devel is required. 
>>> >>> I can't find a lapack-dev on macport as well. It occurs even though with --with-debugging=0. >>> >>> Thanks, >>> George >>> >>> >>> -- >>> George Pau >>> Earth Sciences Division >>> Lawrence Berkeley National Laboratory >>> One Cyclotron, MS 74-120 >>> Berkeley, CA 94720 >>> >>> (510) 486-7196 >>> gpau at lbl.gov >>> http://esd.lbl.gov/about/staff/georgepau/ >> >> > From balay at mcs.anl.gov Wed Apr 24 22:29:09 2013 From: balay at mcs.anl.gov (Satish Balay) Date: Wed, 24 Apr 2013 22:29:09 -0500 (CDT) Subject: [petsc-users] macport and petsc In-Reply-To: <3A42728B-D373-419D-A0F1-E54D4C9DCFB6@mcs.anl.gov> References: <5FDCBDB4-AF7E-4D55-81F8-5088715ABD60@mcs.anl.gov> <3A42728B-D373-419D-A0F1-E54D4C9DCFB6@mcs.anl.gov> Message-ID: On Wed, 24 Apr 2013, Barry Smith wrote: > > On Apr 24, 2013, at 10:00 PM, Satish Balay wrote: > > > compressed logs to petsc-users/petsc-dev is acceptable now. > > > > [not sure if autocompression of attachments can be setup with mailmain > > mailing lists] > > > > macports has some conflicts with the default blas/lapack - so its best > > to avoid it. [you can use --download-openmpi to get non-macports mpi] > > Satish, > > We should fix the PETSc BLAS/LAPACK configuration (BuildSystem) to not "have conflicts with macports blas/lapack" by, for example, to use the Apple BLAS/LAPACK if the macports one fail. That is, we shouldn't tell people not to use macports, we should have a configuration system robust enough to handle macports problems. Is this a problem we can reproduce? and hence debug? and hence fix? I don't remember the exact issue. [a configure.log for this failure can perhaps refresh our memories] One way a conflicts creep in is - with non-blas-lapack packages that a user might specify from macports. For ex: configure check for blas/lapack from system location succeeds. Next when configure checks the other package from macports location - the blas from macports gets picked up and fails. i.e "gcc -lblas" works. "gcc -L/opt/macports/lib -lotherpackage -lblas" fails.. Satish From bsmith at mcs.anl.gov Wed Apr 24 22:39:41 2013 From: bsmith at mcs.anl.gov (Barry Smith) Date: Wed, 24 Apr 2013 22:39:41 -0500 Subject: [petsc-users] macport and petsc In-Reply-To: References: <5FDCBDB4-AF7E-4D55-81F8-5088715ABD60@mcs.anl.gov> <3A42728B-D373-419D-A0F1-E54D4C9DCFB6@mcs.anl.gov> Message-ID: <9D1798D1-9215-4A61-BB9E-2C576E4DD9DA@mcs.anl.gov> On Apr 24, 2013, at 10:29 PM, Satish Balay wrote: > On Wed, 24 Apr 2013, Barry Smith wrote: > >> >> On Apr 24, 2013, at 10:00 PM, Satish Balay wrote: >> >>> compressed logs to petsc-users/petsc-dev is acceptable now. >>> >>> [not sure if autocompression of attachments can be setup with mailmain >>> mailing lists] >>> >>> macports has some conflicts with the default blas/lapack - so its best >>> to avoid it. [you can use --download-openmpi to get non-macports mpi] >> >> Satish, >> >> We should fix the PETSc BLAS/LAPACK configuration (BuildSystem) to not "have conflicts with macports blas/lapack" by, for example, to use the Apple BLAS/LAPACK if the macports one fail. That is, we shouldn't tell people not to use macports, we should have a configuration system robust enough to handle macports problems. Is this a problem we can reproduce? and hence debug? and hence fix? > > I don't remember the exact issue. [a configure.log for this failure > can perhaps refresh our memories] > > One way a conflicts creep in is - with non-blas-lapack packages that a > user might specify from macports. 
> > For ex: configure check for blas/lapack from system location succeeds. > Next when configure checks the other package from macports location - > the blas from macports gets picked up and fails. > > i.e > "gcc -lblas" works. > "gcc -L/opt/macports/lib -lotherpackage -lblas" fails.. Unix and Buildsystem suck :-). Why does the macports blas fail? Why are they delivering an incomplete blas? Barry > > Satish From gpau at lbl.gov Wed Apr 24 22:51:16 2013 From: gpau at lbl.gov (George Pau) Date: Wed, 24 Apr 2013 20:51:16 -0700 Subject: [petsc-users] macport and petsc In-Reply-To: <9D1798D1-9215-4A61-BB9E-2C576E4DD9DA@mcs.anl.gov> References: <5FDCBDB4-AF7E-4D55-81F8-5088715ABD60@mcs.anl.gov> <3A42728B-D373-419D-A0F1-E54D4C9DCFB6@mcs.anl.gov> <9D1798D1-9215-4A61-BB9E-2C576E4DD9DA@mcs.anl.gov> Message-ID: Hi All, I am attaching the zip file of the configure.log. I forwarding it to both petsc-users and petsc-maint. I completely understand the intricacies of multi-platform build system. Thanks, George On Wed, Apr 24, 2013 at 8:39 PM, Barry Smith wrote: > > On Apr 24, 2013, at 10:29 PM, Satish Balay wrote: > > > On Wed, 24 Apr 2013, Barry Smith wrote: > > > >> > >> On Apr 24, 2013, at 10:00 PM, Satish Balay wrote: > >> > >>> compressed logs to petsc-users/petsc-dev is acceptable now. > >>> > >>> [not sure if autocompression of attachments can be setup with mailmain > >>> mailing lists] > >>> > >>> macports has some conflicts with the default blas/lapack - so its best > >>> to avoid it. [you can use --download-openmpi to get non-macports mpi] > >> > >> Satish, > >> > >> We should fix the PETSc BLAS/LAPACK configuration (BuildSystem) to > not "have conflicts with macports blas/lapack" by, for example, to use the > Apple BLAS/LAPACK if the macports one fail. That is, we shouldn't tell > people not to use macports, we should have a configuration system robust > enough to handle macports problems. Is this a problem we can reproduce? and > hence debug? and hence fix? > > > > I don't remember the exact issue. [a configure.log for this failure > > can perhaps refresh our memories] > > > > One way a conflicts creep in is - with non-blas-lapack packages that a > > user might specify from macports. > > > > For ex: configure check for blas/lapack from system location succeeds. > > Next when configure checks the other package from macports location - > > the blas from macports gets picked up and fails. > > > > i.e > > "gcc -lblas" works. > > "gcc -L/opt/macports/lib -lotherpackage -lblas" fails.. > > Unix and Buildsystem suck :-). > > Why does the macports blas fail? Why are they delivering an incomplete > blas? > > Barry > > > > > > Satish > > -- George Pau Earth Sciences Division Lawrence Berkeley National Laboratory One Cyclotron, MS 74-120 Berkeley, CA 94720 (510) 486-7196 gpau at lbl.gov http://esd.lbl.gov/about/staff/georgepau/ -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
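For anyone who simply needs a working build while the MacPorts conflict is being sorted out, the usual sidestep is to let configure download and build its own MPI and BLAS/LAPACK instead of picking up the MacPorts ones, e.g. ./configure --with-cc=gcc --with-fc=gfortran --download-openmpi --download-f-blas-lapack --with-debugging=0. The compiler names here are illustrative; --download-openmpi and --download-f-blas-lapack are the options already mentioned in this thread.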
Name: configure.log.zip Type: application/zip Size: 162 bytes Desc: not available URL: From jedbrown at mcs.anl.gov Wed Apr 24 23:52:07 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Wed, 24 Apr 2013 23:52:07 -0500 Subject: [petsc-users] macport and petsc In-Reply-To: <9D1798D1-9215-4A61-BB9E-2C576E4DD9DA@mcs.anl.gov> References: <5FDCBDB4-AF7E-4D55-81F8-5088715ABD60@mcs.anl.gov> <3A42728B-D373-419D-A0F1-E54D4C9DCFB6@mcs.anl.gov> <9D1798D1-9215-4A61-BB9E-2C576E4DD9DA@mcs.anl.gov> Message-ID: <87d2tj9pqg.fsf@mcs.anl.gov> Barry Smith writes: >> "gcc -lblas" works. >> "gcc -L/opt/macports/lib -lotherpackage -lblas" fails.. > > Unix and Buildsystem suck :-). This is the reason why CMake insists on resolving libraries to complete paths. It still can't fix linking to shared libraries, but at least they can warn/error when you'll be getting a different version than you asked for. On the other hand, CMake's approach is spoiled by compilers that have private paths that you aren't supposed to know about, but that -lsomelib will be found in. From choi240 at purdue.edu Thu Apr 25 10:26:16 2013 From: choi240 at purdue.edu (Joon hee Choi) Date: Thu, 25 Apr 2013 11:26:16 -0400 (EDT) Subject: [petsc-users] Transpose of Block Matrix with aij type In-Reply-To: <51784C0F.20307@mcs.anl.gov> Message-ID: <1708408068.94268.1366903576914.JavaMail.root@mailhub028.itcs.purdue.edu> Hi Karli, Thank you very much. I solved my problem using MatGetSubMatrices. Best regards, Joon ----- ?? ??? ----- ?? ??: "Karl Rupp" ?? ??: "Joon hee Choi" ??: petsc-users at mcs.anl.gov ?? ??: 2013? 4? 24?, ??? ?? 5:18:07 ??: Re: [petsc-users] Transpose of Block Matrix with aij type Hi Joon, yes, MatGetSubMatrices should do the job - even though it copies the values. Just create the respective index sets for irow and icol directly. Have a look at at src/mat/examples/tests/ex51.c [1] on how to extract the matrices. Keep in mind that you need to specify all indices within a block (cf. MatGetSubMatrices man page [2]) Best regards, Karli [1] http://www.mcs.anl.gov/petsc/petsc-dev/src/mat/examples/tests/ex51.c.html [2] http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatGetSubMatrices.html On 04/24/2013 11:24 AM, Joon hee Choi wrote: > Hi Karli, > > It may be a good idea to compute the result blocks directly. What function do I need to use for getting each block? I tried to use MatGetSubMatrices(Mat mat,PetscInt n,const IS irow[],const IS icol[],MatReuse scall,Mat *submat[]), but I didn't know how I got irow[] and icol[] from block matrix. > > Best regards, > Joon > > ----- ?? ??? ----- > ?? ??: "Karl Rupp" > ?? ??: "Joon hee Choi" > ??: petsc-users at mcs.anl.gov > ?? ??: 2013? 4? 24?, ??? ?? 12:07:11 > ??: Re: [petsc-users] Transpose of Block Matrix with aij type > > Hi Joon, > > you're welcome - another idea: if you have just six matrices A1, A2, and > B1-B4, it's worthwhile to compute the result blocks B1A1 + B2A2 and B3A1 > + B4A2 directly into temporary matrices using standard operations and > compose the result into a single matrix. This way you can avoid the more > expensive copies of A1, A2, B1-B4. > > Best regards, > Karli > > > On 04/24/2013 10:45 AM, Joon hee Choi wrote: >> Hi Karli, >> >> Thank you. Actually, B is very huge(more than 10^10) and sparse(about 1% dense). 3x3 and 3x4 matrices were an example. I think I have to set up A again. Anyway, thank you again. >> >> Best regards, >> Joon >> >> >> ----- ?? ??? ----- >> ?? ??: "Karl Rupp" >> ?? ??: "Choi240" >> ??: petsc-users at mcs.anl.gov >> ?? ??: 2013? 
4? 23?, ??? ?? 9:39:56 >> ??: Re: [petsc-users] Transpose of Block Matrix with aij type >> >> Hi Joon, >> >> sorry, that was a pretty bad idea (wouldn't even work with square >> matrices in general) >> >> I'm afraid you'll have to set up new matrices A, B with the respective >> entries. For performance reasons you better use a dense format since >> your matrices are so small. >> >> Best regards, >> Karli >> >> >> On 04/23/2013 06:05 PM, Choi240 wrote: >>> Hi, >>> >>> Thank you for your reply. But A1,A2 are not square(3x4). B1~B4 are 3x3 >>> matrices. So A1*B1 is not impossible. >>> >>> Best regards, >>> Joon >>> >>> >>> >>> >>> -------- Original message -------- >>> Subject: Re: [petsc-users] Transpose of Block Matrix with aij type >>> From: Karl Rupp >>> To: Choi240 >>> CC: petsc-users at mcs.anl.gov >>> >>> >>> Hi again, >>> >>> if you have control over the structure of B, what about computing >>> [A1 A2] * [B1 B3; B2 B4] >>> instead? >>> >>> Best regards, >>> Karli >>> >>> >>> On 04/23/2013 05:17 PM, Choi240 wrote: >>> > Hi, >>> > >>> > I have to compute multiplication between two block matrices. It should >>> > be as follows: >>> > >>> > B1 | B2 A1 B1A1+B2A2 >>> > ---------- * --- = --------------- >>> > B3 | B4 A2 B3A1+B4A2 >>> > >>> > However, I just have A = [A1 A2]. So, I need to get A^T. Is there a way >>> > I can get the transpose of this block matrix with the MatTranspose()? Or >>> > do I have to use another function such as MatGetSubMatrices()? >>> > >>> > Thank you, >>> > Joon >>> > >>> > >>> > >>> > -------- Original message -------- >>> > Subject: Re: [petsc-users] Transpose of Block Matrix with aij type >>> > From: Karl Rupp >>> > To: petsc-users at mcs.anl.gov >>> > CC: choi240 at purdue.edu >>> > >>> > >>> > Hi, >>> > >>> > why would you expect that the transpose of a 3x8 matrix is not a >>> 8x3-matrix? >>> > >>> > Best regards, >>> > Karli >>> > >>> > >>> > On 04/23/2013 03:27 PM, Joon hee Choi wrote: >>> > > Hello, >>> > > >>> > > I tried to get transpose of block matrix(just with aij type), but the >>> > result was not a block matrix. For example, >>> > > >>> > > A = >>> > > 1 2 3 4 | 4 3 2 1 >>> > > 2 3 4 5 | 5 4 3 2 >>> > > 3 4 5 6 | 6 5 4 3 >>> > > >>> > > >>> > > AT(expected) = >>> > > 1 2 3 4 >>> > > 2 3 4 5 >>> > > 3 4 5 6 >>> > > ------- >>> > > 4 3 2 1 >>> > > 5 4 3 2 >>> > > 6 5 4 3 >>> > > >>> > > >>> > > AT(result) = >>> > > 1 2 3 >>> > > 2 3 4 >>> > > 3 4 5 >>> > > 4 5 6 >>> > > 4 5 6 >>> > > 3 4 5 >>> > > 2 3 4 >>> > > 1 2 3 >>> > > >>> > > If someone knows about this problem, please let me know it. >>> > > >>> > > Thank you >>> > > >>> > >>> >> > From choi240 at purdue.edu Thu Apr 25 10:32:31 2013 From: choi240 at purdue.edu (Joon hee Choi) Date: Thu, 25 Apr 2013 11:32:31 -0400 (EDT) Subject: [petsc-users] Transpose of Block Matrix with aij type In-Reply-To: Message-ID: <1036563252.94300.1366903951768.JavaMail.root@mailhub028.itcs.purdue.edu> I meant 10^10 elements. However, you are right. The elements number of the data I want to use is 10^{7}x10^{14} and nonzero is 10^{8}. I don't have the data yet, so I confused. Best regards, Joon ----- ?? ??? ----- ?? ??: "Jed Brown" ?? ??: "Joon hee Choi" ??: "Karl Rupp" , "PETSc users list" ?? ??: 2013? 4? 24?, ??? ?? 5:18:38 ??: Re: [petsc-users] Transpose of Block Matrix with aij type On Wed, Apr 24, 2013 at 10:45 AM, Joon hee Choi < choi240 at purdue.edu > wrote: Actually, B is very huge(more than 10^10) and sparse(about 1% dense). 
(10^{10})^2 * 1% = 10^{18} which is probably not a quantity of memory that you have available. So either the matrix is much smaller than 10^{10} or it's much less than 1% dense. From dharmareddy84 at gmail.com Thu Apr 25 14:35:26 2013 From: dharmareddy84 at gmail.com (Dharmendar Reddy) Date: Thu, 25 Apr 2013 14:35:26 -0500 Subject: [petsc-users] IS map for sub mesh Message-ID: Hello, I need to access the map from points in subdm to points in dm. Can you please provide fortran binding for DMPlexCreateSubpointIS(DM dm, IS *subpointIS) Also, i was thinking it may be of use to have interface like this.. DMPlexCreateSubpointIS(DM dm, PetscInt pointDimInSubdm, IS *subpointIS) this way i can have map from say (dim)-cells in subdm to corresponding (dim)-cells in dm. thanks Reddy -- ----------------------------------------------------- Dharmendar Reddy Palle Graduate Student Microelectronics Research center, University of Texas at Austin, 10100 Burnet Road, Bldg. 160 MER 2.608F, TX 78758-4445 e-mail: dharmareddy84 at gmail.com Phone: +1-512-350-9082 United States of America. Homepage: https://webspace.utexas.edu/~dpr342 -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Thu Apr 25 15:31:03 2013 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 25 Apr 2013 16:31:03 -0400 Subject: [petsc-users] IS map for sub mesh In-Reply-To: References: Message-ID: On Thu, Apr 25, 2013 at 3:35 PM, Dharmendar Reddy wrote: > Hello, > I need to access the map from points in subdm to points in dm. Can you > please provide fortran binding for > > DMPlexCreateSubpointIS(DM dm, IS *subpointIS) > > Pushed to next. > Also, i was thinking it may be of use to have interface like this.. > > DMPlexCreateSubpointIS(DM dm, PetscInt pointDimInSubdm, IS *subpointIS) > > this way i can have map from say (dim)-cells in subdm to corresponding > (dim)-cells in dm. > The intention here is to use DMPlexGetSubpointMap(). Matt > > thanks > Reddy > > -- > ----------------------------------------------------- > Dharmendar Reddy Palle > Graduate Student > Microelectronics Research center, > University of Texas at Austin, > 10100 Burnet Road, Bldg. 160 > MER 2.608F, TX 78758-4445 > e-mail: dharmareddy84 at gmail.com > Phone: +1-512-350-9082 > United States of America. > Homepage: https://webspace.utexas.edu/~dpr342 > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From dharmareddy84 at gmail.com Thu Apr 25 15:48:08 2013 From: dharmareddy84 at gmail.com (Dharmendar Reddy) Date: Thu, 25 Apr 2013 15:48:08 -0500 Subject: [petsc-users] IS map for sub mesh In-Reply-To: References: Message-ID: On Thu, Apr 25, 2013 at 3:31 PM, Matthew Knepley wrote: > On Thu, Apr 25, 2013 at 3:35 PM, Dharmendar Reddy > wrote: > >> Hello, >> I need to access the map from points in subdm to points in dm. Can you >> please provide fortran binding for >> >> DMPlexCreateSubpointIS(DM dm, IS *subpointIS) >> >> Pushed to next. > > >> Also, i was thinking it may be of use to have interface like this.. >> >> DMPlexCreateSubpointIS(DM dm, PetscInt pointDimInSubdm, IS *subpointIS) >> >> this way i can have map from say (dim)-cells in subdm to corresponding >> (dim)-cells in dm. >> > > The intention here is to use DMPlexGetSubpointMap(). > > So, i should do the calls below ? 
DMPlexGetSubpointMap(DM dm, DMLabel subpointMap) DMLabelGetStratumIS(subpointMap,depth,IS) I have had trouble using DMLabel in my fortran code earlier. I can give it a try again, Is there a fortran binding for above functions ? Thanks Reddy > Matt > > >> >> thanks >> Reddy >> >> -- >> ----------------------------------------------------- >> Dharmendar Reddy Palle >> Graduate Student >> Microelectronics Research center, >> University of Texas at Austin, >> 10100 Burnet Road, Bldg. 160 >> MER 2.608F, TX 78758-4445 >> e-mail: dharmareddy84 at gmail.com >> Phone: +1-512-350-9082 >> United States of America. >> Homepage: https://webspace.utexas.edu/~dpr342 >> > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > -- ----------------------------------------------------- Dharmendar Reddy Palle Graduate Student Microelectronics Research center, University of Texas at Austin, 10100 Burnet Road, Bldg. 160 MER 2.608F, TX 78758-4445 e-mail: dharmareddy84 at gmail.com Phone: +1-512-350-9082 United States of America. Homepage: https://webspace.utexas.edu/~dpr342 -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Thu Apr 25 15:57:21 2013 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 25 Apr 2013 16:57:21 -0400 Subject: [petsc-users] IS map for sub mesh In-Reply-To: References: Message-ID: On Thu, Apr 25, 2013 at 4:48 PM, Dharmendar Reddy wrote: > > > > On Thu, Apr 25, 2013 at 3:31 PM, Matthew Knepley wrote: > >> On Thu, Apr 25, 2013 at 3:35 PM, Dharmendar Reddy < >> dharmareddy84 at gmail.com> wrote: >> >>> Hello, >>> I need to access the map from points in subdm to points in dm. Can you >>> please provide fortran binding for >>> >>> DMPlexCreateSubpointIS(DM dm, IS *subpointIS) >>> >>> Pushed to next. >> >> >>> Also, i was thinking it may be of use to have interface like this.. >>> >>> DMPlexCreateSubpointIS(DM dm, PetscInt pointDimInSubdm, IS *subpointIS) >>> >>> this way i can have map from say (dim)-cells in subdm to corresponding >>> (dim)-cells in dm. >>> >> >> The intention here is to use DMPlexGetSubpointMap(). >> >> So, i should do the calls below ? > > DMPlexGetSubpointMap(DM dm, DMLabel subpointMap) > > DMLabelGetStratumIS(subpointMap,depth,IS) > > > I have had trouble using DMLabel in my fortran code earlier. > I can give it a try again, Is there a fortran binding for above functions ? > > Hmm, I have not tested it. I will put it in an example. Thanks, Matt > Thanks > Reddy > > > > > >> Matt >> >> >>> >>> thanks >>> Reddy >>> >>> -- >>> ----------------------------------------------------- >>> Dharmendar Reddy Palle >>> Graduate Student >>> Microelectronics Research center, >>> University of Texas at Austin, >>> 10100 Burnet Road, Bldg. 160 >>> MER 2.608F, TX 78758-4445 >>> e-mail: dharmareddy84 at gmail.com >>> Phone: +1-512-350-9082 >>> United States of America. >>> Homepage: https://webspace.utexas.edu/~dpr342 >>> >> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> > > > > -- > ----------------------------------------------------- > Dharmendar Reddy Palle > Graduate Student > Microelectronics Research center, > University of Texas at Austin, > 10100 Burnet Road, Bldg. 
160 > MER 2.608F, TX 78758-4445 > e-mail: dharmareddy84 at gmail.com > Phone: +1-512-350-9082 > United States of America. > Homepage: https://webspace.utexas.edu/~dpr342 > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From dharmareddy84 at gmail.com Thu Apr 25 18:03:48 2013 From: dharmareddy84 at gmail.com (Dharmendar Reddy) Date: Thu, 25 Apr 2013 18:03:48 -0500 Subject: [petsc-users] Boundary Nodes of DMPlex object Message-ID: Hello, I need to mark the nodes on the boundary of a mesh to impose Dirichlet BC. I am doing the following: call DMPlexGetHeightStratum(subdm,1,facetIdStart,facetIdend,ierr) do facetId=facetIdStart,facetIdEnd ! check if a facet is internal, i.e., numSharedCell=2 because a facet is ! shared by two cells call DMPlexGetSupportSize(subdm,facetId,numSharedCell,ierr) if(numSharedCell==1) then ! facet is a boundary face ! mark the nodes of this face as boundary end if end do How do i get the 0-cells (ie nodes) forming given dim-cell ? Am i using the right approach ? Also, is there DMPlex functionality to mark boundary nodes ? Thanks Reddy -- ----------------------------------------------------- Dharmendar Reddy Palle Graduate Student Microelectronics Research center, University of Texas at Austin, 10100 Burnet Road, Bldg. 160 MER 2.608F, TX 78758-4445 e-mail: dharmareddy84 at gmail.com Phone: +1-512-350-9082 United States of America. Homepage: https://webspace.utexas.edu/~dpr342 -------------- next part -------------- An HTML attachment was scrubbed... URL: From dharmareddy84 at gmail.com Thu Apr 25 18:13:11 2013 From: dharmareddy84 at gmail.com (Dharmendar Reddy) Date: Thu, 25 Apr 2013 18:13:11 -0500 Subject: [petsc-users] Boundary Nodes of DMPlex object In-Reply-To: References: Message-ID: Should i be using : DMPlexGetTransitiveClosure ? On Thu, Apr 25, 2013 at 6:03 PM, Dharmendar Reddy wrote: > Hello, > I need to mark the nodes on the boundary of a mesh to impose > Dirichlet BC. I am doing the following: > > call DMPlexGetHeightStratum(subdm,1,facetIdStart,facetIdend,ierr) > do facetId=facetIdStart,facetIdEnd > ! check if a facet is internal, i.e., numSharedCell=2 because a > facet is > ! shared by two cells > call DMPlexGetSupportSize(subdm,facetId,numSharedCell,ierr) > if(numSharedCell==1) then ! facet is a boundary face > ! mark the nodes of this face as boundary > > end if > end do > > How do i get the 0-cells (ie nodes) forming given dim-cell ? > > Am i using the right approach ? > Also, is there DMPlex functionality to mark boundary nodes ? > > Thanks > Reddy > -- > ----------------------------------------------------- > Dharmendar Reddy Palle > Graduate Student > Microelectronics Research center, > University of Texas at Austin, > 10100 Burnet Road, Bldg. 160 > MER 2.608F, TX 78758-4445 > e-mail: dharmareddy84 at gmail.com > Phone: +1-512-350-9082 > United States of America. > Homepage: https://webspace.utexas.edu/~dpr342 > -- ----------------------------------------------------- Dharmendar Reddy Palle Graduate Student Microelectronics Research center, University of Texas at Austin, 10100 Burnet Road, Bldg. 160 MER 2.608F, TX 78758-4445 e-mail: dharmareddy84 at gmail.com Phone: +1-512-350-9082 United States of America. Homepage: https://webspace.utexas.edu/~dpr342 -------------- next part -------------- An HTML attachment was scrubbed... 
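A C sketch of the closure-based version of the facet loop quoted above; the label name "boundary" and the helper routine are illustrative, and, as the reply that follows points out, DMPlexMarkBoundaryFaces() plus DMPlexLabelComplete() accomplishes the same thing:

    #include <petscdmplex.h>

    /* Mark every vertex belonging to a boundary facet (a facet whose
       support contains only one cell) in a label named "boundary". */
    PetscErrorCode MarkBoundaryVertices(DM dm)
    {
      PetscInt       fStart, fEnd, vStart, vEnd, f;
      PetscErrorCode ierr;

      ierr = DMPlexGetHeightStratum(dm, 1, &fStart, &fEnd);CHKERRQ(ierr); /* facets   */
      ierr = DMPlexGetDepthStratum(dm, 0, &vStart, &vEnd);CHKERRQ(ierr);  /* vertices */
      for (f = fStart; f < fEnd; ++f) {
        PetscInt supportSize, closureSize, cl, *closure = NULL;

        ierr = DMPlexGetSupportSize(dm, f, &supportSize);CHKERRQ(ierr);
        if (supportSize != 1) continue;                                   /* interior facet */
        ierr = DMPlexGetTransitiveClosure(dm, f, PETSC_TRUE, &closureSize, &closure);CHKERRQ(ierr);
        for (cl = 0; cl < 2*closureSize; cl += 2) {                       /* (point, orientation) pairs */
          const PetscInt p = closure[cl];
          if (p >= vStart && p < vEnd) {
            ierr = DMPlexSetLabelValue(dm, "boundary", p, 1);CHKERRQ(ierr);
          }
        }
        ierr = DMPlexRestoreTransitiveClosure(dm, f, PETSC_TRUE, &closureSize, &closure);CHKERRQ(ierr);
      }
      return 0;
    }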
URL: From knepley at gmail.com Thu Apr 25 20:37:22 2013 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 25 Apr 2013 21:37:22 -0400 Subject: [petsc-users] Boundary Nodes of DMPlex object In-Reply-To: References: Message-ID: On Thu, Apr 25, 2013 at 7:03 PM, Dharmendar Reddy wrote: > Hello, > I need to mark the nodes on the boundary of a mesh to impose > Dirichlet BC. I am doing the following: > > call DMPlexGetHeightStratum(subdm,1,facetIdStart,facetIdend,ierr) > do facetId=facetIdStart,facetIdEnd > ! check if a facet is internal, i.e., numSharedCell=2 because a > facet is > ! shared by two cells > call DMPlexGetSupportSize(subdm,facetId,numSharedCell,ierr) > if(numSharedCell==1) then ! facet is a boundary face > ! mark the nodes of this face as boundary > > end if > end do > In SNES ex12, I do this. You could start with ierr = DMPlexMarkBoundaryFaces(dm, label);CHKERRQ(ierr); but what you have is fine. Then I use ierr = DMPlexLabelComplete(dm, label);CHKERRQ(ierr); which does what you want. Underneath it does exactly what you proposed, namely mark the closure of each point in the label. Thanks, Matt > How do i get the 0-cells (ie nodes) forming given dim-cell ? > > Am i using the right approach ? > Also, is there DMPlex functionality to mark boundary nodes ? > > Thanks > Reddy > -- > ----------------------------------------------------- > Dharmendar Reddy Palle > Graduate Student > Microelectronics Research center, > University of Texas at Austin, > 10100 Burnet Road, Bldg. 160 > MER 2.608F, TX 78758-4445 > e-mail: dharmareddy84 at gmail.com > Phone: +1-512-350-9082 > United States of America. > Homepage: https://webspace.utexas.edu/~dpr342 > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From kennan.wong at gmail.com Thu Apr 25 23:11:54 2013 From: kennan.wong at gmail.com (Kainan Wang) Date: Thu, 25 Apr 2013 23:11:54 -0500 Subject: [petsc-users] a config problem Message-ID: Hello, I tried to install petsc 3.3-p6 on a cluster with gcc/4.4.3 or gcc/4.4.5.I manually load the module of gotoblas (v1.30) and specify in the petsc configure command as ./configure --with-blas-lapack-dir=/opt/apps/gotoblas/1.30 and I got the following error: ------------------------------------------------------------------------------- You set a value for --with-blas-lapack-dir=, but /opt/apps/gotoblas/1.30 cannot be used ******************************************************************************* However, if I change the compiler to intel (v11.1) the same configure can work and installation is smooth. Best, Kainan -- Kainan Wang www.math.tamu.edu/~kwang Texas A&M University --------------- Wish U happiness EveRyday ??? -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From balay at mcs.anl.gov Thu Apr 25 23:17:23 2013 From: balay at mcs.anl.gov (Satish Balay) Date: Thu, 25 Apr 2013 23:17:23 -0500 (CDT) Subject: [petsc-users] a config problem In-Reply-To: References: Message-ID: On Thu, 25 Apr 2013, Kainan Wang wrote: > Hello, > > I tried to install petsc 3.3-p6 on a cluster with gcc/4.4.3 or > gcc/4.4.5.I manually load the module of gotoblas (v1.30) and specify > in the petsc > configure command as > > ./configure --with-blas-lapack-dir=/opt/apps/gotoblas/1.30 > > and I got the following error: > ------------------------------------------------------------------------------- > You set a value for --with-blas-lapack-dir=, but > /opt/apps/gotoblas/1.30 cannot be used > ******************************************************************************* configure doesn't know enought about goto blas. Try using --with-blas-lapack-lib option and specify the exact link command that should be useable with gotoblas. > > However, if I change the compiler to intel (v11.1) the same configure can > work and installation is smooth. perhaps configure found a different blas. you can check the summary it printed. If you still have trouble - send configure.log. Without that - we won't know what issues you are encountering. Satish From kennan.wong at gmail.com Fri Apr 26 00:18:33 2013 From: kennan.wong at gmail.com (Kainan Wang) Date: Fri, 26 Apr 2013 00:18:33 -0500 Subject: [petsc-users] a config problem In-Reply-To: References: Message-ID: I just tried with the --with-blas-lapack-lib option and it is the same: it works for intel compilers but not for gcc. When having the intel compiler, the configure summary has the following line: BLAS/LAPACK: -Wl,-rpath,/opt/apps/gotoblas/1.30 -L/opt/apps/gotoblas/1.30 -lgoto_lp64 while configuring with gcc compiler it gives error. Please see attachment for the configure.log when using gcc to configure. Kainan On Thu, Apr 25, 2013 at 11:17 PM, Satish Balay wrote: > On Thu, 25 Apr 2013, Kainan Wang wrote: > > > Hello, > > > > I tried to install petsc 3.3-p6 on a cluster with gcc/4.4.3 or > > gcc/4.4.5.I manually load the module of gotoblas (v1.30) and specify > > in the petsc > > configure command as > > > > ./configure --with-blas-lapack-dir=/opt/apps/gotoblas/1.30 > > > > and I got the following error: > > > ------------------------------------------------------------------------------- > > You set a value for --with-blas-lapack-dir=, but > > /opt/apps/gotoblas/1.30 cannot be used > > > ******************************************************************************* > > configure doesn't know enought about goto blas. Try using > --with-blas-lapack-lib option and specify the exact link command that > should be useable with gotoblas. > > > > > However, if I change the compiler to intel (v11.1) the same configure can > > work and installation is smooth. > > perhaps configure found a different blas. you can check the summary it > printed. > > If you still have trouble - send configure.log. Without that - we won't > know what issues you are encountering. > > Satish > -- Kainan Wang www.math.tamu.edu/~kwang Texas A&M University --------------- Wish U happiness EveRyday ??? -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
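Concretely, the --with-blas-lapack-lib form suggested above would look something like ./configure --with-blas-lapack-lib=/opt/apps/gotoblas/1.30/libgoto_lp64.so, with the path taken from the link errors quoted in the next message; whether that single shared library really provides both BLAS and LAPACK for this GotoBLAS build is an assumption that configure's link test will confirm or reject.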
Name: configure.log Type: application/octet-stream Size: 2488989 bytes Desc: not available URL: From balay at mcs.anl.gov Fri Apr 26 00:29:28 2013 From: balay at mcs.anl.gov (Satish Balay) Date: Fri, 26 Apr 2013 00:29:28 -0500 (CDT) Subject: [petsc-users] a config problem In-Reply-To: References: Message-ID: >>>>>>>. /opt/apps/gotoblas/1.30/libgoto_lp64.so: undefined reference to `_intel_fast_memcpy' /opt/apps/gotoblas/1.30/libgoto_lp64.so: undefined reference to `for_cpystr' /opt/apps/gotoblas/1.30/libgoto_lp64.so: undefined reference to `for_concat' /opt/apps/gotoblas/1.30/libgoto_lp64.so: undefined reference to `__powr8i4' /opt/apps/gotoblas/1.30/libgoto_lp64.so: undefined reference to `__svml_cosf4' /opt/apps/gotoblas/1.30/libgoto_lp64.so: undefined reference to `__svml_roundf4' /opt/apps/gotoblas/1.30/libgoto_lp64.so: undefined reference to `__svml_cos2' /opt/apps/gotoblas/1.30/libgoto_lp64.so: undefined reference to `_intel_fast_memset' /opt/apps/gotoblas/1.30/libgoto_lp64.so: undefined reference to `__powr4i4' /opt/apps/gotoblas/1.30/libgoto_lp64.so: undefined reference to `__libm_sse2_sincos' /opt/apps/gotoblas/1.30/libgoto_lp64.so: undefined reference to `__svml_logf4' <<<<<<<<<< Looks like this goto blas is built with intel compilers - so it won't work with gnu compilers. [unless you know the intel compiler libraries that are required - and link them in aswell] Or use --download-f-blas-lapack instead. Satish On Fri, 26 Apr 2013, Kainan Wang wrote: > I just tried with the --with-blas-lapack-lib option and it is the same: it > works for intel compilers but not for gcc. When having the intel compiler, > the configure summary has the following line: > > BLAS/LAPACK: -Wl,-rpath,/opt/apps/gotoblas/1.30 -L/opt/apps/gotoblas/1.30 > -lgoto_lp64 > > while configuring with gcc compiler it gives error. > > Please see attachment for the configure.log when using gcc to configure. > > Kainan > > > On Thu, Apr 25, 2013 at 11:17 PM, Satish Balay wrote: > > > On Thu, 25 Apr 2013, Kainan Wang wrote: > > > > > Hello, > > > > > > I tried to install petsc 3.3-p6 on a cluster with gcc/4.4.3 or > > > gcc/4.4.5.I manually load the module of gotoblas (v1.30) and specify > > > in the petsc > > > configure command as > > > > > > ./configure --with-blas-lapack-dir=/opt/apps/gotoblas/1.30 > > > > > > and I got the following error: > > > > > ------------------------------------------------------------------------------- > > > You set a value for --with-blas-lapack-dir=, but > > > /opt/apps/gotoblas/1.30 cannot be used > > > > > ******************************************************************************* > > > > configure doesn't know enought about goto blas. Try using > > --with-blas-lapack-lib option and specify the exact link command that > > should be useable with gotoblas. > > > > > > > > However, if I change the compiler to intel (v11.1) the same configure can > > > work and installation is smooth. > > > > perhaps configure found a different blas. you can check the summary it > > printed. > > > > If you still have trouble - send configure.log. Without that - we won't > > know what issues you are encountering. > > > > Satish > > > > > > From kennan.wong at gmail.com Fri Apr 26 00:35:36 2013 From: kennan.wong at gmail.com (Kainan Wang) Date: Fri, 26 Apr 2013 00:35:36 -0500 Subject: [petsc-users] a config problem In-Reply-To: References: Message-ID: OK I see. I will try the download option. Thanks ! On Fri, Apr 26, 2013 at 12:29 AM, Satish Balay wrote: > >>>>>>>. 
> /opt/apps/gotoblas/1.30/libgoto_lp64.so: undefined reference to > `_intel_fast_memcpy' > /opt/apps/gotoblas/1.30/libgoto_lp64.so: undefined reference to > `for_cpystr' > /opt/apps/gotoblas/1.30/libgoto_lp64.so: undefined reference to > `for_concat' > /opt/apps/gotoblas/1.30/libgoto_lp64.so: undefined reference to `__powr8i4' > /opt/apps/gotoblas/1.30/libgoto_lp64.so: undefined reference to > `__svml_cosf4' > /opt/apps/gotoblas/1.30/libgoto_lp64.so: undefined reference to > `__svml_roundf4' > /opt/apps/gotoblas/1.30/libgoto_lp64.so: undefined reference to > `__svml_cos2' > /opt/apps/gotoblas/1.30/libgoto_lp64.so: undefined reference to > `_intel_fast_memset' > /opt/apps/gotoblas/1.30/libgoto_lp64.so: undefined reference to `__powr4i4' > /opt/apps/gotoblas/1.30/libgoto_lp64.so: undefined reference to > `__libm_sse2_sincos' > /opt/apps/gotoblas/1.30/libgoto_lp64.so: undefined reference to > `__svml_logf4' > <<<<<<<<<< > > Looks like this goto blas is built with intel compilers - so it won't > work with gnu compilers. > > [unless you know the intel compiler libraries that are required - and > link them in aswell] > > Or use --download-f-blas-lapack instead. > > Satish > > On Fri, 26 Apr 2013, Kainan Wang wrote: > > > I just tried with the --with-blas-lapack-lib option and it is the same: > it > > works for intel compilers but not for gcc. When having the intel > compiler, > > the configure summary has the following line: > > > > BLAS/LAPACK: -Wl,-rpath,/opt/apps/gotoblas/1.30 -L/opt/apps/gotoblas/1.30 > > -lgoto_lp64 > > > > while configuring with gcc compiler it gives error. > > > > Please see attachment for the configure.log when using gcc to configure. > > > > Kainan > > > > > > On Thu, Apr 25, 2013 at 11:17 PM, Satish Balay > wrote: > > > > > On Thu, 25 Apr 2013, Kainan Wang wrote: > > > > > > > Hello, > > > > > > > > I tried to install petsc 3.3-p6 on a cluster with gcc/4.4.3 or > > > > gcc/4.4.5.I manually load the module of gotoblas (v1.30) and specify > > > > in the petsc > > > > configure command as > > > > > > > > ./configure --with-blas-lapack-dir=/opt/apps/gotoblas/1.30 > > > > > > > > and I got the following error: > > > > > > > > ------------------------------------------------------------------------------- > > > > You set a value for --with-blas-lapack-dir=, but > > > > /opt/apps/gotoblas/1.30 cannot be used > > > > > > > > ******************************************************************************* > > > > > > configure doesn't know enought about goto blas. Try using > > > --with-blas-lapack-lib option and specify the exact link command that > > > should be useable with gotoblas. > > > > > > > > > > > However, if I change the compiler to intel (v11.1) the same > configure can > > > > work and installation is smooth. > > > > > > perhaps configure found a different blas. you can check the summary it > > > printed. > > > > > > If you still have trouble - send configure.log. Without that - we won't > > > know what issues you are encountering. > > > > > > Satish > > > > > > > > > > > > > -- Kainan Wang www.math.tamu.edu/~kwang Texas A&M University --------------- Wish U happiness EveRyday ??? -------------- next part -------------- An HTML attachment was scrubbed... URL: From alexei.matveev+petsc at gmail.com Fri Apr 26 07:17:03 2013 From: alexei.matveev+petsc at gmail.com (Alexei Matveev) Date: Fri, 26 Apr 2013 14:17:03 +0200 Subject: [petsc-users] command line handling in PETSC 3.2, a simple fix for Wheezy? 
Message-ID: Dear List, While porting an application to the upcoming Debian Wheeze we noted a misbehaviour in command line parsing by PETSC 3.2. It looks like if a "--key value" pair is following a "--flag-with-no-value" option at the leading position, the key-value pair is being ignored. See minimal example below. Is this something known? Is there a simple fix that had a chance to convince Debian maintainers to apply it before 7.0 or even 7.0.1? Alexei, Bo $ cat options.c #include "petsc.h" int main( int argc, char *argv[] ) { int val1 = 1; int val2 = 2; PetscBool test; PetscErrorCode ierr; PetscInitialize( &argc, &argv, 0, 0); ierr = PetscOptionsHasName (PETSC_NULL, "--test", &test); if (test) PetscPrintf (PETSC_COMM_WORLD, "with --test option\n"); else PetscPrintf (PETSC_COMM_WORLD, "without --test option \n"); ierr = PetscOptionsGetInt (PETSC_NULL, "--op1", &val1, &test); ierr = PetscOptionsGetInt (PETSC_NULL, "--op2", &val2, &test); PetscPrintf (PETSC_COMM_WORLD, "op1: %d\n", val1); PetscPrintf (PETSC_COMM_WORLD, "op2: %d\n", val2); PetscFinalize(); return 0; } When executing on neh13, "--opt1" couldn't get its value from command line when the no-value option "--test" appears before it: lib at neh13:~/petsc/options$ ./test --test --op1 23 --op2 45 with --test option op1: 1 op2: 45 lib at neh13:~/petsc/options$ ./test --op1 23 --op2 45 without --test option op1: 23 op2: 45 lib at neh13:~/petsc/options$ ./test --op1 23 --op2 45 --test with --test option op1: 23 op2: 45 I also test it on quad4 and on my local cluster with the latest version of Petsc(3.3), this bug doesn't appear: lib at crc3:~/petsc/options$ ./test --test --op1 23 --op2 45 with --test option op1: 23 op2: 45 Looks it only exists in Petsc-3.2 :( -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Fri Apr 26 08:12:45 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Fri, 26 Apr 2013 08:12:45 -0500 Subject: [petsc-users] command line handling in PETSC 3.2, a simple fix for Wheezy? In-Reply-To: References: Message-ID: <87fvyd4er6.fsf@mcs.anl.gov> Alexei Matveev writes: > Dear List, > > While porting an application to the upcoming Debian Wheeze > we noted a misbehaviour in command line parsing by PETSC 3.2. > > It looks like if a "--key value" pair is following a "--flag-with-no-value" > option > at the leading position, the key-value pair is being ignored. See minimal > example below. > > Is this something known? Is there a simple fix that had a chance > to convince Debian maintainers to apply it before 7.0 or even > 7.0.1? Historically, PETSc has not intended to support double-dash (--) options, so if it worked with older versions, it was just coincidental. In particular, --options did not work correctly with prefixes at any time. At the request of a user, I added support in commit 7cd08cec2432658160c70b3c529f243c3b7cdafe Author: Jed Brown Date: Tue Feb 28 13:45:41 2012 -0600 Allow keys starting with --, handle prefixes as --prefix_option instead of -prefix_-option. We thought of this as a feature rather than a bug fix at the time, but I think it's highly unlikely that someone was depending on the weird behavior, thus I think it's acceptable to backport. Consequently, I've cherry-picked this back onto 'maint-3.2'. https://bitbucket.org/petsc/petsc/commits/branch/maint-3.2 Please roll your package from this branch, assuming that using petsc-3.3 is not an option for Wheezy. 
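Until a patched 3.2 package is available, a workaround that stays within what the 3.2 parser handles reliably is to use single-dash option names, which is the spelling PETSc has always supported. A sketch of the same check with renamed (illustrative) options:

    PetscBool      test, set;
    PetscInt       op1 = 1, op2 = 2;
    PetscErrorCode ierr;

    ierr = PetscOptionsHasName(PETSC_NULL, "-test", &test);
    ierr = PetscOptionsGetInt(PETSC_NULL, "-op1", &op1, &set);
    ierr = PetscOptionsGetInt(PETSC_NULL, "-op2", &op2, &set);

invoked as ./a.out -test -op1 23 -op2 45. Alternatively, the patched tree itself can be obtained with git clone https://bitbucket.org/petsc/petsc followed by git checkout maint-3.2; the repository location is inferred from the branch URL above.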
From parsani.matteo at gmail.com Fri Apr 26 10:20:21 2013 From: parsani.matteo at gmail.com (Matteo Parsani) Date: Fri, 26 Apr 2013 11:20:21 -0400 Subject: [petsc-users] unexpected ordering when VecSetValues set multiples values Message-ID: Hello, I have some problem when I try to set multiple values to a PETSc vector that I will use later on with SNES. I am using Fortran 90. Here the problem and two fixes that however are not so good for performances reasons. The code is very simple. *Standard approach that does not work correctly: (I am probably doing something wrong) *m = 1 ! Loop over all elements do ielem = elem_low, elem_high ! Loop over all nodes in the element do inode = 1, nodesperelem !Loop over all equations do ieq = 1, nequations ! Add element to x_vec_local x_vec_local(m) = ug(ieq,inode,ielem) ! Add element to index list ind(m) = (elem_low-1)*nodesperelem*nequations+m-1 ! Update m index m = m+1 end do end do end do ! Set values in the portion of the vector owned by the process call VecSetValues(x_vec_in,len_local,index_list,x_vec_local,INSERT_VALUES,& & ierr_local) ! Assemble initial guess call VecAssemblyBegin(x_vec_in,ierr_local) call VecAssemblyEnd(x_vec_in,ierr_local) Then I print my expected values and the values contained in the PETSc vector to a file. See attachment. I am running in serial for the moment BUT strangely if you look at the file I have attached the first 79 DOFs values have a wrong ordering and the remaining 80 are zero. *1st approach: set just one value at the time inside the loop*. m = 1 ! Loop over all elements do ielem = elem_low, elem_high ! Loop over all nodes in the element do inode = 1, nodesperelem !Loop over all equations do ieq = 1, nequations ! Add element to x_vec_local value = ug(ieq,inode,ielem) ! Add element to index list ind = (elem_low-1)*nodesperelem*nequations+m-1 call VecSetValues(x_vec_in,1,ind,value,INSERT_VALUES,& & ierr_local) ! Update m index m = m+1 end do end do end do *This works fine*. As you can see I am using the same expression used in the previous loop to compute the index of the element that I have to add in the x_vec_in, i.e. ind = (elem_low-1)*nodesperelem*nequations+m-1 Thus I cannot see which is the problem. *2nd approach: get the pointer to the local part of the global vector and use it to set the values in the global vector *m = 1 ! Loop over all elements do ielem = elem_low, elem_high ! Loop over all nodes in the element do inode = 1, nodesperelem !Loop over all equations do ieq = 1, nequations ! Add element to x_vec_local tmp(m) = ug(ieq,inode,ielem) ! Update m index m = m+1 end do end do end do* * *This works fine too*. Jut to be complete. I use the following two approaches to view the vector: call VecView(x_vec_in,PETSC_VIEWER_STDOUT_WORLD,ierr_local) and call VecGetArrayF90(x_vec_in,tmp,ierr_local) m = 1 ! Loop over all elements do ielem = elem_low, elem_high ! Loop over all nodes in the element do inode = 1, nodesperelem !Loop over all equations do ieq = 1, nequations write(*,*) m,index_list(m),x_vec_local(m),tmp(m) ! Update m index m = m+1 end do end do end do Thank you. -- Matteo -------------- next part -------------- An HTML attachment was scrubbed... 
URL: -------------- next part -------------- Fortran index PETSc index Expected values wrong values stored in PETSc vec (x_vec_in) 1 0 0.93948776058346750 0.73888153309055049 2 1 0.55294328167143081 0.98109660706253266 3 2 0.98109660706253266 0.72201428570658543 4 3 0.0000000000000000 0.37500572347501365 5 4 0.72201428570658543 0.0000000000000000 6 5 0.87455684095044839 0.92271262338061255 7 6 0.37500572347501365 1.0498576790186152 8 7 0.74592494251227448 0.73867698789744696 9 8 0.0000000000000000 0.79005016762214764 10 9 0.63192361099133243 0.0000000000000000 11 10 0.92271262338061255 0.87455684095044839 12 11 0.86706624329676873 0.74592494251227448 13 12 1.0498576790186152 0.63192361099133243 14 13 0.0000000000000000 0.30331870059955296 15 14 0.73867698789744696 0.0000000000000000 16 15 0.84075382752435801 0.84075382752435801 17 16 0.79005016762214764 0.77428621161408018 18 17 0.77428621161408018 0.63305806072430315 19 18 0.0000000000000000 0.75036023117937511 20 19 0.63305806072430315 0.0000000000000000 21 20 0.87455684095044839 0.84075382752435801 22 21 0.37500572347501365 0.28755474459148550 23 22 0.74592494251227448 0.57067306732654188 24 23 0.0000000000000000 0.37500572347501365 25 24 0.63192361099133243 0.0000000000000000 26 25 0.84075382752435801 0.79851667937097781 27 26 0.30331870059955296 0.27310878912639885 28 27 0.28755474459148550 0.56120402227094435 29 28 0.0000000000000000 0.79005016762214764 30 29 0.57067306732654188 0.0000000000000000 31 30 0.84075382752435801 0.87455684095044839 32 31 0.79005016762214764 -0.14769283033564165 33 32 0.77428621161408018 0.60136008311655142 34 33 0.0000000000000000 0.55294328167143081 35 34 0.63305806072430315 0.0000000000000000 36 35 0.79851667937097781 0.84075382752435801 37 36 0.75036023117937511 -0.19917672243110918 38 37 0.27310878912639885 0.59976366750186694 39 38 0.0000000000000000 0.86706624329676873 40 39 0.56120402227094435 0.0000000000000000 41 40 0.84075382752435801 0.92271262338061255 42 41 0.30331870059955296 1.0498576790186152 43 42 0.28755474459148550 0.73867698789744696 44 43 0.0000000000000000 0.79005016762214764 45 44 0.57067306732654188 0.0000000000000000 46 45 0.87455684095044839 0.93948776058346750 47 46 0.37500572347501365 0.98109660706253266 48 47 -0.14769283033564165 0.78401265530230047 49 48 0.0000000000000000 1.2686234963229299 50 49 0.60136008311655142 0.0000000000000000 51 50 0.79851667937097781 0.84075382752435801 52 51 0.75036023117937511 0.77428621161408018 53 52 0.27310878912639885 0.63305806072430315 54 53 0.0000000000000000 0.75036023117937511 55 54 0.56120402227094435 0.0000000000000000 56 55 0.84075382752435801 0.87455684095044839 57 56 0.79005016762214764 0.74592494251227448 58 57 -0.19917672243110918 0.71589621368616485 59 58 0.0000000000000000 1.2767816346447423 60 59 0.59976366750186694 0.0000000000000000 61 60 0.87455684095044839 0.79851667937097781 62 61 0.37500572347501365 0.27310878912639885 63 62 -0.14769283033564165 0.56120402227094435 64 63 0.0000000000000000 0.79005016762214764 65 64 0.60136008311655142 0.0000000000000000 66 65 0.93948776058346750 0.84075382752435801 67 66 0.55294328167143081 0.28755474459148550 68 67 -0.33844913000759452 0.66214866089962809 69 68 0.0000000000000000 1.2686234963229299 70 69 0.67688316349483535 0.0000000000000000 71 70 0.84075382752435801 0.84075382752435801 72 71 0.79005016762214764 -0.19917672243110918 73 72 -0.19917672243110918 0.59976366750186694 74 73 0.0000000000000000 0.86706624329676873 75 74 0.59976366750186694 0.0000000000000000 76 75 
0.92271262338061255 0.87455684095044839 77 76 0.86706624329676873 -0.14769283033564165 78 77 -0.41868507162453339 0.68533268581138396 79 78 0.0000000000000000 1.2127161502064945 80 79 0.68844986769196281 0.0000000000000000 81 80 0.92271262338061255 0.0000000000000000 82 81 0.86706624329676873 0.0000000000000000 83 82 1.0498576790186152 0.0000000000000000 84 83 0.0000000000000000 0.0000000000000000 85 84 0.73867698789744696 0.0000000000000000 86 85 0.84075382752435801 0.0000000000000000 87 86 0.79005016762214764 0.0000000000000000 88 87 0.77428621161408018 0.0000000000000000 89 88 0.0000000000000000 0.0000000000000000 90 89 0.63305806072430315 0.0000000000000000 91 90 0.93948776058346750 0.0000000000000000 92 91 1.2127161502064945 0.0000000000000000 93 92 0.98109660706253266 0.0000000000000000 94 93 0.0000000000000000 0.0000000000000000 95 94 0.78401265530230047 0.0000000000000000 96 95 0.87455684095044839 0.0000000000000000 97 96 1.2686234963229299 0.0000000000000000 98 97 0.74592494251227448 0.0000000000000000 99 98 0.0000000000000000 0.0000000000000000 100 99 0.71589621368616485 0.0000000000000000 101 100 0.84075382752435801 0.0000000000000000 102 101 0.79005016762214764 0.0000000000000000 103 102 0.77428621161408018 0.0000000000000000 104 103 0.0000000000000000 0.0000000000000000 105 104 0.63305806072430315 0.0000000000000000 106 105 0.79851667937097781 0.0000000000000000 107 106 0.75036023117937511 0.0000000000000000 108 107 0.27310878912639885 0.0000000000000000 109 108 0.0000000000000000 0.0000000000000000 110 109 0.56120402227094435 0.0000000000000000 111 110 0.87455684095044839 0.0000000000000000 112 111 1.2686234963229299 0.0000000000000000 113 112 0.74592494251227448 0.0000000000000000 114 113 0.0000000000000000 0.0000000000000000 115 114 0.71589621368616485 0.0000000000000000 116 115 0.84075382752435801 0.0000000000000000 117 116 1.2767816346447423 0.0000000000000000 118 117 0.28755474459148550 0.0000000000000000 119 118 0.0000000000000000 0.0000000000000000 120 119 0.66214866089962809 0.0000000000000000 121 120 0.79851667937097781 0.0000000000000000 122 121 0.75036023117937511 0.0000000000000000 123 122 0.27310878912639885 0.0000000000000000 124 123 0.0000000000000000 0.0000000000000000 125 124 0.56120402227094435 0.0000000000000000 126 125 0.84075382752435801 0.0000000000000000 127 126 0.79005016762214764 0.0000000000000000 128 127 -0.19917672243110918 0.0000000000000000 129 128 0.0000000000000000 0.0000000000000000 130 129 0.59976366750186694 0.0000000000000000 131 130 0.84075382752435801 0.0000000000000000 132 131 1.2767816346447423 0.0000000000000000 133 132 0.28755474459148550 0.0000000000000000 134 133 0.0000000000000000 0.0000000000000000 135 134 0.66214866089962809 0.0000000000000000 136 135 0.87455684095044839 0.0000000000000000 137 136 1.2686234963229299 0.0000000000000000 138 137 -0.14769283033564165 0.0000000000000000 139 138 0.0000000000000000 0.0000000000000000 140 139 0.68533268581138396 0.0000000000000000 141 140 0.84075382752435801 0.0000000000000000 142 141 0.79005016762214764 0.0000000000000000 143 142 -0.19917672243110918 0.0000000000000000 144 143 0.0000000000000000 0.0000000000000000 145 144 0.59976366750186694 0.0000000000000000 146 145 0.92271262338061255 0.0000000000000000 147 146 0.86706624329676873 0.0000000000000000 148 147 -0.41868507162453339 0.0000000000000000 149 148 0.0000000000000000 0.0000000000000000 150 149 0.68844986769196281 0.0000000000000000 151 150 0.87455684095044839 0.0000000000000000 152 151 1.2686234963229299 0.0000000000000000 153 
152 -0.14769283033564165 0.0000000000000000 154 153 0.0000000000000000 0.0000000000000000 155 154 0.68533268581138396 0.0000000000000000 156 155 0.93948776058346750 0.0000000000000000 157 156 1.2127161502064945 0.0000000000000000 158 157 -0.33844913000759452 0.0000000000000000 159 158 0.0000000000000000 0.0000000000000000 160 159 0.73888153309055049 0.0000000000000000 From alexei.matveev+petsc at gmail.com Fri Apr 26 11:12:51 2013 From: alexei.matveev+petsc at gmail.com (Alexei Matveev) Date: Fri, 26 Apr 2013 18:12:51 +0200 Subject: [petsc-users] command line handling in PETSC 3.2, a simple fix for Wheezy? In-Reply-To: <87fvyd4er6.fsf@mcs.anl.gov> References: <87fvyd4er6.fsf@mcs.anl.gov> Message-ID: > behavior, thus I think it's acceptable to backport. > > Consequently, I've cherry-picked this back onto 'maint-3.2'. > > https://bitbucket.org/petsc/petsc/commits/branch/maint-3.2 > > > Dear Debian Science Maintainers, Do you think it is possible to apply this change to Petsc Debian package for Wheezy? I just re-built the package with the fix applied and the regression is gone. I guess a similar effect could be achieved by using the most recent upstream for 3.2 (this would be "old stable" Petsc). I am not familiar with proper procedure so I attach a "git format-patch" output for Debian Science petsc repo for the "wheezy" branch. Alexei $ mpicc -I/usr/include/petsc a.c -lpetsc $ ./a.out without --test option op1: 1 op2: 2 $ ./a.out --test with --test option op1: 1 op2: 2 $ ./a.out --test --op1 42 with --test option op1: 42 op2: 2 $ ./a.out --test --op1 42 --op2 24 with --test option op1: 42 op2: 24 $ ./a.out --op1 42 --test --op2 24 with --test option op1: 42 op2: 24 $ ./a.out --op1 42 --op2 24 without --test option op1: 42 op2: 24 -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: wheezy-git.patch Type: application/octet-stream Size: 4107 bytes Desc: not available URL: From alexei.matveev+petsc at gmail.com Fri Apr 26 11:15:08 2013 From: alexei.matveev+petsc at gmail.com (Alexei Matveev) Date: Fri, 26 Apr 2013 18:15:08 +0200 Subject: [petsc-users] command line handling in PETSC 3.2, a simple fix for Wheezy? In-Reply-To: <87fvyd4er6.fsf@mcs.anl.gov> References: <87fvyd4er6.fsf@mcs.anl.gov> Message-ID: On 26 April 2013 15:12, Jed Brown wrote: > > > Consequently, I've cherry-picked this back onto 'maint-3.2'. > > https://bitbucket.org/petsc/petsc/commits/branch/maint-3.2 > > Please roll your package from this branch, assuming that using petsc-3.3 > is not an option for Wheezy. > Thanks a lot! Alexei -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Fri Apr 26 13:39:46 2013 From: bsmith at mcs.anl.gov (Barry Smith) Date: Fri, 26 Apr 2013 13:39:46 -0500 Subject: [petsc-users] unexpected ordering when VecSetValues set multiples values In-Reply-To: References: Message-ID: <6C9A6BAE-E7B8-4498-8D77-DD23C81334EC@mcs.anl.gov> On Apr 26, 2013, at 10:20 AM, Matteo Parsani wrote: > Hello, > I have some problem when I try to set multiple values to a PETSc vector that I will use later on with SNES. I am using Fortran 90. > Here the problem and two fixes that however are not so good for performances reasons. The code is very simple. > > Standard approach that does not work correctly: (I am probably doing something wrong) > > m = 1 > ! Loop over all elements > do ielem = elem_low, elem_high > ! 
Loop over all nodes in the element > do inode = 1, nodesperelem > !Loop over all equations > do ieq = 1, nequations > ! Add element to x_vec_local > x_vec_local(m) = ug(ieq,inode,ielem) > ! Add element to index list > ind(m) = (elem_low-1)*nodesperelem*nequations+m-1 > ! Update m index > m = m+1 > end do > end do > end do > > ! Set values in the portion of the vector owned by the process > call VecSetValues(x_vec_in,len_local,index_list,x_vec_local,INSERT_VALUES,& What is len_local and index_list? They do not appear in the loop above. Shouldn't you be passing m-1 for the length and ind for the indices? I would first print out the all the values in your input to VecSetValues() and make sure they are correct. Barry > & ierr_local) > > ! Assemble initial guess > call VecAssemblyBegin(x_vec_in,ierr_local) > call VecAssemblyEnd(x_vec_in,ierr_local) > > Then I print my expected values and the values contained in the PETSc vector to a file. See attachment. I am running in serial for the moment BUT strangely if you look at the file I have attached the first 79 DOFs values have a wrong ordering and the remaining 80 are zero. > > > 1st approach: set just one value at the time inside the loop. > m = 1 > ! Loop over all elements > do ielem = elem_low, elem_high > ! Loop over all nodes in the element > do inode = 1, nodesperelem > !Loop over all equations > do ieq = 1, nequations > ! Add element to x_vec_local > value = ug(ieq,inode,ielem) > ! Add element to index list > ind = (elem_low-1)*nodesperelem*nequations+m-1 > call VecSetValues(x_vec_in,1,ind,value,INSERT_VALUES,& > & ierr_local) > ! Update m index > m = m+1 > end do > end do > end do > > > This works fine. As you can see I am using the same expression used in the previous loop to compute the index of the element that I have to add in the x_vec_in, i.e. > ind = (elem_low-1)*nodesperelem*nequations+m-1 > > Thus I cannot see which is the problem. > > 2nd approach: get the pointer to the local part of the global vector and use it to set the values in the global vector > > m = 1 > ! Loop over all elements > do ielem = elem_low, elem_high > ! Loop over all nodes in the element > do inode = 1, nodesperelem > !Loop over all equations > do ieq = 1, nequations > ! Add element to x_vec_local > tmp(m) = ug(ieq,inode,ielem) > ! Update m index > m = m+1 > end do > end do > end do > > > This works fine too. > > > Jut to be complete. I use the following two approaches to view the vector: > > call VecView(x_vec_in,PETSC_VIEWER_STDOUT_WORLD,ierr_local) > > > and > > call VecGetArrayF90(x_vec_in,tmp,ierr_local) > > > m = 1 > ! Loop over all elements > do ielem = elem_low, elem_high > ! Loop over all nodes in the element > do inode = 1, nodesperelem > !Loop over all equations > do ieq = 1, nequations > write(*,*) m,index_list(m),x_vec_local(m),tmp(m) > ! Update m index > m = m+1 > end do > end do > end do > > > Thank you. > > > -- > Matteo > From bsmith at mcs.anl.gov Fri Apr 26 14:23:50 2013 From: bsmith at mcs.anl.gov (Barry Smith) Date: Fri, 26 Apr 2013 14:23:50 -0500 Subject: [petsc-users] unexpected ordering when VecSetValues set multiples values In-Reply-To: References: <6C9A6BAE-E7B8-4498-8D77-DD23C81334EC@mcs.anl.gov> Message-ID: Shouldn't matter that it is called from Fortran we do it all the time. Does it work if the final m is not very large? You may need to run in the debugger and follow the values through the code to see why they don't get to where they belong. Barry Could also be a buggy fortran compiler. 
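For reference, the multi-value pattern being debugged here, written out as a minimal sketch against the petsc-3.3-era Fortran interface (untested; the subroutine name and the istart/nloc/ug arguments are invented for illustration, not taken from the code in this thread). The count, the index array and the value array must all be declared with PETSc's kinds -- PetscInt and PetscScalar -- and the indices are global and zero-based; a kind mismatch in any of those arguments is one unconfirmed explanation for the scrambled values reported in this thread.

      subroutine set_local_block(x_vec_in, istart, nloc, ug, ierr)
      implicit none
#include <finclude/petscsys.h>
#include <finclude/petscvec.h>
      Vec x_vec_in
      PetscInt istart, nloc       ! first owned global index, local length
      PetscScalar ug(nloc)        ! values to insert (hypothetical input)
      PetscErrorCode ierr
      PetscInt i
      PetscInt idx(nloc)
      PetscScalar vals(nloc)

      do i = 1, nloc
         idx(i)  = istart + i - 1 ! global, zero-based indices
         vals(i) = ug(i)
      end do

      ! one call for the whole locally owned block, then assemble
      call VecSetValues(x_vec_in, nloc, idx, vals, INSERT_VALUES, ierr)
      call VecAssemblyBegin(x_vec_in, ierr)
      call VecAssemblyEnd(x_vec_in, ierr)
      end subroutine set_local_block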
On Apr 26, 2013, at 2:06 PM, Matteo Parsani wrote: > Hello Barry, > sorry I modified few things just before to write the mail. > The correct loop with the correct name of the variables is the following > > ! Number of DOFs owned by this process > len_local = (elem_high-elem_low+1)*nodesperelem*nequations > > ! Allocate memory for x_vec_local and index_list > allocate(x_vec_local(len_local)) > allocate(index_list(len_local)) > > > m = 1 > ! Loop over all elements > do ielem = elem_low, elem_high > ! Loop over all nodes in the element > do inode = 1, nodesperelem > !Loop over all equations > do ieq = 1, nequations > ! Add element to x_vec_local > x_vec_local(m) = ug(ieq,inode,ielem) > ! Add element to index list > index_list(m) = (elem_low-1)*nodesperelem*nequations+m-1 > ! Update m index > m = m+1 > end do > end do > end do > > ! HERE I HAVE PRINTED x_vec_local, ug and index_list > > ! Set values in the portion of the vector owned by the process > call VecSetValues(x_vec_in,m-1,index_list,x_vec_local,INSERT_VALUES,& > & ierr_local) > > ! Assemble initial guess > call VecAssemblyBegin(x_vec_in,ierr_local) > call VecAssemblyEnd(x_vec_in,ierr_local) > > > > I have printed the values and the indices I want to pass to VecSetValues() and they are correct. > > I also printed the values after VecSetValues() has been called and they are wrong. > > The attachment shows that. > > > Could it be a problem of VecSetValues() + F90 when more than 1 elements is set? > > I order to debug my the code I am running just with 1 processor. Thus the process owns all the DOFs. > > Thank you. > > > > > > > > > > > > On Fri, Apr 26, 2013 at 2:39 PM, Barry Smith wrote: > > On Apr 26, 2013, at 10:20 AM, Matteo Parsani wrote: > > > Hello, > > I have some problem when I try to set multiple values to a PETSc vector that I will use later on with SNES. I am using Fortran 90. > > Here the problem and two fixes that however are not so good for performances reasons. The code is very simple. > > > > Standard approach that does not work correctly: (I am probably doing something wrong) > > > > m = 1 > > ! Loop over all elements > > do ielem = elem_low, elem_high > > ! Loop over all nodes in the element > > do inode = 1, nodesperelem > > !Loop over all equations > > do ieq = 1, nequations > > ! Add element to x_vec_local > > x_vec_local(m) = ug(ieq,inode,ielem) > > ! Add element to index list > > ind(m) = (elem_low-1)*nodesperelem*nequations+m-1 > > ! Update m index > > m = m+1 > > end do > > end do > > end do > > > > ! Set values in the portion of the vector owned by the process > > call VecSetValues(x_vec_in,len_local,index_list,x_vec_local,INSERT_VALUES,& > > What is len_local and index_list? They do not appear in the loop above. Shouldn't you be passing m-1 for the length and ind for the indices? > > I would first print out the all the values in your input to VecSetValues() and make sure they are correct. > > Barry > > > & ierr_local) > > > > ! Assemble initial guess > > call VecAssemblyBegin(x_vec_in,ierr_local) > > call VecAssemblyEnd(x_vec_in,ierr_local) > > > > Then I print my expected values and the values contained in the PETSc vector to a file. See attachment. I am running in serial for the moment BUT strangely if you look at the file I have attached the first 79 DOFs values have a wrong ordering and the remaining 80 are zero. > > > > > > 1st approach: set just one value at the time inside the loop. > > m = 1 > > ! Loop over all elements > > do ielem = elem_low, elem_high > > ! 
Loop over all nodes in the element > > do inode = 1, nodesperelem > > !Loop over all equations > > do ieq = 1, nequations > > ! Add element to x_vec_local > > value = ug(ieq,inode,ielem) > > ! Add element to index list > > ind = (elem_low-1)*nodesperelem*nequations+m-1 > > call VecSetValues(x_vec_in,1,ind,value,INSERT_VALUES,& > > & ierr_local) > > ! Update m index > > m = m+1 > > end do > > end do > > end do > > > > > > This works fine. As you can see I am using the same expression used in the previous loop to compute the index of the element that I have to add in the x_vec_in, i.e. > > ind = (elem_low-1)*nodesperelem*nequations+m-1 > > > > Thus I cannot see which is the problem. > > > > 2nd approach: get the pointer to the local part of the global vector and use it to set the values in the global vector > > > > m = 1 > > ! Loop over all elements > > do ielem = elem_low, elem_high > > ! Loop over all nodes in the element > > do inode = 1, nodesperelem > > !Loop over all equations > > do ieq = 1, nequations > > ! Add element to x_vec_local > > tmp(m) = ug(ieq,inode,ielem) > > ! Update m index > > m = m+1 > > end do > > end do > > end do > > > > > > This works fine too. > > > > > > Jut to be complete. I use the following two approaches to view the vector: > > > > call VecView(x_vec_in,PETSC_VIEWER_STDOUT_WORLD,ierr_local) > > > > > > and > > > > call VecGetArrayF90(x_vec_in,tmp,ierr_local) > > > > > > m = 1 > > ! Loop over all elements > > do ielem = elem_low, elem_high > > ! Loop over all nodes in the element > > do inode = 1, nodesperelem > > !Loop over all equations > > do ieq = 1, nequations > > write(*,*) m,index_list(m),x_vec_local(m),tmp(m) > > ! Update m index > > m = m+1 > > end do > > end do > > end do > > > > > > Thank you. > > > > > > -- > > Matteo > > > > > > > -- > Matteo > From gladk at debian.org Fri Apr 26 14:23:57 2013 From: gladk at debian.org (Anton Gladky) Date: Fri, 26 Apr 2013 21:23:57 +0200 Subject: [petsc-users] command line handling in PETSC 3.2, a simple fix for Wheezy? In-Reply-To: References: <87fvyd4er6.fsf@mcs.anl.gov> Message-ID: <517AD44D.7000002@debian.org> Hi Alexei and Jed, thanks for triaging and fixing that. I have committed the fix into package git-repo [1], but it will unlikely be approved by Release-Team, as it is not a critical. We can try to upload the fix into the backports after the Wheezy will be released. One of our team-members has done work on packaging 3.3-version, so that will probably be soon uploaded as well. Best regards, Anton [1] http://anonscm.debian.org/gitweb/?p=debian-science/packages/petsc.git;a=commit;h=6d043c23c3eea30e327543a5189a8602d24467ec On 04/26/2013 06:12 PM, Alexei Matveev wrote: > Dear Debian Science Maintainers, > > Do you think it is possible to apply this change to Petsc Debian > package for Wheezy? I just re-built the package with the fix > applied and the regression is gone. > > I guess a similar effect could be achieved by using the most recent > upstream for 3.2 (this would be "old stable" Petsc). > > I am not familiar with proper procedure so I attach a "git format-patch" > output for Debian Science petsc repo for the "wheezy" branch. > > Alexei -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 897 bytes Desc: OpenPGP digital signature URL: From dharmareddy84 at gmail.com Fri Apr 26 17:39:37 2013 From: dharmareddy84 at gmail.com (Dharmendar Reddy) Date: Fri, 26 Apr 2013 17:39:37 -0500 Subject: [petsc-users] IS map for sub mesh In-Reply-To: References: Message-ID: Hello, I was wondering if you have had time to look into this. Thanks Reddy On Thu, Apr 25, 2013 at 3:57 PM, Matthew Knepley wrote: > On Thu, Apr 25, 2013 at 4:48 PM, Dharmendar Reddy > wrote: > >> >> >> >> On Thu, Apr 25, 2013 at 3:31 PM, Matthew Knepley wrote: >> >>> On Thu, Apr 25, 2013 at 3:35 PM, Dharmendar Reddy < >>> dharmareddy84 at gmail.com> wrote: >>> >>>> Hello, >>>> I need to access the map from points in subdm to points in dm. Can you >>>> please provide fortran binding for >>>> >>>> DMPlexCreateSubpointIS(DM dm, IS *subpointIS) >>>> >>>> Pushed to next. >>> >>> >>>> Also, i was thinking it may be of use to have interface like this.. >>>> >>>> DMPlexCreateSubpointIS(DM dm, PetscInt pointDimInSubdm, IS *subpointIS) >>>> >>>> this way i can have map from say (dim)-cells in subdm to corresponding >>>> (dim)-cells in dm. >>>> >>> >>> The intention here is to use DMPlexGetSubpointMap(). >>> >>> So, i should do the calls below ? >> >> DMPlexGetSubpointMap(DM dm, DMLabel subpointMap) >> >> DMLabelGetStratumIS(subpointMap,depth,IS) >> >> >> I have had trouble using DMLabel in my fortran code earlier. >> I can give it a try again, Is there a fortran binding for above functions ? >> >> > Hmm, I have not tested it. I will put it in an example. > > Thanks, > > Matt > > >> Thanks >> Reddy >> >> >> >> >> >>> Matt >>> >>> >>>> >>>> thanks >>>> Reddy >>>> >>>> -- >>>> ----------------------------------------------------- >>>> Dharmendar Reddy Palle >>>> Graduate Student >>>> Microelectronics Research center, >>>> University of Texas at Austin, >>>> 10100 Burnet Road, Bldg. 160 >>>> MER 2.608F, TX 78758-4445 >>>> e-mail: dharmareddy84 at gmail.com >>>> Phone: +1-512-350-9082 >>>> United States of America. >>>> Homepage: https://webspace.utexas.edu/~dpr342 >>>> >>> >>> >>> >>> -- >>> What most experimenters take for granted before they begin their >>> experiments is infinitely more interesting than any results to which their >>> experiments lead. >>> -- Norbert Wiener >>> >> >> >> >> -- >> ----------------------------------------------------- >> Dharmendar Reddy Palle >> Graduate Student >> Microelectronics Research center, >> University of Texas at Austin, >> 10100 Burnet Road, Bldg. 160 >> MER 2.608F, TX 78758-4445 >> e-mail: dharmareddy84 at gmail.com >> Phone: +1-512-350-9082 >> United States of America. >> Homepage: https://webspace.utexas.edu/~dpr342 >> > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > -- ----------------------------------------------------- Dharmendar Reddy Palle Graduate Student Microelectronics Research center, University of Texas at Austin, 10100 Burnet Road, Bldg. 160 MER 2.608F, TX 78758-4445 e-mail: dharmareddy84 at gmail.com Phone: +1-512-350-9082 United States of America. Homepage: https://webspace.utexas.edu/~dpr342 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From dharmareddy84 at gmail.com Sat Apr 27 20:23:04 2013 From: dharmareddy84 at gmail.com (Dharmendar Reddy) Date: Sat, 27 Apr 2013 20:23:04 -0500 Subject: [petsc-users] DMPlex Submesh Message-ID: Hello, I was testing the plexsubmesh functionality. I created a 1D submesh from a 2D mesh. The 2D mesh is a square mesh with uniform x-y grid. x [0,L] and y [0,L] numX = 10, numY = 91 The 1D mesh is along y axis at second x-grid point. when i call a DMView on submesh, i get the follwoing: Mesh in 1 dimensions: 0-cells: 91 1-cells: 90 Labels: Boundary: 1 strata of sizes (2) depth: 3 strata of sizes (91, 90, 180) I was expecting : depth: 2 strata of sizes (91, 90) Looks like the submesh has the information of 2-cells of the parent mesh. Is this a default behavior ? Thanks Reddy -- ----------------------------------------------------- Dharmendar Reddy Palle Graduate Student Microelectronics Research center, University of Texas at Austin, 10100 Burnet Road, Bldg. 160 MER 2.608F, TX 78758-4445 e-mail: dharmareddy84 at gmail.com Phone: +1-512-350-9082 United States of America. Homepage: https://webspace.utexas.edu/~dpr342 -------------- next part -------------- An HTML attachment was scrubbed... URL: From armiuswu at gmail.com Sun Apr 28 01:14:58 2013 From: armiuswu at gmail.com (Panruo Wu) Date: Sat, 27 Apr 2013 23:14:58 -0700 Subject: [petsc-users] matrix distribution Message-ID: Hello, I have a question about the matrix distribution in Petsc. Can I define the distribution pattern as the output of graph partition software like METIS? Pointers to documentation/code about matrix distribution in Petsc would be very helpful. Thanks, Panruo -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Sun Apr 28 07:26:47 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Sun, 28 Apr 2013 07:26:47 -0500 Subject: [petsc-users] matrix distribution In-Reply-To: References: Message-ID: <87ppxe2648.fsf@mcs.anl.gov> Panruo Wu writes: > Hello, > > I have a question about the matrix distribution in Petsc. > Can I define the distribution pattern as the output > of graph partition software like METIS? Pointers > to documentation/code about matrix distribution in Petsc > would be very helpful. http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatCreateMPIAdj.html http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/MatOrderings/MatPartitioningCreate.html After applying the partitioning, relabel your mesh (or whatever is the source of your problem), migrate that data, and create a new matrix using the new distribution. From armiuswu at gmail.com Sun Apr 28 16:16:34 2013 From: armiuswu at gmail.com (Panruo Wu) Date: Sun, 28 Apr 2013 14:16:34 -0700 Subject: [petsc-users] matrix distribution In-Reply-To: <87ppxe2648.fsf@mcs.anl.gov> References: <87ppxe2648.fsf@mcs.anl.gov> Message-ID: Thanks Jed! I understand that MatCreateMPIAdj takes adjacency information and partition the matrix for me; what if I don't want it to partition for me, that I already have a particular partitioning? I guess my question is, how do I tell Petsc to use my own partitioning? Thanks, Panruo On Sun, Apr 28, 2013 at 5:26 AM, Jed Brown wrote: > Panruo Wu writes: > > > Hello, > > > > I have a question about the matrix distribution in Petsc. > > Can I define the distribution pattern as the output > > of graph partition software like METIS? Pointers > > to documentation/code about matrix distribution in Petsc > > would be very helpful. 
> > > http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatCreateMPIAdj.html > > http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/MatOrderings/MatPartitioningCreate.html > > After applying the partitioning, relabel your mesh (or whatever is the > source of your problem), migrate that data, and create a new matrix > using the new distribution. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Sun Apr 28 16:24:49 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Sun, 28 Apr 2013 16:24:49 -0500 Subject: [petsc-users] matrix distribution In-Reply-To: References: <87ppxe2648.fsf@mcs.anl.gov> Message-ID: Order your unknowns appropriately and set the local sizes to match your distribution. On Apr 28, 2013 4:16 PM, "Panruo Wu" wrote: > Thanks Jed! > > I understand that MatCreateMPIAdj takes adjacency information > and partition the matrix for me; what if I don't want it to partition > for me, that I already have a particular partitioning? > > I guess my question is, how do I tell Petsc to use my own partitioning? > > Thanks, > Panruo > > > On Sun, Apr 28, 2013 at 5:26 AM, Jed Brown wrote: > >> Panruo Wu writes: >> >> > Hello, >> > >> > I have a question about the matrix distribution in Petsc. >> > Can I define the distribution pattern as the output >> > of graph partition software like METIS? Pointers >> > to documentation/code about matrix distribution in Petsc >> > would be very helpful. >> >> >> http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatCreateMPIAdj.html >> >> http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/MatOrderings/MatPartitioningCreate.html >> >> After applying the partitioning, relabel your mesh (or whatever is the >> source of your problem), migrate that data, and create a new matrix >> using the new distribution. >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From armiuswu at gmail.com Sun Apr 28 16:27:00 2013 From: armiuswu at gmail.com (Panruo Wu) Date: Sun, 28 Apr 2013 14:27:00 -0700 Subject: [petsc-users] matrix distribution In-Reply-To: References: <87ppxe2648.fsf@mcs.anl.gov> Message-ID: I get it. Thanks! Regards, Panruo On Sun, Apr 28, 2013 at 2:24 PM, Jed Brown wrote: > Order your unknowns appropriately and set the local sizes to match your > distribution. > On Apr 28, 2013 4:16 PM, "Panruo Wu" wrote: > >> Thanks Jed! >> >> I understand that MatCreateMPIAdj takes adjacency information >> and partition the matrix for me; what if I don't want it to partition >> for me, that I already have a particular partitioning? >> >> I guess my question is, how do I tell Petsc to use my own partitioning? >> >> Thanks, >> Panruo >> >> >> On Sun, Apr 28, 2013 at 5:26 AM, Jed Brown wrote: >> >>> Panruo Wu writes: >>> >>> > Hello, >>> > >>> > I have a question about the matrix distribution in Petsc. >>> > Can I define the distribution pattern as the output >>> > of graph partition software like METIS? Pointers >>> > to documentation/code about matrix distribution in Petsc >>> > would be very helpful. >>> >>> >>> http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatCreateMPIAdj.html >>> >>> http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/MatOrderings/MatPartitioningCreate.html >>> >>> After applying the partitioning, relabel your mesh (or whatever is the >>> source of your problem), migrate that data, and create a new matrix >>> using the new distribution. 
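To make the follow-up suggestion -- order the unknowns to match the precomputed partition and set the local sizes -- concrete, a fragment (untested, petsc-3.3-era Fortran interface, assuming the usual finclude headers are already included; nlocal is a placeholder for whatever the external partitioner assigned to this rank). PETSc gives each rank a contiguous block of global rows in rank order, so the unknowns must already be renumbered accordingly:

      Mat A
      PetscInt nlocal             ! rows owned by this rank, e.g. from METIS
      PetscErrorCode ierr

      call MatCreate(PETSC_COMM_WORLD, A, ierr)
      call MatSetSizes(A, nlocal, nlocal, PETSC_DETERMINE, PETSC_DETERMINE, ierr)
      call MatSetFromOptions(A, ierr)
      ! ... preallocate, then MatSetValues with the renumbered global indices ...
      call MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY, ierr)
      call MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY, ierr)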
>>> >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From gaetank at gmail.com Sun Apr 28 18:01:29 2013 From: gaetank at gmail.com (Gaetan Kenway) Date: Sun, 28 Apr 2013 19:01:29 -0400 Subject: [petsc-users] Help with ML/BoomerAMG Message-ID: Hello I am the process of trying out some of the multigrid functionality in PETSc and not having much luck. The simple system I am trying to solve is adjoint system of equations resulting from the finite volume discretization of the Euler equation on a 147,456 cell mesh resulting in a linear system of equations of size 5*147,456=737280. All of the test are done on a single processor and use petsc-3.2-p7. My current technique for solving this is to use following options: -matload_block_size 5 -mat_type seqbaij -ksp_type gmres -ksp_max_it 100 -ksp_gmres_restart 50 -ksp_monitor -pc_type asm -pc_asm_overlap 1 -sub_pc_factor_mat_ordering_type rcm -sub_pc_factor_levels 1 Which results in ~1e-6 convergence in ~50 iterations. Next I naively tried the following options: -mat_type seqaij -ksp_type gmres -ksp_max_it 100 -ksp_gmres_restart 50 -ksp_monitor -ksp_view -pc_type ml The result of the ksp_monitor and ksp_view is: 0 KSP Residual norm 7.366926114851e+70 1 KSP Residual norm 1.744597669120e+61 KSP Object: 1 MPI processes type: gmres GMRES: restart=50, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement GMRES: happy breakdown tolerance 1e-30 maximum iterations=100, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using PRECONDITIONED norm type for convergence test PC Object: 1 MPI processes type: ml MG: type is MULTIPLICATIVE, levels=5 cycles=v Cycles per PCApply=1 Using Galerkin computed coarse grid matrices Coarse grid solver -- level ------------------------------- KSP Object: (mg_coarse_) 1 MPI processes type: preonly maximum iterations=1, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using NONE norm type for convergence test PC Object: (mg_coarse_) 1 MPI processes type: lu LU: out-of-place factorization tolerance for zero pivot 1e-12 matrix ordering: nd factor fill ratio given 5, needed 1 Factored matrix follows: Matrix Object: 1 MPI processes type: seqaij rows=1, cols=1 package used to perform factorization: petsc total: nonzeros=1, allocated nonzeros=1 total number of mallocs used during MatSetValues calls =0 not using I-node routines linear system matrix = precond matrix: Matrix Object: 1 MPI processes type: seqaij rows=1, cols=1 total: nonzeros=1, allocated nonzeros=1 total number of mallocs used during MatSetValues calls =0 not using I-node routines Down solver (pre-smoother) on level 1 ------------------------------- KSP Object: (mg_levels_1_) 1 MPI processes type: richardson Richardson: damping factor=1 maximum iterations=1 tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using nonzero initial guess using PRECONDITIONED norm type for convergence test PC Object: (mg_levels_1_) 1 MPI processes type: sor SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1 linear system matrix = precond matrix: Matrix Object: 1 MPI processes type: seqaij rows=2, cols=2 total: nonzeros=4, allocated nonzeros=4 total number of mallocs used during MatSetValues calls =0 using I-node routines: found 1 nodes, limit used is 5 Up solver (post-smoother) same as down solver (pre-smoother) Down solver (pre-smoother) on level 2 
------------------------------- KSP Object: (mg_levels_2_) 1 MPI processes type: richardson Richardson: damping factor=1 maximum iterations=1 tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using nonzero initial guess using PRECONDITIONED norm type for convergence test PC Object: (mg_levels_2_) 1 MPI processes type: sor SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1 linear system matrix = precond matrix: Matrix Object: 1 MPI processes type: seqaij rows=93, cols=93 total: nonzeros=3861, allocated nonzeros=3861 total number of mallocs used during MatSetValues calls =0 not using I-node routines Up solver (post-smoother) same as down solver (pre-smoother) Down solver (pre-smoother) on level 3 ------------------------------- KSP Object: (mg_levels_3_) 1 MPI processes type: richardson Richardson: damping factor=1 maximum iterations=1 tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using nonzero initial guess using PRECONDITIONED norm type for convergence test PC Object: (mg_levels_3_) 1 MPI processes type: sor SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1 linear system matrix = precond matrix: Matrix Object: 1 MPI processes type: seqaij rows=8303, cols=8303 total: nonzeros=697606, allocated nonzeros=697606 total number of mallocs used during MatSetValues calls =0 not using I-node routines Up solver (post-smoother) same as down solver (pre-smoother) Down solver (pre-smoother) on level 4 ------------------------------- KSP Object: (mg_levels_4_) 1 MPI processes type: richardson Richardson: damping factor=1 maximum iterations=1 tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using nonzero initial guess using PRECONDITIONED norm type for convergence test PC Object: (mg_levels_4_) 1 MPI processes type: sor SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1 linear system matrix = precond matrix: Matrix Object: 1 MPI processes type: seqaij rows=737280, cols=737280 total: nonzeros=46288425, allocated nonzeros=46288425 total number of mallocs used during MatSetValues calls =0 using I-node routines: found 147456 nodes, limit used is 5 Up solver (post-smoother) same as down solver (pre-smoother) linear system matrix = precond matrix: Matrix Object: 1 MPI processes type: seqaij rows=737280, cols=737280 total: nonzeros=46288425, allocated nonzeros=46288425 total number of mallocs used during MatSetValues calls =0 using I-node routines: found 147456 nodes, limit used is 5 Time: 11.8599750995636 Actual res norm: 0.132693075876745 It stopped after the second iteration because it "converged" over 6 orders of magnitude. Of course, it didn't actually converge and the actual residual remained unchanged. Given that that the coarse solver is LU and all the smoothers are KSPRichardson with 1 iteration and SOR smoothing, I should be able to use GMRES. I also tried it with -ksp_type fgrmes and all the other options the same. This time, it doesn't blowup, but also doesn't make significant progress. The actual residual at end of 100 iterations is larger than the real initial residual. (The ksp_view for this is identical except for the change is ksp type to fgmres and the use of right preconditioning. 
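With left preconditioning, which the gmres run above uses, the monitor reports the preconditioned residual norm; that is why the run can report a large drop and stop after two iterations while the true residual is untouched. A quick way to watch both norms at once -- a diagnostic sketch using petsc-3.2 option names, not a cure for the preconditioner itself -- is to add

   -ksp_monitor_true_residual -ksp_converged_reason

to the options. The fgmres run uses right preconditioning with an unpreconditioned norm, so its monitor already reflects the true residual; its history follows.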
0 KSP Residual norm 1.326930758772e-01 10 KSP Residual norm 1.326906693969e-01 20 KSP Residual norm 1.326879007861e-01 30 KSP Residual norm 1.326851758083e-01 40 KSP Residual norm 1.326824509983e-01 50 KSP Residual norm 1.333174206652e-01 60 KSP Residual norm 1.279860083890e-01 70 KSP Residual norm 1.227523658429e-01 80 KSP Residual norm 1.181123717224e-01 90 KSP Residual norm 1.139616660987e-01 100 KSP Residual norm 1.102198901480e-01 Time: 67.6706080436707 Actual res norm: 0.443508987516716 I then started trying some of the ML options. Specifically, -pc_mg_smoothup 3 -pc_mg_smoothdown 3 Now it returns a floating point exception: 0 KSP Residual norm 1.326930758772e-01 [0]PETSC ERROR: --------------------- Error Message ------------------------------------ [0]PETSC ERROR: Floating point exception! [0]PETSC ERROR: Infinite or not-a-number generated in norm! [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: Petsc Release Version 3.2.0, Patch 7, Thu Mar 15 09:30:51 CDT 2012 [0]PETSC ERROR: See docs/changes/index.html for recent updates. [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. [0]PETSC ERROR: See docs/index.html for manual pages. [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: ./main on a real-opt named mica by kenway Sun Apr 28 18:32:22 2013 [0]PETSC ERROR: Libraries linked from /home/kenway/Downloads/petsc-3.2-p7/real-opt/lib [0]PETSC ERROR: Configure run at Sun Apr 28 15:16:05 2013 [0]PETSC ERROR: Configure options --with-shared-libraries --download-superlu_dist=yes --download-parmetis=yes --with-fortran-interfaces=1 --with-debuggig=no -with-scalar-type=real --PETSC_ARCH=real-opt --download-hypre=yes --download-spai=yes --download-sundials=yes --download-ml=yes [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: VecNorm() line 167 in src/vec/vec/interface/rvector.c [0]PETSC ERROR: VecNormalize() line 261 in src/vec/vec/interface/rvector.c [0]PETSC ERROR: GMREScycle() line 128 in src/ksp/ksp/impls/gmres/gmres.c [0]PETSC ERROR: KSPSolve_GMRES() line 231 in src/ksp/ksp/impls/gmres/gmres.c [0]PETSC ERROR: KSPSolve() line 423 in src/ksp/ksp/interface/itfunc.c [0]PETSC ERROR: PCMGMCycle_Private() line 55 in src/ksp/pc/impls/mg/mg.c [0]PETSC ERROR: PCMGMCycle_Private() line 49 in src/ksp/pc/impls/mg/mg.c [0]PETSC ERROR: PCMGMCycle_Private() line 49 in src/ksp/pc/impls/mg/mg.c [0]PETSC ERROR: PCMGMCycle_Private() line 49 in src/ksp/pc/impls/mg/mg.c [0]PETSC ERROR: PCApply_MG() line 320 in src/ksp/pc/impls/mg/mg.c [0]PETSC ERROR: PCApply() line 383 in src/ksp/pc/interface/precon.c [0]PETSC ERROR: FGMREScycle() line 174 in src/ksp/ksp/impls/gmres/fgmres/fgmres.c [0]PETSC ERROR: KSPSolve_FGMRES() line 299 in src/ksp/ksp/impls/gmres/fgmres/fgmres.c [0]PETSC ERROR: KSPSolve() line 423 in src/ksp/ksp/interface/itfunc.c Time: 20.1171851158142 Actual res norm: 0.132693075877161 So I tried running with -pc_mg_smoothup 1 -pc_mg_smoothdown 1 This runs, but somehow does not result in the same sequence of KSP objects on the various levels. 
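The ksp_view for this run (below) shows why: with -pc_mg_smoothup and -pc_mg_smoothdown set, the down-smoothers keep ML's Richardson/SOR configuration but the up-smoothers appear with the default GMRES/ILU settings, so the pre- and post-smoothers are no longer the same solver. If the intent is simply more smoothing per level with the same symmetric smoother on the way down and up, one alternative worth trying (a sketch with petsc-3.2 option names, untested on this problem) is to leave smoothup/smoothdown alone and instead set

   -mg_levels_ksp_type richardson -mg_levels_pc_type sor -mg_levels_ksp_max_it 3

which, judging from the prefixes in the ksp_view output, is picked up by the smoothers on every level. The residual history and ksp_view for the -pc_mg_smoothup 1 -pc_mg_smoothdown 1 run follow.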
0 KSP Residual norm 1.326930758772e-01 10 KSP Residual norm 1.310779571550e-01 20 KSP Residual norm 1.290623886203e-01 30 KSP Residual norm 1.271370286802e-01 40 KSP Residual norm 1.252953431171e-01 50 KSP Residual norm 3.224565651498e-01 60 KSP Residual norm 2.974317106914e-01 70 KSP Residual norm 2.708573182063e-01 80 KSP Residual norm 2.503317725548e-01 90 KSP Residual norm 2.338612668565e-01 100 KSP Residual norm 2.202653375200e-01 KSP Object: 1 MPI processes type: fgmres GMRES: restart=50, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement GMRES: happy breakdown tolerance 1e-30 maximum iterations=100, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000 right preconditioning using UNPRECONDITIONED norm type for convergence test PC Object: 1 MPI processes type: ml MG: type is MULTIPLICATIVE, levels=5 cycles=v Cycles per PCApply=1 Using Galerkin computed coarse grid matrices Coarse grid solver -- level ------------------------------- KSP Object: (mg_coarse_) 1 MPI processes type: preonly maximum iterations=1, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using NONE norm type for convergence test PC Object: (mg_coarse_) 1 MPI processes type: lu LU: out-of-place factorization tolerance for zero pivot 1e-12 matrix ordering: nd factor fill ratio given 5, needed 1 Factored matrix follows: Matrix Object: 1 MPI processes type: seqaij rows=1, cols=1 package used to perform factorization: petsc total: nonzeros=1, allocated nonzeros=1 total number of mallocs used during MatSetValues calls =0 not using I-node routines linear system matrix = precond matrix: Matrix Object: 1 MPI processes type: seqaij rows=1, cols=1 total: nonzeros=1, allocated nonzeros=1 total number of mallocs used during MatSetValues calls =0 not using I-node routines Down solver (pre-smoother) on level 1 ------------------------------- KSP Object: (mg_levels_1_) 1 MPI processes type: richardson Richardson: damping factor=1 maximum iterations=1, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using PRECONDITIONED norm type for convergence test PC Object: (mg_levels_1_) 1 MPI processes type: sor SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1 linear system matrix = precond matrix: Matrix Object: 1 MPI processes type: seqaij rows=2, cols=2 total: nonzeros=4, allocated nonzeros=4 total number of mallocs used during MatSetValues calls =0 using I-node routines: found 1 nodes, limit used is 5 Up solver (post-smoother) on level 1 ------------------------------- KSP Object: (mg_levels_1_) 1 MPI processes type: gmres GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement GMRES: happy breakdown tolerance 1e-30 maximum iterations=1 tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using nonzero initial guess using PRECONDITIONED norm type for convergence test PC Object: (mg_levels_1_) 1 MPI processes type: ilu ILU: out-of-place factorization 0 levels of fill tolerance for zero pivot 1e-12 using diagonal shift to prevent zero pivot matrix ordering: natural factor fill ratio given 1, needed 1 Factored matrix follows: Matrix Object: 1 MPI processes type: seqaij rows=2, cols=2 package used to perform factorization: petsc total: nonzeros=4, allocated nonzeros=4 total number of mallocs used during MatSetValues calls =0 using I-node routines: found 1 nodes, 
limit used is 5 linear system matrix = precond matrix: Matrix Object: 1 MPI processes type: seqaij rows=2, cols=2 total: nonzeros=4, allocated nonzeros=4 total number of mallocs used during MatSetValues calls =0 using I-node routines: found 1 nodes, limit used is 5 Down solver (pre-smoother) on level 2 ------------------------------- KSP Object: (mg_levels_2_) 1 MPI processes type: richardson Richardson: damping factor=1 maximum iterations=1, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using PRECONDITIONED norm type for convergence test PC Object: (mg_levels_2_) 1 MPI processes type: sor SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1 linear system matrix = precond matrix: Matrix Object: 1 MPI processes type: seqaij rows=93, cols=93 total: nonzeros=3861, allocated nonzeros=3861 total number of mallocs used during MatSetValues calls =0 not using I-node routines Up solver (post-smoother) on level 2 ------------------------------- KSP Object: (mg_levels_2_) 1 MPI processes type: gmres GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement GMRES: happy breakdown tolerance 1e-30 maximum iterations=1 tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using nonzero initial guess using PRECONDITIONED norm type for convergence test PC Object: (mg_levels_2_) 1 MPI processes type: ilu ILU: out-of-place factorization 0 levels of fill tolerance for zero pivot 1e-12 using diagonal shift to prevent zero pivot matrix ordering: natural factor fill ratio given 1, needed 1 Factored matrix follows: Matrix Object: 1 MPI processes type: seqaij rows=93, cols=93 package used to perform factorization: petsc total: nonzeros=3861, allocated nonzeros=3861 total number of mallocs used during MatSetValues calls =0 not using I-node routines linear system matrix = precond matrix: Matrix Object: 1 MPI processes type: seqaij rows=93, cols=93 total: nonzeros=3861, allocated nonzeros=3861 total number of mallocs used during MatSetValues calls =0 not using I-node routines Down solver (pre-smoother) on level 3 ------------------------------- KSP Object: (mg_levels_3_) 1 MPI processes type: richardson Richardson: damping factor=1 maximum iterations=1, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using PRECONDITIONED norm type for convergence test PC Object: (mg_levels_3_) 1 MPI processes type: sor SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1 linear system matrix = precond matrix: Matrix Object: 1 MPI processes type: seqaij rows=8303, cols=8303 total: nonzeros=697606, allocated nonzeros=697606 total number of mallocs used during MatSetValues calls =0 not using I-node routines Up solver (post-smoother) on level 3 ------------------------------- KSP Object: (mg_levels_3_) 1 MPI processes type: gmres GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement GMRES: happy breakdown tolerance 1e-30 maximum iterations=1 tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using nonzero initial guess using PRECONDITIONED norm type for convergence test PC Object: (mg_levels_3_) 1 MPI processes type: ilu ILU: out-of-place factorization 0 levels of fill tolerance for zero pivot 1e-12 using diagonal shift to prevent zero pivot matrix ordering: natural factor fill ratio given 1, needed 1 Factored matrix 
follows: Matrix Object: 1 MPI processes type: seqaij rows=8303, cols=8303 package used to perform factorization: petsc total: nonzeros=697606, allocated nonzeros=697606 total number of mallocs used during MatSetValues calls =0 not using I-node routines linear system matrix = precond matrix: Matrix Object: 1 MPI processes type: seqaij rows=8303, cols=8303 total: nonzeros=697606, allocated nonzeros=697606 total number of mallocs used during MatSetValues calls =0 not using I-node routines Down solver (pre-smoother) on level 4 ------------------------------- KSP Object: (mg_levels_4_) 1 MPI processes type: richardson Richardson: damping factor=1 maximum iterations=1, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using PRECONDITIONED norm type for convergence test PC Object: (mg_levels_4_) 1 MPI processes type: sor SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1 linear system matrix = precond matrix: Matrix Object: 1 MPI processes type: seqaij rows=737280, cols=737280 total: nonzeros=46288425, allocated nonzeros=46288425 total number of mallocs used during MatSetValues calls =0 using I-node routines: found 147456 nodes, limit used is 5 Up solver (post-smoother) on level 4 ------------------------------- KSP Object: (mg_levels_4_) 1 MPI processes type: gmres GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement GMRES: happy breakdown tolerance 1e-30 maximum iterations=1 tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using nonzero initial guess using PRECONDITIONED norm type for convergence test PC Object: (mg_levels_4_) 1 MPI processes type: ilu ILU: out-of-place factorization 0 levels of fill tolerance for zero pivot 1e-12 using diagonal shift to prevent zero pivot matrix ordering: natural factor fill ratio given 1, needed 1 Factored matrix follows: Matrix Object: 1 MPI processes type: seqaij rows=737280, cols=737280 package used to perform factorization: petsc total: nonzeros=46288425, allocated nonzeros=46288425 total number of mallocs used during MatSetValues calls =0 using I-node routines: found 147456 nodes, limit used is 5 linear system matrix = precond matrix: Matrix Object: 1 MPI processes type: seqaij rows=737280, cols=737280 total: nonzeros=46288425, allocated nonzeros=46288425 total number of mallocs used during MatSetValues calls =0 using I-node routines: found 147456 nodes, limit used is 5 linear system matrix = precond matrix: Matrix Object: 1 MPI processes type: seqaij rows=737280, cols=737280 total: nonzeros=46288425, allocated nonzeros=46288425 total number of mallocs used during MatSetValues calls =0 using I-node routines: found 147456 nodes, limit used is 5 Time: 118.043510913849 Actual res norm: 1.11054776641920 Now, the Down Solver is still KSPRichardson, but the UP solvers have turned into a default GMRES, ILU solver. In either case, this didn't help the convergence, and the final residual is larger than then initial. I have also experimented with BoomerAMG, and have run into similar problems; all of the various options result in a preconditioner that is not significantly better tan not using any preconditioner at all. Any suggestions would be greatly appreciated. Thank you, Gaetan Kenway The compete source code listing is below: program main ! 
Test different petsc solution techniques implicit none #include "include/finclude/petsc.h" KSP ksp Vec RHS, x, res Mat A PetscViewer fd integer :: ierr, rank real*8 :: timeA, timeB,err_nrm ! Initialize PETSc call PetscInitialize('petsc_options', ierr) call mpi_comm_rank(PETSC_COMM_WORLD, rank, ierr) ! Load matrix call PetscViewerBinaryOpen(PETSC_COMM_WORLD,"drdw.bin", FILE_MODE_READ, fd, ierr) call MatCreate(PETSC_COMM_WORLD, A, ierr) call MatSetFromOptions(A, ierr) call MatLoad(A, fd, ierr) call PetscViewerDestroy(fd, ierr) ! Create vector call MatGetVecs(A, RHS, x, ierr) call vecduplicate(x, res, ierr) ! Load RHS call PetscViewerBinaryOpen(PETSC_COMM_WORLD, "didw.bin", FILE_MODE_READ, fd, ierr) call VecSetFromOptions(RHS, ierr) call VecLoad(RHS, fd, ierr) call PetscViewerDestroy(fd, ierr) ! Create KSP object call KSPCreate(PEtSC_COMM_WORLD, ksp, ierr) call KSPsetFromOptions(ksp, ierr) call KSPSetOperators(ksp, A, A, SAME_NONZERO_PATTERN, ierr) ! Solve system timeA = mpi_wtime() call KSPSolve(ksp, RHS, x, ierr) timeB = mpi_wtime() if (rank == 0) then print *,'Time:',timeB-timeA end if ! Check actual error call MatMult(A, x, res, ierr) call VecNorm(res, NORM_2, err_nrm, ierr) call VecAxPy(res, -1.0_8, RHS, ierr) call VecNorm(res, NORM_2, err_nrm, ierr) if (rank == 0) then print *,'Actual res norm:',err_nrm end if ! Destroy call KSPDestroy(ksp, ierr) call MatDestroy(A, ierr) call VecDestroy(RHS, ierr) call VecDestroy(x, ierr) call VecDestroy(res, ierr) ! Finalize call PETSCFinalize(ierr) end program main -------------- next part -------------- An HTML attachment was scrubbed... URL: From dharmareddy84 at gmail.com Sun Apr 28 19:36:08 2013 From: dharmareddy84 at gmail.com (Dharmendar Reddy) Date: Sun, 28 Apr 2013 19:36:08 -0500 Subject: [petsc-users] DMPlexGetSupport Message-ID: Hello, I am seeing an issue using DMPlexGetSupport. I am trying to get the Ids of cells sharing a given facet in mesh. I have attached a test code for a small 3 x 3 square mesh. When i print the support for the 12 facets it has, i see 4 internal facets and 8 boundary faces, as expected. But the cell Ids are wrong. Surcessfully created DM From Cell List Mesh in 2 dimensions: 0-cells: 9 1-cells: 12 2-cells: 4 Labels: depth: 3 strata of sizes (9, 12, 4) printing support for facet edge id start= 13 edge id end= 24 f: 13 c: 0 f: 14 c: 0 1 f: 15 c: 0 2 f: 16 c: 0 f: 17 c: 0 f: 18 c: 0 f: 19 c: 0 0 f: 20 c: 0 0 f: 21 c: 0 f: 22 c: 0 f: 23 c: 0 f: 24 c: 0 -- ----------------------------------------------------- Dharmendar Reddy Palle Graduate Student Microelectronics Research center, University of Texas at Austin, 10100 Burnet Road, Bldg. 160 MER 2.608F, TX 78758-4445 e-mail: dharmareddy84 at gmail.com Phone: +1-512-350-9082 United States of America. Homepage: https://webspace.utexas.edu/~dpr342 -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: testDMSupport.F90 Type: application/octet-stream Size: 4781 bytes Desc: not available URL: From dharmareddy84 at gmail.com Sun Apr 28 21:30:13 2013 From: dharmareddy84 at gmail.com (Dharmendar Reddy) Date: Sun, 28 Apr 2013 21:30:13 -0500 Subject: [petsc-users] DMPlexGetSupport In-Reply-To: References: Message-ID: Hello, Looks like the problem is with getCone and getSupport called from F90 code. Thanks Reddy On Sun, Apr 28, 2013 at 7:36 PM, Dharmendar Reddy wrote: > Hello, > I am seeing an issue using DMPlexGetSupport. 
> > I am trying to get the Ids of cells sharing a given facet in mesh. > > I have attached a test code for a small 3 x 3 square mesh. When i print > the support for the 12 facets it has, i see 4 internal facets and 8 > boundary faces, as expected. But the cell Ids are wrong. > > Surcessfully created DM From Cell List > Mesh in 2 dimensions: > 0-cells: 9 > 1-cells: 12 > 2-cells: 4 > Labels: > depth: 3 strata of sizes (9, 12, 4) > printing support for facet > edge id start= 13 edge id end= 24 > f: 13 c: 0 > f: 14 c: 0 1 > f: 15 c: 0 2 > f: 16 c: 0 > f: 17 c: 0 > f: 18 c: 0 > f: 19 c: 0 0 > f: 20 c: 0 0 > f: 21 c: 0 > f: 22 c: 0 > f: 23 c: 0 > f: 24 c: 0 > > -- > ----------------------------------------------------- > Dharmendar Reddy Palle > Graduate Student > Microelectronics Research center, > University of Texas at Austin, > 10100 Burnet Road, Bldg. 160 > MER 2.608F, TX 78758-4445 > e-mail: dharmareddy84 at gmail.com > Phone: +1-512-350-9082 > United States of America. > Homepage: https://webspace.utexas.edu/~dpr342 > -- ----------------------------------------------------- Dharmendar Reddy Palle Graduate Student Microelectronics Research center, University of Texas at Austin, 10100 Burnet Road, Bldg. 160 MER 2.608F, TX 78758-4445 e-mail: dharmareddy84 at gmail.com Phone: +1-512-350-9082 United States of America. Homepage: https://webspace.utexas.edu/~dpr342 -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Sun Apr 28 23:20:43 2013 From: knepley at gmail.com (Matthew Knepley) Date: Sun, 28 Apr 2013 23:20:43 -0500 Subject: [petsc-users] DMPlex Submesh In-Reply-To: References: Message-ID: On Sat, Apr 27, 2013 at 8:23 PM, Dharmendar Reddy wrote: > Hello, > I was testing the plexsubmesh functionality. I created a 1D > submesh from a 2D mesh. The 2D mesh is a square mesh with uniform x-y grid. > > x [0,L] and y [0,L] > numX = 10, numY = 91 > > The 1D mesh is along y axis at second x-grid point. > > when i call a DMView on submesh, i get the follwoing: > > Mesh in 1 dimensions: > 0-cells: 91 > 1-cells: 90 > Labels: > Boundary: 1 strata of sizes (2) > depth: 3 strata of sizes (91, 90, 180) > > I was expecting : > > depth: 2 strata of sizes (91, 90) > > Looks like the submesh has the information of 2-cells of the parent mesh. > Is this a default behavior ? > Yes, the default behavior is to retain the cells adjacent to the submesh. I have needed this for anything I have ever done. Even though it seems messier, it is much more practical. Matt > Thanks > Reddy > -- > ----------------------------------------------------- > Dharmendar Reddy Palle > Graduate Student > Microelectronics Research center, > University of Texas at Austin, > 10100 Burnet Road, Bldg. 160 > MER 2.608F, TX 78758-4445 > e-mail: dharmareddy84 at gmail.com > Phone: +1-512-350-9082 > United States of America. > Homepage: https://webspace.utexas.edu/~dpr342 > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From dharmareddy84 at gmail.com Mon Apr 29 04:02:16 2013 From: dharmareddy84 at gmail.com (Dharmendar Reddy) Date: Mon, 29 Apr 2013 04:02:16 -0500 Subject: [petsc-users] DMPlexGetSupport In-Reply-To: References: Message-ID: Hello, If i extract the support and cone using DMPlexGetTransitiveclousre, I am getting correct answers. Please have a look at the attached code. 
On Sun, Apr 28, 2013 at 9:30 PM, Dharmendar Reddy wrote: > Hello, > Looks like the problem is with getCone and getSupport called from > F90 code. > > Thanks > Reddy > > > On Sun, Apr 28, 2013 at 7:36 PM, Dharmendar Reddy > wrote: > >> Hello, >> I am seeing an issue using DMPlexGetSupport. >> >> I am trying to get the Ids of cells sharing a given facet in mesh. >> >> I have attached a test code for a small 3 x 3 square mesh. When i print >> the support for the 12 facets it has, i see 4 internal facets and 8 >> boundary faces, as expected. But the cell Ids are wrong. >> >> Surcessfully created DM From Cell List >> Mesh in 2 dimensions: >> 0-cells: 9 >> 1-cells: 12 >> 2-cells: 4 >> Labels: >> depth: 3 strata of sizes (9, 12, 4) >> printing support for facet >> edge id start= 13 edge id end= 24 >> f: 13 c: 0 >> f: 14 c: 0 1 >> f: 15 c: 0 2 >> f: 16 c: 0 >> f: 17 c: 0 >> f: 18 c: 0 >> f: 19 c: 0 0 >> f: 20 c: 0 0 >> f: 21 c: 0 >> f: 22 c: 0 >> f: 23 c: 0 >> f: 24 c: 0 >> >> -- >> ----------------------------------------------------- >> Dharmendar Reddy Palle >> Graduate Student >> Microelectronics Research center, >> University of Texas at Austin, >> 10100 Burnet Road, Bldg. 160 >> MER 2.608F, TX 78758-4445 >> e-mail: dharmareddy84 at gmail.com >> Phone: +1-512-350-9082 >> United States of America. >> Homepage: https://webspace.utexas.edu/~dpr342 >> > > > > -- > ----------------------------------------------------- > Dharmendar Reddy Palle > Graduate Student > Microelectronics Research center, > University of Texas at Austin, > 10100 Burnet Road, Bldg. 160 > MER 2.608F, TX 78758-4445 > e-mail: dharmareddy84 at gmail.com > Phone: +1-512-350-9082 > United States of America. > Homepage: https://webspace.utexas.edu/~dpr342 > -- ----------------------------------------------------- Dharmendar Reddy Palle Graduate Student Microelectronics Research center, University of Texas at Austin, 10100 Burnet Road, Bldg. 160 MER 2.608F, TX 78758-4445 e-mail: dharmareddy84 at gmail.com Phone: +1-512-350-9082 United States of America. Homepage: https://webspace.utexas.edu/~dpr342 -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: testDMSupport.F90 Type: application/octet-stream Size: 5578 bytes Desc: not available URL: From sonyablade2010 at hotmail.com Mon Apr 29 05:58:58 2013 From: sonyablade2010 at hotmail.com (Sonya Blade) Date: Mon, 29 Apr 2013 11:58:58 +0100 Subject: [petsc-users] How to run Petsc in Fortran Message-ID: Dear All,I'm experiencing the difficulties with running the Petsc in Fortran Language, I'm using gfortran integrated in the CodeBlock editor. I've compiled/installed the Petsc , Slepc and having noproblems calling the Petsc/Slepc functions in C, it works perfectly. When it comes to using Petsc in Fortran I follow the instructions given in that article, especially the item 3http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Sys/UsingFortran.html but so far I couldn't succeed to run them succesfully in Fortran, my main code is as follow: program mainimplicit none#include #include #include PetscReal norm print *,"Hello World "end I've added all the related include folders to search path (Petsc_Dir\include\finclude etc..) and library file "libpetsc.a" which results after Petsc compilation, but still compiler complains about the "Error: Unclassifiable statement at (1)" PetscReal norm. Which means that compiler is unaware of type identifier PetscReal. What am I missing here ? 
Your help will be appreciated, Regards, -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Mon Apr 29 06:05:30 2013 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 29 Apr 2013 06:05:30 -0500 Subject: [petsc-users] How to run Petsc in Fortran In-Reply-To: References: Message-ID: On Mon, Apr 29, 2013 at 5:58 AM, Sonya Blade wrote: > Dear All, > I'm experiencing the difficulties with running the Petsc in Fortran > Language, I'm using gfortran > integrated in the CodeBlock editor. I've compiled/installed the Petsc , > Slepc and having no > problems calling the Petsc/Slepc functions in C, it works perfectly. > > When it comes to using Petsc in Fortran I follow the instructions given in > that article, especially the item 3 > > http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Sys/UsingFortran.html > > but so far I couldn't succeed to run them succesfully in Fortran, my main > code is as follow: > > program main > implicit none > #include > #include > #include > > PetscReal norm > > print *,"Hello World " > end > > I've added all the related include folders to search path > (Petsc_Dir\include\finclude etc..) and library > file "libpetsc.a" which results after Petsc compilation, but still > compiler complains about the > "Error: Unclassifiable statement at (1)" PetscReal norm. Which means that > compiler is unaware of type > identifier PetscReal. What am I missing here ? > > Your help will be appreciated, > Are you sure that your compiler is preprocessing the file? You usually need to name it *.F. Matt > Regards, > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From Wadud.Miah at awe.co.uk Mon Apr 29 06:07:04 2013 From: Wadud.Miah at awe.co.uk (Wadud.Miah at awe.co.uk) Date: Mon, 29 Apr 2013 11:07:04 +0000 Subject: [petsc-users] EXTERNAL: How to run Petsc in Fortran In-Reply-To: References: Message-ID: <201304291107.r3TB7Avl027935@msw1.awe.co.uk> What is the compilation command you are executing? It is probably because you are not running the pre-processor in Fortran which is switched off in the PGI compiler by default. Regards, Wadud. ________________________________ From: petsc-users-bounces at mcs.anl.gov [mailto:petsc-users-bounces at mcs.anl.gov] On Behalf Of Sonya Blade Sent: 29 April 2013 11:59 To: petsc-users at mcs.anl.gov Subject: EXTERNAL: [petsc-users] How to run Petsc in Fortran Dear All, I'm experiencing the difficulties with running the Petsc in Fortran Language, I'm using gfortran integrated in the CodeBlock editor. I've compiled/installed the Petsc , Slepc and having no problems calling the Petsc/Slepc functions in C, it works perfectly. When it comes to using Petsc in Fortran I follow the instructions given in that article, especially the item 3 http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Sys/UsingFortran.html but so far I couldn't succeed to run them succesfully in Fortran, my main code is as follow: program main implicit none #include #include #include PetscReal norm print *,"Hello World " end I've added all the related include folders to search path (Petsc_Dir\include\finclude etc..) and library file "libpetsc.a" which results after Petsc compilation, but still compiler complains about the "Error: Unclassifiable statement at (1)" PetscReal norm. 
Which means that compiler is unaware of type identifier PetscReal. What am I missing here ? Your help will be appreciated, Regards, ___________________________________________________ ____________________________ The information in this email and in any attachment(s) is commercial in confidence. If you are not the named addressee(s) or if you receive this email in error then any distribution, copying or use of this communication or the information in it is strictly prohibited. Please notify us immediately by email at admin.internet(at)awe.co.uk, and then delete this message from your computer. While attachments are virus checked, AWE plc does not accept any liability in respect of any virus which is not detected. AWE Plc Registered in England and Wales Registration No 02763902 AWE, Aldermaston, Reading, RG7 4PR -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Mon Apr 29 06:32:22 2013 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 29 Apr 2013 06:32:22 -0500 Subject: [petsc-users] DMPlexGetSupport In-Reply-To: References: Message-ID: On Mon, Apr 29, 2013 at 4:02 AM, Dharmendar Reddy wrote: > Hello, > If i extract the support and cone using > DMPlexGetTransitiveclousre, I am getting correct answers. Please have a > look at the attached code. > Yes, the interface definition for DMPlexRestoreCone/Support() was missing from petscdmplex.h90, so Fortran was just stomping through random memory without giving a warning. I pushed the fix to 'next', and there is now a test of it in DMPlex ex1f90. Thanks, Matt > On Sun, Apr 28, 2013 at 9:30 PM, Dharmendar Reddy > wrote: > >> Hello, >> Looks like the problem is with getCone and getSupport called >> from F90 code. >> >> Thanks >> Reddy >> >> >> On Sun, Apr 28, 2013 at 7:36 PM, Dharmendar Reddy < >> dharmareddy84 at gmail.com> wrote: >> >>> Hello, >>> I am seeing an issue using DMPlexGetSupport. >>> >>> I am trying to get the Ids of cells sharing a given facet in mesh. >>> >>> I have attached a test code for a small 3 x 3 square mesh. When i print >>> the support for the 12 facets it has, i see 4 internal facets and 8 >>> boundary faces, as expected. But the cell Ids are wrong. >>> >>> Surcessfully created DM From Cell List >>> Mesh in 2 dimensions: >>> 0-cells: 9 >>> 1-cells: 12 >>> 2-cells: 4 >>> Labels: >>> depth: 3 strata of sizes (9, 12, 4) >>> printing support for facet >>> edge id start= 13 edge id end= 24 >>> f: 13 c: 0 >>> f: 14 c: 0 1 >>> f: 15 c: 0 2 >>> f: 16 c: 0 >>> f: 17 c: 0 >>> f: 18 c: 0 >>> f: 19 c: 0 0 >>> f: 20 c: 0 0 >>> f: 21 c: 0 >>> f: 22 c: 0 >>> f: 23 c: 0 >>> f: 24 c: 0 >>> >>> -- >>> ----------------------------------------------------- >>> Dharmendar Reddy Palle >>> Graduate Student >>> Microelectronics Research center, >>> University of Texas at Austin, >>> 10100 Burnet Road, Bldg. 160 >>> MER 2.608F, TX 78758-4445 >>> e-mail: dharmareddy84 at gmail.com >>> Phone: +1-512-350-9082 >>> United States of America. >>> Homepage: https://webspace.utexas.edu/~dpr342 >>> >> >> >> >> -- >> ----------------------------------------------------- >> Dharmendar Reddy Palle >> Graduate Student >> Microelectronics Research center, >> University of Texas at Austin, >> 10100 Burnet Road, Bldg. 160 >> MER 2.608F, TX 78758-4445 >> e-mail: dharmareddy84 at gmail.com >> Phone: +1-512-350-9082 >> United States of America. 
>> Homepage: https://webspace.utexas.edu/~dpr342 >> > > > > -- > ----------------------------------------------------- > Dharmendar Reddy Palle > Graduate Student > Microelectronics Research center, > University of Texas at Austin, > 10100 Burnet Road, Bldg. 160 > MER 2.608F, TX 78758-4445 > e-mail: dharmareddy84 at gmail.com > Phone: +1-512-350-9082 > United States of America. > Homepage: https://webspace.utexas.edu/~dpr342 > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From sonyablade2010 at hotmail.com Mon Apr 29 07:09:46 2013 From: sonyablade2010 at hotmail.com (Sonya Blade) Date: Mon, 29 Apr 2013 13:09:46 +0100 Subject: [petsc-users] EXTERNAL: How to run Petsc in Fortran In-Reply-To: <201304291107.r3TB7Avl027935@msw1.awe.co.uk> References: , <201304291107.r3TB7Avl027935@msw1.awe.co.uk> Message-ID: > What is the compilation command you are executing? It is probably? > because you are not running the pre-processor in Fortran which is > switched off in the PGI compiler by default. > Regards, > > Wadud. > ? What it's got anything to do with PGI compiler I'm using Gnu Fortran? I try to compile program with the given command lines executions. -------------- Build: Debug in Petsc_Fortran (compiler: GNU GFortran Compiler)--------------- gfortran.exe -Wall ?-g ?-Wall ? -IC:\Users\....\Downloads\petsc-3.3-p6\include\finclude -IC:\Users\...\Downloads\petsc-3.3-p6\arch-mswin-c-debug\include ?-c D:\....\PROJECTS\CBFortran\Petsc_Fortran\main.f90 -o obj\Debug\main.o >Are you sure that your compiler is preprocessing the file? You usually need >to name it *.F. > > Matt I've tried with every possible fortran file extension, I'm sure that file is being? pre-processed because if I comment the line of "PetscReal norm" it compiles and runs? normally and produces the print statement on console screen. Regards, From Wadud.Miah at awe.co.uk Mon Apr 29 07:19:21 2013 From: Wadud.Miah at awe.co.uk (Wadud.Miah at awe.co.uk) Date: Mon, 29 Apr 2013 12:19:21 +0000 Subject: [petsc-users] EXTERNAL: Re: EXTERNAL: How to run Petsc in Fortran In-Reply-To: References: , <201304291107.r3TB7Avl027935@msw1.awe.co.uk> Message-ID: <201304291219.r3TCJUMa007059@msw1.awe.co.uk> Hi Sonya, As this is Fortran 90, I think you should be using the double colon to allocate variables: PetscReal :: norm Regards, Wadud. -----Original Message----- From: petsc-users-bounces at mcs.anl.gov [mailto:petsc-users-bounces at mcs.anl.gov] On Behalf Of Sonya Blade Sent: 29 April 2013 13:10 To: petsc-users at mcs.anl.gov Subject: EXTERNAL: Re: [petsc-users] EXTERNAL: How to run Petsc in Fortran > What is the compilation command you are executing? It is probably? > because you are not running the pre-processor in Fortran which is > switched off in the PGI compiler by default. > Regards, > > Wadud. > ? What it's got anything to do with PGI compiler I'm using Gnu Fortran? I try to compile program with the given command lines executions. -------------- Build: Debug in Petsc_Fortran (compiler: GNU GFortran Compiler)--------------- gfortran.exe -Wall ?-g ?-Wall ? -IC:\Users\....\Downloads\petsc-3.3-p6\include\finclude -IC:\Users\...\Downloads\petsc-3.3-p6\arch-mswin-c-debug\include ?-c D:\....\PROJECTS\CBFortran\Petsc_Fortran\main.f90 -o obj\Debug\main.o >Are you sure that your compiler is preprocessing the file? You usually need >to name it *.F. 
> > Matt I've tried with every possible fortran file extension, I'm sure that file is being? pre-processed because if I comment the line of "PetscReal norm" it compiles and runs? normally and produces the print statement on console screen. Regards, ___________________________________________________ ____________________________ The information in this email and in any attachment(s) is commercial in confidence. If you are not the named addressee(s) or if you receive this email in error then any distribution, copying or use of this communication or the information in it is strictly prohibited. Please notify us immediately by email at admin.internet(at)awe.co.uk, and then delete this message from your computer. While attachments are virus checked, AWE plc does not accept any liability in respect of any virus which is not detected. AWE Plc Registered in England and Wales Registration No 02763902 AWE, Aldermaston, Reading, RG7 4PR From sonyablade2010 at hotmail.com Mon Apr 29 07:28:28 2013 From: sonyablade2010 at hotmail.com (Sonya Blade) Date: Mon, 29 Apr 2013 13:28:28 +0100 Subject: [petsc-users] EXTERNAL: Re: EXTERNAL: How to run Petsc in Fortran In-Reply-To: <201304291219.r3TCJUMa007059@msw1.awe.co.uk> References: , , <201304291107.r3TB7Avl027935@msw1.awe.co.uk>, , <201304291219.r3TCJUMa007059@msw1.awe.co.uk> Message-ID: > Hi Sonya, > > As this is Fortran 90, I think you should be using the double colon to allocate variables: > > PetscReal :: norm > > Regards, > Wadud. Although you are right, compiler doesn't raises error due to the wrong declaration type of norm, it keeps showing where PetscReal is not known. "Error: Unclassifiable statement at (1)" Regards, From Wadud.Miah at awe.co.uk Mon Apr 29 07:36:35 2013 From: Wadud.Miah at awe.co.uk (Wadud.Miah at awe.co.uk) Date: Mon, 29 Apr 2013 12:36:35 +0000 Subject: [petsc-users] EXTERNAL: Re: EXTERNAL: Re: EXTERNAL: How to run Petsc in Fortran In-Reply-To: References: , , <201304291107.r3TB7Avl027935@msw1.awe.co.uk>, , <201304291219.r3TCJUMa007059@msw1.awe.co.uk> Message-ID: <201304291236.r3TCaemT027034@msw2.awe.co.uk> Hi Sonya, >From the command line you wrote, you are not running the pre-processor. Include the flag: -cpp In your compilation command. Regards, Wadud. -----Original Message----- From: petsc-users-bounces at mcs.anl.gov [mailto:petsc-users-bounces at mcs.anl.gov] On Behalf Of Sonya Blade Sent: 29 April 2013 13:28 To: petsc-users at mcs.anl.gov Subject: EXTERNAL: Re: [petsc-users] EXTERNAL: Re: EXTERNAL: How to run Petsc in Fortran > Hi Sonya, > > As this is Fortran 90, I think you should be using the double colon to allocate variables: > > PetscReal :: norm > > Regards, > Wadud. Although you are right, compiler doesn't raises error due to the wrong declaration type of norm, it keeps showing where PetscReal is not known. "Error: Unclassifiable statement at (1)" Regards, ___________________________________________________ ____________________________ The information in this email and in any attachment(s) is commercial in confidence. If you are not the named addressee(s) or if you receive this email in error then any distribution, copying or use of this communication or the information in it is strictly prohibited. Please notify us immediately by email at admin.internet(at)awe.co.uk, and then delete this message from your computer. While attachments are virus checked, AWE plc does not accept any liability in respect of any virus which is not detected. 
AWE Plc Registered in England and Wales Registration No 02763902 AWE, Aldermaston, Reading, RG7 4PR From knepley at gmail.com Mon Apr 29 07:38:54 2013 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 29 Apr 2013 07:38:54 -0500 Subject: [petsc-users] EXTERNAL: How to run Petsc in Fortran In-Reply-To: References: <201304291107.r3TB7Avl027935@msw1.awe.co.uk> Message-ID: On Mon, Apr 29, 2013 at 7:09 AM, Sonya Blade wrote: > > What is the compilation command you are executing? It is probably > > because you are not running the pre-processor in Fortran which is > > switched off in the PGI compiler by default. > > Regards, > > > > Wadud. > > > > What it's got anything to do with PGI compiler I'm using Gnu Fortran? > I try to compile program with the given command lines executions. > > -------------- Build: Debug in Petsc_Fortran (compiler: GNU GFortran > Compiler)--------------- > > gfortran.exe -Wall -g -Wall > -IC:\Users\....\Downloads\petsc-3.3-p6\include\finclude > -IC:\Users\...\Downloads\petsc-3.3-p6\arch-mswin-c-debug\include -c > D:\....\PROJECTS\CBFortran\Petsc_Fortran\main.f90 -o obj\Debug\main.o > > > > > >Are you sure that your compiler is preprocessing the file? You usually > need > >to name it *.F. > > > > Matt > > I've tried with every possible fortran file extension, I'm sure that file > is being > pre-processed because if I comment the line of "PetscReal norm" it > compiles and runs That is not evidence for the file being preprocessed. Preprocessing means that the #include statements bring in other code from the headers. > normally and produces the print statement on console screen. > So the examples run? If so, there is a problem with your build, and we would recommend building with the makefiles we provide. Thanks, Matt > Regards, -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From Wadud.Miah at awe.co.uk Mon Apr 29 07:39:08 2013 From: Wadud.Miah at awe.co.uk (Wadud.Miah at awe.co.uk) Date: Mon, 29 Apr 2013 12:39:08 +0000 Subject: [petsc-users] EXTERNAL: Re: EXTERNAL: Re: EXTERNAL: Re: EXTERNAL: How to run Petsc in Fortran In-Reply-To: <201304291236.r3TCaemT027034@msw2.awe.co.uk> References: , , <201304291107.r3TB7Avl027935@msw1.awe.co.uk>, , <201304291219.r3TCJUMa007059@msw1.awe.co.uk> <201304291236.r3TCaemT027034@msw2.awe.co.uk> Message-ID: <201304291239.r3TCdG08008971@msw2.awe.co.uk> Also, you should either be including the MPI header path in your compilation command or use mpif90 wrapper when compiling codes that use PETSc as PETSc uses MPI. Regards, Wadud. -----Original Message----- From: petsc-users-bounces at mcs.anl.gov [mailto:petsc-users-bounces at mcs.anl.gov] On Behalf Of Wadud.Miah at awe1.awe.co.uk Sent: 29 April 2013 13:37 To: sonyablade2010 at hotmail.com; petsc-users at mcs.anl.gov Subject: EXTERNAL: Re: [petsc-users] EXTERNAL: Re: EXTERNAL: Re: EXTERNAL: How to run Petsc in Fortran Hi Sonya, >From the command line you wrote, you are not running the pre-processor. Include the flag: -cpp In your compilation command. Regards, Wadud. 
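For reference, a minimal sketch of what a preprocessed PETSc Fortran source looks like (illustrative only, not the actual file being discussed; it assumes the petsc-3.3 layout in which the finclude headers live under PETSC_DIR/include):

      program main
      implicit none
#include "finclude/petscsys.h"
      PetscErrorCode ierr
      PetscReal      norm

      call PetscInitialize(PETSC_NULL_CHARACTER,ierr)
      norm = 0.0d0
      print *,'Hello World ',norm
      call PetscFinalize(ierr)
      end

The PetscReal and PetscErrorCode declarations only become visible after the C preprocessor has expanded the #include line, which is why the source either needs a .F90 extension or an explicit preprocessing flag such as gfortran -cpp, together with -I entries pointing at PETSC_DIR/include and PETSC_DIR/PETSC_ARCH/include (the parent include directory, not the finclude subdirectory itself), so that "finclude/petscsys.h" can be resolved.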
-----Original Message----- From: petsc-users-bounces at mcs.anl.gov [mailto:petsc-users-bounces at mcs.anl.gov] On Behalf Of Sonya Blade Sent: 29 April 2013 13:28 To: petsc-users at mcs.anl.gov Subject: EXTERNAL: Re: [petsc-users] EXTERNAL: Re: EXTERNAL: How to run Petsc in Fortran > Hi Sonya, > > As this is Fortran 90, I think you should be using the double colon to allocate variables: > > PetscReal :: norm > > Regards, > Wadud. Although you are right, compiler doesn't raises error due to the wrong declaration type of norm, it keeps showing where PetscReal is not known. "Error: Unclassifiable statement at (1)" Regards, ___________________________________________________ ____________________________ The information in this email and in any attachment(s) is commercial in confidence. If you are not the named addressee(s) or if you receive this email in error then any distribution, copying or use of this communication or the information in it is strictly prohibited. Please notify us immediately by email at admin.internet(at)awe.co.uk, and then delete this message from your computer. While attachments are virus checked, AWE plc does not accept any liability in respect of any virus which is not detected. AWE Plc Registered in England and Wales Registration No 02763902 AWE, Aldermaston, Reading, RG7 4PR ___________________________________________________ ____________________________ The information in this email and in any attachment(s) is commercial in confidence. If you are not the named addressee(s) or if you receive this email in error then any distribution, copying or use of this communication or the information in it is strictly prohibited. Please notify us immediately by email at admin.internet(at)awe.co.uk, and then delete this message from your computer. While attachments are virus checked, AWE plc does not accept any liability in respect of any virus which is not detected. AWE Plc Registered in England and Wales Registration No 02763902 AWE, Aldermaston, Reading, RG7 4PR From jedbrown at mcs.anl.gov Mon Apr 29 08:40:29 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Mon, 29 Apr 2013 08:40:29 -0500 Subject: [petsc-users] Help with ML/BoomerAMG In-Reply-To: References: Message-ID: <87wqrlzc8i.fsf@mcs.anl.gov> Gaetan Kenway writes: > Hello > > I am the process of trying out some of the multigrid functionality in PETSc > and not having much luck. The simple system I am trying to solve is adjoint > system of equations resulting from the finite volume discretization of the > Euler equation on a 147,456 cell mesh resulting in a linear system of > equations of size 5*147,456=737280. All of the test are done on a single > processor and use petsc-3.2-p7. Is this steady-state Euler? Exterior or recirculating flow? Conservative variables? What Mach number? The heuristics used in algebraic multigrid do not work for hyperbolic systems like Euler. There has been some research, but the multigrid efficiency that we enjoy for elliptic problems continues to elude us. For low Mach number, we can build preconditioners based on splitting, reducing to an elliptic solve in the pressure space (changing variables in the preconditioner if you use conservative variables for the full problem). Otherwise, we're currently stuck with geometric multigrid if we want significant coarse-grid acceleration. With finite volume methods, this is done by agglomeration, leading to large cells with many faces, but that exactly preserve the conservation statement of the fine-grid problem. 
The implementation effort required for such methods is why it's still popular to use one-level domain decomposition. From gaetank at gmail.com Mon Apr 29 08:51:31 2013 From: gaetank at gmail.com (Gaetan Kenway) Date: Mon, 29 Apr 2013 09:51:31 -0400 Subject: [petsc-users] Help with ML/BoomerAMG In-Reply-To: <87wqrlzc8i.fsf@mcs.anl.gov> References: <87wqrlzc8i.fsf@mcs.anl.gov> Message-ID: Hi Jed This problem was external flow, transonic Euler, (M=0.85), conserved variables. As I stated in my email, the additive schwartz method + (block) ILU on the subdomains works extremely well for this problem. The real problem I am interested in however, is preconditioning for the RANS equations. For the most part, ASM+ILU works fine for these problems as well, but I am investigating other methods that may potentially increase robustness/reduce memory/reduce computational cost. Since the solver I'm using is a structured multiblock solver that uses multigrid for the primal problem, I can use geometric multigrid, provided I construct the restriction and prolongation operators myself. I guess geometric multigrid is the best approach here. Thank you Gaetan On Mon, Apr 29, 2013 at 9:40 AM, Jed Brown wrote: > Gaetan Kenway writes: > > > Hello > > > > I am the process of trying out some of the multigrid functionality in > PETSc > > and not having much luck. The simple system I am trying to solve is > adjoint > > system of equations resulting from the finite volume discretization of > the > > Euler equation on a 147,456 cell mesh resulting in a linear system of > > equations of size 5*147,456=737280. All of the test are done on a single > > processor and use petsc-3.2-p7. > > Is this steady-state Euler? Exterior or recirculating flow? > Conservative variables? What Mach number? > > The heuristics used in algebraic multigrid do not work for hyperbolic > systems like Euler. There has been some research, but the multigrid > efficiency that we enjoy for elliptic problems continues to elude us. > > For low Mach number, we can build preconditioners based on splitting, > reducing to an elliptic solve in the pressure space (changing variables > in the preconditioner if you use conservative variables for the full > problem). Otherwise, we're currently stuck with geometric multigrid if > we want significant coarse-grid acceleration. With finite volume > methods, this is done by agglomeration, leading to large cells with many > faces, but that exactly preserve the conservation statement of the > fine-grid problem. > > The implementation effort required for such methods is why it's still > popular to use one-level domain decomposition. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Mon Apr 29 08:57:45 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Mon, 29 Apr 2013 08:57:45 -0500 Subject: [petsc-users] Help with ML/BoomerAMG In-Reply-To: References: <87wqrlzc8i.fsf@mcs.anl.gov> Message-ID: <87r4htzbfq.fsf@mcs.anl.gov> Gaetan Kenway writes: > Hi Jed > > This problem was external flow, transonic Euler, (M=0.85), conserved > variables. As I stated in my email, the additive schwartz method + (block) > ILU on the subdomains works extremely well for this problem. The real > problem I am interested in however, is preconditioning for the RANS > equations. For the most part, ASM+ILU works fine for these problems as > well, but I am investigating other methods that may potentially increase > robustness/reduce memory/reduce computational cost. 
ASM/ILU is pretty much the workhorse method for those not using geometric multigrid. > Since the solver I'm using is a structured multiblock solver that uses > multigrid for the primal problem, I can use geometric multigrid, > provided I construct the restriction and prolongation operators myself. > > I guess geometric multigrid is the best approach here. Yes, and the adjoint problem should be faster than the forward problem because it's linear. Are you using FAS or linear MG for the forward problem? Is this a steady-state solve? From gaetank at gmail.com Mon Apr 29 09:23:35 2013 From: gaetank at gmail.com (Gaetan Kenway) Date: Mon, 29 Apr 2013 10:23:35 -0400 Subject: [petsc-users] Help with ML/BoomerAMG In-Reply-To: <87haipzajp.fsf@mcs.anl.gov> References: <87wqrlzc8i.fsf@mcs.anl.gov> <87r4htzbfq.fsf@mcs.anl.gov> <87haipzajp.fsf@mcs.anl.gov> Message-ID: For the forward solve I use ASM+ILU in the same manner as for the adjoint problem. The ASM not a bottleneck per se. Typically we see the adjoint problem taking the same amount of time as the non-linear problem for well-behaved flows, and the adjoint is shorter for less well-behaved flows. The real problem I am having is for certain RANS cases, the frozen turbulence adjoint is extremely difficult to solve --- requiring GMRES subspace sizes on the order of 400-500 to converge. That's why I was investigating alternative preconditioning methods that could help solve these problems more efficiently. Gaetan On Mon, Apr 29, 2013 at 10:16 AM, Jed Brown wrote: > Gaetan Kenway writes: > > > The problems I am looking at are steady state or quasi-steady state (time > > spectral approach). The example I sent before was steady state. The > > nonlinear solver uses FAS multigrid (only used for full multigrid and > > start-up on the fine grid) followed by an inexact Newton-Krylov method. > > Is NK preconditioned by linear MG? > > Is the ASM preconditioner for the adjoint problem a bottleneck compared > to the forward solve? > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Mon Apr 29 09:31:26 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Mon, 29 Apr 2013 09:31:26 -0500 Subject: [petsc-users] Help with ML/BoomerAMG In-Reply-To: References: <87wqrlzc8i.fsf@mcs.anl.gov> <87r4htzbfq.fsf@mcs.anl.gov> <87haipzajp.fsf@mcs.anl.gov> Message-ID: <87d2tdz9vl.fsf@mcs.anl.gov> Gaetan Kenway writes: > For the forward solve I use ASM+ILU in the same manner as for the adjoint > problem. > The ASM not a bottleneck per se. Typically we see the adjoint problem > taking the same amount of time as the non-linear problem for well-behaved > flows, and the adjoint is shorter for less well-behaved flows. Sounds reasonable. > The real problem I am having is for certain RANS cases, the > frozen turbulence adjoint is extremely difficult to solve --- requiring > GMRES subspace sizes on the order of 400-500 to converge. Hmm, which turbulence model are you using? Is it related to stretched grids? Continuous or discrete adjoint? From gaetank at gmail.com Mon Apr 29 09:41:32 2013 From: gaetank at gmail.com (Gaetan Kenway) Date: Mon, 29 Apr 2013 10:41:32 -0400 Subject: [petsc-users] Help with ML/BoomerAMG In-Reply-To: <87d2tdz9vl.fsf@mcs.anl.gov> References: <87wqrlzc8i.fsf@mcs.anl.gov> <87r4htzbfq.fsf@mcs.anl.gov> <87haipzajp.fsf@mcs.anl.gov> <87d2tdz9vl.fsf@mcs.anl.gov> Message-ID: It is an SA turbulence model and the discrete adjoint computed exactly with AD. 
Certainly the grids are highly stretched in the BL since the grids are resolving the viscous sublayer (y+ < 1) and the Reynolds numbers are on the order of 10's of millions. I tend only to see this behaviour at higher mach numbers when stronger shocks start to appear. For example, the adjoint system may solve fine at M=0.80, and fail to converge at M=0.85. For these RANS cases, the non-linear solution is solved using only RK with multigrid. It is entirely possible a different preconditioner may not help at all, but there's not much else you can do. Gaetan On Mon, Apr 29, 2013 at 10:31 AM, Jed Brown wrote: > Gaetan Kenway writes: > > > For the forward solve I use ASM+ILU in the same manner as for the adjoint > > problem. > > The ASM not a bottleneck per se. Typically we see the adjoint problem > > taking the same amount of time as the non-linear problem for well-behaved > > flows, and the adjoint is shorter for less well-behaved flows. > > Sounds reasonable. > > > The real problem I am having is for certain RANS cases, the > > frozen turbulence adjoint is extremely difficult to solve --- requiring > > GMRES subspace sizes on the order of 400-500 to converge. > > Hmm, which turbulence model are you using? Is it related to stretched > grids? Continuous or discrete adjoint? > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Mon Apr 29 09:51:53 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Mon, 29 Apr 2013 09:51:53 -0500 Subject: [petsc-users] Help with ML/BoomerAMG In-Reply-To: References: <87wqrlzc8i.fsf@mcs.anl.gov> <87r4htzbfq.fsf@mcs.anl.gov> <87haipzajp.fsf@mcs.anl.gov> <87d2tdz9vl.fsf@mcs.anl.gov> Message-ID: <877gjlz8xi.fsf@mcs.anl.gov> Gaetan Kenway writes: > It is an SA turbulence model and the discrete adjoint computed exactly with > AD. Certainly the grids are highly stretched in the BL since the grids are > resolving the viscous sublayer (y+ < 1) and the Reynolds numbers are on > the order of 10's of millions. I tend only to see this behaviour at > higher mach numbers when stronger shocks start to appear. For example, the > adjoint system may solve fine at M=0.80, and fail to converge at M=0.85. How meaningful is the information provided by the discrete adjoint here? Limiters and even just upwind discretizations on non-uniform grids lead to inconsistent discretizations of the adjoint equations. If the adjoint equation is full of numerical artifacts, it can cause the linear problem to lose structure, resulting in singular sub-problems, negative pivots, and other badness. What happens when you use a direct solve for subdomain problems (ASM+LU; use smaller subdomains if necessary)? From gaetank at gmail.com Mon Apr 29 10:19:34 2013 From: gaetank at gmail.com (Gaetan Kenway) Date: Mon, 29 Apr 2013 11:19:34 -0400 Subject: [petsc-users] Help with ML/BoomerAMG In-Reply-To: <877gjlz8xi.fsf@mcs.anl.gov> References: <87wqrlzc8i.fsf@mcs.anl.gov> <87r4htzbfq.fsf@mcs.anl.gov> <87haipzajp.fsf@mcs.anl.gov> <87d2tdz9vl.fsf@mcs.anl.gov> <877gjlz8xi.fsf@mcs.anl.gov> Message-ID: It is possible the information provided by the discrete adjoint here is somewhat less meaningful, but I need to analyze them for off-design conditions for optimizations. I am using a centered discretrization plus scalar JST dissipation. I have not tried using LU on the subdomains, that is certainly something to try. 
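For reference, the subdomain direct solve suggested here can normally be selected purely from the options database, with no code changes; an illustrative set of options (exact names may vary slightly between PETSc versions) is

  -pc_type asm -pc_asm_overlap 1 -sub_ksp_type preonly -sub_pc_type lu

The -sub_ prefix addresses the solver used on each ASM subdomain, so comparing block ILU against a block direct solve is just a matter of switching -sub_pc_type between ilu and lu.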
Thank you for your help, Gaetan On Mon, Apr 29, 2013 at 10:51 AM, Jed Brown wrote: > Gaetan Kenway writes: > > > It is an SA turbulence model and the discrete adjoint computed exactly > with > > AD. Certainly the grids are highly stretched in the BL since the grids > are > > resolving the viscous sublayer (y+ < 1) and the Reynolds numbers are on > > the order of 10's of millions. I tend only to see this behaviour at > > higher mach numbers when stronger shocks start to appear. For example, > the > > adjoint system may solve fine at M=0.80, and fail to converge at > M=0.85. > > How meaningful is the information provided by the discrete adjoint here? > Limiters and even just upwind discretizations on non-uniform grids lead > to inconsistent discretizations of the adjoint equations. If the > adjoint equation is full of numerical artifacts, it can cause the linear > problem to lose structure, resulting in singular sub-problems, negative > pivots, and other badness. What happens when you use a direct solve for > subdomain problems (ASM+LU; use smaller subdomains if necessary)? > -------------- next part -------------- An HTML attachment was scrubbed... URL: From balay at mcs.anl.gov Mon Apr 29 11:03:29 2013 From: balay at mcs.anl.gov (Satish Balay) Date: Mon, 29 Apr 2013 11:03:29 -0500 (CDT) Subject: [petsc-users] EXTERNAL: How to run Petsc in Fortran In-Reply-To: References: , <201304291107.r3TB7Avl027935@msw1.awe.co.uk> Message-ID: On Mon, 29 Apr 2013, Sonya Blade wrote: > -------------- Build: Debug in Petsc_Fortran (compiler: GNU GFortran Compiler)--------------- > > gfortran.exe -Wall ?-g ?-Wall ? -IC:\Users\....\Downloads\petsc-3.3-p6\include\finclude -IC:\Users\...\Downloads\petsc-3.3-p6\arch-mswin-c-debug\include ?-c D:\....\PROJECTS\CBFortran\Petsc_Fortran\main.f90 -o obj\Debug\main.o > Rename your sourcefile from 'main.f90' to 'main.F90'. gfortran will preprocess it with the .F90 name. Are you using makefiles or somthing else to build your application? We suggest using PETSc makefiles. Satish From gaetank at gmail.com Mon Apr 29 11:03:57 2013 From: gaetank at gmail.com (Gaetan Kenway) Date: Mon, 29 Apr 2013 12:03:57 -0400 Subject: [petsc-users] Help with ML/BoomerAMG In-Reply-To: References: <87wqrlzc8i.fsf@mcs.anl.gov> <87r4htzbfq.fsf@mcs.anl.gov> <87haipzajp.fsf@mcs.anl.gov> <87d2tdz9vl.fsf@mcs.anl.gov> <877gjlz8xi.fsf@mcs.anl.gov> Message-ID: Hi Again I did try running LU instead of ILU on the sub-domains. (This is the result of a RANS simulation at M=0.85, SA model, frozen turbulence and 96768 cells or matrix dimension or 483840). However, the ILU does not seem to be the cause of the issue. I've attached a plot of the two convergence histories. Both take 426 iterations of full GMRES (restart was set at 600). Column F is ASM(1), ILU(1) Column L is ASM(1), LU. This is right preconditioning and a check of the residual after the solver has completed confirms the actual residual has reduced by 6 orders of magnitude. Any thoughts? Thank you, Gaetan Kenway On Mon, Apr 29, 2013 at 11:19 AM, Gaetan Kenway wrote: > It is possible the information provided by the discrete adjoint here is > somewhat less meaningful, but I need to analyze them for > off-design conditions for optimizations. I am using > a centered discretrization plus scalar JST dissipation. I have not tried > using LU on the subdomains, that is certainly something to try. 
> > Thank you for your help, > > Gaetan > > > On Mon, Apr 29, 2013 at 10:51 AM, Jed Brown wrote: > >> Gaetan Kenway writes: >> >> > It is an SA turbulence model and the discrete adjoint computed exactly >> with >> > AD. Certainly the grids are highly stretched in the BL since the grids >> are >> > resolving the viscous sublayer (y+ < 1) and the Reynolds numbers are on >> > the order of 10's of millions. I tend only to see this behaviour at >> > higher mach numbers when stronger shocks start to appear. For example, >> the >> > adjoint system may solve fine at M=0.80, and fail to converge at >> M=0.85. >> >> How meaningful is the information provided by the discrete adjoint here? >> Limiters and even just upwind discretizations on non-uniform grids lead >> to inconsistent discretizations of the adjoint equations. If the >> adjoint equation is full of numerical artifacts, it can cause the linear >> problem to lose structure, resulting in singular sub-problems, negative >> pivots, and other badness. What happens when you use a direct solve for >> subdomain problems (ASM+LU; use smaller subdomains if necessary)? >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: convergence.pdf Type: application/pdf Size: 24237 bytes Desc: not available URL: From jedbrown at mcs.anl.gov Mon Apr 29 11:30:27 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Mon, 29 Apr 2013 11:30:27 -0500 Subject: [petsc-users] Help with ML/BoomerAMG In-Reply-To: References: <87wqrlzc8i.fsf@mcs.anl.gov> <87r4htzbfq.fsf@mcs.anl.gov> <87haipzajp.fsf@mcs.anl.gov> <87d2tdz9vl.fsf@mcs.anl.gov> <877gjlz8xi.fsf@mcs.anl.gov> Message-ID: <87sj29xpss.fsf@mcs.anl.gov> Gaetan Kenway writes: > Hi Again > > I did try running LU instead of ILU on the sub-domains. (This is the result > of a RANS simulation at M=0.85, SA model, frozen turbulence and 96768 cells > or matrix dimension or 483840). However, the ILU does not seem to be the > cause of the issue. I've attached a plot of the two convergence histories. > Both take 426 iterations of full GMRES (restart was set at 600). Column F > is ASM(1), ILU(1) Column L is ASM(1), LU. This is right preconditioning and > a check of the residual after the solver has completed confirms the actual > residual has reduced by 6 orders of magnitude. I assume that methods other than GMRES are not making much progress for you. Have you tried using FGMRES (or GCR) and nesting an iterative method as the preconditioner, to reduce the size of the GMRES subspace that needs to be stored? It would be interesting to plot the spectrum, and more interesting to look at the "problematic" eigenmodes. What sort of system do you use for visualization? I'm working on a tool that would help with this sort of analysis so I'm interested in enough of your workflow to make it work for you. From sonyablade2010 at hotmail.com Mon Apr 29 13:53:23 2013 From: sonyablade2010 at hotmail.com (Sonya Blade) Date: Mon, 29 Apr 2013 19:53:23 +0100 Subject: [petsc-users] How to run Petsc in Fortran In-Reply-To: References: , <201304291107.r3TB7Avl027935@msw1.awe.co.uk> , Message-ID: > Are you using makefiles or somthing else to build your application? > We suggest using PETSc makefiles. > > Satish I'm bit confused here, AFAIK makefiles are supposed to be used for compilation of Petsc source code not the intended application. Probably I'm wrong here. 
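For reference, "using the PETSc makefiles" for an application usually just means including the fragments PETSc installs and letting them supply the compile and link lines; a minimal sketch for a petsc-3.3 style tree (paths and target name below are placeholders, and PETSC_DIR/PETSC_ARCH can instead come from the environment) is

  PETSC_DIR  = /path/to/petsc-3.3-p6
  PETSC_ARCH = arch-mswin-c-debug

  include ${PETSC_DIR}/conf/variables
  include ${PETSC_DIR}/conf/rules

  main: main.o chkopts
  	-${FLINKER} -o main main.o ${PETSC_LIB}

The included files define FLINKER, the include paths and the full ${PETSC_LIB} link line, which is why hand-maintained IDE link settings that list only libpetsc.a often end up with undefined-reference errors at link time.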
I've used gcc and gfortran in cygwin to compile the Petsc codes without MPI support. There is no error recorded at the error.log and all the examples works like charm in? C language. So I think that there is no discrepancy during the compilation. But when I try to run the examples in gfortran (in Code Block editor) then compiler raises? so many errors such as : Undefined reference to __getreent in function PetscInitialize() Undefined reference to __errno in funciton PetscSynchronizedFGets() etc... Do those errors gives any hints on what is going wrong? From knepley at gmail.com Mon Apr 29 14:21:17 2013 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 29 Apr 2013 14:21:17 -0500 Subject: [petsc-users] How to run Petsc in Fortran In-Reply-To: References: <201304291107.r3TB7Avl027935@msw1.awe.co.uk> Message-ID: On Mon, Apr 29, 2013 at 1:53 PM, Sonya Blade wrote: > > Are you using makefiles or somthing else to build your application? > > We suggest using PETSc makefiles. > > > > Satish > > I'm bit confused here, AFAIK makefiles are supposed to be used for > compilation of > Petsc source code not the intended application. Probably I'm wrong here. > This is wrong. Please read the manual section on this. Then we can better help you. > I've used gcc and gfortran in cygwin to compile the Petsc codes without > MPI support. > There is no error recorded at the error.log and all the examples works > like charm in > C language. So I think that there is no discrepancy during the compilation. > > But when I try to run the examples in gfortran (in Code Block editor) then > compiler raises > so many errors such as : > 1) THIS IS NOT A COMPILE, IT IS A LINK. 2) Editing the output, as you do below: > Undefined reference to __getreent in function PetscInitialize() > Undefined reference to __errno in funciton PetscSynchronizedFGets() > is extremely unhelpful. Please send all input and output. 3) You are not giving the correct libraries to your "code wizard". Please try the makefiles. Matt > etc... > > Do those errors gives any hints on what is going wrong? > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From dharmareddy84 at gmail.com Mon Apr 29 16:05:44 2013 From: dharmareddy84 at gmail.com (Dharmendar Reddy) Date: Mon, 29 Apr 2013 16:05:44 -0500 Subject: [petsc-users] How to run Petsc in Fortran In-Reply-To: References: <201304291107.r3TB7Avl027935@msw1.awe.co.uk> Message-ID: Hello, You may want to take a look at the attachment in this email.. https://lists.mcs.anl.gov/mailman/htdig/petsc-users/2012-August/015026.html It is an example of using petsc from fortran. If you have PETSC_DIR and PETSC_ARCH defined as env variables...you can run the example. If the make file generated by code blocks does some thing similar your code should work. See if you can get this simple example working on code block. On Mon, Apr 29, 2013 at 1:53 PM, Sonya Blade wrote: > > Are you using makefiles or somthing else to build your application? > > We suggest using PETSc makefiles. > > > > Satish > > I'm bit confused here, AFAIK makefiles are supposed to be used for > compilation of > Petsc source code not the intended application. Probably I'm wrong here. > > I've used gcc and gfortran in cygwin to compile the Petsc codes without > MPI support. 
> There is no error recorded at the error.log and all the examples works > like charm in > C language. So I think that there is no discrepancy during the compilation. > > But when I try to run the examples in gfortran (in Code Block editor) then > compiler raises > so many errors such as : > > Undefined reference to __getreent in function PetscInitialize() > Undefined reference to __errno in funciton PetscSynchronizedFGets() > > etc... > > Do those errors gives any hints on what is going wrong? > -- ----------------------------------------------------- Dharmendar Reddy Palle Graduate Student Microelectronics Research center, University of Texas at Austin, 10100 Burnet Road, Bldg. 160 MER 2.608F, TX 78758-4445 e-mail: dharmareddy84 at gmail.com Phone: +1-512-350-9082 United States of America. Homepage: https://webspace.utexas.edu/~dpr342 -------------- next part -------------- An HTML attachment was scrubbed... URL: From dharmareddy84 at gmail.com Mon Apr 29 16:26:19 2013 From: dharmareddy84 at gmail.com (Dharmendar Reddy) Date: Mon, 29 Apr 2013 16:26:19 -0500 Subject: [petsc-users] IS map for sub mesh In-Reply-To: References: Message-ID: Hello Matthew, Did you get time to look into this request. A single call would be nicer....some thing like.. DMPlexGetSubpointStratumIS(DM dm, depth, IS) ? That ways no need to provide Fortran bindings to DMLabel and related functions..... I could create my wrapper code...can you tell me how to do this ? get The IS using DMPlexCreateSubpointIS create a new IS but with size IS for a given depth. I am thinking clone the original IS and resize ...is there a functionality for resizing ? or chopping the IS from startId to endId to get new IS Thanks Reddy On Thu, Apr 25, 2013 at 3:57 PM, Matthew Knepley wrote: > On Thu, Apr 25, 2013 at 4:48 PM, Dharmendar Reddy > wrote: > >> >> >> >> On Thu, Apr 25, 2013 at 3:31 PM, Matthew Knepley wrote: >> >>> On Thu, Apr 25, 2013 at 3:35 PM, Dharmendar Reddy < >>> dharmareddy84 at gmail.com> wrote: >>> >>>> Hello, >>>> I need to access the map from points in subdm to points in dm. Can you >>>> please provide fortran binding for >>>> >>>> DMPlexCreateSubpointIS(DM dm, IS *subpointIS) >>>> >>>> Pushed to next. >>> >>> >>>> Also, i was thinking it may be of use to have interface like this.. >>>> >>>> DMPlexCreateSubpointIS(DM dm, PetscInt pointDimInSubdm, IS *subpointIS) >>>> >>>> this way i can have map from say (dim)-cells in subdm to corresponding >>>> (dim)-cells in dm. >>>> >>> >>> The intention here is to use DMPlexGetSubpointMap(). >>> >>> So, i should do the calls below ? >> >> DMPlexGetSubpointMap(DM dm, DMLabel subpointMap) >> >> DMLabelGetStratumIS(subpointMap,depth,IS) >> >> >> I have had trouble using DMLabel in my fortran code earlier. >> I can give it a try again, Is there a fortran binding for above functions ? >> >> > Hmm, I have not tested it. I will put it in an example. > > Thanks, > > Matt > > >> Thanks >> Reddy >> >> >> >> >> >>> Matt >>> >>> >>>> >>>> thanks >>>> Reddy >>>> >>>> -- >>>> ----------------------------------------------------- >>>> Dharmendar Reddy Palle >>>> Graduate Student >>>> Microelectronics Research center, >>>> University of Texas at Austin, >>>> 10100 Burnet Road, Bldg. 160 >>>> MER 2.608F, TX 78758-4445 >>>> e-mail: dharmareddy84 at gmail.com >>>> Phone: +1-512-350-9082 >>>> United States of America. 
>>>> Homepage: https://webspace.utexas.edu/~dpr342 >>>> >>> >>> >>> >>> -- >>> What most experimenters take for granted before they begin their >>> experiments is infinitely more interesting than any results to which their >>> experiments lead. >>> -- Norbert Wiener >>> >> >> >> >> -- >> ----------------------------------------------------- >> Dharmendar Reddy Palle >> Graduate Student >> Microelectronics Research center, >> University of Texas at Austin, >> 10100 Burnet Road, Bldg. 160 >> MER 2.608F, TX 78758-4445 >> e-mail: dharmareddy84 at gmail.com >> Phone: +1-512-350-9082 >> United States of America. >> Homepage: https://webspace.utexas.edu/~dpr342 >> > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > -- ----------------------------------------------------- Dharmendar Reddy Palle Graduate Student Microelectronics Research center, University of Texas at Austin, 10100 Burnet Road, Bldg. 160 MER 2.608F, TX 78758-4445 e-mail: dharmareddy84 at gmail.com Phone: +1-512-350-9082 United States of America. Homepage: https://webspace.utexas.edu/~dpr342 -------------- next part -------------- An HTML attachment was scrubbed... URL: From gaetank at gmail.com Mon Apr 29 21:07:22 2013 From: gaetank at gmail.com (Gaetan Kenway) Date: Mon, 29 Apr 2013 22:07:22 -0400 Subject: [petsc-users] Help with ML/BoomerAMG In-Reply-To: <87zjwhw00n.fsf@mcs.anl.gov> References: <87wqrlzc8i.fsf@mcs.anl.gov> <87r4htzbfq.fsf@mcs.anl.gov> <87haipzajp.fsf@mcs.anl.gov> <87d2tdz9vl.fsf@mcs.anl.gov> <877gjlz8xi.fsf@mcs.anl.gov> <87sj29xpss.fsf@mcs.anl.gov> <87zjwhw00n.fsf@mcs.anl.gov> Message-ID: That makes sense. Is there a reasonably easy way of doing that in PETSc currently for reasonably large systems? Gaetan On Mon, Apr 29, 2013 at 4:32 PM, Jed Brown wrote: > Gaetan Kenway writes: > > > I would be very interested in looking at the problematic eigenmodes as > > well. On my end I use Tecplot for all my visualization. What sort of > > visualization technique are you thinking about? Is it the KSP subspace > > vectors you want to look at? > > No, it would be an eigensolve for either outliers or eigenvalues very > close to zero. The cost to find the vector is like several solves, but > it would tell us exactly what sort of functions are poorly approximated > by the preconditioner. Then we would think about how we can change > algorithms to make the preconditioner correct those functions better. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Mon Apr 29 21:12:29 2013 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 29 Apr 2013 21:12:29 -0500 Subject: [petsc-users] Help with ML/BoomerAMG In-Reply-To: References: <87wqrlzc8i.fsf@mcs.anl.gov> <87r4htzbfq.fsf@mcs.anl.gov> <87haipzajp.fsf@mcs.anl.gov> <87d2tdz9vl.fsf@mcs.anl.gov> <877gjlz8xi.fsf@mcs.anl.gov> <87sj29xpss.fsf@mcs.anl.gov> <87zjwhw00n.fsf@mcs.anl.gov> Message-ID: On Mon, Apr 29, 2013 at 9:07 PM, Gaetan Kenway wrote: > That makes sense. Is there a reasonably easy way of doing that in PETSc > currently for reasonably large systems? Jed is working on it :) Matt > Gaetan > > On Mon, Apr 29, 2013 at 4:32 PM, Jed Brown wrote: > >> Gaetan Kenway writes: >> >> > I would be very interested in looking at the problematic eigenmodes as >> > well. On my end I use Tecplot for all my visualization. What sort of >> > visualization technique are you thinking about? 
Is it the KSP subspace >> > vectors you want to look at? >> >> No, it would be an eigensolve for either outliers or eigenvalues very >> close to zero. The cost to find the vector is like several solves, but >> it would tell us exactly what sort of functions are poorly approximated >> by the preconditioner. Then we would think about how we can change >> algorithms to make the preconditioner correct those functions better. >> > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From jedbrown at mcs.anl.gov Mon Apr 29 21:16:28 2013 From: jedbrown at mcs.anl.gov (Jed Brown) Date: Mon, 29 Apr 2013 21:16:28 -0500 Subject: [petsc-users] Help with ML/BoomerAMG In-Reply-To: References: <87wqrlzc8i.fsf@mcs.anl.gov> <87r4htzbfq.fsf@mcs.anl.gov> <87haipzajp.fsf@mcs.anl.gov> <87d2tdz9vl.fsf@mcs.anl.gov> <877gjlz8xi.fsf@mcs.anl.gov> <87sj29xpss.fsf@mcs.anl.gov> <87zjwhw00n.fsf@mcs.anl.gov> Message-ID: <87a9ogwyo3.fsf@mcs.anl.gov> Gaetan Kenway writes: > That makes sense. Is there a reasonably easy way of doing that in PETSc > currently for reasonably large systems? You can use SLEPc to compute eigenvectors of the preconditioned operator, then plot them using your own workflow. I'm working on a plugin that will make it easier and more interactive. From gaetank at gmail.com Mon Apr 29 21:25:11 2013 From: gaetank at gmail.com (Gaetan Kenway) Date: Mon, 29 Apr 2013 22:25:11 -0400 Subject: [petsc-users] Help with ML/BoomerAMG In-Reply-To: References: <87wqrlzc8i.fsf@mcs.anl.gov> <87r4htzbfq.fsf@mcs.anl.gov> <87haipzajp.fsf@mcs.anl.gov> <87d2tdz9vl.fsf@mcs.anl.gov> <877gjlz8xi.fsf@mcs.anl.gov> <87sj29xpss.fsf@mcs.anl.gov> <87zjwhw00n.fsf@mcs.anl.gov> Message-ID: To Karthik Yes, GMRES is my nominal default. For systems that are easier to solve, TFQMR also seems to be competitive with GMRES. It is the high mach RANS problems with strong shocks shock induced flow separation that seem to give me the most trouble. Gaetan On Mon, Apr 29, 2013 at 10:07 PM, Gaetan Kenway wrote: > That makes sense. Is there a reasonably easy way of doing that in PETSc > currently for reasonably large systems? > > Gaetan > > > On Mon, Apr 29, 2013 at 4:32 PM, Jed Brown wrote: > >> Gaetan Kenway writes: >> >> > I would be very interested in looking at the problematic eigenmodes as >> > well. On my end I use Tecplot for all my visualization. What sort of >> > visualization technique are you thinking about? Is it the KSP subspace >> > vectors you want to look at? >> >> No, it would be an eigensolve for either outliers or eigenvalues very >> close to zero. The cost to find the vector is like several solves, but >> it would tell us exactly what sort of functions are poorly approximated >> by the preconditioner. Then we would think about how we can change >> algorithms to make the preconditioner correct those functions better. >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dkarthik at stanford.edu Mon Apr 29 21:37:34 2013 From: dkarthik at stanford.edu (Karthik Duraisamy) Date: Mon, 29 Apr 2013 19:37:34 -0700 (PDT) Subject: [petsc-users] Help with ML/BoomerAMG In-Reply-To: Message-ID: <2084446540.22656807.1367289454469.JavaMail.root@stanford.edu> Thanks. I was asking because I use PETSc+GMRES for RANS of turbulence and combustion (on unstructured grids). 
The most difficult problems don't get off the ground without a local LU preconditioner, but otherwise ASM+ILU or ILU(k) works OK for me. - Karthik ----- Original Message ----- From: "Gaetan Kenway" To: "Jed Brown" Cc: "petsc-users" Sent: Monday, April 29, 2013 7:25:11 PM Subject: Re: [petsc-users] Help with ML/BoomerAMG To Karthik Yes, GMRES is my nominal default. For systems that are easier to solve, TFQMR also seems to be competitive with GMRES. It is the high mach RANS problems with strong shocks shock induced flow separation that seem to give me the most trouble. Gaetan On Mon, Apr 29, 2013 at 10:07 PM, Gaetan Kenway < gaetank at gmail.com > wrote: That makes sense. Is there a reasonably easy way of doing that in PETSc currently for reasonably large systems? Gaetan On Mon, Apr 29, 2013 at 4:32 PM, Jed Brown < jedbrown at mcs.anl.gov > wrote: Gaetan Kenway < gaetank at gmail.com > writes: > I would be very interested in looking at the problematic eigenmodes as > well. On my end I use Tecplot for all my visualization. What sort of > visualization technique are you thinking about? Is it the KSP subspace > vectors you want to look at? No, it would be an eigensolve for either outliers or eigenvalues very close to zero. The cost to find the vector is like several solves, but it would tell us exactly what sort of functions are poorly approximated by the preconditioner. Then we would think about how we can change algorithms to make the preconditioner correct those functions better. From dharmareddy84 at gmail.com Tue Apr 30 02:57:08 2013 From: dharmareddy84 at gmail.com (Dharmendar Reddy) Date: Tue, 30 Apr 2013 02:57:08 -0500 Subject: [petsc-users] Plex Submesh Message-ID: Hello, I have a question about how the node ordering in a Plex submesh is created. I create a 1D mesh along the y-direction from a two-dimensional square mesh in the x-y plane. In the original mesh the nodes are indexed such that the count increases in the y-direction first and then in the x-direction. Consider a 5 by 5 square mesh with node coordinates [0.0, 1.0, 2.0, 3.0, 4.0] in both x and y; the 1D subdm then has 5 nodes and 4 cells. For the submesh at x = 1.0 I see the following cell support (vertex ids are numbered with respect to the subdm; the expected column is what I want, and it is also what I get for x = 0.0):

  cellId   vertexIds   expected vertexIds
    0        5 4            4 5
    1        6 5            5 6
    2        7 6            6 7
    3        8 7            7 8

Of course this is consistent with the node ordering in the original dm, but if I use the coordinates of the nodes I get a negative Jacobian determinant. I am not sure if this makes sense: should not (or can) the nodes in the subdm be reoriented such that all cells have positive orientation? Since I am creating the subdm using an index set of selected points in the original dm, I can see that the connectivity is preserved. Should I create a new dm using DMPlexCreateFromCellList after fixing the orientations? But then I need maps from this dm to the original dm to access the dof values. Thanks Reddy -- ----------------------------------------------------- Dharmendar Reddy Palle Graduate Student Microelectronics Research center, University of Texas at Austin, 10100 Burnet Road, Bldg. 160 MER 2.608F, TX 78758-4445 e-mail: dharmareddy84 at gmail.com Phone: +1-512-350-9082 United States of America. Homepage: https://webspace.utexas.edu/~dpr342 -------------- next part -------------- An HTML attachment was scrubbed...
URL: From parsani.matteo at gmail.com Tue Apr 30 07:21:31 2013 From: parsani.matteo at gmail.com (Matteo Parsani) Date: Tue, 30 Apr 2013 08:21:31 -0400 Subject: [petsc-users] unexpected ordering when VecSetValues set multiples values In-Reply-To: References: <6C9A6BAE-E7B8-4498-8D77-DD23C81334EC@mcs.anl.gov> Message-ID: Hello Barry, it does not work also if m is not so large. Thanks again. On Fri, Apr 26, 2013 at 3:23 PM, Barry Smith wrote: > > Shouldn't matter that it is called from Fortran we do it all the time. > > Does it work if the final m is not very large? > > You may need to run in the debugger and follow the values through the > code to see why they don't get to where they belong. > > > Barry > > Could also be a buggy fortran compiler. > > > On Apr 26, 2013, at 2:06 PM, Matteo Parsani > wrote: > > > Hello Barry, > > sorry I modified few things just before to write the mail. > > The correct loop with the correct name of the variables is the following > > > > ! Number of DOFs owned by this process > > len_local = (elem_high-elem_low+1)*nodesperelem*nequations > > > > ! Allocate memory for x_vec_local and index_list > > allocate(x_vec_local(len_local)) > > allocate(index_list(len_local)) > > > > > > m = 1 > > ! Loop over all elements > > do ielem = elem_low, elem_high > > ! Loop over all nodes in the element > > do inode = 1, nodesperelem > > !Loop over all equations > > do ieq = 1, nequations > > ! Add element to x_vec_local > > x_vec_local(m) = ug(ieq,inode,ielem) > > ! Add element to index list > > index_list(m) = > (elem_low-1)*nodesperelem*nequations+m-1 > > ! Update m index > > m = m+1 > > end do > > end do > > end do > > > > ! HERE I HAVE PRINTED x_vec_local, ug and index_list > > > > ! Set values in the portion of the vector owned by the process > > call > VecSetValues(x_vec_in,m-1,index_list,x_vec_local,INSERT_VALUES,& > > & ierr_local) > > > > ! Assemble initial guess > > call VecAssemblyBegin(x_vec_in,ierr_local) > > call VecAssemblyEnd(x_vec_in,ierr_local) > > > > > > > > I have printed the values and the indices I want to pass to > VecSetValues() and they are correct. > > > > I also printed the values after VecSetValues() has been called and they > are wrong. > > > > The attachment shows that. > > > > > > Could it be a problem of VecSetValues() + F90 when more than 1 elements > is set? > > > > I order to debug my the code I am running just with 1 processor. Thus > the process owns all the DOFs. > > > > Thank you. > > > > > > > > > > > > > > > > > > > > > > > > On Fri, Apr 26, 2013 at 2:39 PM, Barry Smith wrote: > > > > On Apr 26, 2013, at 10:20 AM, Matteo Parsani > wrote: > > > > > Hello, > > > I have some problem when I try to set multiple values to a PETSc > vector that I will use later on with SNES. I am using Fortran 90. > > > Here the problem and two fixes that however are not so good for > performances reasons. The code is very simple. > > > > > > Standard approach that does not work correctly: (I am probably doing > something wrong) > > > > > > m = 1 > > > ! Loop over all elements > > > do ielem = elem_low, elem_high > > > ! Loop over all nodes in the element > > > do inode = 1, nodesperelem > > > !Loop over all equations > > > do ieq = 1, nequations > > > ! Add element to x_vec_local > > > x_vec_local(m) = ug(ieq,inode,ielem) > > > ! Add element to index list > > > ind(m) = (elem_low-1)*nodesperelem*nequations+m-1 > > > ! Update m index > > > m = m+1 > > > end do > > > end do > > > end do > > > > > > ! 
Set values in the portion of the vector owned by the process > > > call > VecSetValues(x_vec_in,len_local,index_list,x_vec_local,INSERT_VALUES,& > > > > What is len_local and index_list? They do not appear in the loop > above. Shouldn't you be passing m-1 for the length and ind for the indices? > > > > I would first print out the all the values in your input to > VecSetValues() and make sure they are correct. > > > > Barry > > > > > & ierr_local) > > > > > > ! Assemble initial guess > > > call VecAssemblyBegin(x_vec_in,ierr_local) > > > call VecAssemblyEnd(x_vec_in,ierr_local) > > > > > > Then I print my expected values and the values contained in the PETSc > vector to a file. See attachment. I am running in serial for the moment BUT > strangely if you look at the file I have attached the first 79 DOFs values > have a wrong ordering and the remaining 80 are zero. > > > > > > > > > 1st approach: set just one value at the time inside the loop. > > > m = 1 > > > ! Loop over all elements > > > do ielem = elem_low, elem_high > > > ! Loop over all nodes in the element > > > do inode = 1, nodesperelem > > > !Loop over all equations > > > do ieq = 1, nequations > > > ! Add element to x_vec_local > > > value = ug(ieq,inode,ielem) > > > ! Add element to index list > > > ind = (elem_low-1)*nodesperelem*nequations+m-1 > > > call > VecSetValues(x_vec_in,1,ind,value,INSERT_VALUES,& > > > & ierr_local) > > > ! Update m index > > > m = m+1 > > > end do > > > end do > > > end do > > > > > > > > > This works fine. As you can see I am using the same expression used in > the previous loop to compute the index of the element that I have to add in > the x_vec_in, i.e. > > > ind = (elem_low-1)*nodesperelem*nequations+m-1 > > > > > > Thus I cannot see which is the problem. > > > > > > 2nd approach: get the pointer to the local part of the global vector > and use it to set the values in the global vector > > > > > > m = 1 > > > ! Loop over all elements > > > do ielem = elem_low, elem_high > > > ! Loop over all nodes in the element > > > do inode = 1, nodesperelem > > > !Loop over all equations > > > do ieq = 1, nequations > > > ! Add element to x_vec_local > > > tmp(m) = ug(ieq,inode,ielem) > > > ! Update m index > > > m = m+1 > > > end do > > > end do > > > end do > > > > > > > > > This works fine too. > > > > > > > > > Jut to be complete. I use the following two approaches to view the > vector: > > > > > > call VecView(x_vec_in,PETSC_VIEWER_STDOUT_WORLD,ierr_local) > > > > > > > > > and > > > > > > call VecGetArrayF90(x_vec_in,tmp,ierr_local) > > > > > > > > > m = 1 > > > ! Loop over all elements > > > do ielem = elem_low, elem_high > > > ! Loop over all nodes in the element > > > do inode = 1, nodesperelem > > > !Loop over all equations > > > do ieq = 1, nequations > > > write(*,*) m,index_list(m),x_vec_local(m),tmp(m) > > > ! Update m index > > > m = m+1 > > > end do > > > end do > > > end do > > > > > > > > > Thank you. > > > > > > > > > -- > > > Matteo > > > > > > > > > > > > > -- > > Matteo > > > > -- Matteo -------------- next part -------------- An HTML attachment was scrubbed... 
From bsmith at mcs.anl.gov Tue Apr 30 07:35:14 2013 From: bsmith at mcs.anl.gov (Barry Smith) Date: Tue, 30 Apr 2013 07:35:14 -0500 Subject: [petsc-users] unexpected ordering when VecSetValues set multiples values In-Reply-To: References: <6C9A6BAE-E7B8-4498-8D77-DD23C81334EC@mcs.anl.gov> Message-ID: <039BF5D1-9DBB-4201-916F-4F981FC25E50@mcs.anl.gov> On Apr 30, 2013, at 7:21 AM, Matteo Parsani wrote: > Hello Barry, > it does not work also if m is not so large. Then you need to run in the debugger. You can use the command line option -start_in_debugger noxterm. In the debugger, put a break point in vecsetvalues_ (yes, lower case followed by an underscore). When it gets to that point, print the size of the array and the values in the integer and double arrays to see if they match. Then step into the VecSetValues() function that is called by vecsetvalues_() and look at the array values again; continue to step along and it will go into VecSetValues_Seq() and start actually putting the values into the Vec array. Barry > > Thanks again. > > > > > > > On Fri, Apr 26, 2013 at 3:23 PM, Barry Smith wrote: > > Shouldn't matter that it is called from Fortran we do it all the time. > > Does it work if the final m is not very large? > > You may need to run in the debugger and follow the values through the code to see why they don't get to where they belong. > > > Barry > > Could also be a buggy fortran compiler. > > > On Apr 26, 2013, at 2:06 PM, Matteo Parsani wrote: > > > Hello Barry, > > sorry I modified few things just before to write the mail. > > The correct loop with the correct name of the variables is the following > > > > ! Number of DOFs owned by this process > > len_local = (elem_high-elem_low+1)*nodesperelem*nequations > > > > ! Allocate memory for x_vec_local and index_list > > allocate(x_vec_local(len_local)) > > allocate(index_list(len_local)) > > > > > > m = 1 > > ! Loop over all elements > > do ielem = elem_low, elem_high > > ! Loop over all nodes in the element > > do inode = 1, nodesperelem > > !Loop over all equations > > do ieq = 1, nequations > > ! Add element to x_vec_local > > x_vec_local(m) = ug(ieq,inode,ielem) > > ! Add element to index list > > index_list(m) = (elem_low-1)*nodesperelem*nequations+m-1 > > ! Update m index > > m = m+1 > > end do > > end do > > end do > > > > ! HERE I HAVE PRINTED x_vec_local, ug and index_list > > > > ! Set values in the portion of the vector owned by the process > > call VecSetValues(x_vec_in,m-1,index_list,x_vec_local,INSERT_VALUES,& > > & ierr_local) > > > > ! Assemble initial guess > > call VecAssemblyBegin(x_vec_in,ierr_local) > > call VecAssemblyEnd(x_vec_in,ierr_local) > > > > > > > > I have printed the values and the indices I want to pass to VecSetValues() and they are correct. > > > > I also printed the values after VecSetValues() has been called and they are wrong. > > > > The attachment shows that. > > > > > > Could it be a problem of VecSetValues() + F90 when more than 1 elements is set? > > > > I order to debug my the code I am running just with 1 processor. Thus the process owns all the DOFs. > > > > Thank you. > > > > > > > > > > > > > > > > > > > > > > > > On Fri, Apr 26, 2013 at 2:39 PM, Barry Smith wrote: > > > > On Apr 26, 2013, at 10:20 AM, Matteo Parsani wrote: > > > > > Hello, > > > I have some problem when I try to set multiple values to a PETSc vector that I will use later on with SNES. I am using Fortran 90. > > > Here the problem and two fixes that however are not so good for performances reasons. 
The code is very simple. > > > > > > Standard approach that does not work correctly: (I am probably doing something wrong) > > > > > > m = 1 > > > ! Loop over all elements > > > do ielem = elem_low, elem_high > > > ! Loop over all nodes in the element > > > do inode = 1, nodesperelem > > > !Loop over all equations > > > do ieq = 1, nequations > > > ! Add element to x_vec_local > > > x_vec_local(m) = ug(ieq,inode,ielem) > > > ! Add element to index list > > > ind(m) = (elem_low-1)*nodesperelem*nequations+m-1 > > > ! Update m index > > > m = m+1 > > > end do > > > end do > > > end do > > > > > > ! Set values in the portion of the vector owned by the process > > > call VecSetValues(x_vec_in,len_local,index_list,x_vec_local,INSERT_VALUES,& > > > > What is len_local and index_list? They do not appear in the loop above. Shouldn't you be passing m-1 for the length and ind for the indices? > > > > I would first print out the all the values in your input to VecSetValues() and make sure they are correct. > > > > Barry > > > > > & ierr_local) > > > > > > ! Assemble initial guess > > > call VecAssemblyBegin(x_vec_in,ierr_local) > > > call VecAssemblyEnd(x_vec_in,ierr_local) > > > > > > Then I print my expected values and the values contained in the PETSc vector to a file. See attachment. I am running in serial for the moment BUT strangely if you look at the file I have attached the first 79 DOFs values have a wrong ordering and the remaining 80 are zero. > > > > > > > > > 1st approach: set just one value at the time inside the loop. > > > m = 1 > > > ! Loop over all elements > > > do ielem = elem_low, elem_high > > > ! Loop over all nodes in the element > > > do inode = 1, nodesperelem > > > !Loop over all equations > > > do ieq = 1, nequations > > > ! Add element to x_vec_local > > > value = ug(ieq,inode,ielem) > > > ! Add element to index list > > > ind = (elem_low-1)*nodesperelem*nequations+m-1 > > > call VecSetValues(x_vec_in,1,ind,value,INSERT_VALUES,& > > > & ierr_local) > > > ! Update m index > > > m = m+1 > > > end do > > > end do > > > end do > > > > > > > > > This works fine. As you can see I am using the same expression used in the previous loop to compute the index of the element that I have to add in the x_vec_in, i.e. > > > ind = (elem_low-1)*nodesperelem*nequations+m-1 > > > > > > Thus I cannot see which is the problem. > > > > > > 2nd approach: get the pointer to the local part of the global vector and use it to set the values in the global vector > > > > > > m = 1 > > > ! Loop over all elements > > > do ielem = elem_low, elem_high > > > ! Loop over all nodes in the element > > > do inode = 1, nodesperelem > > > !Loop over all equations > > > do ieq = 1, nequations > > > ! Add element to x_vec_local > > > tmp(m) = ug(ieq,inode,ielem) > > > ! Update m index > > > m = m+1 > > > end do > > > end do > > > end do > > > > > > > > > This works fine too. > > > > > > > > > Jut to be complete. I use the following two approaches to view the vector: > > > > > > call VecView(x_vec_in,PETSC_VIEWER_STDOUT_WORLD,ierr_local) > > > > > > > > > and > > > > > > call VecGetArrayF90(x_vec_in,tmp,ierr_local) > > > > > > > > > m = 1 > > > ! Loop over all elements > > > do ielem = elem_low, elem_high > > > ! Loop over all nodes in the element > > > do inode = 1, nodesperelem > > > !Loop over all equations > > > do ieq = 1, nequations > > > write(*,*) m,index_list(m),x_vec_local(m),tmp(m) > > > ! Update m index > > > m = m+1 > > > end do > > > end do > > > end do > > > > > > > > > Thank you. 
> > > > > > > > > -- > > > Matteo > > > > > > > > > > > > > -- > > Matteo > > > > > > > -- > Matteo From knepley at gmail.com Tue Apr 30 08:12:54 2013 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 30 Apr 2013 08:12:54 -0500 Subject: [petsc-users] Plex Submesh In-Reply-To: References: Message-ID: On Tue, Apr 30, 2013 at 2:57 AM, Dharmendar Reddy wrote: > Hello, > I have a doubt about how the node ordering in plex submesh is > created. > > I create a oneD mesh along y-direction from a two dimensional square mesh > in x-y direction. In the original mesh the nodes are indexed such that the > count increase in y-direction first and then in x-direction. > > Consider a 5 by 5 square mesh > > Now the oneD subdm has 5 nodes and 4 cells > > if the coordinates of the nodes is [0,1.0,2.0,3.0,4.0] in x and y > drection > > I see that cells in the subdm for x=1.0 have support: > Vertex id here is the numbering with respect to subdm. > > cellId: vertexId ExpectedVertexId (or atleast i want this also > this will be the > case > for x = 0.0) > > 0 5 4 4 5 > 1 6 5 5 6 > 2 7 6 6 7 > 3 8 7 7 8 > > Of course this is consistent with the node ordering in original dm but i > i use the coordinates of the nodes i get detJacobain negative. > > I am not sure if this make sense .... > > Should not (or can ) the nodes in the subdm be reoriented such that they > all have positive orientation ? > I am working all of this out now, which is why I have been slow in replying. I have a code that uses this call in every possible way, in all dimensions. I should be done this week. You are correct that the orientation is likely off. That is what I am checking. Thanks, Matt > Since i am creating the subdm using indexset of a select points in > original dm, i can see that the connectivity is preserved. > > Should i create a new dm using DMcreatefromCellList after fixing the > orientations ? but then i need maps from this dm to orignal dm to access > the dof values. > > Thanks > Reddy > -- > ----------------------------------------------------- > Dharmendar Reddy Palle > Graduate Student > Microelectronics Research center, > University of Texas at Austin, > 10100 Burnet Road, Bldg. 160 > MER 2.608F, TX 78758-4445 > e-mail: dharmareddy84 at gmail.com > Phone: +1-512-350-9082 > United States of America. > Homepage: https://webspace.utexas.edu/~dpr342 > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From dharmareddy84 at gmail.com Tue Apr 30 16:20:52 2013 From: dharmareddy84 at gmail.com (Dharmendar Reddy) Date: Tue, 30 Apr 2013 16:20:52 -0500 Subject: [petsc-users] Plex Submesh In-Reply-To: References: Message-ID: On Tue, Apr 30, 2013 at 8:12 AM, Matthew Knepley wrote: > On Tue, Apr 30, 2013 at 2:57 AM, Dharmendar Reddy > wrote: > >> Hello, >> I have a doubt about how the node ordering in plex submesh is >> created. >> >> I create a oneD mesh along y-direction from a two dimensional square mesh >> in x-y direction. In the original mesh the nodes are indexed such that the >> count increase in y-direction first and then in x-direction. >> >> Consider a 5 by 5 square mesh >> >> Now the oneD subdm has 5 nodes and 4 cells >> >> if the coordinates of the nodes is [0,1.0,2.0,3.0,4.0] in x and y >> drection >> >> I see that cells in the subdm for x=1.0 have support: >> Vertex id here is the numbering with respect to subdm. 
>> >> cellId: vertexId ExpectedVertexId (or atleast i want this also >> this will be the >> case >> for x = 0.0) >> >> 0 5 4 4 5 >> 1 6 5 5 6 >> 2 7 6 6 7 >> 3 8 7 7 8 >> >> Of course this is consistent with the node ordering in original dm but i >> i use the coordinates of the nodes i get detJacobain negative. >> >> I am not sure if this make sense .... >> >> Should not (or can ) the nodes in the subdm be reoriented such that they >> all have positive orientation ? >> > > I am working all of this out now, which is why I have been slow in > replying. I have a code that uses > this call in every possible way, in all dimensions. I should be done this > week. > > You are correct that the orientation is likely off. That is what I am > checking. > Thanks... I am eagerly waiting for the updates... Hopefully I will graduate soon :-) > > Thanks, > > Matt > > >> Since i am creating the subdm using indexset of a select points in >> original dm, i can see that the connectivity is preserved. >> >> Should i create a new dm using DMcreatefromCellList after fixing the >> orientations ? but then i need maps from this dm to orignal dm to access >> the dof values. >> >> Thanks >> Reddy >> -- >> ----------------------------------------------------- >> Dharmendar Reddy Palle >> Graduate Student >> Microelectronics Research center, >> University of Texas at Austin, >> 10100 Burnet Road, Bldg. 160 >> MER 2.608F, TX 78758-4445 >> e-mail: dharmareddy84 at gmail.com >> Phone: +1-512-350-9082 >> United States of America. >> Homepage: https://webspace.utexas.edu/~dpr342 >> > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > -- ----------------------------------------------------- Dharmendar Reddy Palle Graduate Student Microelectronics Research center, University of Texas at Austin, 10100 Burnet Road, Bldg. 160 MER 2.608F, TX 78758-4445 e-mail: dharmareddy84 at gmail.com Phone: +1-512-350-9082 United States of America. Homepage: https://webspace.utexas.edu/~dpr342 -------------- next part -------------- An HTML attachment was scrubbed... URL:
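On the orientation question above: until the fix Matt mentions is in place, one interim approach along the lines Dharmendar suggests (fix the orientations by hand before building a new dm with DMcreatefromCellList) is to flip any 1D cell whose signed length is negative before handing the cell list over. The sketch below is purely illustrative; the names fix_1d_orientation, cellvert and ycoord are invented, and this is plain Fortran on a cell-to-vertex list, not DMPlex API. For a linear 1D element the Jacobian determinant is just y2 - y1, so a negative value means the two vertices of that cell should be swapped.

! Hypothetical helper, not part of PETSc: re-orient 1D cells whose
! signed length (the 1D detJ) is negative by swapping their two vertices.
program fix_orientation_sketch
  implicit none
  integer :: cells(2,4), c
  real(8) :: y(5)

  ! Illustrative data: 5 vertices on a line, 4 cells stored high-to-low
  ! so that every cell initially has negative orientation.
  y     = (/ 0.0d0, 1.0d0, 2.0d0, 3.0d0, 4.0d0 /)
  cells = reshape((/ 2,1, 3,2, 4,3, 5,4 /), (/ 2,4 /))

  call fix_1d_orientation(4, cells, y)

  do c = 1, 4
    print '(a,i0,a,2i3)', 'cell ', c, ' vertices:', cells(:,c)
  end do

contains

  subroutine fix_1d_orientation(ncell, cellvert, ycoord)
    integer, intent(in)    :: ncell
    integer, intent(inout) :: cellvert(2,ncell)   ! 1-based vertex ids
    real(8), intent(in)    :: ycoord(:)           ! vertex coordinates
    integer :: ic, tmp
    real(8) :: detj
    do ic = 1, ncell
      ! For a linear 1D element detJ is the signed cell length.
      detj = ycoord(cellvert(2,ic)) - ycoord(cellvert(1,ic))
      if (detj < 0.0d0) then
        tmp            = cellvert(1,ic)
        cellvert(1,ic) = cellvert(2,ic)
        cellvert(2,ic) = tmp
      end if
    end do
  end subroutine fix_1d_orientation

end program fix_orientation_sketch

As noted in the thread, a map from the re-oriented cell list back to the points of the original dm would still be needed to access the dof values.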