From hbuesing at eonerc.rwth-aachen.de  Fri Dec  1 03:24:09 2017
From: hbuesing at eonerc.rwth-aachen.de (Buesing, Henrik)
Date: Fri, 1 Dec 2017 09:24:09 +0000
Subject: [petsc-users] Newton methods that converge all the time
In-Reply-To: <209cb0a6-b6ac-a264-1b6a-a15e98949438@auckland.ac.nz>
References: <209cb0a6-b6ac-a264-1b6a-a15e98949438@auckland.ac.nz>
Message-ID: 

>
> > Please describe in some detail how you are handling phase change.
> > If you have if () tests of any sort in your FormFunction() or
> > FormJacobian() this can kill Newton's method. If you are using
> > "variable switching" this WILL kill Newton's method. Are you monkeying
> > with phase definitions in TSPostStep or with
> > SNESLineSearchSetPostCheck()? This will also kill Newton's method.
>
> I'm doing variable switching (in a geothermal flow application) with
> Newton's method (in SNESLineSearchSetPostCheck()) and it generally works
> fine.
>
> For pure water (no other components present) my variables are pressure
> and temperature for single-phase (liquid or vapour) and pressure and
> vapour saturation for two-phase.
>
> You have to be pretty careful how you do the switching though.

The beauty of the pressure-enthalpy formulation is that you do not need to initialize the saturations with a small epsilon when you go into two-phase. You can directly compute a correct enthalpy (and from it a saturation) and "jump" into the two-phase region.
Still, nothing says that you are not jumping in and out of the two-phase region, so there are examples for which either method works best.

I am trying to simulate a supercritical reservoir (T>450 °C, p>35 MPa) from the surface down to 3.5 km depth. (It is in Italy, so the geothermal gradient is large.)

A steam region forms, and either I get oscillations in enthalpy (saturations) or small time steps kill me. I want to simulate a quasi-steady state after 1 million years.

Henrik

>
> - Adrian
> --
> Dr Adrian Croucher
> Senior Research Fellow
> Department of Engineering Science
> University of Auckland, New Zealand
> email: a.croucher at auckland.ac.nz
> tel: +64 (0)9 923 4611

From matteo.semplice at unito.it  Fri Dec  1 04:50:50 2017
From: matteo.semplice at unito.it (Matteo Semplice)
Date: Fri, 1 Dec 2017 11:50:50 +0100
Subject: [petsc-users] preallocation after DMCreateMatrix?
In-Reply-To: 
References: <33f753de-e783-03a0-711a-510a88389cb7@unito.it>
 <0b480f6d-d642-e444-e24f-2f5e94743956@unito.it>
Message-ID: <02b82677-018a-ea78-083e-c1c16213a1cb@unito.it>

Thanks for the fix!

(If you need a volunteer for testing the bug-fix, drop me a line)

Best,
    Matteo

On 30/11/2017 13:46, Matthew Knepley wrote:
> Thanks for finding this bug. I guess no one has been making matrices
> with FVM. I will fix this internally, but here
> is a workaround which makes the code go for now.
>
>   Thanks,
>
>      Matt
>
> On Wed, Nov 29, 2017 at 6:36 AM, Matteo Semplice wrote:
>
>     On 29/11/2017 12:46, Matthew Knepley wrote:
>>     On Wed, Nov 29, 2017 at 2:26 AM, Matteo Semplice wrote:
>>
>>         On 25/11/2017 02:05, Matthew Knepley wrote:
>>>         On Fri, Nov 24, 2017 at 4:21 PM, Matteo Semplice wrote:
>>>
>>> Hi.
>>> >>> The manual for DMCreateMatrix says "Notes: This properly >>> preallocates the number of nonzeros in the sparse matrix >>> so you do not need to do it yourself", so I got the >>> impression that one does not need to call the >>> preallocation routine for the matrix and indeed in most >>> examples listed in the manual page for DMCreateMatrix >>> this is not done or (KSP tutorial ex4) it is called >>> declaring 0 entries per row. >>> >>> However, if read in a mesh in a DMPlex ore create a DMDA >>> and then call DMCreateMatrix, the resulting matrix >>> errors out when I call MatSetValues. I have currently >>> followed the suggestion of the error message and call >>> MatSetOption(A, MAT_NEW_NONZERO_ALLOCATION_ERR, >>> PETSC_FALSE), but I'd like to fix this properly. >>> >>> >>> It sounds like your nonzero pattern does not obey the >>> topology. What nonzero pattern are you trying to input? >> >> Hi. >> >> ??? The problem with the DMDA was a bug in our code. Sorry >> for the noise. >> >> On the other hand, with the DMPLex, I still experience >> problems. It's a FV code and I reduced it to the case of the >> simple case of a laplacian operator: I need non-diagonal >> entries at (i,j) if cell i and cell j have a common face. >> Here below is my code that reads in a grid of 4 cells (unit >> square divided by the diagonals), create a section with 1 dof >> per cell, creates a matrix and assembles it. >> >> >> ? ierr = DMPlexCreateFromFile(PETSC_COMM_WORLD, >> "square1.msh", PETSC_TRUE, &dm);CHKERRQ(ierr); >> >> >> Can you show me the mesh? >> >> ? ? ? ? ?ierr = DMViewFromOptions(dm, NULL, >> "-dm_view");CHKERRQ(ierr); >> >> and then run with -dm_view ::ascii_info_detail >> >> >> ? ierr = DMPlexSetAdjacencyUseCone(dm, PETSC_TRUE); >> CHKERRQ(ierr); >> ? ierr = DMPlexSetAdjacencyUseClosure(dm, PETSC_FALSE); >> CHKERRQ(ierr); >> >> >> This looks like the right FVM adjacency, but your matrix is >> diagonal it appears below. TS ex11 >> has an identical call, but produces the correct matrix, which is >> why I want to look at your mesh. > > I have put the DMViewFromOptions call after the SetAdjacency calls > and this is the output: > > matteo at signalkuppe:~/software/petscMplex$ ./mplexMatrix -dm_view > ::ascii_info_detail > DM Object: 1 MPI processes > ? 
type: plex > Mesh 'DM_0x557496ad8380_0': > orientation is missing > cap --> base: > [0] Max sizes cone: 3 support: 4 > [0]: 4 ----> 9 > [0]: 4 ----> 11 > [0]: 4 ----> 12 > [0]: 5 ----> 10 > [0]: 5 ----> 11 > [0]: 5 ----> 15 > [0]: 6 ----> 14 > [0]: 6 ----> 15 > [0]: 6 ----> 16 > [0]: 7 ----> 12 > [0]: 7 ----> 13 > [0]: 7 ----> 16 > [0]: 8 ----> 9 > [0]: 8 ----> 10 > [0]: 8 ----> 13 > [0]: 8 ----> 14 > [0]: 9 ----> 0 > [0]: 9 ----> 1 > [0]: 10 ----> 0 > [0]: 10 ----> 2 > [0]: 11 ----> 0 > [0]: 12 ----> 1 > [0]: 13 ----> 1 > [0]: 13 ----> 3 > [0]: 14 ----> 2 > [0]: 14 ----> 3 > [0]: 15 ----> 2 > [0]: 16 ----> 3 > base <-- cap: > [0]: 0 <---- 9 (0) > [0]: 0 <---- 10 (0) > [0]: 0 <---- 11 (0) > [0]: 1 <---- 12 (0) > [0]: 1 <---- 13 (0) > [0]: 1 <---- 9 (-2) > [0]: 2 <---- 10 (-2) > [0]: 2 <---- 14 (0) > [0]: 2 <---- 15 (0) > [0]: 3 <---- 14 (-2) > [0]: 3 <---- 13 (-2) > [0]: 3 <---- 16 (0) > [0]: 9 <---- 4 (0) > [0]: 9 <---- 8 (0) > [0]: 10 <---- 8 (0) > [0]: 10 <---- 5 (0) > [0]: 11 <---- 5 (0) > [0]: 11 <---- 4 (0) > [0]: 12 <---- 4 (0) > [0]: 12 <---- 7 (0) > [0]: 13 <---- 7 (0) > [0]: 13 <---- 8 (0) > [0]: 14 <---- 8 (0) > [0]: 14 <---- 6 (0) > [0]: 15 <---- 6 (0) > [0]: 15 <---- 5 (0) > [0]: 16 <---- 7 (0) > [0]: 16 <---- 6 (0) > coordinates with 1 fields > ? field 0 with 2 components > Process 0: > ? (?? 4) dim? 2 offset?? 0 0. 0. > ? (?? 5) dim? 2 offset?? 2 0. 1. > ? (?? 6) dim? 2 offset?? 4 1. 1. > ? (?? 7) dim? 2 offset?? 6 1. 0. > ? (?? 8) dim? 2 offset?? 8 0.5 0.5 > > For the records, the mesh is loaded from the (gmsh generated) file > > ==== square1.msh ===== > $MeshFormat > 2.2 0 8 > $EndMeshFormat > $Nodes > 5 > 1 0 0 0 > 2 0 1 0 > 3 1 1 0 > 4 1 0 0 > 5 0.5 0.5 0 > $EndNodes > $Elements > 12 > 1 15 2 0 1 1 > 2 15 2 0 2 2 > 3 15 2 0 3 3 > 4 15 2 0 4 4 > 5 1 2 0 1 4 3 > 6 1 2 0 2 3 2 > 7 1 2 0 3 2 1 > 8 1 2 0 4 1 4 > 9 2 2 0 6 1 5 2 > 10 2 2 0 6 1 4 5 > 11 2 2 0 6 2 5 3 > 12 2 2 0 6 3 5 4 > $EndElements > =================== > > Thanks, > ??? Matteo > > > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which > their experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From edoardo.alinovi at gmail.com Fri Dec 1 05:20:39 2017 From: edoardo.alinovi at gmail.com (Edoardo alinovi) Date: Fri, 1 Dec 2017 12:20:39 +0100 Subject: [petsc-users] Use 16 decimal digit in petsc In-Reply-To: References: Message-ID: Dear user, I am solving the few last issues in interfacing petsc with my code. My code works with 16 decimal digits after the "." , but it seems that petsc keeps up to 10 digits for some reason. Is there a way to fix this? Thank you very much for the help, Edoardo -------------- next part -------------- An HTML attachment was scrubbed... URL: From lukas.drinkt.thee at gmail.com Fri Dec 1 05:40:10 2017 From: lukas.drinkt.thee at gmail.com (Lukas van de Wiel) Date: Fri, 1 Dec 2017 12:40:10 +0100 Subject: [petsc-users] Use 16 decimal digit in petsc In-Reply-To: References: Message-ID: Hi Edoardo, the PETSc Changelog https://www.mcs.anl.gov/petsc/documentation/changes/32.html says: Using gcc 4.6 you can now ./configure --with-precision=__float128 --download-qblaslapack to get computations in quad precision. It might be worth giving a try. Cheers Lukas On 12/1/17, Edoardo alinovi wrote: > Dear user, > > I am solving the few last issues in interfacing petsc with my code. 
> > My code works with 16 decimal digits after the "." , but it seems that > petsc keeps up to 10 digits for some reason. > > Is there a way to fix this? > > Thank you very much for the help, > > Edoardo > From hbuesing at eonerc.rwth-aachen.de Fri Dec 1 07:11:14 2017 From: hbuesing at eonerc.rwth-aachen.de (Buesing, Henrik) Date: Fri, 1 Dec 2017 13:11:14 +0000 Subject: [petsc-users] SNESLineSearchSetPre/PostCheck() Message-ID: Dear all, So what is the difference between the pre and post check? When should I use what? Thank you! Henrik -- Dipl.-Math. Henrik B?sing Institute for Applied Geophysics and Geothermal Energy E.ON Energy Research Center RWTH Aachen University ------------------------------------------------------ Mathieustr. 10 | Tel +49 (0)241 80 49907 52074 Aachen, Germany | Fax +49 (0)241 80 49889 ------------------------------------------------------ http://www.eonerc.rwth-aachen.de/GGE hbuesing at eonerc.rwth-aachen.de ------------------------------------------------------ -------------- next part -------------- An HTML attachment was scrubbed... URL: From jed at jedbrown.org Fri Dec 1 07:18:06 2017 From: jed at jedbrown.org (Jed Brown) Date: Fri, 01 Dec 2017 06:18:06 -0700 Subject: [petsc-users] SNESLineSearchSetPre/PostCheck() In-Reply-To: References: Message-ID: <87d13ysi29.fsf@jedbrown.org> "Buesing, Henrik" writes: > Dear all, > > So what is the difference between the pre and post check? When should I use what? PreCheck runs before starting the line search, PostCheck runs after the line search has completed and (if relevant) the inequality projection. From knepley at gmail.com Fri Dec 1 07:18:54 2017 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 1 Dec 2017 08:18:54 -0500 Subject: [petsc-users] Use 16 decimal digit in petsc In-Reply-To: References: Message-ID: On Fri, Dec 1, 2017 at 6:20 AM, Edoardo alinovi wrote: > Dear user, > > I am solving the few last issues in interfacing petsc with my code. > > My code works with 16 decimal digits after the "." , but it seems that > petsc keeps up to 10 digits for some reason. > Are you talking about ASCII output? Internally these are just IEEE 754 double precision by default. Matt > Is there a way to fix this? > > Thank you very much for the help, > > Edoardo > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From hbuesing at eonerc.rwth-aachen.de Fri Dec 1 07:44:25 2017 From: hbuesing at eonerc.rwth-aachen.de (Buesing, Henrik) Date: Fri, 1 Dec 2017 13:44:25 +0000 Subject: [petsc-users] SNESLineSearchSetPre/PostCheck() In-Reply-To: <87d13ysi29.fsf@jedbrown.org> References: <87d13ysi29.fsf@jedbrown.org> Message-ID: > -----Urspr?ngliche Nachricht----- > Von: Jed Brown [mailto:jed at jedbrown.org] > Gesendet: 01 December 2017 14:18 > An: Buesing, Henrik ; petsc-users > > Betreff: Re: [petsc-users] SNESLineSearchSetPre/PostCheck() > > "Buesing, Henrik" writes: > > > Dear all, > > > > So what is the difference between the pre and post check? When should > I use what? > > PreCheck runs before starting the line search, PostCheck runs after the > line search has completed and (if relevant) the inequality projection. [Buesing, Henrik] What I did was using the PreCheck to change the step length if pressure or enthalpy went out of physical bounds (e.g. got negative). 
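Roughly, the precheck follows the pattern sketched below. This is only a C sketch, not the actual code: the bound test and the 0.5 damping factor are placeholders. The full Newton trial point is X - Y, so the direction Y is shortened whenever that point would leave the physical region.

#include <petscsnes.h>

/* Sketch of a precheck that damps the Newton direction Y so that the
   trial point X - Y stays within physical bounds (placeholder bounds). */
static PetscErrorCode ClipStepPreCheck(SNESLineSearch ls, Vec X, Vec Y, PetscBool *changed, void *ctx)
{
  const PetscScalar *x;
  PetscScalar       *y;
  PetscInt           i, n;
  PetscErrorCode     ierr;

  PetscFunctionBeginUser;
  ierr = VecGetLocalSize(X, &n);CHKERRQ(ierr);
  ierr = VecGetArrayRead(X, &x);CHKERRQ(ierr);
  ierr = VecGetArray(Y, &y);CHKERRQ(ierr);
  for (i = 0; i < n; i++) {
    /* keep pressure/enthalpy positive at the trial point x - y */
    if (PetscRealPart(x[i]) - PetscRealPart(y[i]) < 0.0) {
      y[i]     = 0.5*x[i];   /* placeholder damping */
      *changed = PETSC_TRUE;
    }
  }
  ierr = VecRestoreArray(Y, &y);CHKERRQ(ierr);
  ierr = VecRestoreArrayRead(X, &x);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}

/* registered once with
   SNESGetLineSearch(snes, &ls);
   SNESLineSearchSetPreCheck(ls, ClipStepPreCheck, NULL); */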
Is the impact on Newton's method doing this more severe or less than using the PostCheck? Thank you! Henrik From hbuesing at eonerc.rwth-aachen.de Fri Dec 1 10:35:07 2017 From: hbuesing at eonerc.rwth-aachen.de (Buesing, Henrik) Date: Fri, 1 Dec 2017 16:35:07 +0000 Subject: [petsc-users] SNESLineSearchSetPre/PostCheck() In-Reply-To: References: <87d13ysi29.fsf@jedbrown.org> Message-ID: > > > So what is the difference between the pre and post check? When > > > should > > I use what? > > > > PreCheck runs before starting the line search, PostCheck runs after > > the line search has completed and (if relevant) the inequality > projection. > > [Buesing, Henrik] What I did was using the PreCheck to change the step > length if pressure or enthalpy went out of physical bounds (e.g. got > negative). > > Is the impact on Newton's method doing this more severe or less than > using the PostCheck? > In SNESLineSearchPostCheck: Should I update the direction with Y = W - \lambda*X, when I change W? Do I get the \lambda from the linesearch context? Henrik From jed at jedbrown.org Fri Dec 1 15:35:22 2017 From: jed at jedbrown.org (Jed Brown) Date: Fri, 01 Dec 2017 14:35:22 -0700 Subject: [petsc-users] SNESLineSearchSetPre/PostCheck() In-Reply-To: References: <87d13ysi29.fsf@jedbrown.org> Message-ID: <87vahqqgh1.fsf@jedbrown.org> "Buesing, Henrik" writes: >> -----Urspr?ngliche Nachricht----- >> Von: Jed Brown [mailto:jed at jedbrown.org] >> Gesendet: 01 December 2017 14:18 >> An: Buesing, Henrik ; petsc-users >> >> Betreff: Re: [petsc-users] SNESLineSearchSetPre/PostCheck() >> >> "Buesing, Henrik" writes: >> >> > Dear all, >> > >> > So what is the difference between the pre and post check? When should >> I use what? >> >> PreCheck runs before starting the line search, PostCheck runs after the >> line search has completed and (if relevant) the inequality projection. > > [Buesing, Henrik] What I did was using the PreCheck to change the step length if pressure or enthalpy went out of physical bounds (e.g. got negative). > > Is the impact on Newton's method doing this more severe or less than using the PostCheck? You probably want to start your search in the feasible set so might as well use the precheck. From hbuesing at eonerc.rwth-aachen.de Sat Dec 2 04:58:41 2017 From: hbuesing at eonerc.rwth-aachen.de (Buesing, Henrik) Date: Sat, 2 Dec 2017 10:58:41 +0000 Subject: [petsc-users] SNESLineSearchSetPre/PostCheck() In-Reply-To: <87vahqqgh1.fsf@jedbrown.org> References: <87d13ysi29.fsf@jedbrown.org> <87vahqqgh1.fsf@jedbrown.org> Message-ID: > >> > Dear all, > >> > > >> > So what is the difference between the pre and post check? When > >> > should > >> I use what? > >> > >> PreCheck runs before starting the line search, PostCheck runs after > >> the line search has completed and (if relevant) the inequality projection. > > > > [Buesing, Henrik] What I did was using the PreCheck to change the step > length if pressure or enthalpy went out of physical bounds (e.g. got > negative). > > > > Is the impact on Newton's method doing this more severe or less than > using the PostCheck? > > You probably want to start your search in the feasible set so might as well > use the precheck. In my test example I see the following: Using the precheck and changing the step direction Y, Newton converges fine. Using the postcheck and changing the updated solution W and leaving the step direction Y as is, breaks Newton convergence. 
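For completeness, the postcheck variant I tried follows the sketch below. Again only a sketch: the clamp value is a placeholder, and recomputing Y as (X - W)/lambda so that W = X - lambda*Y still holds is my assumption about keeping the direction consistent with the modified W.

#include <petscsnes.h>

/* Sketch of a postcheck that clamps the accepted iterate W to the physical
   bounds and rebuilds Y from the line-search lambda. */
static PetscErrorCode ClampPostCheck(SNESLineSearch ls, Vec X, Vec Y, Vec W,
                                     PetscBool *changed_Y, PetscBool *changed_W, void *ctx)
{
  PetscScalar   *w;
  PetscReal      lambda;
  PetscInt       i, n;
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = VecGetLocalSize(W, &n);CHKERRQ(ierr);
  ierr = VecGetArray(W, &w);CHKERRQ(ierr);
  for (i = 0; i < n; i++) {
    if (PetscRealPart(w[i]) < 0.0) {   /* placeholder bound */
      w[i]       = 1.0e-12;            /* placeholder clamp value */
      *changed_W = PETSC_TRUE;
    }
  }
  ierr = VecRestoreArray(W, &w);CHKERRQ(ierr);

  if (*changed_W) {
    /* keep W = X - lambda*Y consistent:  Y = (X - W)/lambda */
    ierr = SNESLineSearchGetLambda(ls, &lambda);CHKERRQ(ierr);
    ierr = VecCopy(X, Y);CHKERRQ(ierr);
    ierr = VecAXPY(Y, -1.0, W);CHKERRQ(ierr);
    ierr = VecScale(Y, 1.0/lambda);CHKERRQ(ierr);
    *changed_Y = PETSC_TRUE;
  }
  PetscFunctionReturn(0);
}

/* registered with SNESLineSearchSetPostCheck(ls, ClampPostCheck, NULL); */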
Henrik

http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/SNES/SNESLineSearchPreCheck.html
http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/SNES/SNESLineSearchPostCheck.html

From a.croucher at auckland.ac.nz  Sun Dec  3 16:25:44 2017
From: a.croucher at auckland.ac.nz (Adrian Croucher)
Date: Mon, 4 Dec 2017 11:25:44 +1300
Subject: [petsc-users] Newton methods that converge all the time
In-Reply-To: 
References: <209cb0a6-b6ac-a264-1b6a-a15e98949438@auckland.ac.nz>
Message-ID: <2c90d940-9f7a-ca56-1fe6-562aad18e476@auckland.ac.nz>

Hi Henrik,

On 01/12/17 22:24, Buesing, Henrik wrote:
> The beauty of the pressure-enthalpy formulation is that you do not
> need to initialize the saturations with a small epsilon when you go into
> two-phase. You can directly compute a correct enthalpy (and from it a
> saturation) and "jump" into the two-phase region.
> Still, nothing says that you are not jumping in and out of the two-phase region, so there are examples for which either method works best.
>
> I am trying to simulate a supercritical reservoir (T>450 °C, p>35 MPa) from the surface down to 3.5 km depth. (It is in Italy, so the geothermal gradient is large.)
>
> A steam region forms, and either I get oscillations in enthalpy (saturations) or small time steps kill me. I want to simulate a quasi-steady state after 1 million years.

We regularly simulate systems like this, with steam zones near the top, and sometimes supercritical fluid at the bottom. We generally run steady states up to time step sizes of 1E15 s or more.

We have done quite a bit of work on getting the variable-switching approach to work well under these conditions.

- Adrian

--
Dr Adrian Croucher
Senior Research Fellow
Department of Engineering Science
University of Auckland, New Zealand
email: a.croucher at auckland.ac.nz
tel: +64 (0)9 923 4611

From knepley at gmail.com  Mon Dec  4 10:01:22 2017
From: knepley at gmail.com (Matthew Knepley)
Date: Mon, 4 Dec 2017 11:01:22 -0500
Subject: [petsc-users] preallocation after DMCreateMatrix?
In-Reply-To: <02b82677-018a-ea78-083e-c1c16213a1cb@unito.it>
References: <33f753de-e783-03a0-711a-510a88389cb7@unito.it>
 <0b480f6d-d642-e444-e24f-2f5e94743956@unito.it>
 <02b82677-018a-ea78-083e-c1c16213a1cb@unito.it>
Message-ID: 

On Fri, Dec 1, 2017 at 5:50 AM, Matteo Semplice wrote:

> Thanks for the fix!
>
> (If you need a volunteer for testing the bug-fix, drop me a line)
>
Cool. It's in next, and in

https://bitbucket.org/petsc/petsc/branch/knepley/fix-plex-fvm-adjacency

  Thanks,

     Matt

> Best,
>     Matteo
>
>
> On 30/11/2017 13:46, Matthew Knepley wrote:
>
> Thanks for finding this bug. I guess no one has been making matrices with
> FVM. I will fix this internally, but here
> is a workaround which makes the code go for now.
>
>   Thanks,
>
>     Matt
>
> On Wed, Nov 29, 2017 at 6:36 AM, Matteo Semplice wrote:
>
>>
>> On 29/11/2017 12:46, Matthew Knepley wrote:
>>
>> On Wed, Nov 29, 2017 at 2:26 AM, Matteo Semplice <
>> matteo.semplice at unito.it> wrote:
>>
>>> On 25/11/2017 02:05, Matthew Knepley wrote:
>>>
>>> On Fri, Nov 24, 2017 at 4:21 PM, Matteo Semplice <
>>> matteo.semplice at unito.it> wrote:
>>>
>>>> Hi.
>>>> >>>> The manual for DMCreateMatrix says "Notes: This properly preallocates >>>> the number of nonzeros in the sparse matrix so you do not need to do it >>>> yourself", so I got the impression that one does not need to call the >>>> preallocation routine for the matrix and indeed in most examples listed in >>>> the manual page for DMCreateMatrix this is not done or (KSP tutorial ex4) >>>> it is called declaring 0 entries per row. >>>> >>>> However, if read in a mesh in a DMPlex ore create a DMDA and then call >>>> DMCreateMatrix, the resulting matrix errors out when I call MatSetValues. I >>>> have currently followed the suggestion of the error message and call >>>> MatSetOption(A, MAT_NEW_NONZERO_ALLOCATION_ERR, PETSC_FALSE), but I'd >>>> like to fix this properly. >>>> >>> >>> It sounds like your nonzero pattern does not obey the topology. What >>> nonzero pattern are you trying to input? >>> >>> >>> Hi. >>> >>> The problem with the DMDA was a bug in our code. Sorry for the noise. >>> >>> On the other hand, with the DMPLex, I still experience problems. It's a >>> FV code and I reduced it to the case of the simple case of a laplacian >>> operator: I need non-diagonal entries at (i,j) if cell i and cell j have a >>> common face. >>> Here below is my code that reads in a grid of 4 cells (unit square >>> divided by the diagonals), create a section with 1 dof per cell, creates a >>> matrix and assembles it. >>> >> >>> ierr = DMPlexCreateFromFile(PETSC_COMM_WORLD, "square1.msh", >>> PETSC_TRUE, &dm);CHKERRQ(ierr); >>> >> >> Can you show me the mesh? >> >> ierr = DMViewFromOptions(dm, NULL, "-dm_view");CHKERRQ(ierr); >> >> and then run with -dm_view ::ascii_info_detail >> >> >>> ierr = DMPlexSetAdjacencyUseCone(dm, PETSC_TRUE); CHKERRQ(ierr); >>> ierr = DMPlexSetAdjacencyUseClosure(dm, PETSC_FALSE); CHKERRQ(ierr); >>> >> >> This looks like the right FVM adjacency, but your matrix is diagonal it >> appears below. TS ex11 >> has an identical call, but produces the correct matrix, which is why I >> want to look at your mesh. 
>> >> >> I have put the DMViewFromOptions call after the SetAdjacency calls and >> this is the output: >> >> matteo at signalkuppe:~/software/petscMplex$ ./mplexMatrix -dm_view >> ::ascii_info_detail >> DM Object: 1 MPI processes >> type: plex >> Mesh 'DM_0x557496ad8380_0': >> orientation is missing >> cap --> base: >> [0] Max sizes cone: 3 support: 4 >> [0]: 4 ----> 9 >> [0]: 4 ----> 11 >> [0]: 4 ----> 12 >> [0]: 5 ----> 10 >> [0]: 5 ----> 11 >> [0]: 5 ----> 15 >> [0]: 6 ----> 14 >> [0]: 6 ----> 15 >> [0]: 6 ----> 16 >> [0]: 7 ----> 12 >> [0]: 7 ----> 13 >> [0]: 7 ----> 16 >> [0]: 8 ----> 9 >> [0]: 8 ----> 10 >> [0]: 8 ----> 13 >> [0]: 8 ----> 14 >> [0]: 9 ----> 0 >> [0]: 9 ----> 1 >> [0]: 10 ----> 0 >> [0]: 10 ----> 2 >> [0]: 11 ----> 0 >> [0]: 12 ----> 1 >> [0]: 13 ----> 1 >> [0]: 13 ----> 3 >> [0]: 14 ----> 2 >> [0]: 14 ----> 3 >> [0]: 15 ----> 2 >> [0]: 16 ----> 3 >> base <-- cap: >> [0]: 0 <---- 9 (0) >> [0]: 0 <---- 10 (0) >> [0]: 0 <---- 11 (0) >> [0]: 1 <---- 12 (0) >> [0]: 1 <---- 13 (0) >> [0]: 1 <---- 9 (-2) >> [0]: 2 <---- 10 (-2) >> [0]: 2 <---- 14 (0) >> [0]: 2 <---- 15 (0) >> [0]: 3 <---- 14 (-2) >> [0]: 3 <---- 13 (-2) >> [0]: 3 <---- 16 (0) >> [0]: 9 <---- 4 (0) >> [0]: 9 <---- 8 (0) >> [0]: 10 <---- 8 (0) >> [0]: 10 <---- 5 (0) >> [0]: 11 <---- 5 (0) >> [0]: 11 <---- 4 (0) >> [0]: 12 <---- 4 (0) >> [0]: 12 <---- 7 (0) >> [0]: 13 <---- 7 (0) >> [0]: 13 <---- 8 (0) >> [0]: 14 <---- 8 (0) >> [0]: 14 <---- 6 (0) >> [0]: 15 <---- 6 (0) >> [0]: 15 <---- 5 (0) >> [0]: 16 <---- 7 (0) >> [0]: 16 <---- 6 (0) >> coordinates with 1 fields >> field 0 with 2 components >> Process 0: >> ( 4) dim 2 offset 0 0. 0. >> ( 5) dim 2 offset 2 0. 1. >> ( 6) dim 2 offset 4 1. 1. >> ( 7) dim 2 offset 6 1. 0. >> ( 8) dim 2 offset 8 0.5 0.5 >> >> For the records, the mesh is loaded from the (gmsh generated) file >> >> ==== square1.msh ===== >> $MeshFormat >> 2.2 0 8 >> $EndMeshFormat >> $Nodes >> 5 >> 1 0 0 0 >> 2 0 1 0 >> 3 1 1 0 >> 4 1 0 0 >> 5 0.5 0.5 0 >> $EndNodes >> $Elements >> 12 >> 1 15 2 0 1 1 >> 2 15 2 0 2 2 >> 3 15 2 0 3 3 >> 4 15 2 0 4 4 >> 5 1 2 0 1 4 3 >> 6 1 2 0 2 3 2 >> 7 1 2 0 3 2 1 >> 8 1 2 0 4 1 4 >> 9 2 2 0 6 1 5 2 >> 10 2 2 0 6 1 4 5 >> 11 2 2 0 6 2 5 3 >> 12 2 2 0 6 3 5 4 >> $EndElements >> =================== >> >> Thanks, >> Matteo >> >> >> > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From jswenson at mail.smu.edu Mon Dec 4 15:26:13 2017 From: jswenson at mail.smu.edu (Swenson, Jennifer) Date: Mon, 4 Dec 2017 21:26:13 +0000 Subject: [petsc-users] complex values in a bvp Message-ID: Dear PETSc Users, I have modified ex14 "Bratu nonlinear PDE in 3D" from the PETSc online examples (http://www.mcs.anl.gov/petsc/petsc-3.7/src/snes/examples/tutorials/ex14.c.html) to fit my current research problem. Notable changes of ex14 include reducing the size of the problem to 1D and using -snes_fd_color in the Makefile instead of the given "FormJacobian" function. When using this edited version of ex14, we obtained the desired outcome of a simple Stokes test problem. 
Now, we extend this work to our actual problem, a boundary value problem. Although the code compiles and runs, the output does not match our analytical results. We think that the issue is at least in part due to our equations including complex numbers via PETSC_i. We have noticed that there is a difference in output when using PetscScalar vs PetscComplex when testing PETSC_i: PetscScalar test=10.0+PETSC_i*2.0; printf("%f+%fi\n", crealf(test), cimagf(test)); The result is 10.000000+0.000000i. PetscComplex test=10.0+PETSC_i*2.0; printf("%f+%fi\n", crealf(test), cimagf(test)); The result is 10.000000+2.000000i. This compelled us to change our variables (and even real parameters used with those variables) from PetscScalar to PetscComplex in FormFunction, while leaving our hz, dhz as PetscScalars: PetscScalar hz, dhz; PetscComplex Q=0.5, h21=1.0; PetscComplex u, uz, w, wz, p, pz; hz = 1.0/(PetscScalar)(Mx-1); dhz = 1.0/hz; ... // u component of velocity u = x[i].x_vel; uz = (x[i].x_vel - x[i-1].x_vel)*dhz; // w component of velocity w = x[i].z_vel; wz = (x[i].z_vel - x[i-1].z_vel)*dhz; // pressure p = x[i].pressure; pz = (x[i].pressure - x[i-1].pressure)*dhz; f[i].x_vel = (uz + PETSC_i*Q*w + 2.0*PETSC_i*Q*h21)/(dhz); f[i].z_vel = (-p + 2.0*wz)/(dhz); f[i].pressure = (PETSC_i*Q*u + wz)/(dhz); This switch from PetscScalar to PetscComplex did fix the discrepancy. Therefore, we also tried to change our Field variables from PetscScalar to PetscComplex: typedef struct { PetscScalar x_vel, z_vel, pressure; // variables for our system } Field; However, an error is thrown when the program runs: [0]PETSC ERROR: PetscMallocValidate: error detected at SNESComputeFunction() line 2144 in /usr/local/src/petsc-dev/src/snes/interface/snes.c [0]PETSC ERROR: Memory at address 0x1af7d20 is corrupted [0]PETSC ERROR: Probably write past beginning or end of array [0]PETSC ERROR: Last intact block allocated in PetscLayoutSetUp() line 147 in /usr/local/src/petsc-dev/src/vec/is/utils/pmap.c [0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- [0]PETSC ERROR: Memory corruption: http://www.mcs.anl.gov/petsc/documentation/installation.html#valgrind [0]PETSC ERROR: [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. [0]PETSC ERROR: Petsc Release Version 3.7.6, unknown [0]PETSC ERROR: ./idea2.exe on a arch-linux2-c-debug named lovelace by jswenson Mon Dec 4 14:59:04 2017 [0]PETSC ERROR: Configure options --prefix=/usr/local/petsc-dev --with-64-bit-indices=0 --with-fortran-interfaces=1 --CFLAGS=-O2 --CXXFLAGS=-O2 --FFLAGS=-O2 --download-hypre=yes --download-fftw=yes --download-superlu=yes --download-sundials=yes --download-metis=yes --download-parmetis=yes --download-superlu_dist=yes --download-spai=yes --download-sprng=1 --download-hdf5=yes --with-valgrind=0 --with-mpi-dir=/usr --with-shared-libraries=1 --with-c2html=0 --PETSC_ARCH=arch-linux2-c-debug [0]PETSC ERROR: #1 PetscMallocValidate() line 136 in /usr/local/src/petsc-dev/src/sys/memory/mtr.c [0]PETSC ERROR: #2 SNESComputeFunction() line 2144 in /usr/local/src/petsc-dev/src/snes/interface/snes.c [0]PETSC ERROR: #3 SNESSolve_NEWTONLS() line 181 in /usr/local/src/petsc-dev/src/snes/impls/ls/ls.c [0]PETSC ERROR: #4 SNESSolve() line 4005 in /usr/local/src/petsc-dev/src/snes/interface/snes.c My specific questions are the following: 1. Is it important to have my Field variables PetscComplex or is PetscScalar sufficient? 2. 
Are there any foreseeable problems with using PETSC_i (or complex numbers in general) with this method from ex14? Do I need to somehow indicate to PETSc that I am using complex values? Am I just lacking an argument or an additional line? 3. Is there a nice way to save the solution (to be plotted) besides including -snes_monitor_solution ascii:myproblem.txt in the Makefile and writing an additional code to read and plot the results? For example, on a separate project, I really appreciated the functionality of -ts_monitor_solution_vtk aks=.5%05D.vts in the Makefile. A similar technique would be useful here, if it exists. I have tried -snes_monitor_solution draw -draw_pause -1, but do not know if that works with complex values. Additionally, as far as I can tell, these plots cannot be saved. Any other comments or insight would be greatly appreciated. Thank you for your time. Sincerely, Jennifer Swenson -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Mon Dec 4 15:30:24 2017 From: bsmith at mcs.anl.gov (Smith, Barry F.) Date: Mon, 4 Dec 2017 21:30:24 +0000 Subject: [petsc-users] complex values in a bvp In-Reply-To: References: Message-ID: <6DC587B5-E68E-4847-A71A-E4541098A9E8@mcs.anl.gov> You have headed down the wrong track. You must ./configure PETSc with the option --with-scalar-type=complex to work with complex numbers and should declare the variables with PetscScalar just like in the example. Barry > On Dec 4, 2017, at 3:26 PM, Swenson, Jennifer wrote: > > Dear PETSc Users, > > I have modified ex14 "Bratu nonlinear PDE in 3D" from the PETSc online examples (http://www.mcs.anl.gov/petsc/petsc-3.7/src/snes/examples/tutorials/ex14.c.html) to fit my current research problem. > > Notable changes of ex14 include reducing the size of the problem to 1D and using -snes_fd_color in the Makefile instead of the given "FormJacobian" function. When using this edited version of ex14, we obtained the desired outcome of a simple Stokes test problem. Now, we extend this work to our actual problem, a boundary value problem. Although the code compiles and runs, the output does not match our analytical results. > > We think that the issue is at least in part due to our equations including complex numbers via PETSC_i. We have noticed that there is a difference in output when using PetscScalar vs PetscComplex when testing PETSC_i: > > PetscScalar test=10.0+PETSC_i*2.0; > printf("%f+%fi\n", crealf(test), cimagf(test)); > The result is 10.000000+0.000000i. > > PetscComplex test=10.0+PETSC_i*2.0; > printf("%f+%fi\n", crealf(test), cimagf(test)); > The result is 10.000000+2.000000i. > > This compelled us to change our variables (and even real parameters used with those variables) from PetscScalar to PetscComplex in FormFunction, while leaving our hz, dhz as PetscScalars: > > PetscScalar hz, dhz; > PetscComplex Q=0.5, h21=1.0; > PetscComplex u, uz, w, wz, p, pz; > > hz = 1.0/(PetscScalar)(Mx-1); > dhz = 1.0/hz; > > ... > > // u component of velocity > u = x[i].x_vel; > uz = (x[i].x_vel - x[i-1].x_vel)*dhz; > > // w component of velocity > w = x[i].z_vel; > wz = (x[i].z_vel - x[i-1].z_vel)*dhz; > > // pressure > p = x[i].pressure; > pz = (x[i].pressure - x[i-1].pressure)*dhz; > > f[i].x_vel = (uz + PETSC_i*Q*w + 2.0*PETSC_i*Q*h21)/(dhz); > f[i].z_vel = (-p + 2.0*wz)/(dhz); > f[i].pressure = (PETSC_i*Q*u + wz)/(dhz); > > This switch from PetscScalar to PetscComplex did fix the discrepancy. 
Therefore, we also tried to change our Field variables from PetscScalar to PetscComplex: > > typedef struct { > PetscScalar x_vel, z_vel, pressure; // variables for our system > } Field; > > However, an error is thrown when the program runs: > > [0]PETSC ERROR: PetscMallocValidate: error detected at SNESComputeFunction() line 2144 in /usr/local/src/petsc-dev/src/snes/interface/snes.c > [0]PETSC ERROR: Memory at address 0x1af7d20 is corrupted > [0]PETSC ERROR: Probably write past beginning or end of array > [0]PETSC ERROR: Last intact block allocated in PetscLayoutSetUp() line 147 in /usr/local/src/petsc-dev/src/vec/is/utils/pmap.c > [0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- > [0]PETSC ERROR: Memory corruption: http://www.mcs.anl.gov/petsc/documentation/installation.html#valgrind > [0]PETSC ERROR: > [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. > [0]PETSC ERROR: Petsc Release Version 3.7.6, unknown > [0]PETSC ERROR: ./idea2.exe on a arch-linux2-c-debug named lovelace by jswenson Mon Dec 4 14:59:04 2017 > [0]PETSC ERROR: Configure options --prefix=/usr/local/petsc-dev --with-64-bit-indices=0 --with-fortran-interfaces=1 --CFLAGS=-O2 --CXXFLAGS=-O2 --FFLAGS=-O2 --download-hypre=yes --download-fftw=yes --download-superlu=yes --download-sundials=yes --download-metis=yes --download-parmetis=yes --download-superlu_dist=yes --download-spai=yes --download-sprng=1 --download-hdf5=yes --with-valgrind=0 --with-mpi-dir=/usr --with-shared-libraries=1 --with-c2html=0 --PETSC_ARCH=arch-linux2-c-debug > [0]PETSC ERROR: #1 PetscMallocValidate() line 136 in /usr/local/src/petsc-dev/src/sys/memory/mtr.c > [0]PETSC ERROR: #2 SNESComputeFunction() line 2144 in /usr/local/src/petsc-dev/src/snes/interface/snes.c > [0]PETSC ERROR: #3 SNESSolve_NEWTONLS() line 181 in /usr/local/src/petsc-dev/src/snes/impls/ls/ls.c > [0]PETSC ERROR: #4 SNESSolve() line 4005 in /usr/local/src/petsc-dev/src/snes/interface/snes.c > > My specific questions are the following: > ? Is it important to have my Field variables PetscComplex or is PetscScalar sufficient? > ? Are there any foreseeable problems with using PETSC_i (or complex numbers in general) with this method from ex14? Do I need to somehow indicate to PETSc that I am using complex values? Am I just lacking an argument or an additional line? > ? Is there a nice way to save the solution (to be plotted) besides including -snes_monitor_solution ascii:myproblem.txt in the Makefile and writing an additional code to read and plot the results? For example, on a separate project, I really appreciated the functionality of -ts_monitor_solution_vtk aks=.5%05D.vts in the Makefile. A similar technique would be useful here, if it exists. I have tried -snes_monitor_solution draw -draw_pause -1, but do not know if that works with complex values. Additionally, as far as I can tell, these plots cannot be saved. > > Any other comments or insight would be greatly appreciated. Thank you for your time. > > Sincerely, > Jennifer Swenson From wenlonggong at gmail.com Mon Dec 4 21:07:12 2017 From: wenlonggong at gmail.com (Wenlong Gong) Date: Mon, 4 Dec 2017 21:07:12 -0600 Subject: [petsc-users] Incomplete Cholesky Factorization in petsc Message-ID: Hello, I'm trying to use the Incomplete Cholesky Factorization for a sparse matrix in petsc. I started with a 10*10 matrix and used ksp and pc in order to get the ICC(0) factor, with no fill-in, natural ordering. 
However, the returned factor matrix does not match with the answer I got from matlab ichol() function. The code with the hard-coded data is attached here. I would appreciate if anyone could help check if I did anything wrong.Please let me know if there is easier way to get this incomplete cholesky factor. Thanks! Best regards, Wendy -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: repex.c Type: text/x-csrc Size: 2982 bytes Desc: not available URL: From bsmith at mcs.anl.gov Mon Dec 4 22:07:08 2017 From: bsmith at mcs.anl.gov (Smith, Barry F.) Date: Tue, 5 Dec 2017 04:07:08 +0000 Subject: [petsc-users] Incomplete Cholesky Factorization in petsc In-Reply-To: References: Message-ID: I'm not sure what your goal is. In general two different IC(0) codes might produce slightly different factorizations based on implementation details even if they claim to run the same algorithm so I don't think there is a reason to try to compare the factors they produce. In addition PETSc, as well as most IC/ILU/LU codes store the factored matrices in a "non-conventional" form that is optimized for performance so it is not easy to just pull out the "factors" to look at them or compare them. Barry > On Dec 4, 2017, at 9:07 PM, Wenlong Gong wrote: > > Hello, > > I'm trying to use the Incomplete Cholesky Factorization for a sparse matrix in petsc. I started with a 10*10 matrix and used ksp and pc in order to get the ICC(0) factor, with no fill-in, natural ordering. However, the returned factor matrix does not match with the answer I got from matlab ichol() function. > > The code with the hard-coded data is attached here. I would appreciate if anyone could help check if I did anything wrong.Please let me know if there is easier way to get this incomplete cholesky factor. Thanks! > > Best regards, > Wendy > > From s.lanthaler at gmail.com Tue Dec 5 08:05:11 2017 From: s.lanthaler at gmail.com (Samuel Lanthaler) Date: Tue, 5 Dec 2017 15:05:11 +0100 Subject: [petsc-users] MatPtAP problem after subsequent call to MatDuplicate Message-ID: <5A26A797.60901@gmail.com> Hi there, I am getting error messages after using MatPtAP to create a new matrix C = Pt*A*P and then trying to assign A=C. The following is a minimal example reproducing the problem: #include "slepc/finclude/slepc.h" USE slepcsys USE slepceps IMPLICIT NONE LOGICAL :: cause_error ! --- pure PETSc PetscErrorCode :: ierr PetscScalar :: vals(3,3), val Mat :: matA, matP, matC PetscInt :: m,idxn(3),idxm(3),idone(1) ! initialize SLEPc & Petsc etc. CALL SlepcInitialize(PETSC_NULL_CHARACTER,ierr) ! Set up a new matrix m = 3 CALL MatCreate(PETSC_COMM_WORLD,matA,ierr); CHKERRQ(ierr); CALL MatSetType(matA,MATMPIAIJ,ierr); CHKERRQ(ierr); CALL MatSetSizes(matA,PETSC_DECIDE,PETSC_DECIDE,m,m,ierr); CHKERRQ(ierr); CALL MatMPIAIJSetPreallocation(matA,3,PETSC_NULL_INTEGER,3,PETSC_NULL_INTEGER,ierr); CHKERRQ(ierr); ! [.... call to MatSetValues to set values of matA] ! assemble matrix CALL MatAssemblyBegin(matA,MAT_FINAL_ASSEMBLY,ierr); CHKERRQ(ierr); CALL MatAssemblyEnd(matA,MAT_FINAL_ASSEMBLY,ierr); CHKERRQ(ierr); ! duplicate matrix CALL MatDuplicate(matA,MAT_DO_NOT_COPY_VALUES,matP,ierr); CHKERRQ(ierr); ! [.... call to MatSetValues to set values of matP] ! assemble matrix CALL MatAssemblyBegin(matP,MAT_FINAL_ASSEMBLY,ierr); CHKERRQ(ierr); CALL MatAssemblyEnd(matP,MAT_FINAL_ASSEMBLY,ierr); CHKERRQ(ierr); ! compute C=Pt*A*P cause_error = .TRUE. ! set to .TRUE. 
to cause error, .FALSE. seems to work fine IF(.NOT.cause_error) THEN CALL MatDuplicate(matA,MAT_COPY_VALUES,matC,ierr); CHKERRQ(ierr); ELSE CALL MatPtAP(matA,matP,MAT_INITIAL_MATRIX,PETSC_DEFAULT_REAL,matC,ierr); CHKERRQ(ierr); END IF ! destroy matA and duplicate A=C CALL MatDestroy(matA,ierr); CHKERRQ(ierr); CALL MatDuplicate(matC,MAT_COPY_VALUES,matA,ierr); CHKERRQ(ierr); ! display resulting matrix A CALL MatView(matA,PETSC_VIEWER_STDOUT_WORLD,ierr); CHKERRQ(ierr); Whether A and P are assigned any values doesn't seem to matter at all. The error message I'm getting is: Mat Object: 1 MPI processes type: mpiaij [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range [0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger [0]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind [0]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors [0]PETSC ERROR: likely location of problem given in stack below [0]PETSC ERROR: --------------------- Stack Frames ------------------------------------ [0]PETSC ERROR: Note: The EXACT line numbers in the stack are not available, [0]PETSC ERROR: INSTEAD the line number of the start of the function [0]PETSC ERROR: is given. [0]PETSC ERROR: [0] MatView_MPIAIJ_PtAP line 23 /home/lanthale/Progs/petsc-3.8.2/src/mat/impls/aij/mpi/mpiptap.c [0]PETSC ERROR: [0] MatView line 949 /home/lanthale/Progs/petsc-3.8.2/src/mat/interface/matrix.c [0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- [0]PETSC ERROR: Signal received [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. [0]PETSC ERROR: Petsc Release Version 3.8.2, Nov, 09, 2017 Could someone maybe tell me where I'm doing things wrong? The documentation for MatPtAP says that C will be created and that this routine is "currently only implemented for pairs of AIJ matrices and classes which inherit from AIJ". Does this maybe exclude MPIAIJ matrices? Thanks, Samuel -------------- next part -------------- An HTML attachment was scrubbed... URL: From yann.jobic at univ-amu.fr Tue Dec 5 08:12:31 2017 From: yann.jobic at univ-amu.fr (Yann Jobic) Date: Tue, 5 Dec 2017 15:12:31 +0100 Subject: [petsc-users] 2 Dirichlet conditions for one Element in PetscFE In-Reply-To: References: Message-ID: <7ead0d73-f554-c8ae-5db2-b744fea5b936@univ-amu.fr> Hello, I tried the master of the git directory, as you have merged the branch. It works ! Many thanks for the fix. Best regards, Yann Le 22/11/2017 ? 18:51, Matthew Knepley a ?crit?: > On Wed, Nov 22, 2017 at 12:39 PM, Yann Jobic > wrote: > > Hello, > > I've found a strange behavior when looking into a bug for the > pressure convergence of a simple Navier-Stokes problem using PetscFE. > > I followed many examples for labeling boundary faces. I first use > DMPlexMarkBoundaryFaces, (label=1 to the faces). > I find those faces using DMGetStratumIS and searching 1 as it is > the value of the marked boundary faces. > Finally i use DMPlexLabelComplete over the new label. > I then use : > ? ierr = PetscDSAddBoundary(prob, DM_BC_ESSENTIAL, "in", "Faces", > 0, Ncomp, components, (void (*)(void)) uIn, NWest, west, > NULL);CHKERRQ(ierr); > in order to impose a dirichlet condition for the faces labeled by > the correct value (west=1, south=3,...). 
> > However, the function "uIn()" is called in all the Elements > containing the boundary faces, and thus impose the values at nodes > that are not in the labeled faces. > Is it a normal behavior ? I then have to test the position of the > node calling uIn, in order to impose the good value. > I have this problem for a Poiseuille flow, where at 2 corner > Elements i have a zero velocity dirichlet condition (wall) and a > In flow velocity one. > > > I believe I have fixed this in knepley/fix-plex-bc-multiple which > should be merged soon. Do you know how to merge that branch and try? > > ? Thanks, > > ? ? ?Matt > > The pressure is then very high at the corner nodes of those 2 > Elements. > Do you think my pressure problem comes from there ? (The velocity > field is correct) > > Many thanks, > > Regards, > > Yann > > PS : i'm using those runtime options : > -vel_petscspace_order 2 -pres_petscspace_order 1 \ > -ksp_type fgmres -pc_type fieldsplit -pc_fieldsplit_type schur > -pc_fieldsplit_schur_fact_type full? \ > -fieldsplit_velocity_pc_type lu -fieldsplit_pressure_ksp_rtol > 1.0e-10 -fieldsplit_pressure_pc_type jacobi > > > --- > L'absence de virus dans ce courrier ?lectronique a ?t? v?rifi?e > par le logiciel antivirus Avast. > https://www.avast.com/antivirus > > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which > their experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ -- ___________________________ Yann JOBIC HPC engineer IUSTI-CNRS UMR 7343 - Polytech Marseille Technop?le de Ch?teau Gombert 5 rue Enrico Fermi 13453 Marseille cedex 13 Tel : (33) 4 91 10 69 43 Fax : (33) 4 91 10 69 69 --- L'absence de virus dans ce courrier ?lectronique a ?t? v?rifi?e par le logiciel antivirus Avast. https://www.avast.com/antivirus -------------- next part -------------- An HTML attachment was scrubbed... URL: From C.Klaij at marin.nl Tue Dec 5 09:07:54 2017 From: C.Klaij at marin.nl (Klaij, Christiaan) Date: Tue, 5 Dec 2017 15:07:54 +0000 Subject: [petsc-users] segfault after recent scientific linux upgrade Message-ID: <1512486474462.7744@marin.nl> I'm running production software with petsc-3.7.5 and, among others, superlu_dist 5.1.3 on scientific linux 7.4. After a recent update of SL7.4, notably of the kernel and glibc, we found that superlu is somehow broken. Below's a backtrace of a serial example. Is this a known issue? Could you please advice on how to proceed (preferably while keeping 3.7.5 for now). Thanks, Chris $ gdb ./refresco ./core.9810 GNU gdb (GDB) Red Hat Enterprise Linux 7.6.1-100.el7 Copyright (C) 2013 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. Type "show copying" and "show warranty" for details. This GDB was configured as "x86_64-redhat-linux-gnu". For bug reporting instructions, please see: ... Reading symbols from /home/cklaij/ReFRESCO/Dev/trunk/Suites/testSuite/FlatPlate_laminar/calcs/Grid64x64/refresco...done. 
[New LWP 9810] Missing separate debuginfo for /home/cklaij/ReFRESCO/Dev/trunk/Libs/install/licensing-1.55.0/sll/lib64/libssl.so.10 Try: yum --enablerepo='*debug*' install /usr/lib/debug/.build-id/68/6a25d0a83d002183c835fa5694a8110c78d3bc.debug Missing separate debuginfo for /home/cklaij/ReFRESCO/Dev/trunk/Libs/install/licensing-1.55.0/sll/lib64/libcrypto.so.10 Try: yum --enablerepo='*debug*' install /usr/lib/debug/.build-id/68/d2958189303f421b1082abc33fd87338826c65.debug [Thread debugging using libthread_db enabled] Using host libthread_db library "/lib64/libthread_db.so.1". Core was generated by `./refresco'. Program terminated with signal 11, Segmentation fault. #0 0x00002ba501c132bc in mc64wd_dist (n=0x5213270, ne=0x2, ip=0x1, irn=0x51af520, a=0x51ef260, iperm=0x1000, num=0x7ffc545b2d94, jperm=0x51e7260, out=0x51eb260, pr=0x51ef260, q=0x51f3260, l=0x51f7260, u=0x51fb270, d__=0x5203270) at /home/cklaij/ReFRESCO/Dev/trunk/Libs/install/Linux-x86_64-Intel/SuperLU_DIST_5.1.3/SRC/mc64ad_dist.c:2322 2322 if (iperm[i__] != 0 || iperm[i0] == 0) { Missing separate debuginfos, use: debuginfo-install bzip2-libs-1.0.6-13.el7.x86_64 glibc-2.17-196.el7.x86_64 keyutils-libs-1.5.8-3.el7.x86_64 krb5-libs-1.15.1-8.el7.x86_64 libcom_err-1.42.9-10.el7.x86_64 libgcc-4.8.5-16.el7.x86_64 libselinux-2.5-11.el7.x86_64 libstdc++-4.8.5-16.el7.x86_64 libxml2-2.9.1-6.el7_2.3.x86_64 numactl-libs-2.0.9-6.el7_2.x86_64 pcre-8.32-17.el7.x86_64 xz-libs-5.2.2-1.el7.x86_64 zlib-1.2.7-17.el7.x86_64 (gdb) bt #0 0x00002ba501c132bc in mc64wd_dist (n=0x5213270, ne=0x2, ip=0x1, irn=0x51af520, a=0x51ef260, iperm=0x1000, num=0x7ffc545b2d94, jperm=0x51e7260, out=0x51eb260, pr=0x51ef260, q=0x51f3260, l=0x51f7260, u=0x51fb270, d__=0x5203270) at /home/cklaij/ReFRESCO/Dev/trunk/Libs/install/Linux-x86_64-Intel/SuperLU_DIST_5.1.3/SRC/mc64ad_dist.c:2322 #1 0x00002ba501c0ef2b in mc64ad_dist (job=0x5213270, n=0x2, ne=0x1, ip=0x51af520, irn=0x51ef260, a=0x1000, num=0x7ffc545b2db0, cperm=0x51fb270, liw=0x5187d10, iw=0x51c3130, ldw=0x51af520, dw=0x517b570, icntl=0x51e7260, info=0x2ba501c2e556 ) at /home/cklaij/ReFRESCO/Dev/trunk/Libs/install/Linux-x86_64-Intel/SuperLU_DIST_5.1.3/SRC/mc64ad_dist.c:596 #2 0x00002ba501c2e556 in dldperm_dist (job=0, n=0, nnz=0, colptr=0x51af520, adjncy=0x51ef260, nzval=0x1000, perm=0x4f00, u=0x1000, v=0x517b001) at /home/cklaij/ReFRESCO/Dev/trunk/Libs/install/Linux-x86_64-Intel/SuperLU_DIST_5.1.3/SRC/dldperm_dist.c:141 #3 0x00002ba501c26296 in pdgssvx_ABglobal (options=0x5213270, A=0x2, ScalePermstruct=0x1, B=0x51af520, ldb=85914208, nrhs=4096, grid=0x516da30, LUstruct=0x517af40, berr=0x1000, stat=0x2ba500b36a7d , info=0x517af58) at /home/cklaij/ReFRESCO/Dev/trunk/Libs/install/Linux-x86_64-Intel/SuperLU_DIST_5.1.3/SRC/pdgssvx_ABglobal.c:716 #4 0x00002ba500b36a7d in MatLUFactorNumeric_SuperLU_DIST (F=0x5213270, A=0x2, ---Type to continue, or q to quit--- info=0x1) at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/mat/impls/aij/mpi/superlu_dist/superlu_dist.c:419 #5 0x00002ba500b45a1a in MatLUFactorNumeric (fact=0x5213270, mat=0x2, info=0x1) at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/mat/interface/matrix.c:2996 #6 0x00002ba500e9e6c7 in PCSetUp_LU (pc=0x5213270) at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/ksp/pc/impls/factor/lu/lu.c:172 #7 0x00002ba500ded084 in PCSetUp (pc=0x5213270) at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/ksp/pc/interface/precon.c:968 #8 0x00002ba500f2968d in KSPSetUp (ksp=0x5213270) at 
/home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/ksp/ksp/interface/itfunc.c:390 #9 0x00002ba500f257be in KSPSolve (ksp=0x5213270, b=0x2, x=0x4193510) at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/ksp/ksp/interface/itfunc.c:599 #10 0x00002ba500f3e142 in kspsolve_ (ksp=0x5213270, b=0x2, x=0x1, __ierr=0x51af520) at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/ksp/ksp/interface/ftn-auto/itfuncf.c:261 ---Type to continue, or q to quit--- #11 0x0000000000bccf71 in petsc_solvers::petsc_solvers_solve ( regname='massTransport', rhs_c=..., phi_c=..., tol=0.01, maxiter=500, res0=-9.2559631349317831e+61, usediter=0, .tmp.REGNAME.len_V$1790=13) at petsc_solvers.F90:580 #12 0x0000000000c2c9c5 in mass_momentum::mass_momentum_pressureprediction () at mass_momentum.F90:989 #13 0x0000000000c0ffc1 in mass_momentum::mass_momentum_core () at mass_momentum.F90:626 #14 0x0000000000c26a2c in mass_momentum::mass_momentum_systempcapply ( aa_system=76390912, xx_system=68983024, rr_system=68984544, ierr=0) at mass_momentum.F90:919 #15 0x00002ba500eaa763 in ourshellapply (pc=0x48da200, x=0x41c98f0, y=0x41c9ee0) at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/ksp/pc/impls/shell/ftn-custom/zshellpcf.c:41 #16 0x00002ba500ea79be in PCApply_Shell (pc=0x5213270, x=0x2, y=0x1) at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/ksp/pc/impls/shell/shellpc.c:124 #17 0x00002ba500df1800 in PCApply (pc=0x5213270, x=0x2, y=0x1) at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/ksp/pc/interface/precon.c:482 #18 0x00002ba500f2592a in KSPSolve (ksp=0x5213270, b=0x2, x=0x41c9ee0) at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/ksp/ksp/interf---Type to continue, or q to quit--- ace/itfunc.c:631 #19 0x00002ba500f3e142 in kspsolve_ (ksp=0x5213270, b=0x2, x=0x1, __ierr=0x51af520) at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/ksp/ksp/interface/ftn-auto/itfuncf.c:261 #20 0x0000000000c1b0ea in mass_momentum::mass_momentum_krylov () at mass_momentum.F90:777 #21 0x0000000000c0d242 in mass_momentum::mass_momentum_simple () at mass_momentum.F90:548 #22 0x0000000000c0841f in mass_momentum::mass_momentum_solve () at mass_momentum.F90:465 #23 0x000000000041b5ec in refresco () at refresco.F90:259 #24 0x000000000041999e in main () #25 0x00002ba508c98c05 in __libc_start_main () from /lib64/libc.so.6 #26 0x00000000004198a3 in _start () (gdb) dr. ir. Christiaan Klaij | Senior Researcher | Research & Development MARIN | T +31 317 49 33 44 | mailto:C.Klaij at marin.nl | http://www.marin.nl MARIN news: http://www.marin.nl/web/News/News-items/Seminar-Blauwe-toekomst-versnellen-van-innovaties-door-samenwerken.htm From fdkong.jd at gmail.com Tue Dec 5 09:30:56 2017 From: fdkong.jd at gmail.com (Fande Kong) Date: Tue, 5 Dec 2017 08:30:56 -0700 Subject: [petsc-users] segfault after recent scientific linux upgrade In-Reply-To: <1512486474462.7744@marin.nl> References: <1512486474462.7744@marin.nl> Message-ID: I would like to suggest you to use PETSc-3.8.x. Then the bug should go away. It is a known bug related to the reuse of the factorization pattern. Fande, On Tue, Dec 5, 2017 at 8:07 AM, Klaij, Christiaan wrote: > I'm running production software with petsc-3.7.5 and, among > others, superlu_dist 5.1.3 on scientific linux 7.4. > > After a recent update of SL7.4, notably of the kernel and glibc, > we found that superlu is somehow broken. Below's a backtrace of a > serial example. Is this a known issue? 
Could you please advice on > how to proceed (preferably while keeping 3.7.5 for now). > > Thanks, > Chris > > $ gdb ./refresco ./core.9810 > GNU gdb (GDB) Red Hat Enterprise Linux 7.6.1-100.el7 > Copyright (C) 2013 Free Software Foundation, Inc. > License GPLv3+: GNU GPL version 3 or later html> > This is free software: you are free to change and redistribute it. > There is NO WARRANTY, to the extent permitted by law. Type "show copying" > and "show warranty" for details. > This GDB was configured as "x86_64-redhat-linux-gnu". > For bug reporting instructions, please see: > ... > Reading symbols from /home/cklaij/ReFRESCO/Dev/trunk/Suites/testSuite/ > FlatPlate_laminar/calcs/Grid64x64/refresco...done. > [New LWP 9810] > Missing separate debuginfo for /home/cklaij/ReFRESCO/Dev/ > trunk/Libs/install/licensing-1.55.0/sll/lib64/libssl.so.10 > Try: yum --enablerepo='*debug*' install /usr/lib/debug/.build-id/68/ > 6a25d0a83d002183c835fa5694a8110c78d3bc.debug > Missing separate debuginfo for /home/cklaij/ReFRESCO/Dev/ > trunk/Libs/install/licensing-1.55.0/sll/lib64/libcrypto.so.10 > Try: yum --enablerepo='*debug*' install /usr/lib/debug/.build-id/68/ > d2958189303f421b1082abc33fd87338826c65.debug > [Thread debugging using libthread_db enabled] > Using host libthread_db library "/lib64/libthread_db.so.1". > Core was generated by `./refresco'. > Program terminated with signal 11, Segmentation fault. > #0 0x00002ba501c132bc in mc64wd_dist (n=0x5213270, ne=0x2, ip=0x1, > irn=0x51af520, a=0x51ef260, iperm=0x1000, num=0x7ffc545b2d94, > jperm=0x51e7260, out=0x51eb260, pr=0x51ef260, q=0x51f3260, l=0x51f7260, > u=0x51fb270, d__=0x5203270) > at /home/cklaij/ReFRESCO/Dev/trunk/Libs/install/Linux-x86_ > 64-Intel/SuperLU_DIST_5.1.3/SRC/mc64ad_dist.c:2322 > 2322 if (iperm[i__] != 0 || iperm[i0] == 0) { > Missing separate debuginfos, use: debuginfo-install > bzip2-libs-1.0.6-13.el7.x86_64 glibc-2.17-196.el7.x86_64 > keyutils-libs-1.5.8-3.el7.x86_64 krb5-libs-1.15.1-8.el7.x86_64 > libcom_err-1.42.9-10.el7.x86_64 libgcc-4.8.5-16.el7.x86_64 > libselinux-2.5-11.el7.x86_64 libstdc++-4.8.5-16.el7.x86_64 > libxml2-2.9.1-6.el7_2.3.x86_64 numactl-libs-2.0.9-6.el7_2.x86_64 > pcre-8.32-17.el7.x86_64 xz-libs-5.2.2-1.el7.x86_64 zlib-1.2.7-17.el7.x86_64 > (gdb) bt > #0 0x00002ba501c132bc in mc64wd_dist (n=0x5213270, ne=0x2, ip=0x1, > irn=0x51af520, a=0x51ef260, iperm=0x1000, num=0x7ffc545b2d94, > jperm=0x51e7260, out=0x51eb260, pr=0x51ef260, q=0x51f3260, l=0x51f7260, > u=0x51fb270, d__=0x5203270) > at /home/cklaij/ReFRESCO/Dev/trunk/Libs/install/Linux-x86_ > 64-Intel/SuperLU_DIST_5.1.3/SRC/mc64ad_dist.c:2322 > #1 0x00002ba501c0ef2b in mc64ad_dist (job=0x5213270, n=0x2, ne=0x1, > ip=0x51af520, irn=0x51ef260, a=0x1000, num=0x7ffc545b2db0, > cperm=0x51fb270, liw=0x5187d10, iw=0x51c3130, ldw=0x51af520, > dw=0x517b570, > icntl=0x51e7260, info=0x2ba501c2e556 ) > at /home/cklaij/ReFRESCO/Dev/trunk/Libs/install/Linux-x86_ > 64-Intel/SuperLU_DIST_5.1.3/SRC/mc64ad_dist.c:596 > #2 0x00002ba501c2e556 in dldperm_dist (job=0, n=0, nnz=0, > colptr=0x51af520, > adjncy=0x51ef260, nzval=0x1000, perm=0x4f00, u=0x1000, v=0x517b001) > at /home/cklaij/ReFRESCO/Dev/trunk/Libs/install/Linux-x86_ > 64-Intel/SuperLU_DIST_5.1.3/SRC/dldperm_dist.c:141 > #3 0x00002ba501c26296 in pdgssvx_ABglobal (options=0x5213270, A=0x2, > ScalePermstruct=0x1, B=0x51af520, ldb=85914208, nrhs=4096, > grid=0x516da30, > LUstruct=0x517af40, berr=0x1000, > stat=0x2ba500b36a7d , > info=0x517af58) > at /home/cklaij/ReFRESCO/Dev/trunk/Libs/install/Linux-x86_ > 
64-Intel/SuperLU_DIST_5.1.3/SRC/pdgssvx_ABglobal.c:716 > #4 0x00002ba500b36a7d in MatLUFactorNumeric_SuperLU_DIST (F=0x5213270, > A=0x2, > ---Type to continue, or q to quit--- > info=0x1) > at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/ > src/mat/impls/aij/mpi/superlu_dist/superlu_dist.c:419 > #5 0x00002ba500b45a1a in MatLUFactorNumeric (fact=0x5213270, mat=0x2, > info=0x1) > at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/ > src/mat/interface/matrix.c:2996 > #6 0x00002ba500e9e6c7 in PCSetUp_LU (pc=0x5213270) > at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/ > src/ksp/pc/impls/factor/lu/lu.c:172 > #7 0x00002ba500ded084 in PCSetUp (pc=0x5213270) > at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/ > src/ksp/pc/interface/precon.c:968 > #8 0x00002ba500f2968d in KSPSetUp (ksp=0x5213270) > at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/ > src/ksp/ksp/interface/itfunc.c:390 > #9 0x00002ba500f257be in KSPSolve (ksp=0x5213270, b=0x2, x=0x4193510) > at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/ > src/ksp/ksp/interface/itfunc.c:599 > #10 0x00002ba500f3e142 in kspsolve_ (ksp=0x5213270, b=0x2, x=0x1, > __ierr=0x51af520) > at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/ > src/ksp/ksp/interface/ftn-auto/itfuncf.c:261 > ---Type to continue, or q to quit--- > #11 0x0000000000bccf71 in petsc_solvers::petsc_solvers_solve ( > regname='massTransport', rhs_c=..., phi_c=..., tol=0.01, maxiter=500, > res0=-9.2559631349317831e+61, usediter=0, .tmp.REGNAME.len_V$1790=13) > at petsc_solvers.F90:580 > #12 0x0000000000c2c9c5 in mass_momentum::mass_momentum_pressureprediction > () > at mass_momentum.F90:989 > #13 0x0000000000c0ffc1 in mass_momentum::mass_momentum_core () > at mass_momentum.F90:626 > #14 0x0000000000c26a2c in mass_momentum::mass_momentum_systempcapply ( > aa_system=76390912, xx_system=68983024, rr_system=68984544, ierr=0) > at mass_momentum.F90:919 > #15 0x00002ba500eaa763 in ourshellapply (pc=0x48da200, x=0x41c98f0, > y=0x41c9ee0) > at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/ > src/ksp/pc/impls/shell/ftn-custom/zshellpcf.c:41 > #16 0x00002ba500ea79be in PCApply_Shell (pc=0x5213270, x=0x2, y=0x1) > at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/ > src/ksp/pc/impls/shell/shellpc.c:124 > #17 0x00002ba500df1800 in PCApply (pc=0x5213270, x=0x2, y=0x1) > at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/ > src/ksp/pc/interface/precon.c:482 > #18 0x00002ba500f2592a in KSPSolve (ksp=0x5213270, b=0x2, x=0x41c9ee0) > at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/ksp/ksp/interf---Type > to continue, or q to quit--- > ace/itfunc.c:631 > #19 0x00002ba500f3e142 in kspsolve_ (ksp=0x5213270, b=0x2, x=0x1, > __ierr=0x51af520) > at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/ > src/ksp/ksp/interface/ftn-auto/itfuncf.c:261 > #20 0x0000000000c1b0ea in mass_momentum::mass_momentum_krylov () > at mass_momentum.F90:777 > #21 0x0000000000c0d242 in mass_momentum::mass_momentum_simple () > at mass_momentum.F90:548 > #22 0x0000000000c0841f in mass_momentum::mass_momentum_solve () > at mass_momentum.F90:465 > #23 0x000000000041b5ec in refresco () at refresco.F90:259 > #24 0x000000000041999e in main () > #25 0x00002ba508c98c05 in __libc_start_main () from /lib64/libc.so.6 > #26 0x00000000004198a3 in _start () > (gdb) > > > dr. ir. 
Christiaan Klaij | Senior Researcher | Research & Development > MARIN | T +31 317 49 33 44 | mailto:C.Klaij at marin.nl | http://www.marin.nl > > MARIN news: http://www.marin.nl/web/News/News-items/Seminar-Blauwe- > toekomst-versnellen-van-innovaties-door-samenwerken.htm > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From wenlonggong at gmail.com Tue Dec 5 09:32:17 2017 From: wenlonggong at gmail.com (Wenlong Gong) Date: Tue, 5 Dec 2017 09:32:17 -0600 Subject: [petsc-users] Incomplete Cholesky Factorization in petsc In-Reply-To: References: Message-ID: Barry, Thank you for the reply! It makes sense If two IC() function returns slightly different factored matrix. But the IC factor matrix from PETSc is not even close to what I get from ichol in Matlab. Ultimately I need this incomplete cholesky factorization function wrapped in R once it gives correct result. Not sure if I used the easiest way to get IC so I also seek for help on getting a concise version of IC(). -Wendy On Mon, Dec 4, 2017 at 10:07 PM, Smith, Barry F. wrote: > > I'm not sure what your goal is. In general two different IC(0) codes > might produce slightly different factorizations based on implementation > details even if they claim to run the same algorithm so I don't think there > is a reason to try to compare the factors they produce. > > In addition PETSc, as well as most IC/ILU/LU codes store the factored > matrices in a "non-conventional" form that is optimized for performance so > it is not easy to just pull out the "factors" to look at them or compare > them. > > Barry > > > > On Dec 4, 2017, at 9:07 PM, Wenlong Gong wrote: > > > > Hello, > > > > I'm trying to use the Incomplete Cholesky Factorization for a sparse > matrix in petsc. I started with a 10*10 matrix and used ksp and pc in order > to get the ICC(0) factor, with no fill-in, natural ordering. However, the > returned factor matrix does not match with the answer I got from matlab > ichol() function. > > > > The code with the hard-coded data is attached here. I would appreciate > if anyone could help check if I did anything wrong.Please let me know if > there is easier way to get this incomplete cholesky factor. Thanks! > > > > Best regards, > > Wendy > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hzhang at mcs.anl.gov Tue Dec 5 09:38:31 2017 From: hzhang at mcs.anl.gov (Hong) Date: Tue, 5 Dec 2017 09:38:31 -0600 Subject: [petsc-users] MatPtAP problem after subsequent call to MatDuplicate In-Reply-To: <5A26A797.60901@gmail.com> References: <5A26A797.60901@gmail.com> Message-ID: Samuel: You try to do following: 1) Create A; 2) Create P; 3) C = PtAP: CALL MatPtAP(matA,matP,MAT_INITIAL_MATRIX,PETSC_DEFAULT_REAL, matC,ierr); 4) MatDestroy(matA,ierr); 5) MatDuplicate(matC,MAT_COPY_VALUES,matA,ierr); 6) MatView(matA,PETSC_VIEWER_STDOUT_WORLD,ierr); The error occurs at (6). Do you get any error for MatView(matC)? C has some special data structures as a matrix product which could be lost during MatDuplicate for matA. It might be a bug in our code. I'll check it. Hong > Hi there, > > I am getting error messages after using MatPtAP to create a new matrix C = > Pt*A*P and then trying to assign A=C. The following is a minimal example > reproducing the problem: > > #include "slepc/finclude/slepc.h" > USE slepcsys > USE slepceps > IMPLICIT NONE > LOGICAL :: cause_error > ! 
--- pure PETSc > PetscErrorCode :: ierr > PetscScalar :: vals(3,3), val > Mat :: matA, matP, matC > PetscInt :: m,idxn(3),idxm(3),idone(1) > > ! initialize SLEPc & Petsc etc. > CALL SlepcInitialize(PETSC_NULL_CHARACTER,ierr) > > ! Set up a new matrix > m = 3 > CALL MatCreate(PETSC_COMM_WORLD,matA,ierr); CHKERRQ(ierr); > CALL MatSetType(matA,MATMPIAIJ,ierr); CHKERRQ(ierr); > CALL MatSetSizes(matA,PETSC_DECIDE,PETSC_DECIDE,m,m,ierr); > CHKERRQ(ierr); > CALL MatMPIAIJSetPreallocation(matA,3,PETSC_NULL_INTEGER,3,PETSC_NULL_INTEGER,ierr); > CHKERRQ(ierr); > > ! [.... call to MatSetValues to set values of matA] > > ! assemble matrix > CALL MatAssemblyBegin(matA,MAT_FINAL_ASSEMBLY,ierr); CHKERRQ(ierr); > CALL MatAssemblyEnd(matA,MAT_FINAL_ASSEMBLY,ierr); CHKERRQ(ierr); > > ! duplicate matrix > CALL MatDuplicate(matA,MAT_DO_NOT_COPY_VALUES,matP,ierr); > CHKERRQ(ierr); > > ! [.... call to MatSetValues to set values of matP] > > ! assemble matrix > CALL MatAssemblyBegin(matP,MAT_FINAL_ASSEMBLY,ierr); CHKERRQ(ierr); > CALL MatAssemblyEnd(matP,MAT_FINAL_ASSEMBLY,ierr); CHKERRQ(ierr); > > ! compute C=Pt*A*P > cause_error = .TRUE. ! set to .TRUE. to cause error, .FALSE. seems > to work fine > IF(.NOT.cause_error) THEN > CALL MatDuplicate(matA,MAT_COPY_VALUES,matC,ierr); CHKERRQ(ierr); > ELSE > CALL MatPtAP(matA,matP,MAT_INITIAL_MATRIX,PETSC_DEFAULT_REAL,matC,ierr); > CHKERRQ(ierr); > END IF > > ! destroy matA and duplicate A=C > CALL MatDestroy(matA,ierr); CHKERRQ(ierr); > CALL MatDuplicate(matC,MAT_COPY_VALUES,matA,ierr); CHKERRQ(ierr); > > ! display resulting matrix A > CALL MatView(matA,PETSC_VIEWER_STDOUT_WORLD,ierr); CHKERRQ(ierr); > > Whether A and P are assigned any values doesn't seem to matter at all. The > error message I'm getting is: > > Mat Object: 1 MPI processes > type: mpiaij > [0]PETSC ERROR: ------------------------------ > ------------------------------------------ > [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, > probably memory access out of range > [0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger > [0]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/ > documentation/faq.html#valgrind > [0]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS > X to find memory corruption errors > [0]PETSC ERROR: likely location of problem given in stack below > [0]PETSC ERROR: --------------------- Stack Frames > ------------------------------------ > [0]PETSC ERROR: Note: The EXACT line numbers in the stack are not > available, > [0]PETSC ERROR: INSTEAD the line number of the start of the function > [0]PETSC ERROR: is given. > [0]PETSC ERROR: [0] MatView_MPIAIJ_PtAP line 23 > /home/lanthale/Progs/petsc-3.8.2/src/mat/impls/aij/mpi/mpiptap.c > [0]PETSC ERROR: [0] MatView line 949 /home/lanthale/Progs/petsc-3. > 8.2/src/mat/interface/matrix.c > [0]PETSC ERROR: --------------------- Error Message > -------------------------------------------------------------- > [0]PETSC ERROR: Signal received > [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html > for trouble shooting. > [0]PETSC ERROR: Petsc Release Version 3.8.2, Nov, 09, 2017 > > Could someone maybe tell me where I'm doing things wrong? The documentation > for MatPtAP > > says that C will be created and that this routine is "currently only > implemented for pairs of AIJ matrices and classes which inherit from AIJ". > Does this maybe exclude MPIAIJ matrices? > > Thanks, > Samuel > -------------- next part -------------- An HTML attachment was scrubbed... 
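One way around the crash Hong points to -- the PtAP product carrying internal data that MatDuplicate does not reproduce -- is to avoid the duplicate altogether and let the old handle simply take over the product. A hypothetical helper sketching that idea in C (untested; whether it actually sidesteps this particular bug is an assumption):

  #include <petscmat.h>

  /* hypothetical helper: replace A by Pt*A*P without calling MatDuplicate
   * on the product; the old matrix is destroyed and the handle re-pointed. */
  PetscErrorCode ReplaceByPtAP(Mat *A, Mat P)
  {
    Mat            C;
    PetscErrorCode ierr;

    ierr = MatPtAP(*A, P, MAT_INITIAL_MATRIX, PETSC_DEFAULT, &C);CHKERRQ(ierr);
    ierr = MatDestroy(A);CHKERRQ(ierr);   /* drop the old operator          */
    *A   = C;                             /* take over the product, no copy */
    return 0;
  }

In Fortran the same idea is just matA = matC after MatDestroy(matA,ierr), since Mat is an opaque handle; that keeps the product's special data intact instead of trying to copy it.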
URL: From s.lanthaler at gmail.com Tue Dec 5 09:42:21 2017 From: s.lanthaler at gmail.com (Samuel Lanthaler) Date: Tue, 5 Dec 2017 16:42:21 +0100 Subject: [petsc-users] MatPtAP problem after subsequent call to MatDuplicate In-Reply-To: References: <5A26A797.60901@gmail.com> Message-ID: <5A26BE5D.6030005@gmail.com> Thank you for your swift reply, Hong! Yes, what you said is exactly what I'm doing. No, I don't get an error when doing a MatView of matC. Samuel On 12/05/2017 04:38 PM, Hong wrote: > Samuel: > You try to do following: > 1) Create A; > 2) Create P; > 3) C = PtAP: > CALL MatPtAP(matA,matP,MAT_INITIAL_MATRIX,PETSC_DEFAULT_REAL,matC,ierr); > 4) MatDestroy(matA,ierr); > 5) MatDuplicate(matC,MAT_COPY_VALUES,matA,ierr); > 6) MatView(matA,PETSC_VIEWER_STDOUT_WORLD,ierr); > > The error occurs at (6). Do you get any error for > MatView(matC)? > > C has some special data structures as a matrix product which could be > lost during > MatDuplicate for matA. It might be a bug in our code. I'll check it. > > Hong > > Hi there, > > I am getting error messages after using MatPtAP to create a new > matrix C = Pt*A*P and then trying to assign A=C. The following is > a minimal example reproducing the problem: > > #include "slepc/finclude/slepc.h" > USE slepcsys > USE slepceps > IMPLICIT NONE > LOGICAL :: cause_error > ! --- pure PETSc > PetscErrorCode :: ierr > PetscScalar :: vals(3,3), val > Mat :: matA, matP, matC > PetscInt :: m,idxn(3),idxm(3),idone(1) > > ! initialize SLEPc & Petsc etc. > CALL SlepcInitialize(PETSC_NULL_CHARACTER,ierr) > > ! Set up a new matrix > m = 3 > CALL MatCreate(PETSC_COMM_WORLD,matA,ierr); CHKERRQ(ierr); > CALL MatSetType(matA,MATMPIAIJ,ierr); CHKERRQ(ierr); > CALL > MatSetSizes(matA,PETSC_DECIDE,PETSC_DECIDE,m,m,ierr); > CHKERRQ(ierr); > CALL > MatMPIAIJSetPreallocation(matA,3,PETSC_NULL_INTEGER,3,PETSC_NULL_INTEGER,ierr); > CHKERRQ(ierr); > > ! [.... call to MatSetValues to set values of matA] > > ! assemble matrix > CALL MatAssemblyBegin(matA,MAT_FINAL_ASSEMBLY,ierr); > CHKERRQ(ierr); > CALL MatAssemblyEnd(matA,MAT_FINAL_ASSEMBLY,ierr); > CHKERRQ(ierr); > > ! duplicate matrix > CALL > MatDuplicate(matA,MAT_DO_NOT_COPY_VALUES,matP,ierr); > CHKERRQ(ierr); > > ! [.... call to MatSetValues to set values of matP] > > ! assemble matrix > CALL MatAssemblyBegin(matP,MAT_FINAL_ASSEMBLY,ierr); > CHKERRQ(ierr); > CALL MatAssemblyEnd(matP,MAT_FINAL_ASSEMBLY,ierr); > CHKERRQ(ierr); > > ! compute C=Pt*A*P > cause_error = .TRUE. ! set to .TRUE. to cause error, > .FALSE. seems to work fine > IF(.NOT.cause_error) THEN > CALL MatDuplicate(matA,MAT_COPY_VALUES,matC,ierr); > CHKERRQ(ierr); > ELSE > CALL > MatPtAP(matA,matP,MAT_INITIAL_MATRIX,PETSC_DEFAULT_REAL,matC,ierr); > CHKERRQ(ierr); > END IF > > ! destroy matA and duplicate A=C > CALL MatDestroy(matA,ierr); CHKERRQ(ierr); > CALL MatDuplicate(matC,MAT_COPY_VALUES,matA,ierr); > CHKERRQ(ierr); > > ! display resulting matrix A > CALL MatView(matA,PETSC_VIEWER_STDOUT_WORLD,ierr); > CHKERRQ(ierr); > > Whether A and P are assigned any values doesn't seem to matter at > all. 
The error message I'm getting is: > > Mat Object: 1 MPI processes > type: mpiaij > [0]PETSC ERROR: > ------------------------------------------------------------------------ > [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation > Violation, probably memory access out of range > [0]PETSC ERROR: Try option -start_in_debugger or > -on_error_attach_debugger > [0]PETSC ERROR: or see > http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind > > [0]PETSC ERROR: or try http://valgrind.org on GNU/linux and > Apple Mac OS X to find memory corruption errors > [0]PETSC ERROR: likely location of problem given in stack below > [0]PETSC ERROR: --------------------- Stack Frames > ------------------------------------ > [0]PETSC ERROR: Note: The EXACT line numbers in the stack are > not available, > [0]PETSC ERROR: INSTEAD the line number of the start of > the function > [0]PETSC ERROR: is given. > [0]PETSC ERROR: [0] MatView_MPIAIJ_PtAP line 23 > /home/lanthale/Progs/petsc-3.8.2/src/mat/impls/aij/mpi/mpiptap.c > [0]PETSC ERROR: [0] MatView line 949 > /home/lanthale/Progs/petsc-3.8.2/src/mat/interface/matrix.c > [0]PETSC ERROR: --------------------- Error Message > -------------------------------------------------------------- > [0]PETSC ERROR: Signal received > [0]PETSC ERROR: See > http://www.mcs.anl.gov/petsc/documentation/faq.html > for > trouble shooting. > [0]PETSC ERROR: Petsc Release Version 3.8.2, Nov, 09, 2017 > > Could someone maybe tell me where I'm doing things wrong? The > documentation for MatPtAP > > says that C will be created and that this routine is "currently > only implemented for pairs of AIJ matrices and classes which > inherit from AIJ". Does this maybe exclude MPIAIJ matrices? > > Thanks, > Samuel > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hzhang at mcs.anl.gov Tue Dec 5 09:50:52 2017 From: hzhang at mcs.anl.gov (Hong) Date: Tue, 5 Dec 2017 09:50:52 -0600 Subject: [petsc-users] Incomplete Cholesky Factorization in petsc In-Reply-To: References: Message-ID: Wendy, Petsc factor matrix as A = U^T * U, U: upper triangular matrix. When A is stored in seqaij format, we store U as /* icc() under revised new data structure. Factored arrays bj and ba are stored as U(0,:),...,U(i,:),U(n-1,:) ui=fact->i is an array of size n+1, in which ui+ ui[i]: points to 1st entry of U(i,:),i=0,...,n-1 ui[n]: points to U(n-1,n-1)+1 udiag=fact->diag is an array of size n,in which udiag[i]: points to diagonal of U(i,:), i=0,...,n-1 U(i,:) contains udiag[i] as its last entry, i.e., U(i,:) = (u[i,i+1],...,u[i,n-1],diag[i]) */ see petsc/src/mat/impls/aij/seq/aijfact.c: MatICCFactorSymbolic_SeqAIJ() Petsc matrix factors only work for Petsc MatSolve(). Hong On Tue, Dec 5, 2017 at 9:32 AM, Wenlong Gong wrote: > Barry, Thank you for the reply! It makes sense If two IC() function > returns slightly different factored matrix. But the IC factor matrix from > PETSc is not even close to what I get from ichol in Matlab. > Ultimately I need this incomplete cholesky factorization function wrapped > in R once it gives correct result. Not sure if I used the easiest way to > get IC so I also seek for help on getting a concise version of IC(). > > -Wendy > > > On Mon, Dec 4, 2017 at 10:07 PM, Smith, Barry F. > wrote: > >> >> I'm not sure what your goal is. 
In general two different IC(0) codes >> might produce slightly different factorizations based on implementation >> details even if they claim to run the same algorithm so I don't think there >> is a reason to try to compare the factors they produce. >> >> In addition PETSc, as well as most IC/ILU/LU codes store the factored >> matrices in a "non-conventional" form that is optimized for performance so >> it is not easy to just pull out the "factors" to look at them or compare >> them. >> >> Barry >> >> >> > On Dec 4, 2017, at 9:07 PM, Wenlong Gong wrote: >> > >> > Hello, >> > >> > I'm trying to use the Incomplete Cholesky Factorization for a sparse >> matrix in petsc. I started with a 10*10 matrix and used ksp and pc in order >> to get the ICC(0) factor, with no fill-in, natural ordering. However, the >> returned factor matrix does not match with the answer I got from matlab >> ichol() function. >> > >> > The code with the hard-coded data is attached here. I would appreciate >> if anyone could help check if I did anything wrong.Please let me know if >> there is easier way to get this incomplete cholesky factor. Thanks! >> > >> > Best regards, >> > Wendy >> > >> > >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From wenlonggong at gmail.com Tue Dec 5 11:25:20 2017 From: wenlonggong at gmail.com (Wenlong Gong) Date: Tue, 5 Dec 2017 11:25:20 -0600 Subject: [petsc-users] Incomplete Cholesky Factorization in petsc In-Reply-To: References: Message-ID: Hong, Thank you for the clarification! Is there a function to get row/col indices and non-zeros vector of the factored matrix , instead of extracting the elements one by one? -Wendy On Tue, Dec 5, 2017 at 9:50 AM, Hong wrote: > Wendy, > Petsc factor matrix as > A = U^T * U, U: upper triangular matrix. > When A is stored in seqaij format, we store U as > /* > icc() under revised new data structure. > Factored arrays bj and ba are stored as > U(0,:),...,U(i,:),U(n-1,:) > > ui=fact->i is an array of size n+1, in which > ui+ > ui[i]: points to 1st entry of U(i,:),i=0,...,n-1 > ui[n]: points to U(n-1,n-1)+1 > > udiag=fact->diag is an array of size n,in which > udiag[i]: points to diagonal of U(i,:), i=0,...,n-1 > > U(i,:) contains udiag[i] as its last entry, i.e., > U(i,:) = (u[i,i+1],...,u[i,n-1],diag[i]) > */ > > see petsc/src/mat/impls/aij/seq/aijfact.c: MatICCFactorSymbolic_SeqAIJ() > > Petsc matrix factors only work for Petsc MatSolve(). > > Hong > > On Tue, Dec 5, 2017 at 9:32 AM, Wenlong Gong > wrote: > >> Barry, Thank you for the reply! It makes sense If two IC() function >> returns slightly different factored matrix. But the IC factor matrix from >> PETSc is not even close to what I get from ichol in Matlab. >> Ultimately I need this incomplete cholesky factorization function wrapped >> in R once it gives correct result. Not sure if I used the easiest way to >> get IC so I also seek for help on getting a concise version of IC(). >> >> -Wendy >> >> >> On Mon, Dec 4, 2017 at 10:07 PM, Smith, Barry F. >> wrote: >> >>> >>> I'm not sure what your goal is. In general two different IC(0) codes >>> might produce slightly different factorizations based on implementation >>> details even if they claim to run the same algorithm so I don't think there >>> is a reason to try to compare the factors they produce. 
>>> >>> In addition PETSc, as well as most IC/ILU/LU codes store the factored >>> matrices in a "non-conventional" form that is optimized for performance so >>> it is not easy to just pull out the "factors" to look at them or compare >>> them. >>> >>> Barry >>> >>> >>> > On Dec 4, 2017, at 9:07 PM, Wenlong Gong >>> wrote: >>> > >>> > Hello, >>> > >>> > I'm trying to use the Incomplete Cholesky Factorization for a sparse >>> matrix in petsc. I started with a 10*10 matrix and used ksp and pc in order >>> to get the ICC(0) factor, with no fill-in, natural ordering. However, the >>> returned factor matrix does not match with the answer I got from matlab >>> ichol() function. >>> > >>> > The code with the hard-coded data is attached here. I would appreciate >>> if anyone could help check if I did anything wrong.Please let me know if >>> there is easier way to get this incomplete cholesky factor. Thanks! >>> > >>> > Best regards, >>> > Wendy >>> > >>> > >>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hzhang at mcs.anl.gov Tue Dec 5 12:10:27 2017 From: hzhang at mcs.anl.gov (Hong) Date: Tue, 5 Dec 2017 12:10:27 -0600 Subject: [petsc-users] Incomplete Cholesky Factorization in petsc In-Reply-To: References: Message-ID: Wendy, > > Is there a function to get row/col indices and non-zeros vector of the > factored matrix , instead of extracting the elements one by one? > You can look at MatSolve_SeqSBAIJ_1_NaturalOrdering() in petsc/src/mat/impls/sbaij/seq/sbaijfact2.c /* solve U^T*D*y = b by forward substitution */ ierr = PetscMemcpy(x,b,mbs*sizeof(PetscScalar));CHKERRQ(ierr); for (i=0; i > -Wendy > > On Tue, Dec 5, 2017 at 9:50 AM, Hong wrote: > >> Wendy, >> Petsc factor matrix as >> A = U^T * U, U: upper triangular matrix. >> When A is stored in seqaij format, we store U as >> /* >> icc() under revised new data structure. >> Factored arrays bj and ba are stored as >> U(0,:),...,U(i,:),U(n-1,:) >> >> ui=fact->i is an array of size n+1, in which >> ui+ >> ui[i]: points to 1st entry of U(i,:),i=0,...,n-1 >> ui[n]: points to U(n-1,n-1)+1 >> >> udiag=fact->diag is an array of size n,in which >> udiag[i]: points to diagonal of U(i,:), i=0,...,n-1 >> >> U(i,:) contains udiag[i] as its last entry, i.e., >> U(i,:) = (u[i,i+1],...,u[i,n-1],diag[i]) >> */ >> >> see petsc/src/mat/impls/aij/seq/aijfact.c: MatICCFactorSymbolic_SeqAIJ() >> >> Petsc matrix factors only work for Petsc MatSolve(). >> >> Hong >> >> On Tue, Dec 5, 2017 at 9:32 AM, Wenlong Gong >> wrote: >> >>> Barry, Thank you for the reply! It makes sense If two IC() function >>> returns slightly different factored matrix. But the IC factor matrix from >>> PETSc is not even close to what I get from ichol in Matlab. >>> Ultimately I need this incomplete cholesky factorization function >>> wrapped in R once it gives correct result. Not sure if I used the easiest >>> way to get IC so I also seek for help on getting a concise version of IC(). >>> >>> -Wendy >>> >>> >>> On Mon, Dec 4, 2017 at 10:07 PM, Smith, Barry F. >>> wrote: >>> >>>> >>>> I'm not sure what your goal is. In general two different IC(0) codes >>>> might produce slightly different factorizations based on implementation >>>> details even if they claim to run the same algorithm so I don't think there >>>> is a reason to try to compare the factors they produce. 
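Putting Hong's two descriptions together -- the layout of the factored U and the i / vj[j] / v[j] legend of the solve loop above -- walking the factor row by row comes down to a short loop. A hypothetical helper in C, using the array names from that description (ui, bj, ba); these live in the Mat's private data and are not exposed through the public API, which is exactly the point Barry makes, so treat this as illustration only:

  #include <petscsys.h>

  /* hypothetical helper: print the (row, col, value) triples of the factor U.
   * Per the description above, row i occupies positions ui[i] .. ui[i+1]-1
   * of bj/ba, with the diagonal slot stored last (for the sbaij solve quoted
   * above that slot holds 1/D(i)). */
  PetscErrorCode DumpFactorRows(PetscInt n, const PetscInt *ui,
                                const PetscInt *bj, const PetscScalar *ba)
  {
    PetscInt       i, k, nz;
    PetscErrorCode ierr;

    for (i = 0; i < n; i++) {
      nz = ui[i+1] - ui[i] - 1;                       /* off-diagonal count */
      for (k = 0; k < nz; k++) {
        ierr = PetscPrintf(PETSC_COMM_SELF, "U(%D,%D) = %g\n",
                           i, bj[ui[i]+k], (double)PetscRealPart(ba[ui[i]+k]));CHKERRQ(ierr);
      }
      ierr = PetscPrintf(PETSC_COMM_SELF, "U(%D,%D) = %g  (diagonal slot)\n",
                         i, i, (double)PetscRealPart(ba[ui[i]+nz]));CHKERRQ(ierr);
    }
    return 0;
  }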
>>>> >>>> In addition PETSc, as well as most IC/ILU/LU codes store the >>>> factored matrices in a "non-conventional" form that is optimized for >>>> performance so it is not easy to just pull out the "factors" to look at >>>> them or compare them. >>>> >>>> Barry >>>> >>>> >>>> > On Dec 4, 2017, at 9:07 PM, Wenlong Gong >>>> wrote: >>>> > >>>> > Hello, >>>> > >>>> > I'm trying to use the Incomplete Cholesky Factorization for a sparse >>>> matrix in petsc. I started with a 10*10 matrix and used ksp and pc in order >>>> to get the ICC(0) factor, with no fill-in, natural ordering. However, the >>>> returned factor matrix does not match with the answer I got from matlab >>>> ichol() function. >>>> > >>>> > The code with the hard-coded data is attached here. I would >>>> appreciate if anyone could help check if I did anything wrong.Please let me >>>> know if there is easier way to get this incomplete cholesky factor. Thanks! >>>> > >>>> > Best regards, >>>> > Wendy >>>> > >>>> > >>>> >>>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From wenlonggong at gmail.com Tue Dec 5 13:34:49 2017 From: wenlonggong at gmail.com (Wenlong Gong) Date: Tue, 5 Dec 2017 13:34:49 -0600 Subject: [petsc-users] Incomplete Cholesky Factorization in petsc In-Reply-To: References: Message-ID: Thanks, I'll look into it. -Wendy On Tue, Dec 5, 2017 at 12:10 PM, Hong wrote: > Wendy, > >> >> Is there a function to get row/col indices and non-zeros vector of the >> factored matrix , instead of extracting the elements one by one? >> > > You can look at MatSolve_SeqSBAIJ_1_NaturalOrdering() in > petsc/src/mat/impls/sbaij/seq/sbaijfact2.c > > /* solve U^T*D*y = b by forward substitution */ > ierr = PetscMemcpy(x,b,mbs*sizeof(PetscScalar));CHKERRQ(ierr); > for (i=0; i v = aa + ai[i]; > vj = aj + ai[i]; > xi = x[i]; > nz = ai[i+1] - ai[i] - 1; /* exclude diag[i] */ > for (j=0; j x[i] = xi*v[nz]; /* v[nz] = aa[diag[i]] = 1/D(i) */ > } > > i -- row index > vj[j] -- col index > v[j] -- matrix value > > Note: the input of this routine is a matrix icc/cholesky factor. > > Hong > >> >> -Wendy >> >> On Tue, Dec 5, 2017 at 9:50 AM, Hong wrote: >> >>> Wendy, >>> Petsc factor matrix as >>> A = U^T * U, U: upper triangular matrix. >>> When A is stored in seqaij format, we store U as >>> /* >>> icc() under revised new data structure. >>> Factored arrays bj and ba are stored as >>> U(0,:),...,U(i,:),U(n-1,:) >>> >>> ui=fact->i is an array of size n+1, in which >>> ui+ >>> ui[i]: points to 1st entry of U(i,:),i=0,...,n-1 >>> ui[n]: points to U(n-1,n-1)+1 >>> >>> udiag=fact->diag is an array of size n,in which >>> udiag[i]: points to diagonal of U(i,:), i=0,...,n-1 >>> >>> U(i,:) contains udiag[i] as its last entry, i.e., >>> U(i,:) = (u[i,i+1],...,u[i,n-1],diag[i]) >>> */ >>> >>> see petsc/src/mat/impls/aij/seq/aijfact.c: MatICCFactorSymbolic_SeqAIJ() >>> >>> Petsc matrix factors only work for Petsc MatSolve(). >>> >>> Hong >>> >>> On Tue, Dec 5, 2017 at 9:32 AM, Wenlong Gong >>> wrote: >>> >>>> Barry, Thank you for the reply! It makes sense If two IC() function >>>> returns slightly different factored matrix. But the IC factor matrix from >>>> PETSc is not even close to what I get from ichol in Matlab. >>>> Ultimately I need this incomplete cholesky factorization function >>>> wrapped in R once it gives correct result. Not sure if I used the easiest >>>> way to get IC so I also seek for help on getting a concise version of IC(). 
>>>> >>>> -Wendy >>>> >>>> >>>> On Mon, Dec 4, 2017 at 10:07 PM, Smith, Barry F. >>>> wrote: >>>> >>>>> >>>>> I'm not sure what your goal is. In general two different IC(0) codes >>>>> might produce slightly different factorizations based on implementation >>>>> details even if they claim to run the same algorithm so I don't think there >>>>> is a reason to try to compare the factors they produce. >>>>> >>>>> In addition PETSc, as well as most IC/ILU/LU codes store the >>>>> factored matrices in a "non-conventional" form that is optimized for >>>>> performance so it is not easy to just pull out the "factors" to look at >>>>> them or compare them. >>>>> >>>>> Barry >>>>> >>>>> >>>>> > On Dec 4, 2017, at 9:07 PM, Wenlong Gong >>>>> wrote: >>>>> > >>>>> > Hello, >>>>> > >>>>> > I'm trying to use the Incomplete Cholesky Factorization for a sparse >>>>> matrix in petsc. I started with a 10*10 matrix and used ksp and pc in order >>>>> to get the ICC(0) factor, with no fill-in, natural ordering. However, the >>>>> returned factor matrix does not match with the answer I got from matlab >>>>> ichol() function. >>>>> > >>>>> > The code with the hard-coded data is attached here. I would >>>>> appreciate if anyone could help check if I did anything wrong.Please let me >>>>> know if there is easier way to get this incomplete cholesky factor. Thanks! >>>>> > >>>>> > Best regards, >>>>> > Wendy >>>>> > >>>>> > >>>>> >>>>> >>>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Tue Dec 5 13:49:13 2017 From: bsmith at mcs.anl.gov (Smith, Barry F.) Date: Tue, 5 Dec 2017 19:49:13 +0000 Subject: [petsc-users] Incomplete Cholesky Factorization in petsc In-Reply-To: References: Message-ID: <6337D02A-5ED7-48AE-93F8-9660C5B7FD52@mcs.anl.gov> > On Dec 5, 2017, at 9:32 AM, Wenlong Gong wrote: > > Barry, Thank you for the reply! It makes sense If two IC() function returns slightly different factored matrix. But the IC factor matrix from PETSc is not even close to what I get from ichol in Matlab. > Ultimately I need this incomplete cholesky factorization function wrapped in R once it gives correct result. What do you mean by wrapped in R? Does R not have a ICC(0)? ICC(0) with the natural ordering is actually very simple to write. You could easily write it presumably directly in R or in C in a format callable from R. Barry > Not sure if I used the easiest way to get IC so I also seek for help on getting a concise version of IC(). > > -Wendy > > > On Mon, Dec 4, 2017 at 10:07 PM, Smith, Barry F. wrote: > > I'm not sure what your goal is. In general two different IC(0) codes might produce slightly different factorizations based on implementation details even if they claim to run the same algorithm so I don't think there is a reason to try to compare the factors they produce. > > In addition PETSc, as well as most IC/ILU/LU codes store the factored matrices in a "non-conventional" form that is optimized for performance so it is not easy to just pull out the "factors" to look at them or compare them. > > Barry > > > > On Dec 4, 2017, at 9:07 PM, Wenlong Gong wrote: > > > > Hello, > > > > I'm trying to use the Incomplete Cholesky Factorization for a sparse matrix in petsc. I started with a 10*10 matrix and used ksp and pc in order to get the ICC(0) factor, with no fill-in, natural ordering. However, the returned factor matrix does not match with the answer I got from matlab ichol() function. > > > > The code with the hard-coded data is attached here. 
I would appreciate if anyone could help check if I did anything wrong.Please let me know if there is easier way to get this incomplete cholesky factor. Thanks! > > > > Best regards, > > Wendy > > > > > > From wenlonggong at gmail.com Tue Dec 5 14:15:35 2017 From: wenlonggong at gmail.com (Wenlong Gong) Date: Tue, 5 Dec 2017 14:15:35 -0600 Subject: [petsc-users] Incomplete Cholesky Factorization in petsc In-Reply-To: <6337D02A-5ED7-48AE-93F8-9660C5B7FD52@mcs.anl.gov> References: <6337D02A-5ED7-48AE-93F8-9660C5B7FD52@mcs.anl.gov> Message-ID: My main project is written in R that's why I need a wrapper function of ICC(0) in order to call it from R. I tried the IncompleteCholesky() in rcppEigen, basically the Eigen library but it turns out to be a modified incomplete Cholesky. Beside that, I didn't find any incomplete Cholesky functions available in R so I turned to the petsc. I will apply this ICC for very big sparse matrix and hope the function to have almost linear computation complexity. I didn't write the function by myself because it might be too slow comparing to an well-written and optimized ICC() in existing C/C++ library. -Wendy On Tue, Dec 5, 2017 at 1:49 PM, Smith, Barry F. wrote: > > > > On Dec 5, 2017, at 9:32 AM, Wenlong Gong wrote: > > > > Barry, Thank you for the reply! It makes sense If two IC() function > returns slightly different factored matrix. But the IC factor matrix from > PETSc is not even close to what I get from ichol in Matlab. > > Ultimately I need this incomplete cholesky factorization function > wrapped in R once it gives correct result. > > What do you mean by wrapped in R? > > Does R not have a ICC(0)? ICC(0) with the natural ordering is actually > very simple to write. You could easily write it presumably directly in R or > in C in a format callable from R. > > Barry > > > > > Not sure if I used the easiest way to get IC so I also seek for help on > getting a concise version of IC(). > > > > -Wendy > > > > > > On Mon, Dec 4, 2017 at 10:07 PM, Smith, Barry F. > wrote: > > > > I'm not sure what your goal is. In general two different IC(0) codes > might produce slightly different factorizations based on implementation > details even if they claim to run the same algorithm so I don't think there > is a reason to try to compare the factors they produce. > > > > In addition PETSc, as well as most IC/ILU/LU codes store the factored > matrices in a "non-conventional" form that is optimized for performance so > it is not easy to just pull out the "factors" to look at them or compare > them. > > > > Barry > > > > > > > On Dec 4, 2017, at 9:07 PM, Wenlong Gong > wrote: > > > > > > Hello, > > > > > > I'm trying to use the Incomplete Cholesky Factorization for a sparse > matrix in petsc. I started with a 10*10 matrix and used ksp and pc in order > to get the ICC(0) factor, with no fill-in, natural ordering. However, the > returned factor matrix does not match with the answer I got from matlab > ichol() function. > > > > > > The code with the hard-coded data is attached here. I would appreciate > if anyone could help check if I did anything wrong.Please let me know if > there is easier way to get this incomplete cholesky factor. Thanks! > > > > > > Best regards, > > > Wendy > > > > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Tue Dec 5 14:25:18 2017 From: bsmith at mcs.anl.gov (Smith, Barry F.) 
Date: Tue, 5 Dec 2017 20:25:18 +0000 Subject: [petsc-users] Incomplete Cholesky Factorization in petsc In-Reply-To: References: <6337D02A-5ED7-48AE-93F8-9660C5B7FD52@mcs.anl.gov> Message-ID: <80EED156-2881-4C2F-A7ED-0873070FB8C3@mcs.anl.gov> > On Dec 5, 2017, at 2:15 PM, Wenlong Gong wrote: > > My main project is written in R that's why I need a wrapper function of ICC(0) in order to call it from R. I tried the IncompleteCholesky() in rcppEigen, basically the Eigen library but it turns out to be a modified incomplete Cholesky. Beside that, I didn't find any incomplete Cholesky functions available in R so I turned to the petsc. > > I will apply this ICC for very big sparse matrix and hope the function to have almost linear computation complexity. I didn't write the function by myself because it might be too slow comparing to an well-written and optimized ICC() in existing C/C++ library. Actually ICC(0) is so simple it probably won't benefit to using a "well-written and optimized ICC(0) in existing C/C++ library." Note also that it is rare that ICC would give an almost linear computation complexity since the number of iterations generally goes up with the size of the problem for ICC. Only specially well conditioned matrices will have the property that ICC(0) gives an almost linear computational complexity. Barry > > -Wendy > > On Tue, Dec 5, 2017 at 1:49 PM, Smith, Barry F. wrote: > > > > On Dec 5, 2017, at 9:32 AM, Wenlong Gong wrote: > > > > Barry, Thank you for the reply! It makes sense If two IC() function returns slightly different factored matrix. But the IC factor matrix from PETSc is not even close to what I get from ichol in Matlab. > > Ultimately I need this incomplete cholesky factorization function wrapped in R once it gives correct result. > > What do you mean by wrapped in R? > > Does R not have a ICC(0)? ICC(0) with the natural ordering is actually very simple to write. You could easily write it presumably directly in R or in C in a format callable from R. > > Barry > > > > > Not sure if I used the easiest way to get IC so I also seek for help on getting a concise version of IC(). > > > > -Wendy > > > > > > On Mon, Dec 4, 2017 at 10:07 PM, Smith, Barry F. wrote: > > > > I'm not sure what your goal is. In general two different IC(0) codes might produce slightly different factorizations based on implementation details even if they claim to run the same algorithm so I don't think there is a reason to try to compare the factors they produce. > > > > In addition PETSc, as well as most IC/ILU/LU codes store the factored matrices in a "non-conventional" form that is optimized for performance so it is not easy to just pull out the "factors" to look at them or compare them. > > > > Barry > > > > > > > On Dec 4, 2017, at 9:07 PM, Wenlong Gong wrote: > > > > > > Hello, > > > > > > I'm trying to use the Incomplete Cholesky Factorization for a sparse matrix in petsc. I started with a 10*10 matrix and used ksp and pc in order to get the ICC(0) factor, with no fill-in, natural ordering. However, the returned factor matrix does not match with the answer I got from matlab ichol() function. > > > > > > The code with the hard-coded data is attached here. I would appreciate if anyone could help check if I did anything wrong.Please let me know if there is easier way to get this incomplete cholesky factor. Thanks! 
> > > > > > Best regards, > > > Wendy > > > > > > > > > > > > From hongzhang at anl.gov Tue Dec 5 14:27:20 2017 From: hongzhang at anl.gov (Zhang, Hong) Date: Tue, 5 Dec 2017 20:27:20 +0000 Subject: [petsc-users] Incomplete Cholesky Factorization in petsc In-Reply-To: References: <6337D02A-5ED7-48AE-93F8-9660C5B7FD52@mcs.anl.gov> Message-ID: <96E94C7A-ACF9-499F-9D6C-2471894F8344@anl.gov> Why not include PETSc MatSolve() (which solves Ax=b) in your wrapper? Getting the factored matrix is not typically needed. Hong (Mr.) On Dec 5, 2017, at 2:15 PM, Wenlong Gong > wrote: My main project is written in R that's why I need a wrapper function of ICC(0) in order to call it from R. I tried the IncompleteCholesky() in rcppEigen, basically the Eigen library but it turns out to be a modified incomplete Cholesky. Beside that, I didn't find any incomplete Cholesky functions available in R so I turned to the petsc. I will apply this ICC for very big sparse matrix and hope the function to have almost linear computation complexity. I didn't write the function by myself because it might be too slow comparing to an well-written and optimized ICC() in existing C/C++ library. -Wendy On Tue, Dec 5, 2017 at 1:49 PM, Smith, Barry F. > wrote: > On Dec 5, 2017, at 9:32 AM, Wenlong Gong > wrote: > > Barry, Thank you for the reply! It makes sense If two IC() function returns slightly different factored matrix. But the IC factor matrix from PETSc is not even close to what I get from ichol in Matlab. > Ultimately I need this incomplete cholesky factorization function wrapped in R once it gives correct result. What do you mean by wrapped in R? Does R not have a ICC(0)? ICC(0) with the natural ordering is actually very simple to write. You could easily write it presumably directly in R or in C in a format callable from R. Barry > Not sure if I used the easiest way to get IC so I also seek for help on getting a concise version of IC(). > > -Wendy > > > On Mon, Dec 4, 2017 at 10:07 PM, Smith, Barry F. > wrote: > > I'm not sure what your goal is. In general two different IC(0) codes might produce slightly different factorizations based on implementation details even if they claim to run the same algorithm so I don't think there is a reason to try to compare the factors they produce. > > In addition PETSc, as well as most IC/ILU/LU codes store the factored matrices in a "non-conventional" form that is optimized for performance so it is not easy to just pull out the "factors" to look at them or compare them. > > Barry > > > > On Dec 4, 2017, at 9:07 PM, Wenlong Gong > wrote: > > > > Hello, > > > > I'm trying to use the Incomplete Cholesky Factorization for a sparse matrix in petsc. I started with a 10*10 matrix and used ksp and pc in order to get the ICC(0) factor, with no fill-in, natural ordering. However, the returned factor matrix does not match with the answer I got from matlab ichol() function. > > > > The code with the hard-coded data is attached here. I would appreciate if anyone could help check if I did anything wrong.Please let me know if there is easier way to get this incomplete cholesky factor. Thanks! > > > > Best regards, > > Wendy > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... 
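Barry's remark that ICC(0) with the natural ordering is simple to write can be made concrete with a short, self-contained sketch in plain C. The small SPD test matrix and the dense storage are made up purely for illustration -- a real wrapper would of course work on a sparse format, and this is not the PETSc algorithm or its data layout:

  /* ic0_demo.c -- IC(0): incomplete Cholesky, zero fill, natural ordering.
   * Updates are applied only where the original matrix already has a
   * nonzero, so no fill-in is ever created. Compile with -lm. */
  #include <math.h>
  #include <stdio.h>

  #define N 4

  int main(void)
  {
    double A[N][N] = {{4, 1, 0, 1},     /* made-up SPD test matrix; full   */
                      {1, 4, 1, 0},     /* Cholesky would create fill at   */
                      {0, 1, 4, 1},     /* (3,1), IC(0) drops it           */
                      {1, 0, 1, 4}};
    double L[N][N] = {{0}};
    int    i, j, k;

    for (i = 0; i < N; i++)             /* start from the lower triangle   */
      for (j = 0; j <= i; j++) L[i][j] = A[i][j];

    for (k = 0; k < N; k++) {
      L[k][k] = sqrt(L[k][k]);
      for (i = k + 1; i < N; i++) L[i][k] /= L[k][k];
      for (j = k + 1; j < N; j++)
        for (i = j; i < N; i++)
          if (A[i][j] != 0.0)           /* drop updates outside the pattern */
            L[i][j] -= L[i][k] * L[j][k];
    }

    for (i = 0; i < N; i++) {           /* print L so A ~= L L^T            */
      for (j = 0; j < N; j++) printf(" %8.4f", L[i][j]);
      printf("\n");
    }
    return 0;
  }

The only difference from a full Cholesky is the pattern test that drops any update falling outside the original nonzero structure. Whether the resulting factor then pays off as a preconditioner is, as noted above, a separate question: for most problems the iteration count still grows with the matrix size.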
URL: From knepley at gmail.com Tue Dec 5 15:53:36 2017 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 5 Dec 2017 16:53:36 -0500 Subject: [petsc-users] 2 Dirichlet conditions for one Element in PetscFE In-Reply-To: <7ead0d73-f554-c8ae-5db2-b744fea5b936@univ-amu.fr> References: <7ead0d73-f554-c8ae-5db2-b744fea5b936@univ-amu.fr> Message-ID: On Tue, Dec 5, 2017 at 9:12 AM, Yann Jobic wrote: > Hello, > > I tried the master of the git directory, as you have merged the branch. > > It works ! Many thanks for the fix. > > Great! Feel free to mail any suggestion for improvements. Thanks, Matt > Best regards, > > Yann > > Le 22/11/2017 ? 18:51, Matthew Knepley a ?crit : > > On Wed, Nov 22, 2017 at 12:39 PM, Yann Jobic > wrote: > >> Hello, >> >> I've found a strange behavior when looking into a bug for the pressure >> convergence of a simple Navier-Stokes problem using PetscFE. >> >> I followed many examples for labeling boundary faces. I first use >> DMPlexMarkBoundaryFaces, (label=1 to the faces). >> I find those faces using DMGetStratumIS and searching 1 as it is the >> value of the marked boundary faces. >> Finally i use DMPlexLabelComplete over the new label. >> I then use : >> ierr = PetscDSAddBoundary(prob, DM_BC_ESSENTIAL, "in", "Faces", 0, >> Ncomp, components, (void (*)(void)) uIn, NWest, west, NULL);CHKERRQ(ierr); >> in order to impose a dirichlet condition for the faces labeled by the >> correct value (west=1, south=3,...). >> >> However, the function "uIn()" is called in all the Elements containing >> the boundary faces, and thus impose the values at nodes that are not in the >> labeled faces. >> Is it a normal behavior ? I then have to test the position of the node >> calling uIn, in order to impose the good value. >> I have this problem for a Poiseuille flow, where at 2 corner Elements i >> have a zero velocity dirichlet condition (wall) and a In flow velocity one. >> > > I believe I have fixed this in knepley/fix-plex-bc-multiple which should > be merged soon. Do you know how to merge that branch and try? > > Thanks, > > Matt > > >> The pressure is then very high at the corner nodes of those 2 Elements. >> Do you think my pressure problem comes from there ? (The velocity field >> is correct) >> >> Many thanks, >> >> Regards, >> >> Yann >> >> PS : i'm using those runtime options : >> -vel_petscspace_order 2 -pres_petscspace_order 1 \ >> -ksp_type fgmres -pc_type fieldsplit -pc_fieldsplit_type schur >> -pc_fieldsplit_schur_fact_type full \ >> -fieldsplit_velocity_pc_type lu -fieldsplit_pressure_ksp_rtol 1.0e-10 >> -fieldsplit_pressure_pc_type jacobi >> >> >> --- >> L'absence de virus dans ce courrier ?lectronique a ?t? v?rifi?e par le >> logiciel antivirus Avast. >> https://www.avast.com/antivirus >> >> > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ > > > -- > ___________________________ > > Yann JOBIC > HPC engineer > IUSTI-CNRS UMR 7343 - Polytech Marseille > Technop?le de Ch?teau Gombert5 rue Enrico Fermi > 13453 Marseille cedex 13 > Tel : (33) 4 91 10 69 43 > Fax : (33) 4 91 10 69 69 > > > > Garanti > sans virus. www.avast.com > > <#m_135884221591823617_DAB4FAD8-2DD7-40BB-A1B8-4E2AA1F9FDF2> > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. 
-- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From hzhang at mcs.anl.gov Tue Dec 5 16:51:35 2017 From: hzhang at mcs.anl.gov (Hong) Date: Tue, 5 Dec 2017 16:51:35 -0600 Subject: [petsc-users] MatPtAP problem after subsequent call to MatDuplicate In-Reply-To: <5A26BE5D.6030005@gmail.com> References: <5A26A797.60901@gmail.com> <5A26BE5D.6030005@gmail.com> Message-ID: Samuel: It is fixed https://bitbucket.org/petsc/petsc/commits/cff31925ac4fa731f75d96f7dbc9974207834be9 It will be merged to petsc-release once it passes all regression tests. Thanks for your report! Hong Thank you for your swift reply, Hong! > > Yes, what you said is exactly what I'm doing. No, I don't get an error > when doing a MatView of matC. > > Samuel > > > On 12/05/2017 04:38 PM, Hong wrote: > > Samuel: > You try to do following: > 1) Create A; > 2) Create P; > 3) C = PtAP: > CALL MatPtAP(matA,matP,MAT_INITIAL_MATRIX,PETSC_DEFAULT_REAL,matC > ,ierr); > 4) MatDestroy(matA,ierr); > 5) MatDuplicate(matC,MAT_COPY_VALUES,matA,ierr); > 6) MatView(matA,PETSC_VIEWER_STDOUT_WORLD,ierr); > > The error occurs at (6). Do you get any error for > MatView(matC)? > > C has some special data structures as a matrix product which could be lost > during > MatDuplicate for matA. It might be a bug in our code. I'll check it. > > Hong > > >> Hi there, >> >> I am getting error messages after using MatPtAP to create a new matrix C >> = Pt*A*P and then trying to assign A=C. The following is a minimal example >> reproducing the problem: >> >> #include "slepc/finclude/slepc.h" >> USE slepcsys >> USE slepceps >> IMPLICIT NONE >> LOGICAL :: cause_error >> ! --- pure PETSc >> PetscErrorCode :: ierr >> PetscScalar :: vals(3,3), val >> Mat :: matA, matP, matC >> PetscInt :: m,idxn(3),idxm(3),idone(1) >> >> ! initialize SLEPc & Petsc etc. >> CALL SlepcInitialize(PETSC_NULL_CHARACTER,ierr) >> >> ! Set up a new matrix >> m = 3 >> CALL MatCreate(PETSC_COMM_WORLD,matA,ierr); CHKERRQ(ierr); >> CALL MatSetType(matA,MATMPIAIJ,ierr); CHKERRQ(ierr); >> CALL MatSetSizes(matA,PETSC_DECIDE,PETSC_DECIDE,m,m,ierr); >> CHKERRQ(ierr); >> CALL MatMPIAIJSetPreallocation(matA,3,PETSC_NULL_INTEGER,3,PETSC_NULL_INTEGER,ierr); >> CHKERRQ(ierr); >> >> ! [.... call to MatSetValues to set values of matA] >> >> ! assemble matrix >> CALL MatAssemblyBegin(matA,MAT_FINAL_ASSEMBLY,ierr); CHKERRQ(ierr); >> CALL MatAssemblyEnd(matA,MAT_FINAL_ASSEMBLY,ierr); CHKERRQ(ierr); >> >> ! duplicate matrix >> CALL MatDuplicate(matA,MAT_DO_NOT_COPY_VALUES,matP,ierr); >> CHKERRQ(ierr); >> >> ! [.... call to MatSetValues to set values of matP] >> >> ! assemble matrix >> CALL MatAssemblyBegin(matP,MAT_FINAL_ASSEMBLY,ierr); CHKERRQ(ierr); >> CALL MatAssemblyEnd(matP,MAT_FINAL_ASSEMBLY,ierr); CHKERRQ(ierr); >> >> ! compute C=Pt*A*P >> cause_error = .TRUE. ! set to .TRUE. to cause error, .FALSE. seems >> to work fine >> IF(.NOT.cause_error) THEN >> CALL MatDuplicate(matA,MAT_COPY_VALUES,matC,ierr); >> CHKERRQ(ierr); >> ELSE >> CALL MatPtAP(matA,matP,MAT_INITIAL_ >> MATRIX,PETSC_DEFAULT_REAL,matC,ierr); CHKERRQ(ierr); >> END IF >> >> ! destroy matA and duplicate A=C >> CALL MatDestroy(matA,ierr); CHKERRQ(ierr); >> CALL MatDuplicate(matC,MAT_COPY_VALUES,matA,ierr); CHKERRQ(ierr); >> >> ! display resulting matrix A >> CALL MatView(matA,PETSC_VIEWER_STDOUT_WORLD,ierr); CHKERRQ(ierr); >> >> Whether A and P are assigned any values doesn't seem to matter at all. 
>> The error message I'm getting is: >> >> Mat Object: 1 MPI processes >> type: mpiaij >> [0]PETSC ERROR: ------------------------------ >> ------------------------------------------ >> [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, >> probably memory access out of range >> [0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger >> [0]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/d >> ocumentation/faq.html#valgrind >> [0]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS >> X to find memory corruption errors >> [0]PETSC ERROR: likely location of problem given in stack below >> [0]PETSC ERROR: --------------------- Stack Frames >> ------------------------------------ >> [0]PETSC ERROR: Note: The EXACT line numbers in the stack are not >> available, >> [0]PETSC ERROR: INSTEAD the line number of the start of the function >> [0]PETSC ERROR: is given. >> [0]PETSC ERROR: [0] MatView_MPIAIJ_PtAP line 23 >> /home/lanthale/Progs/petsc-3.8.2/src/mat/impls/aij/mpi/mpiptap.c >> [0]PETSC ERROR: [0] MatView line 949 /home/lanthale/Progs/petsc-3.8 >> .2/src/mat/interface/matrix.c >> [0]PETSC ERROR: --------------------- Error Message >> -------------------------------------------------------------- >> [0]PETSC ERROR: Signal received >> [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html >> for trouble shooting. >> [0]PETSC ERROR: Petsc Release Version 3.8.2, Nov, 09, 2017 >> >> Could someone maybe tell me where I'm doing things wrong? The documentation >> for MatPtAP >> >> says that C will be created and that this routine is "currently only >> implemented for pairs of AIJ matrices and classes which inherit from AIJ". >> Does this maybe exclude MPIAIJ matrices? >> >> Thanks, >> Samuel >> > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From matteo.semplice at unito.it Wed Dec 6 00:34:04 2017 From: matteo.semplice at unito.it (Matteo Semplice) Date: Wed, 6 Dec 2017 07:34:04 +0100 Subject: [petsc-users] preallocation after DMCreateMatrix? In-Reply-To: References: <33f753de-e783-03a0-711a-510a88389cb7@unito.it> <0b480f6d-d642-e444-e24f-2f5e94743956@unito.it> <02b82677-018a-ea78-083e-c1c16213a1cb@unito.it> Message-ID: On 04/12/2017 17:01, Matthew Knepley wrote: > On Fri, Dec 1, 2017 at 5:50 AM, Matteo Semplice > > wrote: > > Thanks for the fix! > > (If you need a volunteer for testing the bug-fix, drop me a line) > > Cool. Its in next, and in > > https://bitbucket.org/petsc/petsc/branch/knepley/fix-plex-fvm-adjacency > > ? Thanks, > > ? ? Matt > Works for me. Thank you! Matteo -------------- next part -------------- An HTML attachment was scrubbed... URL: From C.Klaij at marin.nl Wed Dec 6 01:34:17 2017 From: C.Klaij at marin.nl (Klaij, Christiaan) Date: Wed, 6 Dec 2017 07:34:17 +0000 Subject: [petsc-users] segfault after recent scientific linux upgrade In-Reply-To: References: <1512486474462.7744@marin.nl>, Message-ID: <1512545657930.54754@marin.nl> Fande, Thanks, that's good to know. Upgrading to 3.8.x is definitely my long-term plan, but is there anything I can do short-term to fix the problem while keeping 3.7.5? Chris dr. ir. 
Christiaan Klaij | Senior Researcher | Research & Development MARIN | T +31 317 49 33 44 | C.Klaij at marin.nl | www.marin.nl [LinkedIn] [YouTube] [Twitter] [Facebook] MARIN news: Seminar ?Blauwe toekomst: versnellen van innovaties door samenwerken ________________________________ From: Fande Kong Sent: Tuesday, December 05, 2017 4:30 PM To: Klaij, Christiaan Cc: petsc-users at mcs.anl.gov Subject: Re: [petsc-users] segfault after recent scientific linux upgrade I would like to suggest you to use PETSc-3.8.x. Then the bug should go away. It is a known bug related to the reuse of the factorization pattern. Fande, On Tue, Dec 5, 2017 at 8:07 AM, Klaij, Christiaan > wrote: I'm running production software with petsc-3.7.5 and, among others, superlu_dist 5.1.3 on scientific linux 7.4. After a recent update of SL7.4, notably of the kernel and glibc, we found that superlu is somehow broken. Below's a backtrace of a serial example. Is this a known issue? Could you please advice on how to proceed (preferably while keeping 3.7.5 for now). Thanks, Chris $ gdb ./refresco ./core.9810 GNU gdb (GDB) Red Hat Enterprise Linux 7.6.1-100.el7 Copyright (C) 2013 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. Type "show copying" and "show warranty" for details. This GDB was configured as "x86_64-redhat-linux-gnu". For bug reporting instructions, please see: ... Reading symbols from /home/cklaij/ReFRESCO/Dev/trunk/Suites/testSuite/FlatPlate_laminar/calcs/Grid64x64/refresco...done. [New LWP 9810] Missing separate debuginfo for /home/cklaij/ReFRESCO/Dev/trunk/Libs/install/licensing-1.55.0/sll/lib64/libssl.so.10 Try: yum --enablerepo='*debug*' install /usr/lib/debug/.build-id/68/6a25d0a83d002183c835fa5694a8110c78d3bc.debug Missing separate debuginfo for /home/cklaij/ReFRESCO/Dev/trunk/Libs/install/licensing-1.55.0/sll/lib64/libcrypto.so.10 Try: yum --enablerepo='*debug*' install /usr/lib/debug/.build-id/68/d2958189303f421b1082abc33fd87338826c65.debug [Thread debugging using libthread_db enabled] Using host libthread_db library "/lib64/libthread_db.so.1". Core was generated by `./refresco'. Program terminated with signal 11, Segmentation fault. 
#0 0x00002ba501c132bc in mc64wd_dist (n=0x5213270, ne=0x2, ip=0x1, irn=0x51af520, a=0x51ef260, iperm=0x1000, num=0x7ffc545b2d94, jperm=0x51e7260, out=0x51eb260, pr=0x51ef260, q=0x51f3260, l=0x51f7260, u=0x51fb270, d__=0x5203270) at /home/cklaij/ReFRESCO/Dev/trunk/Libs/install/Linux-x86_64-Intel/SuperLU_DIST_5.1.3/SRC/mc64ad_dist.c:2322 2322 if (iperm[i__] != 0 || iperm[i0] == 0) { Missing separate debuginfos, use: debuginfo-install bzip2-libs-1.0.6-13.el7.x86_64 glibc-2.17-196.el7.x86_64 keyutils-libs-1.5.8-3.el7.x86_64 krb5-libs-1.15.1-8.el7.x86_64 libcom_err-1.42.9-10.el7.x86_64 libgcc-4.8.5-16.el7.x86_64 libselinux-2.5-11.el7.x86_64 libstdc++-4.8.5-16.el7.x86_64 libxml2-2.9.1-6.el7_2.3.x86_64 numactl-libs-2.0.9-6.el7_2.x86_64 pcre-8.32-17.el7.x86_64 xz-libs-5.2.2-1.el7.x86_64 zlib-1.2.7-17.el7.x86_64 (gdb) bt #0 0x00002ba501c132bc in mc64wd_dist (n=0x5213270, ne=0x2, ip=0x1, irn=0x51af520, a=0x51ef260, iperm=0x1000, num=0x7ffc545b2d94, jperm=0x51e7260, out=0x51eb260, pr=0x51ef260, q=0x51f3260, l=0x51f7260, u=0x51fb270, d__=0x5203270) at /home/cklaij/ReFRESCO/Dev/trunk/Libs/install/Linux-x86_64-Intel/SuperLU_DIST_5.1.3/SRC/mc64ad_dist.c:2322 #1 0x00002ba501c0ef2b in mc64ad_dist (job=0x5213270, n=0x2, ne=0x1, ip=0x51af520, irn=0x51ef260, a=0x1000, num=0x7ffc545b2db0, cperm=0x51fb270, liw=0x5187d10, iw=0x51c3130, ldw=0x51af520, dw=0x517b570, icntl=0x51e7260, info=0x2ba501c2e556 ) at /home/cklaij/ReFRESCO/Dev/trunk/Libs/install/Linux-x86_64-Intel/SuperLU_DIST_5.1.3/SRC/mc64ad_dist.c:596 #2 0x00002ba501c2e556 in dldperm_dist (job=0, n=0, nnz=0, colptr=0x51af520, adjncy=0x51ef260, nzval=0x1000, perm=0x4f00, u=0x1000, v=0x517b001) at /home/cklaij/ReFRESCO/Dev/trunk/Libs/install/Linux-x86_64-Intel/SuperLU_DIST_5.1.3/SRC/dldperm_dist.c:141 #3 0x00002ba501c26296 in pdgssvx_ABglobal (options=0x5213270, A=0x2, ScalePermstruct=0x1, B=0x51af520, ldb=85914208, nrhs=4096, grid=0x516da30, LUstruct=0x517af40, berr=0x1000, stat=0x2ba500b36a7d , info=0x517af58) at /home/cklaij/ReFRESCO/Dev/trunk/Libs/install/Linux-x86_64-Intel/SuperLU_DIST_5.1.3/SRC/pdgssvx_ABglobal.c:716 #4 0x00002ba500b36a7d in MatLUFactorNumeric_SuperLU_DIST (F=0x5213270, A=0x2, ---Type to continue, or q to quit--- info=0x1) at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/mat/impls/aij/mpi/superlu_dist/superlu_dist.c:419 #5 0x00002ba500b45a1a in MatLUFactorNumeric (fact=0x5213270, mat=0x2, info=0x1) at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/mat/interface/matrix.c:2996 #6 0x00002ba500e9e6c7 in PCSetUp_LU (pc=0x5213270) at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/ksp/pc/impls/factor/lu/lu.c:172 #7 0x00002ba500ded084 in PCSetUp (pc=0x5213270) at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/ksp/pc/interface/precon.c:968 #8 0x00002ba500f2968d in KSPSetUp (ksp=0x5213270) at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/ksp/ksp/interface/itfunc.c:390 #9 0x00002ba500f257be in KSPSolve (ksp=0x5213270, b=0x2, x=0x4193510) at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/ksp/ksp/interface/itfunc.c:599 #10 0x00002ba500f3e142 in kspsolve_ (ksp=0x5213270, b=0x2, x=0x1, __ierr=0x51af520) at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/ksp/ksp/interface/ftn-auto/itfuncf.c:261 ---Type to continue, or q to quit--- #11 0x0000000000bccf71 in petsc_solvers::petsc_solvers_solve ( regname='massTransport', rhs_c=..., phi_c=..., tol=0.01, maxiter=500, res0=-9.2559631349317831e+61, usediter=0, .tmp.REGNAME.len_V$1790=13) at petsc_solvers.F90:580 #12 
0x0000000000c2c9c5 in mass_momentum::mass_momentum_pressureprediction () at mass_momentum.F90:989 #13 0x0000000000c0ffc1 in mass_momentum::mass_momentum_core () at mass_momentum.F90:626 #14 0x0000000000c26a2c in mass_momentum::mass_momentum_systempcapply ( aa_system=76390912, xx_system=68983024, rr_system=68984544, ierr=0) at mass_momentum.F90:919 #15 0x00002ba500eaa763 in ourshellapply (pc=0x48da200, x=0x41c98f0, y=0x41c9ee0) at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/ksp/pc/impls/shell/ftn-custom/zshellpcf.c:41 #16 0x00002ba500ea79be in PCApply_Shell (pc=0x5213270, x=0x2, y=0x1) at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/ksp/pc/impls/shell/shellpc.c:124 #17 0x00002ba500df1800 in PCApply (pc=0x5213270, x=0x2, y=0x1) at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/ksp/pc/interface/precon.c:482 #18 0x00002ba500f2592a in KSPSolve (ksp=0x5213270, b=0x2, x=0x41c9ee0) at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/ksp/ksp/interf---Type to continue, or q to quit--- ace/itfunc.c:631 #19 0x00002ba500f3e142 in kspsolve_ (ksp=0x5213270, b=0x2, x=0x1, __ierr=0x51af520) at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/ksp/ksp/interface/ftn-auto/itfuncf.c:261 #20 0x0000000000c1b0ea in mass_momentum::mass_momentum_krylov () at mass_momentum.F90:777 #21 0x0000000000c0d242 in mass_momentum::mass_momentum_simple () at mass_momentum.F90:548 #22 0x0000000000c0841f in mass_momentum::mass_momentum_solve () at mass_momentum.F90:465 #23 0x000000000041b5ec in refresco () at refresco.F90:259 #24 0x000000000041999e in main () #25 0x00002ba508c98c05 in __libc_start_main () from /lib64/libc.so.6 #26 0x00000000004198a3 in _start () (gdb) dr. ir. Christiaan Klaij | Senior Researcher | Research & Development MARIN | T +31 317 49 33 44 | mailto:C.Klaij at marin.nl | http://www.marin.nl MARIN news: http://www.marin.nl/web/News/News-items/Seminar-Blauwe-toekomst-versnellen-van-innovaties-door-samenwerken.htm -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image962398.PNG Type: image/png Size: 293 bytes Desc: image962398.PNG URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: imagefdeb13.PNG Type: image/png Size: 331 bytes Desc: imagefdeb13.PNG URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image0cf412.PNG Type: image/png Size: 333 bytes Desc: image0cf412.PNG URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image4b2382.PNG Type: image/png Size: 253 bytes Desc: image4b2382.PNG URL: From ccetinbas at anl.gov Wed Dec 6 10:40:44 2017 From: ccetinbas at anl.gov (Cetinbas, Cankur Firat) Date: Wed, 6 Dec 2017 16:40:44 +0000 Subject: [petsc-users] petsc4py csr sparse matrix different number of local rows at each processor Message-ID: Hello, Is there a way to set different number of local rows for each processor when using createAIJ with csr option with petsc4py? In petsc documentation it is clearly shown ( set int m) but I couldn't figure it out with petsc4py. 
I tried the following but it did not work (ploc is the local number of rows, ptot is the total number of rows and it is square sparse matrix) pA = PETSc.Mat().createAIJ(size=(ploc, ptot, ptot, ptot), csr=(indptr.astype(dtype='int32'), indices.astype(dtype='int32'), data)) For petsc4py the createAIJ inputs are as follows: def petsc.Mat.Mat.createAIJ ( self, size, bsize = None, nz = None, d_nz = None, o_nz = None, csr = None, comm = None ) Thanks, Firat -------------- next part -------------- An HTML attachment was scrubbed... URL: From fe.wallner at gmail.com Wed Dec 6 10:52:52 2017 From: fe.wallner at gmail.com (Felipe Giacomelli) Date: Wed, 6 Dec 2017 14:52:52 -0200 Subject: [petsc-users] MatZeroRows question Message-ID: Hello, According to PETSc documentation, all processes that share the matrix MUST call this routine for the parallel case. However, it is possible that, after the domain decomposition, some subdomains wouldn?t have Dirichlet boundary conditions. Hence, MatZeroRows wouldn?t be called by all processes. Is there a standard (or proposed) approach for this scenario? Thank you, -- Felipe M Wallner Giacomelli -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Wed Dec 6 10:58:40 2017 From: bsmith at mcs.anl.gov (Smith, Barry F.) Date: Wed, 6 Dec 2017 16:58:40 +0000 Subject: [petsc-users] MatZeroRows question In-Reply-To: References: Message-ID: <60E0CB2B-3450-4479-94BA-2B0439098DB3@mcs.anl.gov> They call the function and provide zero rows > On Dec 6, 2017, at 10:52 AM, Felipe Giacomelli wrote: > > Hello, > According to PETSc documentation, all processes that share the matrix MUST call this routine for the parallel case. However, it is possible that, after the domain decomposition, some subdomains wouldn?t have Dirichlet boundary conditions. Hence, MatZeroRows wouldn?t be called by all processes. Is there a standard (or proposed) approach for this scenario? > Thank you, > > > > -- > Felipe M Wallner Giacomelli From fdkong.jd at gmail.com Wed Dec 6 10:58:50 2017 From: fdkong.jd at gmail.com (Fande Kong) Date: Wed, 6 Dec 2017 09:58:50 -0700 Subject: [petsc-users] segfault after recent scientific linux upgrade In-Reply-To: <1512545657930.54754@marin.nl> References: <1512486474462.7744@marin.nl> <1512545657930.54754@marin.nl> Message-ID: I still think the simplest solution is to upgrade PETSc. I won't try anything else. If you really want to try anything else, you have the following options (1) Not use superlu_dist, and try other preconditioners. (2) Try "-mat_superlu_dist_fact" with different values Fande, On Wed, Dec 6, 2017 at 12:34 AM, Klaij, Christiaan wrote: > Fande, > > Thanks, that's good to know. Upgrading to 3.8.x is definitely my > long-term plan, but is there anything I can do short-term to fix > the problem while keeping 3.7.5? > > Chris > > dr. ir. Christiaan Klaij | Senior Researcher | Research & Development > MARIN | T +31 317 49 33 44 <+31%20317%20493%20344> | C.Klaij at marin.nl | > www.marin.nl > > [image: LinkedIn] [image: > YouTube] [image: Twitter] > [image: Facebook] > > MARIN news: Seminar ?Blauwe toekomst: versnellen van innovaties door > samenwerken > > > ------------------------------ > *From:* Fande Kong > *Sent:* Tuesday, December 05, 2017 4:30 PM > *To:* Klaij, Christiaan > *Cc:* petsc-users at mcs.anl.gov > *Subject:* Re: [petsc-users] segfault after recent scientific linux > upgrade > > I would like to suggest you to use PETSc-3.8.x. Then the bug should go > away. 
It is a known bug related to the reuse of the factorization pattern. > > > Fande, > > On Tue, Dec 5, 2017 at 8:07 AM, Klaij, Christiaan > wrote: > >> I'm running production software with petsc-3.7.5 and, among >> others, superlu_dist 5.1.3 on scientific linux 7.4. >> >> After a recent update of SL7.4, notably of the kernel and glibc, >> we found that superlu is somehow broken. Below's a backtrace of a >> serial example. Is this a known issue? Could you please advice on >> how to proceed (preferably while keeping 3.7.5 for now). >> >> Thanks, >> Chris >> >> $ gdb ./refresco ./core.9810 >> GNU gdb (GDB) Red Hat Enterprise Linux 7.6.1-100.el7 >> Copyright (C) 2013 Free Software Foundation, Inc. >> License GPLv3+: GNU GPL version 3 or later > tml> >> This is free software: you are free to change and redistribute it. >> There is NO WARRANTY, to the extent permitted by law. Type "show copying" >> and "show warranty" for details. >> This GDB was configured as "x86_64-redhat-linux-gnu". >> For bug reporting instructions, please see: >> ... >> Reading symbols from /home/cklaij/ReFRESCO/Dev/trun >> k/Suites/testSuite/FlatPlate_laminar/calcs/Grid64x64/refresco...done. >> [New LWP 9810] >> Missing separate debuginfo for /home/cklaij/ReFRESCO/Dev/trun >> k/Libs/install/licensing-1.55.0/sll/lib64/libssl.so.10 >> Try: yum --enablerepo='*debug*' install /usr/lib/debug/.build-id/68/6a >> 25d0a83d002183c835fa5694a8110c78d3bc.debug >> Missing separate debuginfo for /home/cklaij/ReFRESCO/Dev/trun >> k/Libs/install/licensing-1.55.0/sll/lib64/libcrypto.so.10 >> Try: yum --enablerepo='*debug*' install /usr/lib/debug/.build-id/68/d2 >> 958189303f421b1082abc33fd87338826c65.debug >> [Thread debugging using libthread_db enabled] >> Using host libthread_db library "/lib64/libthread_db.so.1". >> Core was generated by `./refresco'. >> Program terminated with signal 11, Segmentation fault. 
>> #0 0x00002ba501c132bc in mc64wd_dist (n=0x5213270, ne=0x2, ip=0x1, >> irn=0x51af520, a=0x51ef260, iperm=0x1000, num=0x7ffc545b2d94, >> jperm=0x51e7260, out=0x51eb260, pr=0x51ef260, q=0x51f3260, >> l=0x51f7260, >> u=0x51fb270, d__=0x5203270) >> at /home/cklaij/ReFRESCO/Dev/trunk/Libs/install/Linux-x86_64- >> Intel/SuperLU_DIST_5.1.3/SRC/mc64ad_dist.c:2322 >> 2322 if (iperm[i__] != 0 || iperm[i0] == 0) { >> Missing separate debuginfos, use: debuginfo-install >> bzip2-libs-1.0.6-13.el7.x86_64 glibc-2.17-196.el7.x86_64 >> keyutils-libs-1.5.8-3.el7.x86_64 krb5-libs-1.15.1-8.el7.x86_64 >> libcom_err-1.42.9-10.el7.x86_64 libgcc-4.8.5-16.el7.x86_64 >> libselinux-2.5-11.el7.x86_64 libstdc++-4.8.5-16.el7.x86_64 >> libxml2-2.9.1-6.el7_2.3.x86_64 numactl-libs-2.0.9-6.el7_2.x86_64 >> pcre-8.32-17.el7.x86_64 xz-libs-5.2.2-1.el7.x86_64 zlib-1.2.7-17.el7.x86_64 >> (gdb) bt >> #0 0x00002ba501c132bc in mc64wd_dist (n=0x5213270, ne=0x2, ip=0x1, >> irn=0x51af520, a=0x51ef260, iperm=0x1000, num=0x7ffc545b2d94, >> jperm=0x51e7260, out=0x51eb260, pr=0x51ef260, q=0x51f3260, >> l=0x51f7260, >> u=0x51fb270, d__=0x5203270) >> at /home/cklaij/ReFRESCO/Dev/trunk/Libs/install/Linux-x86_64- >> Intel/SuperLU_DIST_5.1.3/SRC/mc64ad_dist.c:2322 >> #1 0x00002ba501c0ef2b in mc64ad_dist (job=0x5213270, n=0x2, ne=0x1, >> ip=0x51af520, irn=0x51ef260, a=0x1000, num=0x7ffc545b2db0, >> cperm=0x51fb270, liw=0x5187d10, iw=0x51c3130, ldw=0x51af520, >> dw=0x517b570, >> icntl=0x51e7260, info=0x2ba501c2e556 ) >> at /home/cklaij/ReFRESCO/Dev/trunk/Libs/install/Linux-x86_64- >> Intel/SuperLU_DIST_5.1.3/SRC/mc64ad_dist.c:596 >> #2 0x00002ba501c2e556 in dldperm_dist (job=0, n=0, nnz=0, >> colptr=0x51af520, >> adjncy=0x51ef260, nzval=0x1000, perm=0x4f00, u=0x1000, v=0x517b001) >> at /home/cklaij/ReFRESCO/Dev/trunk/Libs/install/Linux-x86_64- >> Intel/SuperLU_DIST_5.1.3/SRC/dldperm_dist.c:141 >> #3 0x00002ba501c26296 in pdgssvx_ABglobal (options=0x5213270, A=0x2, >> ScalePermstruct=0x1, B=0x51af520, ldb=85914208, nrhs=4096, >> grid=0x516da30, >> LUstruct=0x517af40, berr=0x1000, >> stat=0x2ba500b36a7d , >> info=0x517af58) >> at /home/cklaij/ReFRESCO/Dev/trunk/Libs/install/Linux-x86_64- >> Intel/SuperLU_DIST_5.1.3/SRC/pdgssvx_ABglobal.c:716 >> #4 0x00002ba500b36a7d in MatLUFactorNumeric_SuperLU_DIST (F=0x5213270, >> A=0x2, >> ---Type to continue, or q to quit--- >> info=0x1) >> at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/ >> mat/impls/aij/mpi/superlu_dist/superlu_dist.c:419 >> #5 0x00002ba500b45a1a in MatLUFactorNumeric (fact=0x5213270, mat=0x2, >> info=0x1) >> at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/ >> mat/interface/matrix.c:2996 >> #6 0x00002ba500e9e6c7 in PCSetUp_LU (pc=0x5213270) >> at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/ >> ksp/pc/impls/factor/lu/lu.c:172 >> #7 0x00002ba500ded084 in PCSetUp (pc=0x5213270) >> at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/ >> ksp/pc/interface/precon.c:968 >> #8 0x00002ba500f2968d in KSPSetUp (ksp=0x5213270) >> at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/ >> ksp/ksp/interface/itfunc.c:390 >> #9 0x00002ba500f257be in KSPSolve (ksp=0x5213270, b=0x2, x=0x4193510) >> at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/ >> ksp/ksp/interface/itfunc.c:599 >> #10 0x00002ba500f3e142 in kspsolve_ (ksp=0x5213270, b=0x2, x=0x1, >> __ierr=0x51af520) >> at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/ >> ksp/ksp/interface/ftn-auto/itfuncf.c:261 >> ---Type to continue, or q to quit--- >> #11 
0x0000000000bccf71 in petsc_solvers::petsc_solvers_solve ( >> regname='massTransport', rhs_c=..., phi_c=..., tol=0.01, maxiter=500, >> res0=-9.2559631349317831e+61, usediter=0, .tmp.REGNAME.len_V$1790=13) >> at petsc_solvers.F90:580 >> #12 0x0000000000c2c9c5 in mass_momentum::mass_momentum_pressureprediction >> () >> at mass_momentum.F90:989 >> #13 0x0000000000c0ffc1 in mass_momentum::mass_momentum_core () >> at mass_momentum.F90:626 >> #14 0x0000000000c26a2c in mass_momentum::mass_momentum_systempcapply ( >> aa_system=76390912, xx_system=68983024, rr_system=68984544, ierr=0) >> at mass_momentum.F90:919 >> #15 0x00002ba500eaa763 in ourshellapply (pc=0x48da200, x=0x41c98f0, >> y=0x41c9ee0) >> at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/ >> ksp/pc/impls/shell/ftn-custom/zshellpcf.c:41 >> #16 0x00002ba500ea79be in PCApply_Shell (pc=0x5213270, x=0x2, y=0x1) >> at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/ >> ksp/pc/impls/shell/shellpc.c:124 >> #17 0x00002ba500df1800 in PCApply (pc=0x5213270, x=0x2, y=0x1) >> at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/ >> ksp/pc/interface/precon.c:482 >> #18 0x00002ba500f2592a in KSPSolve (ksp=0x5213270, b=0x2, x=0x41c9ee0) >> at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/ksp/ksp/interf---Type >> to continue, or q to quit--- >> ace/itfunc.c:631 >> #19 0x00002ba500f3e142 in kspsolve_ (ksp=0x5213270, b=0x2, x=0x1, >> __ierr=0x51af520) >> at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/ >> ksp/ksp/interface/ftn-auto/itfuncf.c:261 >> #20 0x0000000000c1b0ea in mass_momentum::mass_momentum_krylov () >> at mass_momentum.F90:777 >> #21 0x0000000000c0d242 in mass_momentum::mass_momentum_simple () >> at mass_momentum.F90:548 >> #22 0x0000000000c0841f in mass_momentum::mass_momentum_solve () >> at mass_momentum.F90:465 >> #23 0x000000000041b5ec in refresco () at refresco.F90:259 >> #24 0x000000000041999e in main () >> #25 0x00002ba508c98c05 in __libc_start_main () from /lib64/libc.so.6 >> #26 0x00000000004198a3 in _start () >> (gdb) >> >> >> dr. ir. Christiaan Klaij | Senior Researcher | Research & Development >> MARIN | T +31 317 49 33 44 <+31%20317%20493%20344> | mailto: >> C.Klaij at marin.nl | http://www.marin.nl >> >> MARIN news: http://www.marin.nl/web/News/News-items/Seminar-Blauwe-toeko >> mst-versnellen-van-innovaties-door-samenwerken.htm >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image0cf412.PNG Type: image/png Size: 333 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: imagefdeb13.PNG Type: image/png Size: 331 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image4b2382.PNG Type: image/png Size: 253 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: image962398.PNG Type: image/png Size: 293 bytes Desc: not available URL: From balay at mcs.anl.gov Wed Dec 6 11:05:00 2017 From: balay at mcs.anl.gov (Satish Balay) Date: Wed, 6 Dec 2017 11:05:00 -0600 Subject: [petsc-users] segfault after recent scientific linux upgrade In-Reply-To: <1512545657930.54754@marin.nl> References: <1512486474462.7744@marin.nl>, <1512545657930.54754@marin.nl> Message-ID: petsc 3.7 - and 3.8 both default to superlu_dist snapshot: self.gitcommit = 'xsdk-0.2.0-rc1' If using petsc-3.7 - you can use latest maint-3.7 [i.e 3.7.7+] [3.7.7 is a latest bugfix update to 3.7 - so there should be no reason to stick to 3.7.5] But if you really want to stick to 3.7.5 you can use: --download-superlu_dist=1 --download-superlu_dist-commit=xsdk-0.2.0-rc1 Satish On Wed, 6 Dec 2017, Klaij, Christiaan wrote: > Fande, > > Thanks, that's good to know. Upgrading to 3.8.x is definitely my > long-term plan, but is there anything I can do short-term to fix > the problem while keeping 3.7.5? > > Chris > > dr. ir. Christiaan Klaij | Senior Researcher | Research & Development > MARIN | T +31 317 49 33 44 | C.Klaij at marin.nl | www.marin.nl > > [LinkedIn] [YouTube] [Twitter] [Facebook] > MARIN news: Seminar ?Blauwe toekomst: versnellen van innovaties door samenwerken > > ________________________________ > From: Fande Kong > Sent: Tuesday, December 05, 2017 4:30 PM > To: Klaij, Christiaan > Cc: petsc-users at mcs.anl.gov > Subject: Re: [petsc-users] segfault after recent scientific linux upgrade > > I would like to suggest you to use PETSc-3.8.x. Then the bug should go away. It is a known bug related to the reuse of the factorization pattern. > > > Fande, > > On Tue, Dec 5, 2017 at 8:07 AM, Klaij, Christiaan > wrote: > I'm running production software with petsc-3.7.5 and, among > others, superlu_dist 5.1.3 on scientific linux 7.4. > > After a recent update of SL7.4, notably of the kernel and glibc, > we found that superlu is somehow broken. Below's a backtrace of a > serial example. Is this a known issue? Could you please advice on > how to proceed (preferably while keeping 3.7.5 for now). > > Thanks, > Chris > > $ gdb ./refresco ./core.9810 > GNU gdb (GDB) Red Hat Enterprise Linux 7.6.1-100.el7 > Copyright (C) 2013 Free Software Foundation, Inc. > License GPLv3+: GNU GPL version 3 or later > This is free software: you are free to change and redistribute it. > There is NO WARRANTY, to the extent permitted by law. Type "show copying" > and "show warranty" for details. > This GDB was configured as "x86_64-redhat-linux-gnu". > For bug reporting instructions, please see: > ... > Reading symbols from /home/cklaij/ReFRESCO/Dev/trunk/Suites/testSuite/FlatPlate_laminar/calcs/Grid64x64/refresco...done. > [New LWP 9810] > Missing separate debuginfo for /home/cklaij/ReFRESCO/Dev/trunk/Libs/install/licensing-1.55.0/sll/lib64/libssl.so.10 > Try: yum --enablerepo='*debug*' install /usr/lib/debug/.build-id/68/6a25d0a83d002183c835fa5694a8110c78d3bc.debug > Missing separate debuginfo for /home/cklaij/ReFRESCO/Dev/trunk/Libs/install/licensing-1.55.0/sll/lib64/libcrypto.so.10 > Try: yum --enablerepo='*debug*' install /usr/lib/debug/.build-id/68/d2958189303f421b1082abc33fd87338826c65.debug > [Thread debugging using libthread_db enabled] > Using host libthread_db library "/lib64/libthread_db.so.1". > Core was generated by `./refresco'. > Program terminated with signal 11, Segmentation fault. 
> #0 0x00002ba501c132bc in mc64wd_dist (n=0x5213270, ne=0x2, ip=0x1, > irn=0x51af520, a=0x51ef260, iperm=0x1000, num=0x7ffc545b2d94, > jperm=0x51e7260, out=0x51eb260, pr=0x51ef260, q=0x51f3260, l=0x51f7260, > u=0x51fb270, d__=0x5203270) > at /home/cklaij/ReFRESCO/Dev/trunk/Libs/install/Linux-x86_64-Intel/SuperLU_DIST_5.1.3/SRC/mc64ad_dist.c:2322 > 2322 if (iperm[i__] != 0 || iperm[i0] == 0) { > Missing separate debuginfos, use: debuginfo-install bzip2-libs-1.0.6-13.el7.x86_64 glibc-2.17-196.el7.x86_64 keyutils-libs-1.5.8-3.el7.x86_64 krb5-libs-1.15.1-8.el7.x86_64 libcom_err-1.42.9-10.el7.x86_64 libgcc-4.8.5-16.el7.x86_64 libselinux-2.5-11.el7.x86_64 libstdc++-4.8.5-16.el7.x86_64 libxml2-2.9.1-6.el7_2.3.x86_64 numactl-libs-2.0.9-6.el7_2.x86_64 pcre-8.32-17.el7.x86_64 xz-libs-5.2.2-1.el7.x86_64 zlib-1.2.7-17.el7.x86_64 > (gdb) bt > #0 0x00002ba501c132bc in mc64wd_dist (n=0x5213270, ne=0x2, ip=0x1, > irn=0x51af520, a=0x51ef260, iperm=0x1000, num=0x7ffc545b2d94, > jperm=0x51e7260, out=0x51eb260, pr=0x51ef260, q=0x51f3260, l=0x51f7260, > u=0x51fb270, d__=0x5203270) > at /home/cklaij/ReFRESCO/Dev/trunk/Libs/install/Linux-x86_64-Intel/SuperLU_DIST_5.1.3/SRC/mc64ad_dist.c:2322 > #1 0x00002ba501c0ef2b in mc64ad_dist (job=0x5213270, n=0x2, ne=0x1, > ip=0x51af520, irn=0x51ef260, a=0x1000, num=0x7ffc545b2db0, > cperm=0x51fb270, liw=0x5187d10, iw=0x51c3130, ldw=0x51af520, dw=0x517b570, > icntl=0x51e7260, info=0x2ba501c2e556 ) > at /home/cklaij/ReFRESCO/Dev/trunk/Libs/install/Linux-x86_64-Intel/SuperLU_DIST_5.1.3/SRC/mc64ad_dist.c:596 > #2 0x00002ba501c2e556 in dldperm_dist (job=0, n=0, nnz=0, colptr=0x51af520, > adjncy=0x51ef260, nzval=0x1000, perm=0x4f00, u=0x1000, v=0x517b001) > at /home/cklaij/ReFRESCO/Dev/trunk/Libs/install/Linux-x86_64-Intel/SuperLU_DIST_5.1.3/SRC/dldperm_dist.c:141 > #3 0x00002ba501c26296 in pdgssvx_ABglobal (options=0x5213270, A=0x2, > ScalePermstruct=0x1, B=0x51af520, ldb=85914208, nrhs=4096, grid=0x516da30, > LUstruct=0x517af40, berr=0x1000, > stat=0x2ba500b36a7d , info=0x517af58) > at /home/cklaij/ReFRESCO/Dev/trunk/Libs/install/Linux-x86_64-Intel/SuperLU_DIST_5.1.3/SRC/pdgssvx_ABglobal.c:716 > #4 0x00002ba500b36a7d in MatLUFactorNumeric_SuperLU_DIST (F=0x5213270, A=0x2, > ---Type to continue, or q to quit--- > info=0x1) > at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/mat/impls/aij/mpi/superlu_dist/superlu_dist.c:419 > #5 0x00002ba500b45a1a in MatLUFactorNumeric (fact=0x5213270, mat=0x2, > info=0x1) > at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/mat/interface/matrix.c:2996 > #6 0x00002ba500e9e6c7 in PCSetUp_LU (pc=0x5213270) > at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/ksp/pc/impls/factor/lu/lu.c:172 > #7 0x00002ba500ded084 in PCSetUp (pc=0x5213270) > at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/ksp/pc/interface/precon.c:968 > #8 0x00002ba500f2968d in KSPSetUp (ksp=0x5213270) > at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/ksp/ksp/interface/itfunc.c:390 > #9 0x00002ba500f257be in KSPSolve (ksp=0x5213270, b=0x2, x=0x4193510) > at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/ksp/ksp/interface/itfunc.c:599 > #10 0x00002ba500f3e142 in kspsolve_ (ksp=0x5213270, b=0x2, x=0x1, > __ierr=0x51af520) > at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/ksp/ksp/interface/ftn-auto/itfuncf.c:261 > ---Type to continue, or q to quit--- > #11 0x0000000000bccf71 in petsc_solvers::petsc_solvers_solve ( > regname='massTransport', rhs_c=..., phi_c=..., tol=0.01, maxiter=500, > 
res0=-9.2559631349317831e+61, usediter=0, .tmp.REGNAME.len_V$1790=13) > at petsc_solvers.F90:580 > #12 0x0000000000c2c9c5 in mass_momentum::mass_momentum_pressureprediction () > at mass_momentum.F90:989 > #13 0x0000000000c0ffc1 in mass_momentum::mass_momentum_core () > at mass_momentum.F90:626 > #14 0x0000000000c26a2c in mass_momentum::mass_momentum_systempcapply ( > aa_system=76390912, xx_system=68983024, rr_system=68984544, ierr=0) > at mass_momentum.F90:919 > #15 0x00002ba500eaa763 in ourshellapply (pc=0x48da200, x=0x41c98f0, > y=0x41c9ee0) > at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/ksp/pc/impls/shell/ftn-custom/zshellpcf.c:41 > #16 0x00002ba500ea79be in PCApply_Shell (pc=0x5213270, x=0x2, y=0x1) > at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/ksp/pc/impls/shell/shellpc.c:124 > #17 0x00002ba500df1800 in PCApply (pc=0x5213270, x=0x2, y=0x1) > at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/ksp/pc/interface/precon.c:482 > #18 0x00002ba500f2592a in KSPSolve (ksp=0x5213270, b=0x2, x=0x41c9ee0) > at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/ksp/ksp/interf---Type to continue, or q to quit--- > ace/itfunc.c:631 > #19 0x00002ba500f3e142 in kspsolve_ (ksp=0x5213270, b=0x2, x=0x1, > __ierr=0x51af520) > at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/ksp/ksp/interface/ftn-auto/itfuncf.c:261 > #20 0x0000000000c1b0ea in mass_momentum::mass_momentum_krylov () > at mass_momentum.F90:777 > #21 0x0000000000c0d242 in mass_momentum::mass_momentum_simple () > at mass_momentum.F90:548 > #22 0x0000000000c0841f in mass_momentum::mass_momentum_solve () > at mass_momentum.F90:465 > #23 0x000000000041b5ec in refresco () at refresco.F90:259 > #24 0x000000000041999e in main () > #25 0x00002ba508c98c05 in __libc_start_main () from /lib64/libc.so.6 > #26 0x00000000004198a3 in _start () > (gdb) > > > dr. ir. Christiaan Klaij | Senior Researcher | Research & Development > MARIN | T +31 317 49 33 44 | mailto:C.Klaij at marin.nl | http://www.marin.nl > > MARIN news: http://www.marin.nl/web/News/News-items/Seminar-Blauwe-toekomst-versnellen-van-innovaties-door-samenwerken.htm > > > > > From mhbaghaei at mail.sjtu.edu.cn Thu Dec 7 01:23:23 2017 From: mhbaghaei at mail.sjtu.edu.cn (Mohammad Hassan Baghaei) Date: Thu, 7 Dec 2017 15:23:23 +0800 (CST) Subject: [petsc-users] Boundary condition IS Message-ID: <004001d36f2c$29b9c140$7d2d43c0$@mail.sjtu.edu.cn> Hello I am using DMPlex to construct a fully circular 2D domain for my PDEs. As I am discretizing the PDEs using finite difference method, I need to know , to define the layout of PetscSection, how I can find the boundary condition IS. I know the location of the boundaries in Sieve chart. However, I noticed that I need to know the Index Set of that location. Can I use the Sieve point number for Index Set. It seems I think I am not familiar with IS. Can you help me? Thank for your help. Amir -------------- next part -------------- An HTML attachment was scrubbed... URL: From yann.jobic at univ-amu.fr Thu Dec 7 01:51:54 2017 From: yann.jobic at univ-amu.fr (Yann Jobic) Date: Thu, 7 Dec 2017 08:51:54 +0100 Subject: [petsc-users] Boundary condition IS In-Reply-To: <004001d36f2c$29b9c140$7d2d43c0$@mail.sjtu.edu.cn> References: <004001d36f2c$29b9c140$7d2d43c0$@mail.sjtu.edu.cn> Message-ID: Le 07/12/2017 ? 08:23, Mohammad Hassan Baghaei a ?crit?: > > Hello > > I am using DMPlex to construct a fully circular 2D domain for my PDEs. 
> As I am discretizing the PDEs using finite difference method, I need
> to know , to define the layout of PetscSection, how I can find the
> boundary condition IS. I know the location of the boundaries in Sieve
> chart. However, I noticed that I need to know the Index Set of that
> location. Can I use the Sieve point number for Index Set. It seems I
> think I am not familiar with IS. Can you help me? Thank for your help.
>
> Amir
>
Hello,

In snes/examples/tutorials/ex56.c (around line 260) you can find the
following code, which does what you want, as I understood it:

  {
    DMLabel         label;
    IS              is;
    DMCreateLabel(dm, "boundary");
    DMGetLabel(dm, "boundary", &label);
    DMPlexMarkBoundaryFaces(dm, label);
    if (run_type==0) {
      DMGetStratumIS(dm, "boundary", 1, &is);
      DMCreateLabel(dm, "Faces");
      if (is) {
        PetscInt        d, f, Nf;
        const PetscInt *faces;
        PetscInt        csize;
        PetscSection    cs;
        Vec             coordinates;
        DM              cdm;
        ISGetLocalSize(is, &Nf);
        ISGetIndices(is, &faces);
        DMGetCoordinatesLocal(dm, &coordinates);
        DMGetCoordinateDM(dm, &cdm);
        DMGetDefaultSection(cdm, &cs);
        /* Check for each boundary face if any component of its centroid is either 0.0 or 1.0 */
        for (f = 0; f < Nf; ++f) {
          PetscReal    faceCoord;
          PetscInt     b, v;
          PetscScalar *coords = NULL;
          PetscInt     Nv;
          DMPlexVecGetClosure(cdm, cs, coordinates, faces[f], &csize, &coords);
          Nv = csize/dim; /* Calculate mean coordinate vector */
          for (d = 0; d < dim; ++d) {
            faceCoord = 0.0;
            for (v = 0; v < Nv; ++v) faceCoord += PetscRealPart(coords[v*dim+d]);
            faceCoord /= Nv;
            for (b = 0; b < 2; ++b) {
              if (PetscAbs(faceCoord - b) < PETSC_SMALL) { /* domain have not been set yet, still [0,1]^3 */
                DMSetLabelValue(dm, "Faces", faces[f], d*2+b+1);
              }
            }
          }
          DMPlexVecRestoreClosure(cdm, cs, coordinates, faces[f], &csize, &coords);
        }
        ISRestoreIndices(is, &faces);
      }
      ISDestroy(&is);
      DMGetLabel(dm, "Faces", &label);
      DMPlexLabelComplete(dm, label);
    }
  }

The idea is to mark boundary faces with DMPlexMarkBoundaryFaces, get the
IS of the labeled faces, and sort them by the position of the centroid of
each face.

Regards,

Yann
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
From C.Klaij at marin.nl  Thu Dec 7 05:02:21 2017
From: C.Klaij at marin.nl (Klaij, Christiaan)
Date: Thu, 7 Dec 2017 11:02:21 +0000
Subject: [petsc-users] segfault after recent scientific linux upgrade
In-Reply-To: 
References: <1512486474462.7744@marin.nl>, <1512545657930.54754@marin.nl>,
 
Message-ID: <1512644541197.67462@marin.nl>

Thanks Satish, I will give it a shot and let you know.

Chris
dr. ir.
Christiaan Klaij | Senior Researcher | Research & Development MARIN | T +31 317 49 33 44 | mailto:C.Klaij at marin.nl | http://www.marin.nl MARIN news: http://www.marin.nl/web/News/News-items/MARIN-at-Marintec-China-Shanghai-December-58-1.htm ________________________________________ From: Satish Balay Sent: Wednesday, December 06, 2017 6:05 PM To: Klaij, Christiaan Cc: Fande Kong; petsc-users at mcs.anl.gov Subject: Re: [petsc-users] segfault after recent scientific linux upgrade petsc 3.7 - and 3.8 both default to superlu_dist snapshot: self.gitcommit = 'xsdk-0.2.0-rc1' If using petsc-3.7 - you can use latest maint-3.7 [i.e 3.7.7+] [3.7.7 is a latest bugfix update to 3.7 - so there should be no reason to stick to 3.7.5] But if you really want to stick to 3.7.5 you can use: --download-superlu_dist=1 --download-superlu_dist-commit=xsdk-0.2.0-rc1 Satish On Wed, 6 Dec 2017, Klaij, Christiaan wrote: > Fande, > > Thanks, that's good to know. Upgrading to 3.8.x is definitely my > long-term plan, but is there anything I can do short-term to fix > the problem while keeping 3.7.5? > > Chris > > dr. ir. Christiaan Klaij | Senior Researcher | Research & Development > MARIN | T +31 317 49 33 44 | C.Klaij at marin.nl | www.marin.nl > > [LinkedIn] [YouTube] [Twitter] [Facebook] > MARIN news: Seminar ?Blauwe toekomst: versnellen van innovaties door samenwerken > > ________________________________ > From: Fande Kong > Sent: Tuesday, December 05, 2017 4:30 PM > To: Klaij, Christiaan > Cc: petsc-users at mcs.anl.gov > Subject: Re: [petsc-users] segfault after recent scientific linux upgrade > > I would like to suggest you to use PETSc-3.8.x. Then the bug should go away. It is a known bug related to the reuse of the factorization pattern. > > > Fande, > > On Tue, Dec 5, 2017 at 8:07 AM, Klaij, Christiaan > wrote: > I'm running production software with petsc-3.7.5 and, among > others, superlu_dist 5.1.3 on scientific linux 7.4. > > After a recent update of SL7.4, notably of the kernel and glibc, > we found that superlu is somehow broken. Below's a backtrace of a > serial example. Is this a known issue? Could you please advice on > how to proceed (preferably while keeping 3.7.5 for now). > > Thanks, > Chris > > $ gdb ./refresco ./core.9810 > GNU gdb (GDB) Red Hat Enterprise Linux 7.6.1-100.el7 > Copyright (C) 2013 Free Software Foundation, Inc. > License GPLv3+: GNU GPL version 3 or later > This is free software: you are free to change and redistribute it. > There is NO WARRANTY, to the extent permitted by law. Type "show copying" > and "show warranty" for details. > This GDB was configured as "x86_64-redhat-linux-gnu". > For bug reporting instructions, please see: > ... > Reading symbols from /home/cklaij/ReFRESCO/Dev/trunk/Suites/testSuite/FlatPlate_laminar/calcs/Grid64x64/refresco...done. > [New LWP 9810] > Missing separate debuginfo for /home/cklaij/ReFRESCO/Dev/trunk/Libs/install/licensing-1.55.0/sll/lib64/libssl.so.10 > Try: yum --enablerepo='*debug*' install /usr/lib/debug/.build-id/68/6a25d0a83d002183c835fa5694a8110c78d3bc.debug > Missing separate debuginfo for /home/cklaij/ReFRESCO/Dev/trunk/Libs/install/licensing-1.55.0/sll/lib64/libcrypto.so.10 > Try: yum --enablerepo='*debug*' install /usr/lib/debug/.build-id/68/d2958189303f421b1082abc33fd87338826c65.debug > [Thread debugging using libthread_db enabled] > Using host libthread_db library "/lib64/libthread_db.so.1". > Core was generated by `./refresco'. > Program terminated with signal 11, Segmentation fault. 
> #0 0x00002ba501c132bc in mc64wd_dist (n=0x5213270, ne=0x2, ip=0x1, > irn=0x51af520, a=0x51ef260, iperm=0x1000, num=0x7ffc545b2d94, > jperm=0x51e7260, out=0x51eb260, pr=0x51ef260, q=0x51f3260, l=0x51f7260, > u=0x51fb270, d__=0x5203270) > at /home/cklaij/ReFRESCO/Dev/trunk/Libs/install/Linux-x86_64-Intel/SuperLU_DIST_5.1.3/SRC/mc64ad_dist.c:2322 > 2322 if (iperm[i__] != 0 || iperm[i0] == 0) { > Missing separate debuginfos, use: debuginfo-install bzip2-libs-1.0.6-13.el7.x86_64 glibc-2.17-196.el7.x86_64 keyutils-libs-1.5.8-3.el7.x86_64 krb5-libs-1.15.1-8.el7.x86_64 libcom_err-1.42.9-10.el7.x86_64 libgcc-4.8.5-16.el7.x86_64 libselinux-2.5-11.el7.x86_64 libstdc++-4.8.5-16.el7.x86_64 libxml2-2.9.1-6.el7_2.3.x86_64 numactl-libs-2.0.9-6.el7_2.x86_64 pcre-8.32-17.el7.x86_64 xz-libs-5.2.2-1.el7.x86_64 zlib-1.2.7-17.el7.x86_64 > (gdb) bt > #0 0x00002ba501c132bc in mc64wd_dist (n=0x5213270, ne=0x2, ip=0x1, > irn=0x51af520, a=0x51ef260, iperm=0x1000, num=0x7ffc545b2d94, > jperm=0x51e7260, out=0x51eb260, pr=0x51ef260, q=0x51f3260, l=0x51f7260, > u=0x51fb270, d__=0x5203270) > at /home/cklaij/ReFRESCO/Dev/trunk/Libs/install/Linux-x86_64-Intel/SuperLU_DIST_5.1.3/SRC/mc64ad_dist.c:2322 > #1 0x00002ba501c0ef2b in mc64ad_dist (job=0x5213270, n=0x2, ne=0x1, > ip=0x51af520, irn=0x51ef260, a=0x1000, num=0x7ffc545b2db0, > cperm=0x51fb270, liw=0x5187d10, iw=0x51c3130, ldw=0x51af520, dw=0x517b570, > icntl=0x51e7260, info=0x2ba501c2e556 ) > at /home/cklaij/ReFRESCO/Dev/trunk/Libs/install/Linux-x86_64-Intel/SuperLU_DIST_5.1.3/SRC/mc64ad_dist.c:596 > #2 0x00002ba501c2e556 in dldperm_dist (job=0, n=0, nnz=0, colptr=0x51af520, > adjncy=0x51ef260, nzval=0x1000, perm=0x4f00, u=0x1000, v=0x517b001) > at /home/cklaij/ReFRESCO/Dev/trunk/Libs/install/Linux-x86_64-Intel/SuperLU_DIST_5.1.3/SRC/dldperm_dist.c:141 > #3 0x00002ba501c26296 in pdgssvx_ABglobal (options=0x5213270, A=0x2, > ScalePermstruct=0x1, B=0x51af520, ldb=85914208, nrhs=4096, grid=0x516da30, > LUstruct=0x517af40, berr=0x1000, > stat=0x2ba500b36a7d , info=0x517af58) > at /home/cklaij/ReFRESCO/Dev/trunk/Libs/install/Linux-x86_64-Intel/SuperLU_DIST_5.1.3/SRC/pdgssvx_ABglobal.c:716 > #4 0x00002ba500b36a7d in MatLUFactorNumeric_SuperLU_DIST (F=0x5213270, A=0x2, > ---Type to continue, or q to quit--- > info=0x1) > at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/mat/impls/aij/mpi/superlu_dist/superlu_dist.c:419 > #5 0x00002ba500b45a1a in MatLUFactorNumeric (fact=0x5213270, mat=0x2, > info=0x1) > at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/mat/interface/matrix.c:2996 > #6 0x00002ba500e9e6c7 in PCSetUp_LU (pc=0x5213270) > at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/ksp/pc/impls/factor/lu/lu.c:172 > #7 0x00002ba500ded084 in PCSetUp (pc=0x5213270) > at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/ksp/pc/interface/precon.c:968 > #8 0x00002ba500f2968d in KSPSetUp (ksp=0x5213270) > at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/ksp/ksp/interface/itfunc.c:390 > #9 0x00002ba500f257be in KSPSolve (ksp=0x5213270, b=0x2, x=0x4193510) > at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/ksp/ksp/interface/itfunc.c:599 > #10 0x00002ba500f3e142 in kspsolve_ (ksp=0x5213270, b=0x2, x=0x1, > __ierr=0x51af520) > at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/ksp/ksp/interface/ftn-auto/itfuncf.c:261 > ---Type to continue, or q to quit--- > #11 0x0000000000bccf71 in petsc_solvers::petsc_solvers_solve ( > regname='massTransport', rhs_c=..., phi_c=..., tol=0.01, maxiter=500, > 
res0=-9.2559631349317831e+61, usediter=0, .tmp.REGNAME.len_V$1790=13) > at petsc_solvers.F90:580 > #12 0x0000000000c2c9c5 in mass_momentum::mass_momentum_pressureprediction () > at mass_momentum.F90:989 > #13 0x0000000000c0ffc1 in mass_momentum::mass_momentum_core () > at mass_momentum.F90:626 > #14 0x0000000000c26a2c in mass_momentum::mass_momentum_systempcapply ( > aa_system=76390912, xx_system=68983024, rr_system=68984544, ierr=0) > at mass_momentum.F90:919 > #15 0x00002ba500eaa763 in ourshellapply (pc=0x48da200, x=0x41c98f0, > y=0x41c9ee0) > at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/ksp/pc/impls/shell/ftn-custom/zshellpcf.c:41 > #16 0x00002ba500ea79be in PCApply_Shell (pc=0x5213270, x=0x2, y=0x1) > at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/ksp/pc/impls/shell/shellpc.c:124 > #17 0x00002ba500df1800 in PCApply (pc=0x5213270, x=0x2, y=0x1) > at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/ksp/pc/interface/precon.c:482 > #18 0x00002ba500f2592a in KSPSolve (ksp=0x5213270, b=0x2, x=0x41c9ee0) > at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/ksp/ksp/interf---Type to continue, or q to quit--- > ace/itfunc.c:631 > #19 0x00002ba500f3e142 in kspsolve_ (ksp=0x5213270, b=0x2, x=0x1, > __ierr=0x51af520) > at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/ksp/ksp/interface/ftn-auto/itfuncf.c:261 > #20 0x0000000000c1b0ea in mass_momentum::mass_momentum_krylov () > at mass_momentum.F90:777 > #21 0x0000000000c0d242 in mass_momentum::mass_momentum_simple () > at mass_momentum.F90:548 > #22 0x0000000000c0841f in mass_momentum::mass_momentum_solve () > at mass_momentum.F90:465 > #23 0x000000000041b5ec in refresco () at refresco.F90:259 > #24 0x000000000041999e in main () > #25 0x00002ba508c98c05 in __libc_start_main () from /lib64/libc.so.6 > #26 0x00000000004198a3 in _start () > (gdb) > > > dr. ir. Christiaan Klaij | Senior Researcher | Research & Development > MARIN | T +31 317 49 33 44 | mailto:C.Klaij at marin.nl | http://www.marin.nl > > MARIN news: http://www.marin.nl/web/News/News-items/Seminar-Blauwe-toekomst-versnellen-van-innovaties-door-samenwerken.htm > > > > > From knepley at gmail.com Thu Dec 7 09:08:58 2017 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 7 Dec 2017 10:08:58 -0500 Subject: [petsc-users] Boundary condition IS In-Reply-To: <004001d36f2c$29b9c140$7d2d43c0$@mail.sjtu.edu.cn> References: <004001d36f2c$29b9c140$7d2d43c0$@mail.sjtu.edu.cn> Message-ID: On Thu, Dec 7, 2017 at 2:23 AM, Mohammad Hassan Baghaei < mhbaghaei at mail.sjtu.edu.cn> wrote: > Hello > > I am using DMPlex to construct a fully circular 2D domain for my PDEs. As > I am discretizing the PDEs using finite difference method, > How do you do FD on an unstructured mesh? > I need to know , to define the layout of PetscSection, how I can find the > boundary condition IS. I know the location of the boundaries in Sieve > chart. However, I noticed that I need to know the Index Set of that > location. Can I use the Sieve point number for Index Set. It seems I think > I am not familiar with IS. Can you help me? Thank for your help. > I like Yann's response. You can simplify the processing somewhat: 1) You can use DMPlexLabelComplete() to add in edges and vertices on the boundary 2) You can use DMPlexComputeCellGeometryFVM to compute the centroid Thanks, Matt > Amir > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. 
-- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From C.Klaij at marin.nl Thu Dec 7 09:15:00 2017 From: C.Klaij at marin.nl (Klaij, Christiaan) Date: Thu, 7 Dec 2017 15:15:00 +0000 Subject: [petsc-users] segfault after recent scientific linux upgrade In-Reply-To: <1512644541197.67462@marin.nl> References: <1512486474462.7744@marin.nl>, <1512545657930.54754@marin.nl>, , <1512644541197.67462@marin.nl> Message-ID: <1512659700316.50646@marin.nl> Satish, As a first try, I've kept petsc-3.7.5 and only replaced superlu by the new xsdk-0.2.0-rc1 version. Unfortunately, this doesn't fix the problem, see the backtrace below. Fande, Perhaps the problem is related to petsc, not superlu? What really puzzles me is that everything was working fine with petsc-3.7.5 and superlu_dist_5.3.1, it only broke after we updated Scientific Linux 7. So this bug (in petsc or in superlu) was already there but somehow not triggered before the SL7 update? Chris (gdb) bt #0 0x00002b38995fa30c in mc64wd_dist (n=0x3da6230, ne=0x2, ip=0x1, irn=0x3d424e0, a=0x3d82220, iperm=0x1000, num=0x7ffc505dd294, jperm=0x3d7a220, out=0x3d7e220, pr=0x3d82220, q=0x3d86220, l=0x3d8a220, u=0x3d8e230, d__=0x3d96230) at /home/cklaij/ReFRESCO/Dev/trunk/Libs/install/Linux-x86_64-Intel/superlu_dist-xsdk-0.2.0-rc1/SRC/mc64ad_dist.c:2322 #1 0x00002b38995f5f7b in mc64ad_dist (job=0x3da6230, n=0x2, ne=0x1, ip=0x3d424e0, irn=0x3d82220, a=0x1000, num=0x7ffc505dd2b0, cperm=0x3d8e230, liw=0x3d1acd0, iw=0x3d560f0, ldw=0x3d424e0, dw=0x3d0e530, icntl=0x3d7a220, info=0x2b3899615546 ) at /home/cklaij/ReFRESCO/Dev/trunk/Libs/install/Linux-x86_64-Intel/superlu_dist-xsdk-0.2.0-rc1/SRC/mc64ad_dist.c:596 #2 0x00002b3899615546 in dldperm_dist (job=0, n=0, nnz=0, colptr=0x3d424e0, adjncy=0x3d82220, nzval=0x1000, perm=0x4f00, u=0x1000, v=0x3d0e001) at /home/cklaij/ReFRESCO/Dev/trunk/Libs/install/Linux-x86_64-Intel/superlu_dist-xsdk-0.2.0-rc1/SRC/dldperm_dist.c:141 #3 0x00002b389960d286 in pdgssvx_ABglobal (options=0x3da6230, A=0x2, ScalePermstruct=0x1, B=0x3d424e0, ldb=64496160, nrhs=4096, grid=0x3d009f0, LUstruct=0x3d0df00, berr=0x1000, stat=0x2b389851da7d , info=0x3d0df18) at /home/cklaij/ReFRESCO/Dev/trunk/Libs/install/Linux-x86_64-Intel/superlu_dist-xsdk-0.2.0-rc1/SRC/pdgssvx_ABglobal.c:716 #4 0x00002b389851da7d in MatLUFactorNumeric_SuperLU_DIST (F=0x3da6230, A=0x2, ---Type to continue, or q to quit--- info=0x1) at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/mat/impls/aij/mpi/superlu_dist/superlu_dist.c:419 #5 0x00002b389852ca1a in MatLUFactorNumeric (fact=0x3da6230, mat=0x2, info=0x1) at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/mat/interface/matrix.c:2996 #6 0x00002b38988856c7 in PCSetUp_LU (pc=0x3da6230) at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/ksp/pc/impls/factor/lu/lu.c:172 #7 0x00002b38987d4084 in PCSetUp (pc=0x3da6230) at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/ksp/pc/interface/precon.c:968 #8 0x00002b389891068d in KSPSetUp (ksp=0x3da6230) at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/ksp/ksp/interface/itfunc.c:390 #9 0x00002b389890c7be in KSPSolve (ksp=0x3da6230, b=0x2, x=0x2d18d90) at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/ksp/ksp/interface/itfunc.c:599 #10 0x00002b3898925142 in kspsolve_ (ksp=0x3da6230, b=0x2, x=0x1, __ierr=0x3d424e0) at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/ksp/ksp/interface/ftn-auto/itfuncf.c:261 ---Type to continue, or 
q to quit--- #11 0x0000000000bccf71 in petsc_solvers::petsc_solvers_solve ( regname='massTransport', rhs_c=..., phi_c=..., tol=0.01, maxiter=500, res0=-9.2559631349317831e+61, usediter=0, .tmp.REGNAME.len_V$1790=13) at petsc_solvers.F90:580 #12 0x0000000000c2c9c5 in mass_momentum::mass_momentum_pressureprediction () at mass_momentum.F90:989 #13 0x0000000000c0ffc1 in mass_momentum::mass_momentum_core () at mass_momentum.F90:626 #14 0x0000000000c26a2c in mass_momentum::mass_momentum_systempcapply ( aa_system=54952496, xx_system=47570896, rr_system=47572416, ierr=0) at mass_momentum.F90:919 #15 0x00002b3898891763 in ourshellapply (pc=0x3468230, x=0x2d5dfd0, y=0x2d5e5c0) at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/ksp/pc/impls/shell/ftn-custom/zshellpcf.c:41 #16 0x00002b389888e9be in PCApply_Shell (pc=0x3da6230, x=0x2, y=0x1) at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/ksp/pc/impls/shell/shellpc.c:124 #17 0x00002b38987d8800 in PCApply (pc=0x3da6230, x=0x2, y=0x1) at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/ksp/pc/interface/precon.c:482 #18 0x00002b389890c92a in KSPSolve (ksp=0x3da6230, b=0x2, x=0x2d5e5c0) at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/ksp/ksp/interf---Type to continue, or q to quit--- ace/itfunc.c:631 #19 0x00002b3898925142 in kspsolve_ (ksp=0x3da6230, b=0x2, x=0x1, __ierr=0x3d424e0) at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/ksp/ksp/interface/ftn-auto/itfuncf.c:261 #20 0x0000000000c1b0ea in mass_momentum::mass_momentum_krylov () at mass_momentum.F90:777 #21 0x0000000000c0d242 in mass_momentum::mass_momentum_simple () at mass_momentum.F90:548 #22 0x0000000000c0841f in mass_momentum::mass_momentum_solve () at mass_momentum.F90:465 #23 0x000000000041b5ec in refresco () at refresco.F90:259 #24 0x000000000041999e in main () #25 0x00002b38a067fc05 in __libc_start_main () from /lib64/libc.so.6 #26 0x00000000004198a3 in _start () (gdb) dr. ir. Christiaan Klaij | Senior Researcher | Research & Development MARIN | T +31 317 49 33 44 | mailto:C.Klaij at marin.nl | http://www.marin.nl MARIN news: http://www.marin.nl/web/News/News-items/Simulator-facility-in-Houston-as-bridge-between-engineering-and-operations.htm ________________________________________ From: Klaij, Christiaan Sent: Thursday, December 07, 2017 12:02 PM To: petsc-users Cc: Fande Kong Subject: Re: [petsc-users] segfault after recent scientific linux upgrade Thanks Satish, I will give it shot and let you know. Chris ________________________________________ From: Satish Balay Sent: Wednesday, December 06, 2017 6:05 PM To: Klaij, Christiaan Cc: Fande Kong; petsc-users at mcs.anl.gov Subject: Re: [petsc-users] segfault after recent scientific linux upgrade petsc 3.7 - and 3.8 both default to superlu_dist snapshot: self.gitcommit = 'xsdk-0.2.0-rc1' If using petsc-3.7 - you can use latest maint-3.7 [i.e 3.7.7+] [3.7.7 is a latest bugfix update to 3.7 - so there should be no reason to stick to 3.7.5] But if you really want to stick to 3.7.5 you can use: --download-superlu_dist=1 --download-superlu_dist-commit=xsdk-0.2.0-rc1 Satish On Wed, 6 Dec 2017, Klaij, Christiaan wrote: > Fande, > > Thanks, that's good to know. Upgrading to 3.8.x is definitely my > long-term plan, but is there anything I can do short-term to fix > the problem while keeping 3.7.5? > > Chris > > dr. ir. 
Christiaan Klaij | Senior Researcher | Research & Development > MARIN | T +31 317 49 33 44 | C.Klaij at marin.nl | www.marin.nl > > [LinkedIn] [YouTube] [Twitter] [Facebook] > MARIN news: Seminar ?Blauwe toekomst: versnellen van innovaties door samenwerken > > ________________________________ > From: Fande Kong > Sent: Tuesday, December 05, 2017 4:30 PM > To: Klaij, Christiaan > Cc: petsc-users at mcs.anl.gov > Subject: Re: [petsc-users] segfault after recent scientific linux upgrade > > I would like to suggest you to use PETSc-3.8.x. Then the bug should go away. It is a known bug related to the reuse of the factorization pattern. > > > Fande, > > On Tue, Dec 5, 2017 at 8:07 AM, Klaij, Christiaan > wrote: > I'm running production software with petsc-3.7.5 and, among > others, superlu_dist 5.1.3 on scientific linux 7.4. > > After a recent update of SL7.4, notably of the kernel and glibc, > we found that superlu is somehow broken. Below's a backtrace of a > serial example. Is this a known issue? Could you please advice on > how to proceed (preferably while keeping 3.7.5 for now). > > Thanks, > Chris > > $ gdb ./refresco ./core.9810 > GNU gdb (GDB) Red Hat Enterprise Linux 7.6.1-100.el7 > Copyright (C) 2013 Free Software Foundation, Inc. > License GPLv3+: GNU GPL version 3 or later > This is free software: you are free to change and redistribute it. > There is NO WARRANTY, to the extent permitted by law. Type "show copying" > and "show warranty" for details. > This GDB was configured as "x86_64-redhat-linux-gnu". > For bug reporting instructions, please see: > ... > Reading symbols from /home/cklaij/ReFRESCO/Dev/trunk/Suites/testSuite/FlatPlate_laminar/calcs/Grid64x64/refresco...done. > [New LWP 9810] > Missing separate debuginfo for /home/cklaij/ReFRESCO/Dev/trunk/Libs/install/licensing-1.55.0/sll/lib64/libssl.so.10 > Try: yum --enablerepo='*debug*' install /usr/lib/debug/.build-id/68/6a25d0a83d002183c835fa5694a8110c78d3bc.debug > Missing separate debuginfo for /home/cklaij/ReFRESCO/Dev/trunk/Libs/install/licensing-1.55.0/sll/lib64/libcrypto.so.10 > Try: yum --enablerepo='*debug*' install /usr/lib/debug/.build-id/68/d2958189303f421b1082abc33fd87338826c65.debug > [Thread debugging using libthread_db enabled] > Using host libthread_db library "/lib64/libthread_db.so.1". > Core was generated by `./refresco'. > Program terminated with signal 11, Segmentation fault. 
> #0 0x00002ba501c132bc in mc64wd_dist (n=0x5213270, ne=0x2, ip=0x1, > irn=0x51af520, a=0x51ef260, iperm=0x1000, num=0x7ffc545b2d94, > jperm=0x51e7260, out=0x51eb260, pr=0x51ef260, q=0x51f3260, l=0x51f7260, > u=0x51fb270, d__=0x5203270) > at /home/cklaij/ReFRESCO/Dev/trunk/Libs/install/Linux-x86_64-Intel/SuperLU_DIST_5.1.3/SRC/mc64ad_dist.c:2322 > 2322 if (iperm[i__] != 0 || iperm[i0] == 0) { > Missing separate debuginfos, use: debuginfo-install bzip2-libs-1.0.6-13.el7.x86_64 glibc-2.17-196.el7.x86_64 keyutils-libs-1.5.8-3.el7.x86_64 krb5-libs-1.15.1-8.el7.x86_64 libcom_err-1.42.9-10.el7.x86_64 libgcc-4.8.5-16.el7.x86_64 libselinux-2.5-11.el7.x86_64 libstdc++-4.8.5-16.el7.x86_64 libxml2-2.9.1-6.el7_2.3.x86_64 numactl-libs-2.0.9-6.el7_2.x86_64 pcre-8.32-17.el7.x86_64 xz-libs-5.2.2-1.el7.x86_64 zlib-1.2.7-17.el7.x86_64 > (gdb) bt > #0 0x00002ba501c132bc in mc64wd_dist (n=0x5213270, ne=0x2, ip=0x1, > irn=0x51af520, a=0x51ef260, iperm=0x1000, num=0x7ffc545b2d94, > jperm=0x51e7260, out=0x51eb260, pr=0x51ef260, q=0x51f3260, l=0x51f7260, > u=0x51fb270, d__=0x5203270) > at /home/cklaij/ReFRESCO/Dev/trunk/Libs/install/Linux-x86_64-Intel/SuperLU_DIST_5.1.3/SRC/mc64ad_dist.c:2322 > #1 0x00002ba501c0ef2b in mc64ad_dist (job=0x5213270, n=0x2, ne=0x1, > ip=0x51af520, irn=0x51ef260, a=0x1000, num=0x7ffc545b2db0, > cperm=0x51fb270, liw=0x5187d10, iw=0x51c3130, ldw=0x51af520, dw=0x517b570, > icntl=0x51e7260, info=0x2ba501c2e556 ) > at /home/cklaij/ReFRESCO/Dev/trunk/Libs/install/Linux-x86_64-Intel/SuperLU_DIST_5.1.3/SRC/mc64ad_dist.c:596 > #2 0x00002ba501c2e556 in dldperm_dist (job=0, n=0, nnz=0, colptr=0x51af520, > adjncy=0x51ef260, nzval=0x1000, perm=0x4f00, u=0x1000, v=0x517b001) > at /home/cklaij/ReFRESCO/Dev/trunk/Libs/install/Linux-x86_64-Intel/SuperLU_DIST_5.1.3/SRC/dldperm_dist.c:141 > #3 0x00002ba501c26296 in pdgssvx_ABglobal (options=0x5213270, A=0x2, > ScalePermstruct=0x1, B=0x51af520, ldb=85914208, nrhs=4096, grid=0x516da30, > LUstruct=0x517af40, berr=0x1000, > stat=0x2ba500b36a7d , info=0x517af58) > at /home/cklaij/ReFRESCO/Dev/trunk/Libs/install/Linux-x86_64-Intel/SuperLU_DIST_5.1.3/SRC/pdgssvx_ABglobal.c:716 > #4 0x00002ba500b36a7d in MatLUFactorNumeric_SuperLU_DIST (F=0x5213270, A=0x2, > ---Type to continue, or q to quit--- > info=0x1) > at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/mat/impls/aij/mpi/superlu_dist/superlu_dist.c:419 > #5 0x00002ba500b45a1a in MatLUFactorNumeric (fact=0x5213270, mat=0x2, > info=0x1) > at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/mat/interface/matrix.c:2996 > #6 0x00002ba500e9e6c7 in PCSetUp_LU (pc=0x5213270) > at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/ksp/pc/impls/factor/lu/lu.c:172 > #7 0x00002ba500ded084 in PCSetUp (pc=0x5213270) > at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/ksp/pc/interface/precon.c:968 > #8 0x00002ba500f2968d in KSPSetUp (ksp=0x5213270) > at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/ksp/ksp/interface/itfunc.c:390 > #9 0x00002ba500f257be in KSPSolve (ksp=0x5213270, b=0x2, x=0x4193510) > at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/ksp/ksp/interface/itfunc.c:599 > #10 0x00002ba500f3e142 in kspsolve_ (ksp=0x5213270, b=0x2, x=0x1, > __ierr=0x51af520) > at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/ksp/ksp/interface/ftn-auto/itfuncf.c:261 > ---Type to continue, or q to quit--- > #11 0x0000000000bccf71 in petsc_solvers::petsc_solvers_solve ( > regname='massTransport', rhs_c=..., phi_c=..., tol=0.01, maxiter=500, > 
res0=-9.2559631349317831e+61, usediter=0, .tmp.REGNAME.len_V$1790=13) > at petsc_solvers.F90:580 > #12 0x0000000000c2c9c5 in mass_momentum::mass_momentum_pressureprediction () > at mass_momentum.F90:989 > #13 0x0000000000c0ffc1 in mass_momentum::mass_momentum_core () > at mass_momentum.F90:626 > #14 0x0000000000c26a2c in mass_momentum::mass_momentum_systempcapply ( > aa_system=76390912, xx_system=68983024, rr_system=68984544, ierr=0) > at mass_momentum.F90:919 > #15 0x00002ba500eaa763 in ourshellapply (pc=0x48da200, x=0x41c98f0, > y=0x41c9ee0) > at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/ksp/pc/impls/shell/ftn-custom/zshellpcf.c:41 > #16 0x00002ba500ea79be in PCApply_Shell (pc=0x5213270, x=0x2, y=0x1) > at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/ksp/pc/impls/shell/shellpc.c:124 > #17 0x00002ba500df1800 in PCApply (pc=0x5213270, x=0x2, y=0x1) > at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/ksp/pc/interface/precon.c:482 > #18 0x00002ba500f2592a in KSPSolve (ksp=0x5213270, b=0x2, x=0x41c9ee0) > at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/ksp/ksp/interf---Type to continue, or q to quit--- > ace/itfunc.c:631 > #19 0x00002ba500f3e142 in kspsolve_ (ksp=0x5213270, b=0x2, x=0x1, > __ierr=0x51af520) > at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/ksp/ksp/interface/ftn-auto/itfuncf.c:261 > #20 0x0000000000c1b0ea in mass_momentum::mass_momentum_krylov () > at mass_momentum.F90:777 > #21 0x0000000000c0d242 in mass_momentum::mass_momentum_simple () > at mass_momentum.F90:548 > #22 0x0000000000c0841f in mass_momentum::mass_momentum_solve () > at mass_momentum.F90:465 > #23 0x000000000041b5ec in refresco () at refresco.F90:259 > #24 0x000000000041999e in main () > #25 0x00002ba508c98c05 in __libc_start_main () from /lib64/libc.so.6 > #26 0x00000000004198a3 in _start () > (gdb) > > > dr. ir. Christiaan Klaij | Senior Researcher | Research & Development > MARIN | T +31 317 49 33 44 | mailto:C.Klaij at marin.nl | http://www.marin.nl > > MARIN news: http://www.marin.nl/web/News/News-items/Seminar-Blauwe-toekomst-versnellen-van-innovaties-door-samenwerken.htm > > > > > From fande.kong at inl.gov Thu Dec 7 09:26:49 2017 From: fande.kong at inl.gov (Kong, Fande) Date: Thu, 7 Dec 2017 08:26:49 -0700 Subject: [petsc-users] segfault after recent scientific linux upgrade In-Reply-To: <1512659700316.50646@marin.nl> References: <1512486474462.7744@marin.nl> <1512545657930.54754@marin.nl> <1512644541197.67462@marin.nl> <1512659700316.50646@marin.nl> Message-ID: On Thu, Dec 7, 2017 at 8:15 AM, Klaij, Christiaan wrote: > Satish, > > > > As a first try, I've kept petsc-3.7.5 and only replaced superlu > > by the new xsdk-0.2.0-rc1 version. Unfortunately, this doesn't > > fix the problem, see the backtrace below. > > > > Fande, > > > > Perhaps the problem is related to petsc, not superlu? > > > > What really puzzles me is that everything was working fine with > > petsc-3.7.5 and superlu_dist_5.3.1, it only broke after we > > updated Scientific Linux 7. So this bug (in petsc or in superlu) > > was already there but somehow not triggered before the SL7 > > update? > > > > Chris > > > > I do not know how you installed PETSc. It looks like you are keeping using the old superlu_dist. You have to delete the old package, and start from the scratch. PETSc does not automatically clean the old one. For me, I just simply "rm -rf $PETSC_ARCH" every time before I reinstall a new PETSc. 
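As a concrete illustration, a from-scratch rebuild along those lines could
look like the sketch below. The configure options are placeholders pieced
together from Satish's earlier suggestion in this thread, and the metis and
parmetis flags are an assumption; this is not the actual ReFRESCO build
recipe:

    # Sketch of a clean rebuild; paths and options are placeholders.
    cd $PETSC_DIR
    rm -rf $PETSC_ARCH   # drop the old build tree, including the stale superlu_dist
    ./configure --download-metis --download-parmetis \
                --download-superlu_dist=1 \
                --download-superlu_dist-commit=xsdk-0.2.0-rc1
    make all
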
Fande, -------------- next part -------------- An HTML attachment was scrubbed... URL: From C.Klaij at marin.nl Thu Dec 7 09:55:30 2017 From: C.Klaij at marin.nl (Klaij, Christiaan) Date: Thu, 7 Dec 2017 15:55:30 +0000 Subject: [petsc-users] segfault after recent scientific linux upgrade In-Reply-To: References: <1512486474462.7744@marin.nl> <1512545657930.54754@marin.nl> <1512644541197.67462@marin.nl> <1512659700316.50646@marin.nl>, Message-ID: <1512662130405.65916@marin.nl> Fande, That's what I did, I started the whole petsc config and build from scratch. The backtrace now says: /home/cklaij/ReFRESCO/Dev/trunk/Libs/install/Linux-x86_64-Intel/superlu_dist-xsdk-0.2.0-rc1/SRC/mc64ad_dist.c:2322 instead of the old: /home/cklaij/ReFRESCO/Dev/trunk/Libs/install/Linux-x86_64-Intel/SuperLU_DIST_5.1.3/SRC/mc64ad_dist.c:2322 which doesn't exist anymore. The entire install directory is new: $ ls -lh /home/cklaij/ReFRESCO/Dev/trunk/Libs/install drwxr-xr-x. 5 cklaij domain users 85 Dec 7 14:31 Linux-x86_64-Intel drwxr-xr-x. 6 cklaij domain users 75 Dec 7 14:33 petsc-3.7.5 $ ls -lh /home/cklaij/ReFRESCO/Dev/trunk/Libs/install/Linux-x86_64-Intel total 12K drwxr-xr-x. 7 cklaij domain users 4.0K Dec 7 14:30 metis-5.1.0-p3 drwxrwxr-x. 7 cklaij domain users 4.0K Dec 7 14:30 parmetis-4.0.3-p3 drwxrwxr-x. 11 cklaij domain users 4.0K Dec 7 14:31 superlu_dist-xsdk-0.2.0-rc1 Chris dr. ir. Christiaan Klaij | Senior Researcher | Research & Development MARIN | T +31 317 49 33 44 | C.Klaij at marin.nl | www.marin.nl [LinkedIn] [YouTube] [Twitter] [Facebook] MARIN news: MARIN at Marintec China, Shanghai, December 5-8 ________________________________ From: Kong, Fande Sent: Thursday, December 07, 2017 4:26 PM To: Klaij, Christiaan Cc: petsc-users Subject: Re: [petsc-users] segfault after recent scientific linux upgrade On Thu, Dec 7, 2017 at 8:15 AM, Klaij, Christiaan > wrote: Satish, As a first try, I've kept petsc-3.7.5 and only replaced superlu by the new xsdk-0.2.0-rc1 version. Unfortunately, this doesn't fix the problem, see the backtrace below. Fande, Perhaps the problem is related to petsc, not superlu? What really puzzles me is that everything was working fine with petsc-3.7.5 and superlu_dist_5.3.1, it only broke after we updated Scientific Linux 7. So this bug (in petsc or in superlu) was already there but somehow not triggered before the SL7 update? Chris I do not know how you installed PETSc. It looks like you are keeping using the old superlu_dist. You have to delete the old package, and start from the scratch. PETSc does not automatically clean the old one. For me, I just simply "rm -rf $PETSC_ARCH" every time before I reinstall a new PETSc. Fande, -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: imageb7697d.PNG Type: image/png Size: 293 bytes Desc: imageb7697d.PNG URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image89adfe.PNG Type: image/png Size: 331 bytes Desc: image89adfe.PNG URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image0e9eb8.PNG Type: image/png Size: 333 bytes Desc: image0e9eb8.PNG URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: image58fd92.PNG Type: image/png Size: 253 bytes Desc: image58fd92.PNG URL: From balay at mcs.anl.gov Thu Dec 7 11:07:15 2017 From: balay at mcs.anl.gov (Satish Balay) Date: Thu, 7 Dec 2017 11:07:15 -0600 Subject: [petsc-users] segfault after recent scientific linux upgrade In-Reply-To: <1512659700316.50646@marin.nl> References: <1512486474462.7744@marin.nl>, <1512545657930.54754@marin.nl>, , <1512644541197.67462@marin.nl> <1512659700316.50646@marin.nl> Message-ID: Could you check if your code is valgrind clean? Satish On Thu, 7 Dec 2017, Klaij, Christiaan wrote: > Satish, > > As a first try, I've kept petsc-3.7.5 and only replaced superlu > by the new xsdk-0.2.0-rc1 version. Unfortunately, this doesn't > fix the problem, see the backtrace below. > > Fande, > > Perhaps the problem is related to petsc, not superlu? > > What really puzzles me is that everything was working fine with > petsc-3.7.5 and superlu_dist_5.3.1, it only broke after we > updated Scientific Linux 7. So this bug (in petsc or in superlu) > was already there but somehow not triggered before the SL7 > update? > > Chris > > (gdb) bt > #0 0x00002b38995fa30c in mc64wd_dist (n=0x3da6230, ne=0x2, ip=0x1, > irn=0x3d424e0, a=0x3d82220, iperm=0x1000, num=0x7ffc505dd294, > jperm=0x3d7a220, out=0x3d7e220, pr=0x3d82220, q=0x3d86220, l=0x3d8a220, > u=0x3d8e230, d__=0x3d96230) > at /home/cklaij/ReFRESCO/Dev/trunk/Libs/install/Linux-x86_64-Intel/superlu_dist-xsdk-0.2.0-rc1/SRC/mc64ad_dist.c:2322 > #1 0x00002b38995f5f7b in mc64ad_dist (job=0x3da6230, n=0x2, ne=0x1, > ip=0x3d424e0, irn=0x3d82220, a=0x1000, num=0x7ffc505dd2b0, > cperm=0x3d8e230, liw=0x3d1acd0, iw=0x3d560f0, ldw=0x3d424e0, dw=0x3d0e530, > icntl=0x3d7a220, info=0x2b3899615546 ) > at /home/cklaij/ReFRESCO/Dev/trunk/Libs/install/Linux-x86_64-Intel/superlu_dist-xsdk-0.2.0-rc1/SRC/mc64ad_dist.c:596 > #2 0x00002b3899615546 in dldperm_dist (job=0, n=0, nnz=0, colptr=0x3d424e0, > adjncy=0x3d82220, nzval=0x1000, perm=0x4f00, u=0x1000, v=0x3d0e001) > at /home/cklaij/ReFRESCO/Dev/trunk/Libs/install/Linux-x86_64-Intel/superlu_dist-xsdk-0.2.0-rc1/SRC/dldperm_dist.c:141 > #3 0x00002b389960d286 in pdgssvx_ABglobal (options=0x3da6230, A=0x2, > ScalePermstruct=0x1, B=0x3d424e0, ldb=64496160, nrhs=4096, grid=0x3d009f0, > LUstruct=0x3d0df00, berr=0x1000, > stat=0x2b389851da7d , info=0x3d0df18) > at /home/cklaij/ReFRESCO/Dev/trunk/Libs/install/Linux-x86_64-Intel/superlu_dist-xsdk-0.2.0-rc1/SRC/pdgssvx_ABglobal.c:716 > #4 0x00002b389851da7d in MatLUFactorNumeric_SuperLU_DIST (F=0x3da6230, A=0x2, > ---Type to continue, or q to quit--- > info=0x1) > at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/mat/impls/aij/mpi/superlu_dist/superlu_dist.c:419 > #5 0x00002b389852ca1a in MatLUFactorNumeric (fact=0x3da6230, mat=0x2, > info=0x1) > at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/mat/interface/matrix.c:2996 > #6 0x00002b38988856c7 in PCSetUp_LU (pc=0x3da6230) > at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/ksp/pc/impls/factor/lu/lu.c:172 > #7 0x00002b38987d4084 in PCSetUp (pc=0x3da6230) > at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/ksp/pc/interface/precon.c:968 > #8 0x00002b389891068d in KSPSetUp (ksp=0x3da6230) > at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/ksp/ksp/interface/itfunc.c:390 > #9 0x00002b389890c7be in KSPSolve (ksp=0x3da6230, b=0x2, x=0x2d18d90) > at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/ksp/ksp/interface/itfunc.c:599 > #10 0x00002b3898925142 in kspsolve_ (ksp=0x3da6230, b=0x2, 
x=0x1, > __ierr=0x3d424e0) > at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/ksp/ksp/interface/ftn-auto/itfuncf.c:261 > ---Type to continue, or q to quit--- > #11 0x0000000000bccf71 in petsc_solvers::petsc_solvers_solve ( > regname='massTransport', rhs_c=..., phi_c=..., tol=0.01, maxiter=500, > res0=-9.2559631349317831e+61, usediter=0, .tmp.REGNAME.len_V$1790=13) > at petsc_solvers.F90:580 > #12 0x0000000000c2c9c5 in mass_momentum::mass_momentum_pressureprediction () > at mass_momentum.F90:989 > #13 0x0000000000c0ffc1 in mass_momentum::mass_momentum_core () > at mass_momentum.F90:626 > #14 0x0000000000c26a2c in mass_momentum::mass_momentum_systempcapply ( > aa_system=54952496, xx_system=47570896, rr_system=47572416, ierr=0) > at mass_momentum.F90:919 > #15 0x00002b3898891763 in ourshellapply (pc=0x3468230, x=0x2d5dfd0, > y=0x2d5e5c0) > at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/ksp/pc/impls/shell/ftn-custom/zshellpcf.c:41 > #16 0x00002b389888e9be in PCApply_Shell (pc=0x3da6230, x=0x2, y=0x1) > at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/ksp/pc/impls/shell/shellpc.c:124 > #17 0x00002b38987d8800 in PCApply (pc=0x3da6230, x=0x2, y=0x1) > at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/ksp/pc/interface/precon.c:482 > #18 0x00002b389890c92a in KSPSolve (ksp=0x3da6230, b=0x2, x=0x2d5e5c0) > at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/ksp/ksp/interf---Type to continue, or q to quit--- > ace/itfunc.c:631 > #19 0x00002b3898925142 in kspsolve_ (ksp=0x3da6230, b=0x2, x=0x1, > __ierr=0x3d424e0) > at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/ksp/ksp/interface/ftn-auto/itfuncf.c:261 > #20 0x0000000000c1b0ea in mass_momentum::mass_momentum_krylov () > at mass_momentum.F90:777 > #21 0x0000000000c0d242 in mass_momentum::mass_momentum_simple () > at mass_momentum.F90:548 > #22 0x0000000000c0841f in mass_momentum::mass_momentum_solve () > at mass_momentum.F90:465 > #23 0x000000000041b5ec in refresco () at refresco.F90:259 > #24 0x000000000041999e in main () > #25 0x00002b38a067fc05 in __libc_start_main () from /lib64/libc.so.6 > #26 0x00000000004198a3 in _start () > (gdb) > > > > dr. ir. Christiaan Klaij | Senior Researcher | Research & Development > MARIN | T +31 317 49 33 44 | mailto:C.Klaij at marin.nl | http://www.marin.nl > > MARIN news: http://www.marin.nl/web/News/News-items/Simulator-facility-in-Houston-as-bridge-between-engineering-and-operations.htm > > ________________________________________ > From: Klaij, Christiaan > Sent: Thursday, December 07, 2017 12:02 PM > To: petsc-users > Cc: Fande Kong > Subject: Re: [petsc-users] segfault after recent scientific linux upgrade > > Thanks Satish, I will give it shot and let you know. > > Chris > ________________________________________ > From: Satish Balay > Sent: Wednesday, December 06, 2017 6:05 PM > To: Klaij, Christiaan > Cc: Fande Kong; petsc-users at mcs.anl.gov > Subject: Re: [petsc-users] segfault after recent scientific linux upgrade > > petsc 3.7 - and 3.8 both default to superlu_dist snapshot: > > self.gitcommit = 'xsdk-0.2.0-rc1' > > If using petsc-3.7 - you can use latest maint-3.7 [i.e 3.7.7+] > [3.7.7 is a latest bugfix update to 3.7 - so there should be no reason to stick to 3.7.5] > > But if you really want to stick to 3.7.5 you can use: > > --download-superlu_dist=1 --download-superlu_dist-commit=xsdk-0.2.0-rc1 > > Satish > > On Wed, 6 Dec 2017, Klaij, Christiaan wrote: > > > Fande, > > > > Thanks, that's good to know. 
Upgrading to 3.8.x is definitely my > > long-term plan, but is there anything I can do short-term to fix > > the problem while keeping 3.7.5? > > > > Chris > > > > dr. ir. Christiaan Klaij | Senior Researcher | Research & Development > > MARIN | T +31 317 49 33 44 | C.Klaij at marin.nl | www.marin.nl > > > > [LinkedIn] [YouTube] [Twitter] [Facebook] > > MARIN news: Seminar ?Blauwe toekomst: versnellen van innovaties door samenwerken > > > > ________________________________ > > From: Fande Kong > > Sent: Tuesday, December 05, 2017 4:30 PM > > To: Klaij, Christiaan > > Cc: petsc-users at mcs.anl.gov > > Subject: Re: [petsc-users] segfault after recent scientific linux upgrade > > > > I would like to suggest you to use PETSc-3.8.x. Then the bug should go away. It is a known bug related to the reuse of the factorization pattern. > > > > > > Fande, > > > > On Tue, Dec 5, 2017 at 8:07 AM, Klaij, Christiaan > wrote: > > I'm running production software with petsc-3.7.5 and, among > > others, superlu_dist 5.1.3 on scientific linux 7.4. > > > > After a recent update of SL7.4, notably of the kernel and glibc, > > we found that superlu is somehow broken. Below's a backtrace of a > > serial example. Is this a known issue? Could you please advice on > > how to proceed (preferably while keeping 3.7.5 for now). > > > > Thanks, > > Chris > > > > $ gdb ./refresco ./core.9810 > > GNU gdb (GDB) Red Hat Enterprise Linux 7.6.1-100.el7 > > Copyright (C) 2013 Free Software Foundation, Inc. > > License GPLv3+: GNU GPL version 3 or later > > This is free software: you are free to change and redistribute it. > > There is NO WARRANTY, to the extent permitted by law. Type "show copying" > > and "show warranty" for details. > > This GDB was configured as "x86_64-redhat-linux-gnu". > > For bug reporting instructions, please see: > > ... > > Reading symbols from /home/cklaij/ReFRESCO/Dev/trunk/Suites/testSuite/FlatPlate_laminar/calcs/Grid64x64/refresco...done. > > [New LWP 9810] > > Missing separate debuginfo for /home/cklaij/ReFRESCO/Dev/trunk/Libs/install/licensing-1.55.0/sll/lib64/libssl.so.10 > > Try: yum --enablerepo='*debug*' install /usr/lib/debug/.build-id/68/6a25d0a83d002183c835fa5694a8110c78d3bc.debug > > Missing separate debuginfo for /home/cklaij/ReFRESCO/Dev/trunk/Libs/install/licensing-1.55.0/sll/lib64/libcrypto.so.10 > > Try: yum --enablerepo='*debug*' install /usr/lib/debug/.build-id/68/d2958189303f421b1082abc33fd87338826c65.debug > > [Thread debugging using libthread_db enabled] > > Using host libthread_db library "/lib64/libthread_db.so.1". > > Core was generated by `./refresco'. > > Program terminated with signal 11, Segmentation fault. 
> > #0 0x00002ba501c132bc in mc64wd_dist (n=0x5213270, ne=0x2, ip=0x1, > > irn=0x51af520, a=0x51ef260, iperm=0x1000, num=0x7ffc545b2d94, > > jperm=0x51e7260, out=0x51eb260, pr=0x51ef260, q=0x51f3260, l=0x51f7260, > > u=0x51fb270, d__=0x5203270) > > at /home/cklaij/ReFRESCO/Dev/trunk/Libs/install/Linux-x86_64-Intel/SuperLU_DIST_5.1.3/SRC/mc64ad_dist.c:2322 > > 2322 if (iperm[i__] != 0 || iperm[i0] == 0) { > > Missing separate debuginfos, use: debuginfo-install bzip2-libs-1.0.6-13.el7.x86_64 glibc-2.17-196.el7.x86_64 keyutils-libs-1.5.8-3.el7.x86_64 krb5-libs-1.15.1-8.el7.x86_64 libcom_err-1.42.9-10.el7.x86_64 libgcc-4.8.5-16.el7.x86_64 libselinux-2.5-11.el7.x86_64 libstdc++-4.8.5-16.el7.x86_64 libxml2-2.9.1-6.el7_2.3.x86_64 numactl-libs-2.0.9-6.el7_2.x86_64 pcre-8.32-17.el7.x86_64 xz-libs-5.2.2-1.el7.x86_64 zlib-1.2.7-17.el7.x86_64 > > (gdb) bt > > #0 0x00002ba501c132bc in mc64wd_dist (n=0x5213270, ne=0x2, ip=0x1, > > irn=0x51af520, a=0x51ef260, iperm=0x1000, num=0x7ffc545b2d94, > > jperm=0x51e7260, out=0x51eb260, pr=0x51ef260, q=0x51f3260, l=0x51f7260, > > u=0x51fb270, d__=0x5203270) > > at /home/cklaij/ReFRESCO/Dev/trunk/Libs/install/Linux-x86_64-Intel/SuperLU_DIST_5.1.3/SRC/mc64ad_dist.c:2322 > > #1 0x00002ba501c0ef2b in mc64ad_dist (job=0x5213270, n=0x2, ne=0x1, > > ip=0x51af520, irn=0x51ef260, a=0x1000, num=0x7ffc545b2db0, > > cperm=0x51fb270, liw=0x5187d10, iw=0x51c3130, ldw=0x51af520, dw=0x517b570, > > icntl=0x51e7260, info=0x2ba501c2e556 ) > > at /home/cklaij/ReFRESCO/Dev/trunk/Libs/install/Linux-x86_64-Intel/SuperLU_DIST_5.1.3/SRC/mc64ad_dist.c:596 > > #2 0x00002ba501c2e556 in dldperm_dist (job=0, n=0, nnz=0, colptr=0x51af520, > > adjncy=0x51ef260, nzval=0x1000, perm=0x4f00, u=0x1000, v=0x517b001) > > at /home/cklaij/ReFRESCO/Dev/trunk/Libs/install/Linux-x86_64-Intel/SuperLU_DIST_5.1.3/SRC/dldperm_dist.c:141 > > #3 0x00002ba501c26296 in pdgssvx_ABglobal (options=0x5213270, A=0x2, > > ScalePermstruct=0x1, B=0x51af520, ldb=85914208, nrhs=4096, grid=0x516da30, > > LUstruct=0x517af40, berr=0x1000, > > stat=0x2ba500b36a7d , info=0x517af58) > > at /home/cklaij/ReFRESCO/Dev/trunk/Libs/install/Linux-x86_64-Intel/SuperLU_DIST_5.1.3/SRC/pdgssvx_ABglobal.c:716 > > #4 0x00002ba500b36a7d in MatLUFactorNumeric_SuperLU_DIST (F=0x5213270, A=0x2, > > ---Type to continue, or q to quit--- > > info=0x1) > > at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/mat/impls/aij/mpi/superlu_dist/superlu_dist.c:419 > > #5 0x00002ba500b45a1a in MatLUFactorNumeric (fact=0x5213270, mat=0x2, > > info=0x1) > > at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/mat/interface/matrix.c:2996 > > #6 0x00002ba500e9e6c7 in PCSetUp_LU (pc=0x5213270) > > at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/ksp/pc/impls/factor/lu/lu.c:172 > > #7 0x00002ba500ded084 in PCSetUp (pc=0x5213270) > > at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/ksp/pc/interface/precon.c:968 > > #8 0x00002ba500f2968d in KSPSetUp (ksp=0x5213270) > > at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/ksp/ksp/interface/itfunc.c:390 > > #9 0x00002ba500f257be in KSPSolve (ksp=0x5213270, b=0x2, x=0x4193510) > > at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/ksp/ksp/interface/itfunc.c:599 > > #10 0x00002ba500f3e142 in kspsolve_ (ksp=0x5213270, b=0x2, x=0x1, > > __ierr=0x51af520) > > at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/ksp/ksp/interface/ftn-auto/itfuncf.c:261 > > ---Type to continue, or q to quit--- > > #11 0x0000000000bccf71 in 
petsc_solvers::petsc_solvers_solve ( > > regname='massTransport', rhs_c=..., phi_c=..., tol=0.01, maxiter=500, > > res0=-9.2559631349317831e+61, usediter=0, .tmp.REGNAME.len_V$1790=13) > > at petsc_solvers.F90:580 > > #12 0x0000000000c2c9c5 in mass_momentum::mass_momentum_pressureprediction () > > at mass_momentum.F90:989 > > #13 0x0000000000c0ffc1 in mass_momentum::mass_momentum_core () > > at mass_momentum.F90:626 > > #14 0x0000000000c26a2c in mass_momentum::mass_momentum_systempcapply ( > > aa_system=76390912, xx_system=68983024, rr_system=68984544, ierr=0) > > at mass_momentum.F90:919 > > #15 0x00002ba500eaa763 in ourshellapply (pc=0x48da200, x=0x41c98f0, > > y=0x41c9ee0) > > at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/ksp/pc/impls/shell/ftn-custom/zshellpcf.c:41 > > #16 0x00002ba500ea79be in PCApply_Shell (pc=0x5213270, x=0x2, y=0x1) > > at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/ksp/pc/impls/shell/shellpc.c:124 > > #17 0x00002ba500df1800 in PCApply (pc=0x5213270, x=0x2, y=0x1) > > at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/ksp/pc/interface/precon.c:482 > > #18 0x00002ba500f2592a in KSPSolve (ksp=0x5213270, b=0x2, x=0x41c9ee0) > > at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/ksp/ksp/interf---Type to continue, or q to quit--- > > ace/itfunc.c:631 > > #19 0x00002ba500f3e142 in kspsolve_ (ksp=0x5213270, b=0x2, x=0x1, > > __ierr=0x51af520) > > at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/ksp/ksp/interface/ftn-auto/itfuncf.c:261 > > #20 0x0000000000c1b0ea in mass_momentum::mass_momentum_krylov () > > at mass_momentum.F90:777 > > #21 0x0000000000c0d242 in mass_momentum::mass_momentum_simple () > > at mass_momentum.F90:548 > > #22 0x0000000000c0841f in mass_momentum::mass_momentum_solve () > > at mass_momentum.F90:465 > > #23 0x000000000041b5ec in refresco () at refresco.F90:259 > > #24 0x000000000041999e in main () > > #25 0x00002ba508c98c05 in __libc_start_main () from /lib64/libc.so.6 > > #26 0x00000000004198a3 in _start () > > (gdb) > > > > > > dr. ir. Christiaan Klaij | Senior Researcher | Research & Development > > MARIN | T +31 317 49 33 44 | mailto:C.Klaij at marin.nl | http://www.marin.nl > > > > MARIN news: http://www.marin.nl/web/News/News-items/Seminar-Blauwe-toekomst-versnellen-van-innovaties-door-samenwerken.htm > > > > > > > > > > > From C.Klaij at marin.nl Fri Dec 8 01:55:40 2017 From: C.Klaij at marin.nl (Klaij, Christiaan) Date: Fri, 8 Dec 2017 07:55:40 +0000 Subject: [petsc-users] segfault after recent scientific linux upgrade In-Reply-To: References: <1512486474462.7744@marin.nl>, <1512545657930.54754@marin.nl>, , <1512644541197.67462@marin.nl> <1512659700316.50646@marin.nl>, Message-ID: <1512719740356.73790@marin.nl> Almost valgrind clean. We use intelmpi so we need a handfull of suppressions. Chris dr. ir. Christiaan Klaij | Senior Researcher | Research & Development MARIN | T +31 317 49 33 44 | mailto:C.Klaij at marin.nl | http://www.marin.nl MARIN news: http://www.marin.nl/web/News/News-items/GROW-partners-innovate-together-in-offshore-wind-industry.htm ________________________________________ From: Satish Balay Sent: Thursday, December 07, 2017 6:07 PM To: Klaij, Christiaan Cc: petsc-users Subject: Re: [petsc-users] segfault after recent scientific linux upgrade Could you check if your code is valgrind clean? Satish On Thu, 7 Dec 2017, Klaij, Christiaan wrote: > Satish, > > As a first try, I've kept petsc-3.7.5 and only replaced superlu > by the new xsdk-0.2.0-rc1 version. 
Unfortunately, this doesn't > fix the problem, see the backtrace below. > > Fande, > > Perhaps the problem is related to petsc, not superlu? > > What really puzzles me is that everything was working fine with > petsc-3.7.5 and superlu_dist_5.3.1, it only broke after we > updated Scientific Linux 7. So this bug (in petsc or in superlu) > was already there but somehow not triggered before the SL7 > update? > > Chris > > (gdb) bt > #0 0x00002b38995fa30c in mc64wd_dist (n=0x3da6230, ne=0x2, ip=0x1, > irn=0x3d424e0, a=0x3d82220, iperm=0x1000, num=0x7ffc505dd294, > jperm=0x3d7a220, out=0x3d7e220, pr=0x3d82220, q=0x3d86220, l=0x3d8a220, > u=0x3d8e230, d__=0x3d96230) > at /home/cklaij/ReFRESCO/Dev/trunk/Libs/install/Linux-x86_64-Intel/superlu_dist-xsdk-0.2.0-rc1/SRC/mc64ad_dist.c:2322 > #1 0x00002b38995f5f7b in mc64ad_dist (job=0x3da6230, n=0x2, ne=0x1, > ip=0x3d424e0, irn=0x3d82220, a=0x1000, num=0x7ffc505dd2b0, > cperm=0x3d8e230, liw=0x3d1acd0, iw=0x3d560f0, ldw=0x3d424e0, dw=0x3d0e530, > icntl=0x3d7a220, info=0x2b3899615546 ) > at /home/cklaij/ReFRESCO/Dev/trunk/Libs/install/Linux-x86_64-Intel/superlu_dist-xsdk-0.2.0-rc1/SRC/mc64ad_dist.c:596 > #2 0x00002b3899615546 in dldperm_dist (job=0, n=0, nnz=0, colptr=0x3d424e0, > adjncy=0x3d82220, nzval=0x1000, perm=0x4f00, u=0x1000, v=0x3d0e001) > at /home/cklaij/ReFRESCO/Dev/trunk/Libs/install/Linux-x86_64-Intel/superlu_dist-xsdk-0.2.0-rc1/SRC/dldperm_dist.c:141 > #3 0x00002b389960d286 in pdgssvx_ABglobal (options=0x3da6230, A=0x2, > ScalePermstruct=0x1, B=0x3d424e0, ldb=64496160, nrhs=4096, grid=0x3d009f0, > LUstruct=0x3d0df00, berr=0x1000, > stat=0x2b389851da7d , info=0x3d0df18) > at /home/cklaij/ReFRESCO/Dev/trunk/Libs/install/Linux-x86_64-Intel/superlu_dist-xsdk-0.2.0-rc1/SRC/pdgssvx_ABglobal.c:716 > #4 0x00002b389851da7d in MatLUFactorNumeric_SuperLU_DIST (F=0x3da6230, A=0x2, > ---Type to continue, or q to quit--- > info=0x1) > at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/mat/impls/aij/mpi/superlu_dist/superlu_dist.c:419 > #5 0x00002b389852ca1a in MatLUFactorNumeric (fact=0x3da6230, mat=0x2, > info=0x1) > at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/mat/interface/matrix.c:2996 > #6 0x00002b38988856c7 in PCSetUp_LU (pc=0x3da6230) > at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/ksp/pc/impls/factor/lu/lu.c:172 > #7 0x00002b38987d4084 in PCSetUp (pc=0x3da6230) > at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/ksp/pc/interface/precon.c:968 > #8 0x00002b389891068d in KSPSetUp (ksp=0x3da6230) > at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/ksp/ksp/interface/itfunc.c:390 > #9 0x00002b389890c7be in KSPSolve (ksp=0x3da6230, b=0x2, x=0x2d18d90) > at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/ksp/ksp/interface/itfunc.c:599 > #10 0x00002b3898925142 in kspsolve_ (ksp=0x3da6230, b=0x2, x=0x1, > __ierr=0x3d424e0) > at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/ksp/ksp/interface/ftn-auto/itfuncf.c:261 > ---Type to continue, or q to quit--- > #11 0x0000000000bccf71 in petsc_solvers::petsc_solvers_solve ( > regname='massTransport', rhs_c=..., phi_c=..., tol=0.01, maxiter=500, > res0=-9.2559631349317831e+61, usediter=0, .tmp.REGNAME.len_V$1790=13) > at petsc_solvers.F90:580 > #12 0x0000000000c2c9c5 in mass_momentum::mass_momentum_pressureprediction () > at mass_momentum.F90:989 > #13 0x0000000000c0ffc1 in mass_momentum::mass_momentum_core () > at mass_momentum.F90:626 > #14 0x0000000000c26a2c in mass_momentum::mass_momentum_systempcapply ( > 
aa_system=54952496, xx_system=47570896, rr_system=47572416, ierr=0) > at mass_momentum.F90:919 > #15 0x00002b3898891763 in ourshellapply (pc=0x3468230, x=0x2d5dfd0, > y=0x2d5e5c0) > at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/ksp/pc/impls/shell/ftn-custom/zshellpcf.c:41 > #16 0x00002b389888e9be in PCApply_Shell (pc=0x3da6230, x=0x2, y=0x1) > at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/ksp/pc/impls/shell/shellpc.c:124 > #17 0x00002b38987d8800 in PCApply (pc=0x3da6230, x=0x2, y=0x1) > at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/ksp/pc/interface/precon.c:482 > #18 0x00002b389890c92a in KSPSolve (ksp=0x3da6230, b=0x2, x=0x2d5e5c0) > at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/ksp/ksp/interf---Type to continue, or q to quit--- > ace/itfunc.c:631 > #19 0x00002b3898925142 in kspsolve_ (ksp=0x3da6230, b=0x2, x=0x1, > __ierr=0x3d424e0) > at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/ksp/ksp/interface/ftn-auto/itfuncf.c:261 > #20 0x0000000000c1b0ea in mass_momentum::mass_momentum_krylov () > at mass_momentum.F90:777 > #21 0x0000000000c0d242 in mass_momentum::mass_momentum_simple () > at mass_momentum.F90:548 > #22 0x0000000000c0841f in mass_momentum::mass_momentum_solve () > at mass_momentum.F90:465 > #23 0x000000000041b5ec in refresco () at refresco.F90:259 > #24 0x000000000041999e in main () > #25 0x00002b38a067fc05 in __libc_start_main () from /lib64/libc.so.6 > #26 0x00000000004198a3 in _start () > (gdb) > > > > dr. ir. Christiaan Klaij | Senior Researcher | Research & Development > MARIN | T +31 317 49 33 44 | mailto:C.Klaij at marin.nl | http://www.marin.nl > > MARIN news: http://www.marin.nl/web/News/News-items/Simulator-facility-in-Houston-as-bridge-between-engineering-and-operations.htm > > ________________________________________ > From: Klaij, Christiaan > Sent: Thursday, December 07, 2017 12:02 PM > To: petsc-users > Cc: Fande Kong > Subject: Re: [petsc-users] segfault after recent scientific linux upgrade > > Thanks Satish, I will give it shot and let you know. > > Chris > ________________________________________ > From: Satish Balay > Sent: Wednesday, December 06, 2017 6:05 PM > To: Klaij, Christiaan > Cc: Fande Kong; petsc-users at mcs.anl.gov > Subject: Re: [petsc-users] segfault after recent scientific linux upgrade > > petsc 3.7 - and 3.8 both default to superlu_dist snapshot: > > self.gitcommit = 'xsdk-0.2.0-rc1' > > If using petsc-3.7 - you can use latest maint-3.7 [i.e 3.7.7+] > [3.7.7 is a latest bugfix update to 3.7 - so there should be no reason to stick to 3.7.5] > > But if you really want to stick to 3.7.5 you can use: > > --download-superlu_dist=1 --download-superlu_dist-commit=xsdk-0.2.0-rc1 > > Satish > > On Wed, 6 Dec 2017, Klaij, Christiaan wrote: > > > Fande, > > > > Thanks, that's good to know. Upgrading to 3.8.x is definitely my > > long-term plan, but is there anything I can do short-term to fix > > the problem while keeping 3.7.5? > > > > Chris > > > > dr. ir. 
Christiaan Klaij | Senior Researcher | Research & Development > > MARIN | T +31 317 49 33 44 | C.Klaij at marin.nl | www.marin.nl > > > > [LinkedIn] [YouTube] [Twitter] [Facebook] > > MARIN news: Seminar ?Blauwe toekomst: versnellen van innovaties door samenwerken > > > > ________________________________ > > From: Fande Kong > > Sent: Tuesday, December 05, 2017 4:30 PM > > To: Klaij, Christiaan > > Cc: petsc-users at mcs.anl.gov > > Subject: Re: [petsc-users] segfault after recent scientific linux upgrade > > > > I would like to suggest you to use PETSc-3.8.x. Then the bug should go away. It is a known bug related to the reuse of the factorization pattern. > > > > > > Fande, > > > > On Tue, Dec 5, 2017 at 8:07 AM, Klaij, Christiaan > wrote: > > I'm running production software with petsc-3.7.5 and, among > > others, superlu_dist 5.1.3 on scientific linux 7.4. > > > > After a recent update of SL7.4, notably of the kernel and glibc, > > we found that superlu is somehow broken. Below's a backtrace of a > > serial example. Is this a known issue? Could you please advice on > > how to proceed (preferably while keeping 3.7.5 for now). > > > > Thanks, > > Chris > > > > $ gdb ./refresco ./core.9810 > > GNU gdb (GDB) Red Hat Enterprise Linux 7.6.1-100.el7 > > Copyright (C) 2013 Free Software Foundation, Inc. > > License GPLv3+: GNU GPL version 3 or later > > This is free software: you are free to change and redistribute it. > > There is NO WARRANTY, to the extent permitted by law. Type "show copying" > > and "show warranty" for details. > > This GDB was configured as "x86_64-redhat-linux-gnu". > > For bug reporting instructions, please see: > > ... > > Reading symbols from /home/cklaij/ReFRESCO/Dev/trunk/Suites/testSuite/FlatPlate_laminar/calcs/Grid64x64/refresco...done. > > [New LWP 9810] > > Missing separate debuginfo for /home/cklaij/ReFRESCO/Dev/trunk/Libs/install/licensing-1.55.0/sll/lib64/libssl.so.10 > > Try: yum --enablerepo='*debug*' install /usr/lib/debug/.build-id/68/6a25d0a83d002183c835fa5694a8110c78d3bc.debug > > Missing separate debuginfo for /home/cklaij/ReFRESCO/Dev/trunk/Libs/install/licensing-1.55.0/sll/lib64/libcrypto.so.10 > > Try: yum --enablerepo='*debug*' install /usr/lib/debug/.build-id/68/d2958189303f421b1082abc33fd87338826c65.debug > > [Thread debugging using libthread_db enabled] > > Using host libthread_db library "/lib64/libthread_db.so.1". > > Core was generated by `./refresco'. > > Program terminated with signal 11, Segmentation fault. 
> > #0 0x00002ba501c132bc in mc64wd_dist (n=0x5213270, ne=0x2, ip=0x1, > > irn=0x51af520, a=0x51ef260, iperm=0x1000, num=0x7ffc545b2d94, > > jperm=0x51e7260, out=0x51eb260, pr=0x51ef260, q=0x51f3260, l=0x51f7260, > > u=0x51fb270, d__=0x5203270) > > at /home/cklaij/ReFRESCO/Dev/trunk/Libs/install/Linux-x86_64-Intel/SuperLU_DIST_5.1.3/SRC/mc64ad_dist.c:2322 > > 2322 if (iperm[i__] != 0 || iperm[i0] == 0) { > > Missing separate debuginfos, use: debuginfo-install bzip2-libs-1.0.6-13.el7.x86_64 glibc-2.17-196.el7.x86_64 keyutils-libs-1.5.8-3.el7.x86_64 krb5-libs-1.15.1-8.el7.x86_64 libcom_err-1.42.9-10.el7.x86_64 libgcc-4.8.5-16.el7.x86_64 libselinux-2.5-11.el7.x86_64 libstdc++-4.8.5-16.el7.x86_64 libxml2-2.9.1-6.el7_2.3.x86_64 numactl-libs-2.0.9-6.el7_2.x86_64 pcre-8.32-17.el7.x86_64 xz-libs-5.2.2-1.el7.x86_64 zlib-1.2.7-17.el7.x86_64 > > (gdb) bt > > #0 0x00002ba501c132bc in mc64wd_dist (n=0x5213270, ne=0x2, ip=0x1, > > irn=0x51af520, a=0x51ef260, iperm=0x1000, num=0x7ffc545b2d94, > > jperm=0x51e7260, out=0x51eb260, pr=0x51ef260, q=0x51f3260, l=0x51f7260, > > u=0x51fb270, d__=0x5203270) > > at /home/cklaij/ReFRESCO/Dev/trunk/Libs/install/Linux-x86_64-Intel/SuperLU_DIST_5.1.3/SRC/mc64ad_dist.c:2322 > > #1 0x00002ba501c0ef2b in mc64ad_dist (job=0x5213270, n=0x2, ne=0x1, > > ip=0x51af520, irn=0x51ef260, a=0x1000, num=0x7ffc545b2db0, > > cperm=0x51fb270, liw=0x5187d10, iw=0x51c3130, ldw=0x51af520, dw=0x517b570, > > icntl=0x51e7260, info=0x2ba501c2e556 ) > > at /home/cklaij/ReFRESCO/Dev/trunk/Libs/install/Linux-x86_64-Intel/SuperLU_DIST_5.1.3/SRC/mc64ad_dist.c:596 > > #2 0x00002ba501c2e556 in dldperm_dist (job=0, n=0, nnz=0, colptr=0x51af520, > > adjncy=0x51ef260, nzval=0x1000, perm=0x4f00, u=0x1000, v=0x517b001) > > at /home/cklaij/ReFRESCO/Dev/trunk/Libs/install/Linux-x86_64-Intel/SuperLU_DIST_5.1.3/SRC/dldperm_dist.c:141 > > #3 0x00002ba501c26296 in pdgssvx_ABglobal (options=0x5213270, A=0x2, > > ScalePermstruct=0x1, B=0x51af520, ldb=85914208, nrhs=4096, grid=0x516da30, > > LUstruct=0x517af40, berr=0x1000, > > stat=0x2ba500b36a7d , info=0x517af58) > > at /home/cklaij/ReFRESCO/Dev/trunk/Libs/install/Linux-x86_64-Intel/SuperLU_DIST_5.1.3/SRC/pdgssvx_ABglobal.c:716 > > #4 0x00002ba500b36a7d in MatLUFactorNumeric_SuperLU_DIST (F=0x5213270, A=0x2, > > ---Type to continue, or q to quit--- > > info=0x1) > > at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/mat/impls/aij/mpi/superlu_dist/superlu_dist.c:419 > > #5 0x00002ba500b45a1a in MatLUFactorNumeric (fact=0x5213270, mat=0x2, > > info=0x1) > > at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/mat/interface/matrix.c:2996 > > #6 0x00002ba500e9e6c7 in PCSetUp_LU (pc=0x5213270) > > at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/ksp/pc/impls/factor/lu/lu.c:172 > > #7 0x00002ba500ded084 in PCSetUp (pc=0x5213270) > > at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/ksp/pc/interface/precon.c:968 > > #8 0x00002ba500f2968d in KSPSetUp (ksp=0x5213270) > > at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/ksp/ksp/interface/itfunc.c:390 > > #9 0x00002ba500f257be in KSPSolve (ksp=0x5213270, b=0x2, x=0x4193510) > > at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/ksp/ksp/interface/itfunc.c:599 > > #10 0x00002ba500f3e142 in kspsolve_ (ksp=0x5213270, b=0x2, x=0x1, > > __ierr=0x51af520) > > at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/ksp/ksp/interface/ftn-auto/itfuncf.c:261 > > ---Type to continue, or q to quit--- > > #11 0x0000000000bccf71 in 
petsc_solvers::petsc_solvers_solve ( > > regname='massTransport', rhs_c=..., phi_c=..., tol=0.01, maxiter=500, > > res0=-9.2559631349317831e+61, usediter=0, .tmp.REGNAME.len_V$1790=13) > > at petsc_solvers.F90:580 > > #12 0x0000000000c2c9c5 in mass_momentum::mass_momentum_pressureprediction () > > at mass_momentum.F90:989 > > #13 0x0000000000c0ffc1 in mass_momentum::mass_momentum_core () > > at mass_momentum.F90:626 > > #14 0x0000000000c26a2c in mass_momentum::mass_momentum_systempcapply ( > > aa_system=76390912, xx_system=68983024, rr_system=68984544, ierr=0) > > at mass_momentum.F90:919 > > #15 0x00002ba500eaa763 in ourshellapply (pc=0x48da200, x=0x41c98f0, > > y=0x41c9ee0) > > at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/ksp/pc/impls/shell/ftn-custom/zshellpcf.c:41 > > #16 0x00002ba500ea79be in PCApply_Shell (pc=0x5213270, x=0x2, y=0x1) > > at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/ksp/pc/impls/shell/shellpc.c:124 > > #17 0x00002ba500df1800 in PCApply (pc=0x5213270, x=0x2, y=0x1) > > at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/ksp/pc/interface/precon.c:482 > > #18 0x00002ba500f2592a in KSPSolve (ksp=0x5213270, b=0x2, x=0x41c9ee0) > > at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/ksp/ksp/interf---Type to continue, or q to quit--- > > ace/itfunc.c:631 > > #19 0x00002ba500f3e142 in kspsolve_ (ksp=0x5213270, b=0x2, x=0x1, > > __ierr=0x51af520) > > at /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/ksp/ksp/interface/ftn-auto/itfuncf.c:261 > > #20 0x0000000000c1b0ea in mass_momentum::mass_momentum_krylov () > > at mass_momentum.F90:777 > > #21 0x0000000000c0d242 in mass_momentum::mass_momentum_simple () > > at mass_momentum.F90:548 > > #22 0x0000000000c0841f in mass_momentum::mass_momentum_solve () > > at mass_momentum.F90:465 > > #23 0x000000000041b5ec in refresco () at refresco.F90:259 > > #24 0x000000000041999e in main () > > #25 0x00002ba508c98c05 in __libc_start_main () from /lib64/libc.so.6 > > #26 0x00000000004198a3 in _start () > > (gdb) > > > > > > dr. ir. Christiaan Klaij | Senior Researcher | Research & Development > > MARIN | T +31 317 49 33 44 | mailto:C.Klaij at marin.nl | http://www.marin.nl > > > > MARIN news: http://www.marin.nl/web/News/News-items/Seminar-Blauwe-toekomst-versnellen-van-innovaties-door-samenwerken.htm > > > > > > > > > > > From s.lanthaler at gmail.com Mon Dec 11 12:41:57 2017 From: s.lanthaler at gmail.com (Samuel Lanthaler) Date: Mon, 11 Dec 2017 19:41:57 +0100 Subject: [petsc-users] MatCreateShell, MatShellGetContext, MatShellSetContext in fortran Message-ID: <61890642-9a16-7730-49f1-efb889c38fc4@gmail.com> Dear petsc-/slepc-users, I have been trying to understand matrix-free/shell matrices in PETSc for eventual use in solving a non-linear eigenvalue problem using SLEPC. But I seem to be having trouble with calls to MatShellGetContext. As far as I understand, this function should initialize a pointer (second argument) so that the subroutine output will point to the context associated with my shell-matrix (let's say of TYPE(MatCtx))? When calling that subroutine, I get the following error message: [0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- [0]PETSC ERROR: Null argument, when expecting valid pointer [0]PETSC ERROR: Null Pointer: Parameter # 2 In my code, the second input argument to the routine is a null-pointer of TYPE(MatCtx),POINTER :: arg2. Which the error message appears to be unhappy with. 
I noticed that there is no error message if I instead pass an object TYPE(MatCtx) :: arg2 to the routine... which doesn't really make sense to me? Could someone maybe explain to me what is going on, here? Just in case, let me also attach my concrete example code (it is supposed to be a Fortran version of the slepc-example in slepc-3.8.1/src/nep/examples/tutorials/ex21.c). I have added an extra call to MatShellGetContext on line 138, after the function and jacobian should supposedly have been set up. Thanks a lot for your help! Cheers, Samuel -------------- next part -------------- A non-text attachment was scrubbed... Name: test.f90 Type: text/x-fortran Size: 11400 bytes Desc: not available URL: From s.lanthaler at gmail.com Tue Dec 12 03:52:14 2017 From: s.lanthaler at gmail.com (Samuel Lanthaler) Date: Tue, 12 Dec 2017 10:52:14 +0100 Subject: [petsc-users] MatCreateShell, MatShellGetContext, MatShellSetContext in fortran In-Reply-To: <61890642-9a16-7730-49f1-efb889c38fc4@gmail.com> References: <61890642-9a16-7730-49f1-efb889c38fc4@gmail.com> Message-ID: <5A2FA6CE.3050300@gmail.com> Let me also add a minimal example (relying only on petsc), which leads to the same error message on my machine. Again, what I'm trying to do is very simple: 1. MatCreateShell => Initialize Mat :: F 2. MatShellSetContext => set the context of F to ctxF (looking at the petsc source code, this call actually seems to be superfluous, but nevermind) 3. MatShellGetContext => get the pointer ctxF_pt to point to the matrix context I'm getting an error message in the third step. [0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- [0]PETSC ERROR: Null argument, when expecting valid pointer [0]PETSC ERROR: Null Pointer: Parameter # 2 Again, thanks for your help! Cheers, Samuel On 12/11/2017 07:41 PM, Samuel Lanthaler wrote: > Dear petsc-/slepc-users, > > I have been trying to understand matrix-free/shell matrices in PETSc > for eventual use in solving a non-linear eigenvalue problem using > SLEPC. But I seem to be having trouble with calls to > MatShellGetContext. As far as I understand, this function should > initialize a pointer (second argument) so that the subroutine output > will point to the context associated with my shell-matrix (let's say > of TYPE(MatCtx))? When calling that subroutine, I get the following > error message: > > [0]PETSC ERROR: --------------------- Error Message > -------------------------------------------------------------- > [0]PETSC ERROR: Null argument, when expecting valid pointer > [0]PETSC ERROR: Null Pointer: Parameter # 2 > > In my code, the second input argument to the routine is a null-pointer > of TYPE(MatCtx),POINTER :: arg2. Which the error message appears to be > unhappy with. I noticed that there is no error message if I instead > pass an object TYPE(MatCtx) :: arg2 to the routine... which doesn't > really make sense to me? Could someone maybe explain to me what is > going on, here? > > Just in case, let me also attach my concrete example code (it is > supposed to be a Fortran version of the slepc-example in > slepc-3.8.1/src/nep/examples/tutorials/ex21.c). I have added an extra > call to MatShellGetContext on line 138, after the function and > jacobian should supposedly have been set up. > > Thanks a lot for your help! > > Cheers, > > Samuel > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: matshell.f90 Type: text/x-fortran Size: 2503 bytes Desc: not available URL: From jroman at dsic.upv.es Tue Dec 12 04:47:06 2017 From: jroman at dsic.upv.es (Jose E. Roman) Date: Tue, 12 Dec 2017 11:47:06 +0100 Subject: [petsc-users] MatCreateShell, MatShellGetContext, MatShellSetContext in fortran In-Reply-To: <5A2FA6CE.3050300@gmail.com> References: <61890642-9a16-7730-49f1-efb889c38fc4@gmail.com> <5A2FA6CE.3050300@gmail.com> Message-ID: I cannot answer the question about the context, don't understand how pointers work in Fortran. Maybe someone from PETSc can give you advice. Have you seen SLEPc's ex20f90? It solves a NEP without needing shell matrices. http://slepc.upv.es/documentation/current/src/nep/examples/tutorials/ex20f90.F90.html Isn't it enough for your needs? I am interested in knowing more details about your nonlinear eigenproblem. If you want, send a description to my personal email or to slepc-maint. Thanks. Jose > El 12 dic 2017, a las 10:52, Samuel Lanthaler escribi?: > > Let me also add a minimal example (relying only on petsc), which leads to the same error message on my machine. Again, what I'm trying to do is very simple: > ? MatCreateShell => Initialize Mat :: F > ? MatShellSetContext => set the context of F to ctxF (looking at the petsc source code, this call actually seems to be superfluous, but nevermind) > ? MatShellGetContext => get the pointer ctxF_pt to point to the matrix context > I'm getting an error message in the third step. > > [0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- > [0]PETSC ERROR: Null argument, when expecting valid pointer > [0]PETSC ERROR: Null Pointer: Parameter # 2 > Again, thanks for your help! > Cheers, > Samuel > > On 12/11/2017 07:41 PM, Samuel Lanthaler wrote: >> Dear petsc-/slepc-users, >> >> I have been trying to understand matrix-free/shell matrices in PETSc for eventual use in solving a non-linear eigenvalue problem using SLEPC. But I seem to be having trouble with calls to MatShellGetContext. As far as I understand, this function should initialize a pointer (second argument) so that the subroutine output will point to the context associated with my shell-matrix (let's say of TYPE(MatCtx))? When calling that subroutine, I get the following error message: >> >> [0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- >> [0]PETSC ERROR: Null argument, when expecting valid pointer >> [0]PETSC ERROR: Null Pointer: Parameter # 2 >> >> In my code, the second input argument to the routine is a null-pointer of TYPE(MatCtx),POINTER :: arg2. Which the error message appears to be unhappy with. I noticed that there is no error message if I instead pass an object TYPE(MatCtx) :: arg2 to the routine... which doesn't really make sense to me? Could someone maybe explain to me what is going on, here? >> >> Just in case, let me also attach my concrete example code (it is supposed to be a Fortran version of the slepc-example in slepc-3.8.1/src/nep/examples/tutorials/ex21.c). I have added an extra call to MatShellGetContext on line 138, after the function and jacobian should supposedly have been set up. >> >> Thanks a lot for your help! 
>> >> Cheers, >> >> Samuel >> > > From alexlindsay239 at gmail.com Tue Dec 12 09:54:08 2017 From: alexlindsay239 at gmail.com (Alexander Lindsay) Date: Tue, 12 Dec 2017 08:54:08 -0700 Subject: [petsc-users] Matrix-free vs finite differenced Jacobian approximation Message-ID: I'm working with a relatively new set of physics (new to me) and the Jacobians are bad. While debugging the Jacobians, I've been running with different finite difference approximations. I've found in general that matrix-free approximation of the Jacobian action leads to much better convergence than explicitly forming the Jacobian matrix using finite differences. Should I be surprised by this or is this something that's known? It would be great if anyone has a reference they could point me to. Just to illustrate the different solves: Matrix-free (preconditioner formed from finite-differenced approximation of Jacobian): 0 Nonlinear |R| = 2.259203e-02 0 Linear |R| = 2.259203e-02 1 Linear |R| = 2.259203e-02 2 Linear |R| = 1.682777e-02 3 Linear |R| = 9.274378e-09 1 Nonlinear |R| = 1.744830e-02 0 Linear |R| = 1.744830e-02 1 Linear |R| = 2.335817e-08 2 Nonlinear |R| = 2.704512e-08 0 Linear |R| = 2.704512e-08 1 Linear |R| = 1.265577e-14 3 Nonlinear |R| = 1.478929e-10 Solve Converged! Explicit formation of Jacobian using finite-differences (preconditioner formed from same matrix): 0 Nonlinear |R| = 2.259203e-02 0 Linear |R| = 2.259203e-02 1 Linear |R| = 2.259203e-02 2 Linear |R| = 1.481452e-12 1 Nonlinear |R| = 2.258733e-02 0 Linear |R| = 2.258733e-02 1 Linear |R| = 2.258520e-02 2 Linear |R| = 1.594456e-07 2 Nonlinear |R| = 2.258733e-02 0 Linear |R| = 2.258733e-02 1 Linear |R| = 2.258520e-02 2 Linear |R| = 1.869913e-07 Nonlinear solve did not converge due to DIVERGED_LINE_SEARCH iterations 2 -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Tue Dec 12 10:00:49 2017 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 12 Dec 2017 11:00:49 -0500 Subject: [petsc-users] Matrix-free vs finite differenced Jacobian approximation In-Reply-To: References: Message-ID: On Tue, Dec 12, 2017 at 10:54 AM, Alexander Lindsay < alexlindsay239 at gmail.com> wrote: > I'm working with a relatively new set of physics (new to me) and the > Jacobians are bad. While debugging the Jacobians, I've been running with > different finite difference approximations. I've found in general that > matrix-free approximation of the Jacobian action leads to much better > convergence than explicitly forming the Jacobian matrix using finite > differences. Should I be surprised by this or is this something that's > known? It would be great if anyone has a reference they could point me to. > They should give the same answer. Maybe I do not understand what you are doing. First, use -pc_type lu so that the linear solver is not a factor. Matt > Just to illustrate the different solves: > > Matrix-free (preconditioner formed from finite-differenced approximation > of Jacobian): > > 0 Nonlinear |R| = 2.259203e-02 > 0 Linear |R| = 2.259203e-02 > 1 Linear |R| = 2.259203e-02 > 2 Linear |R| = 1.682777e-02 > 3 Linear |R| = 9.274378e-09 > 1 Nonlinear |R| = 1.744830e-02 > 0 Linear |R| = 1.744830e-02 > 1 Linear |R| = 2.335817e-08 > 2 Nonlinear |R| = 2.704512e-08 > 0 Linear |R| = 2.704512e-08 > 1 Linear |R| = 1.265577e-14 > 3 Nonlinear |R| = 1.478929e-10 > Solve Converged! 
> > Explicit formation of Jacobian using finite-differences (preconditioner > formed from same matrix): > > 0 Nonlinear |R| = 2.259203e-02 > 0 Linear |R| = 2.259203e-02 > 1 Linear |R| = 2.259203e-02 > 2 Linear |R| = 1.481452e-12 > 1 Nonlinear |R| = 2.258733e-02 > 0 Linear |R| = 2.258733e-02 > 1 Linear |R| = 2.258520e-02 > 2 Linear |R| = 1.594456e-07 > 2 Nonlinear |R| = 2.258733e-02 > 0 Linear |R| = 2.258733e-02 > 1 Linear |R| = 2.258520e-02 > 2 Linear |R| = 1.869913e-07 > Nonlinear solve did not converge due to DIVERGED_LINE_SEARCH iterations 2 > > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Tue Dec 12 10:11:33 2017 From: bsmith at mcs.anl.gov (Smith, Barry F.) Date: Tue, 12 Dec 2017 16:11:33 +0000 Subject: [petsc-users] Matrix-free vs finite differenced Jacobian approximation In-Reply-To: References: Message-ID: > On Dec 12, 2017, at 9:54 AM, Alexander Lindsay wrote: > > I'm working with a relatively new set of physics (new to me) and the Jacobians are bad. While debugging the Jacobians, I've been running with different finite difference approximations. I've found in general that matrix-free approximation of the Jacobian action leads to much better convergence than explicitly forming the Jacobian matrix using finite differences. How are you forming the Jacobian matrix using finite differences? Are you using PETSc routines (which?) or your own? The behavior you report is almost always due to incorrect Jacobian entries. If you are using your own computed Jacobian you can run with -snes_type test to compare with sample Jacobians that PETSc produces. Barry > Should I be surprised by this or is this something that's known? It would be great if anyone has a reference they could point me to. > > Just to illustrate the different solves: > > Matrix-free (preconditioner formed from finite-differenced approximation of Jacobian): > > 0 Nonlinear |R| = 2.259203e-02 > 0 Linear |R| = 2.259203e-02 > 1 Linear |R| = 2.259203e-02 > 2 Linear |R| = 1.682777e-02 > 3 Linear |R| = 9.274378e-09 > 1 Nonlinear |R| = 1.744830e-02 > 0 Linear |R| = 1.744830e-02 > 1 Linear |R| = 2.335817e-08 > 2 Nonlinear |R| = 2.704512e-08 > 0 Linear |R| = 2.704512e-08 > 1 Linear |R| = 1.265577e-14 > 3 Nonlinear |R| = 1.478929e-10 > Solve Converged! > > Explicit formation of Jacobian using finite-differences (preconditioner formed from same matrix): > > 0 Nonlinear |R| = 2.259203e-02 > 0 Linear |R| = 2.259203e-02 > 1 Linear |R| = 2.259203e-02 > 2 Linear |R| = 1.481452e-12 > 1 Nonlinear |R| = 2.258733e-02 > 0 Linear |R| = 2.258733e-02 > 1 Linear |R| = 2.258520e-02 > 2 Linear |R| = 1.594456e-07 > 2 Nonlinear |R| = 2.258733e-02 > 0 Linear |R| = 2.258733e-02 > 1 Linear |R| = 2.258520e-02 > 2 Linear |R| = 1.869913e-07 > Nonlinear solve did not converge due to DIVERGED_LINE_SEARCH iterations 2 > > > From zakaryah at gmail.com Tue Dec 12 10:12:29 2017 From: zakaryah at gmail.com (zakaryah .) Date: Tue, 12 Dec 2017 11:12:29 -0500 Subject: [petsc-users] Matrix-free vs finite differenced Jacobian approximation In-Reply-To: References: Message-ID: When you say "Jacobians are bad" and "debugging the Jacobians", do you mean that the hand-coded Jacobian is wrong? 
In that case, why would you be surprised that the finite difference Jacobians, which are "correct" to approximation error, perform better? Otherwise, what does "Jacobians are bad" mean - ill-conditioned? Singular? Not symmetric? Not positive definite? -------------- next part -------------- An HTML attachment was scrubbed... URL: From alexlindsay239 at gmail.com Tue Dec 12 10:30:44 2017 From: alexlindsay239 at gmail.com (Alexander Lindsay) Date: Tue, 12 Dec 2017 09:30:44 -0700 Subject: [petsc-users] Matrix-free vs finite differenced Jacobian approximation In-Reply-To: References: Message-ID: I'm not using any hand-coded Jacobians. Case 1 options: -snes_fd -pc_type lu 0 Nonlinear |R| = 2.259203e-02 0 Linear |R| = 2.259203e-02 1 Linear |R| = 7.821248e-11 1 Nonlinear |R| = 2.258733e-02 0 Linear |R| = 2.258733e-02 1 Linear |R| = 5.277296e-11 2 Nonlinear |R| = 2.258733e-02 0 Linear |R| = 2.258733e-02 1 Linear |R| = 5.993971e-11 Nonlinear solve did not converge due to DIVERGED_LINE_SEARCH iterations 2 Case 2 options: -snes_fd -snes_mf_operator -pc_type lu 0 Nonlinear |R| = 2.259203e-02 0 Linear |R| = 2.259203e-02 1 Linear |R| = 2.258733e-02 2 Linear |R| = 3.103342e-06 3 Linear |R| = 6.779865e-12 1 Nonlinear |R| = 7.497740e-06 0 Linear |R| = 7.497740e-06 1 Linear |R| = 8.265413e-12 2 Nonlinear |R| = 7.993729e-12 Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE iterations 2 On Tue, Dec 12, 2017 at 9:12 AM, zakaryah . wrote: > When you say "Jacobians are bad" and "debugging the Jacobians", do you > mean that the hand-coded Jacobian is wrong? In that case, why would you be > surprised that the finite difference Jacobians, which are "correct" to > approximation error, perform better? Otherwise, what does "Jacobians are > bad" mean - ill-conditioned? Singular? Not symmetric? Not positive > definite? > -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Tue Dec 12 10:43:03 2017 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 12 Dec 2017 11:43:03 -0500 Subject: [petsc-users] Matrix-free vs finite differenced Jacobian approximation In-Reply-To: References: Message-ID: On Tue, Dec 12, 2017 at 11:30 AM, Alexander Lindsay < alexlindsay239 at gmail.com> wrote: > I'm not using any hand-coded Jacobians. > This looks to me like the rules for FormFunction/Jacobian() are being broken. If the residual function depends on some third variable, and it changes between calls independent of the solution U, then the stored Jacobian could look wrong, but one done every time on the fly might converge. Matt > Case 1 options: -snes_fd -pc_type lu > > 0 Nonlinear |R| = 2.259203e-02 > 0 Linear |R| = 2.259203e-02 > 1 Linear |R| = 7.821248e-11 > 1 Nonlinear |R| = 2.258733e-02 > 0 Linear |R| = 2.258733e-02 > 1 Linear |R| = 5.277296e-11 > 2 Nonlinear |R| = 2.258733e-02 > 0 Linear |R| = 2.258733e-02 > 1 Linear |R| = 5.993971e-11 > Nonlinear solve did not converge due to DIVERGED_LINE_SEARCH iterations 2 > > Case 2 options: -snes_fd -snes_mf_operator -pc_type lu > > 0 Nonlinear |R| = 2.259203e-02 > 0 Linear |R| = 2.259203e-02 > 1 Linear |R| = 2.258733e-02 > 2 Linear |R| = 3.103342e-06 > 3 Linear |R| = 6.779865e-12 > 1 Nonlinear |R| = 7.497740e-06 > 0 Linear |R| = 7.497740e-06 > 1 Linear |R| = 8.265413e-12 > 2 Nonlinear |R| = 7.993729e-12 > Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE iterations 2 > > On Tue, Dec 12, 2017 at 9:12 AM, zakaryah . 
wrote: > >> When you say "Jacobians are bad" and "debugging the Jacobians", do you >> mean that the hand-coded Jacobian is wrong? In that case, why would you be >> surprised that the finite difference Jacobians, which are "correct" to >> approximation error, perform better? Otherwise, what does "Jacobians are >> bad" mean - ill-conditioned? Singular? Not symmetric? Not positive >> definite? >> > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From alexlindsay239 at gmail.com Tue Dec 12 11:26:46 2017 From: alexlindsay239 at gmail.com (Alexander Lindsay) Date: Tue, 12 Dec 2017 10:26:46 -0700 Subject: [petsc-users] Matrix-free vs finite differenced Jacobian approximation In-Reply-To: References: Message-ID: Ok, I'm going to go back on my original statement...the physics being run here is a sub-set of a much larger set of physics; for the current set the hand-coded Jacobian actually appears to be quite good. With hand-coded Jacobian, -pc_type lu, the convergence is perfect: 0 Nonlinear |R| = 2.259203e-02 0 Linear |R| = 2.259203e-02 1 Linear |R| = 1.129089e-10 1 Nonlinear |R| = 6.295583e-11 So yea I guess at this point I'm just curious about the different behavior between `-snes_fd` and `-snes_fd -snes_mf_operator`. Does the hand-coded result change your opinion Matt that the rules for FormFunction/Jacobian might be being violated? I understand that a finite difference approximation of the true Jacobian *is an approximation. *However, in the absence of possible complications like Matt suggested where an on-the-fly calculation might stand a better chance of capturing the behavior, I would expect both `-snes_mf_operator -snes_fd` and `-snes_fd` to suffer from the same approximations, right? On Tue, Dec 12, 2017 at 9:43 AM, Matthew Knepley wrote: > On Tue, Dec 12, 2017 at 11:30 AM, Alexander Lindsay < > alexlindsay239 at gmail.com> wrote: > >> I'm not using any hand-coded Jacobians. >> > > This looks to me like the rules for FormFunction/Jacobian() are being > broken. If the residual function > depends on some third variable, and it changes between calls independent > of the solution U, then > the stored Jacobian could look wrong, but one done every time on the fly > might converge. > > Matt > > >> Case 1 options: -snes_fd -pc_type lu >> >> 0 Nonlinear |R| = 2.259203e-02 >> 0 Linear |R| = 2.259203e-02 >> 1 Linear |R| = 7.821248e-11 >> 1 Nonlinear |R| = 2.258733e-02 >> 0 Linear |R| = 2.258733e-02 >> 1 Linear |R| = 5.277296e-11 >> 2 Nonlinear |R| = 2.258733e-02 >> 0 Linear |R| = 2.258733e-02 >> 1 Linear |R| = 5.993971e-11 >> Nonlinear solve did not converge due to DIVERGED_LINE_SEARCH iterations 2 >> >> Case 2 options: -snes_fd -snes_mf_operator -pc_type lu >> >> 0 Nonlinear |R| = 2.259203e-02 >> 0 Linear |R| = 2.259203e-02 >> 1 Linear |R| = 2.258733e-02 >> 2 Linear |R| = 3.103342e-06 >> 3 Linear |R| = 6.779865e-12 >> 1 Nonlinear |R| = 7.497740e-06 >> 0 Linear |R| = 7.497740e-06 >> 1 Linear |R| = 8.265413e-12 >> 2 Nonlinear |R| = 7.993729e-12 >> Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE iterations 2 >> >> On Tue, Dec 12, 2017 at 9:12 AM, zakaryah . wrote: >> >>> When you say "Jacobians are bad" and "debugging the Jacobians", do you >>> mean that the hand-coded Jacobian is wrong? 
In that case, why would you be >>> surprised that the finite difference Jacobians, which are "correct" to >>> approximation error, perform better? Otherwise, what does "Jacobians are >>> bad" mean - ill-conditioned? Singular? Not symmetric? Not positive >>> definite? >>> >> >> > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Tue Dec 12 11:39:19 2017 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 12 Dec 2017 12:39:19 -0500 Subject: [petsc-users] Matrix-free vs finite differenced Jacobian approximation In-Reply-To: References: Message-ID: On Tue, Dec 12, 2017 at 12:26 PM, Alexander Lindsay < alexlindsay239 at gmail.com> wrote: > Ok, I'm going to go back on my original statement...the physics being run > here is a sub-set of a much larger set of physics; for the current set the > hand-coded Jacobian actually appears to be quite good. > > With hand-coded Jacobian, -pc_type lu, the convergence is perfect: > > 0 Nonlinear |R| = 2.259203e-02 > 0 Linear |R| = 2.259203e-02 > 1 Linear |R| = 1.129089e-10 > 1 Nonlinear |R| = 6.295583e-11 > > So yea I guess at this point I'm just curious about the different behavior > between `-snes_fd` and `-snes_fd -snes_mf_operator`. Does the hand-coded > result change your opinion Matt that the rules for FormFunction/Jacobian > might be being violated? > > I understand that a finite difference approximation of the true Jacobian *is > an approximation. *However, in the absence of possible complications like > Matt suggested where an on-the-fly calculation might stand a better chance > of capturing the behavior, I would expect both `-snes_mf_operator -snes_fd` > and `-snes_fd` to suffer from the same approximations, right? > There are too many things that do not make sense: 1) How could LU be working with -snes_mf_operator? What operator are you using here. The hand-coded Jacobian? 2) How can LU take more than one iterate? 0 Nonlinear |R| = 2.259203e-02 0 Linear |R| = 2.259203e-02 1 Linear |R| = 2.258733e-02 2 Linear |R| = 3.103342e-06 3 Linear |R| = 6.779865e-12 That seems to say that we are not solving a linear system for some reason. 3) Why would -snes_fd be different from a hand-coded Jacobian? Did you try -snes_type test? Matt On Tue, Dec 12, 2017 at 9:43 AM, Matthew Knepley wrote: > >> On Tue, Dec 12, 2017 at 11:30 AM, Alexander Lindsay < >> alexlindsay239 at gmail.com> wrote: >> >>> I'm not using any hand-coded Jacobians. >>> >> >> This looks to me like the rules for FormFunction/Jacobian() are being >> broken. If the residual function >> depends on some third variable, and it changes between calls independent >> of the solution U, then >> the stored Jacobian could look wrong, but one done every time on the fly >> might converge. 
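As an illustration of the FormFunction/Jacobian contract referred to just above, here is a minimal sketch of a residual routine that obeys it. The routine name FormFunction is generic; the AppCtx type, its coeff field and the toy residual are invented for the sketch and are not from the code under discussion. The point is that the output depends only on the incoming vector plus fixed context data, so repeated calls with perturbed x (which is all that -snes_fd and -snes_mf_operator do) see a consistent function.

/* Sketch of a residual routine that obeys the contract: the output f is a
   function of the input x (plus fixed context data) only.  Anything updated
   elsewhere between calls would make the finite-difference Jacobian
   inconsistent with the residual it is supposed to approximate. */
#include <petscsnes.h>

typedef struct {
  PetscReal coeff;                    /* fixed problem data is fine */
} AppCtx;

PetscErrorCode FormFunction(SNES snes, Vec x, Vec f, void *ctx)
{
  AppCtx            *user = (AppCtx*)ctx;
  const PetscScalar *xx;
  PetscScalar       *ff;
  PetscInt           i, n;
  PetscErrorCode     ierr;

  ierr = VecGetLocalSize(x, &n);CHKERRQ(ierr);
  ierr = VecGetArrayRead(x, &xx);CHKERRQ(ierr);
  ierr = VecGetArray(f, &ff);CHKERRQ(ierr);
  for (i = 0; i < n; i++) ff[i] = user->coeff*xx[i]*xx[i] - 1.0;  /* depends on x only */
  ierr = VecRestoreArrayRead(x, &xx);CHKERRQ(ierr);
  ierr = VecRestoreArray(f, &ff);CHKERRQ(ierr);
  return 0;
}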
>> >> Matt >> >> >>> Case 1 options: -snes_fd -pc_type lu >>> >>> 0 Nonlinear |R| = 2.259203e-02 >>> 0 Linear |R| = 2.259203e-02 >>> 1 Linear |R| = 7.821248e-11 >>> 1 Nonlinear |R| = 2.258733e-02 >>> 0 Linear |R| = 2.258733e-02 >>> 1 Linear |R| = 5.277296e-11 >>> 2 Nonlinear |R| = 2.258733e-02 >>> 0 Linear |R| = 2.258733e-02 >>> 1 Linear |R| = 5.993971e-11 >>> Nonlinear solve did not converge due to DIVERGED_LINE_SEARCH iterations 2 >>> >>> Case 2 options: -snes_fd -snes_mf_operator -pc_type lu >>> >>> 0 Nonlinear |R| = 2.259203e-02 >>> 0 Linear |R| = 2.259203e-02 >>> 1 Linear |R| = 2.258733e-02 >>> 2 Linear |R| = 3.103342e-06 >>> 3 Linear |R| = 6.779865e-12 >>> 1 Nonlinear |R| = 7.497740e-06 >>> 0 Linear |R| = 7.497740e-06 >>> 1 Linear |R| = 8.265413e-12 >>> 2 Nonlinear |R| = 7.993729e-12 >>> Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE iterations 2 >>> >>> On Tue, Dec 12, 2017 at 9:12 AM, zakaryah . wrote: >>> >>>> When you say "Jacobians are bad" and "debugging the Jacobians", do you >>>> mean that the hand-coded Jacobian is wrong? In that case, why would you be >>>> surprised that the finite difference Jacobians, which are "correct" to >>>> approximation error, perform better? Otherwise, what does "Jacobians are >>>> bad" mean - ill-conditioned? Singular? Not symmetric? Not positive >>>> definite? >>>> >>> >>> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> >> https://www.cse.buffalo.edu/~knepley/ >> > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From hbcbh1999 at gmail.com Tue Dec 12 12:04:05 2017 From: hbcbh1999 at gmail.com (Hao Zhang) Date: Tue, 12 Dec 2017 13:04:05 -0500 Subject: [petsc-users] HYPRE BOOMERAMG no output Message-ID: hi, before I introduce HYPRE with BOOMERAMG to my CFD code, I will have output with good convergence rate. with BOOMERAMG, the same code will take longer time to run and there's a good chance that no output will be produced whatsoever. what happened? thanks! -- Hao Zhang Dept. of Applid Mathematics and Statistics, Stony Brook University, Stony Brook, New York, 11790 -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Tue Dec 12 13:03:42 2017 From: bsmith at mcs.anl.gov (Smith, Barry F.) Date: Tue, 12 Dec 2017 19:03:42 +0000 Subject: [petsc-users] HYPRE BOOMERAMG no output In-Reply-To: References: Message-ID: > On Dec 12, 2017, at 12:04 PM, Hao Zhang wrote: > > hi, > > before I introduce HYPRE with BOOMERAMG to my CFD code, I will have output with good convergence rate. with BOOMERAMG, the same code will take longer time to run and there's a good chance that no output will be produced whatsoever. Are you running with -ksp_monitor -ksp_view_pre -ksp_converged_reason ? If you are using -ksp_monitor and getting no output it could be that hypre is taking a very long time to form the AMG preconditioner, but I won't expect to see this unless the problem is very very big. 
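For reference, a minimal sketch of a solve wired up so that those options actually take effect. The routine name SolveWithOptions is made up, and the Mat A and Vecs b, x stand in for whatever the application assembles; this is not code from the thread.

/* Sketch: command-line options such as -ksp_monitor, -ksp_converged_reason,
   -ksp_view_pre and -pc_type hypre -pc_hypre_type boomeramg are only picked
   up if KSPSetFromOptions() is called before KSPSolve(). */
#include <petscksp.h>

PetscErrorCode SolveWithOptions(Mat A, Vec b, Vec x)
{
  KSP            ksp;
  PetscErrorCode ierr;

  ierr = KSPCreate(PETSC_COMM_WORLD, &ksp);CHKERRQ(ierr);
  ierr = KSPSetOperators(ksp, A, A);CHKERRQ(ierr);
  ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);   /* reads the options above */
  ierr = KSPSolve(ksp, b, x);CHKERRQ(ierr);
  ierr = KSPDestroy(&ksp);CHKERRQ(ierr);
  return 0;
}

Running such a solve with -pc_type hypre -pc_hypre_type boomeramg -ksp_monitor -ksp_converged_reason then makes either the per-iteration residuals or a long silence during the BoomerAMG setup phase visible.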
You can always run with -start_in_debugger and then type cont in the debugger wait a couple minutes and then use control C in the debugger and type where to determine what it is doing at the time (you can email the output to petsc-maint at mcs.anl.gov). Boomeramg is not good for all classes of matrices, it is fine for elliptic/parabolic generally but if you hand it something that is hyperbolically dominated it will not work well. Barry > > what happened? thanks! > > -- > Hao Zhang > Dept. of Applid Mathematics and Statistics, > Stony Brook University, > Stony Brook, New York, 11790 From bsmith at mcs.anl.gov Tue Dec 12 13:33:33 2017 From: bsmith at mcs.anl.gov (Smith, Barry F.) Date: Tue, 12 Dec 2017 19:33:33 +0000 Subject: [petsc-users] Matrix-free vs finite differenced Jacobian approximation In-Reply-To: References: Message-ID: <5B29F67D-8F24-4D0E-B12F-23CE2F335D0C@mcs.anl.gov> > On Dec 12, 2017, at 11:26 AM, Alexander Lindsay wrote: > > Ok, I'm going to go back on my original statement...the physics being run here is a sub-set of a much larger set of physics; for the current set the hand-coded Jacobian actually appears to be quite good. > > With hand-coded Jacobian, -pc_type lu, the convergence is perfect: > > 0 Nonlinear |R| = 2.259203e-02 > 0 Linear |R| = 2.259203e-02 > 1 Linear |R| = 1.129089e-10 > 1 Nonlinear |R| = 6.295583e-11 > > So yea I guess at this point I'm just curious about the different behavior between `-snes_fd` and `-snes_fd -snes_mf_operator`. Now that you have provided the exact options you are using, yes it is very unexpected behavior. Is there any chance you can send us the code that reproduces this? The code that does the differencing in -snes_fd is similar to the code that does the differencing for -snes_mf_operator so normally one expects similar behavior but there are a couple of options you can try. Run with -snes_mf_operator and -help | grep mat_mffd and this will show options to control the differencing for the matrix free. For -snes_fd you have the option -mat_fd_type wp or ds > Does the hand-coded result change your opinion Matt that the rules for FormFunction/Jacobian might be being violated? > > I understand that a finite difference approximation of the true Jacobian is an approximation. However, in the absence of possible complications like Matt suggested where an on-the-fly calculation might stand a better chance of capturing the behavior, I would expect both `-snes_mf_operator -snes_fd` and `-snes_fd` to suffer from the same approximations, right? > > On Tue, Dec 12, 2017 at 9:43 AM, Matthew Knepley wrote: > On Tue, Dec 12, 2017 at 11:30 AM, Alexander Lindsay wrote: > I'm not using any hand-coded Jacobians. > > This looks to me like the rules for FormFunction/Jacobian() are being broken. If the residual function > depends on some third variable, and it changes between calls independent of the solution U, then > the stored Jacobian could look wrong, but one done every time on the fly might converge. 
> > Matt > > Case 1 options: -snes_fd -pc_type lu > > 0 Nonlinear |R| = 2.259203e-02 > 0 Linear |R| = 2.259203e-02 > 1 Linear |R| = 7.821248e-11 > 1 Nonlinear |R| = 2.258733e-02 > 0 Linear |R| = 2.258733e-02 > 1 Linear |R| = 5.277296e-11 > 2 Nonlinear |R| = 2.258733e-02 > 0 Linear |R| = 2.258733e-02 > 1 Linear |R| = 5.993971e-11 > Nonlinear solve did not converge due to DIVERGED_LINE_SEARCH iterations 2 > > Case 2 options: -snes_fd -snes_mf_operator -pc_type lu > > 0 Nonlinear |R| = 2.259203e-02 > 0 Linear |R| = 2.259203e-02 > 1 Linear |R| = 2.258733e-02 > 2 Linear |R| = 3.103342e-06 > 3 Linear |R| = 6.779865e-12 > 1 Nonlinear |R| = 7.497740e-06 > 0 Linear |R| = 7.497740e-06 > 1 Linear |R| = 8.265413e-12 > 2 Nonlinear |R| = 7.993729e-12 > Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE iterations 2 > > > On Tue, Dec 12, 2017 at 9:12 AM, zakaryah . wrote: > When you say "Jacobians are bad" and "debugging the Jacobians", do you mean that the hand-coded Jacobian is wrong? In that case, why would you be surprised that the finite difference Jacobians, which are "correct" to approximation error, perform better? Otherwise, what does "Jacobians are bad" mean - ill-conditioned? Singular? Not symmetric? Not positive definite? > > > > > -- > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ > From alexlindsay239 at gmail.com Tue Dec 12 13:48:37 2017 From: alexlindsay239 at gmail.com (Alexander Lindsay) Date: Tue, 12 Dec 2017 12:48:37 -0700 Subject: [petsc-users] Matrix-free vs finite differenced Jacobian approximation In-Reply-To: References: Message-ID: On Tue, Dec 12, 2017 at 10:39 AM, Matthew Knepley wrote: > On Tue, Dec 12, 2017 at 12:26 PM, Alexander Lindsay < > alexlindsay239 at gmail.com> wrote: > >> Ok, I'm going to go back on my original statement...the physics being run >> here is a sub-set of a much larger set of physics; for the current set the >> hand-coded Jacobian actually appears to be quite good. >> >> With hand-coded Jacobian, -pc_type lu, the convergence is perfect: >> >> 0 Nonlinear |R| = 2.259203e-02 >> 0 Linear |R| = 2.259203e-02 >> 1 Linear |R| = 1.129089e-10 >> 1 Nonlinear |R| = 6.295583e-11 >> >> So yea I guess at this point I'm just curious about the different >> behavior between `-snes_fd` and `-snes_fd -snes_mf_operator`. Does the >> hand-coded result change your opinion Matt that the rules for >> FormFunction/Jacobian might be being violated? >> >> I understand that a finite difference approximation of the true Jacobian *is >> an approximation. *However, in the absence of possible complications >> like Matt suggested where an on-the-fly calculation might stand a better >> chance of capturing the behavior, I would expect both `-snes_mf_operator >> -snes_fd` and `-snes_fd` to suffer from the same approximations, right? >> > > There are too many things that do not make sense: > > 1) How could LU be working with -snes_mf_operator? > > What operator are you using here. The hand-coded Jacobian? > Why wouldn't LU work with -snes_mf_operator? My understanding is that LU is configured to operate interchangeably with iterative preconditioners. 
Jacobian: (lldb) call MatView(snes->jacobian, 0) Mat Object: 1 MPI processes type: mffd Matrix-free approximation: err=1.49012e-08 (relative error in function evaluation) The compute h routine has not yet been set Preconditioner is computed with with SNESComputeJacobianDefault > 2) How can LU take more than one iterate? > > 0 Nonlinear |R| = 2.259203e-02 > 0 Linear |R| = 2.259203e-02 > 1 Linear |R| = 2.258733e-02 > 2 Linear |R| = 3.103342e-06 > 3 Linear |R| = 6.779865e-12 > > That seems to say that we are not solving a linear system for some reason. > I'm curious about the same thing :-) Perhaps because B^-1 is not a perfect inverse of the action of J on x in the matrix-free case? If I actually run with matrix-free J and *hand-coded* B, I do get one linear iteration: SMP jacobian PJFNK, -pc_type lu: 0 Nonlinear |R| = 2.259203e-02 0 Linear |R| = 2.259203e-02 1 Linear |R| = 2.857149e-11 1 Nonlinear |R| = 1.404604e-11 Solve Converged! > > 3) Why would -snes_fd be different from a hand-coded Jacobian? > > Did you try -snes_type test? > The matrix ratio norm of the difference is on the order of 1e-6, which isn't perfect but here the hand-coded performs better than the finite-differenced Jacobian. Alex > > Matt > > On Tue, Dec 12, 2017 at 9:43 AM, Matthew Knepley >> wrote: >> >>> On Tue, Dec 12, 2017 at 11:30 AM, Alexander Lindsay < >>> alexlindsay239 at gmail.com> wrote: >>> >>>> I'm not using any hand-coded Jacobians. >>>> >>> >>> This looks to me like the rules for FormFunction/Jacobian() are being >>> broken. If the residual function >>> depends on some third variable, and it changes between calls independent >>> of the solution U, then >>> the stored Jacobian could look wrong, but one done every time on the fly >>> might converge. >>> >>> Matt >>> >>> >>>> Case 1 options: -snes_fd -pc_type lu >>>> >>>> 0 Nonlinear |R| = 2.259203e-02 >>>> 0 Linear |R| = 2.259203e-02 >>>> 1 Linear |R| = 7.821248e-11 >>>> 1 Nonlinear |R| = 2.258733e-02 >>>> 0 Linear |R| = 2.258733e-02 >>>> 1 Linear |R| = 5.277296e-11 >>>> 2 Nonlinear |R| = 2.258733e-02 >>>> 0 Linear |R| = 2.258733e-02 >>>> 1 Linear |R| = 5.993971e-11 >>>> Nonlinear solve did not converge due to DIVERGED_LINE_SEARCH iterations >>>> 2 >>>> >>>> Case 2 options: -snes_fd -snes_mf_operator -pc_type lu >>>> >>>> 0 Nonlinear |R| = 2.259203e-02 >>>> 0 Linear |R| = 2.259203e-02 >>>> 1 Linear |R| = 2.258733e-02 >>>> 2 Linear |R| = 3.103342e-06 >>>> 3 Linear |R| = 6.779865e-12 >>>> 1 Nonlinear |R| = 7.497740e-06 >>>> 0 Linear |R| = 7.497740e-06 >>>> 1 Linear |R| = 8.265413e-12 >>>> 2 Nonlinear |R| = 7.993729e-12 >>>> Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE iterations 2 >>>> >>>> On Tue, Dec 12, 2017 at 9:12 AM, zakaryah . wrote: >>>> >>>>> When you say "Jacobians are bad" and "debugging the Jacobians", do you >>>>> mean that the hand-coded Jacobian is wrong? In that case, why would you be >>>>> surprised that the finite difference Jacobians, which are "correct" to >>>>> approximation error, perform better? Otherwise, what does "Jacobians are >>>>> bad" mean - ill-conditioned? Singular? Not symmetric? Not positive >>>>> definite? >>>>> >>>> >>>> >>> >>> >>> -- >>> What most experimenters take for granted before they begin their >>> experiments is infinitely more interesting than any results to which their >>> experiments lead. 
>>> -- Norbert Wiener >>> >>> https://www.cse.buffalo.edu/~knepley/ >>> >> >> > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Tue Dec 12 13:53:13 2017 From: bsmith at mcs.anl.gov (Smith, Barry F.) Date: Tue, 12 Dec 2017 19:53:13 +0000 Subject: [petsc-users] Matrix-free vs finite differenced Jacobian approximation In-Reply-To: References: Message-ID: > On Dec 12, 2017, at 1:48 PM, Alexander Lindsay wrote: > > On Tue, Dec 12, 2017 at 10:39 AM, Matthew Knepley wrote: > On Tue, Dec 12, 2017 at 12:26 PM, Alexander Lindsay wrote: > Ok, I'm going to go back on my original statement...the physics being run here is a sub-set of a much larger set of physics; for the current set the hand-coded Jacobian actually appears to be quite good. > > With hand-coded Jacobian, -pc_type lu, the convergence is perfect: > > 0 Nonlinear |R| = 2.259203e-02 > 0 Linear |R| = 2.259203e-02 > 1 Linear |R| = 1.129089e-10 > 1 Nonlinear |R| = 6.295583e-11 > > So yea I guess at this point I'm just curious about the different behavior between `-snes_fd` and `-snes_fd -snes_mf_operator`. Does the hand-coded result change your opinion Matt that the rules for FormFunction/Jacobian might be being violated? > > I understand that a finite difference approximation of the true Jacobian is an approximation. However, in the absence of possible complications like Matt suggested where an on-the-fly calculation might stand a better chance of capturing the behavior, I would expect both `-snes_mf_operator -snes_fd` and `-snes_fd` to suffer from the same approximations, right? > > There are too many things that do not make sense: > > 1) How could LU be working with -snes_mf_operator? > > What operator are you using here. The hand-coded Jacobian? > > Why wouldn't LU work with -snes_mf_operator? My understanding is that LU is configured to operate interchangeably with iterative preconditioners. > Matt didn't see that in addition to the -snes_mf_operator you also had the -snes_fd. Thus it is doing the factorization on the fd matrix and using it as a preconditioner for the mf matrix. No mystery. > Jacobian: > > (lldb) call MatView(snes->jacobian, 0) > Mat Object: 1 MPI processes > type: mffd > Matrix-free approximation: > err=1.49012e-08 (relative error in function evaluation) > The compute h routine has not yet been set > > Preconditioner is computed with with SNESComputeJacobianDefault > > > 2) How can LU take more than one iterate? > > 0 Nonlinear |R| = 2.259203e-02 > 0 Linear |R| = 2.259203e-02 > 1 Linear |R| = 2.258733e-02 > 2 Linear |R| = 3.103342e-06 > 3 Linear |R| = 6.779865e-12 > > That seems to say that we are not solving a linear system for some reason. > > I'm curious about the same thing :-) Perhaps because B^-1 is not a perfect inverse of the action of J on x in the matrix-free case? Yes, this is completely normal. The matrix-free application is not identical to the application of the fd matrix hence solving LU with the fd matrix will not make a perfect preconditioner for the matrix free operator. You will always see this behavior in this situation. Please read my email, the issue you report is still there (Matt got sidetracked). 
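(Sketched in code, what that option pair amounts to -- roughly, not the literal PETSc internals; `snes` and `dm` here stand for whatever the application already has:)

    Mat J, B;
    /* J is applied matrix-free by differencing F; B is assembled by brute-force
       differencing and is what -pc_type lu actually factors.  Because LU(B) only
       approximates the action of J, the Krylov solve on J still takes a few iterations. */
    ierr = MatCreateSNESMF(snes, &J);CHKERRQ(ierr);                       /* J y ~= (F(u+h*y)-F(u))/h */
    ierr = DMCreateMatrix(dm, &B);CHKERRQ(ierr);                          /* or any assembled Mat     */
    ierr = SNESSetJacobian(snes, J, B, SNESComputeJacobianDefault, NULL);CHKERRQ(ierr);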
Barry > If I actually run with matrix-free J and hand-coded B, I do get one linear iteration: > > SMP jacobian PJFNK, -pc_type lu: > 0 Nonlinear |R| = 2.259203e-02 > 0 Linear |R| = 2.259203e-02 > 1 Linear |R| = 2.857149e-11 > 1 Nonlinear |R| = 1.404604e-11 > Solve Converged! > > > 3) Why would -snes_fd be different from a hand-coded Jacobian? > > Did you try -snes_type test? > > The matrix ratio norm of the difference is on the order of 1e-6, which isn't perfect but here the hand-coded performs better than the finite-differenced Jacobian. > > Alex > > Matt > > On Tue, Dec 12, 2017 at 9:43 AM, Matthew Knepley wrote: > On Tue, Dec 12, 2017 at 11:30 AM, Alexander Lindsay wrote: > I'm not using any hand-coded Jacobians. > > This looks to me like the rules for FormFunction/Jacobian() are being broken. If the residual function > depends on some third variable, and it changes between calls independent of the solution U, then > the stored Jacobian could look wrong, but one done every time on the fly might converge. > > Matt > > Case 1 options: -snes_fd -pc_type lu > > 0 Nonlinear |R| = 2.259203e-02 > 0 Linear |R| = 2.259203e-02 > 1 Linear |R| = 7.821248e-11 > 1 Nonlinear |R| = 2.258733e-02 > 0 Linear |R| = 2.258733e-02 > 1 Linear |R| = 5.277296e-11 > 2 Nonlinear |R| = 2.258733e-02 > 0 Linear |R| = 2.258733e-02 > 1 Linear |R| = 5.993971e-11 > Nonlinear solve did not converge due to DIVERGED_LINE_SEARCH iterations 2 > > Case 2 options: -snes_fd -snes_mf_operator -pc_type lu > > 0 Nonlinear |R| = 2.259203e-02 > 0 Linear |R| = 2.259203e-02 > 1 Linear |R| = 2.258733e-02 > 2 Linear |R| = 3.103342e-06 > 3 Linear |R| = 6.779865e-12 > 1 Nonlinear |R| = 7.497740e-06 > 0 Linear |R| = 7.497740e-06 > 1 Linear |R| = 8.265413e-12 > 2 Nonlinear |R| = 7.993729e-12 > Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE iterations 2 > > > On Tue, Dec 12, 2017 at 9:12 AM, zakaryah . wrote: > When you say "Jacobians are bad" and "debugging the Jacobians", do you mean that the hand-coded Jacobian is wrong? In that case, why would you be surprised that the finite difference Jacobians, which are "correct" to approximation error, perform better? Otherwise, what does "Jacobians are bad" mean - ill-conditioned? Singular? Not symmetric? Not positive definite? > > > > > -- > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ > > > > > -- > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ > From alexlindsay239 at gmail.com Tue Dec 12 14:19:12 2017 From: alexlindsay239 at gmail.com (Alexander Lindsay) Date: Tue, 12 Dec 2017 13:19:12 -0700 Subject: [petsc-users] Matrix-free vs finite differenced Jacobian approximation In-Reply-To: <5B29F67D-8F24-4D0E-B12F-23CE2F335D0C@mcs.anl.gov> References: <5B29F67D-8F24-4D0E-B12F-23CE2F335D0C@mcs.anl.gov> Message-ID: I'm helping debug the finite strain models in the TensorMechanics module in MOOSE, so unfortunately I don't have a nice small PetSc code I can hand you guys :-( Hmm, interesting, if I run with `-snes_mf_operator -snes_fd -mat_mffd_type ds`, I get DIVERGED_BREAKDOWN during the initial linear solve. If I run with `-snes_fd -mat_fd_type ds`, then the solve converges. 
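(The knobs being switched in these runs, for reference:)

    -mat_mffd_type wp|ds    differencing-parameter choice for the matrix-free operator J
    -mat_mffd_err <err>     relative error assumed in the function evaluation
    -mat_fd_type wp|ds      same choice for the assembled -snes_fd matrix B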
So summary: - J = B = finite-differenced, differencing type = wp : Solve fails due to DIVERGED_LINE_SEARCH - J = B = finite-differenced, differencing type = ds : Solve converges in 3 non-linear iterations 0 Nonlinear |R| = 2.259203e-02 0 Linear |R| = 2.259203e-02 1 Linear |R| = 6.084393e-11 1 Nonlinear |R| = 4.780691e-03 0 Linear |R| = 4.780691e-03 1 Linear |R| = 8.580132e-19 2 Nonlinear |R| = 4.806625e-09 0 Linear |R| = 4.806625e-09 1 Linear |R| = 1.650725e-24 3 Nonlinear |R| = 9.603678e-12 - J = matrix-free, B = finite-differenced, mat_mffd_type = mat_fd_type = wp: Solve converges in 2 non-linear iterations 0 Nonlinear |R| = 2.259203e-02 0 Linear |R| = 2.259203e-02 1 Linear |R| = 2.258733e-02 2 Linear |R| = 3.103342e-06 3 Linear |R| = 6.779865e-12 1 Nonlinear |R| = 7.497740e-06 0 Linear |R| = 7.497740e-06 1 Linear |R| = 8.265413e-12 2 Nonlinear |R| = 7.993729e-12 - J = matrix-free, B = finite-differenced, mat_mffd_type = ds, mat_fd_type = wp: DIVERGED_BREAKDOWN in linear solve - J = matrix-free, B = finite-differenced, mat_mffd_type = wp, mat_fd_type = ds: Solve converges in 2 non-linear iterations 0 Nonlinear |R| = 2.259203e-02 0 Linear |R| = 2.259203e-02 1 Linear |R| = 4.635397e-03 2 Linear |R| = 5.413676e-11 1 Nonlinear |R| = 1.068626e-05 0 Linear |R| = 1.068626e-05 1 Linear |R| = 7.942385e-12 2 Nonlinear |R| = 5.444448e-11 - J = matrix-free, B = finite-differenced, mat_mffd_type = mat_fd_type = ds: Solves converges in 3 non-linear iterations: 0 Nonlinear |R| = 2.259203e-02 0 Linear |R| = 2.259203e-02 1 Linear |R| = 1.312921e-06 2 Linear |R| = 7.714018e-09 1 Nonlinear |R| = 4.780690e-03 0 Linear |R| = 4.780690e-03 1 Linear |R| = 7.773053e-09 2 Nonlinear |R| = 1.226836e-08 0 Linear |R| = 1.226836e-08 1 Linear |R| = 1.546288e-14 3 Nonlinear |R| = 1.295982e-10 On Tue, Dec 12, 2017 at 12:33 PM, Smith, Barry F. wrote: > > > > On Dec 12, 2017, at 11:26 AM, Alexander Lindsay < > alexlindsay239 at gmail.com> wrote: > > > > Ok, I'm going to go back on my original statement...the physics being > run here is a sub-set of a much larger set of physics; for the current set > the hand-coded Jacobian actually appears to be quite good. > > > > With hand-coded Jacobian, -pc_type lu, the convergence is perfect: > > > > 0 Nonlinear |R| = 2.259203e-02 > > 0 Linear |R| = 2.259203e-02 > > 1 Linear |R| = 1.129089e-10 > > 1 Nonlinear |R| = 6.295583e-11 > > > > So yea I guess at this point I'm just curious about the different > behavior between `-snes_fd` and `-snes_fd -snes_mf_operator`. > > Now that you have provided the exact options you are using, yes it is > very unexpected behavior. Is there any chance you can send us the code that > reproduces this? > > The code that does the differencing in -snes_fd is similar to the code > that does the differencing for -snes_mf_operator so normally one expects > similar behavior but there are a couple of options you can try. Run with > -snes_mf_operator and -help | grep mat_mffd and this will show options to > control the differencing for the matrix free. For -snes_fd you have the > option -mat_fd_type wp or ds > > > > Does the hand-coded result change your opinion Matt that the rules for > FormFunction/Jacobian might be being violated? > > > > I understand that a finite difference approximation of the true Jacobian > is an approximation. 
However, in the absence of possible complications like > Matt suggested where an on-the-fly calculation might stand a better chance > of capturing the behavior, I would expect both `-snes_mf_operator -snes_fd` > and `-snes_fd` to suffer from the same approximations, right? > > > > On Tue, Dec 12, 2017 at 9:43 AM, Matthew Knepley > wrote: > > On Tue, Dec 12, 2017 at 11:30 AM, Alexander Lindsay < > alexlindsay239 at gmail.com> wrote: > > I'm not using any hand-coded Jacobians. > > > > This looks to me like the rules for FormFunction/Jacobian() are being > broken. If the residual function > > depends on some third variable, and it changes between calls independent > of the solution U, then > > the stored Jacobian could look wrong, but one done every time on the fly > might converge. > > > > Matt > > > > Case 1 options: -snes_fd -pc_type lu > > > > 0 Nonlinear |R| = 2.259203e-02 > > 0 Linear |R| = 2.259203e-02 > > 1 Linear |R| = 7.821248e-11 > > 1 Nonlinear |R| = 2.258733e-02 > > 0 Linear |R| = 2.258733e-02 > > 1 Linear |R| = 5.277296e-11 > > 2 Nonlinear |R| = 2.258733e-02 > > 0 Linear |R| = 2.258733e-02 > > 1 Linear |R| = 5.993971e-11 > > Nonlinear solve did not converge due to DIVERGED_LINE_SEARCH iterations 2 > > > > Case 2 options: -snes_fd -snes_mf_operator -pc_type lu > > > > 0 Nonlinear |R| = 2.259203e-02 > > 0 Linear |R| = 2.259203e-02 > > 1 Linear |R| = 2.258733e-02 > > 2 Linear |R| = 3.103342e-06 > > 3 Linear |R| = 6.779865e-12 > > 1 Nonlinear |R| = 7.497740e-06 > > 0 Linear |R| = 7.497740e-06 > > 1 Linear |R| = 8.265413e-12 > > 2 Nonlinear |R| = 7.993729e-12 > > Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE iterations 2 > > > > > > On Tue, Dec 12, 2017 at 9:12 AM, zakaryah . wrote: > > When you say "Jacobians are bad" and "debugging the Jacobians", do you > mean that the hand-coded Jacobian is wrong? In that case, why would you be > surprised that the finite difference Jacobians, which are "correct" to > approximation error, perform better? Otherwise, what does "Jacobians are > bad" mean - ill-conditioned? Singular? Not symmetric? Not positive > definite? > > > > > > > > > > -- > > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > > -- Norbert Wiener > > > > https://www.cse.buffalo.edu/~knepley/ > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From j.wagner at rice.edu Tue Dec 12 14:44:32 2017 From: j.wagner at rice.edu (Jordan Wagner) Date: Tue, 12 Dec 2017 14:44:32 -0600 Subject: [petsc-users] Function to convert a dense matrix holding the cell connectivity to a MPIADJ for use with MatMeshToCellGraph Message-ID: Hi, I am trying to use the function MatMeshToCellGraph. I currently have a matrix that holds the cell connectivity of simplex elements. So it is a numCells x 3 matrix where the row corresponds to the cell number and the column is a vertex of that cell. To use this function, it appears I need to get the corresponding adjacency matrix. I found the function MatConvert, which I was hoping could be the function I am looking for, but I keep getting a memory error when using it, which I have added at the bottom. Is this the correct function to use to convert my cell connectivity matrix, or do I need to loop through to get the proper offsets (i,j) needed to create the adjacency matrix with MatCreateMPIAdj, as is done in ex11.c? Thanks very much for any tips. 
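(For reference, the ex11.c-style route for a numCells x 3 simplex connectivity boils down to roughly the following -- a minimal sketch with two hard-coded triangles, requiring a ParMETIS-enabled build; all names here are illustrative:)

    #include <petscmat.h>

    int main(int argc, char **argv)
    {
      /* two triangles sharing the edge (1,2): enough to exercise the call */
      PetscInt       cells[2][3] = {{0,1,2},{1,2,3}};
      PetscInt       numCells = 2, numVertices = 4, *ia, *ja, c, v;
      Mat            Adj, Dual;
      PetscErrorCode ierr;

      ierr = PetscInitialize(&argc, &argv, NULL, NULL);if (ierr) return ierr;
      ierr = PetscMalloc1(numCells + 1, &ia);CHKERRQ(ierr);
      ierr = PetscMalloc1(3*numCells, &ja);CHKERRQ(ierr);
      for (c = 0; c <= numCells; c++) ia[c] = 3*c;          /* row c holds the 3 vertices of cell c */
      for (c = 0; c < numCells; c++)
        for (v = 0; v < 3; v++) ja[3*c+v] = cells[c][v];
      /* MatCreateMPIAdj takes ownership of ia/ja; do not free them yourself */
      ierr = MatCreateMPIAdj(PETSC_COMM_WORLD, numCells, numVertices, ia, ja, NULL, &Adj);CHKERRQ(ierr);
      ierr = MatMeshToCellGraph(Adj, 2, &Dual);CHKERRQ(ierr);   /* 2 shared vertices = shared face in 2D */
      ierr = MatView(Dual, PETSC_VIEWER_STDOUT_WORLD);CHKERRQ(ierr);
      ierr = MatDestroy(&Dual);CHKERRQ(ierr);
      ierr = MatDestroy(&Adj);CHKERRQ(ierr);
      ierr = PetscFinalize();
      return ierr;
    }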
[0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range [0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger [0]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind [0]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors [0]PETSC ERROR: PetscMallocValidate: error detected at PetscSignalHandlerDefault() line 145 in /home/jordan/petsc/src/sys/error/signal.c [0]PETSC ERROR: Memory [id=0(16)] at address 0x1b4cb80 is corrupted (probably write past end of array) [0]PETSC ERROR: Memory originally allocated in MatConvertFrom_MPIAdj() line 444 in /home/jordan/petsc/src/mat/impls/adj/mpi/mpiadj.c [0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- [0]PETSC ERROR: Memory corruption: http://www.mcs.anl.gov/petsc/documentation/installation.html#valgrind [0]PETSC ERROR: [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. [0]PETSC ERROR: Petsc Release Version 3.8.2, unknown [0]PETSC ERROR: ./preprocess.exe on a arch-linux2-c-debug named jordan-nest by jordan Tue Dec 12 14:40:02 2017 [0]PETSC ERROR: Configure options --with-shared-libraries=1 --download-metis --download-parmetis [0]PETSC ERROR: #1 PetscMallocValidate() line 146 in /home/jordan/petsc/src/sys/memory/mtr.c [0]PETSC ERROR: #2 PetscSignalHandlerDefault() line 145 in /home/jordan/petsc/src/sys/error/signal.c From knepley at gmail.com Tue Dec 12 15:49:43 2017 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 12 Dec 2017 16:49:43 -0500 Subject: [petsc-users] Matrix-free vs finite differenced Jacobian approximation In-Reply-To: References: <5B29F67D-8F24-4D0E-B12F-23CE2F335D0C@mcs.anl.gov> Message-ID: On Tue, Dec 12, 2017 at 3:19 PM, Alexander Lindsay wrote: > I'm helping debug the finite strain models in the TensorMechanics module > in MOOSE, so unfortunately I don't have a nice small PetSc code I can hand > you guys :-( > > Hmm, interesting, if I run with `-snes_mf_operator -snes_fd -mat_mffd_type > ds`, I get DIVERGED_BREAKDOWN during the initial linear solve. > So the MF operator always converges. The FD operator does not always converge, and factorization also can fail (DIVERGED_BREAKDOWN) so it seems that the FD operator is incorrect. Usually we have bugs with coloring, but I do not think coloring is used by -snes_fd. What happens if you get the coloring version by just deleting the FormJacobian pointer? Thanks, Matt > If I run with `-snes_fd -mat_fd_type ds`, then the solve converges. 
> > So summary: > > - J = B = finite-differenced, differencing type = wp : Solve fails due to > DIVERGED_LINE_SEARCH > > - J = B = finite-differenced, differencing type = ds : Solve converges in > 3 non-linear iterations > 0 Nonlinear |R| = 2.259203e-02 > 0 Linear |R| = 2.259203e-02 > 1 Linear |R| = 6.084393e-11 > 1 Nonlinear |R| = 4.780691e-03 > 0 Linear |R| = 4.780691e-03 > 1 Linear |R| = 8.580132e-19 > 2 Nonlinear |R| = 4.806625e-09 > 0 Linear |R| = 4.806625e-09 > 1 Linear |R| = 1.650725e-24 > 3 Nonlinear |R| = 9.603678e-12 > > - J = matrix-free, B = finite-differenced, mat_mffd_type = mat_fd_type = > wp: Solve converges in 2 non-linear iterations > 0 Nonlinear |R| = 2.259203e-02 > 0 Linear |R| = 2.259203e-02 > 1 Linear |R| = 2.258733e-02 > 2 Linear |R| = 3.103342e-06 > 3 Linear |R| = 6.779865e-12 > 1 Nonlinear |R| = 7.497740e-06 > 0 Linear |R| = 7.497740e-06 > 1 Linear |R| = 8.265413e-12 > 2 Nonlinear |R| = 7.993729e-12 > > - J = matrix-free, B = finite-differenced, mat_mffd_type = ds, mat_fd_type > = wp: DIVERGED_BREAKDOWN in linear solve > > - J = matrix-free, B = finite-differenced, mat_mffd_type = wp, mat_fd_type > = ds: Solve converges in 2 non-linear iterations > 0 Nonlinear |R| = 2.259203e-02 > 0 Linear |R| = 2.259203e-02 > 1 Linear |R| = 4.635397e-03 > 2 Linear |R| = 5.413676e-11 > 1 Nonlinear |R| = 1.068626e-05 > 0 Linear |R| = 1.068626e-05 > 1 Linear |R| = 7.942385e-12 > 2 Nonlinear |R| = 5.444448e-11 > > - J = matrix-free, B = finite-differenced, mat_mffd_type = mat_fd_type = > ds: Solves converges in 3 non-linear iterations: > 0 Nonlinear |R| = 2.259203e-02 > 0 Linear |R| = 2.259203e-02 > 1 Linear |R| = 1.312921e-06 > 2 Linear |R| = 7.714018e-09 > 1 Nonlinear |R| = 4.780690e-03 > 0 Linear |R| = 4.780690e-03 > 1 Linear |R| = 7.773053e-09 > 2 Nonlinear |R| = 1.226836e-08 > 0 Linear |R| = 1.226836e-08 > 1 Linear |R| = 1.546288e-14 > 3 Nonlinear |R| = 1.295982e-10 > > > > > On Tue, Dec 12, 2017 at 12:33 PM, Smith, Barry F. > wrote: > >> >> >> > On Dec 12, 2017, at 11:26 AM, Alexander Lindsay < >> alexlindsay239 at gmail.com> wrote: >> > >> > Ok, I'm going to go back on my original statement...the physics being >> run here is a sub-set of a much larger set of physics; for the current set >> the hand-coded Jacobian actually appears to be quite good. >> > >> > With hand-coded Jacobian, -pc_type lu, the convergence is perfect: >> > >> > 0 Nonlinear |R| = 2.259203e-02 >> > 0 Linear |R| = 2.259203e-02 >> > 1 Linear |R| = 1.129089e-10 >> > 1 Nonlinear |R| = 6.295583e-11 >> > >> > So yea I guess at this point I'm just curious about the different >> behavior between `-snes_fd` and `-snes_fd -snes_mf_operator`. >> >> Now that you have provided the exact options you are using, yes it is >> very unexpected behavior. Is there any chance you can send us the code that >> reproduces this? >> >> The code that does the differencing in -snes_fd is similar to the code >> that does the differencing for -snes_mf_operator so normally one expects >> similar behavior but there are a couple of options you can try. Run with >> -snes_mf_operator and -help | grep mat_mffd and this will show options to >> control the differencing for the matrix free. For -snes_fd you have the >> option -mat_fd_type wp or ds >> >> >> > Does the hand-coded result change your opinion Matt that the rules for >> FormFunction/Jacobian might be being violated? >> > >> > I understand that a finite difference approximation of the true >> Jacobian is an approximation. 
However, in the absence of possible >> complications like Matt suggested where an on-the-fly calculation might >> stand a better chance of capturing the behavior, I would expect both >> `-snes_mf_operator -snes_fd` and `-snes_fd` to suffer from the same >> approximations, right? >> > >> > On Tue, Dec 12, 2017 at 9:43 AM, Matthew Knepley >> wrote: >> > On Tue, Dec 12, 2017 at 11:30 AM, Alexander Lindsay < >> alexlindsay239 at gmail.com> wrote: >> > I'm not using any hand-coded Jacobians. >> > >> > This looks to me like the rules for FormFunction/Jacobian() are being >> broken. If the residual function >> > depends on some third variable, and it changes between calls >> independent of the solution U, then >> > the stored Jacobian could look wrong, but one done every time on the >> fly might converge. >> > >> > Matt >> > >> > Case 1 options: -snes_fd -pc_type lu >> > >> > 0 Nonlinear |R| = 2.259203e-02 >> > 0 Linear |R| = 2.259203e-02 >> > 1 Linear |R| = 7.821248e-11 >> > 1 Nonlinear |R| = 2.258733e-02 >> > 0 Linear |R| = 2.258733e-02 >> > 1 Linear |R| = 5.277296e-11 >> > 2 Nonlinear |R| = 2.258733e-02 >> > 0 Linear |R| = 2.258733e-02 >> > 1 Linear |R| = 5.993971e-11 >> > Nonlinear solve did not converge due to DIVERGED_LINE_SEARCH iterations >> 2 >> > >> > Case 2 options: -snes_fd -snes_mf_operator -pc_type lu >> > >> > 0 Nonlinear |R| = 2.259203e-02 >> > 0 Linear |R| = 2.259203e-02 >> > 1 Linear |R| = 2.258733e-02 >> > 2 Linear |R| = 3.103342e-06 >> > 3 Linear |R| = 6.779865e-12 >> > 1 Nonlinear |R| = 7.497740e-06 >> > 0 Linear |R| = 7.497740e-06 >> > 1 Linear |R| = 8.265413e-12 >> > 2 Nonlinear |R| = 7.993729e-12 >> > Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE iterations 2 >> > >> > >> > On Tue, Dec 12, 2017 at 9:12 AM, zakaryah . wrote: >> > When you say "Jacobians are bad" and "debugging the Jacobians", do you >> mean that the hand-coded Jacobian is wrong? In that case, why would you be >> surprised that the finite difference Jacobians, which are "correct" to >> approximation error, perform better? Otherwise, what does "Jacobians are >> bad" mean - ill-conditioned? Singular? Not symmetric? Not positive >> definite? >> > >> > >> > >> > >> > -- >> > What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> > -- Norbert Wiener >> > >> > https://www.cse.buffalo.edu/~knepley/ >> > >> >> > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Tue Dec 12 15:54:01 2017 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 12 Dec 2017 16:54:01 -0500 Subject: [petsc-users] Function to convert a dense matrix holding the cell connectivity to a MPIADJ for use with MatMeshToCellGraph In-Reply-To: References: Message-ID: Barry wrote this, so he probably knows how to fix it. Another option is to use DMPlex for your mesh. It will give you the dual if you want. Thanks, Matt On Tue, Dec 12, 2017 at 3:44 PM, Jordan Wagner wrote: > Hi, > > I am trying to use the function MatMeshToCellGraph. I currently have a > matrix that holds the cell connectivity of simplex elements. So it is a > numCells x 3 matrix where the row corresponds to the cell number and the > column is a vertex of that cell. 
To use this function, it appears I need to > get the corresponding adjacency matrix. > > I found the function MatConvert, which I was hoping could be the function > I am looking for, but I keep getting a memory error when using it, which I > have added at the bottom. Is this the correct function to use to convert my > cell connectivity matrix, or do I need to loop through to get the proper > offsets (i,j) needed to create the adjacency matrix with MatCreateMPIAdj, > as is done in ex11.c? > > Thanks very much for any tips. > > > [0]PETSC ERROR: ------------------------------ > ------------------------------------------ > [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, > probably memory access out of range > [0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger > [0]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/d > ocumentation/faq.html#valgrind > [0]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS > X to find memory corruption errors > [0]PETSC ERROR: PetscMallocValidate: error detected at > PetscSignalHandlerDefault() line 145 in /home/jordan/petsc/src/sys/err > or/signal.c > [0]PETSC ERROR: Memory [id=0(16)] at address 0x1b4cb80 is corrupted > (probably write past end of array) > [0]PETSC ERROR: Memory originally allocated in MatConvertFrom_MPIAdj() > line 444 in /home/jordan/petsc/src/mat/impls/adj/mpi/mpiadj.c > [0]PETSC ERROR: --------------------- Error Message > -------------------------------------------------------------- > [0]PETSC ERROR: Memory corruption: http://www.mcs.anl.gov/petsc/d > ocumentation/installation.html#valgrind > [0]PETSC ERROR: > [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html > for trouble shooting. > [0]PETSC ERROR: Petsc Release Version 3.8.2, unknown > [0]PETSC ERROR: ./preprocess.exe on a arch-linux2-c-debug named > jordan-nest by jordan Tue Dec 12 14:40:02 2017 > [0]PETSC ERROR: Configure options --with-shared-libraries=1 > --download-metis --download-parmetis > [0]PETSC ERROR: #1 PetscMallocValidate() line 146 in > /home/jordan/petsc/src/sys/memory/mtr.c > [0]PETSC ERROR: #2 PetscSignalHandlerDefault() line 145 in > /home/jordan/petsc/src/sys/error/signal.c > > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From alexlindsay239 at gmail.com Tue Dec 12 15:56:19 2017 From: alexlindsay239 at gmail.com (Alexander Lindsay) Date: Tue, 12 Dec 2017 14:56:19 -0700 Subject: [petsc-users] Matrix-free vs finite differenced Jacobian approximation In-Reply-To: References: <5B29F67D-8F24-4D0E-B12F-23CE2F335D0C@mcs.anl.gov> Message-ID: So I decided to look at the condition number of our matrix, running with `-pc_type svd -pc_svd_monitor` and it was atrocious, roughly on the order of 1e9. After doing some scaling we are down to a condition number of 1e3, and both MF and FD operators now converge, regardless of the differencing types chosen. I would say the problem was definitely on our end! 
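(For anyone hitting something similar, the conditioning check used here works on small problems, and a running estimate is available on any solve:)

    -pc_type svd -pc_svd_monitor      exact singular values via a dense SVD (small problems only)
    -ksp_monitor_singular_value       cheap estimate of the extreme singular values during any Krylov solve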
On Tue, Dec 12, 2017 at 2:49 PM, Matthew Knepley wrote: > On Tue, Dec 12, 2017 at 3:19 PM, Alexander Lindsay < > alexlindsay239 at gmail.com> wrote: > >> I'm helping debug the finite strain models in the TensorMechanics module >> in MOOSE, so unfortunately I don't have a nice small PetSc code I can hand >> you guys :-( >> >> Hmm, interesting, if I run with `-snes_mf_operator -snes_fd >> -mat_mffd_type ds`, I get DIVERGED_BREAKDOWN during the initial linear >> solve. >> > > So the MF operator always converges. The FD operator does not always > converge, and factorization also can fail (DIVERGED_BREAKDOWN) > so it seems that the FD operator is incorrect. Usually we have bugs with > coloring, but I do not think coloring is used by -snes_fd. What happens > if you get the coloring version by just deleting the FormJacobian pointer? > > Thanks, > > Matt > > >> If I run with `-snes_fd -mat_fd_type ds`, then the solve converges. >> >> So summary: >> >> - J = B = finite-differenced, differencing type = wp : Solve fails due to >> DIVERGED_LINE_SEARCH >> >> - J = B = finite-differenced, differencing type = ds : Solve converges in >> 3 non-linear iterations >> 0 Nonlinear |R| = 2.259203e-02 >> 0 Linear |R| = 2.259203e-02 >> 1 Linear |R| = 6.084393e-11 >> 1 Nonlinear |R| = 4.780691e-03 >> 0 Linear |R| = 4.780691e-03 >> 1 Linear |R| = 8.580132e-19 >> 2 Nonlinear |R| = 4.806625e-09 >> 0 Linear |R| = 4.806625e-09 >> 1 Linear |R| = 1.650725e-24 >> 3 Nonlinear |R| = 9.603678e-12 >> >> - J = matrix-free, B = finite-differenced, mat_mffd_type = mat_fd_type = >> wp: Solve converges in 2 non-linear iterations >> 0 Nonlinear |R| = 2.259203e-02 >> 0 Linear |R| = 2.259203e-02 >> 1 Linear |R| = 2.258733e-02 >> 2 Linear |R| = 3.103342e-06 >> 3 Linear |R| = 6.779865e-12 >> 1 Nonlinear |R| = 7.497740e-06 >> 0 Linear |R| = 7.497740e-06 >> 1 Linear |R| = 8.265413e-12 >> 2 Nonlinear |R| = 7.993729e-12 >> >> - J = matrix-free, B = finite-differenced, mat_mffd_type = ds, >> mat_fd_type = wp: DIVERGED_BREAKDOWN in linear solve >> >> - J = matrix-free, B = finite-differenced, mat_mffd_type = wp, >> mat_fd_type = ds: Solve converges in 2 non-linear iterations >> 0 Nonlinear |R| = 2.259203e-02 >> 0 Linear |R| = 2.259203e-02 >> 1 Linear |R| = 4.635397e-03 >> 2 Linear |R| = 5.413676e-11 >> 1 Nonlinear |R| = 1.068626e-05 >> 0 Linear |R| = 1.068626e-05 >> 1 Linear |R| = 7.942385e-12 >> 2 Nonlinear |R| = 5.444448e-11 >> >> - J = matrix-free, B = finite-differenced, mat_mffd_type = mat_fd_type = >> ds: Solves converges in 3 non-linear iterations: >> 0 Nonlinear |R| = 2.259203e-02 >> 0 Linear |R| = 2.259203e-02 >> 1 Linear |R| = 1.312921e-06 >> 2 Linear |R| = 7.714018e-09 >> 1 Nonlinear |R| = 4.780690e-03 >> 0 Linear |R| = 4.780690e-03 >> 1 Linear |R| = 7.773053e-09 >> 2 Nonlinear |R| = 1.226836e-08 >> 0 Linear |R| = 1.226836e-08 >> 1 Linear |R| = 1.546288e-14 >> 3 Nonlinear |R| = 1.295982e-10 >> >> >> >> >> On Tue, Dec 12, 2017 at 12:33 PM, Smith, Barry F. >> wrote: >> >>> >>> >>> > On Dec 12, 2017, at 11:26 AM, Alexander Lindsay < >>> alexlindsay239 at gmail.com> wrote: >>> > >>> > Ok, I'm going to go back on my original statement...the physics being >>> run here is a sub-set of a much larger set of physics; for the current set >>> the hand-coded Jacobian actually appears to be quite good. 
>>> > >>> > With hand-coded Jacobian, -pc_type lu, the convergence is perfect: >>> > >>> > 0 Nonlinear |R| = 2.259203e-02 >>> > 0 Linear |R| = 2.259203e-02 >>> > 1 Linear |R| = 1.129089e-10 >>> > 1 Nonlinear |R| = 6.295583e-11 >>> > >>> > So yea I guess at this point I'm just curious about the different >>> behavior between `-snes_fd` and `-snes_fd -snes_mf_operator`. >>> >>> Now that you have provided the exact options you are using, yes it is >>> very unexpected behavior. Is there any chance you can send us the code that >>> reproduces this? >>> >>> The code that does the differencing in -snes_fd is similar to the >>> code that does the differencing for -snes_mf_operator so normally one >>> expects similar behavior but there are a couple of options you can try. Run >>> with -snes_mf_operator and -help | grep mat_mffd and this will show >>> options to control the differencing for the matrix free. For -snes_fd you >>> have the option -mat_fd_type wp or ds >>> >>> >>> > Does the hand-coded result change your opinion Matt that the rules for >>> FormFunction/Jacobian might be being violated? >>> > >>> > I understand that a finite difference approximation of the true >>> Jacobian is an approximation. However, in the absence of possible >>> complications like Matt suggested where an on-the-fly calculation might >>> stand a better chance of capturing the behavior, I would expect both >>> `-snes_mf_operator -snes_fd` and `-snes_fd` to suffer from the same >>> approximations, right? >>> > >>> > On Tue, Dec 12, 2017 at 9:43 AM, Matthew Knepley >>> wrote: >>> > On Tue, Dec 12, 2017 at 11:30 AM, Alexander Lindsay < >>> alexlindsay239 at gmail.com> wrote: >>> > I'm not using any hand-coded Jacobians. >>> > >>> > This looks to me like the rules for FormFunction/Jacobian() are being >>> broken. If the residual function >>> > depends on some third variable, and it changes between calls >>> independent of the solution U, then >>> > the stored Jacobian could look wrong, but one done every time on the >>> fly might converge. >>> > >>> > Matt >>> > >>> > Case 1 options: -snes_fd -pc_type lu >>> > >>> > 0 Nonlinear |R| = 2.259203e-02 >>> > 0 Linear |R| = 2.259203e-02 >>> > 1 Linear |R| = 7.821248e-11 >>> > 1 Nonlinear |R| = 2.258733e-02 >>> > 0 Linear |R| = 2.258733e-02 >>> > 1 Linear |R| = 5.277296e-11 >>> > 2 Nonlinear |R| = 2.258733e-02 >>> > 0 Linear |R| = 2.258733e-02 >>> > 1 Linear |R| = 5.993971e-11 >>> > Nonlinear solve did not converge due to DIVERGED_LINE_SEARCH >>> iterations 2 >>> > >>> > Case 2 options: -snes_fd -snes_mf_operator -pc_type lu >>> > >>> > 0 Nonlinear |R| = 2.259203e-02 >>> > 0 Linear |R| = 2.259203e-02 >>> > 1 Linear |R| = 2.258733e-02 >>> > 2 Linear |R| = 3.103342e-06 >>> > 3 Linear |R| = 6.779865e-12 >>> > 1 Nonlinear |R| = 7.497740e-06 >>> > 0 Linear |R| = 7.497740e-06 >>> > 1 Linear |R| = 8.265413e-12 >>> > 2 Nonlinear |R| = 7.993729e-12 >>> > Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE iterations 2 >>> > >>> > >>> > On Tue, Dec 12, 2017 at 9:12 AM, zakaryah . >>> wrote: >>> > When you say "Jacobians are bad" and "debugging the Jacobians", do you >>> mean that the hand-coded Jacobian is wrong? In that case, why would you be >>> surprised that the finite difference Jacobians, which are "correct" to >>> approximation error, perform better? Otherwise, what does "Jacobians are >>> bad" mean - ill-conditioned? Singular? Not symmetric? Not positive >>> definite? 
>>> > >>> > >>> > >>> > >>> > -- >>> > What most experimenters take for granted before they begin their >>> experiments is infinitely more interesting than any results to which their >>> experiments lead. >>> > -- Norbert Wiener >>> > >>> > https://www.cse.buffalo.edu/~knepley/ >>> > >>> >>> >> > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Tue Dec 12 17:22:29 2017 From: bsmith at mcs.anl.gov (Smith, Barry F.) Date: Tue, 12 Dec 2017 23:22:29 +0000 Subject: [petsc-users] Matrix-free vs finite differenced Jacobian approximation In-Reply-To: References: <5B29F67D-8F24-4D0E-B12F-23CE2F335D0C@mcs.anl.gov> Message-ID: <6B885B37-0914-437C-8CBE-A04C5B4CE1D2@mcs.anl.gov> Cool. > On Dec 12, 2017, at 3:56 PM, Alexander Lindsay wrote: > > So I decided to look at the condition number of our matrix, running with `-pc_type svd -pc_svd_monitor` and it was atrocious, roughly on the order of 1e9. After doing some scaling we are down to a condition number of 1e3, and both MF and FD operators now converge, regardless of the differencing types chosen. I would say the problem was definitely on our end! > > On Tue, Dec 12, 2017 at 2:49 PM, Matthew Knepley wrote: > On Tue, Dec 12, 2017 at 3:19 PM, Alexander Lindsay wrote: > I'm helping debug the finite strain models in the TensorMechanics module in MOOSE, so unfortunately I don't have a nice small PetSc code I can hand you guys :-( > > Hmm, interesting, if I run with `-snes_mf_operator -snes_fd -mat_mffd_type ds`, I get DIVERGED_BREAKDOWN during the initial linear solve. > > So the MF operator always converges. The FD operator does not always converge, and factorization also can fail (DIVERGED_BREAKDOWN) > so it seems that the FD operator is incorrect. Usually we have bugs with coloring, but I do not think coloring is used by -snes_fd. What happens > if you get the coloring version by just deleting the FormJacobian pointer? > > Thanks, > > Matt > > If I run with `-snes_fd -mat_fd_type ds`, then the solve converges. 
> > So summary: > > - J = B = finite-differenced, differencing type = wp : Solve fails due to DIVERGED_LINE_SEARCH > > - J = B = finite-differenced, differencing type = ds : Solve converges in 3 non-linear iterations > 0 Nonlinear |R| = 2.259203e-02 > 0 Linear |R| = 2.259203e-02 > 1 Linear |R| = 6.084393e-11 > 1 Nonlinear |R| = 4.780691e-03 > 0 Linear |R| = 4.780691e-03 > 1 Linear |R| = 8.580132e-19 > 2 Nonlinear |R| = 4.806625e-09 > 0 Linear |R| = 4.806625e-09 > 1 Linear |R| = 1.650725e-24 > 3 Nonlinear |R| = 9.603678e-12 > > - J = matrix-free, B = finite-differenced, mat_mffd_type = mat_fd_type = wp: Solve converges in 2 non-linear iterations > 0 Nonlinear |R| = 2.259203e-02 > 0 Linear |R| = 2.259203e-02 > 1 Linear |R| = 2.258733e-02 > 2 Linear |R| = 3.103342e-06 > 3 Linear |R| = 6.779865e-12 > 1 Nonlinear |R| = 7.497740e-06 > 0 Linear |R| = 7.497740e-06 > 1 Linear |R| = 8.265413e-12 > 2 Nonlinear |R| = 7.993729e-12 > > - J = matrix-free, B = finite-differenced, mat_mffd_type = ds, mat_fd_type = wp: DIVERGED_BREAKDOWN in linear solve > > - J = matrix-free, B = finite-differenced, mat_mffd_type = wp, mat_fd_type = ds: Solve converges in 2 non-linear iterations > 0 Nonlinear |R| = 2.259203e-02 > 0 Linear |R| = 2.259203e-02 > 1 Linear |R| = 4.635397e-03 > 2 Linear |R| = 5.413676e-11 > 1 Nonlinear |R| = 1.068626e-05 > 0 Linear |R| = 1.068626e-05 > 1 Linear |R| = 7.942385e-12 > 2 Nonlinear |R| = 5.444448e-11 > > - J = matrix-free, B = finite-differenced, mat_mffd_type = mat_fd_type = ds: Solves converges in 3 non-linear iterations: > 0 Nonlinear |R| = 2.259203e-02 > 0 Linear |R| = 2.259203e-02 > 1 Linear |R| = 1.312921e-06 > 2 Linear |R| = 7.714018e-09 > 1 Nonlinear |R| = 4.780690e-03 > 0 Linear |R| = 4.780690e-03 > 1 Linear |R| = 7.773053e-09 > 2 Nonlinear |R| = 1.226836e-08 > 0 Linear |R| = 1.226836e-08 > 1 Linear |R| = 1.546288e-14 > 3 Nonlinear |R| = 1.295982e-10 > > > > > On Tue, Dec 12, 2017 at 12:33 PM, Smith, Barry F. wrote: > > > > On Dec 12, 2017, at 11:26 AM, Alexander Lindsay wrote: > > > > Ok, I'm going to go back on my original statement...the physics being run here is a sub-set of a much larger set of physics; for the current set the hand-coded Jacobian actually appears to be quite good. > > > > With hand-coded Jacobian, -pc_type lu, the convergence is perfect: > > > > 0 Nonlinear |R| = 2.259203e-02 > > 0 Linear |R| = 2.259203e-02 > > 1 Linear |R| = 1.129089e-10 > > 1 Nonlinear |R| = 6.295583e-11 > > > > So yea I guess at this point I'm just curious about the different behavior between `-snes_fd` and `-snes_fd -snes_mf_operator`. > > Now that you have provided the exact options you are using, yes it is very unexpected behavior. Is there any chance you can send us the code that reproduces this? > > The code that does the differencing in -snes_fd is similar to the code that does the differencing for -snes_mf_operator so normally one expects similar behavior but there are a couple of options you can try. Run with -snes_mf_operator and -help | grep mat_mffd and this will show options to control the differencing for the matrix free. For -snes_fd you have the option -mat_fd_type wp or ds > > > > Does the hand-coded result change your opinion Matt that the rules for FormFunction/Jacobian might be being violated? > > > > I understand that a finite difference approximation of the true Jacobian is an approximation. 
However, in the absence of possible complications like Matt suggested where an on-the-fly calculation might stand a better chance of capturing the behavior, I would expect both `-snes_mf_operator -snes_fd` and `-snes_fd` to suffer from the same approximations, right? > > > > On Tue, Dec 12, 2017 at 9:43 AM, Matthew Knepley wrote: > > On Tue, Dec 12, 2017 at 11:30 AM, Alexander Lindsay wrote: > > I'm not using any hand-coded Jacobians. > > > > This looks to me like the rules for FormFunction/Jacobian() are being broken. If the residual function > > depends on some third variable, and it changes between calls independent of the solution U, then > > the stored Jacobian could look wrong, but one done every time on the fly might converge. > > > > Matt > > > > Case 1 options: -snes_fd -pc_type lu > > > > 0 Nonlinear |R| = 2.259203e-02 > > 0 Linear |R| = 2.259203e-02 > > 1 Linear |R| = 7.821248e-11 > > 1 Nonlinear |R| = 2.258733e-02 > > 0 Linear |R| = 2.258733e-02 > > 1 Linear |R| = 5.277296e-11 > > 2 Nonlinear |R| = 2.258733e-02 > > 0 Linear |R| = 2.258733e-02 > > 1 Linear |R| = 5.993971e-11 > > Nonlinear solve did not converge due to DIVERGED_LINE_SEARCH iterations 2 > > > > Case 2 options: -snes_fd -snes_mf_operator -pc_type lu > > > > 0 Nonlinear |R| = 2.259203e-02 > > 0 Linear |R| = 2.259203e-02 > > 1 Linear |R| = 2.258733e-02 > > 2 Linear |R| = 3.103342e-06 > > 3 Linear |R| = 6.779865e-12 > > 1 Nonlinear |R| = 7.497740e-06 > > 0 Linear |R| = 7.497740e-06 > > 1 Linear |R| = 8.265413e-12 > > 2 Nonlinear |R| = 7.993729e-12 > > Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE iterations 2 > > > > > > On Tue, Dec 12, 2017 at 9:12 AM, zakaryah . wrote: > > When you say "Jacobians are bad" and "debugging the Jacobians", do you mean that the hand-coded Jacobian is wrong? In that case, why would you be surprised that the finite difference Jacobians, which are "correct" to approximation error, perform better? Otherwise, what does "Jacobians are bad" mean - ill-conditioned? Singular? Not symmetric? Not positive definite? > > > > > > > > > > -- > > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. > > -- Norbert Wiener > > > > https://www.cse.buffalo.edu/~knepley/ > > > > > > > > -- > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ > From j.wagner at rice.edu Tue Dec 12 18:30:39 2017 From: j.wagner at rice.edu (Jordan Wagner) Date: Tue, 12 Dec 2017 18:30:39 -0600 Subject: [petsc-users] Function to convert a dense matrix holding the cell connectivity to a MPIADJ for use with MatMeshToCellGraph In-Reply-To: References: Message-ID: <89336115-d8ee-8c91-007b-5738be30b4a3@rice.edu> Thanks for the quick reply! I have been reviewing DMPlex for a few weeks. It looks awesome (I like topology :) ); great work. I planned on implementing it in my code sooner or later. The problem for me, however, is that I am mainly using multi-section CGNS meshes in my code. This currently isn't supported in DMPlexCreateCGNS. Though, I guess I could just use DMPlexCreateFromCellList. Would that be the route you would recommend for creating a DMPlex with a connectivity matrix that I have extracted myself from the cgns file? 
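(For what it's worth, the cell-list route mentioned above looks roughly like this; the argument list should be checked against the DMPlexCreateFromCellList man page for the PETSc version in use, and `cells`/`coords` are whatever the application has already read from the CGNS sections:)

    /* cells:  numCells x numCorners vertex numbers (flattened, 0-based)
       coords: numVertices x spaceDim coordinates                        */
    ierr = DMPlexCreateFromCellList(PETSC_COMM_WORLD, dim, numCells, numVertices,
                                    numCorners, PETSC_TRUE /* interpolate */,
                                    cells, spaceDim, coords, &dm);CHKERRQ(ierr);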
I've been somewhat contemplating trying to add multi-section capability to the DMPlexCreateCGNS function; however, I figured there was a good reason why this wasn't already done and assumed would take me way longer than you guys who are much more knowledgeable. Would this be something worth thinking more about? Really appreciate it. On 12/12/2017 03:54 PM, Matthew Knepley wrote: > Barry wrote this, so he probably knows how to fix it. > > Another option is to use DMPlex for your mesh. It will give you the > dual if you want. > > ? Thanks, > > ? ? ?Matt > > On Tue, Dec 12, 2017 at 3:44 PM, Jordan Wagner > wrote: > > Hi, > > I am trying to use the function MatMeshToCellGraph. I currently > have a matrix that holds the cell connectivity of simplex > elements. So it is a numCells x 3 matrix where the row corresponds > to the cell number and the column is a vertex of that cell. To use > this function, it appears I need to get the corresponding > adjacency matrix. > > I found the function MatConvert, which I was hoping could be the > function I am looking for, but I keep getting a memory error when > using it, which I have added at the bottom. Is this the correct > function to use to convert my cell connectivity matrix, or do I > need to loop through to get the proper offsets (i,j) needed to > create the adjacency matrix with MatCreateMPIAdj, as is done in > ex11.c? > > Thanks very much for any tips. > > > [0]PETSC ERROR: > ------------------------------------------------------------------------ > [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation > Violation, probably memory access out of range > [0]PETSC ERROR: Try option -start_in_debugger or > -on_error_attach_debugger > [0]PETSC ERROR: or see > http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind > > [0]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple > Mac OS X to find memory corruption errors > [0]PETSC ERROR: PetscMallocValidate: error detected at > PetscSignalHandlerDefault() line 145 in > /home/jordan/petsc/src/sys/error/signal.c > [0]PETSC ERROR: Memory [id=0(16)] at address 0x1b4cb80 is > corrupted (probably write past end of array) > [0]PETSC ERROR: Memory originally allocated in > MatConvertFrom_MPIAdj() line 444 in > /home/jordan/petsc/src/mat/impls/adj/mpi/mpiadj.c > [0]PETSC ERROR: --------------------- Error Message > -------------------------------------------------------------- > [0]PETSC ERROR: Memory corruption: > http://www.mcs.anl.gov/petsc/documentation/installation.html#valgrind > > [0]PETSC ERROR: > [0]PETSC ERROR: See > http://www.mcs.anl.gov/petsc/documentation/faq.html > for trouble > shooting. > [0]PETSC ERROR: Petsc Release Version 3.8.2, unknown > [0]PETSC ERROR: ./preprocess.exe on a arch-linux2-c-debug named > jordan-nest by jordan Tue Dec 12 14:40:02 2017 > [0]PETSC ERROR: Configure options --with-shared-libraries=1 > --download-metis --download-parmetis > [0]PETSC ERROR: #1 PetscMallocValidate() line 146 in > /home/jordan/petsc/src/sys/memory/mtr.c > [0]PETSC ERROR: #2 PetscSignalHandlerDefault() line 145 in > /home/jordan/petsc/src/sys/error/signal.c > > > > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which > their experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From knepley at gmail.com Tue Dec 12 18:48:21 2017 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 12 Dec 2017 19:48:21 -0500 Subject: [petsc-users] Function to convert a dense matrix holding the cell connectivity to a MPIADJ for use with MatMeshToCellGraph In-Reply-To: <89336115-d8ee-8c91-007b-5738be30b4a3@rice.edu> References: <89336115-d8ee-8c91-007b-5738be30b4a3@rice.edu> Message-ID: On Tue, Dec 12, 2017 at 7:30 PM, Jordan Wagner wrote: > Thanks for the quick reply! > > I have been reviewing DMPlex for a few weeks. It looks awesome (I like > topology :) ); great work. I planned on implementing it in my code sooner > or later. The problem for me, however, is that I am mainly using > multi-section CGNS meshes in my code. This currently isn't supported in > DMPlexCreateCGNS. Though, I guess I could just use > DMPlexCreateFromCellList. Would that be the route you would recommend for > creating a DMPlex with a connectivity matrix that I have extracted myself > from the cgns file? > > Yes, I think that is the best way. That is what I do in the CGNS code I believe. We have much more complete support for ExodusII and Gmsh (and MED). > I've been somewhat contemplating trying to add multi-section capability to > the DMPlexCreateCGNS function; however, I figured there was a good reason > why this wasn't already done and assumed would take me way longer than you > guys who are much more knowledgeable. Would this be something worth > thinking more about? > Actually, it is not done because a) I know very little about CGNS, b) no one has ever requested it and c) we got requests for other formats. I believe it would not be that hard and we could help you. Most of the ExodusII support was done by a user (Blaise Bourdin) and the GMsh support by Lisandro and Stefano, and the MED support by Michael Lange, so most of this stuff is not from me. Thanks, Matt > Really appreciate it. > > On 12/12/2017 03:54 PM, Matthew Knepley wrote: > > Barry wrote this, so he probably knows how to fix it. > > Another option is to use DMPlex for your mesh. It will give you the dual > if you want. > > Thanks, > > Matt > > On Tue, Dec 12, 2017 at 3:44 PM, Jordan Wagner wrote: > >> Hi, >> >> I am trying to use the function MatMeshToCellGraph. I currently have a >> matrix that holds the cell connectivity of simplex elements. So it is a >> numCells x 3 matrix where the row corresponds to the cell number and the >> column is a vertex of that cell. To use this function, it appears I need to >> get the corresponding adjacency matrix. >> >> I found the function MatConvert, which I was hoping could be the function >> I am looking for, but I keep getting a memory error when using it, which I >> have added at the bottom. Is this the correct function to use to convert my >> cell connectivity matrix, or do I need to loop through to get the proper >> offsets (i,j) needed to create the adjacency matrix with MatCreateMPIAdj, >> as is done in ex11.c? >> >> Thanks very much for any tips. 
>> >> >> [0]PETSC ERROR: ------------------------------ >> ------------------------------------------ >> [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, >> probably memory access out of range >> [0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger >> [0]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/d >> ocumentation/faq.html#valgrind >> [0]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS >> X to find memory corruption errors >> [0]PETSC ERROR: PetscMallocValidate: error detected at >> PetscSignalHandlerDefault() line 145 in /home/jordan/petsc/src/sys/err >> or/signal.c >> [0]PETSC ERROR: Memory [id=0(16)] at address 0x1b4cb80 is corrupted >> (probably write past end of array) >> [0]PETSC ERROR: Memory originally allocated in MatConvertFrom_MPIAdj() >> line 444 in /home/jordan/petsc/src/mat/impls/adj/mpi/mpiadj.c >> [0]PETSC ERROR: --------------------- Error Message >> -------------------------------------------------------------- >> [0]PETSC ERROR: Memory corruption: http://www.mcs.anl.gov/petsc/d >> ocumentation/installation.html#valgrind >> [0]PETSC ERROR: >> [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html >> for trouble shooting. >> [0]PETSC ERROR: Petsc Release Version 3.8.2, unknown >> [0]PETSC ERROR: ./preprocess.exe on a arch-linux2-c-debug named >> jordan-nest by jordan Tue Dec 12 14:40:02 2017 >> [0]PETSC ERROR: Configure options --with-shared-libraries=1 >> --download-metis --download-parmetis >> [0]PETSC ERROR: #1 PetscMallocValidate() line 146 in >> /home/jordan/petsc/src/sys/memory/mtr.c >> [0]PETSC ERROR: #2 PetscSignalHandlerDefault() line 145 in >> /home/jordan/petsc/src/sys/error/signal.c >> >> >> >> > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Tue Dec 12 19:49:48 2017 From: bsmith at mcs.anl.gov (Smith, Barry F.) Date: Wed, 13 Dec 2017 01:49:48 +0000 Subject: [petsc-users] Function to convert a dense matrix holding the cell connectivity to a MPIADJ for use with MatMeshToCellGraph In-Reply-To: References: Message-ID: <04DCC424-53BD-4BB8-BAA3-B5383A7529D7@mcs.anl.gov> Can you send a code that reproduces the crash so I can debug it? Barry > On Dec 12, 2017, at 2:44 PM, Jordan Wagner wrote: > > Hi, > > I am trying to use the function MatMeshToCellGraph. I currently have a matrix that holds the cell connectivity of simplex elements. So it is a numCells x 3 matrix where the row corresponds to the cell number and the column is a vertex of that cell. To use this function, it appears I need to get the corresponding adjacency matrix. > > I found the function MatConvert, which I was hoping could be the function I am looking for, but I keep getting a memory error when using it, which I have added at the bottom. Is this the correct function to use to convert my cell connectivity matrix, or do I need to loop through to get the proper offsets (i,j) needed to create the adjacency matrix with MatCreateMPIAdj, as is done in ex11.c? > > Thanks very much for any tips. 
> > > [0]PETSC ERROR: ------------------------------------------------------------------------ > [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range > [0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger > [0]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind > [0]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors > [0]PETSC ERROR: PetscMallocValidate: error detected at PetscSignalHandlerDefault() line 145 in /home/jordan/petsc/src/sys/error/signal.c > [0]PETSC ERROR: Memory [id=0(16)] at address 0x1b4cb80 is corrupted (probably write past end of array) > [0]PETSC ERROR: Memory originally allocated in MatConvertFrom_MPIAdj() line 444 in /home/jordan/petsc/src/mat/impls/adj/mpi/mpiadj.c > [0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- > [0]PETSC ERROR: Memory corruption: http://www.mcs.anl.gov/petsc/documentation/installation.html#valgrind > [0]PETSC ERROR: > [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. > [0]PETSC ERROR: Petsc Release Version 3.8.2, unknown > [0]PETSC ERROR: ./preprocess.exe on a arch-linux2-c-debug named jordan-nest by jordan Tue Dec 12 14:40:02 2017 > [0]PETSC ERROR: Configure options --with-shared-libraries=1 --download-metis --download-parmetis > [0]PETSC ERROR: #1 PetscMallocValidate() line 146 in /home/jordan/petsc/src/sys/memory/mtr.c > [0]PETSC ERROR: #2 PetscSignalHandlerDefault() line 145 in /home/jordan/petsc/src/sys/error/signal.c > > > From hbcbh1999 at gmail.com Tue Dec 12 20:09:42 2017 From: hbcbh1999 at gmail.com (Hao Zhang) Date: Tue, 12 Dec 2017 21:09:42 -0500 Subject: [petsc-users] HYPRE BOOMERAMG no output In-Reply-To: References: Message-ID: Thanks. Barry. On Tue, Dec 12, 2017 at 2:03 PM, Smith, Barry F. wrote: > > > > On Dec 12, 2017, at 12:04 PM, Hao Zhang wrote: > > > > hi, > > > > before I introduce HYPRE with BOOMERAMG to my CFD code, I will have > output with good convergence rate. with BOOMERAMG, the same code will take > longer time to run and there's a good chance that no output will be > produced whatsoever. > > Are you running with -ksp_monitor -ksp_view_pre -ksp_converged_reason ? > > If you are using -ksp_monitor and getting no output it could be that > hypre is taking a very long time to form the AMG preconditioner, but I > won't expect to see this unless the problem is very very big. > > You can always run with -start_in_debugger and then type cont in the > debugger wait a couple minutes and then use control C in the debugger and > type where to determine what it is doing at the time (you can email the > output to petsc-maint at mcs.anl.gov). > > Boomeramg is not good for all classes of matrices, it is fine for > elliptic/parabolic generally but if you hand it something that is > hyperbolically dominated it will not work well. > > Barry > > > > > what happened? thanks! > > > > -- > > Hao Zhang > > Dept. of Applid Mathematics and Statistics, > > Stony Brook University, > > Stony Brook, New York, 11790 > > -- Hao Zhang Dept. of Applid Mathematics and Statistics, Stony Brook University, Stony Brook, New York, 11790 -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Tue Dec 12 21:01:49 2017 From: bsmith at mcs.anl.gov (Smith, Barry F.) 
Date: Wed, 13 Dec 2017 03:01:49 +0000 Subject: [petsc-users] MatCreateShell, MatShellGetContext, MatShellSetContext in fortran In-Reply-To: <5A2FA6CE.3050300@gmail.com> References: <61890642-9a16-7730-49f1-efb889c38fc4@gmail.com> <5A2FA6CE.3050300@gmail.com> Message-ID: <60ECF900-6539-467E-BF8A-E527F26568BA@mcs.anl.gov> Samuel, I have attached a fixed up version of your example that works correctly. Your basic problem was that you did not "use" the final module in the main program so it did know the interfaces for the routines. I have included this example in PETSc for others to benefit from in the branch barry/add-f90-matshellgetcontext-example Thanks for the question, Barry > On Dec 12, 2017, at 3:52 AM, Samuel Lanthaler wrote: > > Let me also add a minimal example (relying only on petsc), which leads to the same error message on my machine. Again, what I'm trying to do is very simple: > ? MatCreateShell => Initialize Mat :: F > ? MatShellSetContext => set the context of F to ctxF (looking at the petsc source code, this call actually seems to be superfluous, but nevermind) > ? MatShellGetContext => get the pointer ctxF_pt to point to the matrix context > I'm getting an error message in the third step. > > [0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- > [0]PETSC ERROR: Null argument, when expecting valid pointer > [0]PETSC ERROR: Null Pointer: Parameter # 2 > Again, thanks for your help! > Cheers, > Samuel > > On 12/11/2017 07:41 PM, Samuel Lanthaler wrote: >> Dear petsc-/slepc-users, >> >> I have been trying to understand matrix-free/shell matrices in PETSc for eventual use in solving a non-linear eigenvalue problem using SLEPC. But I seem to be having trouble with calls to MatShellGetContext. As far as I understand, this function should initialize a pointer (second argument) so that the subroutine output will point to the context associated with my shell-matrix (let's say of TYPE(MatCtx))? When calling that subroutine, I get the following error message: >> >> [0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- >> [0]PETSC ERROR: Null argument, when expecting valid pointer >> [0]PETSC ERROR: Null Pointer: Parameter # 2 >> >> In my code, the second input argument to the routine is a null-pointer of TYPE(MatCtx),POINTER :: arg2. Which the error message appears to be unhappy with. I noticed that there is no error message if I instead pass an object TYPE(MatCtx) :: arg2 to the routine... which doesn't really make sense to me? Could someone maybe explain to me what is going on, here? >> >> Just in case, let me also attach my concrete example code (it is supposed to be a Fortran version of the slepc-example in slepc-3.8.1/src/nep/examples/tutorials/ex21.c). I have added an extra call to MatShellGetContext on line 138, after the function and jacobian should supposedly have been set up. >> >> Thanks a lot for your help! >> >> Cheers, >> >> Samuel >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: ex6f.F90
Type: application/octet-stream
Size: 2805 bytes
Desc: ex6f.F90
URL:

From edoardo.alinovi at gmail.com  Wed Dec 13 01:28:00 2017
From: edoardo.alinovi at gmail.com (Edoardo alinovi)
Date: Wed, 13 Dec 2017 08:28:00 +0100
Subject: [petsc-users] Advices to decrease bicgstab iterations
In-Reply-To:
References:
Message-ID:

Dear petsc users,

I am doing some calculations to test my code in parallel using petsc. With respect to the serial code, which uses an in-house bicgstab + diagonal ILU preconditioner, I note that petsc's bicgstab+ASM with standard options needs a larger number of iterations to converge (order 3 to 5). Based on your experience, do you have any method to improve the preconditioning?

Thank you very much,

Edoardo
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From jed at jedbrown.org  Wed Dec 13 01:39:36 2017
From: jed at jedbrown.org (Jed Brown)
Date: Wed, 13 Dec 2017 00:39:36 -0700
Subject: [petsc-users] Advices to decrease bicgstab iterations
In-Reply-To:
References:
Message-ID: <87efnz13gn.fsf@jedbrown.org>

When asking for advice on solvers, please always explain the problem you are solving and the discretization that you use.

Edoardo alinovi writes:

> Dear petsc users,
>
> I am doing some calculations to test my code in parallel using petsc. With
> respect to the serial code, which uses an in-house bicgstab + diagonal ILU
> preconditioner, I note that petsc's bicgstab+ASM with standard options needs
> a larger number of iterations to converge (order 3 to 5). Based on your
> experience, do you have any method to improve the preconditioning?
>
> Thank you very much,
>
> Edoardo

From edoardo.alinovi at gmail.com  Wed Dec 13 01:46:39 2017
From: edoardo.alinovi at gmail.com (Edoardo alinovi)
Date: Wed, 13 Dec 2017 08:46:39 +0100
Subject: [petsc-users] Advices to decrease bicgstab iterations
In-Reply-To: <87efnz13gn.fsf@jedbrown.org>
References: <87efnz13gn.fsf@jedbrown.org>
Message-ID:

Ps: I am solving the Navier-Stokes equations (the pressure-correction equation in particular). The matrix is sparse and in mpiaij format.

On 13 Dec 2017 8:39 AM, "Jed Brown" wrote:

> When asking for advice on solvers, please always explain the problem you are
> solving and the discretization that you use.
>
> Edoardo alinovi writes:
>
> > Dear petsc users,
> >
> > I am doing some calculations to test my code in parallel using petsc. With
> > respect to the serial code, which uses an in-house bicgstab + diagonal ILU
> > preconditioner, I note that petsc's bicgstab+ASM with standard options needs
> > a larger number of iterations to converge (order 3 to 5). Based on your
> > experience, do you have any method to improve the preconditioning?
> >
> > Thank you very much,
> >
> > Edoardo
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From jed at jedbrown.org  Wed Dec 13 02:05:02 2017
From: jed at jedbrown.org (Jed Brown)
Date: Wed, 13 Dec 2017 01:05:02 -0700
Subject: [petsc-users] Advices to decrease bicgstab iterations
In-Reply-To:
References: <87efnz13gn.fsf@jedbrown.org>
Message-ID: <87bmj312a9.fsf@jedbrown.org>

Edoardo alinovi writes:

> Ps: I am solving the Navier-Stokes equations (the pressure-correction equation in
> particular). The matrix is sparse and in mpiaij format.

If you're just solving a pressure projection then you can and should use multigrid. For example, -pc_type gamg.
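As a rough illustration of that suggestion, here is a minimal, self-contained sketch of hard-wiring the same choice in code. The tridiagonal system below is only a made-up stand-in for an assembled pressure-correction matrix, CG is assumed because such matrices are typically symmetric positive definite (keep bcgs if yours is not), and KSPSetFromOptions keeps command-line overrides such as -pc_type working.

  #include <petscksp.h>

  int main(int argc, char **argv)
  {
    Mat            A;
    Vec            x, b;
    KSP            ksp;
    PC             pc;
    PetscInt       i, n = 100, Istart, Iend;
    PetscErrorCode ierr;

    ierr = PetscInitialize(&argc,&argv,NULL,NULL);if (ierr) return ierr;
    /* Stand-in SPD system: a 1-D Laplacian assembled as an AIJ matrix */
    ierr = MatCreate(PETSC_COMM_WORLD,&A);CHKERRQ(ierr);
    ierr = MatSetSizes(A,PETSC_DECIDE,PETSC_DECIDE,n,n);CHKERRQ(ierr);
    ierr = MatSetFromOptions(A);CHKERRQ(ierr);
    ierr = MatSetUp(A);CHKERRQ(ierr);
    ierr = MatGetOwnershipRange(A,&Istart,&Iend);CHKERRQ(ierr);
    for (i = Istart; i < Iend; ++i) {
      if (i > 0)   { ierr = MatSetValue(A,i,i-1,-1.0,INSERT_VALUES);CHKERRQ(ierr); }
      if (i < n-1) { ierr = MatSetValue(A,i,i+1,-1.0,INSERT_VALUES);CHKERRQ(ierr); }
      ierr = MatSetValue(A,i,i,2.0,INSERT_VALUES);CHKERRQ(ierr);
    }
    ierr = MatAssemblyBegin(A,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
    ierr = MatAssemblyEnd(A,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
    ierr = MatCreateVecs(A,&x,&b);CHKERRQ(ierr);
    ierr = VecSet(b,1.0);CHKERRQ(ierr);

    /* Equivalent to -ksp_type cg -pc_type gamg on the command line */
    ierr = KSPCreate(PETSC_COMM_WORLD,&ksp);CHKERRQ(ierr);
    ierr = KSPSetOperators(ksp,A,A);CHKERRQ(ierr);
    ierr = KSPSetType(ksp,KSPCG);CHKERRQ(ierr);
    ierr = KSPGetPC(ksp,&pc);CHKERRQ(ierr);
    ierr = PCSetType(pc,PCGAMG);CHKERRQ(ierr);
    ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);
    ierr = KSPSolve(ksp,b,x);CHKERRQ(ierr);

    ierr = KSPDestroy(&ksp);CHKERRQ(ierr);
    ierr = VecDestroy(&x);CHKERRQ(ierr);
    ierr = VecDestroy(&b);CHKERRQ(ierr);
    ierr = MatDestroy(&A);CHKERRQ(ierr);
    ierr = PetscFinalize();
    return ierr;
  }

Running with -ksp_monitor -ksp_converged_reason makes it easy to compare iteration counts against bicgstab+ASM.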
From fsantost at student.ethz.ch Wed Dec 13 03:30:01 2017 From: fsantost at student.ethz.ch (Santos Teixeira Frederico) Date: Wed, 13 Dec 2017 09:30:01 +0000 Subject: [petsc-users] Problem with DMRefine and DMLabel Message-ID: <682CC3CD7A208742B8C2D116C6719901563CF530@MBX13.d.ethz.ch> Hi folks, In the example SNES/62.c, when I use "-simplex 1" and "-refinement_limit 0.05", it seems the refined DMPlex does not carry all the labels from the original DM. You can see that if you place, in the function CreateMesh, one DMViewFromOptions right after the DMPlexCreateBoxMesh and another DMViewFromOptions after DMRefine. The first output is DM Object: DM_0x84000000_0 1 MPI processes type: plex DM_0x84000000_0 in 2 dimensions: 0-cells: 9 1-cells: 16 2-cells: 8 Labels: Face Sets: 4 strata with value/size (1 (2), 4 (2), 2 (2), 3 (2)) marker: 4 strata with value/size (4 (5), 1 (3), 2 (5), 3 (3)) depth: 3 strata with value/size (0 (9), 1 (16), 2 (8)) which is correct w.r.t. the definition from DMPlexCreateBoxMesh. However, the second output is DM Object: DM_0x84000000_1 1 MPI processes type: plex DM_0x84000000_1 in 2 dimensions: 0-cells: 145 1-cells: 400 2-cells: 256 Labels: marker: 4 strata with value/size (4 (3), 1 (57), 2 (3), 3 (1)) depth: 3 strata with value/size (0 (145), 1 (400), 2 (256)) Note that the label "Face Sets" disappeared and that the label "marker" does not include all the boundary points. The full set of options is: -run_type full -refinement_limit 0.005 -simplex 1 -dm_view -dm_plex_separate_marker -interpolate 1 -vel_petscspace_order 2 -pres_petscspace_order 1 -ksp_view -ksp_monitor -ksp_type fgmres -ksp_gmres_restart 10 -ksp_rtol 1.0e-9 -pc_type fieldsplit -pc_fieldsplit_type schur -pc_fieldsplit_schur_factorization_type full -fieldsplit_pressure_ksp_rtol 1e-10 -fieldsplit_velocity_ksp_type gmres -fieldsplit_velocity_pc_type lu -fieldsplit_pressure_pc_type jacobi -snes_error_if_not_converged -ksp_error_if_not_converged -snes_view -snes_monitor Am I missing anything? Regards, Fred. -------------- next part -------------- An HTML attachment was scrubbed... URL: From s.lanthaler at gmail.com Wed Dec 13 03:52:04 2017 From: s.lanthaler at gmail.com (Samuel Lanthaler) Date: Wed, 13 Dec 2017 10:52:04 +0100 Subject: [petsc-users] MatCreateShell, MatShellGetContext, MatShellSetContext in fortran In-Reply-To: <60ECF900-6539-467E-BF8A-E527F26568BA@mcs.anl.gov> References: <61890642-9a16-7730-49f1-efb889c38fc4@gmail.com> <5A2FA6CE.3050300@gmail.com> <60ECF900-6539-467E-BF8A-E527F26568BA@mcs.anl.gov> Message-ID: <5A30F844.8080602@gmail.com> Ah, silly me... Of course, if the program can't actually see the interface then it makes sense that it won't work. As always, thanks a lot for your help, Barry! Samuel On 12/13/2017 04:01 AM, Smith, Barry F. wrote: > > Samuel, > > I have attached a fixed up version of your example that works > correctly. Your basic problem was that you did not "use" the final > module in the main program so it did know the interfaces for the > routines. I have included this example in PETSc for others to > benefit from in the branch barry/add-f90-matshellgetcontext-example > > Thanks for the question, > > Barry > > > > > > On Dec 12, 2017, at 3:52 AM, Samuel Lanthaler > wrote: > > > > Let me also add a minimal example (relying only on petsc), which > leads to the same error message on my machine. Again, what I'm trying > to do is very simple: > > ? MatCreateShell => Initialize Mat :: F > > ? 
MatShellSetContext => set the context of F to ctxF (looking > at the petsc source code, this call actually seems to be superfluous, > but nevermind) > > ? MatShellGetContext => get the pointer ctxF_pt to point to > the matrix context > > I'm getting an error message in the third step. > > > > [0]PETSC ERROR: --------------------- Error Message > -------------------------------------------------------------- > > [0]PETSC ERROR: Null argument, when expecting valid pointer > > [0]PETSC ERROR: Null Pointer: Parameter # 2 > > Again, thanks for your help! > > Cheers, > > Samuel > > > > On 12/11/2017 07:41 PM, Samuel Lanthaler wrote: > >> Dear petsc-/slepc-users, > >> > >> I have been trying to understand matrix-free/shell matrices in > PETSc for eventual use in solving a non-linear eigenvalue problem > using SLEPC. But I seem to be having trouble with calls to > MatShellGetContext. As far as I understand, this function should > initialize a pointer (second argument) so that the subroutine output > will point to the context associated with my shell-matrix (let's say > of TYPE(MatCtx))? When calling that subroutine, I get the following > error message: > >> > >> [0]PETSC ERROR: --------------------- Error Message > -------------------------------------------------------------- > >> [0]PETSC ERROR: Null argument, when expecting valid pointer > >> [0]PETSC ERROR: Null Pointer: Parameter # 2 > >> > >> In my code, the second input argument to the routine is a > null-pointer of TYPE(MatCtx),POINTER :: arg2. Which the error message > appears to be unhappy with. I noticed that there is no error message > if I instead pass an object TYPE(MatCtx) :: arg2 to the routine... > which doesn't really make sense to me? Could someone maybe explain to > me what is going on, here? > >> > >> Just in case, let me also attach my concrete example code (it is > supposed to be a Fortran version of the slepc-example in > slepc-3.8.1/src/nep/examples/tutorials/ex21.c). I have added an extra > call to MatShellGetContext on line 138, after the function and > jacobian should supposedly have been set up. > >> > >> Thanks a lot for your help! > >> > >> Cheers, > >> > >> Samuel > >> > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Wed Dec 13 05:15:48 2017 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 13 Dec 2017 06:15:48 -0500 Subject: [petsc-users] Problem with DMRefine and DMLabel In-Reply-To: <682CC3CD7A208742B8C2D116C6719901563CF530@MBX13.d.ethz.ch> References: <682CC3CD7A208742B8C2D116C6719901563CF530@MBX13.d.ethz.ch> Message-ID: On Wed, Dec 13, 2017 at 4:30 AM, Santos Teixeira Frederico < fsantost at student.ethz.ch> wrote: > Hi folks, > > In the example SNES/62.c, when I use "-simplex 1" and "-refinement_limit > 0.05", it seems the refined DMPlex does not carry all the labels from the > original DM. > This is correct. Here is the problem. When you use -refinement_limit (instead of -dm_refine), I call Triangle/TetGen to do the refinement. They do not tell me how they changed the mesh to get there, so I have no way of propagating the labels. There is a special name ("marker") that I was using for the boundary, which I automatically mark after generation. We could choose to also mark boundary faces, but it seems redundant. If you want them marked, you can call DMPlexMarkBoundaryFaces() on the new mesh. Does this seem reasonable? 
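For contrast, the regular-refinement path that -dm_refine takes does carry labels over to the refined mesh, which is what lets the markers survive (as Fred reports below). A rough sketch — "dm" and "ierr" here are stand-ins for whatever the surrounding code already has, and the calls should be checked against your PETSc version:

  DM dmRefined;

  /* Uniform (regular) refinement of an existing Plex; unlike the
     Triangle/TetGen path, the labels are transferred to the new mesh */
  ierr = DMPlexSetRefinementUniform(dm, PETSC_TRUE);CHKERRQ(ierr);
  ierr = DMRefine(dm, PetscObjectComm((PetscObject)dm), &dmRefined);CHKERRQ(ierr);
  if (dmRefined) {
    ierr = DMDestroy(&dm);CHKERRQ(ierr);
    dm   = dmRefined;
  }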
Thanks, Matt > You can see that if you place, in the function CreateMesh, one > DMViewFromOptions right after the DMPlexCreateBoxMesh and another > DMViewFromOptions after DMRefine. The first output is > > DM Object: DM_0x84000000_0 1 MPI processes > type: plex > DM_0x84000000_0 in 2 dimensions: > 0-cells: 9 > 1-cells: 16 > 2-cells: 8 > Labels: > Face Sets: 4 strata with value/size (1 (2), 4 (2), 2 (2), 3 (2)) > marker: 4 strata with value/size (4 (5), 1 (3), 2 (5), 3 (3)) > depth: 3 strata with value/size (0 (9), 1 (16), 2 (8)) > > which is correct w.r.t. the definition from DMPlexCreateBoxMesh. However, > the second output is > > DM Object: DM_0x84000000_1 1 MPI processes > type: plex > DM_0x84000000_1 in 2 dimensions: > 0-cells: 145 > 1-cells: 400 > 2-cells: 256 > Labels: > marker: 4 strata with value/size (4 (3), 1 (57), 2 (3), 3 (1)) > depth: 3 strata with value/size (0 (145), 1 (400), 2 (256)) > > Note that the label "Face Sets" disappeared and that the label "marker" > does not include all the boundary points. > The full set of options is: > > -run_type full > -refinement_limit 0.005 > -simplex 1 > -dm_view > -dm_plex_separate_marker > -interpolate 1 > -vel_petscspace_order 2 > -pres_petscspace_order 1 > -ksp_view > -ksp_monitor > -ksp_type fgmres > -ksp_gmres_restart 10 > -ksp_rtol 1.0e-9 > -pc_type fieldsplit > -pc_fieldsplit_type schur > -pc_fieldsplit_schur_factorization_type full > -fieldsplit_pressure_ksp_rtol 1e-10 > -fieldsplit_velocity_ksp_type gmres > -fieldsplit_velocity_pc_type lu > -fieldsplit_pressure_pc_type jacobi > -snes_error_if_not_converged > -ksp_error_if_not_converged > -snes_view > -snes_monitor > > Am I missing anything? > > Regards, > Fred. > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From fsantost at student.ethz.ch Wed Dec 13 06:25:22 2017 From: fsantost at student.ethz.ch (Santos Teixeira Frederico) Date: Wed, 13 Dec 2017 12:25:22 +0000 Subject: [petsc-users] Problem with DMRefine and DMLabel In-Reply-To: References: <682CC3CD7A208742B8C2D116C6719901563CF530@MBX13.d.ethz.ch>, Message-ID: <682CC3CD7A208742B8C2D116C6719901563D05AC@MBX13.d.ethz.ch> Hi Matt, Thanks for your explanation. The option -dm_refine worked for me because I could keep the effect of -dm_plex_separate_marker. I playing with Neumann and Dirichlet boundaries to solve the Poiseulle flow, so I need different ids. The function DMPlexMarkBoundaryFace would set all the points with the same id, right? Regards, Fred. ________________________________ From: Matthew Knepley [knepley at gmail.com] Sent: 13 December 2017 12:15 To: Santos Teixeira Frederico Cc: petsc-users at mcs.anl.gov Subject: Re: [petsc-users] Problem with DMRefine and DMLabel On Wed, Dec 13, 2017 at 4:30 AM, Santos Teixeira Frederico > wrote: Hi folks, In the example SNES/62.c, when I use "-simplex 1" and "-refinement_limit 0.05", it seems the refined DMPlex does not carry all the labels from the original DM. This is correct. Here is the problem. When you use -refinement_limit (instead of -dm_refine), I call Triangle/TetGen to do the refinement. They do not tell me how they changed the mesh to get there, so I have no way of propagating the labels. There is a special name ("marker") that I was using for the boundary, which I automatically mark after generation. 
We could choose to also mark boundary faces, but it seems redundant. If you want them marked, you can call DMPlexMarkBoundaryFaces() on the new mesh. Does this seem reasonable? Thanks, Matt You can see that if you place, in the function CreateMesh, one DMViewFromOptions right after the DMPlexCreateBoxMesh and another DMViewFromOptions after DMRefine. The first output is DM Object: DM_0x84000000_0 1 MPI processes type: plex DM_0x84000000_0 in 2 dimensions: 0-cells: 9 1-cells: 16 2-cells: 8 Labels: Face Sets: 4 strata with value/size (1 (2), 4 (2), 2 (2), 3 (2)) marker: 4 strata with value/size (4 (5), 1 (3), 2 (5), 3 (3)) depth: 3 strata with value/size (0 (9), 1 (16), 2 (8)) which is correct w.r.t. the definition from DMPlexCreateBoxMesh. However, the second output is DM Object: DM_0x84000000_1 1 MPI processes type: plex DM_0x84000000_1 in 2 dimensions: 0-cells: 145 1-cells: 400 2-cells: 256 Labels: marker: 4 strata with value/size (4 (3), 1 (57), 2 (3), 3 (1)) depth: 3 strata with value/size (0 (145), 1 (400), 2 (256)) Note that the label "Face Sets" disappeared and that the label "marker" does not include all the boundary points. The full set of options is: -run_type full -refinement_limit 0.005 -simplex 1 -dm_view -dm_plex_separate_marker -interpolate 1 -vel_petscspace_order 2 -pres_petscspace_order 1 -ksp_view -ksp_monitor -ksp_type fgmres -ksp_gmres_restart 10 -ksp_rtol 1.0e-9 -pc_type fieldsplit -pc_fieldsplit_type schur -pc_fieldsplit_schur_factorization_type full -fieldsplit_pressure_ksp_rtol 1e-10 -fieldsplit_velocity_ksp_type gmres -fieldsplit_velocity_pc_type lu -fieldsplit_pressure_pc_type jacobi -snes_error_if_not_converged -ksp_error_if_not_converged -snes_view -snes_monitor Am I missing anything? Regards, Fred. -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Wed Dec 13 07:47:41 2017 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 13 Dec 2017 08:47:41 -0500 Subject: [petsc-users] Problem with DMRefine and DMLabel In-Reply-To: <682CC3CD7A208742B8C2D116C6719901563D05AC@MBX13.d.ethz.ch> References: <682CC3CD7A208742B8C2D116C6719901563CF530@MBX13.d.ethz.ch> <682CC3CD7A208742B8C2D116C6719901563D05AC@MBX13.d.ethz.ch> Message-ID: On Wed, Dec 13, 2017 at 7:25 AM, Santos Teixeira Frederico < fsantost at student.ethz.ch> wrote: > Hi Matt, > > Thanks for your explanation. > > The option -dm_refine worked for me because I could keep the effect of - > dm_plex_separate_marker. I playing with Neumann and Dirichlet boundaries > to solve the Poiseulle flow, so I need different ids. > > The function DMPlexMarkBoundaryFace would set all the points with the same > id, right? > Yes, you are right. Thanks, Matt > Regards, > Fred. > > > ------------------------------ > *From:* Matthew Knepley [knepley at gmail.com] > *Sent:* 13 December 2017 12:15 > *To:* Santos Teixeira Frederico > *Cc:* petsc-users at mcs.anl.gov > *Subject:* Re: [petsc-users] Problem with DMRefine and DMLabel > > On Wed, Dec 13, 2017 at 4:30 AM, Santos Teixeira Frederico < > fsantost at student.ethz.ch> wrote: > >> Hi folks, >> >> In the example SNES/62.c, when I use "-simplex 1" and "-refinement_limit >> 0.05", it seems the refined DMPlex does not carry all the labels from the >> original DM. >> > > This is correct. Here is the problem. 
When you use -refinement_limit > (instead of -dm_refine), I call Triangle/TetGen to do > the refinement. They do not tell me how they changed the mesh to get > there, so I have no way of propagating the labels. > There is a special name ("marker") that I was using for the boundary, > which I automatically mark after generation. > > We could choose to also mark boundary faces, but it seems redundant. If > you want them marked, you can call > DMPlexMarkBoundaryFaces() on the new mesh. Does this seem reasonable? > > Thanks, > > Matt > > >> You can see that if you place, in the function CreateMesh, one >> DMViewFromOptions right after the DMPlexCreateBoxMesh and another >> DMViewFromOptions after DMRefine. The first output is >> >> DM Object: DM_0x84000000_0 1 MPI processes >> type: plex >> DM_0x84000000_0 in 2 dimensions: >> 0-cells: 9 >> 1-cells: 16 >> 2-cells: 8 >> Labels: >> Face Sets: 4 strata with value/size (1 (2), 4 (2), 2 (2), 3 (2)) >> marker: 4 strata with value/size (4 (5), 1 (3), 2 (5), 3 (3)) >> depth: 3 strata with value/size (0 (9), 1 (16), 2 (8)) >> >> which is correct w.r.t. the definition from DMPlexCreateBoxMesh. However, >> the second output is >> >> DM Object: DM_0x84000000_1 1 MPI processes >> type: plex >> DM_0x84000000_1 in 2 dimensions: >> 0-cells: 145 >> 1-cells: 400 >> 2-cells: 256 >> Labels: >> marker: 4 strata with value/size (4 (3), 1 (57), 2 (3), 3 (1)) >> depth: 3 strata with value/size (0 (145), 1 (400), 2 (256)) >> >> Note that the label "Face Sets" disappeared and that the label "marker" >> does not include all the boundary points. >> The full set of options is: >> >> -run_type full >> -refinement_limit 0.005 >> -simplex 1 >> -dm_view >> -dm_plex_separate_marker >> -interpolate 1 >> -vel_petscspace_order 2 >> -pres_petscspace_order 1 >> -ksp_view >> -ksp_monitor >> -ksp_type fgmres >> -ksp_gmres_restart 10 >> -ksp_rtol 1.0e-9 >> -pc_type fieldsplit >> -pc_fieldsplit_type schur >> -pc_fieldsplit_schur_factorization_type full >> -fieldsplit_pressure_ksp_rtol 1e-10 >> -fieldsplit_velocity_ksp_type gmres >> -fieldsplit_velocity_pc_type lu >> -fieldsplit_pressure_pc_type jacobi >> -snes_error_if_not_converged >> -ksp_error_if_not_converged >> -snes_view >> -snes_monitor >> >> Am I missing anything? >> >> Regards, >> Fred. >> > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From hbcbh1999 at gmail.com Wed Dec 13 10:07:14 2017 From: hbcbh1999 at gmail.com (Hao Zhang) Date: Wed, 13 Dec 2017 11:07:14 -0500 Subject: [petsc-users] HYPRE BOOMERAMG no output In-Reply-To: References: Message-ID: CFD is 3D with problem size 150 X 20 X 400 for X, Y and Z direction. : If I'm using less total number of processors conducting parallel simulation, the code runs but at late simulating time, my matrix will have 0 max and min singular value out of nowhere, which confuses me deeply. when I double the total number of processors, my simulations halt at late simulating time. I have petsc running options on but no more output no error message. It's been 12 hours. very weird. 
to sum up, 3 problems are: (1) max(min) singular value of matrix is 0s starting at late time out of nowhere with or without BOOMERAMG (2) different number of processors BOOMERAMG behaves very differently (3) petsc part halt for unknown reason without crashing or any information On Tue, Dec 12, 2017 at 9:09 PM, Hao Zhang wrote: > Thanks. Barry. > > > > On Tue, Dec 12, 2017 at 2:03 PM, Smith, Barry F. > wrote: > >> >> >> > On Dec 12, 2017, at 12:04 PM, Hao Zhang wrote: >> > >> > hi, >> > >> > before I introduce HYPRE with BOOMERAMG to my CFD code, I will have >> output with good convergence rate. with BOOMERAMG, the same code will take >> longer time to run and there's a good chance that no output will be >> produced whatsoever. >> >> Are you running with -ksp_monitor -ksp_view_pre -ksp_converged_reason ? >> >> If you are using -ksp_monitor and getting no output it could be that >> hypre is taking a very long time to form the AMG preconditioner, but I >> won't expect to see this unless the problem is very very big. >> >> You can always run with -start_in_debugger and then type cont in the >> debugger wait a couple minutes and then use control C in the debugger and >> type where to determine what it is doing at the time (you can email the >> output to petsc-maint at mcs.anl.gov). >> >> Boomeramg is not good for all classes of matrices, it is fine for >> elliptic/parabolic generally but if you hand it something that is >> hyperbolically dominated it will not work well. >> >> Barry >> >> > >> > what happened? thanks! >> > >> > -- >> > Hao Zhang >> > Dept. of Applid Mathematics and Statistics, >> > Stony Brook University, >> > Stony Brook, New York, 11790 >> >> > > > -- > Hao Zhang > Dept. of Applid Mathematics and Statistics, > Stony Brook University, > Stony Brook, New York, 11790 > -- Hao Zhang Dept. of Applid Mathematics and Statistics, Stony Brook University, Stony Brook, New York, 11790 -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Wed Dec 13 10:26:46 2017 From: bsmith at mcs.anl.gov (Smith, Barry F.) Date: Wed, 13 Dec 2017 16:26:46 +0000 Subject: [petsc-users] HYPRE BOOMERAMG no output In-Reply-To: References: Message-ID: First http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind then ask again > On Dec 13, 2017, at 10:07 AM, Hao Zhang wrote: > > CFD is 3D with problem size 150 X 20 X 400 for X, Y and Z direction. : If I'm using less total number of processors conducting parallel simulation, the code runs but at late simulating time, my matrix will have 0 max and min singular value out of nowhere, which confuses me deeply. when I double the total number of processors, my simulations halt at late simulating time. I have petsc running options on but no more output no error message. It's been 12 hours. very weird. > > to sum up, 3 problems are: > (1) max(min) singular value of matrix is 0s starting at late time out of nowhere with or without BOOMERAMG > (2) different number of processors BOOMERAMG behaves very differently > (3) petsc part halt for unknown reason without crashing or any information > > On Tue, Dec 12, 2017 at 9:09 PM, Hao Zhang wrote: > Thanks. Barry. > > > > On Tue, Dec 12, 2017 at 2:03 PM, Smith, Barry F. wrote: > > > > On Dec 12, 2017, at 12:04 PM, Hao Zhang wrote: > > > > hi, > > > > before I introduce HYPRE with BOOMERAMG to my CFD code, I will have output with good convergence rate. 
with BOOMERAMG, the same code will take longer time to run and there's a good chance that no output will be produced whatsoever. > > Are you running with -ksp_monitor -ksp_view_pre -ksp_converged_reason ? > > If you are using -ksp_monitor and getting no output it could be that hypre is taking a very long time to form the AMG preconditioner, but I won't expect to see this unless the problem is very very big. > > You can always run with -start_in_debugger and then type cont in the debugger wait a couple minutes and then use control C in the debugger and type where to determine what it is doing at the time (you can email the output to petsc-maint at mcs.anl.gov). > > Boomeramg is not good for all classes of matrices, it is fine for elliptic/parabolic generally but if you hand it something that is hyperbolically dominated it will not work well. > > Barry > > > > > what happened? thanks! > > > > -- > > Hao Zhang > > Dept. of Applid Mathematics and Statistics, > > Stony Brook University, > > Stony Brook, New York, 11790 > > > > > -- > Hao Zhang > Dept. of Applid Mathematics and Statistics, > Stony Brook University, > Stony Brook, New York, 11790 > > > > -- > Hao Zhang > Dept. of Applid Mathematics and Statistics, > Stony Brook University, > Stony Brook, New York, 11790 From edoardo.alinovi at gmail.com Fri Dec 15 02:41:43 2017 From: edoardo.alinovi at gmail.com (Edoardo alinovi) Date: Fri, 15 Dec 2017 09:41:43 +0100 Subject: [petsc-users] How to disable debugging mode? Message-ID: Dear users, I would like to disable the debugging mode in petsc and turn it on when needed. my actual configure is: --prefix=/home/edo/software/petsc_3.8.1/ --with-mpi-dir=/home/edo/software/openMPI-3.0/ & --download-fblaslapack=1 --download-superlu_dist --download-mumps & --download-hypre --download-metis --download-parmetis --download-scalapack Have I only to add the flag "--with-debugging=no" to the above lines and re-run the configure script? Have I to substitute "--download-libname" with --with-libname-lib= for each external library? Sorry for the dummy question, but I really do not want to mess up the current working install :) Thak you very much, Edoardo -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Fri Dec 15 05:35:38 2017 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 15 Dec 2017 06:35:38 -0500 Subject: [petsc-users] How to disable debugging mode? In-Reply-To: References: Message-ID: On Fri, Dec 15, 2017 at 3:41 AM, Edoardo alinovi wrote: > Dear users, > > I would like to disable the debugging mode in petsc and turn it on when > needed. > > my actual configure is: > > > --prefix=/home/edo/software/petsc_3.8.1/ --with-mpi-dir=/home/edo/software/openMPI-3.0/ > & > --download-fblaslapack=1 --download-superlu_dist --download-mumps & > --download-hypre --download-metis --download-parmetis --download-scalapack > > > Have I only to add the flag "--with-debugging=no" to the above lines and > re-run the configure script? > Yes. > Have I to substitute "--download-libname" with --with-libname-lib= > for each external library? > No. > Sorry for the dummy question, but I really do not want to mess up the > current working install :) > You do not have to. 
Rerun the configure like this:

  $PETSC_DIR/$PETSC_ARCH/lib/petsc/conf/reconfigure-$PETSC_ARCH.py --with-debugging=0 --PETSC_ARCH=arch-opt

  Thanks,

     Matt

> Thak you very much,
>
> Edoardo

-- 
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From cpraveen at gmail.com  Sat Dec 16 19:06:46 2017
From: cpraveen at gmail.com (Praveen C)
Date: Sun, 17 Dec 2017 06:36:46 +0530
Subject: [petsc-users] Multiblock examples
Message-ID: <9991EAE6-3E0D-43AA-B3CF-7E73AF87CC11@gmail.com>

Dear all

I want to convert an Euler solver which currently uses DMDA to a multiblock code. I would like to write a general code where the number of blocks and connectivity will be read from a file.

Is there an example which shows how such things have been done?

Thanks a lot
praveen

From knepley at gmail.com  Sat Dec 16 19:14:55 2017
From: knepley at gmail.com (Matthew Knepley)
Date: Sat, 16 Dec 2017 20:14:55 -0500
Subject: [petsc-users] Multiblock examples
In-Reply-To: <9991EAE6-3E0D-43AA-B3CF-7E73AF87CC11@gmail.com>
References: <9991EAE6-3E0D-43AA-B3CF-7E73AF87CC11@gmail.com>
Message-ID:

On Sat, Dec 16, 2017 at 8:06 PM, Praveen C wrote:

> Dear all
>
> I want to convert an Euler solver which currently uses DMDA to a
> multiblock code.

What precisely do you mean by this? There are lots of things people do.

  Thanks,

     Matt

> I would like to write a general code where the number of blocks and
> connectivity will be read from a file.
>
> Is there an example which shows how such things have been done?
>
> Thanks a lot
> praveen

-- 
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From cpraveen at gmail.com  Sat Dec 16 19:56:42 2017
From: cpraveen at gmail.com (Praveen C)
Date: Sun, 17 Dec 2017 07:26:42 +0530
Subject: [petsc-users] Multiblock examples
In-Reply-To:
References: <9991EAE6-3E0D-43AA-B3CF-7E73AF87CC11@gmail.com>
Message-ID: <9B68F876-2365-4D65-9BF5-D6B8CA4B950C@gmail.com>

> On 17-Dec-2017, at 6:44 AM, Matthew Knepley wrote:
>
> I want to convert an Euler solver which currently uses DMDA to a multiblock code.
>
> What precisely do you mean by this? There are lots of things people do.
>
>   Thanks,

I have a complicated geometry so I have to use multiblock structured grid.

Thanks
praveen
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From Robert.Kloefkorn at iris.no  Sat Dec 16 07:03:07 2017
From: Robert.Kloefkorn at iris.no (Robert Kloefkorn)
Date: Sat, 16 Dec 2017 14:03:07 +0100
Subject: [petsc-users] PDE Software Frameworks 2018 -- May 28-30, Hotel Zander K, Bergen, Norway.
In-Reply-To: <14f2b4df-3243-8057-3231-15848987bc04@iris.no>
References: <14f2b4df-3243-8057-3231-15848987bc04@iris.no>
Message-ID: <34e29f22-5fa5-8492-0eb3-07a0b228c2e9@iris.no>

Dear friends and colleagues,

It is a great pleasure to announce the PDE Software Frameworks (PDESoft) 2018 Conference, which will be held at the hotel Zander K (https://www.zanderk.no/en/), Bergen, Norway, May 28 - 30, 2018.
The scientific committee of the conference consists of the following people: - Donna Calhoun (Boise State University) - Lois Curfman McInnes (Argonne National Laboratory) - Guido Kanschat (Heidelberg University) - Eirik Keilegavlen (University of Bergen) - Robert Kl?fkorn (International Research Institute of Stavanger) - Lawrence Mitchell (Imperial College London) - Christophe Prud'homme (University of Strasbourg) - Garth Wells (University of Cambridge) - Barbara Wohlmuth (Technical University of Munich) We welcome abstracts on topics related to software for - meshing and adaptive mesh refinement, - solvers for large systems of equations, - numerical PDE solvers, - data visualization systems, - user interfaces to scientific software, and - reproducible science. The accepted abstracts will be scheduled for either oral or poster presentations. This workshop does not publish full papers, so submission of a full paper is not required. The deadline for abstract submission is Feb. 1, 2018. For more information, please see the PDESoft18 web page: http://www.iris.no/pdesoft18/. We are looking forward to meeting you in Bergen. With best regards, Robert Kl?fkorn, on behalf of the organizing committee -- Dr. Robert Kloefkorn Senior Research Scientist | http://www.iris.no International Research Institute of Stavanger (Bergen office) Thormoehlensgt. 55 | 5006 Bergen Phone: +47 482 93 024 From knepley at gmail.com Sun Dec 17 09:29:09 2017 From: knepley at gmail.com (Matthew Knepley) Date: Sun, 17 Dec 2017 10:29:09 -0500 Subject: [petsc-users] Multiblock examples In-Reply-To: <9B68F876-2365-4D65-9BF5-D6B8CA4B950C@gmail.com> References: <9991EAE6-3E0D-43AA-B3CF-7E73AF87CC11@gmail.com> <9B68F876-2365-4D65-9BF5-D6B8CA4B950C@gmail.com> Message-ID: On Sat, Dec 16, 2017 at 8:56 PM, Praveen C wrote: > > > On 17-Dec-2017, at 6:44 AM, Matthew Knepley wrote: > > I want to convert an Euler solver which currently uses DMDA to a >> multiblock code. > > > What precisely do you mean by this? There are lots of things people do. > > Thanks, > > > I have a complicated geometry so I have to use multiblock structured grid. > But this is not an explanation. Is it non-overlapping or overlapping? If non-overlapping, how does it match up at edges? Does it use mapped geometry? Matt > Thanks > praveen > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From cpraveen at gmail.com Sun Dec 17 09:38:29 2017 From: cpraveen at gmail.com (Praveen C) Date: Sun, 17 Dec 2017 21:08:29 +0530 Subject: [petsc-users] Multiblock examples In-Reply-To: References: <9991EAE6-3E0D-43AA-B3CF-7E73AF87CC11@gmail.com> <9B68F876-2365-4D65-9BF5-D6B8CA4B950C@gmail.com> Message-ID: <6588D718-4AB4-4E05-95C1-F3BA6F6BAEED@gmail.com> > On 17-Dec-2017, at 8:59 PM, Matthew Knepley wrote: > > But this is not an explanation. Is it non-overlapping or overlapping? If non-overlapping, how > does it match up at edges? Does it use mapped geometry? Sorry, I did not realize there are so many variations. I want to use non-overlapping grids. The grids will match exactly at the block boundaries. > does it match up at edges? Does it use mapped geometry? No, I will not be able to map the whole domain to a reference domain. So I will not use a mapping. If there are any examples of such use case, that would be great help. 
Ideally, I would like the specify block information and their connectivity from an input file. Thanks praveen -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Sun Dec 17 09:47:37 2017 From: knepley at gmail.com (Matthew Knepley) Date: Sun, 17 Dec 2017 10:47:37 -0500 Subject: [petsc-users] Multiblock examples In-Reply-To: <6588D718-4AB4-4E05-95C1-F3BA6F6BAEED@gmail.com> References: <9991EAE6-3E0D-43AA-B3CF-7E73AF87CC11@gmail.com> <9B68F876-2365-4D65-9BF5-D6B8CA4B950C@gmail.com> <6588D718-4AB4-4E05-95C1-F3BA6F6BAEED@gmail.com> Message-ID: On Sun, Dec 17, 2017 at 10:38 AM, Praveen C wrote: > > > On 17-Dec-2017, at 8:59 PM, Matthew Knepley wrote: > > But this is not an explanation. Is it non-overlapping or overlapping? If > non-overlapping, how > does it match up at edges? Does it use mapped geometry? > > > Sorry, I did not realize there are so many variations. > > I want to use non-overlapping grids. The grids will match exactly at the > block boundaries. > > does it match up at edges? Does it use mapped geometry? > > > No, I will not be able to map the whole domain to a reference domain. So I > will not use a mapping. > > If there are any examples of such use case, that would be great help. > Ideally, I would like the specify block information and their connectivity > from an input file. > There are no examples of this. DMDA cannot do this. You may be able to do what you want with http://www.p4est.org/ which is supported by the DMForest in PETSc. You would have to use PetscSection to describe your variable layout and to access values, so it would not be as easy as DMDA. This is a very new part of PETSc, so there is not much documentation and no examples. Thanks, Matt > Thanks > praveen > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From repepo at gmail.com Sun Dec 17 14:29:17 2017 From: repepo at gmail.com (Santiago Andres Triana) Date: Sun, 17 Dec 2017 21:29:17 +0100 Subject: [petsc-users] configure fails with batch+scalapack Message-ID: Dear petsc-users, I'm trying to install petsc in a cluster that uses a job manager. This is the configure command I use: ./configure --known-mpi-shared-libraries=1 --with-scalar-type=complex --with-mumps=1 --download-mumps --download-parmetis --with-blaslapack-dir=/sw/sdev/intel/psxe2015u3/composer_xe_2015.3.187/mkl --download-metis --with-scalapack=1 --download-scalapack --with-batch This fails when including the option --with-batch together with --download-scalapack: =============================================================================== Configuring PETSc to compile on your system =============================================================================== TESTING: check from config.libraries(config/BuildSystem/config/libraries.py:158) ******************************************************************************* UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log for details): ------------------------------------------------------------------------------- Unable to find scalapack in default locations! 
Perhaps you can specify with --with-scalapack-dir= If you do not want scalapack, then give --with-scalapack=0 You might also consider using --download-scalapack instead ******************************************************************************* However, if I omit the --with-batch option, the configure script manages to succeed (it downloads and compiles scalapack; the install fails later at the make debug because of the job manager). Any help or suggestion is highly appreciated. Thanks in advance! Andres -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Sun Dec 17 16:06:22 2017 From: bsmith at mcs.anl.gov (Smith, Barry F.) Date: Sun, 17 Dec 2017 22:06:22 +0000 Subject: [petsc-users] configure fails with batch+scalapack In-Reply-To: References: Message-ID: <6A2D6663-E21A-4B10-9E63-AC48ABA90CA2@mcs.anl.gov> It helps if we have configure.log But if the non-batch version installed the Scalapack then why not just run a second time with the batch option; it should automatically use the scalapack that was already successfully installed. Send configure.log if it fails. Barry > On Dec 17, 2017, at 2:29 PM, Santiago Andres Triana wrote: > > Dear petsc-users, > > I'm trying to install petsc in a cluster that uses a job manager. This is the configure command I use: > > ./configure --known-mpi-shared-libraries=1 --with-scalar-type=complex --with-mumps=1 --download-mumps --download-parmetis --with-blaslapack-dir=/sw/sdev/intel/psxe2015u3/composer_xe_2015.3.187/mkl --download-metis --with-scalapack=1 --download-scalapack --with-batch > > This fails when including the option --with-batch together with --download-scalapack: > > =============================================================================== > Configuring PETSc to compile on your system > =============================================================================== > TESTING: check from config.libraries(config/BuildSystem/config/libraries.py:158) ******************************************************************************* > UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log for details): > ------------------------------------------------------------------------------- > Unable to find scalapack in default locations! > Perhaps you can specify with --with-scalapack-dir= > If you do not want scalapack, then give --with-scalapack=0 > You might also consider using --download-scalapack instead > ******************************************************************************* > > > However, if I omit the --with-batch option, the configure script manages to succeed (it downloads and compiles scalapack; the install fails later at the make debug because of the job manager). Any help or suggestion is highly appreciated. Thanks in advance! > > Andres From balay at mcs.anl.gov Sun Dec 17 16:26:49 2017 From: balay at mcs.anl.gov (Satish Balay) Date: Sun, 17 Dec 2017 16:26:49 -0600 Subject: [petsc-users] configure fails with batch+scalapack In-Reply-To: <6A2D6663-E21A-4B10-9E63-AC48ABA90CA2@mcs.anl.gov> References: <6A2D6663-E21A-4B10-9E63-AC48ABA90CA2@mcs.anl.gov> Message-ID: Note: you don't need --with-batch for PETSc - just because the cluster uses the job manager. PETSc configure attempts to compile/run trivial MPI snippets [without mpi_int] - that usually works on the frontend for most MPI impls installed on clusters. Only if this fails [and configure gives a failure] - you would need the --with-batch=1 option Satish On Sun, 17 Dec 2017, Smith, Barry F. 
wrote: > > > It helps if we have configure.log > > But if the non-batch version installed the Scalapack then why not just run a second time with the batch option; it should automatically use the scalapack that was already successfully installed. Send configure.log if it fails. > > Barry > > > On Dec 17, 2017, at 2:29 PM, Santiago Andres Triana wrote: > > > > Dear petsc-users, > > > > I'm trying to install petsc in a cluster that uses a job manager. This is the configure command I use: > > > > ./configure --known-mpi-shared-libraries=1 --with-scalar-type=complex --with-mumps=1 --download-mumps --download-parmetis --with-blaslapack-dir=/sw/sdev/intel/psxe2015u3/composer_xe_2015.3.187/mkl --download-metis --with-scalapack=1 --download-scalapack --with-batch > > > > This fails when including the option --with-batch together with --download-scalapack: > > > > =============================================================================== > > Configuring PETSc to compile on your system > > =============================================================================== > > TESTING: check from config.libraries(config/BuildSystem/config/libraries.py:158) ******************************************************************************* > > UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log for details): > > ------------------------------------------------------------------------------- > > Unable to find scalapack in default locations! > > Perhaps you can specify with --with-scalapack-dir= > > If you do not want scalapack, then give --with-scalapack=0 > > You might also consider using --download-scalapack instead > > ******************************************************************************* > > > > > > However, if I omit the --with-batch option, the configure script manages to succeed (it downloads and compiles scalapack; the install fails later at the make debug because of the job manager). Any help or suggestion is highly appreciated. Thanks in advance! > > > > Andres > > From knepley at gmail.com Sun Dec 17 16:32:55 2017 From: knepley at gmail.com (Matthew Knepley) Date: Sun, 17 Dec 2017 17:32:55 -0500 Subject: [petsc-users] configure fails with batch+scalapack In-Reply-To: References: Message-ID: On Sun, Dec 17, 2017 at 3:29 PM, Santiago Andres Triana wrote: > Dear petsc-users, > > I'm trying to install petsc in a cluster that uses a job manager. This is > the configure command I use: > > ./configure --known-mpi-shared-libraries=1 --with-scalar-type=complex > --with-mumps=1 --download-mumps --download-parmetis > --with-blaslapack-dir=/sw/sdev/intel/psxe2015u3/composer_xe_2015.3.187/mkl > --download-metis --with-scalapack=1 --download-scalapack --with-batch > > This fails when including the option --with-batch together with > --download-scalapack: > We need configure.log > ============================================================ > =================== > Configuring PETSc to compile on your system > > ============================================================ > =================== > TESTING: check from config.libraries(config/BuildSystem/config/libraries.py:158) > ***************************** > ************************************************** > UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log for > details): > ------------------------------------------------------------ > ------------------- > Unable to find scalapack in default locations! 
> Perhaps you can specify with --with-scalapack-dir= > If you do not want scalapack, then give --with-scalapack=0 > You might also consider using --download-scalapack instead > ************************************************************ > ******************* > > > However, if I omit the --with-batch option, the configure script manages > to succeed (it downloads and compiles scalapack; the install fails later at > the make debug because of the job manager). > Can you send this failure as well? Thanks, Matt > Any help or suggestion is highly appreciated. Thanks in advance! > > Andres > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From repepo at gmail.com Sun Dec 17 16:55:38 2017 From: repepo at gmail.com (Santiago Andres Triana) Date: Sun, 17 Dec 2017 23:55:38 +0100 Subject: [petsc-users] configure fails with batch+scalapack In-Reply-To: References: Message-ID: Thanks for your quick responses! Attached is the configure.log obtained without using the --with-batch option. Configures without errors but fails at the 'make test' stage. A snippet of the output with the error (which I attributed to the job manager) is: > Local host: hpca-login > Registerable memory: 32768 MiB > Total memory: 65427 MiB > > Your MPI job will continue, but may be behave poorly and/or hang. > -------------------------------------------------------------------------- 3c25 < 0 KSP Residual norm 0.239155 --- > 0 KSP Residual norm 0.235858 6c28 < 0 KSP Residual norm 6.81968e-05 --- > 0 KSP Residual norm 2.30906e-05 9a32,33 > [hpca-login:38557] 1 more process has sent help message help-mpi-btl-openib.txt / reg mem limit low > [hpca-login:38557] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages /home/trianas/petsc-3.8.3/src/snes/examples/tutorials Possible problem with ex19_fieldsplit_fieldsplit_mumps, diffs above ========================================= Possible error running Fortran example src/snes/examples/tutorials/ex5f with 1 MPI process See http://www.mcs.anl.gov/petsc/documentation/faq.html -------------------------------------------------------------------------- WARNING: It appears that your OpenFabrics subsystem is configured to only allow registering part of your physical memory. This can cause MPI jobs to run with erratic performance, hang, and/or crash. This may be caused by your OpenFabrics vendor limiting the amount of physical memory that can be registered. You should investigate the relevant Linux kernel module parameters that control how much physical memory can be registered, and increase them to allow registering all physical memory on your machine. See this Open MPI FAQ item for more information on these Linux kernel module parameters: http://www.open-mpi.org/faq/?category=openfabrics#ib-locked-pages Local host: hpca-login Registerable memory: 32768 MiB Total memory: 65427 MiB Your MPI job will continue, but may be behave poorly and/or hang. 
-------------------------------------------------------------------------- Number of SNES iterations = 4 Completed test examples ========================================= Now to evaluate the computer systems you plan use - do: make PETSC_DIR=/home/trianas/petsc-3.8.3 PETSC_ARCH=arch-linux2-c-debug streams On Sun, Dec 17, 2017 at 11:32 PM, Matthew Knepley wrote: > On Sun, Dec 17, 2017 at 3:29 PM, Santiago Andres Triana > wrote: > >> Dear petsc-users, >> >> I'm trying to install petsc in a cluster that uses a job manager. This >> is the configure command I use: >> >> ./configure --known-mpi-shared-libraries=1 --with-scalar-type=complex >> --with-mumps=1 --download-mumps --download-parmetis >> --with-blaslapack-dir=/sw/sdev/intel/psxe2015u3/composer_xe_2015.3.187/mkl >> --download-metis --with-scalapack=1 --download-scalapack --with-batch >> >> This fails when including the option --with-batch together with >> --download-scalapack: >> > > We need configure.log > > >> ============================================================ >> =================== >> Configuring PETSc to compile on your system >> >> ============================================================ >> =================== >> TESTING: check from config.libraries(config/BuildS >> ystem/config/libraries.py:158) >> *********************************************************** >> ******************** >> UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log for >> details): >> ------------------------------------------------------------ >> ------------------- >> Unable to find scalapack in default locations! >> Perhaps you can specify with --with-scalapack-dir= >> If you do not want scalapack, then give --with-scalapack=0 >> You might also consider using --download-scalapack instead >> ************************************************************ >> ******************* >> >> >> However, if I omit the --with-batch option, the configure script manages >> to succeed (it downloads and compiles scalapack; the install fails later at >> the make debug because of the job manager). >> > > Can you send this failure as well? > > Thanks, > > Matt > > >> Any help or suggestion is highly appreciated. Thanks in advance! >> >> Andres >> > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: configure.log Type: application/octet-stream Size: 6319458 bytes Desc: not available URL: From repepo at gmail.com Sun Dec 17 17:07:26 2017 From: repepo at gmail.com (Santiago Andres Triana) Date: Mon, 18 Dec 2017 00:07:26 +0100 Subject: [petsc-users] configure fails with batch+scalapack In-Reply-To: References: Message-ID: After the last attempt, I tried the --with-batch option, in the hope that it will pick up the scalapack that compiled and installed successfully earlier. But this fails to configure properly. Configure.log attached. Thanks! On Sun, Dec 17, 2017 at 11:55 PM, Santiago Andres Triana wrote: > Thanks for your quick responses! > > Attached is the configure.log obtained without using the --with-batch > option. Configures without errors but fails at the 'make test' stage. 
A > snippet of the output with the error (which I attributed to the job > manager) is: > > > > > Local host: hpca-login > > Registerable memory: 32768 MiB > > Total memory: 65427 MiB > > > > Your MPI job will continue, but may be behave poorly and/or hang. > > ------------------------------------------------------------ > -------------- > 3c25 > < 0 KSP Residual norm 0.239155 > --- > > 0 KSP Residual norm 0.235858 > 6c28 > < 0 KSP Residual norm 6.81968e-05 > --- > > 0 KSP Residual norm 2.30906e-05 > 9a32,33 > > [hpca-login:38557] 1 more process has sent help message > help-mpi-btl-openib.txt / reg mem limit low > > [hpca-login:38557] Set MCA parameter "orte_base_help_aggregate" to 0 to > see all help / error messages > /home/trianas/petsc-3.8.3/src/snes/examples/tutorials > Possible problem with ex19_fieldsplit_fieldsplit_mumps, diffs above > ========================================= > Possible error running Fortran example src/snes/examples/tutorials/ex5f > with 1 MPI process > See http://www.mcs.anl.gov/petsc/documentation/faq.html > -------------------------------------------------------------------------- > WARNING: It appears that your OpenFabrics subsystem is configured to only > allow registering part of your physical memory. This can cause MPI jobs to > run with erratic performance, hang, and/or crash. > > This may be caused by your OpenFabrics vendor limiting the amount of > physical memory that can be registered. You should investigate the > relevant Linux kernel module parameters that control how much physical > memory can be registered, and increase them to allow registering all > physical memory on your machine. > > See this Open MPI FAQ item for more information on these Linux kernel > module > parameters: > > http://www.open-mpi.org/faq/?category=openfabrics#ib-locked-pages > > Local host: hpca-login > Registerable memory: 32768 MiB > Total memory: 65427 MiB > > Your MPI job will continue, but may be behave poorly and/or hang. > -------------------------------------------------------------------------- > Number of SNES iterations = 4 > Completed test examples > ========================================= > Now to evaluate the computer systems you plan use - do: > make PETSC_DIR=/home/trianas/petsc-3.8.3 PETSC_ARCH=arch-linux2-c-debug > streams > > > > > On Sun, Dec 17, 2017 at 11:32 PM, Matthew Knepley > wrote: > >> On Sun, Dec 17, 2017 at 3:29 PM, Santiago Andres Triana > > wrote: >> >>> Dear petsc-users, >>> >>> I'm trying to install petsc in a cluster that uses a job manager. 
This >>> is the configure command I use: >>> >>> ./configure --known-mpi-shared-libraries=1 --with-scalar-type=complex >>> --with-mumps=1 --download-mumps --download-parmetis >>> --with-blaslapack-dir=/sw/sdev/intel/psxe2015u3/composer_xe_2015.3.187/mkl >>> --download-metis --with-scalapack=1 --download-scalapack --with-batch >>> >>> This fails when including the option --with-batch together with >>> --download-scalapack: >>> >> >> We need configure.log >> >> >>> ============================================================ >>> =================== >>> Configuring PETSc to compile on your system >>> >>> ============================================================ >>> =================== >>> TESTING: check from config.libraries(config/BuildS >>> ystem/config/libraries.py:158) >>> *********************************************************** >>> ******************** >>> UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log >>> for details): >>> ------------------------------------------------------------ >>> ------------------- >>> Unable to find scalapack in default locations! >>> Perhaps you can specify with --with-scalapack-dir= >>> If you do not want scalapack, then give --with-scalapack=0 >>> You might also consider using --download-scalapack instead >>> ************************************************************ >>> ******************* >>> >>> >>> However, if I omit the --with-batch option, the configure script manages >>> to succeed (it downloads and compiles scalapack; the install fails later at >>> the make debug because of the job manager). >>> >> >> Can you send this failure as well? >> >> Thanks, >> >> Matt >> >> >>> Any help or suggestion is highly appreciated. Thanks in advance! >>> >>> Andres >>> >> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> >> https://www.cse.buffalo.edu/~knepley/ >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: configure.log Type: application/octet-stream Size: 5309938 bytes Desc: not available URL: From bsmith at mcs.anl.gov Sun Dec 17 18:03:09 2017 From: bsmith at mcs.anl.gov (Smith, Barry F.) Date: Mon, 18 Dec 2017 00:03:09 +0000 Subject: [petsc-users] configure fails with batch+scalapack In-Reply-To: References: Message-ID: <6CC5E8A1-A38E-4E24-A4A7-865B570B4A7F@mcs.anl.gov> Configure runs fine. When it runs fine absolutely no reason to run it with --with-batch. Make test fails because it cannot launch parallel jobs directly using the mpiexec it is using. You need to determine how to submit jobs on this system and then you are ready to go. Barry > On Dec 17, 2017, at 4:55 PM, Santiago Andres Triana wrote: > > Thanks for your quick responses! > > Attached is the configure.log obtained without using the --with-batch option. Configures without errors but fails at the 'make test' stage. A snippet of the output with the error (which I attributed to the job manager) is: > > > > > Local host: hpca-login > > Registerable memory: 32768 MiB > > Total memory: 65427 MiB > > > > Your MPI job will continue, but may be behave poorly and/or hang. 
> > -------------------------------------------------------------------------- > 3c25 > < 0 KSP Residual norm 0.239155 > --- > > 0 KSP Residual norm 0.235858 > 6c28 > < 0 KSP Residual norm 6.81968e-05 > --- > > 0 KSP Residual norm 2.30906e-05 > 9a32,33 > > [hpca-login:38557] 1 more process has sent help message help-mpi-btl-openib.txt / reg mem limit low > > [hpca-login:38557] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages > /home/trianas/petsc-3.8.3/src/snes/examples/tutorials > Possible problem with ex19_fieldsplit_fieldsplit_mumps, diffs above > ========================================= > Possible error running Fortran example src/snes/examples/tutorials/ex5f with 1 MPI process > See http://www.mcs.anl.gov/petsc/documentation/faq.html > -------------------------------------------------------------------------- > WARNING: It appears that your OpenFabrics subsystem is configured to only > allow registering part of your physical memory. This can cause MPI jobs to > run with erratic performance, hang, and/or crash. > > This may be caused by your OpenFabrics vendor limiting the amount of > physical memory that can be registered. You should investigate the > relevant Linux kernel module parameters that control how much physical > memory can be registered, and increase them to allow registering all > physical memory on your machine. > > See this Open MPI FAQ item for more information on these Linux kernel module > parameters: > > http://www.open-mpi.org/faq/?category=openfabrics#ib-locked-pages > > Local host: hpca-login > Registerable memory: 32768 MiB > Total memory: 65427 MiB > > Your MPI job will continue, but may be behave poorly and/or hang. > -------------------------------------------------------------------------- > Number of SNES iterations = 4 > Completed test examples > ========================================= > Now to evaluate the computer systems you plan use - do: > make PETSC_DIR=/home/trianas/petsc-3.8.3 PETSC_ARCH=arch-linux2-c-debug streams > > > > > On Sun, Dec 17, 2017 at 11:32 PM, Matthew Knepley wrote: > On Sun, Dec 17, 2017 at 3:29 PM, Santiago Andres Triana wrote: > Dear petsc-users, > > I'm trying to install petsc in a cluster that uses a job manager. This is the configure command I use: > > ./configure --known-mpi-shared-libraries=1 --with-scalar-type=complex --with-mumps=1 --download-mumps --download-parmetis --with-blaslapack-dir=/sw/sdev/intel/psxe2015u3/composer_xe_2015.3.187/mkl --download-metis --with-scalapack=1 --download-scalapack --with-batch > > This fails when including the option --with-batch together with --download-scalapack: > > We need configure.log > > =============================================================================== > Configuring PETSc to compile on your system > =============================================================================== > TESTING: check from config.libraries(config/BuildSystem/config/libraries.py:158) ******************************************************************************* > UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log for details): > ------------------------------------------------------------------------------- > Unable to find scalapack in default locations! 
> Perhaps you can specify with --with-scalapack-dir= > If you do not want scalapack, then give --with-scalapack=0 > You might also consider using --download-scalapack instead > ******************************************************************************* > > > However, if I omit the --with-batch option, the configure script manages to succeed (it downloads and compiles scalapack; the install fails later at the make debug because of the job manager). > > Can you send this failure as well? > > Thanks, > > Matt > > Any help or suggestion is highly appreciated. Thanks in advance! > > Andres > > > > -- > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ > > From niko.karin at gmail.com Mon Dec 18 06:42:35 2017 From: niko.karin at gmail.com (Karin&NiKo) Date: Mon, 18 Dec 2017 13:42:35 +0100 Subject: [petsc-users] Golub-Kahan bidiagonalization Message-ID: Dear PETSc team, I would like to implement and possibly commit to PETSc the Golub-Kahan bidiagonalization algorithm (GK) describe in Arioli's paper : http://epubs.siam.org/doi/pdf/10.1137/120866543. In this work, Mario Arioli uses GK to solve saddle point problems, of the form A=[A00, A01; A10, A11]. There is an outer-loop which treats the constraints and an inner-loop, with its own KSP, to solve the linear systems with A00 as operator. We have evaluated this algorithm on different problems and have found that it exhibits very nice convergence of the outer-loop (independant of the problem size). In order to developp a source that could be commited to PETSc, I would like to have your opinion on how to implement it. Since the algorithm treats saddle point problems, it seems to me that it should be implemented in the fieldsplit framework. Should we add for instance a new -pc_fieldsplit_type, say gk? Have you other ideas? I look forward to hearing your opinion on the best design for implementing this algorithm in PETSc. Regards, Nicolas -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Mon Dec 18 07:03:21 2017 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 18 Dec 2017 08:03:21 -0500 Subject: [petsc-users] [petsc-dev] Golub-Kahan bidiagonalization In-Reply-To: References: Message-ID: On Mon, Dec 18, 2017 at 7:42 AM, Karin&NiKo wrote: > Dear PETSc team, > > I would like to implement and possibly commit to PETSc the Golub-Kahan > bidiagonalization algorithm (GK) describe in Arioli's paper : > http://epubs.siam.org/doi/pdf/10.1137/120866543. > In this work, Mario Arioli uses GK to solve saddle point problems, of the > form A=[A00, A01; A10, A11]. There is an outer-loop which treats the > constraints and an inner-loop, with its own KSP, to solve the linear > systems with A00 as operator. We have evaluated this algorithm on different > problems and have found that it exhibits very nice convergence of the > outer-loop (independant of the problem size). > > In order to developp a source that could be commited to PETSc, I would > like to have your opinion on how to implement it. Since the algorithm > treats saddle point problems, it seems to me that it should be implemented > in the fieldsplit framework. Should we add for instance a > new -pc_fieldsplit_type, say gk? Have you other ideas? > That was my first idea. 
From quickly looking at the paper, it looks like you need an auxiliary matrix N which does not come from the decomposition, so you will have to attach it to something, like we do for LSC, or demand that it come in as the (1,1) block of the preconditioning matrix which is a little hacky as well. Thanks, Matt > I look forward to hearing your opinion on the best design for implementing > this algorithm in PETSc. > > Regards, > Nicolas > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From niko.karin at gmail.com Mon Dec 18 10:32:11 2017 From: niko.karin at gmail.com (Karin&NiKo) Date: Mon, 18 Dec 2017 17:32:11 +0100 Subject: [petsc-users] [petsc-dev] Golub-Kahan bidiagonalization In-Reply-To: References: Message-ID: The N matrix is not mandatory ; in some case, it can be usefull to accelerate the convergence. So you confirm we should look at a fieldsplit implementation with a new -pc_fieldsplit_type gk? Thanks, Nicolas 2017-12-18 14:03 GMT+01:00 Matthew Knepley : > On Mon, Dec 18, 2017 at 7:42 AM, Karin&NiKo wrote: > >> Dear PETSc team, >> >> I would like to implement and possibly commit to PETSc the Golub-Kahan >> bidiagonalization algorithm (GK) describe in Arioli's paper : >> http://epubs.siam.org/doi/pdf/10.1137/120866543. >> In this work, Mario Arioli uses GK to solve saddle point problems, of the >> form A=[A00, A01; A10, A11]. There is an outer-loop which treats the >> constraints and an inner-loop, with its own KSP, to solve the linear >> systems with A00 as operator. We have evaluated this algorithm on different >> problems and have found that it exhibits very nice convergence of the >> outer-loop (independant of the problem size). >> >> In order to developp a source that could be commited to PETSc, I would >> like to have your opinion on how to implement it. Since the algorithm >> treats saddle point problems, it seems to me that it should be implemented >> in the fieldsplit framework. Should we add for instance a >> new -pc_fieldsplit_type, say gk? Have you other ideas? >> > > That was my first idea. From quickly looking at the paper, it looks like > you need an auxiliary matrix N which > does not come from the decomposition, so you will have to attach it to > something, like we do for LSC, or > demand that it come in as the (1,1) block of the preconditioning matrix > which is a little hacky as well. > > Thanks, > > Matt > > >> I look forward to hearing your opinion on the best design for >> implementing this algorithm in PETSc. >> >> Regards, >> Nicolas >> > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Mon Dec 18 10:36:15 2017 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 18 Dec 2017 11:36:15 -0500 Subject: [petsc-users] [petsc-dev] Golub-Kahan bidiagonalization In-Reply-To: References: Message-ID: On Mon, Dec 18, 2017 at 11:32 AM, Karin&NiKo wrote: > The N matrix is not mandatory ; in some case, it can be usefull to > accelerate the convergence. 
> So you confirm we should look at a fieldsplit implementation with a new > -pc_fieldsplit_type gk? > Yes, right now that is how we are structuring it. You could imagine that we break it down further, so that all the block solvers are separate KSP and just require a PCFIELDSPLIT, but that seems like overkill to me. Thanks, Matt > Thanks, > Nicolas > > 2017-12-18 14:03 GMT+01:00 Matthew Knepley : > >> On Mon, Dec 18, 2017 at 7:42 AM, Karin&NiKo wrote: >> >>> Dear PETSc team, >>> >>> I would like to implement and possibly commit to PETSc the Golub-Kahan >>> bidiagonalization algorithm (GK) describe in Arioli's paper : >>> http://epubs.siam.org/doi/pdf/10.1137/120866543. >>> In this work, Mario Arioli uses GK to solve saddle point problems, of >>> the form A=[A00, A01; A10, A11]. There is an outer-loop which treats the >>> constraints and an inner-loop, with its own KSP, to solve the linear >>> systems with A00 as operator. We have evaluated this algorithm on different >>> problems and have found that it exhibits very nice convergence of the >>> outer-loop (independant of the problem size). >>> >>> In order to developp a source that could be commited to PETSc, I would >>> like to have your opinion on how to implement it. Since the algorithm >>> treats saddle point problems, it seems to me that it should be implemented >>> in the fieldsplit framework. Should we add for instance a >>> new -pc_fieldsplit_type, say gk? Have you other ideas? >>> >> >> That was my first idea. From quickly looking at the paper, it looks like >> you need an auxiliary matrix N which >> does not come from the decomposition, so you will have to attach it to >> something, like we do for LSC, or >> demand that it come in as the (1,1) block of the preconditioning matrix >> which is a little hacky as well. >> >> Thanks, >> >> Matt >> >> >>> I look forward to hearing your opinion on the best design for >>> implementing this algorithm in PETSc. >>> >>> Regards, >>> Nicolas >>> >> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> >> https://www.cse.buffalo.edu/~knepley/ >> > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From repepo at gmail.com Tue Dec 19 06:05:08 2017 From: repepo at gmail.com (Santiago Andres Triana) Date: Tue, 19 Dec 2017 13:05:08 +0100 Subject: [petsc-users] configure fails with batch+scalapack In-Reply-To: <6CC5E8A1-A38E-4E24-A4A7-865B570B4A7F@mcs.anl.gov> References: <6CC5E8A1-A38E-4E24-A4A7-865B570B4A7F@mcs.anl.gov> Message-ID: Epilogue: I was able to complete the configuration and compilation using an interactive session in one compute node. Certainly, there was no need for the --with-batch option. However, at run time, the SGI MPT's mpiexec_mpt (required by the job scheduler in this cluster) throws a cryptic error: Cannot find executable: -f It seems not petsc specific, though, as other mpi programs also fail. In any case I would like to thank you all for the prompt help! Santiago On Mon, Dec 18, 2017 at 1:03 AM, Smith, Barry F. wrote: > > Configure runs fine. When it runs fine absolutely no reason to run it > with --with-batch. 
> > Make test fails because it cannot launch parallel jobs directly using > the mpiexec it is using. > > You need to determine how to submit jobs on this system and then you > are ready to go. > > Barry > > > > On Dec 17, 2017, at 4:55 PM, Santiago Andres Triana > wrote: > > > > Thanks for your quick responses! > > > > Attached is the configure.log obtained without using the --with-batch > option. Configures without errors but fails at the 'make test' stage. A > snippet of the output with the error (which I attributed to the job > manager) is: > > > > > > > > > Local host: hpca-login > > > Registerable memory: 32768 MiB > > > Total memory: 65427 MiB > > > > > > Your MPI job will continue, but may be behave poorly and/or hang. > > > ------------------------------------------------------------ > -------------- > > 3c25 > > < 0 KSP Residual norm 0.239155 > > --- > > > 0 KSP Residual norm 0.235858 > > 6c28 > > < 0 KSP Residual norm 6.81968e-05 > > --- > > > 0 KSP Residual norm 2.30906e-05 > > 9a32,33 > > > [hpca-login:38557] 1 more process has sent help message > help-mpi-btl-openib.txt / reg mem limit low > > > [hpca-login:38557] Set MCA parameter "orte_base_help_aggregate" to 0 > to see all help / error messages > > /home/trianas/petsc-3.8.3/src/snes/examples/tutorials > > Possible problem with ex19_fieldsplit_fieldsplit_mumps, diffs above > > ========================================= > > Possible error running Fortran example src/snes/examples/tutorials/ex5f > with 1 MPI process > > See http://www.mcs.anl.gov/petsc/documentation/faq.html > > ------------------------------------------------------------ > -------------- > > WARNING: It appears that your OpenFabrics subsystem is configured to only > > allow registering part of your physical memory. This can cause MPI jobs > to > > run with erratic performance, hang, and/or crash. > > > > This may be caused by your OpenFabrics vendor limiting the amount of > > physical memory that can be registered. You should investigate the > > relevant Linux kernel module parameters that control how much physical > > memory can be registered, and increase them to allow registering all > > physical memory on your machine. > > > > See this Open MPI FAQ item for more information on these Linux kernel > module > > parameters: > > > > http://www.open-mpi.org/faq/?category=openfabrics#ib-locked-pages > > > > Local host: hpca-login > > Registerable memory: 32768 MiB > > Total memory: 65427 MiB > > > > Your MPI job will continue, but may be behave poorly and/or hang. > > ------------------------------------------------------------ > -------------- > > Number of SNES iterations = 4 > > Completed test examples > > ========================================= > > Now to evaluate the computer systems you plan use - do: > > make PETSC_DIR=/home/trianas/petsc-3.8.3 PETSC_ARCH=arch-linux2-c-debug > streams > > > > > > > > > > On Sun, Dec 17, 2017 at 11:32 PM, Matthew Knepley > wrote: > > On Sun, Dec 17, 2017 at 3:29 PM, Santiago Andres Triana < > repepo at gmail.com> wrote: > > Dear petsc-users, > > > > I'm trying to install petsc in a cluster that uses a job manager. 
This > is the configure command I use: > > > > ./configure --known-mpi-shared-libraries=1 --with-scalar-type=complex > --with-mumps=1 --download-mumps --download-parmetis > --with-blaslapack-dir=/sw/sdev/intel/psxe2015u3/composer_xe_2015.3.187/mkl > --download-metis --with-scalapack=1 --download-scalapack --with-batch > > > > This fails when including the option --with-batch together with > --download-scalapack: > > > > We need configure.log > > > > ============================================================ > =================== > > Configuring PETSc to compile on your system > > ============================================================ > =================== > > TESTING: check from config.libraries(config/ > BuildSystem/config/libraries.py:158) > ************************************************************ > ******************* > > UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log > for details): > > ------------------------------------------------------------ > ------------------- > > Unable to find scalapack in default locations! > > Perhaps you can specify with --with-scalapack-dir= > > If you do not want scalapack, then give --with-scalapack=0 > > You might also consider using --download-scalapack instead > > ************************************************************ > ******************* > > > > > > However, if I omit the --with-batch option, the configure script manages > to succeed (it downloads and compiles scalapack; the install fails later at > the make debug because of the job manager). > > > > Can you send this failure as well? > > > > Thanks, > > > > Matt > > > > Any help or suggestion is highly appreciated. Thanks in advance! > > > > Andres > > > > > > > > -- > > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > > -- Norbert Wiener > > > > https://www.cse.buffalo.edu/~knepley/ > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From yann.jobic at univ-amu.fr Tue Dec 19 10:40:21 2017 From: yann.jobic at univ-amu.fr (Yann JOBIC) Date: Tue, 19 Dec 2017 17:40:21 +0100 Subject: [petsc-users] local to global mapping for DMPlex Message-ID: Hello, We want to extract the cell connectivity from a DMPlex. We have no problem for a sequential run. However for parallel ones, we need to get the node numbering in the global ordering, as when we distribute the mesh, we only have local nodes, and thus local numbering. It seems that we should use DMGetLocalToGlobalMapping (we are using Fortran with Petsc 3.8p3). However, we get the running error : [0]PETSC ERROR: No support for this operation for this object type [0]PETSC ERROR: DM can not create LocalToGlobalMapping Is it the right way to do it ? Many thanks, Regards, Yann From knepley at gmail.com Tue Dec 19 10:50:46 2017 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 19 Dec 2017 11:50:46 -0500 Subject: [petsc-users] local to global mapping for DMPlex In-Reply-To: References: Message-ID: On Tue, Dec 19, 2017 at 11:40 AM, Yann JOBIC wrote: > Hello, > > We want to extract the cell connectivity from a DMPlex. We have no problem > for a sequential run. > Do you want it on disk? If so, you can just DMView() for HDF5. That outputs the connectivity in a global numbering. I can show you the calls I use inside if you want. 
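In outline it is just DMView() with an HDF5 viewer. Here is a minimal sketch (the helper name and file name are only placeholders, and it assumes a PETSc build configured with HDF5):

  #include <petscdmplex.h>
  #include <petscviewerhdf5.h>

  /* Write the Plex (topology in global numbering, plus coordinates) to an HDF5 file */
  PetscErrorCode WritePlexHDF5(DM dm)
  {
    PetscViewer    viewer;
    PetscErrorCode ierr;

    PetscFunctionBeginUser;
    ierr = PetscViewerHDF5Open(PetscObjectComm((PetscObject) dm), "mesh.h5", FILE_MODE_WRITE, &viewer);CHKERRQ(ierr);
    ierr = DMView(dm, viewer);CHKERRQ(ierr);
    ierr = PetscViewerDestroy(&viewer);CHKERRQ(ierr);
    PetscFunctionReturn(0);
  }
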
I usually put DMViewFromOptions(dm, NULL, "-dm_view") Then -dm_view hdf5:mesh.h5 Thanks, Matt > However for parallel ones, we need to get the node numbering in the global > ordering, as when we distribute the mesh, we only have local nodes, and > thus local numbering. > > It seems that we should use DMGetLocalToGlobalMapping (we are using > Fortran with Petsc 3.8p3). However, we get the running error : > > [0]PETSC ERROR: No support for this operation for this object type > [0]PETSC ERROR: DM can not create LocalToGlobalMapping > > Is it the right way to do it ? > > Many thanks, > > Regards, > > Yann > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From alexlindsay239 at gmail.com Wed Dec 20 08:59:06 2017 From: alexlindsay239 at gmail.com (Alexander Lindsay) Date: Wed, 20 Dec 2017 07:59:06 -0700 Subject: [petsc-users] Mechanical contact -> Changing form of residual function Message-ID: This question comes from modeling mechanical contact with MOOSE; from talking with Derek Gaston, this has been a topic of conversation before... With contact, our residual function is not continuous. Depending on the values of our displacements, we may or may not have mechanical contact resulting in extra residual terms from contact forces. We currently have the following non-linear algorithm, which is just wrong: Sometime past the first initial non-linear residual evaluation... Update contact state to state n Evaluate Jacobian (contact state n) KSPSolve with Jacobian in contact state n, residual in contact state (n - 1) Apply line search; evaluate residual in contact state n Back to top of non-linear loop Update contact state to state (n + 1) Evaluate Jacobian (contact state n + 1) KSPSolve with Jacobian in contact state (n+1), residual in contact state (n) Apply line search; evaluate residual in contact state (n+1) ... So clearly this is stupid because we're conducting a linear solve with J(-du) = R, with R = R_n(u) and J = dR_{n+1}(u) / du, e.g. the Jacobian is the derivative of our function in an updated contact state whereas our actual function is in an older state. What's the right thing to do here? We could update the contact state before evaluating the residual, e.g. make the contact update part of our residual evaluation routine called by SNESComputeFunction. However, we often run using MFFD, which means there's a chance that when evaluating our Jacobian action within the linear solve, we might cross the discontinuity when perturbing the residual. In my mind, a good option would be to update the contact state only when evaluating non-linear residuals, e.g. when evaluating the initial non-linear residual or when applying the line search. Is there a hook into Petsc for doing such a thing? Alternatively, I see that there's an update function called at the beginning of Petsc's non-linear iteration that can be set with SNESSetUpdate. There we could update our contact state and then re-evaluate F using the updated contact state. This would have the effect that our last evaluated non-linear residual which was tested for convergence would be different from the new residual that we are inserting into the RHS of our new KSPSolve, e.g. the functions were evaluated at different contact states using the same non-linear solution vector. 
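For concreteness, here is a minimal sketch of what I mean by the SNESSetUpdate route; updateContactState() is only a placeholder for our geometric search / contact bookkeeping, not real code:

  #include <petscsnes.h>

  typedef struct {
    void *contact; /* whatever state the contact search needs */
  } AppCtx;

  /* Called by SNES at the beginning of every nonlinear iteration, before the
     residual/Jacobian for that iteration are formed */
  static PetscErrorCode UpdateContact(SNES snes, PetscInt step)
  {
    AppCtx        *ctx;
    Vec            u;
    PetscErrorCode ierr;

    PetscFunctionBeginUser;
    ierr = SNESGetApplicationContext(snes, &ctx);CHKERRQ(ierr);
    ierr = SNESGetSolution(snes, &u);CHKERRQ(ierr);
    /* updateContactState(ctx->contact, u);   <-- placeholder for our contact search */
    PetscFunctionReturn(0);
  }

  /* once, after SNESCreate():                                      */
  /* ierr = SNESSetApplicationContext(snes, &ctx);CHKERRQ(ierr);    */
  /* ierr = SNESSetUpdate(snes, UpdateContact);CHKERRQ(ierr);       */

One consequence (for better or worse) is that the contact state would then be frozen for every residual and Jacobian (or MFFD perturbation) evaluation within that iteration.
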
Anyways, this is a long message and if it's not initially clear, I apologize in advance. We're aware that Newton can struggle with discontinuous functions so we're already in a tough spot, but we'd like to be as algorithmically correct as possible within the Newton context. Alex -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Wed Dec 20 09:34:27 2017 From: bsmith at mcs.anl.gov (Smith, Barry F.) Date: Wed, 20 Dec 2017 15:34:27 +0000 Subject: [petsc-users] Mechanical contact -> Changing form of residual function In-Reply-To: References: Message-ID: Alex, This is the first step in understanding the situation and far from the last. I would start by trying to think about the process more abstractly without thinking about post-solves, pre-solves etc since those details and their mechanics often get in the way of coming up with a good solution. In other words don't even think about it in terms of PETSc. So I start by thinking about Outer loop - the contact problem Inner loop - try to solve the nonlinear problem with the given contacts Do Newton until either satisfactory reduction in residual or residual is stuck and one needs to change the contact. check if contacts should be changed (note this has two flavors (1) the Newton got stuck or (2) Newton did not get stuck but one can improve the residual by changing a contact I would never update the contact state inside Newton, inside a function evaluation or inside a line search, it should be done outside of all of those things. This is how one solves DVI problems. Let's iterate now by email until we understand each other, Barry > On Dec 20, 2017, at 8:59 AM, Alexander Lindsay wrote: > > This question comes from modeling mechanical contact with MOOSE; from talking with Derek Gaston, this has been a topic of conversation before... > > With contact, our residual function is not continuous. Depending on the values of our displacements, we may or may not have mechanical contact resulting in extra residual terms from contact forces. We currently have the following non-linear algorithm, which is just wrong: > > Sometime past the first initial non-linear residual evaluation... > > Update contact state to state n > Evaluate Jacobian (contact state n) > KSPSolve with Jacobian in contact state n, residual in contact state (n - 1) > Apply line search; evaluate residual in contact state n > > Back to top of non-linear loop > > Update contact state to state (n + 1) > Evaluate Jacobian (contact state n + 1) > KSPSolve with Jacobian in contact state (n+1), residual in contact state (n) > Apply line search; evaluate residual in contact state (n+1) > > ... > > So clearly this is stupid because we're conducting a linear solve with J(-du) = R, with R = R_n(u) and J = dR_{n+1}(u) / du, e.g. the Jacobian is the derivative of our function in an updated contact state whereas our actual function is in an older state. > > What's the right thing to do here? We could update the contact state before evaluating the residual, e.g. make the contact update part of our residual evaluation routine called by SNESComputeFunction. However, we often run using MFFD, which means there's a chance that when evaluating our Jacobian action within the linear solve, we might cross the discontinuity when perturbing the residual. > > In my mind, a good option would be to update the contact state only when evaluating non-linear residuals, e.g. when evaluating the initial non-linear residual or when applying the line search. 
Is there a hook into Petsc for doing such a thing? Alternatively, I see that there's an update function called at the beginning of Petsc's non-linear iteration that can be set with SNESSetUpdate. There we could update our contact state and then re-evaluate F using the updated contact state. This would have the effect that our last evaluated non-linear residual which was tested for convergence would be different from the new residual that we are inserting into the RHS of our new KSPSolve, e.g. the functions were evaluated at different contact states using the same non-linear solution vector. > > Anyways, this is a long message and if it's not initially clear, I apologize in advance. We're aware that Newton can struggle with discontinuous functions so we're already in a tough spot, but we'd like to be as algorithmically correct as possible within the Newton context. > > Alex > From yann.jobic at univ-amu.fr Wed Dec 20 10:08:42 2017 From: yann.jobic at univ-amu.fr (Yann JOBIC) Date: Wed, 20 Dec 2017 17:08:42 +0100 Subject: [petsc-users] local to global mapping for DMPlex In-Reply-To: References: Message-ID: On 12/19/2017 05:50 PM, Matthew Knepley wrote: > On Tue, Dec 19, 2017 at 11:40 AM, Yann JOBIC > wrote: > > Hello, > > We want to extract the cell connectivity from a DMPlex. We have no > problem for a sequential run. > > > Do you want it on disk? If so, you can just DMView() for HDF5. That > outputs the connectivity in a global numbering. > I can show you the calls I use inside if you want. I looked at the code, specifically DMPlexWriteTopology_Vertices_HDF5_Static cellIS should have what i want. However it seems that it is not the case. Do i look at the right spot in the code ? I also looked at DMPlexGetCellNumbering, which does exactly what i want for the global numbering of Cells, even if the ordering is different for different number of processors. When i use DMPlexGetVertexNumbering, i've got negative values, which is correctly handeled by DMPlexWriteTopology_Vertices_HDF5_Static, but i really don't understand this part. I succeed in having the coordinates, but maybe not in the right order. I reused DMPlexWriteCoordinates_HDF5_Static, which create a Vec of vertex coordinates, but i couldn't understand the order of vertex coordinate linked (or not) with DMPlexWriteTopology_Vertices_HDF5_Static. The number of local coordinates is also strange, and does not correspond to the cell's topology. Thanks for the help! Regards, Yann > I usually put > > ? DMViewFromOptions(dm, NULL, "-dm_view") > > Then > > ? -dm_view hdf5:mesh.h5 > > ? Thanks, > > ? ? Matt > > However for parallel ones, we need to get the node numbering in > the global ordering, as when we distribute the mesh, we only have > local nodes, and thus local numbering. > > It seems that we should use DMGetLocalToGlobalMapping (we are > using Fortran with Petsc 3.8p3). However, we get the running error : > > [0]PETSC ERROR: No support for this operation for this object type > [0]PETSC ERROR: DM can not create LocalToGlobalMapping > > Is it the right way to do it ? > > Many thanks, > > Regards, > > Yann > > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which > their experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From knepley at gmail.com Wed Dec 20 10:40:33 2017 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 20 Dec 2017 11:40:33 -0500 Subject: [petsc-users] local to global mapping for DMPlex In-Reply-To: References: Message-ID: On Wed, Dec 20, 2017 at 11:08 AM, Yann JOBIC wrote: > On 12/19/2017 05:50 PM, Matthew Knepley wrote: > > On Tue, Dec 19, 2017 at 11:40 AM, Yann JOBIC > wrote: > >> Hello, >> >> We want to extract the cell connectivity from a DMPlex. We have no >> problem for a sequential run. >> > > Do you want it on disk? If so, you can just DMView() for HDF5. That > outputs the connectivity in a global numbering. > I can show you the calls I use inside if you want. > > I looked at the code, specifically DMPlexWriteTopology_Vertices_H > DF5_Static > cellIS should have what i want. > However it seems that it is not the case. Do i look at the right spot in > the code ? > cellIS has the entire connectivity. You sound like you want cell-vertex only. You could get that by just DMPlexUninterpolate(dm, &udm); DMDestroy(&udm); > I also looked at DMPlexGetCellNumbering, which does exactly what i want > for the global numbering of Cells, even if the ordering is different for > different number of processors. > Yes. > When i use DMPlexGetVertexNumbering, i've got negative values, which is > correctly handeled by DMPlexWriteTopology_Vertices_HDF5_Static, but i > really don't understand this part. > There are no shared cells in your partition, so you get all positive numbers. You have shared vertices, so the negative numbers are for vertices you do not own. If the negative number is n, the corresponding global number is -(n+1). Notice that the transformation is 1-1 and its square is I. > I succeed in having the coordinates, but maybe not in the right order. I > reused DMPlexWriteCoordinates_HDF5_Static, which create a Vec of vertex > coordinates, but i couldn't understand the order of vertex coordinate > linked (or not) with DMPlexWriteTopology_Vertices_HDF5_Static. The number > of local coordinates is also strange, and does not correspond to the cell's > topology. > It matches the order of the global numbering. Thanks Matt > Thanks for the help! > > Regards, > > Yann > > > I usually put > > DMViewFromOptions(dm, NULL, "-dm_view") > > Then > > -dm_view hdf5:mesh.h5 > > Thanks, > > Matt > > >> However for parallel ones, we need to get the node numbering in the >> global ordering, as when we distribute the mesh, we only have local nodes, >> and thus local numbering. >> >> It seems that we should use DMGetLocalToGlobalMapping (we are using >> Fortran with Petsc 3.8p3). However, we get the running error : >> >> [0]PETSC ERROR: No support for this operation for this object type >> [0]PETSC ERROR: DM can not create LocalToGlobalMapping >> >> Is it the right way to do it ? >> >> Many thanks, >> >> Regards, >> >> Yann >> >> > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From yann.jobic at univ-amu.fr Wed Dec 20 15:30:19 2017 From: yann.jobic at univ-amu.fr (Yann JOBIC) Date: Wed, 20 Dec 2017 22:30:19 +0100 Subject: [petsc-users] local to global mapping for DMPlex In-Reply-To: References: Message-ID: <25559b15-aac4-3570-2d0d-58ee6da58611@univ-amu.fr> That's working just fine. I don't know if that could be useful for someone, just in case here is the methodology : I got the cell-vertex connectivity from the function DMPlexWriteTopology_Vertices_HDF5_Static of petsc/src/dm/impls/plex/plexhdf5.c. The interesting result is in cellIS. The vertex coordinates are from the function DMPlexWriteCoordinates_HDF5_Static of the same file. The global numering is just by adding the position of the vertex in the Vec + all the other ones from previous proc numbers. The cell global numbering is trivial : we use the function DMPlexGetCellNumbering. The global numbering is quite strange, but is consistant. Here is some examples, for 2x2 Cells (4 vertex per cell). I attach a text file for a better reading On 1 processor, we have : 6-----7-----8 | | | | 2 | 3 | | | | 3-----4-----5 | | | | 0 | 1 | | | | 0-----1-----2 On 2 processors we have : 0-----1-----2 | | | | 0 | 1 | | | | 6-----7-----8 | | | | 2 | 3 | | | | 3-----4-----5 Cells 0 and 1 are on proc 0; 2 and 3 are on proc 1. On 4 processors we have : 0-----1-----2 | | | | 0 | 1 | | | | 7-----8-----4 | | | | 3 | 2 | | | | 5-----6-----3 On cell per proc : cell 0 on proc 0, cell 1 on proc 1, ... The text file (edit with notepad++) contains all the ISView for understanding purpose. Matthew, many thanks for the help !! Regards, Yann Le 20/12/2017 ? 17:40, Matthew Knepley a ?crit : > On Wed, Dec 20, 2017 at 11:08 AM, Yann JOBIC > wrote: > > On 12/19/2017 05:50 PM, Matthew Knepley wrote: >> On Tue, Dec 19, 2017 at 11:40 AM, Yann JOBIC >> > wrote: >> >> Hello, >> >> We want to extract the cell connectivity from a DMPlex. We >> have no problem for a sequential run. >> >> >> Do you want it on disk? If so, you can just DMView() for HDF5. >> That outputs the connectivity in a global numbering. >> I can show you the calls I use inside if you want. > I looked at the code, specifically > DMPlexWriteTopology_Vertices_HDF5_Static > cellIS should have what i want. > However it seems that it is not the case. Do i look at the right > spot in the code ? > > > cellIS has the entire connectivity. You sound like you want > cell-vertex only. You could get that by just > > DMPlexUninterpolate(dm, &udm); > > DMDestroy(&udm); > > I also looked at DMPlexGetCellNumbering, which does exactly what i > want for the global numbering of Cells, even if the ordering is > different for different number of processors. > > > Yes. > > When i use DMPlexGetVertexNumbering, i've got negative values, > which is correctly handeled by > DMPlexWriteTopology_Vertices_HDF5_Static, but i really don't > understand this part. > > > There are no shared cells in your partition, so you get all positive > numbers. You have shared vertices, so the negative numbers are for > vertices > you do not own. If the negative number is n, the corresponding global > number is -(n+1). Notice that the transformation is 1-1 and its square > is I. > > I succeed in having the coordinates, but maybe not in the right > order. I reused DMPlexWriteCoordinates_HDF5_Static, which create a > Vec of vertex coordinates, but i couldn't understand the order of > vertex coordinate linked (or not) with > DMPlexWriteTopology_Vertices_HDF5_Static. 
The number of local > coordinates is also strange, and does not correspond to the cell's > topology. > > > It matches the order of the global numbering. > > Thanks > > Matt > > Thanks for the help! > > Regards, > > Yann > > >> I usually put >> >> DMViewFromOptions(dm, NULL, "-dm_view") >> >> Then >> >> -dm_view hdf5:mesh.h5 >> >> Thanks, >> >> Matt >> >> However for parallel ones, we need to get the node numbering >> in the global ordering, as when we distribute the mesh, we >> only have local nodes, and thus local numbering. >> >> It seems that we should use DMGetLocalToGlobalMapping (we are >> using Fortran with Petsc 3.8p3). However, we get the running >> error : >> >> [0]PETSC ERROR: No support for this operation for this object >> type >> [0]PETSC ERROR: DM can not create LocalToGlobalMapping >> >> Is it the right way to do it ? >> >> Many thanks, >> >> Regards, >> >> Yann >> >> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to >> which their experiments lead. >> -- Norbert Wiener >> >> https://www.cse.buffalo.edu/~knepley/ >> > > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which > their experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- 6-----7-----8 | | | | 2 | 3 | | | | 3-----4-----5 | | | | 0 | 1 | | | | 0-----1-----2 Coordinates : CellIS rank 0 vertice 0 : 0. 0. IS Object: 1 MPI processes rank 0 vertice 1 : 0.5 0. type: general rank 0 vertice 2 : 1. 0. Number of indices in set 16 rank 0 vertice 3 : 0. 0.5 0 0 rank 0 vertice 4 : 0.5 0.5 1 1 rank 0 vertice 5 : 1. 0.5 2 4 rank 0 vertice 6 : 0. 1. 3 3 rank 0 vertice 7 : 0.5 1. 4 1 rank 0 vertice 8 : 1. 1. 5 2 6 5 7 4 8 3 9 4 10 7 11 6 12 4 13 5 14 8 15 7 Cells Global numbering: IS Object: 1 MPI processes type: general Number of indices in set 4 0 0 1 1 2 2 3 3 0-----1-----2 | | | | 0 | 1 | | | | 6-----7-----8 | | | | 2 | 3 | | | | 3-----4-----5 Coordinates : CellIS rank 0 vertice 0 : 0. 1. IS Object: 2 MPI processes rank 0 vertice 1 : 0.5 1. type: general rank 0 vertice 2 : 1. 1. [0] Number of indices in set 8 rank 1 vertice 3 : 0. 0. [0] 0 6 rank 1 vertice 4 : 0.5 0. [0] 1 7 rank 1 vertice 5 : 1. 0. [0] 2 1 rank 1 vertice 6 : 0. 0.5 [0] 3 0 rank 1 vertice 7 : 0.5 0.5 [0] 4 7 rank 1 vertice 8 : 1. 0.5 [0] 5 8 [0] 6 2 [0] 7 1 [1] Number of indices in set 8 [1] 0 3 [1] 1 4 [1] 2 7 [1] 3 6 [1] 4 4 [1] 5 5 [1] 6 8 [1] 7 7 Cells Global numbering: IS Object: 2 MPI processes type: general [0] Number of indices in set 2 [0] 0 0 [0] 1 1 [1] Number of indices in set 2 [1] 0 2 [1] 1 3 0-----1-----2 | | | | 0 | 1 | | | | 7-----8-----4 | | | | 3 | 2 | | | | 5-----6-----3 Coordinates : CellIS rank 0 vertice 0 : 0. 1. IS Object: 4 MPI processes rank 1 vertice 1 : 0.5 1. type: general rank 1 vertice 2 : 1. 1. [0] Number of indices in set 4 rank 2 vertice 3 : 1. 0. [0] 0 7 rank 2 vertice 4 : 1. 0.5 [0] 1 8 rank 3 vertice 5 : 0. 0. [0] 2 1 rank 3 vertice 6 : 0.5 0. [0] 3 0 rank 3 vertice 7 : 0. 
0.5 [1] Number of indices in set 4 rank 3 vertice 8 : 0.5 0.5 [1] 0 8 [1] 1 4 [1] 2 2 [1] 3 1 [2] Number of indices in set 4 [2] 0 6 [2] 1 3 [2] 2 4 [2] 3 8 [3] Number of indices in set 4 [3] 0 5 [3] 1 6 [3] 2 8 [3] 3 7 Cells Global numbering: IS Object: 4 MPI processes type: general [0] Number of indices in set 1 [0] 0 0 [1] Number of indices in set 1 [1] 0 1 [2] Number of indices in set 1 [2] 0 2 [3] Number of indices in set 1 [3] 0 3 From repepo at gmail.com Wed Dec 20 16:46:17 2017 From: repepo at gmail.com (Santiago Andres Triana) Date: Wed, 20 Dec 2017 23:46:17 +0100 Subject: [petsc-users] configure cannot find a c preprocessor Message-ID: Dear petsc-users, I'm trying to install petsc in a cluster using SGI's MPT. The mpicc compiler is in the search path. The configure command is: ./configure --with-scalar-type=complex --with-mumps=1 --download-mumps --download-parmetis --download-metis --download-scalapack However, this leads to an error (configure.log attached): =============================================================================== Configuring PETSc to compile on your system =============================================================================== TESTING: checkCPreprocessor from config.setCompilers(config/BuildSystem/config/setCompilers.py:599) ******************************************************************************* UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log for details): ------------------------------------------------------------------------------- Cannot find a C preprocessor ******************************************************************************* The configure.log says something about cpp32, here's the excerpt: Possible ERROR while running preprocessor: exit code 256 stderr: gcc: error: cpp32: No such file or directory Any ideas of what is going wrong? any help or comments are highly appreciated. Thanks in advance! Andres -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: configure.log Type: application/octet-stream Size: 88969 bytes Desc: not available URL: From knepley at gmail.com Wed Dec 20 16:51:27 2017 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 20 Dec 2017 17:51:27 -0500 Subject: [petsc-users] local to global mapping for DMPlex In-Reply-To: <25559b15-aac4-3570-2d0d-58ee6da58611@univ-amu.fr> References: <25559b15-aac4-3570-2d0d-58ee6da58611@univ-amu.fr> Message-ID: I'm glad it worked. Maybe I should take what you have done an abstract it into something generally useful for output. Thanks, Matt On Wed, Dec 20, 2017 at 4:30 PM, Yann JOBIC wrote: > That's working just fine. I don't know if that could be useful for > someone, just in case here is the methodology : > > I got the cell-vertex connectivity from the function > DMPlexWriteTopology_Vertices_HDF5_Static of petsc/src/dm/impls/plex/plexhdf5.c. > The interesting result is in cellIS. > > The vertex coordinates are from the function DMPlexWriteCoordinates_HDF5_Static > of the same file. The global numering is just by adding the position of > the vertex in the Vec + all the other ones from previous proc numbers. > The cell global numbering is trivial : we use the function > DMPlexGetCellNumbering. > > The global numbering is quite strange, but is consistant. Here is some > examples, for 2x2 Cells (4 vertex per cell). 
> I attach a text file for a better reading > On 1 processor, we have : > 6-----7-----8 > | | | > | 2 | 3 | > | | | > 3-----4-----5 > | | | > | 0 | 1 | > | | | > 0-----1-----2 > > On 2 processors we have : > 0-----1-----2 > | | | > | 0 | 1 | > | | | > 6-----7-----8 > | | | > | 2 | 3 | > | | | > 3-----4-----5 > Cells 0 and 1 are on proc 0; 2 and 3 are on proc 1. > > On 4 processors we have : > 0-----1-----2 > | | | > | 0 | 1 | > | | | > 7-----8-----4 > | | | > | 3 | 2 | > | | | > 5-----6-----3 > > On cell per proc : cell 0 on proc 0, cell 1 on proc 1, ... > > The text file (edit with notepad++) contains all the ISView for > understanding purpose. > > Matthew, many thanks for the help !! > > Regards, > > Yann > > Le 20/12/2017 ? 17:40, Matthew Knepley a ?crit : > > On Wed, Dec 20, 2017 at 11:08 AM, Yann JOBIC > wrote: > >> On 12/19/2017 05:50 PM, Matthew Knepley wrote: >> >> On Tue, Dec 19, 2017 at 11:40 AM, Yann JOBIC >> wrote: >> >>> Hello, >>> >>> We want to extract the cell connectivity from a DMPlex. We have no >>> problem for a sequential run. >>> >> >> Do you want it on disk? If so, you can just DMView() for HDF5. That >> outputs the connectivity in a global numbering. >> I can show you the calls I use inside if you want. >> >> I looked at the code, specifically DMPlexWriteTopology_Vertices_H >> DF5_Static >> cellIS should have what i want. >> However it seems that it is not the case. Do i look at the right spot in >> the code ? >> > > cellIS has the entire connectivity. You sound like you want cell-vertex > only. You could get that by just > > DMPlexUninterpolate(dm, &udm); > > DMDestroy(&udm); > > >> I also looked at DMPlexGetCellNumbering, which does exactly what i want >> for the global numbering of Cells, even if the ordering is different for >> different number of processors. >> > > Yes. > > >> When i use DMPlexGetVertexNumbering, i've got negative values, which is >> correctly handeled by DMPlexWriteTopology_Vertices_HDF5_Static, but i >> really don't understand this part. >> > > There are no shared cells in your partition, so you get all positive > numbers. You have shared vertices, so the negative numbers are for vertices > you do not own. If the negative number is n, the corresponding global > number is -(n+1). Notice that the transformation is 1-1 and its square is I. > > >> I succeed in having the coordinates, but maybe not in the right order. I >> reused DMPlexWriteCoordinates_HDF5_Static, which create a Vec of vertex >> coordinates, but i couldn't understand the order of vertex coordinate >> linked (or not) with DMPlexWriteTopology_Vertices_HDF5_Static. The >> number of local coordinates is also strange, and does not correspond to the >> cell's topology. >> > > It matches the order of the global numbering. > > Thanks > > Matt > > >> Thanks for the help! >> >> Regards, >> >> Yann >> >> >> I usually put >> >> DMViewFromOptions(dm, NULL, "-dm_view") >> >> Then >> >> -dm_view hdf5:mesh.h5 >> >> Thanks, >> >> Matt >> >> >>> However for parallel ones, we need to get the node numbering in the >>> global ordering, as when we distribute the mesh, we only have local nodes, >>> and thus local numbering. >>> >>> It seems that we should use DMGetLocalToGlobalMapping (we are using >>> Fortran with Petsc 3.8p3). However, we get the running error : >>> >>> [0]PETSC ERROR: No support for this operation for this object type >>> [0]PETSC ERROR: DM can not create LocalToGlobalMapping >>> >>> Is it the right way to do it ? 
>>> >>> Many thanks, >>> >>> Regards, >>> >>> Yann >>> >>> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> >> https://www.cse.buffalo.edu/~knepley/ >> >> >> > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From balay at mcs.anl.gov Wed Dec 20 16:59:31 2017 From: balay at mcs.anl.gov (Satish Balay) Date: Wed, 20 Dec 2017 16:59:31 -0600 Subject: [petsc-users] configure cannot find a c preprocessor In-Reply-To: References: Message-ID: >>> Executing: mpicc -E -I/dev/shm/pbs.3111462.hpc-pbs/petsc-fdYfuH/config.setCompilers /dev/shm/pbs.3111462.hpc-pbs/petsc-fdYfuH/config.setCompilers/conftest.c stderr: gcc: warning: /usr/lib64/libcpuset.so.1: linker input file unused because linking not done gcc: warning: /usr/lib64/libbitmask.so.1: linker input file unused because linking not done <<<< Looks like your mpicc is printing this verbose thing on stdout [why is it doing a link check during preprocesing?] - thus confusing PETSc configure. Workarround is to fix this compiler not to print such messages. Or use different compilers.. What do you have for: mpicc -show Satish On Wed, 20 Dec 2017, Santiago Andres Triana wrote: > Dear petsc-users, > > I'm trying to install petsc in a cluster using SGI's MPT. The mpicc > compiler is in the search path. The configure command is: > > ./configure --with-scalar-type=complex --with-mumps=1 --download-mumps > --download-parmetis --download-metis --download-scalapack > > However, this leads to an error (configure.log attached): > > =============================================================================== > Configuring PETSc to compile on your system > > =============================================================================== > TESTING: checkCPreprocessor from > config.setCompilers(config/BuildSystem/config/setCompilers.py:599) > > ******************************************************************************* > UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log for > details): > ------------------------------------------------------------------------------- > Cannot find a C preprocessor > ******************************************************************************* > > The configure.log says something about cpp32, here's the excerpt: > > Possible ERROR while running preprocessor: exit code 256 > stderr: > gcc: error: cpp32: No such file or directory > > > Any ideas of what is going wrong? any help or comments are highly > appreciated. Thanks in advance! 
> > Andres > From repepo at gmail.com Wed Dec 20 17:03:02 2017 From: repepo at gmail.com (Santiago Andres Triana) Date: Thu, 21 Dec 2017 00:03:02 +0100 Subject: [petsc-users] configure cannot find a c preprocessor In-Reply-To: References: Message-ID: This is what I get: hpca-login:~> mpicc -show gcc -I/opt/sgi/mpt/mpt-2.12/include -L/opt/sgi/mpt/mpt-2.12/lib -lmpi -lpthread /usr/lib64/libcpuset.so.1 /usr/lib64/libbitmask.so.1 On Wed, Dec 20, 2017 at 11:59 PM, Satish Balay wrote: > >>> > Executing: mpicc -E -I/dev/shm/pbs.3111462.hpc-pbs/petsc-fdYfuH/config.setCompilers > /dev/shm/pbs.3111462.hpc-pbs/petsc-fdYfuH/config.setCompilers/conftest.c > stderr: > gcc: warning: /usr/lib64/libcpuset.so.1: linker input file unused because > linking not done > gcc: warning: /usr/lib64/libbitmask.so.1: linker input file unused because > linking not done > <<<< > > Looks like your mpicc is printing this verbose thing on stdout [why is > it doing a link check during preprocesing?] - thus confusing PETSc > configure. > > Workarround is to fix this compiler not to print such messages. Or use > different compilers.. > > What do you have for: > > mpicc -show > > > Satish > > On Wed, 20 Dec 2017, Santiago Andres Triana wrote: > > > Dear petsc-users, > > > > I'm trying to install petsc in a cluster using SGI's MPT. The mpicc > > compiler is in the search path. The configure command is: > > > > ./configure --with-scalar-type=complex --with-mumps=1 --download-mumps > > --download-parmetis --download-metis --download-scalapack > > > > However, this leads to an error (configure.log attached): > > > > ============================================================ > =================== > > Configuring PETSc to compile on your system > > > > ============================================================ > =================== > > TESTING: checkCPreprocessor from > > config.setCompilers(config/BuildSystem/config/setCompilers.py:599) > > > > ************************************************************ > ******************* > > UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log for > > details): > > ------------------------------------------------------------ > ------------------- > > Cannot find a C preprocessor > > ************************************************************ > ******************* > > > > The configure.log says something about cpp32, here's the excerpt: > > > > Possible ERROR while running preprocessor: exit code 256 > > stderr: > > gcc: error: cpp32: No such file or directory > > > > > > Any ideas of what is going wrong? any help or comments are highly > > appreciated. Thanks in advance! > > > > Andres > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From balay at mcs.anl.gov Wed Dec 20 17:07:35 2017 From: balay at mcs.anl.gov (Satish Balay) Date: Wed, 20 Dec 2017 17:07:35 -0600 Subject: [petsc-users] configure cannot find a c preprocessor In-Reply-To: References: Message-ID: Its strange compiler. 
You can try: ./configure --with-cc=gcc --with-fc=gfortran --with-cxx=g++ --with-mpi-include=/opt/sgi/mpt/mpt-2.12/include --with-mpi-lib="-L/opt/sgi/mpt/mpt-2.12/lib -lmpi -lpthread /usr/lib64/libcpuset.so.1 /usr/lib64/libbitmask.so.1" Satish On Wed, 20 Dec 2017, Santiago Andres Triana wrote: > This is what I get: > > hpca-login:~> mpicc -show > gcc -I/opt/sgi/mpt/mpt-2.12/include -L/opt/sgi/mpt/mpt-2.12/lib -lmpi > -lpthread /usr/lib64/libcpuset.so.1 /usr/lib64/libbitmask.so.1 > > On Wed, Dec 20, 2017 at 11:59 PM, Satish Balay wrote: > > > >>> > > Executing: mpicc -E -I/dev/shm/pbs.3111462.hpc-pbs/petsc-fdYfuH/config.setCompilers > > /dev/shm/pbs.3111462.hpc-pbs/petsc-fdYfuH/config.setCompilers/conftest.c > > stderr: > > gcc: warning: /usr/lib64/libcpuset.so.1: linker input file unused because > > linking not done > > gcc: warning: /usr/lib64/libbitmask.so.1: linker input file unused because > > linking not done > > <<<< > > > > Looks like your mpicc is printing this verbose thing on stdout [why is > > it doing a link check during preprocesing?] - thus confusing PETSc > > configure. > > > > Workarround is to fix this compiler not to print such messages. Or use > > different compilers.. > > > > What do you have for: > > > > mpicc -show > > > > > > Satish > > > > On Wed, 20 Dec 2017, Santiago Andres Triana wrote: > > > > > Dear petsc-users, > > > > > > I'm trying to install petsc in a cluster using SGI's MPT. The mpicc > > > compiler is in the search path. The configure command is: > > > > > > ./configure --with-scalar-type=complex --with-mumps=1 --download-mumps > > > --download-parmetis --download-metis --download-scalapack > > > > > > However, this leads to an error (configure.log attached): > > > > > > ============================================================ > > =================== > > > Configuring PETSc to compile on your system > > > > > > ============================================================ > > =================== > > > TESTING: checkCPreprocessor from > > > config.setCompilers(config/BuildSystem/config/setCompilers.py:599) > > > > > > ************************************************************ > > ******************* > > > UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log for > > > details): > > > ------------------------------------------------------------ > > ------------------- > > > Cannot find a C preprocessor > > > ************************************************************ > > ******************* > > > > > > The configure.log says something about cpp32, here's the excerpt: > > > > > > Possible ERROR while running preprocessor: exit code 256 > > > stderr: > > > gcc: error: cpp32: No such file or directory > > > > > > > > > Any ideas of what is going wrong? any help or comments are highly > > > appreciated. Thanks in advance! > > > > > > Andres > > > > > > > > From repepo at gmail.com Wed Dec 20 17:17:58 2017 From: repepo at gmail.com (Santiago Andres Triana) Date: Thu, 21 Dec 2017 00:17:58 +0100 Subject: [petsc-users] configure cannot find a c preprocessor In-Reply-To: References: Message-ID: thanks Satish, did not work unfortunately, configure.log attached. 
Here's the output: hpca-login:~/petsc-3.8.3> ./configure --with-cc=gcc --with-fc=gfortran --with-cxx=g++ --with-mpi-include=/opt/sgi/mpt/mpt-2.12/include --with-mpi-lib="-L/opt/sgi/mpt/mpt-2.12/lib -lmpi -lpthread /usr/lib64/libcpuset.so.1 /usr/lib64/libbitmask.so.1" =============================================================================== Configuring PETSc to compile on your system =============================================================================== TESTING: check from config.libraries(config/BuildSystem/config/libraries.py:158) ******************************************************************************* UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log for details): ------------------------------------------------------------------------------- --with-mpi-lib=['-L/opt/sgi/mpt/mpt-2.12/lib', '-lmpi', '-lpthread', '/usr/lib64/libcpuset.so.1', '/usr/lib64/libbitmask.so.1'] and --with-mpi-include=['/opt/sgi/mpt/mpt-2.12/include'] did not work ******************************************************************************* On Thu, Dec 21, 2017 at 12:07 AM, Satish Balay wrote: > Its strange compiler. > > You can try: > > ./configure --with-cc=gcc --with-fc=gfortran --with-cxx=g++ > --with-mpi-include=/opt/sgi/mpt/mpt-2.12/include > --with-mpi-lib="-L/opt/sgi/mpt/mpt-2.12/lib -lmpi -lpthread > /usr/lib64/libcpuset.so.1 /usr/lib64/libbitmask.so.1" > > Satish > > On Wed, 20 Dec 2017, Santiago Andres Triana wrote: > > > This is what I get: > > > > hpca-login:~> mpicc -show > > gcc -I/opt/sgi/mpt/mpt-2.12/include -L/opt/sgi/mpt/mpt-2.12/lib -lmpi > > -lpthread /usr/lib64/libcpuset.so.1 /usr/lib64/libbitmask.so.1 > > > > On Wed, Dec 20, 2017 at 11:59 PM, Satish Balay > wrote: > > > > > >>> > > > Executing: mpicc -E -I/dev/shm/pbs.3111462.hpc- > pbs/petsc-fdYfuH/config.setCompilers > > > /dev/shm/pbs.3111462.hpc-pbs/petsc-fdYfuH/config. > setCompilers/conftest.c > > > stderr: > > > gcc: warning: /usr/lib64/libcpuset.so.1: linker input file unused > because > > > linking not done > > > gcc: warning: /usr/lib64/libbitmask.so.1: linker input file unused > because > > > linking not done > > > <<<< > > > > > > Looks like your mpicc is printing this verbose thing on stdout [why is > > > it doing a link check during preprocesing?] - thus confusing PETSc > > > configure. > > > > > > Workarround is to fix this compiler not to print such messages. Or use > > > different compilers.. > > > > > > What do you have for: > > > > > > mpicc -show > > > > > > > > > Satish > > > > > > On Wed, 20 Dec 2017, Santiago Andres Triana wrote: > > > > > > > Dear petsc-users, > > > > > > > > I'm trying to install petsc in a cluster using SGI's MPT. The mpicc > > > > compiler is in the search path. 
The configure command is: > > > > > > > > ./configure --with-scalar-type=complex --with-mumps=1 > --download-mumps > > > > --download-parmetis --download-metis --download-scalapack > > > > > > > > However, this leads to an error (configure.log attached): > > > > > > > > ============================================================ > > > =================== > > > > Configuring PETSc to compile on your system > > > > > > > > ============================================================ > > > =================== > > > > TESTING: checkCPreprocessor from > > > > config.setCompilers(config/BuildSystem/config/setCompilers.py:599) > > > > > > > > ************************************************************ > > > ******************* > > > > UNABLE to CONFIGURE with GIVEN OPTIONS (see > configure.log for > > > > details): > > > > ------------------------------------------------------------ > > > ------------------- > > > > Cannot find a C preprocessor > > > > ************************************************************ > > > ******************* > > > > > > > > The configure.log says something about cpp32, here's the excerpt: > > > > > > > > Possible ERROR while running preprocessor: exit code 256 > > > > stderr: > > > > gcc: error: cpp32: No such file or directory > > > > > > > > > > > > Any ideas of what is going wrong? any help or comments are highly > > > > appreciated. Thanks in advance! > > > > > > > > Andres > > > > > > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: configure.log Type: application/octet-stream Size: 1268380 bytes Desc: not available URL: From balay at mcs.anl.gov Wed Dec 20 17:21:51 2017 From: balay at mcs.anl.gov (Satish Balay) Date: Wed, 20 Dec 2017 17:21:51 -0600 Subject: [petsc-users] configure cannot find a c preprocessor In-Reply-To: References: Message-ID: Hm configure is misbehaving with /usr/lib64/libcpuset.so.1 notation. Try: ./configure --with-cc=gcc --with-fc=gfortran --with-cxx=g++ --with-mpi-include=/opt/sgi/mpt/mpt-2.12/include --with-mpi-lib="-L/opt/sgi/mpt/mpt-2.12/lib -lmpi -lpthread" LIBS="/usr/lib64/libcpuset.so.1 /usr/lib64/libbitmask.so.1" Satish On Wed, 20 Dec 2017, Santiago Andres Triana wrote: > thanks Satish, > > did not work unfortunately, configure.log attached. 
Here's the output: > > hpca-login:~/petsc-3.8.3> ./configure --with-cc=gcc --with-fc=gfortran > --with-cxx=g++ --with-mpi-include=/opt/sgi/mpt/mpt-2.12/include > --with-mpi-lib="-L/opt/sgi/mpt/mpt-2.12/lib -lmpi -lpthread > /usr/lib64/libcpuset.so.1 /usr/lib64/libbitmask.so.1" > =============================================================================== > Configuring PETSc to compile on your system > > =============================================================================== > TESTING: check from > config.libraries(config/BuildSystem/config/libraries.py:158) > > ******************************************************************************* > UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log for > details): > ------------------------------------------------------------------------------- > --with-mpi-lib=['-L/opt/sgi/mpt/mpt-2.12/lib', '-lmpi', '-lpthread', > '/usr/lib64/libcpuset.so.1', '/usr/lib64/libbitmask.so.1'] and > --with-mpi-include=['/opt/sgi/mpt/mpt-2.12/include'] did not work > ******************************************************************************* > > On Thu, Dec 21, 2017 at 12:07 AM, Satish Balay wrote: > > > Its strange compiler. > > > > You can try: > > > > ./configure --with-cc=gcc --with-fc=gfortran --with-cxx=g++ > > --with-mpi-include=/opt/sgi/mpt/mpt-2.12/include > > --with-mpi-lib="-L/opt/sgi/mpt/mpt-2.12/lib -lmpi -lpthread > > /usr/lib64/libcpuset.so.1 /usr/lib64/libbitmask.so.1" > > > > Satish > > > > On Wed, 20 Dec 2017, Santiago Andres Triana wrote: > > > > > This is what I get: > > > > > > hpca-login:~> mpicc -show > > > gcc -I/opt/sgi/mpt/mpt-2.12/include -L/opt/sgi/mpt/mpt-2.12/lib -lmpi > > > -lpthread /usr/lib64/libcpuset.so.1 /usr/lib64/libbitmask.so.1 > > > > > > On Wed, Dec 20, 2017 at 11:59 PM, Satish Balay > > wrote: > > > > > > > >>> > > > > Executing: mpicc -E -I/dev/shm/pbs.3111462.hpc- > > pbs/petsc-fdYfuH/config.setCompilers > > > > /dev/shm/pbs.3111462.hpc-pbs/petsc-fdYfuH/config. > > setCompilers/conftest.c > > > > stderr: > > > > gcc: warning: /usr/lib64/libcpuset.so.1: linker input file unused > > because > > > > linking not done > > > > gcc: warning: /usr/lib64/libbitmask.so.1: linker input file unused > > because > > > > linking not done > > > > <<<< > > > > > > > > Looks like your mpicc is printing this verbose thing on stdout [why is > > > > it doing a link check during preprocesing?] - thus confusing PETSc > > > > configure. > > > > > > > > Workarround is to fix this compiler not to print such messages. Or use > > > > different compilers.. > > > > > > > > What do you have for: > > > > > > > > mpicc -show > > > > > > > > > > > > Satish > > > > > > > > On Wed, 20 Dec 2017, Santiago Andres Triana wrote: > > > > > > > > > Dear petsc-users, > > > > > > > > > > I'm trying to install petsc in a cluster using SGI's MPT. The mpicc > > > > > compiler is in the search path. 
The configure command is: > > > > > > > > > > ./configure --with-scalar-type=complex --with-mumps=1 > > --download-mumps > > > > > --download-parmetis --download-metis --download-scalapack > > > > > > > > > > However, this leads to an error (configure.log attached): > > > > > > > > > > ============================================================ > > > > =================== > > > > > Configuring PETSc to compile on your system > > > > > > > > > > ============================================================ > > > > =================== > > > > > TESTING: checkCPreprocessor from > > > > > config.setCompilers(config/BuildSystem/config/setCompilers.py:599) > > > > > > > > > > ************************************************************ > > > > ******************* > > > > > UNABLE to CONFIGURE with GIVEN OPTIONS (see > > configure.log for > > > > > details): > > > > > ------------------------------------------------------------ > > > > ------------------- > > > > > Cannot find a C preprocessor > > > > > ************************************************************ > > > > ******************* > > > > > > > > > > The configure.log says something about cpp32, here's the excerpt: > > > > > > > > > > Possible ERROR while running preprocessor: exit code 256 > > > > > stderr: > > > > > gcc: error: cpp32: No such file or directory > > > > > > > > > > > > > > > Any ideas of what is going wrong? any help or comments are highly > > > > > appreciated. Thanks in advance! > > > > > > > > > > Andres > > > > > > > > > > > > > > > > > > > > > From repepo at gmail.com Wed Dec 20 17:31:50 2017 From: repepo at gmail.com (Santiago Andres Triana) Date: Thu, 21 Dec 2017 00:31:50 +0100 Subject: [petsc-users] configure cannot find a c preprocessor In-Reply-To: References: Message-ID: I got a different error now... hope it's a good sign! hpca-login:~/petsc-3.8.3> ./configure --with-cc=gcc --with-fc=gfortran --with-cxx=g++ --with-mpi-include=/opt/sgi/mpt/mpt-2.12/include --with-mpi-lib="-L/opt/sgi/mpt/mpt-2.12/lib -lmpi -lpthread" LIBS="/usr/lib64/libcpuset.so.1 /usr/lib64/libbitmask.so.1" =============================================================================== Configuring PETSc to compile on your system =============================================================================== TESTING: CxxMPICheck from config.packages.MPI(config/BuildSystem/config/packages/MPI.py:351) ******************************************************************************* UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log for details): ------------------------------------------------------------------------------- C++ error! MPI_Finalize() could not be located! ******************************************************************************* On Thu, Dec 21, 2017 at 12:21 AM, Satish Balay wrote: > Hm configure is misbehaving with /usr/lib64/libcpuset.so.1 notation. Try: > > ./configure --with-cc=gcc --with-fc=gfortran --with-cxx=g++ > --with-mpi-include=/opt/sgi/mpt/mpt-2.12/include > --with-mpi-lib="-L/opt/sgi/mpt/mpt-2.12/lib -lmpi -lpthread" > LIBS="/usr/lib64/libcpuset.so.1 /usr/lib64/libbitmask.so.1" > > Satish > > > On Wed, 20 Dec 2017, Santiago Andres Triana wrote: > > > thanks Satish, > > > > did not work unfortunately, configure.log attached. 
Here's the output: > > > > hpca-login:~/petsc-3.8.3> ./configure --with-cc=gcc --with-fc=gfortran > > --with-cxx=g++ --with-mpi-include=/opt/sgi/mpt/mpt-2.12/include > > --with-mpi-lib="-L/opt/sgi/mpt/mpt-2.12/lib -lmpi -lpthread > > /usr/lib64/libcpuset.so.1 /usr/lib64/libbitmask.so.1" > > ============================================================ > =================== > > Configuring PETSc to compile on your system > > > > ============================================================ > =================== > > TESTING: check from > > config.libraries(config/BuildSystem/config/libraries.py:158) > > > > ************************************************************ > ******************* > > UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log for > > details): > > ------------------------------------------------------------ > ------------------- > > --with-mpi-lib=['-L/opt/sgi/mpt/mpt-2.12/lib', '-lmpi', '-lpthread', > > '/usr/lib64/libcpuset.so.1', '/usr/lib64/libbitmask.so.1'] and > > --with-mpi-include=['/opt/sgi/mpt/mpt-2.12/include'] did not work > > ************************************************************ > ******************* > > > > On Thu, Dec 21, 2017 at 12:07 AM, Satish Balay > wrote: > > > > > Its strange compiler. > > > > > > You can try: > > > > > > ./configure --with-cc=gcc --with-fc=gfortran --with-cxx=g++ > > > --with-mpi-include=/opt/sgi/mpt/mpt-2.12/include > > > --with-mpi-lib="-L/opt/sgi/mpt/mpt-2.12/lib -lmpi -lpthread > > > /usr/lib64/libcpuset.so.1 /usr/lib64/libbitmask.so.1" > > > > > > Satish > > > > > > On Wed, 20 Dec 2017, Santiago Andres Triana wrote: > > > > > > > This is what I get: > > > > > > > > hpca-login:~> mpicc -show > > > > gcc -I/opt/sgi/mpt/mpt-2.12/include -L/opt/sgi/mpt/mpt-2.12/lib > -lmpi > > > > -lpthread /usr/lib64/libcpuset.so.1 /usr/lib64/libbitmask.so.1 > > > > > > > > On Wed, Dec 20, 2017 at 11:59 PM, Satish Balay > > > wrote: > > > > > > > > > >>> > > > > > Executing: mpicc -E -I/dev/shm/pbs.3111462.hpc- > > > pbs/petsc-fdYfuH/config.setCompilers > > > > > /dev/shm/pbs.3111462.hpc-pbs/petsc-fdYfuH/config. > > > setCompilers/conftest.c > > > > > stderr: > > > > > gcc: warning: /usr/lib64/libcpuset.so.1: linker input file unused > > > because > > > > > linking not done > > > > > gcc: warning: /usr/lib64/libbitmask.so.1: linker input file unused > > > because > > > > > linking not done > > > > > <<<< > > > > > > > > > > Looks like your mpicc is printing this verbose thing on stdout > [why is > > > > > it doing a link check during preprocesing?] - thus confusing PETSc > > > > > configure. > > > > > > > > > > Workarround is to fix this compiler not to print such messages. Or > use > > > > > different compilers.. > > > > > > > > > > What do you have for: > > > > > > > > > > mpicc -show > > > > > > > > > > > > > > > Satish > > > > > > > > > > On Wed, 20 Dec 2017, Santiago Andres Triana wrote: > > > > > > > > > > > Dear petsc-users, > > > > > > > > > > > > I'm trying to install petsc in a cluster using SGI's MPT. The > mpicc > > > > > > compiler is in the search path. 
The configure command is: > > > > > > > > > > > > ./configure --with-scalar-type=complex --with-mumps=1 > > > --download-mumps > > > > > > --download-parmetis --download-metis --download-scalapack > > > > > > > > > > > > However, this leads to an error (configure.log attached): > > > > > > > > > > > > ============================================================ > > > > > =================== > > > > > > Configuring PETSc to compile on your system > > > > > > > > > > > > ============================================================ > > > > > =================== > > > > > > TESTING: checkCPreprocessor from > > > > > > config.setCompilers(config/BuildSystem/config/ > setCompilers.py:599) > > > > > > > > > > > > ************************************************************ > > > > > ******************* > > > > > > UNABLE to CONFIGURE with GIVEN OPTIONS (see > > > configure.log for > > > > > > details): > > > > > > ------------------------------------------------------------ > > > > > ------------------- > > > > > > Cannot find a C preprocessor > > > > > > ************************************************************ > > > > > ******************* > > > > > > > > > > > > The configure.log says something about cpp32, here's the excerpt: > > > > > > > > > > > > Possible ERROR while running preprocessor: exit code 256 > > > > > > stderr: > > > > > > gcc: error: cpp32: No such file or directory > > > > > > > > > > > > > > > > > > Any ideas of what is going wrong? any help or comments are highly > > > > > > appreciated. Thanks in advance! > > > > > > > > > > > > Andres > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: configure.log Type: application/octet-stream Size: 1418020 bytes Desc: not available URL: From knepley at gmail.com Wed Dec 20 17:52:54 2017 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 20 Dec 2017 18:52:54 -0500 Subject: [petsc-users] configure cannot find a c preprocessor In-Reply-To: References: Message-ID: On Wed, Dec 20, 2017 at 6:31 PM, Santiago Andres Triana wrote: > I got a different error now... hope it's a good sign! > > hpca-login:~/petsc-3.8.3> ./configure --with-cc=gcc --with-fc=gfortran > --with-cxx=g++ --with-mpi-include=/opt/sgi/mpt/mpt-2.12/include > --with-mpi-lib="-L/opt/sgi/mpt/mpt-2.12/lib -lmpi -lpthread" > LIBS="/usr/lib64/libcpuset.so.1 /usr/lib64/libbitmask.so.1" > ============================================================ > =================== > Configuring PETSc to compile on your system > > ============================================================ > =================== > TESTING: CxxMPICheck from config.packages.MPI(config/ > BuildSystem/config/packages/MPI.py:351) > ************************************************************ > ******************* > UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log for > details): > ------------------------------------------------------------ > ------------------- > C++ error! MPI_Finalize() could not be located! > ************************************************************ > ******************* > It looks like there is crazy C++ stuff in SGI MPT. I can see two chioces: a) Turn off C++: --with-cxx=0 b) Find out what crazy C++ library MPT has and stick it in --with-mpi-lib No amount of MPI optimization is worth this pain. Does your machine have an MPICH install? 
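Concretely, the two choices could be sketched as follows. The paths and LIBS setting are the ones already used in this thread; the -lmpi++ name in option (b) is only a guess at MPT's C++ interface library and should be checked against what actually sits in /opt/sgi/mpt/mpt-2.12/lib:

  # (a) build PETSc without C++ support
  ./configure --with-cc=gcc --with-fc=gfortran --with-cxx=0 \
    --with-mpi-include=/opt/sgi/mpt/mpt-2.12/include \
    --with-mpi-lib="-L/opt/sgi/mpt/mpt-2.12/lib -lmpi -lpthread" \
    LIBS="/usr/lib64/libcpuset.so.1 /usr/lib64/libbitmask.so.1"

  # (b) keep C++ and add MPT's C++ interface library to the MPI link line
  #     (-lmpi++ is an assumption; verify the library name on the system)
  ./configure --with-cc=gcc --with-fc=gfortran --with-cxx=g++ \
    --with-mpi-include=/opt/sgi/mpt/mpt-2.12/include \
    --with-mpi-lib="-L/opt/sgi/mpt/mpt-2.12/lib -lmpi++ -lmpi -lpthread" \
    LIBS="/usr/lib64/libcpuset.so.1 /usr/lib64/libbitmask.so.1"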
Thanks, Matt > On Thu, Dec 21, 2017 at 12:21 AM, Satish Balay wrote: > >> Hm configure is misbehaving with /usr/lib64/libcpuset.so.1 notation. Try: >> >> ./configure --with-cc=gcc --with-fc=gfortran --with-cxx=g++ >> --with-mpi-include=/opt/sgi/mpt/mpt-2.12/include >> --with-mpi-lib="-L/opt/sgi/mpt/mpt-2.12/lib -lmpi -lpthread" >> LIBS="/usr/lib64/libcpuset.so.1 /usr/lib64/libbitmask.so.1" >> >> Satish >> >> >> On Wed, 20 Dec 2017, Santiago Andres Triana wrote: >> >> > thanks Satish, >> > >> > did not work unfortunately, configure.log attached. Here's the output: >> > >> > hpca-login:~/petsc-3.8.3> ./configure --with-cc=gcc --with-fc=gfortran >> > --with-cxx=g++ --with-mpi-include=/opt/sgi/mpt/mpt-2.12/include >> > --with-mpi-lib="-L/opt/sgi/mpt/mpt-2.12/lib -lmpi -lpthread >> > /usr/lib64/libcpuset.so.1 /usr/lib64/libbitmask.so.1" >> > ============================================================ >> =================== >> > Configuring PETSc to compile on your system >> > >> > ============================================================ >> =================== >> > TESTING: check from >> > config.libraries(config/BuildSystem/config/libraries.py:158) >> > >> > ************************************************************ >> ******************* >> > UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log >> for >> > details): >> > ------------------------------------------------------------ >> ------------------- >> > --with-mpi-lib=['-L/opt/sgi/mpt/mpt-2.12/lib', '-lmpi', '-lpthread', >> > '/usr/lib64/libcpuset.so.1', '/usr/lib64/libbitmask.so.1'] and >> > --with-mpi-include=['/opt/sgi/mpt/mpt-2.12/include'] did not work >> > ************************************************************ >> ******************* >> > >> > On Thu, Dec 21, 2017 at 12:07 AM, Satish Balay >> wrote: >> > >> > > Its strange compiler. >> > > >> > > You can try: >> > > >> > > ./configure --with-cc=gcc --with-fc=gfortran --with-cxx=g++ >> > > --with-mpi-include=/opt/sgi/mpt/mpt-2.12/include >> > > --with-mpi-lib="-L/opt/sgi/mpt/mpt-2.12/lib -lmpi -lpthread >> > > /usr/lib64/libcpuset.so.1 /usr/lib64/libbitmask.so.1" >> > > >> > > Satish >> > > >> > > On Wed, 20 Dec 2017, Santiago Andres Triana wrote: >> > > >> > > > This is what I get: >> > > > >> > > > hpca-login:~> mpicc -show >> > > > gcc -I/opt/sgi/mpt/mpt-2.12/include -L/opt/sgi/mpt/mpt-2.12/lib >> -lmpi >> > > > -lpthread /usr/lib64/libcpuset.so.1 /usr/lib64/libbitmask.so.1 >> > > > >> > > > On Wed, Dec 20, 2017 at 11:59 PM, Satish Balay >> > > wrote: >> > > > >> > > > > >>> >> > > > > Executing: mpicc -E -I/dev/shm/pbs.3111462.hpc- >> > > pbs/petsc-fdYfuH/config.setCompilers >> > > > > /dev/shm/pbs.3111462.hpc-pbs/petsc-fdYfuH/config. >> > > setCompilers/conftest.c >> > > > > stderr: >> > > > > gcc: warning: /usr/lib64/libcpuset.so.1: linker input file unused >> > > because >> > > > > linking not done >> > > > > gcc: warning: /usr/lib64/libbitmask.so.1: linker input file unused >> > > because >> > > > > linking not done >> > > > > <<<< >> > > > > >> > > > > Looks like your mpicc is printing this verbose thing on stdout >> [why is >> > > > > it doing a link check during preprocesing?] - thus confusing PETSc >> > > > > configure. >> > > > > >> > > > > Workarround is to fix this compiler not to print such messages. >> Or use >> > > > > different compilers.. 
>> > > > > >> > > > > What do you have for: >> > > > > >> > > > > mpicc -show >> > > > > >> > > > > >> > > > > Satish >> > > > > >> > > > > On Wed, 20 Dec 2017, Santiago Andres Triana wrote: >> > > > > >> > > > > > Dear petsc-users, >> > > > > > >> > > > > > I'm trying to install petsc in a cluster using SGI's MPT. The >> mpicc >> > > > > > compiler is in the search path. The configure command is: >> > > > > > >> > > > > > ./configure --with-scalar-type=complex --with-mumps=1 >> > > --download-mumps >> > > > > > --download-parmetis --download-metis --download-scalapack >> > > > > > >> > > > > > However, this leads to an error (configure.log attached): >> > > > > > >> > > > > > ============================================================ >> > > > > =================== >> > > > > > Configuring PETSc to compile on your system >> > > > > > >> > > > > > ============================================================ >> > > > > =================== >> > > > > > TESTING: checkCPreprocessor from >> > > > > > config.setCompilers(config/BuildSystem/config/setCompilers. >> py:599) >> > > > > > >> > > > > > ************************************************************ >> > > > > ******************* >> > > > > > UNABLE to CONFIGURE with GIVEN OPTIONS (see >> > > configure.log for >> > > > > > details): >> > > > > > ------------------------------------------------------------ >> > > > > ------------------- >> > > > > > Cannot find a C preprocessor >> > > > > > ************************************************************ >> > > > > ******************* >> > > > > > >> > > > > > The configure.log says something about cpp32, here's the >> excerpt: >> > > > > > >> > > > > > Possible ERROR while running preprocessor: exit code 256 >> > > > > > stderr: >> > > > > > gcc: error: cpp32: No such file or directory >> > > > > > >> > > > > > >> > > > > > Any ideas of what is going wrong? any help or comments are >> highly >> > > > > > appreciated. Thanks in advance! >> > > > > > >> > > > > > Andres >> > > > > > >> > > > > >> > > > > >> > > > >> > > >> > > >> > >> >> > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Wed Dec 20 22:10:09 2017 From: bsmith at mcs.anl.gov (Smith, Barry F.) Date: Thu, 21 Dec 2017 04:10:09 +0000 Subject: [petsc-users] configure cannot find a c preprocessor In-Reply-To: References: Message-ID: <8556DEB2-0F2A-4873-92DF-965F17DAEF80@mcs.anl.gov> > On Dec 20, 2017, at 5:52 PM, Matthew Knepley wrote: > > On Wed, Dec 20, 2017 at 6:31 PM, Santiago Andres Triana wrote: > I got a different error now... hope it's a good sign! 
> > hpca-login:~/petsc-3.8.3> ./configure --with-cc=gcc --with-fc=gfortran --with-cxx=g++ --with-mpi-include=/opt/sgi/mpt/mpt-2.12/include --with-mpi-lib="-L/opt/sgi/mpt/mpt-2.12/lib -lmpi -lpthread" LIBS="/usr/lib64/libcpuset.so.1 /usr/lib64/libbitmask.so.1" > =============================================================================== > Configuring PETSc to compile on your system > =============================================================================== > TESTING: CxxMPICheck from config.packages.MPI(config/BuildSystem/config/packages/MPI.py:351) ******************************************************************************* > UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log for details): > ------------------------------------------------------------------------------- > C++ error! MPI_Finalize() could not be located! > ******************************************************************************* > > It looks like there is crazy C++ stuff in SGI MPT. I can see two chioces: > > a) Turn off C++: --with-cxx=0 > > b) Find out what crazy C++ library MPT has and stick it in --with-mpi-lib 3) filter the error message as previously discussed (this time for C++), then one does not have "find out what crazy..." since the mpicxx knows it. Barry > > No amount of MPI optimization is worth this pain. Does your machine have an MPICH install? > > Thanks, > > Matt > > On Thu, Dec 21, 2017 at 12:21 AM, Satish Balay wrote: > Hm configure is misbehaving with /usr/lib64/libcpuset.so.1 notation. Try: > > ./configure --with-cc=gcc --with-fc=gfortran --with-cxx=g++ --with-mpi-include=/opt/sgi/mpt/mpt-2.12/include --with-mpi-lib="-L/opt/sgi/mpt/mpt-2.12/lib -lmpi -lpthread" LIBS="/usr/lib64/libcpuset.so.1 /usr/lib64/libbitmask.so.1" > > Satish > > > On Wed, 20 Dec 2017, Santiago Andres Triana wrote: > > > thanks Satish, > > > > did not work unfortunately, configure.log attached. Here's the output: > > > > hpca-login:~/petsc-3.8.3> ./configure --with-cc=gcc --with-fc=gfortran > > --with-cxx=g++ --with-mpi-include=/opt/sgi/mpt/mpt-2.12/include > > --with-mpi-lib="-L/opt/sgi/mpt/mpt-2.12/lib -lmpi -lpthread > > /usr/lib64/libcpuset.so.1 /usr/lib64/libbitmask.so.1" > > =============================================================================== > > Configuring PETSc to compile on your system > > > > =============================================================================== > > TESTING: check from > > config.libraries(config/BuildSystem/config/libraries.py:158) > > > > ******************************************************************************* > > UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log for > > details): > > ------------------------------------------------------------------------------- > > --with-mpi-lib=['-L/opt/sgi/mpt/mpt-2.12/lib', '-lmpi', '-lpthread', > > '/usr/lib64/libcpuset.so.1', '/usr/lib64/libbitmask.so.1'] and > > --with-mpi-include=['/opt/sgi/mpt/mpt-2.12/include'] did not work > > ******************************************************************************* > > > > On Thu, Dec 21, 2017 at 12:07 AM, Satish Balay wrote: > > > > > Its strange compiler. 
> > > > > > You can try: > > > > > > ./configure --with-cc=gcc --with-fc=gfortran --with-cxx=g++ > > > --with-mpi-include=/opt/sgi/mpt/mpt-2.12/include > > > --with-mpi-lib="-L/opt/sgi/mpt/mpt-2.12/lib -lmpi -lpthread > > > /usr/lib64/libcpuset.so.1 /usr/lib64/libbitmask.so.1" > > > > > > Satish > > > > > > On Wed, 20 Dec 2017, Santiago Andres Triana wrote: > > > > > > > This is what I get: > > > > > > > > hpca-login:~> mpicc -show > > > > gcc -I/opt/sgi/mpt/mpt-2.12/include -L/opt/sgi/mpt/mpt-2.12/lib -lmpi > > > > -lpthread /usr/lib64/libcpuset.so.1 /usr/lib64/libbitmask.so.1 > > > > > > > > On Wed, Dec 20, 2017 at 11:59 PM, Satish Balay > > > wrote: > > > > > > > > > >>> > > > > > Executing: mpicc -E -I/dev/shm/pbs.3111462.hpc- > > > pbs/petsc-fdYfuH/config.setCompilers > > > > > /dev/shm/pbs.3111462.hpc-pbs/petsc-fdYfuH/config. > > > setCompilers/conftest.c > > > > > stderr: > > > > > gcc: warning: /usr/lib64/libcpuset.so.1: linker input file unused > > > because > > > > > linking not done > > > > > gcc: warning: /usr/lib64/libbitmask.so.1: linker input file unused > > > because > > > > > linking not done > > > > > <<<< > > > > > > > > > > Looks like your mpicc is printing this verbose thing on stdout [why is > > > > > it doing a link check during preprocesing?] - thus confusing PETSc > > > > > configure. > > > > > > > > > > Workarround is to fix this compiler not to print such messages. Or use > > > > > different compilers.. > > > > > > > > > > What do you have for: > > > > > > > > > > mpicc -show > > > > > > > > > > > > > > > Satish > > > > > > > > > > On Wed, 20 Dec 2017, Santiago Andres Triana wrote: > > > > > > > > > > > Dear petsc-users, > > > > > > > > > > > > I'm trying to install petsc in a cluster using SGI's MPT. The mpicc > > > > > > compiler is in the search path. The configure command is: > > > > > > > > > > > > ./configure --with-scalar-type=complex --with-mumps=1 > > > --download-mumps > > > > > > --download-parmetis --download-metis --download-scalapack > > > > > > > > > > > > However, this leads to an error (configure.log attached): > > > > > > > > > > > > ============================================================ > > > > > =================== > > > > > > Configuring PETSc to compile on your system > > > > > > > > > > > > ============================================================ > > > > > =================== > > > > > > TESTING: checkCPreprocessor from > > > > > > config.setCompilers(config/BuildSystem/config/setCompilers.py:599) > > > > > > > > > > > > ************************************************************ > > > > > ******************* > > > > > > UNABLE to CONFIGURE with GIVEN OPTIONS (see > > > configure.log for > > > > > > details): > > > > > > ------------------------------------------------------------ > > > > > ------------------- > > > > > > Cannot find a C preprocessor > > > > > > ************************************************************ > > > > > ******************* > > > > > > > > > > > > The configure.log says something about cpp32, here's the excerpt: > > > > > > > > > > > > Possible ERROR while running preprocessor: exit code 256 > > > > > > stderr: > > > > > > gcc: error: cpp32: No such file or directory > > > > > > > > > > > > > > > > > > Any ideas of what is going wrong? any help or comments are highly > > > > > > appreciated. Thanks in advance! 
> > > > > > Andres
> 
> -- 
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> -- Norbert Wiener
> 
> https://www.cse.buffalo.edu/~knepley/

From mistloin at unist.ac.kr  Thu Dec 21 00:27:39 2017
From: mistloin at unist.ac.kr (=?ks_c_5601-1987?B?vK29wsH4ICix4rDox9ew+LnXv/jA2rfCsPjH0LrOKQ==?=)
Date: Thu, 21 Dec 2017 06:27:39 +0000
Subject: [petsc-users] I have a question for PETSc example
Message-ID: 

Dear PETSc-user

I am Seungjin Seo, a researcher at the Korea Advanced Institute of Science
and Technology, South Korea.

I am trying to solve thermal and fluid equations within a porous structure.
The thermal equation includes a non-linear term in pressure, and the fluid
equation has a boundary condition that uses the temperature gradient.
Is there any way I can solve these two equations at the same time, instead
of solving temperature first with the previous pressure distribution, then
pressure, and then updating temperature?
Many examples solve only one PDE instead of coupling several physics.
Please recommend which solver is appropriate for my case.

Best regards,
Seungjin Seo
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From bsmith at mcs.anl.gov  Thu Dec 21 00:46:42 2017
From: bsmith at mcs.anl.gov (Smith, Barry F.)
Date: Thu, 21 Dec 2017 06:46:42 +0000
Subject: [petsc-users] I have a question for PETSc example
In-Reply-To: 
References: 
Message-ID: <888DF7F8-1A65-4343-AEE8-E4387F9FA716@mcs.anl.gov>

   Indeed you can. Yes, many PETSc examples only feature a single field for
simplicity, but multiple fields are fine. You simply provide your
FormFunction and FormJacobian to handle both degrees of freedom per
cell/node at the same time. You use SNES or TS depending on whether your
problem is time dependent, and then use -pc_type lu to start. Once you have
the physics correct, come back and ask us about optimizing the linear
solver with preconditioners; but until you have the physics correct it is
absurd to waste time worrying about making the linear solver efficient.
What you want to do, many people do, and it is not a big deal.

   Barry

> On Dec 21, 2017, at 12:27 AM, Seungjin Seo wrote:
> 
> Dear PETSc-user
> 
> I am Seungjin Seo, a researcher at the Korea Advanced Institute of Science and Technology, South Korea.
> 
> I am trying to solve thermal and fluid equations within a porous structure.
> The thermal equation includes a non-linear term in pressure, and the fluid equation has a boundary condition that uses the temperature gradient.
> Is there any way I can solve these two equations at the same time, instead of solving temperature first with the previous pressure distribution, then pressure, and then updating temperature?
> Many examples solve only one PDE instead of coupling several physics.
> Please recommend which solver is appropriate for my case.
> 
> Best regards,
> Seungjin Seo

From repepo at gmail.com  Thu Dec 21 02:28:04 2017
From: repepo at gmail.com (Santiago Andres Triana)
Date: Thu, 21 Dec 2017 09:28:04 +0100
Subject: [petsc-users] configure cannot find a c preprocessor
In-Reply-To: <8556DEB2-0F2A-4873-92DF-965F17DAEF80@mcs.anl.gov>
References: <8556DEB2-0F2A-4873-92DF-965F17DAEF80@mcs.anl.gov>
Message-ID: 

There is no mpich install on this cluster... I will talk to the sysadmins
to see if this is feasible...

In other news, configure was successful by turning off C++; however, make
failed (logs attached):

...
------------------------------------------ Using mpiexec: /opt/sgi/mpt/mpt-2.12/bin/mpirun ========================================== Building PETSc using GNU Make with 32 build threads ========================================== gmake[2]: Entering directory `/space/hpc-home/trianas/petsc-3.8.3' Use "/usr/bin/gmake V=1" to see verbose compile lines, "/usr/bin/gmake V=0" to suppress. CLINKER /space/hpc-home/trianas/petsc-3.8.3/arch-linux2-c-debug/lib/libpetsc.so.3.8.3 /sw/sdev/binutils/x86_64/2.22/bin/ld: cannot find -lcpuset.so /sw/sdev/binutils/x86_64/2.22/bin/ld: cannot find -lbitmask.so collect2: error: ld returned 1 exit status gmake[2]: *** [/space/hpc-home/trianas/petsc-3.8.3/arch-linux2-c-debug/lib/libpetsc.so.3.8.3] Error 1 gmake[2]: Leaving directory `/space/hpc-home/trianas/petsc-3.8.3' gmake[1]: *** [gnumake] Error 2 gmake[1]: Leaving directory `/space/hpc-home/trianas/petsc-3.8.3' **************************ERROR************************************* Error during compile, check arch-linux2-c-debug/lib/petsc/conf/make.log Send it and arch-linux2-c-debug/lib/petsc/conf/configure.log to petsc-maint at mcs.anl.gov ******************************************************************** there seems to be a problem with the libcpuset.so and libbitmask.so libraries. Make wants lcpuset.so and lbitmask.so, which do not exist in this system. On Thu, Dec 21, 2017 at 5:10 AM, Smith, Barry F. wrote: > > > > On Dec 20, 2017, at 5:52 PM, Matthew Knepley wrote: > > > > On Wed, Dec 20, 2017 at 6:31 PM, Santiago Andres Triana < > repepo at gmail.com> wrote: > > I got a different error now... hope it's a good sign! > > > > hpca-login:~/petsc-3.8.3> ./configure --with-cc=gcc --with-fc=gfortran > --with-cxx=g++ --with-mpi-include=/opt/sgi/mpt/mpt-2.12/include > --with-mpi-lib="-L/opt/sgi/mpt/mpt-2.12/lib -lmpi -lpthread" > LIBS="/usr/lib64/libcpuset.so.1 /usr/lib64/libbitmask.so.1" > > ============================================================ > =================== > > Configuring PETSc to compile on your system > > ============================================================ > =================== > > TESTING: CxxMPICheck from config.packages.MPI(config/ > BuildSystem/config/packages/MPI.py:351) > ************************************************************ > ******************* > > UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log > for details): > > ------------------------------------------------------------ > ------------------- > > C++ error! MPI_Finalize() could not be located! > > ************************************************************ > ******************* > > > > It looks like there is crazy C++ stuff in SGI MPT. I can see two chioces: > > > > a) Turn off C++: --with-cxx=0 > > > > b) Find out what crazy C++ library MPT has and stick it in > --with-mpi-lib > > 3) filter the error message as previously discussed (this time for > C++), then one does not have "find out what crazy..." since the mpicxx > knows it. > > Barry > > > > > No amount of MPI optimization is worth this pain. Does your machine have > an MPICH install? > > > > Thanks, > > > > Matt > > > > On Thu, Dec 21, 2017 at 12:21 AM, Satish Balay > wrote: > > Hm configure is misbehaving with /usr/lib64/libcpuset.so.1 notation. 
Try: > > > > ./configure --with-cc=gcc --with-fc=gfortran --with-cxx=g++ > --with-mpi-include=/opt/sgi/mpt/mpt-2.12/include > --with-mpi-lib="-L/opt/sgi/mpt/mpt-2.12/lib -lmpi -lpthread" > LIBS="/usr/lib64/libcpuset.so.1 /usr/lib64/libbitmask.so.1" > > > > Satish > > > > > > On Wed, 20 Dec 2017, Santiago Andres Triana wrote: > > > > > thanks Satish, > > > > > > did not work unfortunately, configure.log attached. Here's the output: > > > > > > hpca-login:~/petsc-3.8.3> ./configure --with-cc=gcc --with-fc=gfortran > > > --with-cxx=g++ --with-mpi-include=/opt/sgi/mpt/mpt-2.12/include > > > --with-mpi-lib="-L/opt/sgi/mpt/mpt-2.12/lib -lmpi -lpthread > > > /usr/lib64/libcpuset.so.1 /usr/lib64/libbitmask.so.1" > > > ============================================================ > =================== > > > Configuring PETSc to compile on your system > > > > > > ============================================================ > =================== > > > TESTING: check from > > > config.libraries(config/BuildSystem/config/libraries.py:158) > > > > > > ************************************************************ > ******************* > > > UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log > for > > > details): > > > ------------------------------------------------------------ > ------------------- > > > --with-mpi-lib=['-L/opt/sgi/mpt/mpt-2.12/lib', '-lmpi', '-lpthread', > > > '/usr/lib64/libcpuset.so.1', '/usr/lib64/libbitmask.so.1'] and > > > --with-mpi-include=['/opt/sgi/mpt/mpt-2.12/include'] did not work > > > ************************************************************ > ******************* > > > > > > On Thu, Dec 21, 2017 at 12:07 AM, Satish Balay > wrote: > > > > > > > Its strange compiler. > > > > > > > > You can try: > > > > > > > > ./configure --with-cc=gcc --with-fc=gfortran --with-cxx=g++ > > > > --with-mpi-include=/opt/sgi/mpt/mpt-2.12/include > > > > --with-mpi-lib="-L/opt/sgi/mpt/mpt-2.12/lib -lmpi -lpthread > > > > /usr/lib64/libcpuset.so.1 /usr/lib64/libbitmask.so.1" > > > > > > > > Satish > > > > > > > > On Wed, 20 Dec 2017, Santiago Andres Triana wrote: > > > > > > > > > This is what I get: > > > > > > > > > > hpca-login:~> mpicc -show > > > > > gcc -I/opt/sgi/mpt/mpt-2.12/include -L/opt/sgi/mpt/mpt-2.12/lib > -lmpi > > > > > -lpthread /usr/lib64/libcpuset.so.1 /usr/lib64/libbitmask.so.1 > > > > > > > > > > On Wed, Dec 20, 2017 at 11:59 PM, Satish Balay > > > > wrote: > > > > > > > > > > > >>> > > > > > > Executing: mpicc -E -I/dev/shm/pbs.3111462.hpc- > > > > pbs/petsc-fdYfuH/config.setCompilers > > > > > > /dev/shm/pbs.3111462.hpc-pbs/petsc-fdYfuH/config. > > > > setCompilers/conftest.c > > > > > > stderr: > > > > > > gcc: warning: /usr/lib64/libcpuset.so.1: linker input file unused > > > > because > > > > > > linking not done > > > > > > gcc: warning: /usr/lib64/libbitmask.so.1: linker input file > unused > > > > because > > > > > > linking not done > > > > > > <<<< > > > > > > > > > > > > Looks like your mpicc is printing this verbose thing on stdout > [why is > > > > > > it doing a link check during preprocesing?] - thus confusing > PETSc > > > > > > configure. > > > > > > > > > > > > Workarround is to fix this compiler not to print such messages. > Or use > > > > > > different compilers.. 
> > > > > > > > > > > > What do you have for: > > > > > > > > > > > > mpicc -show > > > > > > > > > > > > > > > > > > Satish > > > > > > > > > > > > On Wed, 20 Dec 2017, Santiago Andres Triana wrote: > > > > > > > > > > > > > Dear petsc-users, > > > > > > > > > > > > > > I'm trying to install petsc in a cluster using SGI's MPT. The > mpicc > > > > > > > compiler is in the search path. The configure command is: > > > > > > > > > > > > > > ./configure --with-scalar-type=complex --with-mumps=1 > > > > --download-mumps > > > > > > > --download-parmetis --download-metis --download-scalapack > > > > > > > > > > > > > > However, this leads to an error (configure.log attached): > > > > > > > > > > > > > > ============================================================ > > > > > > =================== > > > > > > > Configuring PETSc to compile on your system > > > > > > > > > > > > > > ============================================================ > > > > > > =================== > > > > > > > TESTING: checkCPreprocessor from > > > > > > > config.setCompilers(config/BuildSystem/config/ > setCompilers.py:599) > > > > > > > > > > > > > > ************************************************************ > > > > > > ******************* > > > > > > > UNABLE to CONFIGURE with GIVEN OPTIONS (see > > > > configure.log for > > > > > > > details): > > > > > > > ------------------------------------------------------------ > > > > > > ------------------- > > > > > > > Cannot find a C preprocessor > > > > > > > ************************************************************ > > > > > > ******************* > > > > > > > > > > > > > > The configure.log says something about cpp32, here's the > excerpt: > > > > > > > > > > > > > > Possible ERROR while running preprocessor: exit code 256 > > > > > > > stderr: > > > > > > > gcc: error: cpp32: No such file or directory > > > > > > > > > > > > > > > > > > > > > Any ideas of what is going wrong? any help or comments are > highly > > > > > > > appreciated. Thanks in advance! > > > > > > > > > > > > > > Andres > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > -- > > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > > -- Norbert Wiener > > > > https://www.cse.buffalo.edu/~knepley/ > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: make.log Type: application/octet-stream Size: 16026 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: configure.log Type: application/octet-stream Size: 5478794 bytes Desc: not available URL: From knepley at gmail.com Thu Dec 21 07:30:35 2017 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 21 Dec 2017 08:30:35 -0500 Subject: [petsc-users] configure cannot find a c preprocessor In-Reply-To: References: <8556DEB2-0F2A-4873-92DF-965F17DAEF80@mcs.anl.gov> Message-ID: On Thu, Dec 21, 2017 at 3:28 AM, Santiago Andres Triana wrote: > There is no mpich install on this cluster... I will talk to the sysadmins > to see if this is feasible... > > On other news, configure was successful by turning off C++, however make > failed: (attached logs) > > ... 
> ------------------------------------------ > Using mpiexec: /opt/sgi/mpt/mpt-2.12/bin/mpirun > ========================================== > Building PETSc using GNU Make with 32 build threads > ========================================== > gmake[2]: Entering directory `/space/hpc-home/trianas/petsc-3.8.3' > Use "/usr/bin/gmake V=1" to see verbose compile lines, "/usr/bin/gmake > V=0" to suppress. > CLINKER /space/hpc-home/trianas/petsc-3.8.3/arch-linux2-c-debug/lib/ > libpetsc.so.3.8.3 > /sw/sdev/binutils/x86_64/2.22/bin/ld: cannot find -lcpuset.so > /sw/sdev/binutils/x86_64/2.22/bin/ld: cannot find -lbitmask.so > collect2: error: ld returned 1 exit status > gmake[2]: *** [/space/hpc-home/trianas/petsc-3.8.3/arch-linux2-c-debug/lib/libpetsc.so.3.8.3] > Error 1 > gmake[2]: Leaving directory `/space/hpc-home/trianas/petsc-3.8.3' > gmake[1]: *** [gnumake] Error 2 > gmake[1]: Leaving directory `/space/hpc-home/trianas/petsc-3.8.3' > **************************ERROR************************************* > Error during compile, check arch-linux2-c-debug/lib/petsc/conf/make.log > Send it and arch-linux2-c-debug/lib/petsc/conf/configure.log to > petsc-maint at mcs.anl.gov > ******************************************************************** > > there seems to be a problem with the libcpuset.so and libbitmask.so > libraries. Make wants lcpuset.so and lbitmask.so, which do not exist in > this system. > You can see by looking a few lines above in make.log that we preserve your input: ... /sw/sdev/intel/parallel_studio_xe_2015_update_3-pguyan/composer_xe_2015.3.187/mkl/lib/mic -L/sw/sdev/intel/parallel_studio_xe_2015_update_3-pguyan/composer_xe_2015.3.187/mkl/lib/mic -Wl,-rpath,/space/hpc-apps/sw/sdev/gcc/x86_64/4.9.2/lib -L/space/hpc-apps/sw/sdev/gcc/x86_64/4.9.2/lib -ldl -lgcc_s -ldl /usr/lib64/libcpuset.so.1 /usr/lib64/libbitmask.so.1 So its the linker complaining. Why are these libraries necessary? Also, maybe you could properly install them, meaning make a link ln -d /usr/lib64/libcpuset.so.1 /usr/lib64/libcpuset.so Thanks, Matt > On Thu, Dec 21, 2017 at 5:10 AM, Smith, Barry F. > wrote: > >> >> >> > On Dec 20, 2017, at 5:52 PM, Matthew Knepley wrote: >> > >> > On Wed, Dec 20, 2017 at 6:31 PM, Santiago Andres Triana < >> repepo at gmail.com> wrote: >> > I got a different error now... hope it's a good sign! >> > >> > hpca-login:~/petsc-3.8.3> ./configure --with-cc=gcc --with-fc=gfortran >> --with-cxx=g++ --with-mpi-include=/opt/sgi/mpt/mpt-2.12/include >> --with-mpi-lib="-L/opt/sgi/mpt/mpt-2.12/lib -lmpi -lpthread" >> LIBS="/usr/lib64/libcpuset.so.1 /usr/lib64/libbitmask.so.1" >> > ============================================================ >> =================== >> > Configuring PETSc to compile on your system >> > ============================================================ >> =================== >> > TESTING: CxxMPICheck from config.packages.MPI(config/Bui >> ldSystem/config/packages/MPI.py:351) >> ************************************************************ >> ******************* >> > UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log >> for details): >> > ------------------------------------------------------------ >> ------------------- >> > C++ error! MPI_Finalize() could not be located! >> > ************************************************************ >> ******************* >> > >> > It looks like there is crazy C++ stuff in SGI MPT. 
I can see two >> chioces: >> > >> > a) Turn off C++: --with-cxx=0 >> > >> > b) Find out what crazy C++ library MPT has and stick it in >> --with-mpi-lib >> >> 3) filter the error message as previously discussed (this time for >> C++), then one does not have "find out what crazy..." since the mpicxx >> knows it. >> >> Barry >> >> > >> > No amount of MPI optimization is worth this pain. Does your machine >> have an MPICH install? >> > >> > Thanks, >> > >> > Matt >> > >> > On Thu, Dec 21, 2017 at 12:21 AM, Satish Balay >> wrote: >> > Hm configure is misbehaving with /usr/lib64/libcpuset.so.1 notation. >> Try: >> > >> > ./configure --with-cc=gcc --with-fc=gfortran --with-cxx=g++ >> --with-mpi-include=/opt/sgi/mpt/mpt-2.12/include >> --with-mpi-lib="-L/opt/sgi/mpt/mpt-2.12/lib -lmpi -lpthread" >> LIBS="/usr/lib64/libcpuset.so.1 /usr/lib64/libbitmask.so.1" >> > >> > Satish >> > >> > >> > On Wed, 20 Dec 2017, Santiago Andres Triana wrote: >> > >> > > thanks Satish, >> > > >> > > did not work unfortunately, configure.log attached. Here's the output: >> > > >> > > hpca-login:~/petsc-3.8.3> ./configure --with-cc=gcc --with-fc=gfortran >> > > --with-cxx=g++ --with-mpi-include=/opt/sgi/mpt/mpt-2.12/include >> > > --with-mpi-lib="-L/opt/sgi/mpt/mpt-2.12/lib -lmpi -lpthread >> > > /usr/lib64/libcpuset.so.1 /usr/lib64/libbitmask.so.1" >> > > ============================================================ >> =================== >> > > Configuring PETSc to compile on your system >> > > >> > > ============================================================ >> =================== >> > > TESTING: check from >> > > config.libraries(config/BuildSystem/config/libraries.py:158) >> > > >> > > ************************************************************ >> ******************* >> > > UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log >> for >> > > details): >> > > ------------------------------------------------------------ >> ------------------- >> > > --with-mpi-lib=['-L/opt/sgi/mpt/mpt-2.12/lib', '-lmpi', '-lpthread', >> > > '/usr/lib64/libcpuset.so.1', '/usr/lib64/libbitmask.so.1'] and >> > > --with-mpi-include=['/opt/sgi/mpt/mpt-2.12/include'] did not work >> > > ************************************************************ >> ******************* >> > > >> > > On Thu, Dec 21, 2017 at 12:07 AM, Satish Balay >> wrote: >> > > >> > > > Its strange compiler. >> > > > >> > > > You can try: >> > > > >> > > > ./configure --with-cc=gcc --with-fc=gfortran --with-cxx=g++ >> > > > --with-mpi-include=/opt/sgi/mpt/mpt-2.12/include >> > > > --with-mpi-lib="-L/opt/sgi/mpt/mpt-2.12/lib -lmpi -lpthread >> > > > /usr/lib64/libcpuset.so.1 /usr/lib64/libbitmask.so.1" >> > > > >> > > > Satish >> > > > >> > > > On Wed, 20 Dec 2017, Santiago Andres Triana wrote: >> > > > >> > > > > This is what I get: >> > > > > >> > > > > hpca-login:~> mpicc -show >> > > > > gcc -I/opt/sgi/mpt/mpt-2.12/include -L/opt/sgi/mpt/mpt-2.12/lib >> -lmpi >> > > > > -lpthread /usr/lib64/libcpuset.so.1 /usr/lib64/libbitmask.so.1 >> > > > > >> > > > > On Wed, Dec 20, 2017 at 11:59 PM, Satish Balay > > >> > > > wrote: >> > > > > >> > > > > > >>> >> > > > > > Executing: mpicc -E -I/dev/shm/pbs.3111462.hpc- >> > > > pbs/petsc-fdYfuH/config.setCompilers >> > > > > > /dev/shm/pbs.3111462.hpc-pbs/petsc-fdYfuH/config. 
>> > > > setCompilers/conftest.c >> > > > > > stderr: >> > > > > > gcc: warning: /usr/lib64/libcpuset.so.1: linker input file >> unused >> > > > because >> > > > > > linking not done >> > > > > > gcc: warning: /usr/lib64/libbitmask.so.1: linker input file >> unused >> > > > because >> > > > > > linking not done >> > > > > > <<<< >> > > > > > >> > > > > > Looks like your mpicc is printing this verbose thing on stdout >> [why is >> > > > > > it doing a link check during preprocesing?] - thus confusing >> PETSc >> > > > > > configure. >> > > > > > >> > > > > > Workarround is to fix this compiler not to print such messages. >> Or use >> > > > > > different compilers.. >> > > > > > >> > > > > > What do you have for: >> > > > > > >> > > > > > mpicc -show >> > > > > > >> > > > > > >> > > > > > Satish >> > > > > > >> > > > > > On Wed, 20 Dec 2017, Santiago Andres Triana wrote: >> > > > > > >> > > > > > > Dear petsc-users, >> > > > > > > >> > > > > > > I'm trying to install petsc in a cluster using SGI's MPT. The >> mpicc >> > > > > > > compiler is in the search path. The configure command is: >> > > > > > > >> > > > > > > ./configure --with-scalar-type=complex --with-mumps=1 >> > > > --download-mumps >> > > > > > > --download-parmetis --download-metis --download-scalapack >> > > > > > > >> > > > > > > However, this leads to an error (configure.log attached): >> > > > > > > >> > > > > > > ============================================================ >> > > > > > =================== >> > > > > > > Configuring PETSc to compile on your system >> > > > > > > >> > > > > > > ============================================================ >> > > > > > =================== >> > > > > > > TESTING: checkCPreprocessor from >> > > > > > > config.setCompilers(config/BuildSystem/config/setCompilers. >> py:599) >> > > > > > > >> > > > > > > ************************************************************ >> > > > > > ******************* >> > > > > > > UNABLE to CONFIGURE with GIVEN OPTIONS (see >> > > > configure.log for >> > > > > > > details): >> > > > > > > ------------------------------------------------------------ >> > > > > > ------------------- >> > > > > > > Cannot find a C preprocessor >> > > > > > > ************************************************************ >> > > > > > ******************* >> > > > > > > >> > > > > > > The configure.log says something about cpp32, here's the >> excerpt: >> > > > > > > >> > > > > > > Possible ERROR while running preprocessor: exit code 256 >> > > > > > > stderr: >> > > > > > > gcc: error: cpp32: No such file or directory >> > > > > > > >> > > > > > > >> > > > > > > Any ideas of what is going wrong? any help or comments are >> highly >> > > > > > > appreciated. Thanks in advance! >> > > > > > > >> > > > > > > Andres >> > > > > > > >> > > > > > >> > > > > > >> > > > > >> > > > >> > > > >> > > >> > >> > >> > >> > >> > >> > -- >> > What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> > -- Norbert Wiener >> > >> > https://www.cse.buffalo.edu/~knepley/ >> >> > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From curfman at mcs.anl.gov Thu Dec 21 11:54:32 2017 From: curfman at mcs.anl.gov (McInnes, Lois Curfman) Date: Thu, 21 Dec 2017 17:54:32 +0000 Subject: [petsc-users] FW: PDE Software Frameworks 2018 -- May 28-30, Hotel Zander K, Bergen, Norway. In-Reply-To: <85aecc47-0c84-285d-b47a-7500f3dbba9c@iris.no> References: <5094f25f-2b6c-2e84-44e0-05c984558257@iris.no> <85aecc47-0c84-285d-b47a-7500f3dbba9c@iris.no> Message-ID: Dear all ? Please see below. From: Robert Kloefkorn Date: Thursday, December 21, 2017 at 11:44 AM To: "McInnes, Lois Curfman" Subject: PDE Software Frameworks 2018 -- May 28-30, Hotel Zander K, Bergen, Norway. Hi Lois, please forward this announcement to the PETSc mailing list. It seemed like I wasn't able to post messages there. Thanks and happy holidays, Robert ---------------------------------------------------------------- Dear friends and colleagues, It is a great pleasure to announce the PDE Software Frameworks (PDESoft) 2018 Conference, which will be held at the hotel Zander K (https://www.zanderk.no/en/), Bergen, Norway, May 28 - 30, 2018. The scientific committee of the conference consists of the following people: - Donna Calhoun (Boise State University) - Guido Kanschat (Heidelberg University) - Eirik Keilegavlen (University of Bergen) - Robert Kl?fkorn (International Research Institute of Stavanger) - Lois Curfman McInnes (Argonne National Laboratory) - Lawrence Mitchell (Imperial College London) - Christophe Prud'homme (University of Strasbourg) - Garth Wells (University of Cambridge) - Barbara Wohlmuth (Technical University of Munich) We welcome abstracts on topics related to software for - meshing and adaptive mesh refinement, - solvers for large systems of equations, - numerical PDE solvers, - data visualization systems, - user interfaces to scientific software, and - reproducible science. The accepted abstracts will be scheduled for either oral or poster presentations. This workshop does not publish full papers, so submission of a full paper is not required. The deadline for abstract submission is Feb. 1, 2018. For more information, please see the PDESoft18 web page: http://www.iris.no/pdesoft18/. We are looking forward to meeting you in Bergen. With best regards, Robert Kl?fkorn, on behalf of the organizing committee -- Dr. Robert Kloefkorn Senior Research Scientist | http://www.iris.no International Research Institute of Stavanger (Bergen office) Thormoehlensgt. 55 | 5006 Bergen Phone: +47 482 93 024 -------------- next part -------------- An HTML attachment was scrubbed... URL: From mhbaghaei at mail.sjtu.edu.cn Sun Dec 24 02:43:05 2017 From: mhbaghaei at mail.sjtu.edu.cn (Mohammad Hassan Baghaei) Date: Sun, 24 Dec 2017 16:43:05 +0800 (CST) Subject: [petsc-users] Using DMPlex Message-ID: <000601d37c93$1dce1c40$596a54c0$@mail.sjtu.edu.cn> Hello I am using the DMPlex interface for the solving PDEs. A part of mesh, considering, is staggered grid, at the location of middle of each edge. After generation of main grid points, I find it hard to have the staggered grid at the prescribed location. I want to know how to deal with the staggered besides of main grid. Is it better to have another DM for the staggered? Is it possible to extend the DMChart and inserting the points. I would really appreciate for your time. Amir -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From knepley at gmail.com Sun Dec 24 07:57:57 2017 From: knepley at gmail.com (Matthew Knepley) Date: Sun, 24 Dec 2017 08:57:57 -0500 Subject: [petsc-users] Using DMPlex In-Reply-To: <000601d37c93$1dce1c40$596a54c0$@mail.sjtu.edu.cn> References: <000601d37c93$1dce1c40$596a54c0$@mail.sjtu.edu.cn> Message-ID: On Sun, Dec 24, 2017 at 3:43 AM, Mohammad Hassan Baghaei < mhbaghaei at mail.sjtu.edu.cn> wrote: > Hello > > I am using the DMPlex interface for the solving PDEs. > Great. What discretization are you using? > A part of mesh, considering, is staggered grid, at the location of middle > of each edge. > So you would like to put variables at each edge midpoint? > After generation of main grid points, I find it hard to have the staggered > grid at the prescribed location. > In Plex, the topology is specified by the DMPlex, but the dof layout is specified by a PetscSection. To put variables on edges, you could use: DMGetDefaultSection(dm, &s); DMPlexGetDepthStratum(dm, 1, &eStart, &eEnd); for (e = eStart; e < eEnd; ++e) { PetscSectionAddDof(s, e, 1); } and of course any other dofs you are using. > I want to know how to deal with the staggered besides of main grid. Is it > better to have another DM for the staggered? > Another option is to use several DMDA. This has its own drawbacks. > Is it possible to extend the DMChart and inserting the points. > If the chart does not have edges, it is because it has not been interpolated. Either pass the PETSC_TRUE, or call http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/DMPLEX/DMPlexInterpolate.html Thanks, Matt > I would really appreciate for your time. > > Amir > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From dayedut123 at 163.com Mon Dec 25 00:17:26 2017 From: dayedut123 at 163.com (=?GBK?B?ztI=?=) Date: Mon, 25 Dec 2017 14:17:26 +0800 (CST) Subject: [petsc-users] Linear system solvers for unsymmetrical matrix Message-ID: <5a20043d.8454.1608c519a44.Coremail.dayedut123@163.com> Hello, I want to use PETSC to solve an unsymmetrical matrix. I find many linear system solvers in PETSC. But I don't know which one is suitable for the unsymmetrical matrix. I want to choose the best solver for my problem. Would you mind give me some alternative solvers for the unsymmetrical matrix in PETSC? Thank you very much! Daye -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Mon Dec 25 06:56:31 2017 From: bsmith at mcs.anl.gov (Smith, Barry F.) Date: Mon, 25 Dec 2017 12:56:31 +0000 Subject: [petsc-users] Linear system solvers for unsymmetrical matrix In-Reply-To: <5a20043d.8454.1608c519a44.Coremail.dayedut123@163.com> References: <5a20043d.8454.1608c519a44.Coremail.dayedut123@163.com> Message-ID: Most of the linear solvers in PETSc are suitable for non symmetric problems. You should not use KSPCG, KSPCR or PCCHOLESKY since they are only for symmetric problems. Finding the best solver for your problem depends on your problem and how large a problem you want to solve. Please tell use as much as possible 1) where the comes from, for example CFD, structural mechanics, etc 2) if it comes from an elliptic, parabolic, or hyperbolic PDE and 3) how large a problem you need to solve, 100,000 unknowns, 10 million, a billion? 
Barry > On Dec 25, 2017, at 12:17 AM, ? wrote: > > Hello, > I want to use PETSC to solve an unsymmetrical matrix. I find many linear system solvers in PETSC. But I don't know which one is suitable for the unsymmetrical matrix. I want to choose the best solver for my problem. Would you mind give me some alternative solvers for the unsymmetrical matrix in PETSC? > Thank you very much! > Daye > > > From bsmith at mcs.anl.gov Mon Dec 25 13:49:55 2017 From: bsmith at mcs.anl.gov (Smith, Barry F.) Date: Mon, 25 Dec 2017 19:49:55 +0000 Subject: [petsc-users] Linear system solvers for unsymmetrical matrix In-Reply-To: <3de0abaf.f8d8.1608e03880f.Coremail.dayedut123@163.com> References: <5a20043d.8454.1608c519a44.Coremail.dayedut123@163.com> <3de0abaf.f8d8.1608e03880f.Coremail.dayedut123@163.com> Message-ID: <130D7A41-D82A-4B66-9D11-3F304CC2F208@mcs.anl.gov> Normally for pressure Poisson's with standard discretizations such as finite differences, finite elements/volumes multigrid (either geometric or algebraic depending on the circumstances) is absolutely the go to method and it is rare to have a reason to use something else. For SPH you will need to do a literature search to see what is viable for solving the pressure Poisson. Intuitively for very large problems you will need a hierarchical solver (like multigrid) that takes advantage of the elliptic nature of the problem. Simple arguments about communication of information seem to indicate that solvers such as conjugate gradient with Jacobi preconditioner will not scale for large problems. Barry > On Dec 25, 2017, at 8:11 AM, ? wrote: > > Thanks for your reply. My research is focused on the in-compressible SPH which is kind of particle method in CFD and I want to solve the pressure Poisson's equation during the computation. I think the PDE is an elliptic one. The matrix size is as large as possible. For now, the problem is just 2 dimension and the number of unknowns is about 250,000. But in the future, it will expand to 3D and the number of unknowns will be millions. In addition, choosing the suitable preconditioner should be another key factor for the iterative algorithm, would you mind providing me some suggestions about it ? > Thank you again! > Daye > > > > > > > At 2017-12-25 19:56:31, "Smith, Barry F." wrote: > > > > Most of the linear solvers in PETSc are suitable for non symmetric problems. You should not use KSPCG, KSPCR or PCCHOLESKY since they are only for symmetric problems. > > > > Finding the best solver for your problem depends on your problem and how large a problem you want to solve. Please tell use as much as possible > > > >1) where the comes from, for example CFD, structural mechanics, etc > > > >2) if it comes from an elliptic, parabolic, or hyperbolic PDE > > > >and > > > >3) how large a problem you need to solve, 100,000 unknowns, 10 million, a billion? > > > > Barry > > > > > >> On Dec 25, 2017, at 12:17 AM, ? wrote: > >> > >> Hello, > >> I want to use PETSC to solve an unsymmetrical matrix. I find many linear system solvers in PETSC. But I don't know which one is suitable for the unsymmetrical matrix. I want to choose the best solver for my problem. Would you mind give me some alternative solvers for the unsymmetrical matrix in PETSC? > >> Thank you very much! 
> >> Daye > >> > >> > >> > > > > > > From mhbaghaei at mail.sjtu.edu.cn Tue Dec 26 00:54:19 2017 From: mhbaghaei at mail.sjtu.edu.cn (Mohammad Hassan Baghaei) Date: Tue, 26 Dec 2017 14:54:19 +0800 (CST) Subject: [petsc-users] Using DMPlex In-Reply-To: References: <000601d37c93$1dce1c40$596a54c0$@mail.sjtu.edu.cn> Message-ID: <001801d37e16$3ff57eb0$bfe07c10$@mail.sjtu.edu.cn> Hello I want to know whether is it possible that a specific field in the section have been defined at some time on edges and other times on the vertices. This change in the dof , I think, may cause problem, especially in the global vector size of the dm. At times when the field changes to be defined on the edges, I think, I need to reset the dof with routine. I know how to do this, thanks to Matt. But, how I can deal with the global vector. At first, the global vector was defined on the vertices, but by this change. How would the global vector would response? Do I need to change the global vector? Thanks Amir Hello I am using the DMPlex interface for the solving PDEs. Great. What discretization are you using? A part of mesh, considering, is staggered grid, at the location of middle of each edge. So you would like to put variables at each edge midpoint? After generation of main grid points, I find it hard to have the staggered grid at the prescribed location. In Plex, the topology is specified by the DMPlex, but the dof layout is specified by a PetscSection. To put variables on edges, you could use: DMGetDefaultSection(dm, &s); DMPlexGetDepthStratum(dm, 1, &eStart, &eEnd); for (e = eStart; e < eEnd; ++e) { PetscSectionAddDof(s, e, 1); } and of course any other dofs you are using. I want to know how to deal with the staggered besides of main grid. Is it better to have another DM for the staggered? Another option is to use several DMDA. This has its own drawbacks. Is it possible to extend the DMChart and inserting the points. If the chart does not have edges, it is because it has not been interpolated. Either pass the PETSC_TRUE, or call http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/DMPLEX/DMPlexInterpolate.html Thanks, Matt I would really appreciate for your time. Amir -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From cpraveen at gmail.com Tue Dec 26 06:05:00 2017 From: cpraveen at gmail.com (Praveen C) Date: Tue, 26 Dec 2017 17:35:00 +0530 Subject: [petsc-users] Growing memory use with VecGhostUpdate Message-ID: Dear all I have a fortran CFD code, 3d, unstructured grid. I run the following code do i=1,10000000 call VecGhostUpdateBegin(p%v_u, INSERT_VALUES, SCATTER_FORWARD, & ierr); CHKERRQ(ierr) call VecGhostUpdateEnd (p%v_u, INSERT_VALUES, SCATTER_FORWARD, & ierr); CHKERRQ(ierr) enddo and monitor memory usage using ?top?. I find that the memory used by the processes increases continuously. The vector p%v_u was created like this call VecCreateGhostBlockWithArray(PETSC_COMM_WORLD, nvar, nvar*g%nvl, & PETSC_DECIDE, g%nvg, g%vghost, & s%u(1,1), p%v_u, ierr); CHKERRQ(ierr) where I use a preallocated array s%u(nvar, g%nvl + g%nvg). I have observed this memory issue on Linux with Petsc 3.8.3 and Openmpi 3.0.0, gcc-7.2.1, gfortran-7.2.1 On my macbook using clang and gfortran, I do not see this growing memory issue. Can you suggest some way to debug this problem ? 
Thank you praveen From jed at jedbrown.org Tue Dec 26 08:03:59 2017 From: jed at jedbrown.org (Jed Brown) Date: Tue, 26 Dec 2017 07:03:59 -0700 Subject: [petsc-users] Growing memory use with VecGhostUpdate In-Reply-To: References: Message-ID: <87fu7x60xc.fsf@jedbrown.org> First check PetscMallocGetCurrentUsage() in the loop to confirm that there is no leak of PetscMalloc()'ed memory. That would mean the leak comes from elsewhere, maybe MPI. Then get a stack trace for the leaking memory (e.g., using valgrind --tool=massif or a debugger)? Praveen C writes: > Dear all > > I have a fortran CFD code, 3d, unstructured grid. > > I run the following code > > do i=1,10000000 > call VecGhostUpdateBegin(p%v_u, INSERT_VALUES, SCATTER_FORWARD, & > ierr); CHKERRQ(ierr) > call VecGhostUpdateEnd (p%v_u, INSERT_VALUES, SCATTER_FORWARD, & > ierr); CHKERRQ(ierr) > enddo > > and monitor memory usage using ?top?. I find that the memory used by the processes increases continuously. > > The vector p%v_u was created like this > > call VecCreateGhostBlockWithArray(PETSC_COMM_WORLD, nvar, nvar*g%nvl, & > PETSC_DECIDE, g%nvg, g%vghost, & > s%u(1,1), p%v_u, ierr); CHKERRQ(ierr) > > where I use a preallocated array s%u(nvar, g%nvl + g%nvg). > > I have observed this memory issue on Linux with Petsc 3.8.3 and Openmpi 3.0.0, gcc-7.2.1, gfortran-7.2.1 > > On my macbook using clang and gfortran, I do not see this growing memory issue. > > Can you suggest some way to debug this problem ? > > Thank you > praveen From knepley at gmail.com Tue Dec 26 09:03:31 2017 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 26 Dec 2017 10:03:31 -0500 Subject: [petsc-users] Using DMPlex In-Reply-To: <001801d37e16$3ff57eb0$bfe07c10$@mail.sjtu.edu.cn> References: <000601d37c93$1dce1c40$596a54c0$@mail.sjtu.edu.cn> <001801d37e16$3ff57eb0$bfe07c10$@mail.sjtu.edu.cn> Message-ID: On Tue, Dec 26, 2017 at 1:54 AM, Mohammad Hassan Baghaei < mhbaghaei at mail.sjtu.edu.cn> wrote: > Hello > > I want to know whether is it possible that a specific field in the section > have been defined at some time on edges and other times on the vertices. > What do you mean "at some time"? If you mean that some vertices have dofs, but not all, and some edges have dofs, but not all, then this is fine. Otherwise, I do not understand what you mean. If you mean your simulation is running, and then you decide that they discretization should change, you will have to recreate everything, including the PetscSection, the Vec and Mat, and solver. Thanks, Mattt > This change in the dof , I think, may cause problem, especially in the > global vector size of the dm. At times when the field changes to be defined > on the edges, I think, I need to reset the dof with routine. I know how to > do this, thanks to Matt. But, how I can deal with the global vector. At > first, the global vector was defined on the vertices, but by this change. > How would the global vector would response? Do I need to change the global > vector? > > Thanks > > Amir > > > > > > > > Hello > > I am using the DMPlex interface for the solving PDEs. > > > > Great. What discretization are you using? > > > > A part of mesh, considering, is staggered grid, at the location of middle > of each edge. > > > > So you would like to put variables at each edge midpoint? > > > > After generation of main grid points, I find it hard to have the staggered > grid at the prescribed location. > > > > In Plex, the topology is specified by the DMPlex, but the dof layout is > specified by a PetscSection. 
To put > > variables on edges, you could use: > > > > DMGetDefaultSection(dm, &s); > > DMPlexGetDepthStratum(dm, 1, &eStart, &eEnd); > > for (e = eStart; e < eEnd; ++e) { > > PetscSectionAddDof(s, e, 1); > > } > > > > and of course any other dofs you are using. > > > > I want to know how to deal with the staggered besides of main grid. Is it > better to have another DM for the staggered? > > > > Another option is to use several DMDA. This has its own drawbacks. > > > > Is it possible to extend the DMChart and inserting the points. > > > > If the chart does not have edges, it is because it has not been > interpolated. Either pass the PETSC_TRUE, or call > > > > http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/DMPLEX/ > DMPlexInterpolate.html > > > > Thanks, > > > > Matt > > > > I would really appreciate for your time. > > Amir > > > > > > -- > > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > > > https://www.cse.buffalo.edu/~knepley/ > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From cpraveen at gmail.com Tue Dec 26 09:31:07 2017 From: cpraveen at gmail.com (Praveen C) Date: Tue, 26 Dec 2017 21:01:07 +0530 Subject: [petsc-users] Growing memory use with VecGhostUpdate In-Reply-To: <87fu7x60xc.fsf@jedbrown.org> References: <87fu7x60xc.fsf@jedbrown.org> Message-ID: <22A59FBB-A0CD-4698-9BD1-3094238BBEC6@gmail.com> I did this do i=1,10000000 call VecGhostUpdateBegin(p%v_u, INSERT_VALUES, SCATTER_FORWARD, & ierr); CHKERRQ(ierr) call VecGhostUpdateEnd (p%v_u, INSERT_VALUES, SCATTER_FORWARD, & ierr); CHKERRQ(ierr) call PetscMallocGetCurrentUsage(space, ierr); CHKERRQ(ierr) if(rank==0) print*,space enddo and the value printed is zero, so this means the problem must come from mpi. Since I am not directly using MPI, what should I look for with valgrind and how will that help to solve this ? There is a recent issue related to memory leak https://github.com/open-mpi/ompi/issues/4567 Thank you praveen > On 26-Dec-2017, at 7:33 PM, Jed Brown wrote: > > First check PetscMallocGetCurrentUsage() in the loop to confirm that > there is no leak of PetscMalloc()'ed memory. That would mean the leak > comes from elsewhere, maybe MPI. > > Then get a stack trace for the leaking memory (e.g., using valgrind > --tool=massif or a debugger)? -------------- next part -------------- An HTML attachment was scrubbed... URL: From jed at jedbrown.org Tue Dec 26 11:33:18 2017 From: jed at jedbrown.org (Jed Brown) Date: Tue, 26 Dec 2017 10:33:18 -0700 Subject: [petsc-users] Growing memory use with VecGhostUpdate In-Reply-To: <22A59FBB-A0CD-4698-9BD1-3094238BBEC6@gmail.com> References: <87fu7x60xc.fsf@jedbrown.org> <22A59FBB-A0CD-4698-9BD1-3094238BBEC6@gmail.com> Message-ID: <877et95r8h.fsf@jedbrown.org> PETSc isn't calling MPI_Alloc_mem. You should run with valgrind --tool=massif or use a debugger and set a breakpoint on malloc (or possibly other allocation functions) inside that loop. If you don't want to debug it, use a different MPI. 
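Since PetscMallocGetCurrentUsage() only counts memory obtained through PetscMalloc(), one rough cross-check is to print it next to the resident set size from PetscMemoryGetCurrentUsage() inside the update loop: if only the resident size grows, the growth is coming from outside PETSc's allocator (for instance from inside the MPI library). A minimal C sketch of that comparison, with the vector name and iteration count purely illustrative:

#include <petscvec.h>

/* Print PetscMalloc()'ed bytes and the process resident set size after each
   ghost update; if only the resident size climbs, the leak is outside
   PETSc's allocator (e.g. inside the MPI implementation). */
PetscErrorCode CheckGhostUpdateMemory(Vec u)
{
  PetscErrorCode ierr;
  PetscLogDouble mal,rss;
  PetscInt       i;

  PetscFunctionBeginUser;
  for (i=0; i<1000; i++) {
    ierr = VecGhostUpdateBegin(u,INSERT_VALUES,SCATTER_FORWARD);CHKERRQ(ierr);
    ierr = VecGhostUpdateEnd(u,INSERT_VALUES,SCATTER_FORWARD);CHKERRQ(ierr);
    ierr = PetscMallocGetCurrentUsage(&mal);CHKERRQ(ierr);
    ierr = PetscMemoryGetCurrentUsage(&rss);CHKERRQ(ierr);
    ierr = PetscPrintf(PETSC_COMM_WORLD,"iter %D: PetscMalloc %g bytes, resident %g bytes\n",i,mal,rss);CHKERRQ(ierr);
  }
  PetscFunctionReturn(0);
}
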
Praveen C writes: > I did this > > do i=1,10000000 > call VecGhostUpdateBegin(p%v_u, INSERT_VALUES, SCATTER_FORWARD, & > ierr); CHKERRQ(ierr) > call VecGhostUpdateEnd (p%v_u, INSERT_VALUES, SCATTER_FORWARD, & > ierr); CHKERRQ(ierr) > call PetscMallocGetCurrentUsage(space, ierr); CHKERRQ(ierr) > if(rank==0) print*,space > enddo > > and the value printed is zero, so this means the problem must come from mpi. Since I am not directly using MPI, what should I look for with valgrind and how will that help to solve this ? > > There is a recent issue related to memory leak > > https://github.com/open-mpi/ompi/issues/4567 > > Thank you > praveen > >> On 26-Dec-2017, at 7:33 PM, Jed Brown wrote: >> >> First check PetscMallocGetCurrentUsage() in the loop to confirm that >> there is no leak of PetscMalloc()'ed memory. That would mean the leak >> comes from elsewhere, maybe MPI. >> >> Then get a stack trace for the leaking memory (e.g., using valgrind >> --tool=massif or a debugger)? From cpraveen at gmail.com Wed Dec 27 03:37:46 2017 From: cpraveen at gmail.com (Praveen C) Date: Wed, 27 Dec 2017 15:07:46 +0530 Subject: [petsc-users] PetscOptionsAllUsed in fortran Message-ID: Hello I want to use this http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Sys/PetscOptionsAllUsed.html in a fortran code like this call PetscOptionsAllUsed(PETSC_NULL_OPTIONS, nunused, ierr); CHKERRQ(ierr) if(nunused > 0)then write(*,*)'Some command line options not used' ierr = 1 endif but I cannot compile the code Undefined symbols for architecture x86_64: "_petscoptionsallused_", referenced from: _readparam_ in ccHwkiA6.o ld: symbol(s) not found for architecture x86_64 collect2: error: ld returned 1 exit status make[1]: *** [ug3] Error 1 make: *** [euler] Error 2 Is this not implemented for fortran ? Is there any other way to detect some unused command line arguments ? If some argument is mis-spelled, it will be silently ignored, and I want to detect this case. Thanks praveen From bsmith at mcs.anl.gov Wed Dec 27 08:53:38 2017 From: bsmith at mcs.anl.gov (Smith, Barry F.) Date: Wed, 27 Dec 2017 14:53:38 +0000 Subject: [petsc-users] PetscOptionsAllUsed in fortran In-Reply-To: References: Message-ID: Here is a patch to provide this for PETSc version 3.8.x Apply with patch -p1 < petscoptionsallused.patch make gnumake It is also in the maint and master git branches of PETSc and will be in the next patch release. Barry > On Dec 27, 2017, at 3:37 AM, Praveen C wrote: > > Hello > > I want to use this > > http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Sys/PetscOptionsAllUsed.html > > in a fortran code like this > > call PetscOptionsAllUsed(PETSC_NULL_OPTIONS, nunused, ierr); CHKERRQ(ierr) > if(nunused > 0)then > write(*,*)'Some command line options not used' > ierr = 1 > endif > > but I cannot compile the code > > Undefined symbols for architecture x86_64: > "_petscoptionsallused_", referenced from: > _readparam_ in ccHwkiA6.o > ld: symbol(s) not found for architecture x86_64 > collect2: error: ld returned 1 exit status > make[1]: *** [ug3] Error 1 > make: *** [euler] Error 2 > > Is this not implemented for fortran ? > > Is there any other way to detect some unused command line arguments ? If some argument is mis-spelled, it will be silently ignored, and I want to detect this case. > > Thanks > praveen -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: petscoptionsallused.patch Type: application/octet-stream Size: 3304 bytes Desc: petscoptionsallused.patch URL: From daniel.s.kokron at nasa.gov Wed Dec 27 14:53:14 2017 From: daniel.s.kokron at nasa.gov (Kokron, Daniel S. (ARC-606.2)[CSRA, LLC]) Date: Wed, 27 Dec 2017 20:53:14 +0000 Subject: [petsc-users] coding VecMDot_Seq as gemv Message-ID: <7D1EEC03AA242A48BD7620803F67F9AD04E9ED6F@NDMSMBX304.ndc.nasa.gov> I am looking into ways to improve performance of the VecMDot_Seq routine. I am focusing on the variant that gets called when PETSC_THREADCOMM_ACTIVE and PETSC_USE_FORTRAN_KERNEL_MDOT are NOT defined. My current version of PETSc is 3.4.5 due solely to user requirement. I am linking against MKL. I tried and failed to implement VecMDot_Seq as a call to cblas_dgemv in ~/mpi/pvec2.c cblas_dgemv(CblasRowMajor, CblasNoTrans, nv, n, 1., b, n, xbase, 1, 0., work, 1); I could not figure out a way to extract the vectors from 'Vec y[]' and store them as rows of an allocated array. This user post starts off with a similar request (how to construct a matrix from many vectors) https://lists.mcs.anl.gov/pipermail/petsc-users/2015-August/026848.html I understand that this sort of memory shuffling is expensive. I was just hoping to prove the point to myself that it's possible. The action performed by VecMDot_Seq is the same as matrix-vector multiplication, so I was wondering why it wasn't implemented as a call ?gemv? Daniel Kokron NASA Ames (ARC-TN) SciCon group -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Wed Dec 27 14:58:41 2017 From: bsmith at mcs.anl.gov (Smith, Barry F.) Date: Wed, 27 Dec 2017 20:58:41 +0000 Subject: [petsc-users] coding VecMDot_Seq as gemv In-Reply-To: <7D1EEC03AA242A48BD7620803F67F9AD04E9ED6F@NDMSMBX304.ndc.nasa.gov> References: <7D1EEC03AA242A48BD7620803F67F9AD04E9ED6F@NDMSMBX304.ndc.nasa.gov> Message-ID: <2F9EF4B4-E623-4D92-8438-82DD735E50DE@mcs.anl.gov> > On Dec 27, 2017, at 2:53 PM, Kokron, Daniel S. (ARC-606.2)[CSRA, LLC] wrote: > > I am looking into ways to improve performance of the VecMDot_Seq routine. I am focusing on the variant that gets called when PETSC_THREADCOMM_ACTIVE and PETSC_USE_FORTRAN_KERNEL_MDOT are NOT defined. > > My current version of PETSc is 3.4.5 due solely to user requirement. I am linking against MKL. > > I tried and failed to implement VecMDot_Seq as a call to cblas_dgemv in ~/mpi/pvec2.c > > cblas_dgemv(CblasRowMajor, CblasNoTrans, nv, n, 1., b, n, xbase, 1, 0., work, 1); > > I could not figure out a way to extract the vectors from 'Vec y[]' and store them as rows of an allocated array. Use VecGetArray on each one then do a memcpy of those values into the big allocated array (or a loop copy or whatever). > > This user post starts off with a similar request (how to construct a matrix from many vectors) > https://lists.mcs.anl.gov/pipermail/petsc-users/2015-August/026848.html > > I understand that this sort of memory shuffling is expensive. I was just hoping to prove the point to myself that it's possible. Yes the memory shuffling is possible and not that difficult, perhaps you just have the rows and columns backwards? > > The action performed by VecMDot_Seq is the same as matrix-vector multiplication, so I was wondering why it wasn't implemented as a call ?gemv? The reason is that we want each of the vectors to be independent of the other ones; to use gemv they have be shared in a single common array. 
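For anyone who wants to try the packed-buffer route anyway, the copy Barry describes amounts to making each y[i] one row of a temporary row-major nv-by-n array and then calling dgemv once on it. A sketch for real scalars is below (illustrative only; it assumes MKL's CBLAS header is available and indexes the buffer with i*n+j instead of advancing the pointer, which sidesteps the pointer-reset problem that comes up later in this thread):

#include <petscvec.h>
#include <mkl_cblas.h>

/* work[i] = y[i] . xin for i = 0..nv-1, via one row-major dgemv.
   Real scalars assumed; arguments mirror VecMDot(). */
static PetscErrorCode MDotViaGemv(Vec xin,PetscInt nv,const Vec y[],PetscScalar *work)
{
  PetscErrorCode    ierr;
  PetscInt          n,i,j;
  PetscScalar       *b;
  const PetscScalar *a,*xbase;

  PetscFunctionBeginUser;
  ierr = VecGetLocalSize(xin,&n);CHKERRQ(ierr);
  ierr = PetscMalloc(nv*n*sizeof(PetscScalar),&b);CHKERRQ(ierr);
  for (i=0; i<nv; i++) {                      /* row i of b <- y[i] */
    ierr = VecGetArrayRead(y[i],&a);CHKERRQ(ierr);
    for (j=0; j<n; j++) b[i*n+j] = a[j];
    ierr = VecRestoreArrayRead(y[i],&a);CHKERRQ(ierr);
  }
  ierr = VecGetArrayRead(xin,&xbase);CHKERRQ(ierr);
  cblas_dgemv(CblasRowMajor,CblasNoTrans,nv,n,1.0,b,n,xbase,1,0.0,work,1);
  ierr = VecRestoreArrayRead(xin,&xbase);CHKERRQ(ierr);
  ierr = PetscFree(b);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}
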
I doubt that copying into a single array and calling the gemv will ever be faster. Better to just try to optimize/vectorize better the current code. > > Daniel Kokron > NASA Ames (ARC-TN) > SciCon group From daniel.s.kokron at nasa.gov Wed Dec 27 16:24:31 2017 From: daniel.s.kokron at nasa.gov (Kokron, Daniel S. (ARC-606.2)[CSRA, LLC]) Date: Wed, 27 Dec 2017 22:24:31 +0000 Subject: [petsc-users] coding VecMDot_Seq as gemv In-Reply-To: <2F9EF4B4-E623-4D92-8438-82DD735E50DE@mcs.anl.gov> References: <7D1EEC03AA242A48BD7620803F67F9AD04E9ED6F@NDMSMBX304.ndc.nasa.gov>, <2F9EF4B4-E623-4D92-8438-82DD735E50DE@mcs.anl.gov> Message-ID: <7D1EEC03AA242A48BD7620803F67F9AD04E9ED85@NDMSMBX304.ndc.nasa.gov> Barry, Thanks for the reply. Here is my attempt. Do you see anything wrong with it? #if 1 #include PetscInt n = xin->map->n,i,j; const PetscScalar *xbase,*a; PetscScalar *b; Vec *yy; yy = (Vec*)y; ierr = PetscMalloc(n*nv*sizeof(PetscScalar),&b);CHKERRQ(ierr); for(i=0; i On Dec 27, 2017, at 2:53 PM, Kokron, Daniel S. (ARC-606.2)[CSRA, LLC] wrote: > > I am looking into ways to improve performance of the VecMDot_Seq routine. I am focusing on the variant that gets called when PETSC_THREADCOMM_ACTIVE and PETSC_USE_FORTRAN_KERNEL_MDOT are NOT defined. > > My current version of PETSc is 3.4.5 due solely to user requirement. I am linking against MKL. > > I tried and failed to implement VecMDot_Seq as a call to cblas_dgemv in ~/mpi/pvec2.c > > cblas_dgemv(CblasRowMajor, CblasNoTrans, nv, n, 1., b, n, xbase, 1, 0., work, 1); > > I could not figure out a way to extract the vectors from 'Vec y[]' and store them as rows of an allocated array. Use VecGetArray on each one then do a memcpy of those values into the big allocated array (or a loop copy or whatever). > > This user post starts off with a similar request (how to construct a matrix from many vectors) > https://lists.mcs.anl.gov/pipermail/petsc-users/2015-August/026848.html > > I understand that this sort of memory shuffling is expensive. I was just hoping to prove the point to myself that it's possible. Yes the memory shuffling is possible and not that difficult, perhaps you just have the rows and columns backwards? > > The action performed by VecMDot_Seq is the same as matrix-vector multiplication, so I was wondering why it wasn't implemented as a call ?gemv? The reason is that we want each of the vectors to be independent of the other ones; to use gemv they have be shared in a single common array. I doubt that copying into a single array and calling the gemv will ever be faster. Better to just try to optimize/vectorize better the current code. > > Daniel Kokron > NASA Ames (ARC-TN) > SciCon group From bsmith at mcs.anl.gov Wed Dec 27 17:32:23 2017 From: bsmith at mcs.anl.gov (Smith, Barry F.) Date: Wed, 27 Dec 2017 23:32:23 +0000 Subject: [petsc-users] coding VecMDot_Seq as gemv In-Reply-To: <7D1EEC03AA242A48BD7620803F67F9AD04E9ED85@NDMSMBX304.ndc.nasa.gov> References: <7D1EEC03AA242A48BD7620803F67F9AD04E9ED6F@NDMSMBX304.ndc.nasa.gov> <2F9EF4B4-E623-4D92-8438-82DD735E50DE@mcs.anl.gov> <7D1EEC03AA242A48BD7620803F67F9AD04E9ED85@NDMSMBX304.ndc.nasa.gov> Message-ID: <22413173-352A-4B1A-95EE-9303B0A57014@mcs.anl.gov> > On Dec 27, 2017, at 4:24 PM, Kokron, Daniel S. (ARC-606.2)[CSRA, LLC] wrote: > > Barry, > > Thanks for the reply. > > Here is my attempt. Do you see anything wrong with it? 
> > #if 1 > #include > PetscInt n = xin->map->n,i,j; > const PetscScalar *xbase,*a; > PetscScalar *b; > Vec *yy; > yy = (Vec*)y; > > ierr = PetscMalloc(n*nv*sizeof(PetscScalar),&b);CHKERRQ(ierr); > for(i=0; i ierr = VecGetArrayRead(yy[i],&a);CHKERRQ(ierr); > for(j=0; j b[j] = a[j]; > } > b += n; > ierr = VecRestoreArrayRead(yy[i],&a);CHKERRQ(ierr); > } > ierr = VecGetArrayRead(xin,&xbase);CHKERRQ(ierr); You've been incrementing b but now you use it as if it was the original value you malloced. You need to, for example, decrement it by n*nv > cblas_dgemv(CblasRowMajor, CblasNoTrans, nv, n, 1., b, n, xbase, 1, 0., work, 1); > ierr = VecRestoreArrayRead(xin,&xbase);CHKERRQ(ierr); > for(i=0; i PetscPrintf(PETSC_COMM_SELF,"VecMDot_MPI: %D %.16f\n",i,work[i]); > } > ierr = PetscFree(b);CHKERRQ(ierr); > #else > ierr = VecMDot_Seq(xin,nv,y,work);CHKERRQ(ierr); > PetscInt i; > for(i=0; i PetscPrintf(PETSC_COMM_SELF,"VecMDot_MPI: %D %.16f\n",i,work[i]); > } > #endif > > > Daniel Kokron > NASA Ames (ARC-TN) > SciCon group > ________________________________________ > From: Smith, Barry F. [bsmith at mcs.anl.gov] > Sent: Wednesday, December 27, 2017 14:58 > To: Kokron, Daniel S. (ARC-606.2)[CSRA, LLC] > Cc: petsc-users at mcs.anl.gov > Subject: Re: [petsc-users] coding VecMDot_Seq as gemv > >> On Dec 27, 2017, at 2:53 PM, Kokron, Daniel S. (ARC-606.2)[CSRA, LLC] wrote: >> >> I am looking into ways to improve performance of the VecMDot_Seq routine. I am focusing on the variant that gets called when PETSC_THREADCOMM_ACTIVE and PETSC_USE_FORTRAN_KERNEL_MDOT are NOT defined. >> >> My current version of PETSc is 3.4.5 due solely to user requirement. I am linking against MKL. >> >> I tried and failed to implement VecMDot_Seq as a call to cblas_dgemv in ~/mpi/pvec2.c >> >> cblas_dgemv(CblasRowMajor, CblasNoTrans, nv, n, 1., b, n, xbase, 1, 0., work, 1); >> >> I could not figure out a way to extract the vectors from 'Vec y[]' and store them as rows of an allocated array. > > Use VecGetArray on each one then do a memcpy of those values into the big allocated array (or a loop copy or whatever). >> >> This user post starts off with a similar request (how to construct a matrix from many vectors) >> https://lists.mcs.anl.gov/pipermail/petsc-users/2015-August/026848.html >> >> I understand that this sort of memory shuffling is expensive. I was just hoping to prove the point to myself that it's possible. > > Yes the memory shuffling is possible and not that difficult, perhaps you just have the rows and columns backwards? > >> >> The action performed by VecMDot_Seq is the same as matrix-vector multiplication, so I was wondering why it wasn't implemented as a call ?gemv? > > The reason is that we want each of the vectors to be independent of the other ones; to use gemv they have be shared in a single common array. > > I doubt that copying into a single array and calling the gemv will ever be faster. Better to just try to optimize/vectorize better the current code. > > >> >> Daniel Kokron >> NASA Ames (ARC-TN) >> SciCon group > From repepo at gmail.com Sun Dec 31 08:18:23 2017 From: repepo at gmail.com (Santiago Andres Triana) Date: Sun, 31 Dec 2017 15:18:23 +0100 Subject: [petsc-users] quad precision solvers Message-ID: Hi petsc-users, What solvers (either petsc-native or external packages) are available for quad precision (i.e. __float128) computations? I am dealing with a large (1e6 x 1e6), sparse, complex-valued, non-hermitian, and non-symmetric generalized eigenvalue problem. 
So far I have been using mumps (Krylov-Schur) with double precision but I'd like to have an idea about the rounding off errors I might be incurring.

Thanks in advance for any comments!
Cheers, Andres.

From jed at jedbrown.org  Sun Dec 31 09:10:54 2017
From: jed at jedbrown.org (Jed Brown)
Date: Sun, 31 Dec 2017 08:10:54 -0700
Subject: [petsc-users] quad precision solvers
In-Reply-To: 
References: 
Message-ID: <87incn0w75.fsf@jedbrown.org>

Santiago Andres Triana writes:

> Hi petsc-users,
>
> What solvers (either petsc-native or external packages) are available for
> quad precision (i.e. __float128) computations?

All native solvers and none of the external packages.

> I am dealing with a large (1e6 x 1e6), sparse, complex-valued,
> non-hermitian, and non-symmetric generalized eigenvalue problem. So
> far I have been using mumps (Krylov-Schur) with double precision but
> I'd like to have an idea about the rounding off errors I might be
> incurring.
>
> Thanks in advance for any comments!
> Cheers, Andres.
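For reference, quad precision is selected when PETSc is configured, along the lines of

./configure --with-precision=__float128 --download-f2cblaslapack

(the exact flags should be checked against the installation docs for the version in use; the f2cblaslapack download is the usual way to get a BLAS/LAPACK that handles __float128, and whether a complex-valued build is supported in this mode also needs to be verified). Since no external packages work in quad precision, the MUMPS-based factorization would have to be replaced by a native PETSc solver in such a build.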