From sblondel at utk.edu Tue Dec 1 15:14:09 2020
From: sblondel at utk.edu (Blondel, Sophie)
Date: Tue, 1 Dec 2020 21:14:09 +0000
Subject: [petsc-users] TSSetEventHandler and TSSetPostEventIntervalStep
References: <52CD65C8-DB69-4799-ACC7-0B2E5C32FE54@petsc.dev>

Hi again Barry and Matt,

Just wanted to check the status of the TSSetEventHandler fix. Let me know if I can help in any way.

Cheers,
Sophie
________________________________
From: Blondel, Sophie
Sent: Monday, November 16, 2020 11:38
To: Barry Smith; Matthew Knepley
Cc: petsc-users at mcs.anl.gov; xolotl-psi-development at lists.sourceforge.net
Subject: Re: [petsc-users] TSSetEventHandler and TSSetPostEventIntervalStep

Hi Matt and Barry,

I wanted to check if you wanted me to test anything with the branch from Barry; it was not clear from the previous emails.

Cheers,
Sophie
________________________________
From: Barry Smith
Sent: Tuesday, October 27, 2020 17:01
To: Matthew Knepley
Cc: Blondel, Sophie; petsc-users at mcs.anl.gov; xolotl-psi-development at lists.sourceforge.net
Subject: Re: [petsc-users] TSSetEventHandler and TSSetPostEventIntervalStep

Pushed

On Oct 27, 2020, at 3:41 PM, Matthew Knepley wrote:

On Tue, Oct 27, 2020 at 4:24 PM Barry Smith wrote:

I'm sorry, the code is still fundamentally broken. I know I promised a long time ago to fix it all up, but it is actually pretty hard to get right. It detects the zero crossing by finding a small function value, when it should detect it by finding a small region where the function changes sign; surprisingly, the code is so hardwired to the size test that fixing it and testing the new code has been very difficult for me.

My branch is barry/2019-08-18/fix-tsevent-posteventdt

Barry, I do not see this branch on gitlab. Can you give a URL?

Thanks,
Matt

Barry

On Oct 27, 2020, at 3:02 PM, Blondel, Sophie via petsc-users wrote:

Hi Matt,

With the ex40 I attached in my previous email, here is what I get printed on screen when running "./ex40 -ts_monitor -ts_event_monitor":

0 TS dt 0.1 time 0.
1 TS dt 0.5 time 0.1
2 TS dt 0.5 time 0.6
3 TS dt 0.5 time 1.1
4 TS dt 0.5 time 1.6
5 TS dt 0.5 time 2.1
6 TS dt 0.5 time 2.6
7 TS dt 0.5 time 3.1
8 TS dt 0.5 time 3.6
9 TS dt 0.5 time 4.1
10 TS dt 0.5 time 4.6
11 TS dt 0.5 time 5.1
12 TS dt 0.5 time 5.6
13 TS dt 0.5 time 6.1
14 TS dt 0.5 time 6.6
15 TS dt 0.5 time 7.1
TSEvent: Event 0 zero crossing at time 7.6 located in 0 iterations
Ball hit the ground at t = 7.60 seconds
16 TS dt 0.5 time 7.6
17 TS dt 0.5 time 8.1
18 TS dt 0.5 time 8.6
19 TS dt 0.5 time 9.1
20 TS dt 0.5 time 9.6
21 TS dt 0.5 time 10.1
22 TS dt 0.5 time 10.6
23 TS dt 0.5 time 11.1
24 TS dt 0.5 time 11.6
25 TS dt 0.5 time 12.1
26 TS dt 0.5 time 12.6
27 TS dt 0.5 time 13.1
28 TS dt 0.5 time 13.6
29 TS dt 0.5 time 14.1
30 TS dt 0.5 time 14.6
31 TS dt 0.5 time 15.1
32 TS dt 0.5 time 15.6
33 TS dt 0.5 time 16.1
34 TS dt 0.5 time 16.6
35 TS dt 0.5 time 17.1
36 TS dt 0.5 time 17.6
37 TS dt 0.5 time 18.1
38 TS dt 0.5 time 18.6
39 TS dt 0.5 time 19.1
40 TS dt 0.5 time 19.6
41 TS dt 0.5 time 20.1
42 TS dt 0.5 time 20.6
43 TS dt 0.5 time 21.1
44 TS dt 0.5 time 21.6
45 TS dt 0.5 time 22.1
46 TS dt 0.5 time 22.6
47 TS dt 0.5 time 23.1
48 TS dt 0.5 time 23.6
49 TS dt 0.5 time 24.1
50 TS dt 0.5 time 24.6
51 TS dt 0.5 time 25.1
TSEvent: Event 0 zero crossing at time 25.6 located in 0 iterations
Ball hit the ground at t = 25.60 seconds
52 TS dt 0.5 time 25.6
53 TS dt 0.5 time 26.1
54 TS dt 0.5 time 26.6
55 TS dt 0.5 time 27.1
56 TS dt 0.5 time 27.6
57 TS dt 0.5 time 28.1
58 TS dt 0.5 time 28.6
59 TS dt 0.5 time 29.1
60 TS dt 0.5 time 29.6
61 TS dt 0.5 time 30.1
0 TS dt 0.1 time 0.
1 TS dt 0.5 time 0.1
2 TS dt 0.5 time 0.6
3 TS dt 0.5 time 1.1
4 TS dt 0.5 time 1.6
5 TS dt 0.5 time 2.1
6 TS dt 0.5 time 2.6
7 TS dt 0.5 time 3.1
8 TS dt 0.5 time 3.6
9 TS dt 0.5 time 4.1
10 TS dt 0.5 time 4.6
11 TS dt 0.5 time 5.1
12 TS dt 0.5 time 5.6
13 TS dt 0.5 time 6.1
14 TS dt 0.5 time 6.6
15 TS dt 0.5 time 7.1
16 TS dt 0.5 time 7.6
17 TS dt 0.5 time 8.1
18 TS dt 0.5 time 8.6
19 TS dt 0.5 time 9.1
20 TS dt 0.5 time 9.6
21 TS dt 0.5 time 10.1
22 TS dt 0.5 time 10.6
23 TS dt 0.5 time 11.1
24 TS dt 0.5 time 11.6
25 TS dt 0.5 time 12.1
26 TS dt 0.5 time 12.6
TSEvent: Event 0 zero crossing at time 13.1 located in 0 iterations
Ball hit the ground at t = 13.10 seconds
27 TS dt 0.5 time 13.1
28 TS dt 0.5 time 13.6
29 TS dt 0.5 time 14.1
30 TS dt 0.5 time 14.6
31 TS dt 0.5 time 15.1
32 TS dt 0.5 time 15.6
33 TS dt 0.5 time 16.1
34 TS dt 0.5 time 16.6
35 TS dt 0.5 time 17.1
36 TS dt 0.5 time 17.6
37 TS dt 0.5 time 18.1
38 TS dt 0.5 time 18.6
39 TS dt 0.5 time 19.1
40 TS dt 0.5 time 19.6
41 TS dt 0.5 time 20.1
42 TS dt 0.5 time 20.6
43 TS dt 0.5 time 21.1
44 TS dt 0.5 time 21.6
45 TS dt 0.5 time 22.1
46 TS dt 0.5 time 22.6
47 TS dt 0.5 time 23.1
TSEvent: Event 0 zero crossing at time 23.6 located in 0 iterations
Ball hit the ground at t = 23.60 seconds
48 TS dt 0.5 time 23.6
49 TS dt 0.5 time 24.1
50 TS dt 0.5 time 24.6
51 TS dt 0.5 time 25.1
52 TS dt 0.5 time 25.6
53 TS dt 0.5 time 26.1
TSEvent: Event 0 zero crossing at time 26.6 located in 0 iterations
Ball hit the ground at t = 26.60 seconds
54 TS dt 0.5 time 26.6
55 TS dt 0.5 time 27.1
56 TS dt 0.5 time 27.6
57 TS dt 0.5 time 28.1
58 TS dt 0.5 time 28.6
59 TS dt 0.5 time 29.1
60 TS dt 0.5 time 29.6
61 TS dt 0. time 30.1

I don't see the 0.001 timestep here; do you get a different behavior?

Thank you,
Sophie
________________________________
From: Matthew Knepley
Sent: Tuesday, October 27, 2020 15:34
To: Blondel, Sophie
Cc: petsc-users at mcs.anl.gov; xolotl-psi-development at lists.sourceforge.net
Subject: Re: [petsc-users] TSSetEventHandler and TSSetPostEventIntervalStep

[External Email]

On Tue, Oct 27, 2020 at 3:09 PM Blondel, Sophie via petsc-users wrote:

Hi,

I am currently using TSSetEventHandler in my code to detect a random event where the solution vector gets modified during the event. Ideally, after the event happens I want the solver to use a much smaller timestep using TSSetPostEventIntervalStep. However, when I use TSSetPostEventIntervalStep the solver doesn't use the set value. I managed to reproduce the behavior by modifying ex40.c as attached.

I stepped through ex40, and it does indeed change the timestep to 0.001. Can you be more specific, perhaps with monitors, about what you think is wrong?

Thanks,
Matt

I think the issue is related to the fact that the fvalue is not technically "approaching" 0 with a random event; it is more of a step function instead. Do you have any recommendation on how to implement the behavior I'm looking for? Let me know if I can provide additional information.

Best,
Sophie

--
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener
https://www.cse.buffalo.edu/~knepley/
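For readers following the thread, a minimal sketch of the setup under discussion, modeled loosely on the bouncing-ball pattern of ex40 (the function bodies, the direction/terminate choices, and the 0.001 post-event step are illustrative, not Sophie's actual code):

    #include <petscts.h>

    /* Event function: PETSc watches fvalue[0] for a zero crossing. */
    static PetscErrorCode EventFunction(TS ts, PetscReal t, Vec U, PetscScalar *fvalue, void *ctx)
    {
      const PetscScalar *u;
      PetscErrorCode     ierr;

      ierr = VecGetArrayRead(U, &u);CHKERRQ(ierr);
      fvalue[0] = u[0];                     /* height of the ball */
      ierr = VecRestoreArrayRead(U, &u);CHKERRQ(ierr);
      return 0;
    }

    /* Post-event function: modify the solution once the event has fired. */
    static PetscErrorCode PostEventFunction(TS ts, PetscInt nevents, PetscInt event_list[], PetscReal t, Vec U, PetscBool forwardsolve, void *ctx)
    {
      PetscScalar   *u;
      PetscErrorCode ierr;

      ierr = VecGetArray(U, &u);CHKERRQ(ierr);
      u[1] = -0.9*u[1];                     /* bounce: reverse and damp the velocity */
      ierr = VecRestoreArray(U, &u);CHKERRQ(ierr);
      return 0;
    }

    /* ... after the usual TSCreate()/TSSetRHSFunction()/TSSetTimeStep() setup: */
    PetscInt  direction[1] = {-1};          /* detect only positive-to-negative crossings */
    PetscBool terminate[1] = {PETSC_FALSE};
    ierr = TSSetEventHandler(ts, 1, direction, terminate, EventFunction, PostEventFunction, NULL);CHKERRQ(ierr);
    ierr = TSSetPostEventIntervalStep(ts, 0.001);CHKERRQ(ierr);

The behavior being questioned above is whether the first step after each "Ball hit the ground" line uses the 0.001 set by TSSetPostEventIntervalStep(); in the monitor output it stays at 0.5.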
From bhatiamanav at gmail.com Wed Dec 2 15:54:23 2020
From: bhatiamanav at gmail.com (Manav Bhatia)
Date: Wed, 2 Dec 2020 15:54:23 -0600
Subject: [petsc-users] Eigensolution of Dirichlet Constrained problems
Message-ID: <3EC52613-63DC-4903-BB23-D2A4E7A2E946@gmail.com>

Hi,

When solving an eigenproblem with Dirichlet constraints on some of the DoFs, I have so far been creating a new matrix including only the unconstrained rows/columns and then calling the eigensolver on that. This works without issues.

I am writing to check if there are other recommended ways to handle such constraints in SLEPc.

Thanks,
Manav

From jroman at dsic.upv.es Thu Dec 3 01:53:49 2020
From: jroman at dsic.upv.es (Jose E. Roman)
Date: Thu, 3 Dec 2020 08:53:49 +0100
Subject: [petsc-users] Eigensolution of Dirichlet Constrained problems
In-Reply-To: <3EC52613-63DC-4903-BB23-D2A4E7A2E946@gmail.com>
Message-ID: <65A23E16-231C-4D2E-89D7-FC4B09267AE1@dsic.upv.es>

Removing rows and columns is the recommended way. An alternative is to assemble the matrix with those rows and columns included, then use MatZeroRows() or MatZeroRowsColumns() to zero out the constrained rows and set a value alpha in the diagonal entry of each such row. In this way, alpha becomes an eigenvalue of the new matrix. The value alpha must be chosen carefully: if it is close to the wanted eigenvalues, then it might be returned as a computed eigenvalue. Normally one would set it far away from the wanted eigenvalues, but care must be taken not to expand the range of the spectrum, because this may have an impact on convergence or conditioning.

Jose

> On 2 Dec 2020, at 22:54, Manav Bhatia wrote:
> [...]
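As an illustration of the alternative Jose describes, a sketch in SLEPc terms (K, dirichletIS, and the value of alpha are stand-ins for the reader's own data; error checking abbreviated):

    /* K: assembled operator including the constrained rows/columns;
       dirichletIS: global indices of the Dirichlet dofs */
    PetscScalar    alpha = 1.0e8;  /* becomes an eigenvalue; keep it far from the wanted ones */
    EPS            eps;
    PetscErrorCode ierr;

    ierr = MatZeroRowsColumnsIS(K, dirichletIS, alpha, NULL, NULL);CHKERRQ(ierr);

    ierr = EPSCreate(PETSC_COMM_WORLD, &eps);CHKERRQ(ierr);
    ierr = EPSSetOperators(eps, K, NULL);CHKERRQ(ierr);
    ierr = EPSSetProblemType(eps, EPS_HEP);CHKERRQ(ierr);
    ierr = EPSSetWhichEigenpairs(eps, EPS_SMALLEST_REAL);CHKERRQ(ierr); /* alpha = 1e8 stays out of this set */
    ierr = EPSSetFromOptions(eps);CHKERRQ(ierr);
    ierr = EPSSolve(eps);CHKERRQ(ierr);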
From wangyijia at lsec.cc.ac.cn Thu Dec 3 03:19:37 2020
From: wangyijia at lsec.cc.ac.cn (Wang Yijia)
Date: Thu, 3 Dec 2020 17:19:37 +0800 (GMT+08:00)
Subject: [petsc-users] Questions On CGS iteration refinement in GMRES
Message-ID: <2a1d4f12.f679.17627e65f7b.Coremail.wangyijia@lsec.cc.ac.cn>

To whom it may concern:

Hi, my name is Wang Yijia, from the Chinese Academy of Sciences. Recently I've been working on some sparse linear system baseline tests using PETSc. When using the GMRES methods in KSP, I found that the option -ksp_gmres_cgs_refinement_type refine_always worked well on my sparse linear systems; however, I cannot find more about how this refinement is done in the Hessenberg matrix generation (the classical Gram-Schmidt process). I am curious about how the CGS iterative refinement works in the whole KSP solve stage. I would appreciate it if you could offer some help.

Yours sincerely,
Wang Yijia
2020/12/3

From knepley at gmail.com Thu Dec 3 12:35:55 2020
From: knepley at gmail.com (Matthew Knepley)
Date: Thu, 3 Dec 2020 13:35:55 -0500
Subject: [petsc-users] Questions On CGS iteration refinement in GMRES
In-Reply-To: <2a1d4f12.f679.17627e65f7b.Coremail.wangyijia@lsec.cc.ac.cn>

On Thu, Dec 3, 2020 at 11:16 AM Wang Yijia wrote:
> [...]

The code is fairly short:
https://gitlab.com/petsc/petsc/-/blob/master/src/ksp/ksp/impls/gmres/borthog2.c

Thanks,
Matt

--
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener
https://www.cse.buffalo.edu/~knepley/
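For context, a sketch of how the option maps onto the API (in borthog2.c, the refinement is a second orthogonalization sweep against the same Krylov basis whenever the refinement type asks for it):

    KSP            ksp;
    PetscErrorCode ierr;

    ierr = KSPCreate(PETSC_COMM_WORLD, &ksp);CHKERRQ(ierr);
    ierr = KSPSetType(ksp, KSPGMRES);CHKERRQ(ierr);
    /* Use classical Gram-Schmidt to build the Hessenberg columns, and always
       do a second (refinement) pass: the programmatic equivalent of
       -ksp_gmres_cgs_refinement_type refine_always */
    ierr = KSPGMRESSetOrthogonalization(ksp, KSPGMRESClassicalGramSchmidtOrthogonalization);CHKERRQ(ierr);
    ierr = KSPGMRESSetCGSRefinementType(ksp, KSP_GMRES_CGS_REFINE_ALWAYS);CHKERRQ(ierr);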
From yann.jobic at univ-amu.fr Thu Dec 3 14:09:39 2020
From: yann.jobic at univ-amu.fr (Yann Jobic)
Date: Thu, 3 Dec 2020 21:09:39 +0100
Subject: [petsc-users] scotch compilation error on 14.2 with intel compiler
Message-ID: <77ab8b62-cc78-9d12-f70b-c3adbfdf2eba@univ-amu.fr>

Hi all,

I'm running into a surprising compilation problem with PETSc 3.14.1 and 3.14.2 on CentOS 7.9, with the Intel compiler (17 and 18). I've got this error in the scotch library:

make[2]: Leaving directory `/home/yjobic/petsc/petsc-3.14.2/intel-mkl-sky/externalpackages/git.ptscotch/src/libscotch'
make[1]: Leaving directory `/home/yjobic/petsc/petsc-3.14.2/intel-mkl-sky/externalpackages/git.ptscotch/src/libscotch'
In file included from /usr/include/sys/wait.h(30),
                 from common.h(130),
                 from common_string.c(57):
/usr/include/signal.h(156): error: identifier "siginfo_t" is undefined
  extern void psiginfo (const siginfo_t *__pinfo, const char *__s);

I know it's not a PETSc error, but scotch is mandatory for MUMPS, and as it's a well-known library that is widely used, maybe someone has encountered this error. I tried Intel 17 and Intel 18 on CentOS 7.9. I don't have any problems with the gcc compiler, but I would like to compile PETSc with the Intel compiler.

Just in case, I put the configure.log in the attachment.

Thanks,

Regards,

Yann

From knepley at gmail.com Thu Dec 3 14:14:01 2020
From: knepley at gmail.com (Matthew Knepley)
Date: Thu, 3 Dec 2020 15:14:01 -0500
Subject: [petsc-users] scotch compilation error on 14.2 with intel compiler
In-Reply-To: <77ab8b62-cc78-9d12-f70b-c3adbfdf2eba@univ-amu.fr>

On Thu, Dec 3, 2020 at 3:10 PM Yann Jobic wrote:
> [...]

I found this related thing with Clang:
https://stackoverflow.com/questions/22912674/unknown-type-name-siginfo-t-with-clang-using-posix-c-source-2-why

Can you see if

  --COPTFLAGS="-D_POSIX_C_SOURCE=199309L"

fixes this?

Thanks,
Matt

--
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener
https://www.cse.buffalo.edu/~knepley/
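For anyone who lands on the same error, the flag goes on the PETSc configure line; a sketch in which every option other than the COPTFLAGS addition is a placeholder for whatever the build already uses:

    ./configure --with-cc=icc --with-cxx=icpc --with-fc=ifort \
                --download-ptscotch --download-mumps --download-scalapack \
                --COPTFLAGS="-O2 -D_POSIX_C_SOURCE=199309L"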
From cebau.mail at gmail.com Thu Dec 3 14:25:01 2020
From: cebau.mail at gmail.com (C B)
Date: Thu, 3 Dec 2020 14:25:01 -0600
Subject: [petsc-users] CPU speed or DRAM speed bottlenecks?

Resorting to your expertise in software performance: I am looking for a crude assessment of CPU speed or DRAM speed bottlenecks in shared-memory multi-core PCs.

On a typical PC with one Xeon CPU (8 cores), a serial code runs a case in, say, 10 hours of wall time, and on the same computer 4 instances of the same code running simultaneously (on the same case) take essentially the same wall time: 10 hours, or a marginal increase such as 10 hours 30 minutes. There is no I/O, there is plenty of free physical RAM, and each core running an instance shows ~100% utilization.

Q1: What could we conclude about this hardware-software-case combination in terms of being CPU bound, memory bandwidth bound, etc.?

Q2: Can we say that this hardware-software-case combination is not DRAM bound, and that it "may be amenable" to a good speedup running multiple threads in the same shared-memory environment?

I did look into the shared memory benchmark http://www.cs.virginia.edu/stream but I could not draw any conclusions.

If this is a trivial question, please point me to a good resource to learn.

Thanks!

From yann.jobic at univ-amu.fr Thu Dec 3 16:02:14 2020
From: yann.jobic at univ-amu.fr (Yann Jobic)
Date: Thu, 3 Dec 2020 23:02:14 +0100
Subject: [petsc-users] scotch compilation error on 14.2 with intel compiler
Message-ID: <2da83397-8eff-904c-50f2-eefdb673fb36@univ-amu.fr>

On 12/3/2020 at 9:14 PM, Matthew Knepley wrote:
> [...]
> Can you see if
>
>   --COPTFLAGS="-D_POSIX_C_SOURCE=199309L"
>
> fixes this?

Yes it did! For Intel 17 and 18.

Thanks a lot,

Yann
From bsmith at petsc.dev Thu Dec 3 16:09:52 2020
From: bsmith at petsc.dev (Barry Smith)
Date: Thu, 3 Dec 2020 16:09:52 -0600
Subject: [petsc-users] CPU speed or DRAM speed bottlenecks?
Message-ID: <5548441F-DBD1-47B4-8D76-1E73A03BC130@petsc.dev>

> On Dec 3, 2020, at 2:25 PM, C B wrote:
> [...]
> Q1: What could we conclude about this hardware-software-case combination in terms of being CPU bound, memory bandwidth bound, etc.?

It does not appear to be memory bandwidth bound. Presumably the 4 cases will each be utilizing the same memory bandwidth as the one case, so I think one can conclude that the one case is using at most 25 percent of the memory bandwidth.

> Q2: Can we say that this hardware-software-case combination is not DRAM bound, and that it "may be amenable" to a good speedup running multiple threads in the same shared-memory environment?

I think this is a good way to say it: "since it is not DRAM bound, it may be amenable to a good speedup running multiple threads". It may also be amenable to MPI parallelism. There are other factors that affect parallel performance besides memory bandwidth; without more information these are unknown.

Barry
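One crude, direct way to probe the bandwidth question is a STREAM-style triad loop; a minimal sketch (the array size is an arbitrary choice that just needs to dwarf the last-level cache). Run one copy, then four copies simultaneously: if the per-copy number barely drops, as in the wall-time experiment above, the code is not bandwidth bound.

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define N 80000000UL /* 80M doubles per array (~640 MB each): far beyond cache */

    int main(void)
    {
      double *a = malloc(N * sizeof(double));
      double *b = malloc(N * sizeof(double));
      double *c = malloc(N * sizeof(double));
      size_t  i;
      struct timespec t0, t1;

      for (i = 0; i < N; i++) { b[i] = 1.0; c[i] = 2.0; }

      clock_gettime(CLOCK_MONOTONIC, &t0);
      for (i = 0; i < N; i++) a[i] = b[i] + 3.0*c[i]; /* triad: 2 reads + 1 write */
      clock_gettime(CLOCK_MONOTONIC, &t1);

      double sec = (t1.tv_sec - t0.tv_sec) + 1e-9*(t1.tv_nsec - t0.tv_nsec);
      /* 3*N doubles cross the memory bus; printing a[] keeps the loop live */
      printf("triad: %.2f GB/s (check %g)\n", 3.0*N*sizeof(double)/sec/1e9, a[N/2]);
      free(a); free(b); free(c);
      return 0;
    }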
From cebau.mail at gmail.com Thu Dec 3 17:28:49 2020
From: cebau.mail at gmail.com (C B)
Date: Thu, 3 Dec 2020 17:28:49 -0600
Subject: [petsc-users] CPU speed or DRAM speed bottlenecks?

Barry,

Thank you so much for your quick reply and insight.

Are there any tools or simple ways to determine how much time is lost in cache misses etc.? Please direct me to any resources to learn about this.

Thanks again!

On Thu, Dec 3, 2020 at 4:09 PM Barry Smith wrote:
> [...]

From junchao.zhang at gmail.com Thu Dec 3 20:58:30 2020
From: junchao.zhang at gmail.com (Junchao Zhang)
Date: Thu, 3 Dec 2020 20:58:30 -0600
Subject: [petsc-users] CPU speed or DRAM speed bottlenecks?

You can try HPCToolkit (http://hpctoolkit.org/), Tau (https://www.cs.uoregon.edu/research/tau/home.php), or Intel VTune. But for each, you need to read its manual to learn it.

--Junchao Zhang

On Thu, Dec 3, 2020 at 5:29 PM C B wrote:
> [...]
From cebau.mail at gmail.com Fri Dec 4 00:29:20 2020
From: cebau.mail at gmail.com (C B)
Date: Fri, 4 Dec 2020 00:29:20 -0600
Subject: [petsc-users] CPU speed or DRAM speed bottlenecks?

Thank you very much Junchao!

Most of these tools are developed for Linux, and at this time I am mainly interested in code for Windows. I found this thread very informative:
https://stackoverflow.com/questions/34641644/is-there-a-windows-equivalent-of-the-linux-command-perf-stat

Thanks,

On Thu, Dec 3, 2020 at 8:58 PM Junchao Zhang wrote:
> [...]

From roland.richter at ntnu.no Fri Dec 4 04:32:19 2020
From: roland.richter at ntnu.no (Roland Richter)
Date: Fri, 4 Dec 2020 11:32:19 +0100
Subject: [petsc-users] Usage of parallel FFT for doing batch 1d-FFTs over the columns of a dense 2d-matrix

Hei,

I am currently working on a problem which requires a large number of transformations of a field E(r, t) from time space to Fourier space E(r, w) and back.
The field is described in a 2d matrix, with the r-dimension along the columns and the t-dimension along the rows.

For the transformation from time to frequency space and back, I therefore have to apply a 1d-FFT operation over each row of my matrix. For my earlier attempts I used Armadillo as the matrix library and FFTW for doing the transformations. Here I could use fftw_plan_many_dft to do all FFTs at the same time. Unfortunately, Armadillo does not support MPI, and therefore I had to switch to PETSc for larger matrices.

Based on the examples (such as example 143), PETSc has a way of doing FFTs internally by creating an FFT object (using MatCreateFFT). Unfortunately, I cannot see how I could use that object to conduct the operation described above without having to iterate over each row in my original matrix (i.e. doing it sequentially, not in parallel).

Ideally I could distribute the FFTs over my nodes such that each node takes several rows of the original matrix and applies the FFT to each of them. As an example, for a matrix with a size of 4x4 and two nodes, node 0 would take rows 0 and 1, while node 1 takes rows 2 and 3, to avoid unnecessary memory transfer between the nodes while conducting the FFTs. Is that something PETSc can do, too?

Thanks!

Regards,

Roland

From roland.richter at ntnu.no Fri Dec 4 05:51:45 2020
From: roland.richter at ntnu.no (Roland Richter)
Date: Fri, 4 Dec 2020 12:51:45 +0100
Subject: [petsc-users] Load a dense matrix, distribute it over several MPI nodes and retrieve it afterwards
Message-ID: <1682e269-d4fd-79f9-6082-cbc8fceca4d6@ntnu.no>

Hei,

Is it possible to fill a dense distributed matrix from existing data on rank 0, distribute it to all involved nodes, and retrieve it afterwards, such that it can be stored in a single matrix on rank 0 again? The background behind this question is the following thought process:

* I generate the matrix locally on node 0
* I distribute it to all involved nodes
* I use the matrix for data processing/process the matrix data on all nodes
* For storing the data I need it back on node 0 in a single matrix

For testing I wrote the following code (using Armadillo for generating the initial matrices within the arma namespace):

    Mat C, F;
    Vec x, y, z;
    PetscViewer viewer;
    PetscMPIInt rank, size;
    PetscInitialize(&argc, &args, (char*) 0, help);

    MPI_Comm_size(PETSC_COMM_WORLD, &size);
    MPI_Comm_rank(PETSC_COMM_WORLD, &rank);

    PetscPrintf(PETSC_COMM_WORLD, "Number of processors = %d, rank = %d\n", size, rank);

    PetscViewerCreate(PETSC_COMM_WORLD, &viewer);
    PetscViewerSetType(viewer, PETSCVIEWERASCII);
    arma::cx_mat local_mat, local_zero_mat;
    const size_t matrix_size = 5;
    if(rank == 0) {
        local_mat = arma::randu<arma::cx_mat>(matrix_size, matrix_size);
        local_zero_mat = arma::zeros<arma::cx_mat>(matrix_size, matrix_size);
        MatCreateDense(PETSC_COMM_WORLD, matrix_size, matrix_size, PETSC_DECIDE, PETSC_DECIDE, local_mat.memptr(), &C);
        MatAssemblyBegin(C, MAT_FINAL_ASSEMBLY);
        MatAssemblyEnd(C, MAT_FINAL_ASSEMBLY);
    }
    MatCreateDense(PETSC_COMM_WORLD, matrix_size, matrix_size, PETSC_DECIDE, PETSC_DECIDE, NULL, &F);
    if(rank == 0)
        MatCopy(C, F, SAME_NONZERO_PATTERN);
    MatAssemblyBegin(F, MAT_FINAL_ASSEMBLY);
    MatAssemblyEnd(F, MAT_FINAL_ASSEMBLY);
    if(rank == 0) {
        MatView(C, viewer);
std::cout << local_zero_mat << '\n';// //??? }// //??? //How can I retrieve the distributed data from F to C again when running on multiple nodes?// // //??? if(rank == 0) {// //??? ??? MatDestroy(&C);// //??? }// //??? MatDestroy(&F);// //??? PetscViewerDestroy(&viewer);/ Running the code with mpirun -n 1 works fine, but when running it with mpirun -n 2 it just stops after /PetscPrintf() /and hangs. Moreover, how can I retrieve the data from matrix F afterwards? For vectors I have the function/class VecScatter, but can I apply this for matrices (mpidense) too? Thanks! Cheers, Roland -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Fri Dec 4 06:13:18 2020 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 4 Dec 2020 07:13:18 -0500 Subject: [petsc-users] Load a dense matrix, distribute it over several MPI nodes and retrieve it afterwards In-Reply-To: <1682e269-d4fd-79f9-6082-cbc8fceca4d6@ntnu.no> References: <1682e269-d4fd-79f9-6082-cbc8fceca4d6@ntnu.no> Message-ID: On Fri, Dec 4, 2020 at 6:51 AM Roland Richter wrote: > Hei, > > is it possible to fill a dense distributed matrix from existing data in > rank 0, distribute it to all involved nodes and retrieve it afterwards, > such that it can be stored in a single matrix in rank 0 again? The > background behind this question is the following thought process: > > - I generate the matrix locally on node 0 > > I do not understand why you would ever want to do this. > > - I distribute it to all involved nodes > > Just create your distributed matrix, but set the values from rank 0. > > - I use the matrix for data processing/process the matrix data on all > nodes > > I assume you change the matrix here. > > - For storing the data I need it back on node 0 in a single matrix > > You can use https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatCreateRedundantMatrix.html to easily create serial matrices from it. You can throw away any you do not want. 
Thanks,
Matt

> For testing I wrote the following code (using Armadillo for generating the initial matrices within the arma namespace):
> [...]

--
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener
https://www.cse.buffalo.edu/~knepley/
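A sketch of the gather step Matt points to (whether MatCreateRedundantMatrix() supports MATDENSE in a given PETSc version is worth verifying; see also the open issue Pierre links in the next message):

    Mat            Fred;  /* a sequential copy of the distributed matrix F */
    PetscMPIInt    size, rank;
    PetscErrorCode ierr;

    ierr = MPI_Comm_size(PETSC_COMM_WORLD, &size);CHKERRQ(ierr);
    ierr = MPI_Comm_rank(PETSC_COMM_WORLD, &rank);CHKERRQ(ierr);
    /* One subcommunicator per rank: every rank receives the whole matrix */
    ierr = MatCreateRedundantMatrix(F, size, PETSC_COMM_SELF, MAT_INITIAL_MATRIX, &Fred);CHKERRQ(ierr);
    if (rank == 0) {
      /* read Fred here, e.g. copy its array back into the Armadillo matrix */
    }
    ierr = MatDestroy(&Fred);CHKERRQ(ierr); /* the other ranks throw their copy away */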
From pierre at joliv.et Fri Dec 4 06:16:19 2020
From: pierre at joliv.et (Pierre Jolivet)
Date: Fri, 4 Dec 2020 13:16:19 +0100
Subject: [petsc-users] Load a dense matrix, distribute it over several MPI nodes and retrieve it afterwards
Message-ID: <7B2A5AD2-E779-42F0-9408-030C4C5CEC60@joliv.et>

> On 4 Dec 2020, at 1:13 PM, Matthew Knepley wrote:
> [...]
>> Moreover, how can I retrieve the data from matrix F afterwards? For vectors I have the function/class VecScatter, but can I apply this to matrices (mpidense) too?

With respect to this very last question, this is currently an open issue: https://gitlab.com/petsc/petsc/-/issues/693

Thanks,
Pierre

From knepley at gmail.com Fri Dec 4 06:19:02 2020
From: knepley at gmail.com (Matthew Knepley)
Date: Fri, 4 Dec 2020 07:19:02 -0500
Subject: [petsc-users] Usage of parallel FFT for doing batch 1d-FFTs over the columns of a dense 2d-matrix

On Fri, Dec 4, 2020 at 5:32 AM Roland Richter wrote:
> [...]
The way I understand our setup (I did not write it), we use plan_many_dft to handle multiple-dof FFTs, but these would be interlaced. You want many FFTs for non-interlaced storage, which is not something we do right now. You could definitely call FFTW directly if you want.

Second, from the above it seems like you just want serial FFTs. You can definitely create a MatFFT with PETSC_COMM_SELF and apply it to each row in the local rows, or create the plan yourself for the stack of rows.

Thanks,
Matt

--
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener
https://www.cse.buffalo.edu/~knepley/
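A sketch of the second option (one sequential FFT object shared by all local rows; copying each row in and out of the work vectors is elided, and this assumes a complex-scalar build of PETSc with FFTW):

    Mat            FFT;
    Vec            x, y, z;
    PetscInt       dim[1], row;
    PetscErrorCode ierr;

    dim[0] = ncols;                                   /* length of one row */
    ierr = MatCreateFFT(PETSC_COMM_SELF, 1, dim, MATFFTW, &FFT);CHKERRQ(ierr);
    ierr = MatCreateVecsFFTW(FFT, &x, &y, &z);CHKERRQ(ierr);
    for (row = rstart; row < rend; row++) {           /* rows owned by this rank */
      /* ... copy local row 'row' into x ... */
      ierr = MatMult(FFT, x, y);CHKERRQ(ierr);        /* forward transform of one row */
      /* ... copy y back into row 'row' ... */
    }
    ierr = MatDestroy(&FFT);CHKERRQ(ierr);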
From roland.richter at ntnu.no Fri Dec 4 06:47:22 2020
From: roland.richter at ntnu.no (Roland Richter)
Date: Fri, 4 Dec 2020 13:47:22 +0100
Subject: [petsc-users] Usage of parallel FFT for doing batch 1d-FFTs over the columns of a dense 2d-matrix
Message-ID: <6c23f236-bea1-ded0-e356-35065c93b5de@ntnu.no>

Ideally those FFTs could be handled in parallel, since they do not depend on each other. Is that possible with MatFFT, or should I rather use FFTW for that?

Thanks,

Roland

On 04.12.20 at 13:19, Matthew Knepley wrote:
> [...]

From knepley at gmail.com Fri Dec 4 07:07:27 2020
From: knepley at gmail.com (Matthew Knepley)
Date: Fri, 4 Dec 2020 08:07:27 -0500
Subject: [petsc-users] Usage of parallel FFT for doing batch 1d-FFTs over the columns of a dense 2d-matrix

On Fri, Dec 4, 2020 at 7:47 AM Roland Richter wrote:
> Ideally those FFTs could be handled in parallel, since they do not depend on each other. Is that possible with MatFFT, or should I rather use FFTW for that?

Which FFTs? The rows on each process are handled in parallel. For the rows on a process, you could call FFTW yourself to try to get instruction-level parallelism over rows, but if the rows are long I don't think this would matter much. I guess you could measure.

If you wanted parallelism among rows, you could divide the matrix more finely. If you wanted parallelism within a row, you would need to change the matrix storage format. I am not sure if any packages do this, certainly not any I know of. You could treat the rows as PETSc vectors and get parallelism this way, I guess.

Thanks,
Matt

> [...]
--
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener
https://www.cse.buffalo.edu/~knepley/

From balay at mcs.anl.gov Fri Dec 4 10:21:06 2020
From: balay at mcs.anl.gov (Satish Balay)
Date: Fri, 4 Dec 2020 10:21:06 -0600
Subject: [petsc-users] petsc-3.14.2 now available

Dear PETSc users,

The patch release petsc-3.14.2 is now available for download.
http://www.mcs.anl.gov/petsc/download/index.html

Satish

From bsmith at petsc.dev Fri Dec 4 18:59:34 2020
From: bsmith at petsc.dev (Barry Smith)
Date: Fri, 4 Dec 2020 18:59:34 -0600
Subject: [petsc-users] Usage of parallel FFT for doing batch 1d-FFTs over the columns of a dense 2d-matrix
In-Reply-To: <6c23f236-bea1-ded0-e356-35065c93b5de@ntnu.no>

Roland,

If you store your matrix as described in a parallel PETSc dense matrix, then you should be able to call fftw_plan_many_dft() directly on the value obtained with MatDenseGetArray(). You just need to pass the arguments regarding column-major ordering appropriately, probably identically to what you did with your previous code.

Barry

> On Dec 4, 2020, at 6:47 AM, Roland Richter wrote:
> [...]
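A sketch of what that could look like for the row-wise batch (assuming a complex-scalar build of PETSc; the key point is that the local dense block is stored column-major, so the elements of one row are lda apart, while the starts of consecutive rows are one entry apart):

    #include <fftw3.h>

    PetscScalar   *a;
    PetscInt       mlocal, N, lda;
    int            n[1];
    PetscErrorCode ierr;

    ierr = MatGetLocalSize(F, &mlocal, NULL);CHKERRQ(ierr);  /* rows on this rank */
    ierr = MatGetSize(F, NULL, &N);CHKERRQ(ierr);            /* row length        */
    ierr = MatDenseGetLDA(F, &lda);CHKERRQ(ierr);
    ierr = MatDenseGetArray(F, &a);CHKERRQ(ierr);

    n[0] = (int)N;
    /* One 1d transform of length N per local row: stride lda between the
       elements of a row, distance 1 between the starts of consecutive rows */
    fftw_plan plan = fftw_plan_many_dft(1, n, (int)mlocal,
                                        (fftw_complex*)a, NULL, (int)lda, 1,
                                        (fftw_complex*)a, NULL, (int)lda, 1,
                                        FFTW_FORWARD, FFTW_ESTIMATE);
    fftw_execute(plan);
    fftw_destroy_plan(plan);
    ierr = MatDenseRestoreArray(F, &a);CHKERRQ(ierr);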
From bsmith at petsc.dev Sat Dec 5 21:40:22 2020
From: bsmith at petsc.dev (Barry Smith)
Date: Sat, 5 Dec 2020 21:40:22 -0600
Subject: [petsc-users] PetscBagView issues with SAWs
Message-ID: <9A5AAA08-432C-4E60-9A53-5AB36EFB354F@petsc.dev>

Zane,

I'm sorry I didn't answer this sooner. We simply don't have the code for PetscBagView() with SAWs at this time. Unfortunately I don't have the time to write it now. Basically you can extend PetscBagView() with a new "else if" for the SAWs viewer and then publish the Bag items with the SAWs API; it would be a combination of the code in PetscBagView() for ASCII and code like that in PetscOptionsSAWsInput().

Barry

> On Nov 23, 2020, at 10:19 AM, Zane Charles Jakobs wrote:
>
> Hi PETSc devs,
>
> I'm writing a program that needs to send information that I have stored in a PetscBag to SAWs. I'm calling SAWs_Initialize() (and SAWs_Get_FullURL()), then PetscViewerSAWsOpen(PETSC_COMM_WORLD, &viewer) to get a PetscViewer, then later calling PetscBagView(bag, viewer) to publish to SAWs (and then PetscSAWsBlock() for debugging purposes). For what it's worth, I have registered all the PetscBag variables, and PetscBagView(bag, PETSC_VIEWER_STDOUT_WORLD) works as expected.
>
> However, when I go to the SAWs website and click on "update all variables from server" or "update server with changes below", nothing happens. I do know that the app and the server can communicate, since clicking "continue" lets the program continue through a PetscSAWsBlock() call, but for some reason the PetscBags I'm View()-ing are not making it over to the server. Is the workflow I'm using (PetscBagView() after SAWs_Initialize() and PetscViewerSAWsOpen()) correct?
> If so, what else might I be doing incorrectly so that SAWs doesn't see the data I publish? And if not, what should I do differently? Thank you!
>
> -Zane Jakobs

From alexis.marboeuf at hotmail.fr Sun Dec 6 12:46:34 2020
From: alexis.marboeuf at hotmail.fr (Alexis Marboeuf)
Date: Sun, 6 Dec 2020 18:46:34 +0000
Subject: [petsc-users] How do constraint dofs work?

Hello,

I intend to contribute to the PETSc documentation, especially on PetscSF and PetscSection objects. I'm writing an example where I solve a linear elasticity problem in parallel on unstructured meshes. I discretize the system with a finite element method and P1 Lagrange basis functions. I only use PETSc basics such as PetscSF, PetscSection, Mat, Vec and SNES objects, and I need to implement Dirichlet and/or Neumann boundary conditions. PetscSectionSetConstraintDof and related routines allow one to define which dofs are removed from the global system but are kept in local Vecs. I don't find much more information about constrained dofs. Can someone explain to me how it works? In particular, do I have to manually add terms related to an inhomogeneous Dirichlet boundary condition to the RHS? Am I missing something?

Regards,
Alexis Marboeuf

From knepley at gmail.com Sun Dec 6 19:02:27 2020
From: knepley at gmail.com (Matthew Knepley)
Date: Sun, 6 Dec 2020 20:02:27 -0500
Subject: [petsc-users] How do constraint dofs work?

On Sun, Dec 6, 2020 at 1:46 PM Alexis Marboeuf wrote:
> [...]

The way this mechanism is intended to work is to support removal of constrained dofs from the global system. This means it solves for only the unconstrained dofs, and no modification of the system is necessary. However, you would be responsible for putting the correct boundary values into any local vector you use. Note that this mechanism is really only effective when you can constrain a dof itself, not a linear combination. For that, we do something more involved.

Operationally, SetConstraintDof() keeps track of how many dofs are constrained on each point. Then SetConstraintIndices() tells us which dofs on that point are constrained, where the indices are in [0, n) if there are n dofs on that point. If you make a global Section, constrained dofs have negative offsets, just like ghost dofs.

Thanks,
Matt

--
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener
https://www.cse.buffalo.edu/~knepley/
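A small sketch of those calls for a 3-component P1 field with the x-component fixed at one vertex (the point number is made up for illustration; in a real mesh it would come from the boundary label):

    PetscSection   section;
    PetscInt       v = 17;          /* some boundary vertex (illustrative) */
    PetscInt       cind[1] = {0};   /* constrain component 0 (x) at this point */
    PetscErrorCode ierr;

    /* ... PetscSectionSetChart()/PetscSectionSetDof(section, v, 3) for all points ... */
    ierr = PetscSectionSetConstraintDof(section, v, 1);CHKERRQ(ierr);
    ierr = PetscSectionSetUp(section);CHKERRQ(ierr);
    ierr = PetscSectionSetConstraintIndices(section, v, cind);CHKERRQ(ierr);
    /* In a global Section built from this one, the constrained dof gets a
       negative offset: it is absent from the global system, but still has a
       slot in local vectors, where the boundary value must be filled in. */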
From alexis.marboeuf at hotmail.fr Mon Dec 7 13:25:08 2020
From: alexis.marboeuf at hotmail.fr (Alexis Marboeuf)
Date: Mon, 7 Dec 2020 19:25:08 +0000
Subject: [petsc-users] How do constraint dofs work?
In-Reply-To: References: Message-ID:

Hi Matt,

Thank you for your reply. I don't understand how unconstrained dofs in the neighborhood of a Dirichlet boundary can see non-zero constrained dofs in a finite element framework. To me, the known non-zero terms due to non-zero imposed dofs have to be added to the RHS of the unconstrained-dof system? Or are the proper terms automatically added to the RHS, so that no modification of the system is necessary? But in that case, I don't know how to set the imposed non-zero values to tell PETSc which terms to include. Thank you again for your time, and I apologize if I am missing something.

Regards,
Alexis

________________________________
From: Matthew Knepley
Sent: Monday, December 7, 2020 02:02
To: Alexis Marboeuf
Cc: petsc-users at mcs.anl.gov
Subject: Re: [petsc-users] How do constraint dofs work?

> [...]

From knepley at gmail.com Mon Dec 7 14:06:46 2020
From: knepley at gmail.com (Matthew Knepley)
Date: Mon, 7 Dec 2020 15:06:46 -0500
Subject: [petsc-users] How do constraint dofs work?
In-Reply-To: References: Message-ID:

On Mon, Dec 7, 2020 at 2:25 PM Alexis Marboeuf wrote:
> Hi Matt,
>
> Thank you for your reply. I don't understand how unconstrained dofs in the neighborhood of a Dirichlet boundary can see non-zero constrained dofs in a finite element framework. To me, the known non-zero terms due to non-zero imposed dofs have to be added to the RHS of the unconstrained-dof system?

I see now. I never do this, so I misunderstood your question. Yes, if you are assembling A and b separately, then you need the terms you constrained in b. You can do that, but I think it is needlessly complex and error prone. I phrase everything as a nonlinear system, forming the residual and Jacobian

  F(u) = A u - b
  J(u) = A

Since you have the boundary values in u, your residual will automatically create the correct RHS for the Newton equation, which you only solve once. There is no overhead relative to the linear case, it is simpler, and you can automatically do things like iterative refinement.

Thanks,

Matt

> Or are the proper terms automatically added to the RHS, so that no modification of the system is necessary? But in that case, I don't know how to set the imposed non-zero values to tell PETSc which terms to include.
> Thank you again for your time, and I apologize if I am missing something.
>
> Regards,
> Alexis

--
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/
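A minimal sketch of this formulation; AppCtx and the already-assembled A and b are assumptions for illustration. With -snes_type ksponly the solve reduces to the single Newton step Matt mentions.

    /* Sketch, assuming A and b have been assembled elsewhere. */
    typedef struct { Mat A; Vec b; } AppCtx;

    PetscErrorCode FormFunction(SNES snes, Vec u, Vec f, void *ctx)
    {
      AppCtx *app = (AppCtx*)ctx;
      MatMult(app->A, u, f);         /* f = A u     */
      VecAXPY(f, -1.0, app->b);      /* f = A u - b */
      return 0;
    }

    PetscErrorCode FormJacobian(SNES snes, Vec u, Mat J, Mat P, void *ctx)
    {
      return 0;                      /* J = A is constant; passed once below */
    }

    /* ... */
    SNESSetFunction(snes, r, FormFunction, &app);
    SNESSetJacobian(snes, app.A, app.A, FormJacobian, &app);
    SNESSolve(snes, NULL, u);        /* u enters with the boundary values already set */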
From alexis.marboeuf at hotmail.fr Tue Dec 8 05:40:26 2020
From: alexis.marboeuf at hotmail.fr (Alexis Marboeuf)
Date: Tue, 8 Dec 2020 11:40:26 +0000
Subject: [petsc-users] How do constraint dofs work?
In-Reply-To: References: Message-ID:

Hi Matt,

I'm not used to this formulation for a linear system, so I didn't get it at first; sorry. It is actually simpler. I will write my example, and if something goes wrong I will continue this thread. Thank you very much for your help.

Alexis

________________________________
From: Matthew Knepley
Sent: Monday, December 7, 2020 21:06
To: Alexis Marboeuf
Cc: petsc-users at mcs.anl.gov
Subject: Re: [petsc-users] How do constraint dofs work?

> [...]

From roland.richter at ntnu.no Tue Dec 8 07:39:48 2020
From: roland.richter at ntnu.no (Roland Richter)
Date: Tue, 8 Dec 2020 14:39:48 +0100
Subject: [petsc-users] Usage of parallel FFT for doing batch 1d-FFTs over the columns of a dense 2d-matrix
In-Reply-To: References: <6c23f236-bea1-ded0-e356-35065c93b5de@ntnu.no> Message-ID: <01c888c3-8970-28df-4631-8a11c9b3a4a8@ntnu.no>

Dear all,

I tried the following code:

    int main(int argc, char **args) {
        Mat C, F;
        Vec x, y, z;
        PetscViewer viewer;
        PetscMPIInt rank, size;
        PetscInitialize(&argc, &args, (char*) 0, help);

        MPI_Comm_size(PETSC_COMM_WORLD, &size);
        MPI_Comm_rank(PETSC_COMM_WORLD, &rank);

        PetscPrintf(PETSC_COMM_WORLD, "Number of processors = %d, rank = %d\n", size, rank);

        PetscViewerCreate(PETSC_COMM_WORLD, &viewer);
        PetscViewerSetType(viewer, PETSCVIEWERASCII);
        arma::cx_mat local_mat, local_zero_mat;
        const size_t matrix_size = 5;

        MatCreateDense(PETSC_COMM_WORLD, PETSC_DECIDE, PETSC_DECIDE, matrix_size, matrix_size, NULL, &C);
        MatCreateDense(PETSC_COMM_WORLD, PETSC_DECIDE, PETSC_DECIDE, matrix_size, matrix_size, NULL, &F);
        if (rank == 0) {
            arma::Col<PetscInt> indices = arma::linspace<arma::Col<PetscInt>>(0, matrix_size - 1, matrix_size);
            local_mat      = arma::randu<arma::cx_mat>(matrix_size, matrix_size);
            local_zero_mat = arma::zeros<arma::cx_mat>(matrix_size, matrix_size);
            arma::cx_mat tmp_mat = local_mat.st();
            MatSetValues(C, matrix_size, indices.memptr(), matrix_size, indices.memptr(), tmp_mat.memptr(), INSERT_VALUES);
            MatSetValues(F, matrix_size, indices.memptr(), matrix_size, indices.memptr(), local_zero_mat.memptr(), INSERT_VALUES);
        }

        MatAssemblyBegin(C, MAT_FINAL_ASSEMBLY);
        MatAssemblyEnd(C, MAT_FINAL_ASSEMBLY);
        MatAssemblyBegin(F, MAT_FINAL_ASSEMBLY);
        MatAssemblyEnd(F, MAT_FINAL_ASSEMBLY);

        // FFT test
        Mat FFT_A;
        Vec input, output;
        int first_owned_row_index = 0, last_owned_row_index = 0;
        const int FFT_length[] = {matrix_size};

        MatCreateFFT(PETSC_COMM_WORLD, 1, FFT_length, MATFFTW, &FFT_A);
        MatCreateVecsFFTW(FFT_A, &x, &y, &z);
        VecCreate(PETSC_COMM_WORLD, &input);
        VecSetFromOptions(input);
        VecSetSizes(input, PETSC_DECIDE, matrix_size);
        VecCreate(PETSC_COMM_WORLD, &output);
        VecSetFromOptions(output);
        VecSetSizes(output, PETSC_DECIDE, matrix_size);
        MatGetOwnershipRange(C, &first_owned_row_index, &last_owned_row_index);
        std::cout << "Rank " << rank << " owns row " << first_owned_row_index << " to row " << last_owned_row_index << '\n';

        /*--------------------- Testing FFT ---------------------*/
        fftw_plan    fplan, bplan;
        fftw_complex *data_in, *data_out, *data_out2;
        ptrdiff_t    alloc_local, local_ni, local_i_start, local_n0, local_0_start;
        PetscRandom  rdm;

        fftw_mpi_init();
        int N  = matrix_size * matrix_size;
        int N0 = matrix_size;
        int N1 = matrix_size;
        const ptrdiff_t n_data[] = {N0, 1};
        alloc_local = fftw_mpi_local_size_many(1, n_data, matrix_size,
                                               FFTW_MPI_DEFAULT_BLOCK,
                                               PETSC_COMM_WORLD,
                                               &local_n0, &local_0_start);
        PetscScalar *C_ptr, *F_ptr;
        MatDenseGetArray(C, &C_ptr);
        MatDenseGetArray(F, &F_ptr);
        data_in   = reinterpret_cast<fftw_complex*>(C_ptr);
        data_out  = reinterpret_cast<fftw_complex*>(F_ptr);
        data_out2 = (fftw_complex*) fftw_malloc(sizeof(fftw_complex) * alloc_local);

        VecCreateMPIWithArray(PETSC_COMM_WORLD, 1, (PetscInt)local_n0 * N1, (PetscInt)N, (const PetscScalar*)data_in, &x);
        PetscObjectSetName((PetscObject) x, "Real Space vector");
        VecCreateMPIWithArray(PETSC_COMM_WORLD, 1, (PetscInt)local_n0 * N1, (PetscInt)N, (const PetscScalar*)data_out, &y);
        PetscObjectSetName((PetscObject) y, "Frequency space vector");
        VecCreateMPIWithArray(PETSC_COMM_WORLD, 1, (PetscInt)local_n0 * N1, (PetscInt)N, (const PetscScalar*)data_out2, &z);
        PetscObjectSetName((PetscObject) z, "Reconstructed vector");

        int FFT_rank = 1;
        const ptrdiff_t FFTW_size[] = {matrix_size};
        int howmany = last_owned_row_index - first_owned_row_index;
        int idist   = matrix_size;
        int odist   = matrix_size;
        int istride = 1;
        int ostride = 1;
        const ptrdiff_t *inembed = FFTW_size, *onembed = FFTW_size;
        fplan = fftw_mpi_plan_many_dft(FFT_rank, FFTW_size, howmany,
                                       FFTW_MPI_DEFAULT_BLOCK, FFTW_MPI_DEFAULT_BLOCK,
                                       data_in, data_out, PETSC_COMM_WORLD,
                                       FFTW_FORWARD, FFTW_ESTIMATE);
        bplan = fftw_mpi_plan_many_dft(FFT_rank, FFTW_size, howmany,
                                       FFTW_MPI_DEFAULT_BLOCK, FFTW_MPI_DEFAULT_BLOCK,
                                       data_out, data_out2, PETSC_COMM_WORLD,
                                       FFTW_BACKWARD, FFTW_ESTIMATE);

        if (false) { VecView(x, PETSC_VIEWER_STDOUT_WORLD); }
        fftw_execute(fplan);
        if (false) { VecView(y, PETSC_VIEWER_STDOUT_WORLD); }
        fftw_execute(bplan);

        double a = 1.0 / matrix_size;
        double enorm = 0;
        VecScale(z, a);
        if (false) { VecView(z, PETSC_VIEWER_STDOUT_WORLD); }
        VecAXPY(z, -1.0, x);
        VecNorm(z, NORM_1, &enorm);
        if (enorm > 1.e-11) {
            PetscPrintf(PETSC_COMM_SELF, "  Error norm of |x - z| %g\n", (double)enorm);
        }

        /* Free spaces */
        fftw_destroy_plan(fplan);
        fftw_destroy_plan(bplan);
        fftw_free(data_out2);

        // Generate test matrix for comparison
        arma::cx_mat fft_test_mat = local_mat;
        fft_test_mat.each_row([&](arma::cx_rowvec &a){
            a = arma::fft(a);
        });
        std::cout << "-----------------------------------------------------\n";
        std::cout << "Input matrix:\n" << local_mat << '\n';
        MatView(C, viewer);
        std::cout << "-----------------------------------------------------\n";
        std::cout << "Expected output matrix:\n" << fft_test_mat << '\n';
        MatView(F, viewer);
        std::cout << "-----------------------------------------------------\n";
        MatDestroy(&FFT_A);
        VecDestroy(&input);
        VecDestroy(&output);
        VecDestroy(&x);
        VecDestroy(&y);
        VecDestroy(&z);
        MatDestroy(&C);
        MatDestroy(&F);
        PetscViewerDestroy(&viewer);
        PetscFinalize();
        return 0;
    }

For *mpirun -n 1* I get the expected output (i.e. armadillo and PETSc/FFTW return the same result), but for *mpirun -n x* with x > 1 every value which is not assigned to rank 0 is lost and set to zero instead. Every value assigned to rank 0 is calculated correctly, as far as I can see. Did I forget something here?

Thanks,

Roland

On 05.12.20 at 01:59, Barry Smith wrote:
> Roland,
>
> If you store your matrix as described in a parallel PETSc dense matrix then you should be able to call fftw_plan_many_dft() directly on the value obtained with MatDenseGetArray(). You just need to pass the arguments regarding column major ordering appropriately. Probably identically to what you do with your previous code.
>
> Barry
>
>> [...]
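For comparison, the serial-FFT route Matt suggested earlier in the thread (a MatFFT on PETSC_COMM_SELF applied to each locally owned row) might look roughly like the sketch below. The gather loop is needed because rows of the column-major dense array are not contiguous; error checking and the scatter of the result are omitted, and matrix_size/C are taken from Roland's code.

    /* Rough sketch only: one serial FFT Mat applied row by row. */
    Mat      F_self;
    Vec      row_buf, row_fft;
    PetscInt rStart, rEnd, Ncols, lda;
    const PetscInt dims[1] = {matrix_size};
    PetscScalar *a;

    MatCreateFFT(PETSC_COMM_SELF, 1, dims, MATFFTW, &F_self);
    MatCreateVecs(F_self, &row_buf, &row_fft);   /* MatCreateVecsFFTW() may be preferable */
    MatGetOwnershipRange(C, &rStart, &rEnd);
    MatGetSize(C, NULL, &Ncols);
    MatDenseGetArray(C, &a);
    MatDenseGetLDA(C, &lda);
    for (PetscInt i = rStart; i < rEnd; ++i) {
      PetscScalar *buf;
      VecGetArray(row_buf, &buf);
      for (PetscInt j = 0; j < Ncols; ++j) buf[j] = a[j*lda + (i - rStart)];  /* gather one row */
      VecRestoreArray(row_buf, &buf);
      MatMult(F_self, row_buf, row_fft);   /* serial 1d FFT of that row */
      /* ... scatter row_fft back into the output matrix ... */
    }
    MatDenseRestoreArray(C, &a);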
From knepley at gmail.com Tue Dec 8 07:55:21 2020
From: knepley at gmail.com (Matthew Knepley)
Date: Tue, 8 Dec 2020 08:55:21 -0500
Subject: [petsc-users] Usage of parallel FFT for doing batch 1d-FFTs over the columns of a dense 2d-matrix
In-Reply-To: <01c888c3-8970-28df-4631-8a11c9b3a4a8@ntnu.no> References: <6c23f236-bea1-ded0-e356-35065c93b5de@ntnu.no> <01c888c3-8970-28df-4631-8a11c9b3a4a8@ntnu.no> Message-ID:

On Tue, Dec 8, 2020 at 8:40 AM Roland Richter wrote:
> Dear all,
>
> I tried the following code:
> [...]
> For *mpirun -n 1* I get the expected output (i.e. armadillo and PETSc/FFTW return the same result), but for *mpirun -n x* with x > 1 every value which is not assigned to rank 0 is lost and set to zero instead. Every value assigned to rank 0 is calculated correctly, as far as I can see. Did I forget something here?

I do not understand why your FFTW calls use the WORLD communicator. Aren't they serial FFTs over the local rows?

Thanks,

Matt

--
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/

From dav.schneider at tum.de Tue Dec 8 07:58:45 2020
From: dav.schneider at tum.de (David Schneider)
Date: Tue, 8 Dec 2020 14:58:45 +0100
Subject: [petsc-users] Setting an unknown initial guess
Message-ID:

Dear all,

I'm using a KSPLSQR solver without preconditioning and configure the solver once in the beginning. Afterwards, I solve my system multiple times in a time-dependent setting, and I would like to use an initial guess (the previous solution). Currently, I use `KSPSetInitialGuessNonzero` for this purpose, but it may happen that the actual guess is zero. If the initial guess is zero, the solver fails to converge, at least with the default configuration. Setting `KSPSetInitialGuessNonzero` to `PETSC_FALSE` (which should also be the default) zeros the guess out. Is there a (native) way to preserve the initial guess, but still ensure convergence in the KSP solver in case the guess is zero?

Thanks in advance,
David

From roland.richter at ntnu.no Tue Dec 8 08:42:30 2020
From: roland.richter at ntnu.no (Roland Richter)
Date: Tue, 8 Dec 2020 15:42:30 +0100
Subject: [petsc-users] Usage of parallel FFT for doing batch 1d-FFTs over the columns of a dense 2d-matrix
In-Reply-To: References: <6c23f236-bea1-ded0-e356-35065c93b5de@ntnu.no> <01c888c3-8970-28df-4631-8a11c9b3a4a8@ntnu.no> Message-ID: <9c171fca-4049-6849-a1e2-4869490fda97@ntnu.no>

I replaced the FFT code with the following:

    int FFT_rank = 1;
    const int FFTW_size[] = {matrix_size};
    int howmany = last_owned_row_index - first_owned_row_index;
    int idist   = 1;
    int odist   = 1;
    int istride = matrix_size / size;
    int ostride = matrix_size / size;
    const int inembed[] = {matrix_size}, onembed[] = {matrix_size};
    fplan = fftw_plan_many_dft(FFT_rank, FFTW_size, howmany,
                               data_in, inembed, istride, idist,
                               data_out, onembed, ostride, odist,
                               FFTW_FORWARD, FFTW_ESTIMATE);
    bplan = fftw_plan_many_dft(FFT_rank, FFTW_size, howmany,
                               data_out, inembed, istride, idist,
                               data_out2, onembed, ostride, odist,
                               FFTW_BACKWARD, FFTW_ESTIMATE);

Now I get the expected results also for *mpirun -n x* with x > 1, but only if my matrix size is an integer multiple of x; otherwise some parts of the resulting matrix are zeroed out. Therefore, I assume I made a mistake here with inembed and onembed, but I am not sure how they influence the result. Do you have any suggestions?

Thanks,

Roland
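A plausible reading of the remaining failure: MatDenseGetArray() returns a column-major local block, so the distance between consecutive elements of one row is the local leading dimension, which equals matrix_size/size only when size divides matrix_size. A hedged sketch of the corresponding plan, reusing Roland's variables (not tested here):

    /* Sketch: row-FFTs over the column-major local block of C. */
    PetscInt m_local, lda_in, lda_out;
    MatGetLocalSize(C, &m_local, NULL);      /* number of locally owned rows */
    MatDenseGetLDA(C, &lda_in);              /* leading dimension of the local array */
    MatDenseGetLDA(F, &lda_out);
    const int n[] = {matrix_size};           /* length of each 1d transform */
    fplan = fftw_plan_many_dft(1, n, (int)m_local,
                               data_in,  NULL, (int)lda_in,  1,   /* istride = lda, idist = 1 */
                               data_out, NULL, (int)lda_out, 1,
                               FFTW_FORWARD, FFTW_ESTIMATE);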
std::cout << "Rank " << rank << " owns row " << > first_owned_row_index << " to row " << last_owned_row_index << '\n';// > // > //??? //Testing FFT// > // > //??? /*---------------------------------------------------------*/// > //??? fftw_plan??? fplan,bplan;// > //??? fftw_complex *data_in,*data_out,*data_out2;// > //??? ptrdiff_t??? alloc_local, local_ni, local_i_start, > local_n0,local_0_start;// > //??? PetscRandom rdm;// > // > //??? //??? if (!rank)// > //??? //??? ??? printf("Use FFTW without PETSc-FFTW interface\n");// > //??? fftw_mpi_init();// > //??? int N?????????? = matrix_size * matrix_size;// > //??? int N0 = matrix_size;// > //??? int N1 = matrix_size;// > //??? const ptrdiff_t n_data[] = {N0, 1};// > //??? //alloc_local = > fftw_mpi_local_size_2d(N0,N1,PETSC_COMM_WORLD,&local_n0,&local_0_start);// > //??? alloc_local = fftw_mpi_local_size_many(1, n_data,// > //??? ??? ??? ??? ??? ??? ??? ??? ??? ??? ?? matrix_size,// > //??? ??? ??? ??? ??? ??? ??? ??? ??? ??? ?? FFTW_MPI_DEFAULT_BLOCK,// > //??? ??? ??? ??? ??? ??? ??? ??? ??? ??? ?? PETSC_COMM_WORLD,// > //??? ??? ??? ??? ??? ??? ??? ??? ??? ??? ?? &local_n0,// > //??? ??? ??? ??? ??? ??? ??? ??? ??? ??? ?? &local_0_start);// > //??? //data_in?? = > (fftw_complex*)fftw_malloc(sizeof(fftw_complex)*alloc_local);// > //??? PetscScalar *C_ptr, *F_ptr;// > //??? MatDenseGetArray(C, &C_ptr);// > //??? MatDenseGetArray(F, &F_ptr);// > //??? data_in = reinterpret_cast(C_ptr);// > //??? data_out = reinterpret_cast(F_ptr);// > //??? data_out2 = > (fftw_complex*)fftw_malloc(sizeof(fftw_complex)*alloc_local);// > // > // > //??? VecCreateMPIWithArray(PETSC_COMM_WORLD,1,(PetscInt)local_n0 > * N1,(PetscInt)N,(const PetscScalar*)data_in,&x);// > //??? PetscObjectSetName((PetscObject) x, "Real Space vector");// > //??? VecCreateMPIWithArray(PETSC_COMM_WORLD,1,(PetscInt)local_n0 > * N1,(PetscInt)N,(const PetscScalar*)data_out,&y);// > //??? PetscObjectSetName((PetscObject) y, "Frequency space vector");// > //??? VecCreateMPIWithArray(PETSC_COMM_WORLD,1,(PetscInt)local_n0 > * N1,(PetscInt)N,(const PetscScalar*)data_out2,&z);// > //??? PetscObjectSetName((PetscObject) z, "Reconstructed vector");// > // > //??? int FFT_rank = 1;// > //??? const ptrdiff_t FFTW_size[] = {matrix_size};// > //??? int howmany = last_owned_row_index - first_owned_row_index;// > //??? //std::cout << "Rank " << rank << " processes " << howmany > << " rows\n";// > //??? int idist = matrix_size;//1;// > //??? int odist = matrix_size;//1;// > //??? int istride = 1;//matrix_size;// > //??? int ostride = 1;//matrix_size;// > //??? const ptrdiff_t *inembed = FFTW_size, *onembed = FFTW_size;// > //??? fplan = fftw_mpi_plan_many_dft(FFT_rank, FFTW_size,// > //??? ??? ??? ??? ??? ??? ??? ??? ?? howmany,// > //??? ??? ??? ??? ??? ??? ??? ??? ?? FFTW_MPI_DEFAULT_BLOCK, > FFTW_MPI_DEFAULT_BLOCK,// > //??? ??? ??? ??? ??? ??? ??? ??? ?? data_in, data_out,// > //??? ??? ??? ??? ??? ??? ??? ??? ?? PETSC_COMM_WORLD,// > //??? ??? ??? ??? ??? ??? ??? ??? ?? FFTW_FORWARD, FFTW_ESTIMATE);// > //??? bplan = fftw_mpi_plan_many_dft(FFT_rank, FFTW_size,// > //??? ??? ??? ??? ??? ??? ??? ??? ?? howmany,// > //??? ??? ??? ??? ??? ??? ??? ??? ?? FFTW_MPI_DEFAULT_BLOCK, > FFTW_MPI_DEFAULT_BLOCK,// > //??? ??? ??? ??? ??? ??? ??? ??? ?? data_out, data_out2,// > //??? ??? ??? ??? ??? ??? ??? ??? ?? PETSC_COMM_WORLD,// > //??? ??? ??? ??? ??? ??? ??? ??? ?? FFTW_BACKWARD, FFTW_ESTIMATE);// > // > //??? if (false) {VecView(x,PETSC_VIEWER_STDOUT_WORLD);}// > // > //??? 
fftw_execute(fplan);// > //??? if (false) {VecView(y,PETSC_VIEWER_STDOUT_WORLD);}// > // > //??? fftw_execute(bplan);// > // > //??? double a = 1.0 / matrix_size;// > //??? double enorm = 0;// > //??? VecScale(z,a);// > //??? if (false) {VecView(z, PETSC_VIEWER_STDOUT_WORLD);}// > //??? VecAXPY(z,-1.0,x);// > //??? VecNorm(z,NORM_1,&enorm);// > //??? if (enorm > 1.e-11) {// > //??? ??? PetscPrintf(PETSC_COMM_SELF,"? Error norm of |x - z| > %g\n",(double)enorm);// > //??? }// > // > //??? /* Free spaces */// > //??? fftw_destroy_plan(fplan);// > //??? fftw_destroy_plan(bplan);// > //??? fftw_free(data_out2);// > // > //??? //Generate test matrix for comparison// > //??? arma::cx_mat fft_test_mat = local_mat;// > //??? fft_test_mat.each_row([&](arma::cx_rowvec &a){// > //??? ??? a = arma::fft(a);// > //??? });// > //??? std::cout << > "-----------------------------------------------------\n";// > //??? std::cout << "Input matrix:\n" << local_mat << '\n';// > //??? MatView(C, viewer);// > //??? std::cout << > "-----------------------------------------------------\n";// > //??? std::cout << "Expected output matrix:\n" << fft_test_mat << > '\n';// > //??? MatView(F, viewer);// > //??? std::cout << > "-----------------------------------------------------\n";// > //??? MatDestroy(&FFT_A);// > //??? VecDestroy(&input);// > //??? VecDestroy(&output);// > //??? VecDestroy(&x);// > //??? VecDestroy(&y);// > //??? VecDestroy(&z);// > //??? MatDestroy(&C);// > //??? MatDestroy(&F);// > //??? PetscViewerDestroy(&viewer);// > //??? PetscFinalize();// > //??? return 0;// > //}/ > > For *mpirun -n 1* I get the expected output (i.e. armadillo and > PETSc/FFTW return the same result), but for *mpirun -n x* with x > > 1 every value which is not assigned to rank 0 is lost and set to > zero instead. Every value assigned to rank 0 is calculated > correctly, as far as I can see. Did I forget something here? > > I do not understand why your FFTW calls use the WORLD communicator. > Aren't they serial FFTs over the local rows? > > ? THanks, > > ? ? ?Matt > > Thanks, > > Roland > > Am 05.12.20 um 01:59 schrieb Barry Smith: >> >> ? Roland, >> >> ? ? If you store your matrix as described in a parallel PETSc >> dense matrix then you should be able to call? >> >> fftw_plan_many_dft() directly on the value obtained with >> MatDenseGetArray(). You just need to pass the arguments regarding >> column major ordering appropriately. Probably identically to what >> you do with your previous code. >> >> ? ?Barry >> >> >>> On Dec 4, 2020, at 6:47 AM, Roland Richter >>> > wrote: >>> >>> Ideally those FFTs could be handled in parallel, after they are >>> not depending on each other. Is that possible with MatFFT, or >>> should I rather use FFTW for that? >>> >>> Thanks, >>> >>> Roland >>> >>> Am 04.12.20 um 13:19 schrieb Matthew Knepley: >>>> On Fri, Dec 4, 2020 at 5:32 AM Roland Richter >>>> > wrote: >>>> >>>> Hei, >>>> >>>> I am currently working on a problem which requires a large >>>> amount of >>>> transformations of a field E(r, t) from time space to >>>> Fourier space E(r, >>>> w) and back. The field is described in a 2d-matrix, with >>>> the r-dimension >>>> along the columns and the t-dimension along the rows. >>>> >>>> For the transformation from time to frequency space and >>>> back I therefore >>>> have to apply a 1d-FFT operation over each row of my >>>> matrix. For my >>>> earlier attempts I used armadillo as matrix library and >>>> FFTW for doing >>>> the transformations. 
Here I could use fftw_plan_many_dft to >>>> do all FFTs >>>> at the same time. Unfortunately, armadillo does not support >>>> MPI, and >>>> therefore I had to switch to PETSc for larger matrices. >>>> >>>> Based on the examples (such as example 143) PETSc has a way >>>> of doing >>>> FFTs internally by creating an FFT object (using MatCreateFFT). >>>> Unfortunately, I can not see how I could use that object to >>>> conduct the >>>> operation described above without having to iterate over >>>> each row in my >>>> original matrix (i.e. doing it sequential, not in parallel). >>>> >>>> Ideally I could distribute the FFTs such over my nodes that >>>> each node >>>> takes several rows of the original matrix and applies the >>>> FFT to each of >>>> them. As example, for a matrix with a size of 4x4 and two >>>> nodes node 0 >>>> would take row 0 and 1, while node 1 takes row 2 and 3, to >>>> avoid >>>> unnecessary memory transfer between the nodes while >>>> conducting the FFTs. >>>> Is that something PETSc can do, too? >>>> >>>> >>>> The way I understand our setup (I did not write it), we use >>>> plan_many_dft to handle >>>> multiple dof FFTs, but these would be interlaced. You want many >>>> FFTs for non-interlaced >>>> storage, which is not something we do right now. You could >>>> definitely call FFTW directly >>>> if you want. >>>> >>>> Second, above it seems like you just want serial FFTs. You can >>>> definitely create a MatFFT >>>> with PETSC_COMM_SELF, and apply it to each row in the local >>>> rows, or create the plan >>>> yourself for the stack of rows. >>>> >>>> ? ?Thanks, >>>> >>>> ? ? ?Matt >>>> ? >>>> >>>> Thanks! >>>> >>>> Regards, >>>> >>>> Roland >>>> >>>> >>>> >>>> -- >>>> What most experimenters take for granted before they begin >>>> their experiments is infinitely more interesting than any >>>> results to which their experiments lead. >>>> -- Norbert Wiener >>>> >>>> https://www.cse.buffalo.edu/~knepley/ >>>> >> > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which > their experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Tue Dec 8 09:30:36 2020 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 8 Dec 2020 10:30:36 -0500 Subject: [petsc-users] Setting an unknown initial guess In-Reply-To: References: Message-ID: On Tue, Dec 8, 2020 at 8:58 AM David Schneider wrote: > Dear all, > > I'm using a KSPLSQR solver without preconditioning and configure the > solver once in the beginning. Afterwards, I solve my system multiple > times in a time-dependent system and I would like to use an initial > guess (from the previous solution). Currently, I use > `KSPSetInitialGuessNonzero` for this purpose, but it may happen that the > actual guess is zero. If the initial guess is zero, the solver fails to > converge, at least with the default configuration. Setting > `KSPSetInitialGuess` with `PETSC_FALSE` (which should also be the > default) zeros the guess out. Is there a (native) way to preserve the > initial guess, but still ensure convergence in the KSPsolver in case the > guess is zero? > I do not understand. If the solver does not converge for some initial guess, that is a function of the solver used. You might need a better preconditioner. 
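One way to get the behavior David asks for, as a sketch: test the guess before each solve and only claim a nonzero guess when there is one. The names ksp, b and u are assumptions; u is taken to hold the previous solution.

    /* Sketch: keep the previous solution as the guess, but fall back to the
       default zero-guess path whenever that vector happens to be zero. */
    PetscReal gnorm;
    VecNorm(u, NORM_2, &gnorm);
    KSPSetInitialGuessNonzero(ksp, gnorm > 0.0 ? PETSC_TRUE : PETSC_FALSE);
    KSPSolve(ksp, b, u);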
From knepley at gmail.com Tue Dec 8 09:34:03 2020
From: knepley at gmail.com (Matthew Knepley)
Date: Tue, 8 Dec 2020 10:34:03 -0500
Subject: [petsc-users] Usage of parallel FFT for doing batch 1d-FFTs over the columns of a dense 2d-matrix
In-Reply-To: <9c171fca-4049-6849-a1e2-4869490fda97@ntnu.no> References: <6c23f236-bea1-ded0-e356-35065c93b5de@ntnu.no> <01c888c3-8970-28df-4631-8a11c9b3a4a8@ntnu.no> <9c171fca-4049-6849-a1e2-4869490fda97@ntnu.no> Message-ID:

On Tue, Dec 8, 2020 at 9:42 AM Roland Richter wrote:
> I replaced the FFT code with the following:
> [...]
> Now I get the expected results also for *mpirun -n x* with x > 1, but only if my matrix size is an integer multiple of x; otherwise some parts of the resulting matrix are zeroed out. Therefore, I assume I made a mistake here with inembed and onembed, but I am not sure how they influence the result. Do you have any suggestions?

I don't know what those arguments mean.
Thanks, Matt > Thanks, > > Roland > Am 08.12.20 um 14:55 schrieb Matthew Knepley: > > On Tue, Dec 8, 2020 at 8:40 AM Roland Richter > wrote: > >> Dear all, >> >> I tried the following code: >> >> *int main(int argc, char **args) {* >> * Mat C, F;* >> * Vec x, y, z;* >> * PetscViewer viewer;* >> * PetscMPIInt rank, size;* >> * PetscInitialize(&argc, &args, (char*) 0, help);* >> >> * MPI_Comm_size(PETSC_COMM_WORLD, &size);* >> * MPI_Comm_rank(PETSC_COMM_WORLD, &rank);* >> >> * PetscPrintf(PETSC_COMM_WORLD,"Number of processors = %d, rank = >> %d\n", size, rank);* >> * // std::cout << "From rank " << rank << '\n';* >> >> * //MatCreate(PETSC_COMM_WORLD, &C);* >> * PetscViewerCreate(PETSC_COMM_WORLD, &viewer);* >> * PetscViewerSetType(viewer, PETSCVIEWERASCII);* >> * arma::cx_mat local_mat, local_zero_mat;* >> * const size_t matrix_size = 5;* >> >> * MatCreateDense(PETSC_COMM_WORLD, PETSC_DECIDE, PETSC_DECIDE, >> matrix_size, matrix_size, NULL, &C);* >> * MatCreateDense(PETSC_COMM_WORLD, PETSC_DECIDE, PETSC_DECIDE, >> matrix_size, matrix_size, NULL, &F);* >> * if(rank == 0) {* >> * arma::Col indices = arma::linspace>(0, >> matrix_size - 1, matrix_size);* >> * //if(rank == 0) {* >> * local_mat = arma::randu(matrix_size, matrix_size);* >> * local_zero_mat = arma::zeros(matrix_size, >> matrix_size);* >> * arma::cx_mat tmp_mat = local_mat.st ();* >> * MatSetValues(C, matrix_size, indices.memptr(), matrix_size, >> indices.memptr(), tmp_mat.memptr(), INSERT_VALUES);* >> * MatSetValues(F, matrix_size, indices.memptr(), matrix_size, >> indices.memptr(), local_zero_mat.memptr(), INSERT_VALUES);* >> * }* >> >> * MatAssemblyBegin(C, MAT_FINAL_ASSEMBLY);* >> * MatAssemblyEnd(C, MAT_FINAL_ASSEMBLY);* >> * MatAssemblyBegin(F, MAT_FINAL_ASSEMBLY);* >> * MatAssemblyEnd(F, MAT_FINAL_ASSEMBLY);* >> >> * //FFT test* >> * Mat FFT_A;* >> * Vec input, output;* >> * int first_owned_row_index = 0, last_owned_row_index = 0;* >> * const int FFT_length[] = {matrix_size};* >> >> >> * MatCreateFFT(PETSC_COMM_WORLD, 1, FFT_length, MATFFTW, &FFT_A);* >> * MatCreateVecsFFTW(FFT_A, &x, &y, &z);* >> * VecCreate(PETSC_COMM_WORLD, &input);* >> * VecSetFromOptions(input);* >> * VecSetSizes(input, PETSC_DECIDE, matrix_size);* >> * VecCreate(PETSC_COMM_WORLD, &output);* >> * VecSetFromOptions(output);* >> * VecSetSizes(output, PETSC_DECIDE, matrix_size);* >> * MatGetOwnershipRange(C, &first_owned_row_index, >> &last_owned_row_index);* >> * std::cout << "Rank " << rank << " owns row " << >> first_owned_row_index << " to row " << last_owned_row_index << '\n';* >> >> * //Testing FFT* >> >> * /*---------------------------------------------------------*/* >> * fftw_plan fplan,bplan;* >> * fftw_complex *data_in,*data_out,*data_out2;* >> * ptrdiff_t alloc_local, local_ni, local_i_start, >> local_n0,local_0_start;* >> * PetscRandom rdm;* >> >> * // if (!rank)* >> * // printf("Use FFTW without PETSc-FFTW interface\n");* >> * fftw_mpi_init();* >> * int N = matrix_size * matrix_size;* >> * int N0 = matrix_size;* >> * int N1 = matrix_size;* >> * const ptrdiff_t n_data[] = {N0, 1};* >> * //alloc_local = >> fftw_mpi_local_size_2d(N0,N1,PETSC_COMM_WORLD,&local_n0,&local_0_start);* >> * alloc_local = fftw_mpi_local_size_many(1, n_data,* >> * matrix_size,* >> * FFTW_MPI_DEFAULT_BLOCK,* >> * PETSC_COMM_WORLD,* >> * &local_n0,* >> * &local_0_start);* >> * //data_in = >> (fftw_complex*)fftw_malloc(sizeof(fftw_complex)*alloc_local);* >> * PetscScalar *C_ptr, *F_ptr;* >> * MatDenseGetArray(C, &C_ptr);* >> * MatDenseGetArray(F, &F_ptr);* >> * data_in = 
reinterpret_cast(C_ptr);* >> * data_out = reinterpret_cast(F_ptr);* >> * data_out2 = >> (fftw_complex*)fftw_malloc(sizeof(fftw_complex)*alloc_local);* >> >> >> * VecCreateMPIWithArray(PETSC_COMM_WORLD,1,(PetscInt)local_n0 * >> N1,(PetscInt)N,(const PetscScalar*)data_in,&x);* >> * PetscObjectSetName((PetscObject) x, "Real Space vector");* >> * VecCreateMPIWithArray(PETSC_COMM_WORLD,1,(PetscInt)local_n0 * >> N1,(PetscInt)N,(const PetscScalar*)data_out,&y);* >> * PetscObjectSetName((PetscObject) y, "Frequency space vector");* >> * VecCreateMPIWithArray(PETSC_COMM_WORLD,1,(PetscInt)local_n0 * >> N1,(PetscInt)N,(const PetscScalar*)data_out2,&z);* >> * PetscObjectSetName((PetscObject) z, "Reconstructed vector");* >> >> * int FFT_rank = 1;* >> * const ptrdiff_t FFTW_size[] = {matrix_size};* >> * int howmany = last_owned_row_index - first_owned_row_index;* >> * //std::cout << "Rank " << rank << " processes " << howmany << " >> rows\n";* >> * int idist = matrix_size;//1;* >> * int odist = matrix_size;//1;* >> * int istride = 1;//matrix_size;* >> * int ostride = 1;//matrix_size;* >> * const ptrdiff_t *inembed = FFTW_size, *onembed = FFTW_size;* >> * fplan = fftw_mpi_plan_many_dft(FFT_rank, FFTW_size,* >> * howmany,* >> * FFTW_MPI_DEFAULT_BLOCK, >> FFTW_MPI_DEFAULT_BLOCK,* >> * data_in, data_out,* >> * PETSC_COMM_WORLD,* >> * FFTW_FORWARD, FFTW_ESTIMATE);* >> * bplan = fftw_mpi_plan_many_dft(FFT_rank, FFTW_size,* >> * howmany,* >> * FFTW_MPI_DEFAULT_BLOCK, >> FFTW_MPI_DEFAULT_BLOCK,* >> * data_out, data_out2,* >> * PETSC_COMM_WORLD,* >> * FFTW_BACKWARD, FFTW_ESTIMATE);* >> >> * if (false) {VecView(x,PETSC_VIEWER_STDOUT_WORLD);}* >> >> * fftw_execute(fplan);* >> * if (false) {VecView(y,PETSC_VIEWER_STDOUT_WORLD);}* >> >> * fftw_execute(bplan);* >> >> * double a = 1.0 / matrix_size;* >> * double enorm = 0;* >> * VecScale(z,a);* >> * if (false) {VecView(z, PETSC_VIEWER_STDOUT_WORLD);}* >> * VecAXPY(z,-1.0,x);* >> * VecNorm(z,NORM_1,&enorm);* >> * if (enorm > 1.e-11) {* >> * PetscPrintf(PETSC_COMM_SELF," Error norm of |x - z| >> %g\n",(double)enorm);* >> * }* >> >> * /* Free spaces */* >> * fftw_destroy_plan(fplan);* >> * fftw_destroy_plan(bplan);* >> * fftw_free(data_out2);* >> >> * //Generate test matrix for comparison* >> * arma::cx_mat fft_test_mat = local_mat;* >> * fft_test_mat.each_row([&](arma::cx_rowvec &a){* >> * a = arma::fft(a);* >> * });* >> * std::cout << >> "-----------------------------------------------------\n";* >> * std::cout << "Input matrix:\n" << local_mat << '\n';* >> * MatView(C, viewer);* >> * std::cout << >> "-----------------------------------------------------\n";* >> * std::cout << "Expected output matrix:\n" << fft_test_mat << '\n';* >> * MatView(F, viewer);* >> * std::cout << >> "-----------------------------------------------------\n";* >> * MatDestroy(&FFT_A);* >> * VecDestroy(&input);* >> * VecDestroy(&output);* >> * VecDestroy(&x);* >> * VecDestroy(&y);* >> * VecDestroy(&z);* >> * MatDestroy(&C);* >> * MatDestroy(&F);* >> * PetscViewerDestroy(&viewer);* >> * PetscFinalize();* >> * return 0;* >> *}* >> >> For *mpirun -n 1* I get the expected output (i.e. armadillo and >> PETSc/FFTW return the same result), but for *mpirun -n x* with x > 1 >> every value which is not assigned to rank 0 is lost and set to zero >> instead. Every value assigned to rank 0 is calculated correctly, as far as >> I can see. Did I forget something here? >> > I do not understand why your FFTW calls use the WORLD communicator. Aren't > they serial FFTs over the local rows? 
> > Thanks,
> >
> > Matt
>
>> Thanks,
>>
>> Roland
>> On 05.12.20 at 01:59, Barry Smith wrote:
>>
>>
>> Roland,
>>
>> If you store your matrix as described in a parallel PETSc dense
>> matrix then you should be able to call
>>
>> fftw_plan_many_dft() directly on the value obtained with
>> MatDenseGetArray(). You just need to pass the arguments regarding column
>> major ordering appropriately. Probably identically to what you do with your
>> previous code.
>>
>> Barry
>>
>>
>> On Dec 4, 2020, at 6:47 AM, Roland Richter wrote:
>>
>> Ideally those FFTs could be handled in parallel, since they do not
>> depend on each other. Is that possible with MatFFT, or should I rather
>> use FFTW for that?
>>
>> Thanks,
>>
>> Roland
>> On 04.12.20 at 13:19, Matthew Knepley wrote:
>>
>> On Fri, Dec 4, 2020 at 5:32 AM Roland Richter wrote:
>>
>>> Hei,
>>>
>>> I am currently working on a problem which requires a large number of
>>> transformations of a field E(r, t) from time space to Fourier space E(r, w)
>>> and back. The field is described in a 2d-matrix, with the r-dimension
>>> along the columns and the t-dimension along the rows.
>>>
>>> For the transformation from time to frequency space and back I therefore
>>> have to apply a 1d-FFT operation over each row of my matrix. For my
>>> earlier attempts I used armadillo as the matrix library and FFTW for doing
>>> the transformations. Here I could use fftw_plan_many_dft to do all FFTs
>>> at the same time. Unfortunately, armadillo does not support MPI, and
>>> therefore I had to switch to PETSc for larger matrices.
>>>
>>> Based on the examples (such as example 143) PETSc has a way of doing
>>> FFTs internally by creating an FFT object (using MatCreateFFT).
>>> Unfortunately, I cannot see how I could use that object to conduct the
>>> operation described above without having to iterate over each row in my
>>> original matrix (i.e. doing it sequentially, not in parallel).
>>>
>>> Ideally I could distribute the FFTs over my nodes such that each node
>>> takes several rows of the original matrix and applies the FFT to each of
>>> them. As an example, for a matrix of size 4x4 and two nodes, node 0
>>> would take rows 0 and 1, while node 1 takes rows 2 and 3, to avoid
>>> unnecessary memory transfer between the nodes while conducting the FFTs.
>>> Is that something PETSc can do, too?
>>>
>>
>> The way I understand our setup (I did not write it), we use plan_many_dft
>> to handle multiple dof FFTs, but these would be interlaced. You want many
>> FFTs for non-interlaced storage, which is not something we do right now.
>> You could definitely call FFTW directly if you want.
>>
>> Second, above it seems like you just want serial FFTs. You can definitely
>> create a MatFFT with PETSC_COMM_SELF, and apply it to each row in the
>> local rows, or create the plan yourself for the stack of rows.
>>
>> Thanks,
>>
>> Matt
>>
>>
>>> Thanks!
>>>
>>> Regards,
>>>
>>> Roland
>>>
>>>
>>
>> --
>> What most experimenters take for granted before they begin their
>> experiments is infinitely more interesting than any results to which their
>> experiments lead.
> -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From roland.richter at ntnu.no Tue Dec 8 12:13:07 2020 From: roland.richter at ntnu.no (Roland Richter) Date: Tue, 8 Dec 2020 19:13:07 +0100 Subject: [petsc-users] Multiplying a row-vector to each row in a dense matrix, sliced or full Message-ID: <799f1dae-7e52-e836-09fd-60016d95a26c@ntnu.no> Hei, I would like to multiply a row-vector to each row in a dense matrix, either full or sliced (i.e. if the row-vector is larger than the row length of the matrix). Armadillo offers a each_row()-function, where I can iterate over all rows in a matrix and multiply the vector to them (similar to the operation VecPointwiseMult()). Is there a similar operation in PETSc? Ideally with the option of only multiplying a part/slice of the row vector to each row, if the corresponding row of the target matrix is shorter than the initial row vector. Thanks, Roland From knepley at gmail.com Tue Dec 8 12:26:35 2020 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 8 Dec 2020 13:26:35 -0500 Subject: [petsc-users] Multiplying a row-vector to each row in a dense matrix, sliced or full In-Reply-To: <799f1dae-7e52-e836-09fd-60016d95a26c@ntnu.no> References: <799f1dae-7e52-e836-09fd-60016d95a26c@ntnu.no> Message-ID: On Tue, Dec 8, 2020 at 1:13 PM Roland Richter wrote: > Hei, > > I would like to multiply a row-vector to each row in a dense matrix, > either full or sliced (i.e. if the row-vector is larger than the row > length of the matrix). Armadillo offers a each_row()-function, where I > can iterate over all rows in a matrix and multiply the vector to them > (similar to the operation VecPointwiseMult()). Is there a similar > operation in PETSc? Ideally with the option of only multiplying a > part/slice of the row vector to each row, if the corresponding row of > the target matrix is shorter than the initial row vector. > It helps to write in linear algebra notation so that we can be sure we are talking about the same thing. Say we have the matrix A and vector v A = / a b \ v = \ c d / and you want A * m = / ma nb \ = / a b \ / m 0 \ = A . diag(v) \ mc nd / \ c d / \ 0 n / which you can get using https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatDiagonalScale.html is that what you want? I do not have a clear picture of what you want slicing for. Thanks, Matt > Thanks, > > Roland > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From anton.glazkov at chch.ox.ac.uk Tue Dec 8 15:37:42 2020 From: anton.glazkov at chch.ox.ac.uk (Anton Glazkov) Date: Tue, 8 Dec 2020 21:37:42 +0000 Subject: [petsc-users] TSAdjoint multilevel checkpointing running out of memory Message-ID: Good evening, I?m attempting to run a multi-level checkpointing code on a cluster (ie RAM+disk storage with ?download-revolve as a configure option) with the options ?-ts_trajectory_type memory -ts_trajectory_max_cps_ram 5 -ts_trajectory_max_cps_disk 5000?, for example. 
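For reference, a minimal (untested) sketch of setting the same trajectory options from within the code rather than on the command line; `ts` is assumed to be a TS object created and configured elsewhere, and the option values simply mirror the ones quoted above:

    TS ts;                                    /* assumed created/configured elsewhere */
    TSSetSaveTrajectory(ts);                  /* enable trajectory storage for the adjoint run */
    PetscOptionsSetValue(NULL, "-ts_trajectory_type", "memory");
    PetscOptionsSetValue(NULL, "-ts_trajectory_max_cps_ram", "5");
    PetscOptionsSetValue(NULL, "-ts_trajectory_max_cps_disk", "5000");
    /* the TSTrajectory object reads these options during TSSolve() */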
My question is, if I have 100,000 time points, for example, that need to be evaluated during the forward and adjoint run, does TSAdjoint automatically optimize the checkpointing so that the number of checkpoints in RAM and disk does not exceed these values, or is one of the options ignored? I ask because I have a case that runs correctly with -ts_trajectory_type basic, but runs out of memory when attempting to fill the checkpoints in RAM when running the adjoint (I have verified that 5 checkpoints will actually fit into the available memory). This makes me think that maybe the -ts_trajectory_max_cps_ram 5 option is being ignored?

Best wishes,
Anton
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From roland.richter at ntnu.no Tue Dec 8 16:16:29 2020
From: roland.richter at ntnu.no (Roland Richter)
Date: Tue, 8 Dec 2020 23:16:29 +0100
Subject: [petsc-users] Multiplying a row-vector to each row in a dense matrix, sliced or full
In-Reply-To: References: <799f1dae-7e52-e836-09fd-60016d95a26c@ntnu.no>
Message-ID:

Yes, that would be exactly what I need, I have not thought about that possibility, thanks!

Concerning slicing: Assume my matrix A and vector v are defined by

A = |a b|
    |c d|
v = |w x y z|

Since v is larger than the row size of A, I can only take some elements for multiplication and therefore I have to use only a slice of vector v:

A*v[1:2] = |xa yb|
           |xc yd|

or

A*v[0:1] = |wa xb|
           |wc xd|

How could I do that?

Thanks,
Roland

On 08.12.2020 at 19:26, Matthew Knepley wrote:
> On Tue, Dec 8, 2020 at 1:13 PM Roland Richter wrote:
>
> Hei,
>
> I would like to multiply a row-vector to each row in a dense matrix,
> either full or sliced (i.e. if the row-vector is larger than the row
> length of the matrix). Armadillo offers an each_row()-function, where I
> can iterate over all rows in a matrix and multiply the vector to them
> (similar to the operation VecPointwiseMult()). Is there a similar
> operation in PETSc? Ideally with the option of only multiplying a
> part/slice of the row vector to each row, if the corresponding row of
> the target matrix is shorter than the initial row vector.
>
>
> It helps to write in linear algebra notation so that we can be sure we
> are talking about the same thing. Say we have the matrix A and vector v
>
>   A = / a b \   v = / m \
>       \ c d /       \ n /
>
> and you want
>
>   A . diag(v) = / ma nb \ = / a b \ / m 0 \
>                 \ mc nd /   \ c d / \ 0 n /
>
> which you can get using
> https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatDiagonalScale.html
>
> is that what you want? I do not have a clear picture of what you want
> slicing for.
>
>   Thanks,
>
>     Matt
>
> Thanks,
>
> Roland
>
>
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which
> their experiments lead.
> -- Norbert Wiener
>
> https://www.cse.buffalo.edu/~knepley/
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From hongzhang at anl.gov Tue Dec 8 17:47:50 2020
From: hongzhang at anl.gov (Zhang, Hong)
Date: Tue, 8 Dec 2020 23:47:50 +0000
Subject: [petsc-users] TSAdjoint multilevel checkpointing running out of memory
In-Reply-To: References: Message-ID:

Anton,

TSAdjoint should manage checkpointing automatically, and the number of checkpoints in RAM and disk should not exceed the user-specified values.
Can you send us the output for -ts_trajectory_monitor in your case? Hong (Mr.) On Dec 8, 2020, at 3:37 PM, Anton Glazkov > wrote: Good evening, I?m attempting to run a multi-level checkpointing code on a cluster (ie RAM+disk storage with ?download-revolve as a configure option) with the options ?-ts_trajectory_type memory -ts_trajectory_max_cps_ram 5 -ts_trajectory_max_cps_disk 5000?, for example. My question is, if I have 100,000 time points, for example, that need to be evaluated during the forward and adjoint run, does TSAdjoint automatically optimize the checkpointing so that the number of checkpoints in RAM and disk do not exceed these values, or is one of the options ignored. I ask because I have a case that runs correctly with -ts_trajectory_type basic, but runs out of memory when attempting to fill the checkpoints in RAM when running the adjoint (I have verified that 5 checkpoints will actually fit into the available memory). This makes me think that maybe the -ts_trajectory_max_cps_ram 5 option is being ignored? Best wishes, Anton -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Tue Dec 8 19:37:53 2020 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 8 Dec 2020 20:37:53 -0500 Subject: [petsc-users] TSAdjoint multilevel checkpointing running out of memory In-Reply-To: References: Message-ID: On Tue, Dec 8, 2020 at 6:47 PM Zhang, Hong via petsc-users < petsc-users at mcs.anl.gov> wrote: > Anton, > > TSAdjoint should manage checkpointing automatically, and the number of > checkpoints in RAM and disk should not exceed the user-specified values. > Can you send us the output for -ts_trajectory_monitor in your case? > One other thing. It is always possible to miscalculate RAM a little. If you set it to 4 checkpoints, does it complete? Thanks, Matt > Hong (Mr.) > > On Dec 8, 2020, at 3:37 PM, Anton Glazkov > wrote: > > Good evening, > > I?m attempting to run a multi-level checkpointing code on a cluster (ie > RAM+disk storage with ?download-revolve as a configure option) with the > options ?-ts_trajectory_type memory -ts_trajectory_max_cps_ram 5 > -ts_trajectory_max_cps_disk 5000?, for example. My question is, if I have > 100,000 time points, for example, that need to be evaluated during the > forward and adjoint run, does TSAdjoint automatically optimize the > checkpointing so that the number of checkpoints in RAM and disk do not > exceed these values, or is one of the options ignored. I ask because I have > a case that runs correctly with -ts_trajectory_type basic, but runs out of > memory when attempting to fill the checkpoints in RAM when running the > adjoint (I have verified that 5 checkpoints will actually fit into the > available memory). This makes me think that maybe the > -ts_trajectory_max_cps_ram 5 option is being ignored? > > Best wishes, > Anton > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From bsmith at petsc.dev Tue Dec 8 20:04:22 2020 From: bsmith at petsc.dev (Barry Smith) Date: Tue, 8 Dec 2020 20:04:22 -0600 Subject: [petsc-users] Multiplying a row-vector to each row in a dense matrix, sliced or full In-Reply-To: References: <799f1dae-7e52-e836-09fd-60016d95a26c@ntnu.no> Message-ID: <7D7D4031-D26E-4432-BACE-600093779B3D@petsc.dev> Roland, You would need an algorithm to decide which entries of the vector you wish to use and then make sure those entries end up on the appropriate MPI process before the matrix vector product. You would use VecScatterCreate() and VecScatterBegin/End to move the vector entries to the correct location. See the manual page for MatCreateMPIAIJ() or the users manual for a discussion of the "right" vector used in matrix-vector products. For example, Say v has 2 extra entries than the number of rows of A. Vec right; IS is; MatCreateVecs(A,&right,NULL); // right will be used to store the truncated v vector VecScatterCreate(PETSC_COMM_WORLD,v,is,right,NULL,&scatter); VecScatterBegin(scatter,v,right,INSERT_VALUES,{SCATTER_FORWARD); VecScatterEnd(scatter,v,right,INSERT_VALUES,{SCATTER_FORWARD); MatDiagonalScale(A,NULL,right); The "is" above that you need provide determines which entries of v you keep (or you could say which entries you do not keep). In say a trivial case on one process , if v has 12 entries and you just want to keep the first 10 then you would use ISCreateStride(PETSC_COMM_SELF,10,0,1,&is); If you want to drop specific entries "in the middle" of the v then you would use ISCreateGeneral() to list the entries you wish to keep. In parallel things are slightly trickier because each process needs to list in its part of "is" the entries needed for its part of the right of the right vector. So say v = [ w u ; q r] where w, u, q, r are numbers and ; indicates the split between the first and second MPI rank and right has entries [w ; q r] then is on the first rank needs to have one entry (0) to grab the w and is on the second rank needs two entries (2,3) to grab the q and r. (Note the is entries are in global numbering across all the ranks). Barry > On Dec 8, 2020, at 4:16 PM, Roland Richter wrote: > > Yes, that would be exactly what I need, I have not thought about that possibility, thanks! > > Concerning slicing: Assumed my matrix A and vector v are defined by > > A = |a b| > |c d| > v = |w x y z| > > After v is larger than the row size of A, I only can take some elements for multiplication and therefore I have to use only a slice of vector v: > > A*v[1:2] = |xa yb| > |xc yd| > > or > > A*v[0:1] = |wa xb| > |wc xd| > > How could I do that? > > Thanks, > > Roland > > > > Am 08.12.2020 um 19:26 schrieb Matthew Knepley: >> On Tue, Dec 8, 2020 at 1:13 PM Roland Richter > wrote: >> Hei, >> >> I would like to multiply a row-vector to each row in a dense matrix, >> either full or sliced (i.e. if the row-vector is larger than the row >> length of the matrix). Armadillo offers a each_row()-function, where I >> can iterate over all rows in a matrix and multiply the vector to them >> (similar to the operation VecPointwiseMult()). Is there a similar >> operation in PETSc? Ideally with the option of only multiplying a >> part/slice of the row vector to each row, if the corresponding row of >> the target matrix is shorter than the initial row vector. >> >> It helps to write in linear algebra notation so that we can be sure we are talking >> about the same thing. 
Say we have the matrix A and vector v
>>
>>   A = / a b \   v = / m \
>>       \ c d /       \ n /
>>
>> and you want
>>
>>   A . diag(v) = / ma nb \ = / a b \ / m 0 \
>>                 \ mc nd /   \ c d / \ 0 n /
>>
>> which you can get using
>> https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatDiagonalScale.html
>>
>> is that what you want? I do not have a clear picture of what you want
>> slicing for.
>>
>> Thanks,
>>
>> Matt
>>
>> Thanks,
>>
>> Roland
>>
>>
>> --
>> What most experimenters take for granted before they begin their
>> experiments is infinitely more interesting than any results to which
>> their experiments lead.
>> -- Norbert Wiener
>>
>> https://www.cse.buffalo.edu/~knepley/
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From bsmith at petsc.dev Tue Dec 8 20:13:34 2020
From: bsmith at petsc.dev (Barry Smith)
Date: Tue, 8 Dec 2020 20:13:34 -0600
Subject: [petsc-users] Usage of parallel FFT for doing batch 1d-FFTs over the columns of a dense 2d-matrix
In-Reply-To: <9c171fca-4049-6849-a1e2-4869490fda97@ntnu.no>
References: <6c23f236-bea1-ded0-e356-35065c93b5de@ntnu.no> <01c888c3-8970-28df-4631-8a11c9b3a4a8@ntnu.no> <9c171fca-4049-6849-a1e2-4869490fda97@ntnu.no>
Message-ID:

Roland,

The

> inembed, istride, idist,
> data_out, onembed, ostride, odist,

variables all need to be set rank by rank (since each rank is providing a matrix with its own number of local rows, which may be different on other ranks), and not using the global matrix_size or size (which I assume comes from MPI_Comm_size()?). Use MatGetLocalSize(mat,&numrows,NULL); to get the number of rows on a process. Use this to set the needed FFTW parameters. The reason the current code works when "my matrix size is an integer multiple of x" is that then all the ranks have the same numrows and your formula produces the correct local size, but otherwise your formula will not produce the correct local number of rows.

Barry

> On Dec 8, 2020, at 8:42 AM, Roland Richter wrote:
>
> I replaced the FFT-code with the following:
>
> int FFT_rank = 1;
> const int FFTW_size[] = {matrix_size};
> int howmany = last_owned_row_index - first_owned_row_index;
> int idist = 1;
> int odist = 1;
> int istride = matrix_size / size;
> int ostride = matrix_size / size;
> const int inembed[] = {matrix_size}, onembed[] = {matrix_size};
> fplan = fftw_plan_many_dft(FFT_rank, FFTW_size,
>                            howmany,
>                            data_in, inembed, istride, idist,
>                            data_out, onembed, ostride, odist,
>                            FFTW_FORWARD, FFTW_ESTIMATE);
> bplan = fftw_plan_many_dft(FFT_rank, FFTW_size,
>                            howmany,
>                            data_out, inembed, istride, idist,
>                            data_out2, onembed, ostride, odist,
>                            FFTW_BACKWARD, FFTW_ESTIMATE);
>
> Now I get the expected results also for mpirun -n x with x > 1, but only if my matrix size is an integer multiple of x, else some parts of the resulting matrix are zeroed out. Therefore, I assume I made a mistake here with inembed and onembed, but I am not sure how they influence the result. Do you have any suggestions?
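As a concrete (untested) sketch of what Barry suggests, assuming PETSc was built with complex scalars so that a PetscScalar array can be viewed as fftw_complex, and reusing C and matrix_size from the code above: the local array from MatDenseGetArray() is column-major, so consecutive elements of one row are separated by the leading dimension (from MatDenseGetLDA()), while consecutive rows start one element apart:

    PetscInt local_rows, lda;
    PetscScalar *C_ptr;
    MatGetLocalSize(C, &local_rows, NULL);   /* rows owned by this rank, valid for any row split */
    MatDenseGetLDA(C, &lda);                 /* leading dimension of the local column-major array */
    MatDenseGetArray(C, &C_ptr);

    const int n[] = {(int)matrix_size};      /* length of each 1d FFT = number of columns */
    fftw_complex *data = (fftw_complex *)C_ptr;
    /* istride = lda, idist = 1: row elements are lda apart, rows start 1 apart */
    fftw_plan fplan = fftw_plan_many_dft(1, n, (int)local_rows,
                                         data, NULL, (int)lda, 1,
                                         data, NULL, (int)lda, 1,
                                         FFTW_FORWARD, FFTW_ESTIMATE);
    fftw_execute(fplan);                     /* in-place forward FFT of every local row */
    fftw_destroy_plan(fplan);
    MatDenseRestoreArray(C, &C_ptr);

This is a plain (non-MPI) FFTW plan created independently on each rank; no communication is needed because every row lives entirely on one process.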
> > Thanks, > > Roland > > Am 08.12.20 um 14:55 schrieb Matthew Knepley: >> On Tue, Dec 8, 2020 at 8:40 AM Roland Richter > wrote: >> Dear all, >> >> I tried the following code: >> >> int main(int argc, char **args) { >> Mat C, F; >> Vec x, y, z; >> PetscViewer viewer; >> PetscMPIInt rank, size; >> PetscInitialize(&argc, &args, (char*) 0, help); >> >> MPI_Comm_size(PETSC_COMM_WORLD, &size); >> MPI_Comm_rank(PETSC_COMM_WORLD, &rank); >> >> PetscPrintf(PETSC_COMM_WORLD,"Number of processors = %d, rank = %d\n", size, rank); >> // std::cout << "From rank " << rank << '\n'; >> >> //MatCreate(PETSC_COMM_WORLD, &C); >> PetscViewerCreate(PETSC_COMM_WORLD, &viewer); >> PetscViewerSetType(viewer, PETSCVIEWERASCII); >> arma::cx_mat local_mat, local_zero_mat; >> const size_t matrix_size = 5; >> >> MatCreateDense(PETSC_COMM_WORLD, PETSC_DECIDE, PETSC_DECIDE, matrix_size, matrix_size, NULL, &C); >> MatCreateDense(PETSC_COMM_WORLD, PETSC_DECIDE, PETSC_DECIDE, matrix_size, matrix_size, NULL, &F); >> if(rank == 0) { >> arma::Col indices = arma::linspace>(0, matrix_size - 1, matrix_size); >> //if(rank == 0) { >> local_mat = arma::randu(matrix_size, matrix_size); >> local_zero_mat = arma::zeros(matrix_size, matrix_size); >> arma::cx_mat tmp_mat = local_mat.st (); >> MatSetValues(C, matrix_size, indices.memptr(), matrix_size, indices.memptr(), tmp_mat.memptr(), INSERT_VALUES); >> MatSetValues(F, matrix_size, indices.memptr(), matrix_size, indices.memptr(), local_zero_mat.memptr(), INSERT_VALUES); >> } >> >> MatAssemblyBegin(C, MAT_FINAL_ASSEMBLY); >> MatAssemblyEnd(C, MAT_FINAL_ASSEMBLY); >> MatAssemblyBegin(F, MAT_FINAL_ASSEMBLY); >> MatAssemblyEnd(F, MAT_FINAL_ASSEMBLY); >> >> //FFT test >> Mat FFT_A; >> Vec input, output; >> int first_owned_row_index = 0, last_owned_row_index = 0; >> const int FFT_length[] = {matrix_size}; >> >> >> MatCreateFFT(PETSC_COMM_WORLD, 1, FFT_length, MATFFTW, &FFT_A); >> MatCreateVecsFFTW(FFT_A, &x, &y, &z); >> VecCreate(PETSC_COMM_WORLD, &input); >> VecSetFromOptions(input); >> VecSetSizes(input, PETSC_DECIDE, matrix_size); >> VecCreate(PETSC_COMM_WORLD, &output); >> VecSetFromOptions(output); >> VecSetSizes(output, PETSC_DECIDE, matrix_size); >> MatGetOwnershipRange(C, &first_owned_row_index, &last_owned_row_index); >> std::cout << "Rank " << rank << " owns row " << first_owned_row_index << " to row " << last_owned_row_index << '\n'; >> >> //Testing FFT >> >> /*---------------------------------------------------------*/ >> fftw_plan fplan,bplan; >> fftw_complex *data_in,*data_out,*data_out2; >> ptrdiff_t alloc_local, local_ni, local_i_start, local_n0,local_0_start; >> PetscRandom rdm; >> >> // if (!rank) >> // printf("Use FFTW without PETSc-FFTW interface\n"); >> fftw_mpi_init(); >> int N = matrix_size * matrix_size; >> int N0 = matrix_size; >> int N1 = matrix_size; >> const ptrdiff_t n_data[] = {N0, 1}; >> //alloc_local = fftw_mpi_local_size_2d(N0,N1,PETSC_COMM_WORLD,&local_n0,&local_0_start); >> alloc_local = fftw_mpi_local_size_many(1, n_data, >> matrix_size, >> FFTW_MPI_DEFAULT_BLOCK, >> PETSC_COMM_WORLD, >> &local_n0, >> &local_0_start); >> //data_in = (fftw_complex*)fftw_malloc(sizeof(fftw_complex)*alloc_local); >> PetscScalar *C_ptr, *F_ptr; >> MatDenseGetArray(C, &C_ptr); >> MatDenseGetArray(F, &F_ptr); >> data_in = reinterpret_cast(C_ptr); >> data_out = reinterpret_cast(F_ptr); >> data_out2 = (fftw_complex*)fftw_malloc(sizeof(fftw_complex)*alloc_local); >> >> >> VecCreateMPIWithArray(PETSC_COMM_WORLD,1,(PetscInt)local_n0 * N1,(PetscInt)N,(const 
PetscScalar*)data_in,&x); >> PetscObjectSetName((PetscObject) x, "Real Space vector"); >> VecCreateMPIWithArray(PETSC_COMM_WORLD,1,(PetscInt)local_n0 * N1,(PetscInt)N,(const PetscScalar*)data_out,&y); >> PetscObjectSetName((PetscObject) y, "Frequency space vector"); >> VecCreateMPIWithArray(PETSC_COMM_WORLD,1,(PetscInt)local_n0 * N1,(PetscInt)N,(const PetscScalar*)data_out2,&z); >> PetscObjectSetName((PetscObject) z, "Reconstructed vector"); >> >> int FFT_rank = 1; >> const ptrdiff_t FFTW_size[] = {matrix_size}; >> int howmany = last_owned_row_index - first_owned_row_index; >> //std::cout << "Rank " << rank << " processes " << howmany << " rows\n"; >> int idist = matrix_size;//1; >> int odist = matrix_size;//1; >> int istride = 1;//matrix_size; >> int ostride = 1;//matrix_size; >> const ptrdiff_t *inembed = FFTW_size, *onembed = FFTW_size; >> fplan = fftw_mpi_plan_many_dft(FFT_rank, FFTW_size, >> howmany, >> FFTW_MPI_DEFAULT_BLOCK, FFTW_MPI_DEFAULT_BLOCK, >> data_in, data_out, >> PETSC_COMM_WORLD, >> FFTW_FORWARD, FFTW_ESTIMATE); >> bplan = fftw_mpi_plan_many_dft(FFT_rank, FFTW_size, >> howmany, >> FFTW_MPI_DEFAULT_BLOCK, FFTW_MPI_DEFAULT_BLOCK, >> data_out, data_out2, >> PETSC_COMM_WORLD, >> FFTW_BACKWARD, FFTW_ESTIMATE); >> >> if (false) {VecView(x,PETSC_VIEWER_STDOUT_WORLD);} >> >> fftw_execute(fplan); >> if (false) {VecView(y,PETSC_VIEWER_STDOUT_WORLD);} >> >> fftw_execute(bplan); >> >> double a = 1.0 / matrix_size; >> double enorm = 0; >> VecScale(z,a); >> if (false) {VecView(z, PETSC_VIEWER_STDOUT_WORLD);} >> VecAXPY(z,-1.0,x); >> VecNorm(z,NORM_1,&enorm); >> if (enorm > 1.e-11) { >> PetscPrintf(PETSC_COMM_SELF," Error norm of |x - z| %g\n",(double)enorm); >> } >> >> /* Free spaces */ >> fftw_destroy_plan(fplan); >> fftw_destroy_plan(bplan); >> fftw_free(data_out2); >> >> //Generate test matrix for comparison >> arma::cx_mat fft_test_mat = local_mat; >> fft_test_mat.each_row([&](arma::cx_rowvec &a){ >> a = arma::fft(a); >> }); >> std::cout << "-----------------------------------------------------\n"; >> std::cout << "Input matrix:\n" << local_mat << '\n'; >> MatView(C, viewer); >> std::cout << "-----------------------------------------------------\n"; >> std::cout << "Expected output matrix:\n" << fft_test_mat << '\n'; >> MatView(F, viewer); >> std::cout << "-----------------------------------------------------\n"; >> MatDestroy(&FFT_A); >> VecDestroy(&input); >> VecDestroy(&output); >> VecDestroy(&x); >> VecDestroy(&y); >> VecDestroy(&z); >> MatDestroy(&C); >> MatDestroy(&F); >> PetscViewerDestroy(&viewer); >> PetscFinalize(); >> return 0; >> } >> >> For mpirun -n 1 I get the expected output (i.e. armadillo and PETSc/FFTW return the same result), but for mpirun -n x with x > 1 every value which is not assigned to rank 0 is lost and set to zero instead. Every value assigned to rank 0 is calculated correctly, as far as I can see. Did I forget something here? >> >> I do not understand why your FFTW calls use the WORLD communicator. Aren't they serial FFTs over the local rows? >> >> THanks, >> >> Matt >> Thanks, >> >> Roland >> >> Am 05.12.20 um 01:59 schrieb Barry Smith: >>> >>> Roland, >>> >>> If you store your matrix as described in a parallel PETSc dense matrix then you should be able to call >>> >>> fftw_plan_many_dft() directly on the value obtained with MatDenseGetArray(). You just need to pass the arguments regarding column major ordering appropriately. Probably identically to what you do with your previous code. 
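The MatFFT alternative mentioned earlier in the thread (a serial FFT object on PETSC_COMM_SELF, applied to each locally owned row) might look like the following sketch; it again assumes complex scalars and reuses local_rows and lda as obtained from MatGetLocalSize()/MatDenseGetLDA() in the previous sketch:

    Mat FFT;
    Vec row_in, row_out;
    const PetscInt ncols = (PetscInt)matrix_size;
    const PetscInt dim[] = {ncols};
    PetscScalar *mat_arr, *in_arr;
    const PetscScalar *out_arr;

    MatCreateFFT(PETSC_COMM_SELF, 1, dim, MATFFTW, &FFT); /* one serial 1d FFT of length ncols */
    MatCreateVecs(FFT, &row_in, &row_out);
    MatDenseGetArray(C, &mat_arr);
    for (PetscInt r = 0; r < local_rows; ++r) {
      VecGetArray(row_in, &in_arr);
      for (PetscInt j = 0; j < ncols; ++j) in_arr[j] = mat_arr[r + j*lda]; /* gather row r */
      VecRestoreArray(row_in, &in_arr);
      MatMult(FFT, row_in, row_out);                      /* FFT of one row */
      VecGetArrayRead(row_out, &out_arr);
      for (PetscInt j = 0; j < ncols; ++j) mat_arr[r + j*lda] = out_arr[j]; /* scatter back */
      VecRestoreArrayRead(row_out, &out_arr);
    }
    MatDenseRestoreArray(C, &mat_arr);
    VecDestroy(&row_in); VecDestroy(&row_out); MatDestroy(&FFT);

Compared with calling FFTW directly, this keeps everything inside the PETSc API at the cost of one gather/scatter per row.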
>>> >>> Barry >>> >>> >>>> On Dec 4, 2020, at 6:47 AM, Roland Richter > wrote: >>>> >>>> Ideally those FFTs could be handled in parallel, after they are not depending on each other. Is that possible with MatFFT, or should I rather use FFTW for that? >>>> >>>> Thanks, >>>> >>>> Roland >>>> >>>> Am 04.12.20 um 13:19 schrieb Matthew Knepley: >>>>> On Fri, Dec 4, 2020 at 5:32 AM Roland Richter > wrote: >>>>> Hei, >>>>> >>>>> I am currently working on a problem which requires a large amount of >>>>> transformations of a field E(r, t) from time space to Fourier space E(r, >>>>> w) and back. The field is described in a 2d-matrix, with the r-dimension >>>>> along the columns and the t-dimension along the rows. >>>>> >>>>> For the transformation from time to frequency space and back I therefore >>>>> have to apply a 1d-FFT operation over each row of my matrix. For my >>>>> earlier attempts I used armadillo as matrix library and FFTW for doing >>>>> the transformations. Here I could use fftw_plan_many_dft to do all FFTs >>>>> at the same time. Unfortunately, armadillo does not support MPI, and >>>>> therefore I had to switch to PETSc for larger matrices. >>>>> >>>>> Based on the examples (such as example 143) PETSc has a way of doing >>>>> FFTs internally by creating an FFT object (using MatCreateFFT). >>>>> Unfortunately, I can not see how I could use that object to conduct the >>>>> operation described above without having to iterate over each row in my >>>>> original matrix (i.e. doing it sequential, not in parallel). >>>>> >>>>> Ideally I could distribute the FFTs such over my nodes that each node >>>>> takes several rows of the original matrix and applies the FFT to each of >>>>> them. As example, for a matrix with a size of 4x4 and two nodes node 0 >>>>> would take row 0 and 1, while node 1 takes row 2 and 3, to avoid >>>>> unnecessary memory transfer between the nodes while conducting the FFTs. >>>>> Is that something PETSc can do, too? >>>>> >>>>> The way I understand our setup (I did not write it), we use plan_many_dft to handle >>>>> multiple dof FFTs, but these would be interlaced. You want many FFTs for non-interlaced >>>>> storage, which is not something we do right now. You could definitely call FFTW directly >>>>> if you want. >>>>> >>>>> Second, above it seems like you just want serial FFTs. You can definitely create a MatFFT >>>>> with PETSC_COMM_SELF, and apply it to each row in the local rows, or create the plan >>>>> yourself for the stack of rows. >>>>> >>>>> Thanks, >>>>> >>>>> Matt >>>>> >>>>> Thanks! >>>>> >>>>> Regards, >>>>> >>>>> Roland >>>>> >>>>> >>>>> >>>>> -- >>>>> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. >>>>> -- Norbert Wiener >>>>> >>>>> https://www.cse.buffalo.edu/~knepley/ >>> >> >> >> -- >> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. >> -- Norbert Wiener >> >> https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at petsc.dev Tue Dec 8 20:45:54 2020 From: bsmith at petsc.dev (Barry Smith) Date: Tue, 8 Dec 2020 20:45:54 -0600 Subject: [petsc-users] Setting an unknown initial guess In-Reply-To: References: Message-ID: <49A1765C-8545-4CEC-A15C-0B3E4CE3335C@petsc.dev> David, Let me rephrase what you are saying so I understand it. 
You have multiple linear solves in time stepping (one linear solve per time step?) and if you call KSPSetInitialGuessNonzero(ksp,PETSC_TRUE); after the first solve then the later solves converge (using the solution to the previous solve as the initial guess for the next solve). But sometimes the final solution for a particular linear solve happens to be zero; in that case the initial guess for the next solve, of course, is also zero and that solve does not converge. Is there any way to "fix" these cases so they also get convergence? Is this a correct description of your situation?

Question: is your matrix A not square, so you are actually solving the normal equations (via LSQR)?

Now I'll try to write it down with simple formulas. F() represents whatever computation you perform to get the right hand side for the next timestep linear system.

x_0 given, A'A x_1 = A'F(), ... at some timestep A'A x_n = A'F() the solution to the normal equations x_n is zero, which presumably means A'F() is zero for that solve. You attempt to solve A'A x_(n+1) = A'F() for the next time-step using x_n = 0 as the initial guess and "the solver fails to converge".

What do you mean by "fails to converge"? Does it just say the solution is zero, does it just jump around? Does the residual just crawl really, really slowly down?

I would first proceed as Matt suggests: call KSPSetOperators(ksp,A,B) where B is obtained with MatTransposeMatMult(A,A,...) and use -pc_type lu, so now it tries to solve the normal equations with a direct solver for the preconditioner and may be much less dependent on the initial guess for the LSQR solve. If this works but lu is too expensive a preconditioner you can then experiment with other less expensive preconditioners that depend on your problem.

If all you want to do is replace the occasional zero initial guess that comes along with something else you can simply do this: continue with the flag KSPSetInitialGuessNonzero(ksp,PETSC_TRUE); and immediately before each KSPSolve() check the norm of the initial guess; if it is zero then set it to something else. What else you would set it to I don't know, just some VecSetRandom()?

I would go back to the formulation and try to understand what A'F() = 0 means and why it would happen, and also why an initial guess of zero causes slow or no convergence.

Barry

> On Dec 8, 2020, at 7:58 AM, David Schneider wrote:
>
> Dear all,
>
> I'm using a KSPLSQR solver without preconditioning and configure the solver once in the beginning. Afterwards, I solve my system multiple times in a time-dependent system and I would like to use an initial guess (from the previous solution). Currently, I use `KSPSetInitialGuessNonzero` for this purpose, but it may happen that the actual guess is zero. If the initial guess is zero, the solver fails to converge, at least with the default configuration. Setting `KSPSetInitialGuessNonzero` with `PETSC_FALSE` (which should also be the default) zeros the guess out. Is there a (native) way to preserve the initial guess, but still ensure convergence in the KSP solver in case the guess is zero?
>
> Thanks in advance,
> David
>

From e0425375 at gmail.com Wed Dec 9 09:38:18 2020
From: e0425375 at gmail.com (Florian Bruckner)
Date: Wed, 9 Dec 2020 16:38:18 +0100
Subject: [petsc-users] incredibly good performance of scipy lgmres
Message-ID:

Dear PETSc developers,
I am currently re-implementing our FEM-BEM code using Firedrake.
The original code we were using is based on FEniCS and uses scipy sparse solvers for the solution of the coupled FEM / BEM system.
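A minimal sketch of the zero-guess workaround Barry describes above, with illustrative names (the KSP object ksp, right-hand side b, and the guess/solution vector x are assumed to exist):

    PetscReal gnorm;
    /* KSPSetInitialGuessNonzero(ksp, PETSC_TRUE) is assumed set once; then per solve: */
    VecNorm(x, NORM_2, &gnorm);              /* x still holds the previous solution */
    if (gnorm == 0.0) VecSetRandom(x, NULL); /* replace an exactly-zero guess; any nonzero vector would do */
    KSPSolve(ksp, b, x);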
For some reason the scipy lgmres method seems to outperform all other methods which we tried. E.g. for the strayfield-calculation of a 10x10x10 unit cube scipy-lgmres needs 5 iterations (without preconditioner), whereas scipy-gmres needs 167. The new implementation uses petsc-gmres and petsc-lgmres, but both need around 170 iterations. If I understand lgmres correctly it only improves convergence if gmres is restarted. Since it only needs 5 iterations i think this cannot be the reason. But nevertheless since the method seems to perform very good, it would be worth looking at the differences in detail. I provide the dense data of the system-matrix and right-hand-side vector that I used, as well as scripts for the different considered methods. Any ideas how scipy-lgmres could be that good? It would be nice if someone could validate my results (lgmres solves within 5 iterations). For me the next step will be to wrap scipy-lgmres using petsc4py. I know how to do it with petsc4py directly, but I am not exactly sure how it works with the firedrake interface. best wishes Florian -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: solve_petsc_lgmres.py Type: text/x-python Size: 580 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: solve_petsc_gmres.py Type: text/x-python Size: 579 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: solve_scipy_lgmres.py Type: text/x-python Size: 409 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: solve_scipy_gmres.py Type: text/x-python Size: 366 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: data.npz Type: application/octet-stream Size: 4004966 bytes Desc: not available URL: From stefano.zampini at gmail.com Wed Dec 9 10:25:29 2020 From: stefano.zampini at gmail.com (Stefano Zampini) Date: Wed, 9 Dec 2020 19:25:29 +0300 Subject: [petsc-users] incredibly good performance of scipy lgmres In-Reply-To: References: Message-ID: Could it be that scipy lgmres is reporting the wrong number of iterations? I would try to replicate the scipy code first https://github.com/scipy/scipy/blob/master/scipy/sparse/linalg/isolve/lgmres.py Il Mer 9 Dic 2020, 19:17 Florian Bruckner ha scritto: > Dear PETSc developers, > I am currently re-implementing our FEM-BEM code using Firedrake. > The original code we were using is based on FEniCS and uses scipy sparse > solvers for the solution of the coupled FEM / BEM system. > > For some reason the scipy lgmres method seems to outperform all other > methods which we tried. E.g. for the strayfield-calculation of a 10x10x10 > unit cube scipy-lgmres needs 5 iterations (without preconditioner), whereas > scipy-gmres needs 167. The new implementation uses petsc-gmres and > petsc-lgmres, but both need around 170 iterations. > > If I understand lgmres correctly it only improves convergence if gmres is > restarted. Since it only needs 5 iterations i think this cannot be the > reason. But nevertheless since the method seems to perform very good, it > would be worth looking at the differences in detail. I provide the dense > data of the system-matrix and right-hand-side vector that I used, as well > as scripts for the different considered methods. 
> > Any ideas how scipy-lgmres could be that good? It would be nice if someone > could validate my results (lgmres solves within 5 iterations). For me the > next step will be to wrap scipy-lgmres using petsc4py. I know how to do it > with petsc4py directly, but I am not exactly sure how it works with the > firedrake interface. > > best wishes > Florian > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pierre.seize at onera.fr Wed Dec 9 10:34:15 2020 From: pierre.seize at onera.fr (Pierre Seize) Date: Wed, 9 Dec 2020 17:34:15 +0100 Subject: [petsc-users] incredibly good performance of scipy lgmres In-Reply-To: References: Message-ID: I think that `callback` is called once for each outer cycle, and the default inner number of iterations is 30, so 30 x 5 = 150 iterations, it seems more realistic. Pierre Seize On 09/12/20 17:25, Stefano Zampini wrote: > Could it be that scipy lgmres is reporting the wrong number of > iterations? > > I would try to replicate the scipy code first > https://github.com/scipy/scipy/blob/master/scipy/sparse/linalg/isolve/lgmres.py > > Il Mer 9 Dic 2020, 19:17 Florian Bruckner > ha scritto: > > Dear PETSc developers, > I am currently re-implementing our FEM-BEM code using Firedrake. > The original code we were using is based on FEniCS and uses scipy > sparse solvers for the solution of the coupled FEM / BEM system. > > For some reason the scipy lgmres method seems to outperform all > other methods which we tried. E.g. for the strayfield-calculation > of a 10x10x10 unit cube scipy-lgmres needs 5 iterations (without > preconditioner), whereas scipy-gmres needs 167. The new > implementation uses petsc-gmres and petsc-lgmres, but both need > around 170 iterations. > > If I understand lgmres correctly it only improves convergence if > gmres is restarted. Since it only needs 5 iterations i think this > cannot be the reason. But nevertheless since the method seems to > perform very good, it would be worth looking at the differences in > detail. I provide the dense data of the system-matrix and > right-hand-side vector that I used, as well as scripts for the > different considered methods. > > Any ideas how scipy-lgmres could be that good? It would be nice if > someone could validate my results (lgmres solves within 5 > iterations). For me the next step will be to wrap scipy-lgmres > using petsc4py. I know how to do it with petsc4py directly, but I > am not exactly sure how it works with the firedrake interface. > > best wishes > Florian > -------------- next part -------------- An HTML attachment was scrubbed... URL: From e0425375 at gmail.com Wed Dec 9 12:51:03 2020 From: e0425375 at gmail.com (Florian Bruckner) Date: Wed, 9 Dec 2020 19:51:03 +0100 Subject: [petsc-users] incredibly good performance of scipy lgmres In-Reply-To: References: Message-ID: Dear Pierre, Yes, you are right. I should have looked at the source-code directly. Sorry for the stupid question. Nevertheless it is quite misleading that scipy only reports the number of outer iterations. I wanted to use PETSc anyway. thanks for the fast reply and best wishes Florian On Wed, Dec 9, 2020 at 5:34 PM Pierre Seize wrote: > I think that `callback` is called once for each outer cycle, and the > default inner number of iterations is 30, so 30 x 5 = 150 iterations, it > seems more realistic. > > > Pierre Seize > > On 09/12/20 17:25, Stefano Zampini wrote: > > Could it be that scipy lgmres is reporting the wrong number of iterations? 
> > I would try to replicate the scipy code first
> > https://github.com/scipy/scipy/blob/master/scipy/sparse/linalg/isolve/lgmres.py
> >
> > On Wed, Dec 9, 2020, 19:17 Florian Bruckner wrote:
> >
>> Dear PETSc developers,
>> I am currently re-implementing our FEM-BEM code using Firedrake.
>> The original code we were using is based on FEniCS and uses scipy sparse
>> solvers for the solution of the coupled FEM / BEM system.
>>
>> For some reason the scipy lgmres method seems to outperform all other
>> methods which we tried. E.g. for the strayfield-calculation of a 10x10x10
>> unit cube scipy-lgmres needs 5 iterations (without preconditioner), whereas
>> scipy-gmres needs 167. The new implementation uses petsc-gmres and
>> petsc-lgmres, but both need around 170 iterations.
>>
>> If I understand lgmres correctly it only improves convergence if gmres is
>> restarted. Since it only needs 5 iterations I think this cannot be the
>> reason. But nevertheless, since the method seems to perform very well, it
>> would be worth looking at the differences in detail. I provide the dense
>> data of the system-matrix and right-hand-side vector that I used, as well
>> as scripts for the different considered methods.
>>
>> Any ideas how scipy-lgmres could be that good? It would be nice if
>> someone could validate my results (lgmres solves within 5 iterations). For
>> me the next step will be to wrap scipy-lgmres using petsc4py. I know how to
>> do it with petsc4py directly, but I am not exactly sure how it works with
>> the firedrake interface.
>>
>> best wishes
>> Florian
>>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From cheluo at ethz.ch Wed Dec 9 15:16:20 2020
From: cheluo at ethz.ch (Luo Chenyi)
Date: Wed, 9 Dec 2020 21:16:20 +0000
Subject: [petsc-users] PETSc with MUMPS
Message-ID:

Hi

I am installing PETSc together with MUMPS. When I run the command "make ... check",
I get the following messages "luochengyi at macbook-pro-2 petsc % make PETSC_DIR=/Users/luochengyi/Downloads/petsc PETSC_ARCH=arch-darwin-c-debug check Running check examples to verify correct installation Using PETSC_DIR=/Users/luochengyi/Downloads/petsc and PETSC_ARCH=arch-darwin-c-debug C/C++ example src/snes/tutorials/ex19 run successfully with 1 MPI process C/C++ example src/snes/tutorials/ex19 run successfully with 2 MPI processes C/C++ example src/snes/tutorials/ex19 run successfully with mumps *******************Error detected during compile or link!******************* See http://www.mcs.anl.gov/petsc/documentation/faq.html /Users/luochengyi/Downloads/petsc/src/snes/tutorials ex5f ********************************************************* mpif90 -Wl,-multiply_defined,suppress -Wl,-multiply_defined -Wl,suppress -Wl,-commons,use_dylibs -Wl,-search_paths_first -Wl,-no_compact_unwind -Wall -ffree-line-length-0 -Wno-unused-dummy-argument -g -Wall -ffree-line-length-0 -Wno-unused-dummy-argument -g -I/Users/luochengyi/Downloads/petsc/include -I/Users/luochengyi/Downloads/petsc/arch-darwin-c-debug/include -I/opt/X11/include ex5f.F90 -Wl,-rpath,/Users/luochengyi/Downloads/petsc/arch-darwin-c-debug/lib -L/Users/luochengyi/Downloads/petsc/arch-darwin-c-debug/lib -Wl,-rpath,/Users/luochengyi/Downloads/petsc/arch-darwin-c-debug/lib -L/Users/luochengyi/Downloads/petsc/arch-darwin-c-debug/lib -Wl,-rpath,/opt/X11/lib -L/opt/X11/lib -Wl,-rpath,/usr/local/Cellar/mpich/3.3.2_1/lib -L/usr/local/Cellar/mpich/3.3.2_1/lib -Wl,-rpath,/usr/local/lib -L/usr/local/lib -Wl,-rpath,/usr/local/Cellar/gcc/10.2.0/lib/gcc/10/gcc/x86_64-apple-darwin19/10.2.0 -L/usr/local/Cellar/gcc/10.2.0/lib/gcc/10/gcc/x86_64-apple-darwin19/10.2.0 -Wl,-rpath,/usr/local/Cellar/gcc/10.2.0/lib/gcc/10 -L/usr/local/Cellar/gcc/10.2.0/lib/gcc/10 -lpetsc -lcmumps -ldmumps -lsmumps -lzmumps -lmumps_common -lpord -lscalapack -llapack -lblas -lptesmumps -lptscotchparmetis -lptscotch -lptscotcherr -lesmumps -lscotch -lscotcherr -lX11 -lparmetis -lmetis -lc++ -ldl -lmpifort -lmpi -lpmpi -lgfortran -lquadmath -lm -lc++ -ldl -o ex5f ld: warning: dylib (/Users/luochengyi/Downloads/petsc/arch-darwin-c-debug/lib/libpetsc.dylib) was built for newer macOS version (11.0) than being linked (10.16) ld: warning: dylib (/Users/luochengyi/Downloads/petsc/arch-darwin-c-debug/lib/libparmetis.dylib) was built for newer macOS version (11.0) than being linked (10.16) ld: warning: dylib (/Users/luochengyi/Downloads/petsc/arch-darwin-c-debug/lib/libmetis.dylib) was built for newer macOS version (11.0) than being linked (10.16) Fortran example src/snes/tutorials/ex5f run successfully with 1 MPI process Completed test examples? Does it mean the installation of MUMPS is not complete? What I should do next? Thanks a lot! Best, Chenyi -------------- next part -------------- An HTML attachment was scrubbed... URL: From mfadams at lbl.gov Wed Dec 9 15:23:24 2020 From: mfadams at lbl.gov (Mark Adams) Date: Wed, 9 Dec 2020 16:23:24 -0500 Subject: [petsc-users] PETSc with MUMPS In-Reply-To: References: Message-ID: These just look like warnings. They all look successful. But look at the warnings. It looks like you have some old libraries that might give you hard to diagnose errors. I would rebuild with the same OS. On Wed, Dec 9, 2020 at 4:19 PM Luo Chenyi wrote: > Hi > > I am installing PETSC together with MUMPS. When I run the comman ?make ?? > checked ?. 
I get the following messages > > "luochengyi at macbook-pro-2 petsc % make > PETSC_DIR=/Users/luochengyi/Downloads/petsc PETSC_ARCH=arch-darwin-c-debug > check > Running check examples to verify correct installation > Using PETSC_DIR=/Users/luochengyi/Downloads/petsc and > PETSC_ARCH=arch-darwin-c-debug > C/C++ example src/snes/tutorials/ex19 run successfully with 1 MPI process > C/C++ example src/snes/tutorials/ex19 run successfully with 2 MPI processes > C/C++ example src/snes/tutorials/ex19 run successfully with mumps > ********************Error detected during compile or > link!******************** > *See http://www.mcs.anl.gov/petsc/documentation/faq.html > * > */Users/luochengyi/Downloads/petsc/src/snes/tutorials ex5f* > *********************************************************** > mpif90 -Wl,-multiply_defined,suppress -Wl,-multiply_defined -Wl,suppress > -Wl,-commons,use_dylibs -Wl,-search_paths_first -Wl,-no_compact_unwind > -Wall -ffree-line-length-0 -Wno-unused-dummy-argument -g -Wall > -ffree-line-length-0 -Wno-unused-dummy-argument -g > -I/Users/luochengyi/Downloads/petsc/include > -I/Users/luochengyi/Downloads/petsc/arch-darwin-c-debug/include > -I/opt/X11/include ex5f.F90 > -Wl,-rpath,/Users/luochengyi/Downloads/petsc/arch-darwin-c-debug/lib > -L/Users/luochengyi/Downloads/petsc/arch-darwin-c-debug/lib > -Wl,-rpath,/Users/luochengyi/Downloads/petsc/arch-darwin-c-debug/lib > -L/Users/luochengyi/Downloads/petsc/arch-darwin-c-debug/lib > -Wl,-rpath,/opt/X11/lib -L/opt/X11/lib > -Wl,-rpath,/usr/local/Cellar/mpich/3.3.2_1/lib > -L/usr/local/Cellar/mpich/3.3.2_1/lib -Wl,-rpath,/usr/local/lib > -L/usr/local/lib > -Wl,-rpath,/usr/local/Cellar/gcc/10.2.0/lib/gcc/10/gcc/x86_64-apple-darwin19/10.2.0 > -L/usr/local/Cellar/gcc/10.2.0/lib/gcc/10/gcc/x86_64-apple-darwin19/10.2.0 > -Wl,-rpath,/usr/local/Cellar/gcc/10.2.0/lib/gcc/10 > -L/usr/local/Cellar/gcc/10.2.0/lib/gcc/10 -lpetsc -lcmumps -ldmumps > -lsmumps -lzmumps -lmumps_common -lpord -lscalapack -llapack -lblas > -lptesmumps -lptscotchparmetis -lptscotch -lptscotcherr -lesmumps -lscotch > -lscotcherr -lX11 -lparmetis -lmetis -lc++ -ldl -lmpifort -lmpi -lpmpi > -lgfortran -lquadmath -lm -lc++ -ldl -o ex5f > ld: warning: dylib > (/Users/luochengyi/Downloads/petsc/arch-darwin-c-debug/lib/libpetsc.dylib) > was built for newer macOS version (11.0) than being linked (10.16) > ld: warning: dylib > (/Users/luochengyi/Downloads/petsc/arch-darwin-c-debug/lib/libparmetis.dylib) > was built for newer macOS version (11.0) than being linked (10.16) > ld: warning: dylib > (/Users/luochengyi/Downloads/petsc/arch-darwin-c-debug/lib/libmetis.dylib) > was built for newer macOS version (11.0) than being linked (10.16) > Fortran example src/snes/tutorials/ex5f run successfully with 1 MPI process > Completed test examples? > > Does it mean the installation of MUMPS is not complete? What I should do > next? > > Thanks a lot! > > Best, > Chenyi > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From balay at mcs.anl.gov Wed Dec 9 15:26:53 2020 From: balay at mcs.anl.gov (Satish Balay) Date: Wed, 9 Dec 2020 15:26:53 -0600 Subject: [petsc-users] PETSc with MUMPS In-Reply-To: References: Message-ID: Likely you've recently updated xcode [but have brew from prior install] Its best to reinstall brew (packages) and retry. Satish On Wed, 9 Dec 2020, Mark Adams wrote: > These just look like warnings. They all look successful. > > But look at the warnings. 
It looks like you have some old libraries that > might give you hard to diagnose errors. I would rebuild with the same OS. > > On Wed, Dec 9, 2020 at 4:19 PM Luo Chenyi wrote: > > > Hi > > > > I am installing PETSC together with MUMPS. When I run the comman ?make ?? > > checked ?. I get the following messages > > > > "luochengyi at macbook-pro-2 petsc % make > > PETSC_DIR=/Users/luochengyi/Downloads/petsc PETSC_ARCH=arch-darwin-c-debug > > check > > Running check examples to verify correct installation > > Using PETSC_DIR=/Users/luochengyi/Downloads/petsc and > > PETSC_ARCH=arch-darwin-c-debug > > C/C++ example src/snes/tutorials/ex19 run successfully with 1 MPI process > > C/C++ example src/snes/tutorials/ex19 run successfully with 2 MPI processes > > C/C++ example src/snes/tutorials/ex19 run successfully with mumps > > ********************Error detected during compile or > > link!******************** > > *See http://www.mcs.anl.gov/petsc/documentation/faq.html > > * > > */Users/luochengyi/Downloads/petsc/src/snes/tutorials ex5f* > > *********************************************************** > > mpif90 -Wl,-multiply_defined,suppress -Wl,-multiply_defined -Wl,suppress > > -Wl,-commons,use_dylibs -Wl,-search_paths_first -Wl,-no_compact_unwind > > -Wall -ffree-line-length-0 -Wno-unused-dummy-argument -g -Wall > > -ffree-line-length-0 -Wno-unused-dummy-argument -g > > -I/Users/luochengyi/Downloads/petsc/include > > -I/Users/luochengyi/Downloads/petsc/arch-darwin-c-debug/include > > -I/opt/X11/include ex5f.F90 > > -Wl,-rpath,/Users/luochengyi/Downloads/petsc/arch-darwin-c-debug/lib > > -L/Users/luochengyi/Downloads/petsc/arch-darwin-c-debug/lib > > -Wl,-rpath,/Users/luochengyi/Downloads/petsc/arch-darwin-c-debug/lib > > -L/Users/luochengyi/Downloads/petsc/arch-darwin-c-debug/lib > > -Wl,-rpath,/opt/X11/lib -L/opt/X11/lib > > -Wl,-rpath,/usr/local/Cellar/mpich/3.3.2_1/lib > > -L/usr/local/Cellar/mpich/3.3.2_1/lib -Wl,-rpath,/usr/local/lib > > -L/usr/local/lib > > -Wl,-rpath,/usr/local/Cellar/gcc/10.2.0/lib/gcc/10/gcc/x86_64-apple-darwin19/10.2.0 > > -L/usr/local/Cellar/gcc/10.2.0/lib/gcc/10/gcc/x86_64-apple-darwin19/10.2.0 > > -Wl,-rpath,/usr/local/Cellar/gcc/10.2.0/lib/gcc/10 > > -L/usr/local/Cellar/gcc/10.2.0/lib/gcc/10 -lpetsc -lcmumps -ldmumps > > -lsmumps -lzmumps -lmumps_common -lpord -lscalapack -llapack -lblas > > -lptesmumps -lptscotchparmetis -lptscotch -lptscotcherr -lesmumps -lscotch > > -lscotcherr -lX11 -lparmetis -lmetis -lc++ -ldl -lmpifort -lmpi -lpmpi > > -lgfortran -lquadmath -lm -lc++ -ldl -o ex5f > > ld: warning: dylib > > (/Users/luochengyi/Downloads/petsc/arch-darwin-c-debug/lib/libpetsc.dylib) > > was built for newer macOS version (11.0) than being linked (10.16) > > ld: warning: dylib > > (/Users/luochengyi/Downloads/petsc/arch-darwin-c-debug/lib/libparmetis.dylib) > > was built for newer macOS version (11.0) than being linked (10.16) > > ld: warning: dylib > > (/Users/luochengyi/Downloads/petsc/arch-darwin-c-debug/lib/libmetis.dylib) > > was built for newer macOS version (11.0) than being linked (10.16) > > Fortran example src/snes/tutorials/ex5f run successfully with 1 MPI process > > Completed test examples? > > > > Does it mean the installation of MUMPS is not complete? What I should do > > next? > > > > Thanks a lot! 
> > > > Best, > > Chenyi > > > > > From balay at mcs.anl.gov Wed Dec 9 15:35:03 2020 From: balay at mcs.anl.gov (Satish Balay) Date: Wed, 9 Dec 2020 15:35:03 -0600 Subject: [petsc-users] PETSc with MUMPS In-Reply-To: References: Message-ID: <38de6770-2540-e7c-969d-eb544e562c0@mcs.anl.gov> On Wed, 9 Dec 2020, Satish Balay via petsc-users wrote: > Its best to reinstall brew (packages) and retry. And here is one way to do this: 1. Make list of pkgs to reinstall brew leaves > reinstall.lst 2. delete all installed brew packages. brew cleanup brew list > delete.lst brew remove `cat delete.lst` 3. Now reinstall all required packages brew update brew install `cat reinstall.lst` Satish From cheluo at ethz.ch Wed Dec 9 15:59:24 2020 From: cheluo at ethz.ch (Luo Chenyi) Date: Wed, 9 Dec 2020 21:59:24 +0000 Subject: [petsc-users] PETSc with MUMPS In-Reply-To: <38de6770-2540-e7c-969d-eb544e562c0@mcs.anl.gov> References: <38de6770-2540-e7c-969d-eb544e562c0@mcs.anl.gov> Message-ID: Hi Satish, many thanks for this detailed cooking recipe (very practical for a beginner)! Now I receive the following message ?luochengyi at macbook-pro-2 petsc % make PETSC_DIR=/Users/luochengyi/Downloads/petsc PETSC_ARCH=arch-darwin-c-debug check Running check examples to verify correct installation Using PETSC_DIR=/Users/luochengyi/Downloads/petsc and PETSC_ARCH=arch-darwin-c-debug C/C++ example src/snes/tutorials/ex19 run successfully with 1 MPI process C/C++ example src/snes/tutorials/ex19 run successfully with 2 MPI processes C/C++ example src/snes/tutorials/ex19 run successfully with mumps Fortran example src/snes/tutorials/ex5f run successfully with 1 MPI process Completed test examples ? So I think the installation is complete. However, when I run my dealii codes, I still received the same error message ? Additional information: Your PETSc installation does not include a copy of the MUMPS package necessary for this solver. You will need to configure PETSc so that it includes MUMPS, recompile it, and then re-configure and recompile deal.II as well. ? I think I?ve already configure the PETSc including MUMPS and recompiled it. I also use cmake to link the necessary libraries. Is there anything else I need to do? Best, Chenyi On Dec 9, 2020, at 10:35 PM, Satish Balay > wrote: On Wed, 9 Dec 2020, Satish Balay via petsc-users wrote: Its best to reinstall brew (packages) and retry. And here is one way to do this: 1. Make list of pkgs to reinstall brew leaves > reinstall.lst 2. delete all installed brew packages. brew cleanup brew list > delete.lst brew remove `cat delete.lst` 3. Now reinstall all required packages brew update brew install `cat reinstall.lst` Satish -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Wed Dec 9 16:21:50 2020 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 9 Dec 2020 17:21:50 -0500 Subject: [petsc-users] PETSc with MUMPS In-Reply-To: References: <38de6770-2540-e7c-969d-eb544e562c0@mcs.anl.gov> Message-ID: On Wed, Dec 9, 2020 at 4:59 PM Luo Chenyi wrote: > Hi Satish, > > many thanks for this detailed cooking recipe (very practical for a > beginner)! 
> > Now I receive the following message > > ?luochengyi at macbook-pro-2 petsc % make > PETSC_DIR=/Users/luochengyi/Downloads/petsc PETSC_ARCH=arch-darwin-c-debug > check > Running check examples to verify correct installation > Using PETSC_DIR=/Users/luochengyi/Downloads/petsc and > PETSC_ARCH=arch-darwin-c-debug > C/C++ example src/snes/tutorials/ex19 run successfully with 1 MPI process > C/C++ example src/snes/tutorials/ex19 run successfully with 2 MPI processes > C/C++ example src/snes/tutorials/ex19 run successfully with mumps > Fortran example src/snes/tutorials/ex5f run successfully with 1 MPI process > Completed test examples > ? > You did build it with MUMPS. > So I think the installation is complete. However, when I run my dealii > codes, I still received the same error message > ? > Additional information: > Your PETSc installation does not include a copy of the MUMPS package > necessary for this solver. You will need to configure PETSc so that it > includes MUMPS, recompile it, and then re-configure and recompile deal.II > as well. > ? > Deal.II is probably looking at an old PETSc. Make sure PETSC_DIR and PETSC_ARCH are set correctly. Thanks, Matt > I think I?ve already configure the PETSc including MUMPS and recompiled it. > > I also use cmake to link the necessary libraries. Is there anything else I > need to do? > > Best, > Chenyi > > On Dec 9, 2020, at 10:35 PM, Satish Balay wrote: > > On Wed, 9 Dec 2020, Satish Balay via petsc-users wrote: > > Its best to reinstall brew (packages) and retry. > > > And here is one way to do this: > > 1. Make list of pkgs to reinstall > > brew leaves > reinstall.lst > > 2. delete all installed brew packages. > > brew cleanup > brew list > delete.lst > brew remove `cat delete.lst` > > 3. Now reinstall all required packages > brew update > brew install `cat reinstall.lst` > > > Satish > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at petsc.dev Wed Dec 9 20:34:44 2020 From: bsmith at petsc.dev (Barry Smith) Date: Wed, 9 Dec 2020 20:34:44 -0600 Subject: [petsc-users] PETSc with MUMPS In-Reply-To: References: <38de6770-2540-e7c-969d-eb544e562c0@mcs.anl.gov> Message-ID: <694EF5F3-356E-44CE-AE88-C3DEC0966740@petsc.dev> > So I think the installation is complete. However, when I run my dealii codes, I still received the same error message > ? > Additional information: > Your PETSc installation does not include a copy of the MUMPS package necessary for this solver. You will need to configure PETSc so that it includes MUMPS, recompile it, and then re-configure and recompile deal.II as well. > ? Did you totally reinstall deal.ii after you built this PETSc? That is delete all the deal.ii directories and install it again fresh? My guess is that deal.ii cmake may be using some cached information and not properly rebuilding using the latest installed PETSc. Barry > On Dec 9, 2020, at 3:59 PM, Luo Chenyi wrote: > > Hi Satish, > > many thanks for this detailed cooking recipe (very practical for a beginner)! 
From anton.glazkov at chch.ox.ac.uk  Thu Dec 10 17:19:18 2020
From: anton.glazkov at chch.ox.ac.uk (Anton Glazkov)
Date: Thu, 10 Dec 2020 23:19:18 +0000
Subject: [petsc-users] TSAdjoint multilevel checkpointing running out of memory
In-Reply-To:
References:
Message-ID:

Dear Matt and Hong,

Thank you for your quick replies!

In answer to your question Matt, the application fails in the same way as with 5 checkpoints. I don't believe the RAM capacity to be a problem though, because we are running this case on a cluster with 64GB RAM per node, and we anticipate 0.1GB storage requirements for the 4 checkpoints.
The case is being run in MPMD mode with the following command:

aprun -n 72 /work/e01/e01/chri4903/bin/cascade-ng/checkpoints_gradients ../data/nl_adj_0-chkpts.ini -adjoint -vr "/?/?/?/?/sims/aaX-testcases/10-RotorStator/test/data/ic_0_chkpts.h5:/0000000000/field" -vTarg "/?/?/?/?/sims/aaX-testcases/10-RotorStator/test/data/targ_0_chkpts.h5:/targ/field" -vMet "/?/?/?/?/sims/aaX-testcases/10-RotorStator/test/data/targ_0_chkpts.h5:/metric_diag/field" -ts_trajectory_dirname ./test_directory_0 -ts_trajectory_type memory -ts_trajectory_max_cps_ram 4 -ts_trajectory_max_cps_disk 5000 -ts_trajectory_monitor : -n 80 /?/?/?/?/bin/cascade-ng/checkpoints_gradients ../data/nl_adj_1-chkpts.ini -adjoint -vr "/?/?/?/?/sims/aaX-testcases/10-RotorStator/test/data/ic_1_chkpts.h5:/0000000000/field" -vTarg "/?/?/?/?/sims/aaX-testcases/10-RotorStator/test/data/targ_1_chkpts.h5:/targ/field" -vMet "/?/?/?/?/sims/aaX-testcases/10-RotorStator/test/data/targ_1_chkpts.h5:/metric_diag/field" -ts_trajectory_dirname ./test_directory_1 -ts_trajectory_type memory -ts_trajectory_max_cps_ram 4 -ts_trajectory_max_cps_disk 5000 -ts_trajectory_monitor > log.txt 2> error.txt

I have attached the log.txt and error.txt to this email so that you can have a look at these. It seems to look ok until the OOM killer kills the job.

Best wishes,
Anton

From: Matthew Knepley
Date: Wednesday, 9 December 2020 at 01:38
To: Zhang, Hong
Cc: Anton Glazkov, petsc-users at mcs.anl.gov
Subject: Re: [petsc-users] TSAdjoint multilevel checkpointing running out of memory

On Tue, Dec 8, 2020 at 6:47 PM Zhang, Hong via petsc-users wrote:

> Anton,
>
> TSAdjoint should manage checkpointing automatically, and the number of checkpoints in RAM and disk should not exceed the user-specified values. Can you send us the output for -ts_trajectory_monitor in your case?

One other thing. It is always possible to miscalculate RAM a little. If you set it to 4 checkpoints, does it complete?

  Thanks,

     Matt

> Hong (Mr.)
>
> On Dec 8, 2020, at 3:37 PM, Anton Glazkov wrote:
>
> Good evening,
>
> I'm attempting to run a multi-level checkpointing code on a cluster (ie RAM+disk storage, with --download-revolve as a configure option) with the options "-ts_trajectory_type memory -ts_trajectory_max_cps_ram 5 -ts_trajectory_max_cps_disk 5000", for example. My question is, if I have 100,000 time points, for example, that need to be evaluated during the forward and adjoint run, does TSAdjoint automatically optimize the checkpointing so that the number of checkpoints in RAM and on disk does not exceed these values, or is one of the options ignored? I ask because I have a case that runs correctly with -ts_trajectory_type basic, but runs out of memory when attempting to fill the checkpoints in RAM when running the adjoint (I have verified that 5 checkpoints will actually fit into the available memory). This makes me think that maybe the -ts_trajectory_max_cps_ram 5 option is being ignored?
>
> Best wishes,
> Anton

-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: error.txt
-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: log.txt
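For readers following along: the trajectory options on Anton's command line can also be set programmatically. A minimal sketch, assuming an already-configured TS named ts; the helper name SetupCheckpointing is illustrative, and since I am not aware of dedicated C setters for the checkpoint counts in this version, those are set through the options database:

#include <petscts.h>

PetscErrorCode SetupCheckpointing(TS ts)
{
  TSTrajectory   tj;
  PetscErrorCode ierr;

  ierr = TSSetSaveTrajectory(ts);CHKERRQ(ierr);                       /* must come before TSSolve() so TSAdjointSolve() can replay */
  ierr = TSGetTrajectory(ts,&tj);CHKERRQ(ierr);
  ierr = TSTrajectorySetType(tj,ts,TSTRAJECTORYMEMORY);CHKERRQ(ierr); /* same as -ts_trajectory_type memory */
  ierr = PetscOptionsSetValue(NULL,"-ts_trajectory_max_cps_ram","4");CHKERRQ(ierr);
  ierr = PetscOptionsSetValue(NULL,"-ts_trajectory_max_cps_disk","5000");CHKERRQ(ierr);
  ierr = TSTrajectorySetFromOptions(tj,ts);CHKERRQ(ierr);             /* picks up the two limits above */
  return 0;
}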
From rlmackie862 at gmail.com  Thu Dec 10 17:31:39 2020
From: rlmackie862 at gmail.com (Randall Mackie)
Date: Thu, 10 Dec 2020 15:31:39 -0800
Subject: [petsc-users] Is there a way to estimate total memory prior to solve
Message-ID: <6F8F0829-9AB1-4C66-B379-2A661E8FCB0D@gmail.com>

Dear PETSc users:

While I can calculate the amount of memory for any vector arrays I allocate inside my code (and probably get pretty close for any matrices), what I don't know how to estimate is how much memory the internal PETSc iterative solvers will take.

Is there some way to get a reasonable estimate (in advance) of how much memory a PETSc solve will take, given the size of the matrix and right hand side? For example, if these solves always use BCGS and ASM preconditioning with sub-type ILU and 3 levels.

This is for runs on a PC, so that too large a run won't crash the PC.

Thanks, Randy M

From bsmith at petsc.dev  Thu Dec 10 19:17:12 2020
From: bsmith at petsc.dev (Barry Smith)
Date: Thu, 10 Dec 2020 19:17:12 -0600
Subject: [petsc-users] Is there a way to estimate total memory prior to solve
In-Reply-To: <6F8F0829-9AB1-4C66-B379-2A661E8FCB0D@gmail.com>
References: <6F8F0829-9AB1-4C66-B379-2A661E8FCB0D@gmail.com>
Message-ID: <1097CEDC-B320-4D2A-9C78-698C4EC0C108@petsc.dev>

Randy,

This is a great question; I have made an issue based on it: https://gitlab.com/petsc/petsc/-/issues/799

It is difficult, but not impossible, to get some bounds on the memory required, but it would need to be done manually for each preconditioner and combination of preconditioners, based on the algorithm, and then coded up. The bounds would be imprecise but still useful. More complicated situations, such as PCFIELDSPLIT with complicated preconditioners inside it, might require some partial computations to get the bounds.

"BCGS and ASM preconditioning with sub type ILU and 3 levels" is a relatively easy case compared to others, but still not trivial. BCGS has a fixed number of work vectors. With ASM you would need to determine the number of dof for the overlapped subdomains, and then ILU has its own uncertainty in how many values you get in the fill; you have to partially construct the preconditioner to get that number.

Perhaps an alternative would be to begin to construct the preconditioner with a fixed size limit; if the code realizes it will take too much memory, it could back off, clean up the memory, and report that it would require more memory than is available (a PCFailedReason such as WILL_RUN_OUT_OF_MEMORY), and then the code could try something else. But that is still a good amount of coding.

  Barry

> On Dec 10, 2020, at 5:31 PM, Randall Mackie wrote:
>
> Is there some way to get a reasonable estimate (in advance) of how much memory a PETSc solve will take, given the size of the matrix and right hand side?
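Until something along the lines of that PCFailedReason exists, one practical workaround is to measure instead of predict: set up the solver on a smaller but representative problem and record how much memory the setup added. A rough sketch using existing PETSc calls (ksp is assumed to be fully configured with the BCGS/ASM/ILU options; the helper name is illustrative, and resident-set numbers are approximate by nature):

#include <petscksp.h>

PetscErrorCode ReportSetupMemory(KSP ksp)
{
  PetscLogDouble before,after;
  PetscErrorCode ierr;

  ierr = PetscMemoryGetCurrentUsage(&before);CHKERRQ(ierr);
  ierr = KSPSetUp(ksp);CHKERRQ(ierr);   /* allocates the BCGS work vectors, ASM subdomains, and ILU factors */
  ierr = PetscMemoryGetCurrentUsage(&after);CHKERRQ(ierr);
  ierr = PetscPrintf(PETSC_COMM_WORLD,"solver setup added roughly %g bytes per process\n",(double)(after-before));CHKERRQ(ierr);
  return 0;
}

Running with -memory_view also prints a related per-process summary at PetscFinalize().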
From bsmith at petsc.dev  Thu Dec 10 21:20:15 2020
From: bsmith at petsc.dev (Barry Smith)
Date: Thu, 10 Dec 2020 21:20:15 -0600
Subject: [petsc-users] TSAdjoint multilevel checkpointing running out of memory
In-Reply-To:
References:
Message-ID: <5D3B86CD-4E1C-48FF-A62F-8824FE73F73C@petsc.dev>

Anton,

You can try running a smaller problem, one that does not run out of memory, with -log_view -log_view_memory. The right-most columns should show how much memory is being used and added for each event in the computation, which might help you track down where the memory is being gobbled up. Running with -info may also provide additional information about the current status during the run.

In addition, valgrind has options for tracking memory usage, which can show where there are big jumps in memory usage and could tell you when it jumps dramatically.

We are eager to ensure that TSAdjoint does not use excess memory, so we are happy to work with you to figure out where the excessive memory is being used. As always, using the latest version of PETSc is best for tracking down the problems that arise.

  Barry

[NID 01928] 2020-12-10 21:58:46 Apid 47584119: initiated application termination
[NID 01928] 2020-12-10 21:58:47 Application 47584119 exit signals: Killed
Application 47584119 resources: utime ~0s, stime ~25s, Rss ~11012, inblocks ~46654, outblocks ~154922

> On Dec 10, 2020, at 5:19 PM, Anton Glazkov wrote:
>
> Dear Matt and Hong,
>
> Thank you for your quick replies!
> In answer to your question Matt, the application fails in the same way as with 5 checkpoints. I don't believe the RAM capacity to be a problem though, because we are running this case on a cluster with 64GB RAM per node, and we anticipate 0.1GB storage requirements for the 4 checkpoints.
>
> I have attached the log.txt and error.txt to this email so that you can have a look at these. It seems to look ok until the OOM killer kills the job.
> Best wishes,
> Anton

From nathan.wukie at us.af.mil  Fri Dec 11 08:49:57 2020
From: nathan.wukie at us.af.mil (WUKIE, NATHAN A DR-02 USAF AFMC AFRL/RQVC)
Date: Fri, 11 Dec 2020 14:49:57 +0000
Subject: [petsc-users] Usage of PETSC_NULL_INTEGER for PCASMGetSubKSP via Fortran interface
Message-ID:

Hello,

It looks like there has been some recent reorganization of the Fortran interfaces. One item that has cropped up is that the interface for the procedure PCASMGetSubKSP has been moved to a module via src/ksp/f90-mod/petscksp.h90.

The question I have arises when trying to pass PETSC_NULL_INTEGER, which seems to be an array, but the module interface to PCASMGetSubKSP now checks type rank consistency. The PCASMGetSubKSP PetscInt arguments expect scalar values, so just passing PETSC_NULL_INTEGER generates a compile-time error due to a Scalar - Rank(1) mismatch.

It seems one could pass PETSC_NULL_INTEGER(0) or PETSC_NULL_INTEGER(1), but I haven't found any documentation about how that's defined or whether this is even the correct approach to resolve the issue. Could someone provide some insight or advice on the correct way forward?

Thanks,
Nathan

petsc version: v3.14.2
From nicolas.barral at math.u-bordeaux.fr  Fri Dec 11 11:02:48 2020
From: nicolas.barral at math.u-bordeaux.fr (Nicolas Barral)
Date: Fri, 11 Dec 2020 18:02:48 +0100
Subject: [petsc-users] PETSCFE_CLASSID/PETSCVF_CLASSID
Message-ID: <121f20bf-9ca0-0657-d856-82a8a6b98899@math.u-bordeaux.fr>

Hi all (and probably more specifically Matt?)

I am trying to understand how the class IDs of a DM field are set, and can't find it in the documentation.

A little background: I am mimicking SNES/utils/dmadapt.c/DMAdaptorAdapt_Sequence_Private for a specific case (I'm trying to build the same kind of metric from a single sensor field, without all the SNES layer).

I need to compute the gradient of the sensor field using DMPlexComputeGradientClementInterpolant, for which I create a DM, to which I associate a PetscFE and a DS, like in the existing code:

PetscFE feGrad;
PetscDS probGrad;

ierr = PetscFECreateDefault(PetscObjectComm((PetscObject) dmGrad), dim, coordDim, PETSC_TRUE, NULL, -1, &feGrad);CHKERRQ(ierr);
ierr = PetscDSSetDiscretization(probGrad, f, (PetscObject) feGrad);CHKERRQ(ierr);
ierr = PetscFEDestroy(&feGrad);CHKERRQ(ierr);

Yet, when I call DMPlexComputeGradientClementInterpolant, I get the following error:
[0]PETSC ERROR: Unknown discretization type for field 0

I don't fully understand what all these objects are (FE, DS and Field) and how they are related; where would that be documented? And what else do I need to do to make my example work?

Thanks

-- 
Nicolas

From jacob.fai at gmail.com  Fri Dec 11 11:17:32 2020
From: jacob.fai at gmail.com (Jacob Faibussowitsch)
Date: Fri, 11 Dec 2020 11:17:32 -0600
Subject: [petsc-users] PETSCFE_CLASSID/PETSCVF_CLASSID
In-Reply-To: <121f20bf-9ca0-0657-d856-82a8a6b98899@math.u-bordeaux.fr>
References: <121f20bf-9ca0-0657-d856-82a8a6b98899@math.u-bordeaux.fr>
Message-ID:

Nicolas,

> I am trying to understand how the class IDs of a DM field are set

Class IDs are unique (internal) identifiers that every object of a PETSc-created class shares, used to identify such objects. This is especially useful when these objects are passed around semi-opaquely by casting to PetscObject. Think of it as similar to C++ typeid: https://en.cppreference.com/w/cpp/language/typeid

> ierr = PetscFECreateDefault(PetscObjectComm((PetscObject) dmGrad), dim, coordDim, PETSC_TRUE, NULL, -1, &feGrad);CHKERRQ(ierr);
> ierr = PetscDSSetDiscretization(probGrad, f, (PetscObject) feGrad);CHKERRQ(ierr);
> ierr = PetscFEDestroy(&feGrad);CHKERRQ(ierr);
>
> Yet, when I call DMPlexComputeGradientClementInterpolant, I get the following error:
> [0]PETSC ERROR: Unknown discretization type for field 0

This function walks through all of the fields you have added to the DM and performs a sanity check (using the class IDs) to determine whether they are all the correct objects. It seems like you've missed a step here. Have you called DMAddField()/DMSetField() to associate your PetscFE with your plex? Note that

> ierr = PetscFECreateDefault(PetscObjectComm((PetscObject) dmGrad), dim, coordDim, PETSC_TRUE, NULL, -1, &feGrad);CHKERRQ(ierr);

does not directly tie the feGrad to your dmGrad; it just gives it the same MPI_Comm.

Best regards,

Jacob Faibussowitsch
(Jacob Fai - booss - oh - vitch)
Cell: (312) 694-3391
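To illustrate Jacob's point, the sanity check inside DMPlexComputeGradientClementInterpolant amounts to a class-ID comparison of this flavor; a small illustrative sketch, not the actual PETSc source (obj stands for whatever object is attached as the discretization of field 0):

#include <petscfe.h>
#include <petscfv.h>

PetscClassId   id;
PetscErrorCode ierr;

ierr = PetscObjectGetClassId(obj, &id);CHKERRQ(ierr);
if (id == PETSCFE_CLASSID) {
  /* field 0 carries a PetscFE discretization */
} else if (id == PETSCFV_CLASSID) {
  /* field 0 carries a PetscFV discretization */
} else {
  /* nothing recognized is attached: this is the
     "Unknown discretization type for field 0" case */
}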
From knepley at gmail.com  Fri Dec 11 11:39:45 2020
From: knepley at gmail.com (Matthew Knepley)
Date: Fri, 11 Dec 2020 12:39:45 -0500
Subject: [petsc-users] PETSCFE_CLASSID/PETSCVF_CLASSID
In-Reply-To: <121f20bf-9ca0-0657-d856-82a8a6b98899@math.u-bordeaux.fr>
References: <121f20bf-9ca0-0657-d856-82a8a6b98899@math.u-bordeaux.fr>
Message-ID:

On Fri, Dec 11, 2020 at 12:02 PM Nicolas Barral wrote:

> I need to compute the gradient of the sensor field using DMPlexComputeGradientClementInterpolant, for which I create a DM, to which I associate a PetscFE and a DS, like in the existing code:
>
> ierr = PetscFECreateDefault(PetscObjectComm((PetscObject) dmGrad), dim, coordDim, PETSC_TRUE, NULL, -1, &feGrad);CHKERRQ(ierr);
> ierr = PetscDSSetDiscretization(probGrad, f, (PetscObject) feGrad);CHKERRQ(ierr);

Jacob is correct, so let me give the history. Originally, you were to call PetscDSSetDiscretization() as you have done. However, it is now possible to have different discretizations within the same domain, so now we want you to call DMAddField(dm, feGrad) and then DMCreateDS(), which will call PetscDSSetDiscretization() for you. I changed the examples, but I did not have another place to document this.

  Thanks,

     Matt

-- 
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/
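In code, Matt's suggested sequence could look like the sketch below; the names dmGrad, dim, and coordDim are carried over from Nicolas's snippet, and whether PETSC_TRUE (a simplex mesh) is appropriate depends on the mesh:

#include <petscdmplex.h>
#include <petscfe.h>

PetscFE        feGrad;
PetscErrorCode ierr;

ierr = PetscFECreateDefault(PetscObjectComm((PetscObject)dmGrad), dim, coordDim,
                            PETSC_TRUE, NULL, -1, &feGrad);CHKERRQ(ierr);
ierr = DMAddField(dmGrad, NULL, (PetscObject)feGrad);CHKERRQ(ierr); /* attach the discretization to the DM */
ierr = PetscFEDestroy(&feGrad);CHKERRQ(ierr);                       /* the DM keeps its own reference */
ierr = DMCreateDS(dmGrad);CHKERRQ(ierr);                            /* builds the PetscDS and sets the discretization */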
From mfadams at lbl.gov  Fri Dec 11 12:59:04 2020
From: mfadams at lbl.gov (Mark Adams)
Date: Fri, 11 Dec 2020 13:59:04 -0500
Subject: [petsc-users] Usage of PETSC_NULL_INTEGER for PCASMGetSubKSP via Fortran interface
In-Reply-To:
References:
Message-ID:

Integers in the C interface are now PETSC_DEFAULT_INTEGER in the Fortran interface. Types were not checked in 3.12. This seems to have started in 3.13.

On Fri, Dec 11, 2020 at 9:50 AM WUKIE, NATHAN A DR-02 USAF AFMC AFRL/RQVC via petsc-users wrote:

> It seems one could pass PETSC_NULL_INTEGER(0) or PETSC_NULL_INTEGER(1), but I haven't found any documentation about how that's defined or whether this is even the correct approach to resolve the issue. Could someone provide some insight or advice on the correct way forward?

From bsmith at petsc.dev  Fri Dec 11 13:55:02 2020
From: bsmith at petsc.dev (Barry Smith)
Date: Fri, 11 Dec 2020 13:55:02 -0600
Subject: [petsc-users] Usage of PETSC_NULL_INTEGER for PCASMGetSubKSP via Fortran interface
In-Reply-To:
References:
Message-ID:

Nathan,

This is an oversight on our part. We need to provide each possibility for the function arguments, as scalars or arrays; some of them are missing in the interface definition.

git checkout barry/2020-12-11/fix-fortran-pcasmgetsubksp/release

will get you the fix.

  Barry

The MR for getting the fix into release is https://gitlab.com/petsc/petsc/-/merge_requests/3475
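For anyone curious what such a fix amounts to: the manual stubs for these functions live under ftn-custom and use a helper macro to turn PETSC_NULL_INTEGER into a C NULL. The following is an illustrative sketch of that pattern only, not the actual stub from Barry's branch (a real stub also copies out the whole array of sub-KSPs rather than just the first):

#include <petsc/private/fortranimpl.h>
#include <petscksp.h>

PETSC_EXTERN void pcasmgetsubksp_(PC *pc, PetscInt *n_local, PetscInt *first_local,
                                  KSP *ksp, PetscErrorCode *ierr)
{
  KSP      *tksp;
  PetscInt  nloc;

  CHKFORTRANNULLINTEGER(n_local);     /* PETSC_NULL_INTEGER arrives here and becomes NULL */
  CHKFORTRANNULLINTEGER(first_local);
  *ierr = PCASMGetSubKSP(*pc, &nloc, first_local, &tksp);if (*ierr) return;
  if (n_local) *n_local = nloc;
  *ksp = tksp[0];                     /* sketch only: return the first local sub-KSP */
}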
From nicolas.barral at math.u-bordeaux.fr  Sat Dec 12 05:07:35 2020
From: nicolas.barral at math.u-bordeaux.fr (Nicolas Barral)
Date: Sat, 12 Dec 2020 12:07:35 +0100
Subject: [petsc-users] PETSCFE_CLASSID/PETSCVF_CLASSID
In-Reply-To:
References: <121f20bf-9ca0-0657-d856-82a8a6b98899@math.u-bordeaux.fr>
Message-ID: <840becf2-e425-15c9-1daa-44908e754bc6@math.u-bordeaux.fr>

Thanks Matt and Jacob,

There's still something not working yet, but I'm trying to build an MFE before asking.

Matt, what you say is consistent with plex/tutorials/ex8.c, but not with DMAdaptorAdapt_Sequence_Private. Does that mean that the latter is broken?

Thanks

-- 
Nicolas

On 11/12/2020 18:39, Matthew Knepley wrote:
> Jacob is correct, so let me give the history. Originally, you were to call PetscDSSetDiscretization() as you have done. However, it is now possible to have different discretizations within the same domain, so now we want you to call DMAddField(dm, feGrad) and then DMCreateDS(), which will call PetscDSSetDiscretization() for you. I changed the examples, but I did not have another place to document this.

From thibault.bridelbertomeu at gmail.com  Sat Dec 12 09:30:01 2020
From: thibault.bridelbertomeu at gmail.com (Thibault Bridel-Bertomeu)
Date: Sat, 12 Dec 2020 16:30:01 +0100
Subject: [petsc-users] TS tutorial ex11 in Fortran
Message-ID:

Dear all,

Is there somewhere a version of the TS tutorial ex11.c in Fortran?

I am looking into building in F90 (let's say that it is an unavoidable constraint) an unstructured 3D solver of the Euler equations using the "new" features of PETSc - mostly DMPlex & PetscFV - but I think there are some interfaces missing and I find it hard to find workarounds in Fortran. I would be grateful if anyone could please give me some pointers ...

Thank you very much in advance,

Thibault Bridel-Bertomeu
Eng, MSc, PhD
Research Engineer
CEA/CESTA
33114 LE BARP
Tel.: (+33)557046924
Mob.: (+33)611025322
Mail: thibault.bridelbertomeu at gmail.com

From jed at jedbrown.org  Sat Dec 12 10:47:01 2020
From: jed at jedbrown.org (Jed Brown)
Date: Sat, 12 Dec 2020 09:47:01 -0700
Subject: [petsc-users] TS tutorial ex11 in Fortran
In-Reply-To:
References:
Message-ID: <877dpnynsq.fsf@jedbrown.org>

I'm not aware of an analogue written in Fortran, but we'd be happy to accept a pull request that ports ex11.c (or a subset thereof) to Fortran.

Thibault Bridel-Bertomeu writes:

> Is there somewhere a version of the TS tutorial ex11.c in Fortran?

From bsmith at petsc.dev  Sat Dec 12 14:48:49 2020
From: bsmith at petsc.dev (Barry Smith)
Date: Sat, 12 Dec 2020 14:48:49 -0600
Subject: [petsc-users] TS tutorial ex11 in Fortran
In-Reply-To:
References:
Message-ID: <1EC72DF8-61BD-482B-B1B0-9427AB1ADF74@petsc.dev>

PETSc Fortran interfaces are a combination of automatically generated and manually generated stubs.

For any C PETSc function, if the manual page begins with /*@ the Fortran interface is generated automatically (make allfortranstubs). If it begins with /*@C, then the Fortran interface is either done manually or is missing.

C functions that have character string arguments or function arguments (or a few other special cases) need to be provided manually. The automatically generated stubs go in the directory ftn-auto, while manually generated ones go in the directory ftn-custom.

Perhaps you could first generate a list of "missing" Fortran stubs and then, for each stub, determine why it is missing and whether it can be provided. Some are likely easy to provide, but a few (involving function arguments) will be more involved. Once you have all the stubs available, translating ex11.c becomes straightforward.

  Barry

> On Dec 12, 2020, at 9:30 AM, Thibault Bridel-Bertomeu wrote:
>
> Is there somewhere a version of the TS tutorial ex11.c in Fortran?
From thibault.bridelbertomeu at gmail.com  Sat Dec 12 14:59:19 2020
From: thibault.bridelbertomeu at gmail.com (Thibault Bridel-Bertomeu)
Date: Sat, 12 Dec 2020 21:59:19 +0100
Subject: [petsc-users] TS tutorial ex11 in Fortran
In-Reply-To: <1EC72DF8-61BD-482B-B1B0-9427AB1ADF74@petsc.dev>
References: <1EC72DF8-61BD-482B-B1B0-9427AB1ADF74@petsc.dev>
Message-ID:

Dear Jed, dear Barry,

Thank you for the fast answers!

If I have any success I will make sure to make a pull request to provide some version of ex11 in Fortran.

Regarding the stubs, I admit I started looking in that direction to add the missing wrappers, but I am not sure I fully understand the process yet. For each C function, I have to provide a Fortran interface in a .h90 file as well as a C function that has a Fortran-like prototype and calls the C function - right?

However, there are a few things I could not find / understand yet. For instance, it appears that the Fortran-like-prototype wrappers of C functions with character string arguments take an extra argument, namely the length of the string. Is that passed automatically? I couldn't find where it could come from ...

Another thing is functions like PetscFVView. I guess the wrapping is less straightforward, because I tried a quick something and it segfaulted. I couldn't find the wrapper for DMView although there is such a routine in Fortran too. Could you please detail how to wrap such functions?

Thank you very much again,

Thibault Bridel-Bertomeu

On Sat, Dec 12, 2020 at 9:48 PM, Barry Smith wrote:

> Perhaps you could first generate a list of "missing" Fortran stubs and then, for each stub, determine why it is missing and whether it can be provided.
From bsmith at petsc.dev  Sat Dec 12 16:28:12 2020
From: bsmith at petsc.dev (Barry Smith)
Date: Sat, 12 Dec 2020 16:28:12 -0600
Subject: [petsc-users] TS tutorial ex11 in Fortran
In-Reply-To:
References: <1EC72DF8-61BD-482B-B1B0-9427AB1ADF74@petsc.dev>
Message-ID: <048DC071-33E5-4644-B1C8-DD598248923A@petsc.dev>

> On Dec 12, 2020, at 2:59 PM, Thibault Bridel-Bertomeu wrote:
>
> For each C function, I have to provide a Fortran interface in a .h90 file as well as a C function that has a Fortran-like prototype and calls the C function - right?

Yes.

> For instance, it appears that the Fortran-like-prototype wrappers of C functions with character string arguments take an extra argument, namely the length of the string. Is that passed automatically? I couldn't find where it could come from ...

This secret argument is put in automatically by the Fortran compiler.

> Another thing is functions like PetscFVView. I guess the wrapping is less straightforward, because I tried a quick something and it segfaulted. I couldn't find the wrapper for DMView although there is such a routine in Fortran too. Could you please detail how to wrap such functions?

PETSC_EXTERN void dmview_(DM *da,PetscViewer *vin,PetscErrorCode *ierr)
{
  PetscViewer v;
  PetscPatchDefaultViewers_Fortran(vin,v);
  *ierr = DMView(*da,v);
}

dm/interface/ftn-custom/zdmf.c
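To tie the two answers together, here is a hedged sketch of what a manual stub for a function with a character-string argument could look like. PetscFVSetComponentName is used only as a plausible example (Thibault mentions wrapping it in the next message), and the helper macros come from petsc/private/fortranimpl.h:

#include <petsc/private/fortranimpl.h>
#include <petscfv.h>

/* 'len' is the hidden length argument the Fortran compiler appends
   for the CHARACTER dummy argument */
PETSC_EXTERN void petscfvsetcomponentname_(PetscFV *fv, PetscInt *comp, char *name,
                                           PetscErrorCode *ierr, PETSC_FORTRAN_CHARLEN_T len)
{
  char *c;

  FIXCHAR(name, len, c);  /* copy the blank-padded Fortran string and null-terminate it */
  *ierr = PetscFVSetComponentName(*fv, *comp, c);if (*ierr) return;
  FREECHAR(name, c);      /* free the temporary copy */
}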
From thibault.bridelbertomeu at gmail.com  Sun Dec 13 04:30:57 2020
From: thibault.bridelbertomeu at gmail.com (Thibault Bridel-Bertomeu)
Date: Sun, 13 Dec 2020 11:30:57 +0100
Subject: [petsc-users] TS tutorial ex11 in Fortran
In-Reply-To: <048DC071-33E5-4644-B1C8-DD598248923A@petsc.dev>
References: <1EC72DF8-61BD-482B-B1B0-9427AB1ADF74@petsc.dev> <048DC071-33E5-4644-B1C8-DD598248923A@petsc.dev>
Message-ID:

Good morning all,

Thank you Barry for your answer.

I started adding some interfaces (PetscFVSetComponentName, PetscFVView & PetscFVSetType so far) and I have to say I think it is working quite well, but the prototypes of those functions are still quite "simple". I am stuck on how to implement the wrappers for PetscDSSetRiemannSolver and PetscDSSetContext though, especially on how to pass a function as an argument to PetscDSSetRiemannSolver ... Are there any similar functions that may already have their wrappers?

Thank you very much,

Thibault

On Sat, Dec 12, 2020 at 11:28 PM, Barry Smith wrote:

> This secret argument is put in automatically by the Fortran compiler.
From mfadams at lbl.gov  Sun Dec 13 08:17:05 2020
From: mfadams at lbl.gov (Mark Adams)
Date: Sun, 13 Dec 2020 09:17:05 -0500
Subject: [petsc-users] TS tutorial ex11 in Fortran
In-Reply-To:
References: <1EC72DF8-61BD-482B-B1B0-9427AB1ADF74@petsc.dev> <048DC071-33E5-4644-B1C8-DD598248923A@petsc.dev>
Message-ID:

I don't think function pointers to PETSc (DS and DM) methods are going to work in Fortran:

ierr = PetscDSAddBoundary(prob, DM_BC_NATURAL_RIEMANN, "inflow", "Face Sets", 0, 0, NULL, (void (*)(void)) PhysicsBoundary_Advect_Inflow, NULL, ALEN(inflowids), inflowids, phys);CHKERRQ(ierr);

You could write a funcs.c file that you call from your Fortran code, like

call setBC1(prob,...,ierr)

and put PhysicsBoundary_Advect_Inflow and setBC1 in funcs.c, for instance.

On Sun, Dec 13, 2020 at 5:32 AM Thibault Bridel-Bertomeu wrote:

> I am stuck on how to implement the wrappers for PetscDSSetRiemannSolver and PetscDSSetContext though, especially on how to pass a function as an argument to PetscDSSetRiemannSolver ... Are there any similar functions that may already have their wrappers?
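A sketch of what Mark's funcs.c could contain; the PetscDSAddBoundary argument list mirrors the ex11 call he quotes, while the function body, the label value 100, and the Fortran-visible name setbc1 are purely illustrative, and the callback signature is the one the ex11-era DM_BC_NATURAL_RIEMANN callbacks use (check it against your PETSc version):

/* funcs.c -- keeps the function pointer entirely on the C side */
#include <petscds.h>

static PetscErrorCode PhysicsBoundary_Advect_Inflow(PetscReal time, const PetscReal *c,
                                                    const PetscReal *n, const PetscScalar *xI,
                                                    PetscScalar *xG, void *ctx)
{
  xG[0] = 1.0;  /* illustrative inflow ghost state */
  return 0;
}

/* called from Fortran as: call setbc1(prob, ierr) */
PETSC_EXTERN void setbc1_(PetscDS *prob, PetscErrorCode *ierr)
{
  const PetscInt inflowids[] = {100};  /* illustrative "Face Sets" label value */

  *ierr = PetscDSAddBoundary(*prob, DM_BC_NATURAL_RIEMANN, "inflow", "Face Sets", 0, 0, NULL,
                             (void (*)(void))PhysicsBoundary_Advect_Inflow, NULL,
                             1, inflowids, NULL);
}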
From thibault.bridelbertomeu at gmail.com  Sun Dec 13 08:28:46 2020
From: thibault.bridelbertomeu at gmail.com (Thibault Bridel-Bertomeu)
Date: Sun, 13 Dec 2020 15:28:46 +0100
Subject: [petsc-users] TS tutorial ex11 in Fortran
In-Reply-To:
References: <1EC72DF8-61BD-482B-B1B0-9427AB1ADF74@petsc.dev> <048DC071-33E5-4644-B1C8-DD598248923A@petsc.dev>
Message-ID:

Thank you Mark for your answer.

I am not sure what you think could be in the setBC1 routine? How would it make the connection with the PetscDS?

On the other hand, I eventually found that TSMonitorSet has a Fortran wrapper, and it does take two function pointers as arguments, so I guess it is possible? Although I am not sure exactly how to play with the PetscObjectSetFortranCallback & PetscObjectUseFortranCallback macros - could anybody advise please?

Thank you!

Thibault

On Sun, Dec 13, 2020 at 3:17 PM, Mark Adams wrote:

> You could write a funcs.c file that you call from your Fortran code, like
>
> call setBC1(prob,...,ierr)
>
> and put PhysicsBoundary_Advect_Inflow and setBC1 in funcs.c, for instance.
From knepley at gmail.com  Sun Dec 13 08:29:37 2020
From: knepley at gmail.com (Matthew Knepley)
Date: Sun, 13 Dec 2020 09:29:37 -0500
Subject: [petsc-users] TS tutorial ex11 in Fortran
In-Reply-To:
References: <1EC72DF8-61BD-482B-B1B0-9427AB1ADF74@petsc.dev> <048DC071-33E5-4644-B1C8-DD598248923A@petsc.dev>
Message-ID:

On Sun, Dec 13, 2020 at 9:17 AM Mark Adams wrote:

> I don't think function pointers to PETSc (DS and DM) methods are going to work in Fortran:
>
> ierr = PetscDSAddBoundary(prob, DM_BC_NATURAL_RIEMANN, "inflow", "Face Sets", 0, 0, NULL, (void (*)(void)) PhysicsBoundary_Advect_Inflow, NULL, ALEN(inflowids), inflowids, phys);CHKERRQ(ierr);

Wrappers for functions that take function arguments are possible, but more involved.
We have to create an internal struct that holds Fortran function pointers, and then call them with the right arguments. This has been done for SNESSetFunction(), for instance, so we would start emulating that.

  Thanks,

     Matt
-- 
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From thibault.bridelbertomeu at gmail.com  Sun Dec 13 08:34:06 2020
From: thibault.bridelbertomeu at gmail.com (Thibault Bridel-Bertomeu)
Date: Sun, 13 Dec 2020 15:34:06 +0100
Subject: [petsc-users] TS tutorial ex11 in Fortran
In-Reply-To: 
References: <1EC72DF8-61BD-482B-B1B0-9427AB1ADF74@petsc.dev> <048DC071-33E5-4644-B1C8-DD598248923A@petsc.dev>
Message-ID: 

Hello Matthew,

Thank you for your answer and the pointer to SNESSetFunction(). Is the PETSC_F90_2PTR_PROTO(ptr) necessary for every wrapper of that category ?
So far, for the PetscDSSetRiemannSolver, I did this :

#ifdef PETSC_HAVE_FORTRAN_CAPS
#define petscdssetriemannsolver_ PETSCDSSETRIEMANNSOLVER
#elif !defined(PETSC_HAVE_FORTRAN_UNDERSCORE) && !defined(FORTRANDOUBLEUNDERSCORE)
#define petscdssetriemannsolver_ petscdssetriemannsolver
#endif

PetscFortranCallbackId riemannsolver;

static PetscErrorCode ourriemannsolver(PetscInt dim, PetscInt Nf, PetscReal x[], PetscReal n[], PetscScalar uL[], PetscScalar uR[], PetscInt numConstants, PetscScalar constants[], PetscScalar flux[], void *ctx)
{
  PetscObjectUseFortranCallback((PetscDS)ctx, riemannsolver,
                                (PetscInt*, PetscInt*, PetscReal*, PetscReal*, PetscScalar*, PetscScalar*, PetscInt*, PetscScalar*, PetscScalar*, void*, PetscErrorCode*),
                                (&dim, &Nf, x, n, uL, uR, &numConstants, constants, flux, _ctx, &ierr));
}

PETSC_EXTERN void petscdssetriemannsolver_(PetscDS *prob, PetscInt *f,
                                           void (*rs)(PetscInt *dim, PetscInt *Nf, PetscReal x[], PetscReal n[], PetscScalar uL[], PetscScalar uR[], PetscInt *numConstants, PetscScalar constants[], PetscScalar flux[], void *ctx, PetscErrorCode *jerr),
                                           PetscErrorCode *ierr)
{
  *ierr = PetscObjectSetFortranCallback((PetscObject)*prob, PETSC_FORTRAN_CALLBACK_CLASS, &riemannsolver, (PetscVoidFunction)rs, prob);
  *ierr = PetscDSSetRiemannSolver(*prob, *f, (void*)ourriemannsolver);
}

It compiles, but I have no idea whether it works or not, and I won't know until the program is entirely built (with the TS and all) and it actually has to call a Riemann solver added with that routine.
What do you think ?

Thibault

On Sun, Dec 13, 2020 at 15:29, Matthew Knepley wrote:

> [...]
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From knepley at gmail.com  Sun Dec 13 08:52:31 2020
From: knepley at gmail.com (Matthew Knepley)
Date: Sun, 13 Dec 2020 09:52:31 -0500
Subject: [petsc-users] TS tutorial ex11 in Fortran
In-Reply-To: 
References: <1EC72DF8-61BD-482B-B1B0-9427AB1ADF74@petsc.dev> <048DC071-33E5-4644-B1C8-DD598248923A@petsc.dev>
Message-ID: 

On Sun, Dec 13, 2020 at 9:35 AM Thibault Bridel-Bertomeu wrote:

> Hello Matthew,
>
> Thank you for your answer and the pointer to SNESSetFunction(). Is the
> PETSC_F90_2PTR_PROTO(ptr) necessary for every wrapper of that category ?
>
> So far, for the PetscDSSetRiemannSolver, I did this : [...]
>
> It compiles, but I have no idea whether it works or not, and I won't know
> until the program is entirely built (with the TS and all) and it actually
> has to call a Riemann solver added with that routine.
> What do you think ?

It looks good for right now. We can help you debug it when the rest of the wrappers are in place.

  Thanks,

     Matt
-- 
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From knepley at gmail.com  Sun Dec 13 08:54:14 2020
From: knepley at gmail.com (Matthew Knepley)
Date: Sun, 13 Dec 2020 09:54:14 -0500
Subject: [petsc-users] PETSCFE_CLASSID/PETSCVF_CLASSID
In-Reply-To: <840becf2-e425-15c9-1daa-44908e754bc6@math.u-bordeaux.fr>
References: <121f20bf-9ca0-0657-d856-82a8a6b98899@math.u-bordeaux.fr> <840becf2-e425-15c9-1daa-44908e754bc6@math.u-bordeaux.fr>
Message-ID: 

On Sat, Dec 12, 2020 at 6:07 AM Nicolas Barral wrote:

> Thanks Matt and Jacob,
>
> There's still something not working yet, but I'm trying to build a MFE
> before asking.
>
> Matt, what you say is consistent with plex/tutorials/ex8.c, but not with
> DMAdaptorAdapt_Sequence_Private. Does that mean that the latter is broken ?

Yes, that may be broken. The pitfall of having an untested interface. I will try and look at it.

  Thanks,

     Matt

> Thanks
>
> --
> Nicolas
>
> On 11/12/2020 18:39, Matthew Knepley wrote:
> > On Fri, Dec 11, 2020 at 12:02 PM Nicolas Barral wrote:
> >
> >     Hi all (and probably more specifically Matt ?)
> >
> >     I am trying to understand how the class IDs of a DM field are set,
> >     and can't find it in the documentation.
> >
> >     A little background: I am mimicking
> >     SNES/utils/dmadapt.c/DMAdaptorAdapt_Sequence_Private for a specific
> >     case (I'm trying to build the same kind of metric from a single
> >     sensor field, without all the SNES layer).
> >
> >     I need to compute the gradient of the sensor field, using
> >     DMPlexComputeGradientClementInterpolant, for which I create a DM, to
> >     which I associate a PetscFE and a DS, like in existing code:
> >
> >     PetscFE feGrad;
> >     PetscDS probGrad;
> >
> >     ierr = PetscFECreateDefault(PetscObjectComm((PetscObject) dmGrad), dim,
> >     coordDim, PETSC_TRUE, NULL, -1, &feGrad);CHKERRQ(ierr);
> >     ierr = PetscDSSetDiscretization(probGrad, f, (PetscObject)
> >     feGrad);CHKERRQ(ierr);
> >
> > Jacob is correct, so let me give the history. Originally, you were to
> > call PetscDSSetDiscretization() as you have done.
> > However, now it is possible to have different discretizations within the
> > same domain, so now we want you to call
> > DMAddField(dm, feGrad) and then DMCreateDS(), which will
> > call PetscDSSetDiscretization() for you. I changed the
> > examples, but I did not have another place to document this.
> >
> >     ierr = PetscFEDestroy(&feGrad);CHKERRQ(ierr);
> >
> >     Yet, when I call DMPlexComputeGradientClementInterpolant, I get the
> >     following error:
> >     [0]PETSC ERROR: Unknown discretization type for field 0
> >
> >     I don't fully understand what all these objects are (FE, DS and
> >     Field), and how they are related - where would that be documented ?
> >     And what else do I need to do to make my example work ?
> >
> >     Thanks
> >
> >     --
> >     Nicolas

-- 
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/

-------------- next part --------------
An HTML attachment was scrubbed...
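
A minimal sketch of the sequence Matt describes: create the FE, register it with the DM, then build the DS. The names dm, feGrad, dim and coordDim are placeholders rather than code from this thread, and the optional DMLabel argument of DMAddField() is passed as NULL:

  PetscFE        feGrad;
  PetscErrorCode ierr;

  /* Create a default FE discretization for a coordDim-component field */
  ierr = PetscFECreateDefault(PetscObjectComm((PetscObject) dm), dim, coordDim, PETSC_TRUE, NULL, -1, &feGrad);CHKERRQ(ierr);
  /* Register it with the DM instead of calling PetscDSSetDiscretization() directly */
  ierr = DMAddField(dm, NULL, (PetscObject) feGrad);CHKERRQ(ierr);
  ierr = PetscFEDestroy(&feGrad);CHKERRQ(ierr);
  /* Build the PetscDS from all registered fields; this ends up calling
     PetscDSSetDiscretization() for each of them */
  ierr = DMCreateDS(dm);CHKERRQ(ierr);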
URL: 

From thibault.bridelbertomeu at gmail.com  Sun Dec 13 08:53:51 2020
From: thibault.bridelbertomeu at gmail.com (Thibault Bridel-Bertomeu)
Date: Sun, 13 Dec 2020 15:53:51 +0100
Subject: [petsc-users] TS tutorial ex11 in Fortran
In-Reply-To: 
References: <1EC72DF8-61BD-482B-B1B0-9427AB1ADF74@petsc.dev> <048DC071-33E5-4644-B1C8-DD598248923A@petsc.dev>
Message-ID: 

OK thank you !

For the PetscDSAddBoundary I am not sure either ... I did this for now :

#ifdef PETSC_HAVE_FORTRAN_CAPS
#define petscdsaddboundary_ PETSCDSADDBOUNDARY
#elif !defined(PETSC_HAVE_FORTRAN_UNDERSCORE) && !defined(FORTRANDOUBLEUNDERSCORE)
#define petscdsaddboundary_ petscdsaddboundary
#endif

PetscFortranCallbackId bocofunc, bocofunc_time;

static PetscErrorCode ourbocofunc(PetscReal time, const PetscReal *c, const PetscReal *n, const PetscScalar *a_xI, const PetscScalar *a_xG, void *ctx)
{
  PetscObjectUseFortranCallback((PetscDS)ctx, bocofunc,
                                (PetscReal*, const PetscReal*, const PetscReal*, const PetscScalar*, const PetscScalar*, void*, PetscErrorCode*),
                                (&time, c, n, a_xI, a_xG, ctx, &ierr));
}
static PetscErrorCode ourbocofunc_time(PetscReal time, const PetscReal *c, const PetscReal *n, const PetscScalar *a_xI, const PetscScalar *a_xG, void *ctx)
{
  PetscObjectUseFortranCallback((PetscDS)ctx, bocofunc_time,
                                (PetscReal*, const PetscReal*, const PetscReal*, const PetscScalar*, const PetscScalar*, void*, PetscErrorCode*),
                                (&time, c, n, a_xI, a_xG, ctx, &ierr));
}

PETSC_EXTERN void petscdsaddboundary_(PetscDS *prob, DMBoundaryConditionType *type, char *name, char *labelname, PetscInt *field, PetscInt *numcomps, PetscInt *comps,
                                      void (*bcFunc)(void), void (*bcFunc_t)(void),
                                      PetscInt *numids, const PetscInt *ids, void *ctx, PetscErrorCode *ierr,
                                      PETSC_FORTRAN_CHARLEN_T namelen, PETSC_FORTRAN_CHARLEN_T labelnamelen)
{
  char *newname, *newlabelname;
  FIXCHAR(name, namelen, newname);
  FIXCHAR(labelname, labelnamelen, newlabelname);
  *ierr = PetscObjectSetFortranCallback((PetscObject)*prob, PETSC_FORTRAN_CALLBACK_CLASS, &bocofunc,      (PetscVoidFunction)bcFunc,   prob);
  *ierr = PetscObjectSetFortranCallback((PetscObject)*prob, PETSC_FORTRAN_CALLBACK_CLASS, &bocofunc_time, (PetscVoidFunction)bcFunc_t, prob);
  *ierr = PetscDSAddBoundary(*prob, *type, newname, newlabelname, *field, *numcomps, comps,
                             (void (*)(void))ourbocofunc, (void (*)(void))ourbocofunc_time,
                             *numids, ids, ctx);
  FREECHAR(name, newname);
  FREECHAR(labelname, newlabelname);
}

But I do not know how to handle the two char* in the argument list : are there going to be two PETSC_FORTRAN_CHARLEN_T like I wrote ?

Thanks again,

Thibault

On Sun, Dec 13, 2020 at 15:52, Matthew Knepley wrote:

> It looks good for right now. We can help you debug it when the rest of the
> wrappers are in place.
> [...]
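
For reference, the two trailing length arguments match the convention used by the hand-written stubs that ship with PETSc: the Fortran compiler appends one hidden PETSC_FORTRAN_CHARLEN_T argument per character-string argument, in the order in which the strings appear. A stripped-down sketch of that shape (twostringstub_ is a hypothetical name, not an existing stub; FIXCHAR/FREECHAR come from petsc/private/fortranimpl.h):

PETSC_EXTERN void twostringstub_(char *name, char *labelname, PetscErrorCode *ierr,
                                 PETSC_FORTRAN_CHARLEN_T namelen, PETSC_FORTRAN_CHARLEN_T labelnamelen)
{
  char *newname, *newlabelname;
  FIXCHAR(name, namelen, newname);                  /* copy to a null-terminated C string */
  FIXCHAR(labelname, labelnamelen, newlabelname);
  /* ... call the C routine with newname and newlabelname here ... */
  FREECHAR(name, newname);
  FREECHAR(labelname, newlabelname);
}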
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From jed at jedbrown.org  Sun Dec 13 09:39:03 2020
From: jed at jedbrown.org (Jed Brown)
Date: Sun, 13 Dec 2020 08:39:03 -0700
Subject: [petsc-users] TS tutorial ex11 in Fortran
In-Reply-To: 
References: <1EC72DF8-61BD-482B-B1B0-9427AB1ADF74@petsc.dev> <048DC071-33E5-4644-B1C8-DD598248923A@petsc.dev>
Message-ID: <87tuspyaug.fsf@jedbrown.org>

Thibault Bridel-Bertomeu writes:

> [...] Although I am not sure exactly how to play with the
> PetscObjectSetFortranCallback & PetscObjectUseFortranCallback macros -
> could anybody advise please ?

tsmonitorset_ is a good example to follow. In your file, create one of these static structs with a member for each callback. These are IDs that will be used as keys for Fortran callbacks and their contexts. The salient parts of the file are below.

static struct {
  PetscFortranCallbackId prestep;
  PetscFortranCallbackId poststep;
  PetscFortranCallbackId rhsfunction;
  PetscFortranCallbackId rhsjacobian;
  PetscFortranCallbackId ifunction;
  PetscFortranCallbackId ijacobian;
  PetscFortranCallbackId monitor;
  PetscFortranCallbackId mondestroy;
  PetscFortranCallbackId transform;
#if defined(PETSC_HAVE_F90_2PTR_ARG)
  PetscFortranCallbackId function_pgiptr;
#endif
} _cb;

/*
   Note ctx is the same as ts so we need to get the Fortran context out of
   the TS; this gets put in _ctx using the callback ID
*/
static PetscErrorCode ourmonitor(TS ts,PetscInt i,PetscReal d,Vec v,void *ctx)
{
  PetscObjectUseFortranCallback(ts,_cb.monitor,(TS*,PetscInt*,PetscReal*,Vec*,void*,PetscErrorCode*),(&ts,&i,&d,&v,_ctx,&ierr));
}

Then follow as in tsmonitorset_, which sets two callbacks.
PETSC_EXTERN void tsmonitorset_(TS *ts,void (*func)(TS*,PetscInt*,PetscReal*,Vec*,void*,PetscErrorCode*),void *mctx,void (*d)(void*,PetscErrorCode*),PetscErrorCode *ierr)
{
  CHKFORTRANNULLFUNCTION(d);
  if ((PetscVoidFunction)func == (PetscVoidFunction) tsmonitordefault_) {
    *ierr = TSMonitorSet(*ts,(PetscErrorCode (*)(TS,PetscInt,PetscReal,Vec,void*))TSMonitorDefault,*(PetscViewerAndFormat**)mctx,(PetscErrorCode (*)(void **))PetscViewerAndFormatDestroy);
  } else {
    *ierr = PetscObjectSetFortranCallback((PetscObject)*ts,PETSC_FORTRAN_CALLBACK_CLASS,&_cb.monitor,(PetscVoidFunction)func,mctx);
    *ierr = PetscObjectSetFortranCallback((PetscObject)*ts,PETSC_FORTRAN_CALLBACK_CLASS,&_cb.mondestroy,(PetscVoidFunction)d,mctx);
    *ierr = TSMonitorSet(*ts,ourmonitor,*ts,ourmonitordestroy);
  }
}

From roland.richter at ntnu.no  Mon Dec 14 17:18:08 2020
From: roland.richter at ntnu.no (Roland Richter)
Date: Tue, 15 Dec 2020 00:18:08 +0100
Subject: [petsc-users] Transform of algorithm containing zgemv/FFT/Slicing into PETSc-functions
Message-ID: 

Dear all,

I am currently working on the transformation of an algorithm implemented using armadillo into PETSc. It is a forward/backward transformation, and boils down to the following steps (for the forward transformation):

Assume I have matrices A and B, defined as

A = |aa ab ac ad|
    |ae af ag ah|
    |ai aj ak al|

B = |ba bb bc|
    |be bf bg|
    |bi bj bk|

with the number of rows in A and B always equal, but the number of columns in B always less than or equal to half the number of columns in A (the example here is only for demonstration; I am aware that 3 is not less than or equal to 2).

Moreover, I have vectors x and y, with x defined as

x = |xa xb xc xd|

and y defined as

y = |ya yb yc|

The number of elements in x corresponds to the number of columns in A, and the number of elements in y correspondingly corresponds to the number of columns in B.

Now, the transformation can be described as

* Set all values in A to zero
* Copy B into A with an offset of a0:

  A(a0 = 1) = |0 ba bb bc|
              |0 be bf bg|
              |0 bi bj bk|

* Multiply every row in A elementwise with y, including the offset, resulting in

  A(a0 = 1) = |0 ba*ya bb*yb bc*yc|
              |0 be*ya bf*yb bg*yc|
              |0 bi*ya bj*yb bk*yc|

* Apply a 1d-FFT over each row of A, resulting in A'
* Multiply every row in A' elementwise with x, resulting in

  A'(a0 = 1) = |aa'*xa (ba*ya)'*xb (bb*yb)'*xc (bc*yc)'*xd|
               |ae'*xa (be*ya)'*xb (bf*yb)'*xc (bg*yc)'*xd|
               |ai'*xa (bi*ya)'*xb (bj*yb)'*xc (bk*yc)'*xd|

Based on earlier questions, I already know how to apply a vector to each row of a matrix (by using .diag()) and how to apply an FFT over each row of a distributed matrix by using FFTW. Still, I am not aware of a method for copying B into A with an offset, and therefore I would have to iterate over each row for the copy process, which might slow things down. Therefore, is there a way I could make this process more efficient using the built-in functions in PETSc? Unfortunately, I am not that familiar with all the functions yet.

Thanks!

Roland

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From bsmith at petsc.dev  Mon Dec 14 22:01:49 2020
From: bsmith at petsc.dev (Barry Smith)
Date: Mon, 14 Dec 2020 22:01:49 -0600
Subject: [petsc-users] Transform of algorithm containing zgemv/FFT/Slicing into PETSc-functions
In-Reply-To: 
References: 
Message-ID: <577FE964-4F02-414A-94C0-A2809ADA7513@petsc.dev>

  I think you can again use MatDenseGetArray() and do the copies directly, respecting the shift that you desire. Each process will just do the local rows, so you need not worry about parallelism. I think it may be as simple as getting the array pointer for A, shifting it by the number of local rows times the number of offset columns, and then doing a PetscArraycpy() to copy the B values into the shifted location in A.

  Barry

> On Dec 14, 2020, at 5:18 PM, Roland Richter wrote:
> [...]

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
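
A minimal sketch of that copy, assuming dense A and B with the same local row layout, a column offset a0, and the column-major storage of MatDense; the variable names are illustrative, and the surrounding function is assumed to return a PetscErrorCode so that CHKERRQ can be used:

  Mat               A, B;       /* created elsewhere: dense, same local row layout */
  PetscInt          a0 = 1;     /* column offset for the copy, as in the example above */
  PetscScalar       *a;
  const PetscScalar *b;
  PetscInt          m, nB, ldaA, ldaB, j;
  PetscErrorCode    ierr;

  ierr = MatGetLocalSize(B, &m, NULL);CHKERRQ(ierr);   /* local rows of B (and of A) */
  ierr = MatGetSize(B, NULL, &nB);CHKERRQ(ierr);       /* number of columns of B     */
  ierr = MatDenseGetLDA(A, &ldaA);CHKERRQ(ierr);
  ierr = MatDenseGetLDA(B, &ldaB);CHKERRQ(ierr);
  ierr = MatDenseGetArray(A, &a);CHKERRQ(ierr);
  ierr = MatDenseGetArrayRead(B, &b);CHKERRQ(ierr);
  /* Columns are stored contiguously, so column j of A starts at a + j*ldaA;
     shifting by a0 columns puts column j of B at a + (a0 + j)*ldaA */
  for (j = 0; j < nB; ++j) {
    ierr = PetscArraycpy(a + (a0 + j)*ldaA, b + j*ldaB, m);CHKERRQ(ierr);
  }
  ierr = MatDenseRestoreArrayRead(B, &b);CHKERRQ(ierr);
  ierr = MatDenseRestoreArray(A, &a);CHKERRQ(ierr);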
From fdkong.jd at gmail.com  Mon Dec 14 23:59:52 2020
From: fdkong.jd at gmail.com (Fande Kong)
Date: Mon, 14 Dec 2020 22:59:52 -0700
Subject: [petsc-users] valgrind with petscmpiexec
Message-ID: 

Hi All,

I tried to use valgrind to check whether the simulation is valgrind clean, because I saw some random communication failures during the simulation.

I tried this command line:

petscmpiexec -valgrind -n 576 ../../../moose-app-oprof -i input.i -log_view -snes_view

But I got the following error messages:

valgrind: Unable to start up properly. Giving up.
==75586== VG_(mkstemp): failed to create temp file: /var/tmp/pbs.3110013.sawtoothpbs/valgrind_proc_75586_cmdline_8c3fabf2
==75586== VG_(mkstemp): failed to create temp file: /var/tmp/pbs.3110013.sawtoothpbs/valgrind_proc_75586_cmdline_8cac2243
[... many similar VG_(mkstemp) failures for processes 75586, 75596 and 75597 omitted ...]
valgrind: Startup or configuration error:
valgrind: Can't create client cmdline file in /var/tmp/pbs.3110013.sawtoothpbs/valgrind_proc_75586_cmdline_4c8857d8
valgrind: Unable to start up properly. Giving up.

I did a bit of searching online, and found something related:
https://stackoverflow.com/questions/13707211/what-causes-mkstemp-to-fail-when-running-many-simultaneous-valgrind-processes

But I do not know the right way to fix the issue.

Thanks so much,

Fande,

-------------- next part --------------
An HTML attachment was scrubbed...
> ==75597== VG_(mkstemp): failed to create temp file: /var/tmp/pbs.3110013.sawtoothpbs/valgrind_proc_75597_cmdline_b036bdf2 > ==75597== VG_(mkstemp): failed to create temp file: /var/tmp/pbs.3110013.sawtoothpbs/valgrind_proc_75597_cmdline_105acc43 > ==75597== VG_(mkstemp): failed to create temp file: /var/tmp/pbs.3110013.sawtoothpbs/valgrind_proc_75597_cmdline_9fb792c0 > ==75597== VG_(mkstemp): failed to create temp file: /var/tmp/pbs.3110013.sawtoothpbs/valgrind_proc_75597_cmdline_30602bf9 > ==75597== VG_(mkstemp): failed to create temp file: /var/tmp/pbs.3110013.sawtoothpbs/valgrind_proc_75597_cmdline_21eec73e > ==75597== VG_(mkstemp): failed to create temp file: /var/tmp/pbs.3110013.sawtoothpbs/valgrind_proc_75597_cmdline_0b53e99f > ==75597== VG_(mkstemp): failed to create temp file: /var/tmp/pbs.3110013.sawtoothpbs/valgrind_proc_75597_cmdline_73e31aec > ==75597== VG_(mkstemp): failed to create temp file: /var/tmp/pbs.3110013.sawtoothpbs/valgrind_proc_75597_cmdline_486e8eb5 > ==75597== VG_(mkstemp): failed to create temp file: /var/tmp/pbs.3110013.sawtoothpbs/valgrind_proc_75597_cmdline_db8c194a > ==75597== VG_(mkstemp): failed to create temp file: /var/tmp/pbs.3110013.sawtoothpbs/valgrind_proc_75597_cmdline_839780bb > > > I did a bit search online, and found something related https://stackoverflow.com/questions/13707211/what-causes-mkstemp-to-fail-when-running-many-simultaneous-valgrind-processes > > But do not know what is the right way to fix the issue. > > Thanks so much, > > Fande, > -------------- next part -------------- An HTML attachment was scrubbed... URL: From thibault.bridelbertomeu at gmail.com Tue Dec 15 05:35:17 2020 From: thibault.bridelbertomeu at gmail.com (Thibault Bridel-Bertomeu) Date: Tue, 15 Dec 2020 12:35:17 +0100 Subject: [petsc-users] TS tutorial ex11 in Fortran In-Reply-To: <87tuspyaug.fsf@jedbrown.org> References: <1EC72DF8-61BD-482B-B1B0-9427AB1ADF74@petsc.dev> <048DC071-33E5-4644-B1C8-DD598248923A@petsc.dev> <87tuspyaug.fsf@jedbrown.org> Message-ID: Hello everyone, So far, I have the wrappers in the files attached to this e-mail. I still do not know if they work properly - at least the code compiles and the calls to the wrapped-subroutine do not fail - but I wanted to put this here in case someone sees something really wrong with it already. Thank you again for your help, I'll try to post updates of the F90 version of ex11 regularly in this thread. Stay safe, Thibault Bridel-Bertomeu Le dim. 13 d?c. 2020 ? 16:39, Jed Brown a ?crit : > Thibault Bridel-Bertomeu writes: > > > Thank you Mark for your answer. > > > > I am not sure what you think could be in the setBC1 routine ? How to make > > the connection with the PetscDS ? > > > > On the other hand, I actually found after a while TSMonitorSet has a > > fortran wrapper, and it does take as arguments two function pointers, so > I > > guess it is possible ? Although I am not sure exactly how to play with > the > > PetscObjectSetFortranCallback & PetscObjectUseFortranCallback macros - > > could anybody advise please ? > > tsmonitorset_ is a good example to follow. In your file, create one of > these static structs with a member for each callback. These are IDs that > will be used as keys for Fortran callbacks and their contexts. The salient > parts of the file are below. 
> > static struct { > PetscFortranCallbackId prestep; > PetscFortranCallbackId poststep; > PetscFortranCallbackId rhsfunction; > PetscFortranCallbackId rhsjacobian; > PetscFortranCallbackId ifunction; > PetscFortranCallbackId ijacobian; > PetscFortranCallbackId monitor; > PetscFortranCallbackId mondestroy; > PetscFortranCallbackId transform; > #if defined(PETSC_HAVE_F90_2PTR_ARG) > PetscFortranCallbackId function_pgiptr; > #endif > } _cb; > > /* > Note ctx is the same as ts so we need to get the Fortran context out of > the TS; this gets put in _ctx using the callback ID > */ > static PetscErrorCode ourmonitor(TS ts,PetscInt i,PetscReal d,Vec v,void > *ctx) > { > > PetscObjectUseFortranCallback(ts,_cb.monitor,(TS*,PetscInt*,PetscReal*,Vec > *,void*,PetscErrorCode*),(&ts,&i,&d,&v,_ctx,&ierr)); > } > > Then follow as in tsmonitorset_, which sets two callbacks. > > PETSC_EXTERN void tsmonitorset_(TS *ts,void > (*func)(TS*,PetscInt*,PetscReal*,Vec*,void*,PetscErrorCode*),void > *mctx,void (*d)(void*,PetscErrorCode*),PetscErrorCode *ierr) > { > CHKFORTRANNULLFUNCTION(d); > if ((PetscVoidFunction)func == (PetscVoidFunction) tsmonitordefault_) { > *ierr = TSMonitorSet(*ts,(PetscErrorCode > (*)(TS,PetscInt,PetscReal,Vec,void*))TSMonitorDefault,*(PetscViewerAndFormat**)mctx,(PetscErrorCode > (*)(void **))PetscViewerAndFormatDestroy); > } else { > *ierr = > PetscObjectSetFortranCallback((PetscObject)*ts,PETSC_FORTRAN_CALLBACK_CLASS,&_cb.monitor,(PetscVoidFunction)func,mctx); > *ierr = > PetscObjectSetFortranCallback((PetscObject)*ts,PETSC_FORTRAN_CALLBACK_CLASS,&_cb.mondestroy,(PetscVoidFunction)d,mctx); > *ierr = TSMonitorSet(*ts,ourmonitor,*ts,ourmonitordestroy); > } > } > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: wrapper_petsc.h90 Type: application/octet-stream Size: 887 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: wrapper_petsc.c Type: text/x-csrc Size: 6507 bytes Desc: not available URL: From yaqiwang at gmail.com Tue Dec 15 09:55:03 2020 From: yaqiwang at gmail.com (Yaqi Wang) Date: Tue, 15 Dec 2020 08:55:03 -0700 Subject: [petsc-users] valgrind with petscmpiexec In-Reply-To: References: Message-ID: <8818383F-B4AC-4A3D-AF5F-14B5E2ADF011@gmail.com> Fande, Did you try set TMPDIR for valgrind? Sent from my iPhone > On Dec 15, 2020, at 1:23 AM, Barry Smith wrote: > > > No idea. Perhaps petscmpiexec could be modified so it only ran valgrind on the first 10 ranks? Not clear how to do that. Or valgrind should get a MR that removes this small arbitrary limitation on the number of processes. 576 is so 2000 :-) > > > Barry > > >> On Dec 14, 2020, at 11:59 PM, Fande Kong wrote: >> >> Hi All, >> >> I tried to use valgrind to check if the simulation is valgrind clean because I saw some random communication fails during the simulation. >> >> I tried this command-line >> >> petscmpiexec -valgrind -n 576 ../../../moose-app-oprof -i input.i -log_view -snes_view >> >> >> But I got the following error messages: >> >> valgrind: Unable to start up properly. Giving up. 
>> ==75586== VG_(mkstemp): failed to create temp file: /var/tmp/pbs.3110013.sawtoothpbs/valgrind_proc_75586_cmdline_8c3fabf2 >> ==75586== VG_(mkstemp): failed to create temp file: /var/tmp/pbs.3110013.sawtoothpbs/valgrind_proc_75586_cmdline_8cac2243 >> ==75586== VG_(mkstemp): failed to create temp file: /var/tmp/pbs.3110013.sawtoothpbs/valgrind_proc_75586_cmdline_da8d30c0 >> ==75586== VG_(mkstemp): failed to create temp file: /var/tmp/pbs.3110013.sawtoothpbs/valgrind_proc_75586_cmdline_877871f9 >> ==75586== VG_(mkstemp): failed to create temp file: /var/tmp/pbs.3110013.sawtoothpbs/valgrind_proc_75586_cmdline_c098953e >> ==75586== VG_(mkstemp): failed to create temp file: /var/tmp/pbs.3110013.sawtoothpbs/valgrind_proc_75586_cmdline_aa649f9f >> ==75586== VG_(mkstemp): failed to create temp file: /var/tmp/pbs.3110013.sawtoothpbs/valgrind_proc_75586_cmdline_097498ec >> ==75586== VG_(mkstemp): failed to create temp file: /var/tmp/pbs.3110013.sawtoothpbs/valgrind_proc_75586_cmdline_bfc534b5 >> ==75586== VG_(mkstemp): failed to create temp file: /var/tmp/pbs.3110013.sawtoothpbs/valgrind_proc_75586_cmdline_7604c74a >> ==75586== VG_(mkstemp): failed to create temp file: /var/tmp/pbs.3110013.sawtoothpbs/valgrind_proc_75586_cmdline_a1fd96bb >> ==75586== VG_(mkstemp): failed to create temp file: /var/tmp/pbs.3110013.sawtoothpbs/valgrind_proc_75586_cmdline_4c8857d8 >> valgrind: Startup or configuration error: >> valgrind: Can't create client cmdline file in /var/tmp/pbs.3110013.sawtoothpbs/valgrind_proc_75586_cmdline_4c8857d8 >> valgrind: Unable to start up properly. Giving up. >> ==75596== VG_(mkstemp): failed to create temp file: /var/tmp/pbs.3110013.sawtoothpbs/valgrind_proc_75596_cmdline_bc5492bb >> ==75596== VG_(mkstemp): failed to create temp file: /var/tmp/pbs.3110013.sawtoothpbs/valgrind_proc_75596_cmdline_ec59a3d8 >> valgrind: Startup or configuration error: >> valgrind: Can't create client cmdline file in /var/tmp/pbs.3110013.sawtoothpbs/valgrind_proc_75596_cmdline_ec59a3d8 >> valgrind: Unable to start up properly. Giving up. 
>> ==75597== VG_(mkstemp): failed to create temp file: /var/tmp/pbs.3110013.sawtoothpbs/valgrind_proc_75597_cmdline_b036bdf2 >> ==75597== VG_(mkstemp): failed to create temp file: /var/tmp/pbs.3110013.sawtoothpbs/valgrind_proc_75597_cmdline_105acc43 >> ==75597== VG_(mkstemp): failed to create temp file: /var/tmp/pbs.3110013.sawtoothpbs/valgrind_proc_75597_cmdline_9fb792c0 >> ==75597== VG_(mkstemp): failed to create temp file: /var/tmp/pbs.3110013.sawtoothpbs/valgrind_proc_75597_cmdline_30602bf9 >> ==75597== VG_(mkstemp): failed to create temp file: /var/tmp/pbs.3110013.sawtoothpbs/valgrind_proc_75597_cmdline_21eec73e >> ==75597== VG_(mkstemp): failed to create temp file: /var/tmp/pbs.3110013.sawtoothpbs/valgrind_proc_75597_cmdline_0b53e99f >> ==75597== VG_(mkstemp): failed to create temp file: /var/tmp/pbs.3110013.sawtoothpbs/valgrind_proc_75597_cmdline_73e31aec >> ==75597== VG_(mkstemp): failed to create temp file: /var/tmp/pbs.3110013.sawtoothpbs/valgrind_proc_75597_cmdline_486e8eb5 >> ==75597== VG_(mkstemp): failed to create temp file: /var/tmp/pbs.3110013.sawtoothpbs/valgrind_proc_75597_cmdline_db8c194a >> ==75597== VG_(mkstemp): failed to create temp file: /var/tmp/pbs.3110013.sawtoothpbs/valgrind_proc_75597_cmdline_839780bb >> >> >> I did a bit search online, and found something related https://stackoverflow.com/questions/13707211/what-causes-mkstemp-to-fail-when-running-many-simultaneous-valgrind-processes >> >> But do not know what is the right way to fix the issue. >> >> Thanks so much, >> >> Fande, >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From balay at mcs.anl.gov Tue Dec 15 10:33:33 2020 From: balay at mcs.anl.gov (Satish Balay) Date: Tue, 15 Dec 2020 10:33:33 -0600 Subject: [petsc-users] valgrind with petscmpiexec In-Reply-To: <8818383F-B4AC-4A3D-AF5F-14B5E2ADF011@gmail.com> References: <8818383F-B4AC-4A3D-AF5F-14B5E2ADF011@gmail.com> Message-ID: <1236bca-4eb1-d212-8f98-175767751b9@mcs.anl.gov> For one - I think using '--log-file=valgrind-%q{HOSTNAME}-%p.log' might help [to keep the logs from each process separate] And I think the TMPDIR recommendation is to have a different value for each of the nodes [where the "pid" clash comes from] and perhaps "TMPDIR=/tmp" might work - as this would be local disk on each node [vs /var/tmp/ - which is probably a shared TMP across nodes] But then - PBS or this MPI requires a shared TMP? Satish On Tue, 15 Dec 2020, Yaqi Wang wrote: > Fande, > > Did you try set TMPDIR for valgrind? > > Sent from my iPhone > > > On Dec 15, 2020, at 1:23 AM, Barry Smith wrote: > > > > > > No idea. Perhaps petscmpiexec could be modified so it only ran valgrind on the first 10 ranks? Not clear how to do that. Or valgrind should get a MR that removes this small arbitrary limitation on the number of processes. 576 is so 2000 :-) > > > > > > Barry > > > > > >> On Dec 14, 2020, at 11:59 PM, Fande Kong wrote: > >> > >> Hi All, > >> > >> I tried to use valgrind to check if the simulation is valgrind clean because I saw some random communication fails during the simulation. > >> > >> I tried this command-line > >> > >> petscmpiexec -valgrind -n 576 ../../../moose-app-oprof -i input.i -log_view -snes_view > >> > >> > >> But I got the following error messages: > >> > >> valgrind: Unable to start up properly. Giving up. 
> >> [VG_(mkstemp) failures and valgrind startup errors for processes 75586 and 75596, identical to the log quoted earlier in this thread, elided]
> >> ==75597== VG_(mkstemp): failed to create temp file: /var/tmp/pbs.3110013.sawtoothpbs/valgrind_proc_75597_cmdline_b036bdf2 > >> ==75597== VG_(mkstemp): failed to create temp file: /var/tmp/pbs.3110013.sawtoothpbs/valgrind_proc_75597_cmdline_105acc43 > >> ==75597== VG_(mkstemp): failed to create temp file: /var/tmp/pbs.3110013.sawtoothpbs/valgrind_proc_75597_cmdline_9fb792c0 > >> ==75597== VG_(mkstemp): failed to create temp file: /var/tmp/pbs.3110013.sawtoothpbs/valgrind_proc_75597_cmdline_30602bf9 > >> ==75597== VG_(mkstemp): failed to create temp file: /var/tmp/pbs.3110013.sawtoothpbs/valgrind_proc_75597_cmdline_21eec73e > >> ==75597== VG_(mkstemp): failed to create temp file: /var/tmp/pbs.3110013.sawtoothpbs/valgrind_proc_75597_cmdline_0b53e99f > >> ==75597== VG_(mkstemp): failed to create temp file: /var/tmp/pbs.3110013.sawtoothpbs/valgrind_proc_75597_cmdline_73e31aec > >> ==75597== VG_(mkstemp): failed to create temp file: /var/tmp/pbs.3110013.sawtoothpbs/valgrind_proc_75597_cmdline_486e8eb5 > >> ==75597== VG_(mkstemp): failed to create temp file: /var/tmp/pbs.3110013.sawtoothpbs/valgrind_proc_75597_cmdline_db8c194a > >> ==75597== VG_(mkstemp): failed to create temp file: /var/tmp/pbs.3110013.sawtoothpbs/valgrind_proc_75597_cmdline_839780bb > >> > >> > >> I did a bit search online, and found something related https://stackoverflow.com/questions/13707211/what-causes-mkstemp-to-fail-when-running-many-simultaneous-valgrind-processes > >> > >> But do not know what is the right way to fix the issue. > >> > >> Thanks so much, > >> > >> Fande, > >> > > > From fdkong.jd at gmail.com Tue Dec 15 10:54:36 2020 From: fdkong.jd at gmail.com (Fande Kong) Date: Tue, 15 Dec 2020 09:54:36 -0700 Subject: [petsc-users] valgrind with petscmpiexec In-Reply-To: <1236bca-4eb1-d212-8f98-175767751b9@mcs.anl.gov> References: <8818383F-B4AC-4A3D-AF5F-14B5E2ADF011@gmail.com> <1236bca-4eb1-d212-8f98-175767751b9@mcs.anl.gov> Message-ID: Thanks so much, Satish, On Tue, Dec 15, 2020 at 9:33 AM Satish Balay via petsc-users < petsc-users at mcs.anl.gov> wrote: > For one - I think using '--log-file=valgrind-%q{HOSTNAME}-%p.log' might > help [to keep the logs from each process separate] > > And I think the TMPDIR recommendation is to have a different value for > each of the nodes [where the "pid" clash comes from] and perhaps > "TMPDIR=/tmp" might work "TMPDIR=/tmp" worked out. Fande > - as this would be local disk on each node [vs /var/tmp/ - which is > probably a shared TMP across nodes] > > But then - PBS or this MPI requires a shared TMP? > > Satish > > On Tue, 15 Dec 2020, Yaqi Wang wrote: > > > Fande, > > > > Did you try set TMPDIR for valgrind? > > > > Sent from my iPhone > > > > > On Dec 15, 2020, at 1:23 AM, Barry Smith wrote: > > > > > > > > > No idea. Perhaps petscmpiexec could be modified so it only ran > valgrind on the first 10 ranks? Not clear how to do that. Or valgrind > should get a MR that removes this small arbitrary limitation on the number > of processes. 576 is so 2000 :-) > > > > > > > > > Barry > > > > > > > > >> On Dec 14, 2020, at 11:59 PM, Fande Kong wrote: > > >> > > >> Hi All, > > >> > > >> I tried to use valgrind to check if the simulation is valgrind clean > because I saw some random communication fails during the simulation. > > >> > > >> I tried this command-line > > >> > > >> petscmpiexec -valgrind -n 576 ../../../moose-app-oprof -i input.i > -log_view -snes_view > > >> > > >> > > >> But I got the following error messages: > > >> > > >> valgrind: Unable to start up properly. 
Giving up. > > >> [the remainder of the quoted valgrind log, identical to the copies earlier in this thread, elided; a consolidated launch recipe follows below]
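To make the fix concrete: here is a minimal sketch of a launch line combining Satish's two suggestions above (per-process log files and a node-local TMPDIR), assuming a PBS batch script in which /tmp is local to each compute node and in which ./app stands in for the real executable:

    export TMPDIR=/tmp     # node-local scratch; avoids the shared /var/tmp/pbs.* directory
    mpiexec -n 576 valgrind --track-origins=yes \
        --log-file=valgrind-%q{HOSTNAME}-%p.log \
        ./app -i input.i -log_view

Inside valgrind's --log-file option, %q{HOSTNAME} expands to the HOSTNAME environment variable and %p to the process id, so every rank writes its own log. Note that whether the TMPDIR setting propagates to ranks on remote nodes depends on the MPI launcher; with some launchers it must be forwarded explicitly (for example via an option such as hydra's -genvall), so verify that part on your system.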
> > >> [second block of VG_(mkstemp) failures elided]
> > >> I did a bit search online, and found something related https://stackoverflow.com/questions/13707211/what-causes-mkstemp-to-fail-when-running-many-simultaneous-valgrind-processes
> > >> But do not know what is the right way to fix the issue.
> > >> Thanks so much,
> > >> Fande,
-------------- next part -------------- An HTML attachment was scrubbed... URL: From okoshkarov at tae.com Tue Dec 15 11:18:31 2020 From: okoshkarov at tae.com (Alex Koshkarov) Date: Tue, 15 Dec 2020 17:18:31 +0000 Subject: [petsc-users] Petsc makefile and PETSC_COMPILE variable Message-ID: <53487151-D8AD-49D5-8827-BFDAF8D8CFA8@tae.com>
Hi All,
I have been using a trivial makefile (see below) for code that uses PETSc. The makefile relies on the variable PETSC_COMPILE, which disappeared in the new PETSc version (absent in 3.14.2, but present in 3.13.4). What would be the right way to fix the makefile? (Should I use something like PETSC_COMPILE_SINGLE?) Is it a very bad practice to use such a makefile?
p.s. sorry if this is a duplicate message, I believe I sent the first one to the wrong address.
Thank you very much,
Alex Koshkarov.
Example of makefile; it assumes sources in "src" and creates objects in "objects":

CPP := $(wildcard src/*.c)
DEP := $(wildcard src/*.h)
OBJ := $(addprefix objects/,$(notdir $(CPP:.c=.o)))

include ${PETSC_DIR}/lib/petsc/conf/variables
include ${PETSC_DIR}/lib/petsc/conf/rules

driver: $(OBJ)
	-${CLINKER} -o $@ $^ ${PETSC_LIB} ${EXTRALIBS} ${CFLAGS}

objects/%.o: src/%.c $(DEP)
	${PETSC_COMPILE} -c $< -o $@

-------------- next part -------------- An HTML attachment was scrubbed... URL: From balay at mcs.anl.gov Tue Dec 15 11:25:26 2020 From: balay at mcs.anl.gov (Satish Balay) Date: Tue, 15 Dec 2020 11:25:26 -0600 Subject: [petsc-users] Petsc makefile and PETSC_COMPILE variable In-Reply-To: <53487151-D8AD-49D5-8827-BFDAF8D8CFA8@tae.com> References: <53487151-D8AD-49D5-8827-BFDAF8D8CFA8@tae.com> Message-ID: <1221f54-2e-5fa2-59f2-b9a7f63464a@mcs.anl.gov>
On Tue, 15 Dec 2020, Alex Koshkarov wrote:
> Hi All,
>
> I have been using trivial makefile (see below) for the code which uses petsc.
The makefile relies on variable PETSC_COMPILE which disappeared in new petsc version (absent in 3.14.2, but present in 3.13.4). What would be the right way to fix the makefile? (should I use something like PETSC_COMPILE_SINGLE ?). Yes - this change was a bugfix. > Is it a very bad practice to use such makefile? For most use cases the default targets work. However this usage [where sources and obj files are in different dirs] is not covered by them. So - I think using such targets is appropriate. There is also share/petsc/Makefile.user - which attempts to provide a portable way to create user makefiles [that don't rely on internals like PETSC_COMPILE_SINGLE] - but requires gnumake and pkgconfig Satish > > p.s. sorry if this is a duplicate message, I believe I sent the first one to the wrong address. > > Thank you very much, > Alex Koshkarov. > > > Example of makefile, it assumes sources in ?src? and creats objects in ?objects?: > > CPP := $(wildcard src/*.c) > DEP := $(wildcard src/*.h) > OBJ := $(addprefix objects/,$(notdir $(CPP:.c=.o))) > > include ${PETSC_DIR}/lib/petsc/conf/variables > include ${PETSC_DIR}/lib/petsc/conf/rules > > driver: $(OBJ) > -${CLINKER} -o $@ $^ ${PETSC_LIB} ${EXTRALIBS} ${CFLAGS} > > objects/%.o: src/%.c $(DEP) > ${PETSC_COMPILE} -c $< -o $@ > > From okoshkarov at tae.com Tue Dec 15 11:46:48 2020 From: okoshkarov at tae.com (Alex Koshkarov) Date: Tue, 15 Dec 2020 17:46:48 +0000 Subject: [petsc-users] Petsc makefile and PETSC_COMPILE variable In-Reply-To: <1221f54-2e-5fa2-59f2-b9a7f63464a@mcs.anl.gov> References: <53487151-D8AD-49D5-8827-BFDAF8D8CFA8@tae.com> <1221f54-2e-5fa2-59f2-b9a7f63464a@mcs.anl.gov> Message-ID: <05CDFCFA-7026-4ADE-AACC-3D89B30A7F86@tae.com> Thank you! It makes sense to use share/petsc/Makefile.user - I will try to understand it. However, can you please tell me what is the meaning of "_SINGLE" in "PETSC_COMPILE_SINGLE"? Does it mean compile only one source file? Best regards, Alex Koshkarov. ?On 12/15/20, 9:25 AM, "Satish Balay" wrote: On Tue, 15 Dec 2020, Alex Koshkarov wrote: > Hi All, > > I have been using trivial makefile (see below) for the code which uses petsc. The makefile relies on variable PETSC_COMPILE which disappeared in new petsc version (absent in 3.14.2, but present in 3.13.4). What would be the right way to fix the makefile? (should I use something like PETSC_COMPILE_SINGLE ?). Yes - this change was a bugfix. > Is it a very bad practice to use such makefile? For most use cases the default targets work. However this usage [where sources and obj files are in different dirs] is not covered by them. So - I think using such targets is appropriate. There is also share/petsc/Makefile.user - which attempts to provide a portable way to create user makefiles [that don't rely on internals like PETSC_COMPILE_SINGLE] - but requires gnumake and pkgconfig Satish > > p.s. sorry if this is a duplicate message, I believe I sent the first one to the wrong address. > > Thank you very much, > Alex Koshkarov. > > > Example of makefile, it assumes sources in ?src? 
and creats objects in ?objects?: > > CPP := $(wildcard src/*.c) > DEP := $(wildcard src/*.h) > OBJ := $(addprefix objects/,$(notdir $(CPP:.c=.o))) > > include ${PETSC_DIR}/lib/petsc/conf/variables > include ${PETSC_DIR}/lib/petsc/conf/rules > > driver: $(OBJ) > -${CLINKER} -o $@ $^ ${PETSC_LIB} ${EXTRALIBS} ${CFLAGS} > > objects/%.o: src/%.c $(DEP) > ${PETSC_COMPILE} -c $< -o $@ > > From balay at mcs.anl.gov Tue Dec 15 11:55:46 2020 From: balay at mcs.anl.gov (Satish Balay) Date: Tue, 15 Dec 2020 11:55:46 -0600 Subject: [petsc-users] Petsc makefile and PETSC_COMPILE variable In-Reply-To: <05CDFCFA-7026-4ADE-AACC-3D89B30A7F86@tae.com> References: <53487151-D8AD-49D5-8827-BFDAF8D8CFA8@tae.com> <1221f54-2e-5fa2-59f2-b9a7f63464a@mcs.anl.gov> <05CDFCFA-7026-4ADE-AACC-3D89B30A7F86@tae.com> Message-ID: <6464c0b5-9f62-7c63-37bb-cf9475f2d74c@mcs.anl.gov> Its internal organization of some old code. Previously we had 2 things: - PETSC_COMPILE (perhaps PETSC_COMPILE_MULTIPLE is a more appropriate name) did something equivalent to 'gcc -c a.c b.c c.c' - i.e compile multiple files at the same time. - PETSC_COMPILE_SINGLE used was to compile one file at a time. [gcc -c a.c] Some of the old build infrastructure that uses PETSC_COMPILE is no longer used and removed [ i.e make target 'libfast'] And somehow the remaining *_SINGLE targets had this error [i.e PETSC_COMPILE was used incorrectly instead of PETSC_COMPILE_SINGLE] which got fixed. Satish On Tue, 15 Dec 2020, Alex Koshkarov wrote: > Thank you! > > It makes sense to use share/petsc/Makefile.user - I will try to understand it. However, can you please tell me what is the meaning of "_SINGLE" in "PETSC_COMPILE_SINGLE"? Does it mean compile only one source file? > > Best regards, > Alex Koshkarov. > > ?On 12/15/20, 9:25 AM, "Satish Balay" wrote: > > On Tue, 15 Dec 2020, Alex Koshkarov wrote: > > > Hi All, > > > > I have been using trivial makefile (see below) for the code which uses petsc. The makefile relies on variable PETSC_COMPILE which disappeared in new petsc version (absent in 3.14.2, but present in 3.13.4). What would be the right way to fix the makefile? (should I use something like PETSC_COMPILE_SINGLE ?). > > Yes - this change was a bugfix. > > > Is it a very bad practice to use such makefile? > > For most use cases the default targets work. However this usage [where sources and obj files are in different dirs] is not covered by them. > > So - I think using such targets is appropriate. > > There is also share/petsc/Makefile.user - which attempts to provide a portable way to create user makefiles [that don't rely on internals like PETSC_COMPILE_SINGLE] - but requires gnumake and pkgconfig > > Satish > > > > > > p.s. sorry if this is a duplicate message, I believe I sent the first one to the wrong address. > > > > Thank you very much, > > Alex Koshkarov. > > > > > > Example of makefile, it assumes sources in ?src? 
and creats objects in ?objects?: > > > > CPP := $(wildcard src/*.c) > > DEP := $(wildcard src/*.h) > > OBJ := $(addprefix objects/,$(notdir $(CPP:.c=.o))) > > > > include ${PETSC_DIR}/lib/petsc/conf/variables > > include ${PETSC_DIR}/lib/petsc/conf/rules > > > > driver: $(OBJ) > > -${CLINKER} -o $@ $^ ${PETSC_LIB} ${EXTRALIBS} ${CFLAGS} > > > > objects/%.o: src/%.c $(DEP) > > ${PETSC_COMPILE} -c $< -o $@ > > > > > > From knepley at gmail.com Tue Dec 15 11:57:52 2020 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 15 Dec 2020 12:57:52 -0500 Subject: [petsc-users] Petsc makefile and PETSC_COMPILE variable In-Reply-To: <05CDFCFA-7026-4ADE-AACC-3D89B30A7F86@tae.com> References: <53487151-D8AD-49D5-8827-BFDAF8D8CFA8@tae.com> <1221f54-2e-5fa2-59f2-b9a7f63464a@mcs.anl.gov> <05CDFCFA-7026-4ADE-AACC-3D89B30A7F86@tae.com> Message-ID: On Tue, Dec 15, 2020 at 12:46 PM Alex Koshkarov wrote: > Thank you! > > It makes sense to use share/petsc/Makefile.user - I will try to understand > it. However, can you please tell me what is the meaning of "_SINGLE" in > "PETSC_COMPILE_SINGLE"? Does it mean compile only one source file? > Yes. Thanks, MAtt > Best regards, > Alex Koshkarov. > > ?On 12/15/20, 9:25 AM, "Satish Balay" wrote: > > On Tue, 15 Dec 2020, Alex Koshkarov wrote: > > > Hi All, > > > > I have been using trivial makefile (see below) for the code which > uses petsc. The makefile relies on variable PETSC_COMPILE which disappeared > in new petsc version (absent in 3.14.2, but present in 3.13.4). What would > be the right way to fix the makefile? (should I use something like > PETSC_COMPILE_SINGLE ?). > > Yes - this change was a bugfix. > > > Is it a very bad practice to use such makefile? > > For most use cases the default targets work. However this usage [where > sources and obj files are in different dirs] is not covered by them. > > So - I think using such targets is appropriate. > > There is also share/petsc/Makefile.user - which attempts to provide a > portable way to create user makefiles [that don't rely on internals like > PETSC_COMPILE_SINGLE] - but requires gnumake and pkgconfig > > Satish > > > > > > p.s. sorry if this is a duplicate message, I believe I sent the > first one to the wrong address. > > > > Thank you very much, > > Alex Koshkarov. > > > > > > Example of makefile, it assumes sources in ?src? and creats objects > in ?objects?: > > > > CPP := $(wildcard src/*.c) > > DEP := $(wildcard src/*.h) > > OBJ := $(addprefix objects/,$(notdir $(CPP:.c=.o))) > > > > include ${PETSC_DIR}/lib/petsc/conf/variables > > include ${PETSC_DIR}/lib/petsc/conf/rules > > > > driver: $(OBJ) > > -${CLINKER} -o $@ $^ ${PETSC_LIB} ${EXTRALIBS} > ${CFLAGS} > > > > objects/%.o: src/%.c $(DEP) > > ${PETSC_COMPILE} -c $< -o $@ > > > > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... 
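Putting the answers in this thread together: below is a minimal sketch of the same makefile with the compile rule spelled out from PETSc's public compiler and flag variables instead of the internal PETSC_COMPILE macro (this is the approach Barry recommends in the next message; a C-only project is assumed):

CPP := $(wildcard src/*.c)
DEP := $(wildcard src/*.h)
OBJ := $(addprefix objects/,$(notdir $(CPP:.c=.o)))

include ${PETSC_DIR}/lib/petsc/conf/variables
include ${PETSC_DIR}/lib/petsc/conf/rules

driver: $(OBJ)
	-${CLINKER} -o $@ $^ ${PETSC_LIB} ${EXTRALIBS} ${CFLAGS}

# compile one object per source file with PETSc's compiler and flags,
# rather than relying on internal macros like PETSC_COMPILE
objects/%.o: src/%.c $(DEP)
	${PCC} -c $< -o $@ ${PCC_FLAGS} ${PFLAGS} ${CCPPFLAGS}

The recipe lines must begin with a tab; ${PCC} and the flag variables come from lib/petsc/conf/variables, so this rule does not depend on PETSc build internals.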
URL: From bsmith at petsc.dev Tue Dec 15 21:41:26 2020 From: bsmith at petsc.dev (Barry Smith) Date: Tue, 15 Dec 2020 21:41:26 -0600 Subject: [petsc-users] Petsc makefile and PETSC_COMPILE variable In-Reply-To: <05CDFCFA-7026-4ADE-AACC-3D89B30A7F86@tae.com> References: <53487151-D8AD-49D5-8827-BFDAF8D8CFA8@tae.com> <1221f54-2e-5fa2-59f2-b9a7f63464a@mcs.anl.gov> <05CDFCFA-7026-4ADE-AACC-3D89B30A7F86@tae.com> Message-ID: Alex, Since you are explicitly defining your rules you might as well just do it completely, so use something like >> objects/%.o: src/%.c $(DEP) >> ${PCC} -c $< -o $@ ${PCC_FLAGS} ${PFLAGS} ${CCPPFLAGS} The P indicates either C and C flags or C++ compiler and its flags if configure was run with --with-clanguage=c++ (not recommended). So if your code is C you can use >> ${CC} -c $< -o $@ ${CC_FLAGS} ${CPP_FLAGS} if C++ use >> ${CXX} -c $< -o $@ ${CXXPP_FLAGS} ${CXX_FLAGS} > With this you don't need the COMPILE macros that are really internal for PETSc's use. Barry We have not been completely successful at getting share/petsc/Makefile.user to be bullet proof yet, but if you can get it to work in your case great. > On Dec 15, 2020, at 11:46 AM, Alex Koshkarov wrote: > > Thank you! > > It makes sense to use share/petsc/Makefile.user - I will try to understand it. However, can you please tell me what is the meaning of "_SINGLE" in "PETSC_COMPILE_SINGLE"? Does it mean compile only one source file? > > Best regards, > Alex Koshkarov. > > ?On 12/15/20, 9:25 AM, "Satish Balay" wrote: > > On Tue, 15 Dec 2020, Alex Koshkarov wrote: > >> Hi All, >> >> I have been using trivial makefile (see below) for the code which uses petsc. The makefile relies on variable PETSC_COMPILE which disappeared in new petsc version (absent in 3.14.2, but present in 3.13.4). What would be the right way to fix the makefile? (should I use something like PETSC_COMPILE_SINGLE ?). > > Yes - this change was a bugfix. > >> Is it a very bad practice to use such makefile? > > For most use cases the default targets work. However this usage [where sources and obj files are in different dirs] is not covered by them. > > So - I think using such targets is appropriate. > > There is also share/petsc/Makefile.user - which attempts to provide a portable way to create user makefiles [that don't rely on internals like PETSC_COMPILE_SINGLE] - but requires gnumake and pkgconfig > > Satish > > >> >> p.s. sorry if this is a duplicate message, I believe I sent the first one to the wrong address. >> >> Thank you very much, >> Alex Koshkarov. >> >> >> Example of makefile, it assumes sources in ?src? and creats objects in ?objects?: >> >> CPP := $(wildcard src/*.c) >> DEP := $(wildcard src/*.h) >> OBJ := $(addprefix objects/,$(notdir $(CPP:.c=.o))) >> >> include ${PETSC_DIR}/lib/petsc/conf/variables >> include ${PETSC_DIR}/lib/petsc/conf/rules >> >> driver: $(OBJ) >> -${CLINKER} -o $@ $^ ${PETSC_LIB} ${EXTRALIBS} ${CFLAGS} >> >> objects/%.o: src/%.c $(DEP) >> ${PETSC_COMPILE} -c $< -o $@ >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at petsc.dev Tue Dec 15 21:46:00 2020 From: bsmith at petsc.dev (Barry Smith) Date: Tue, 15 Dec 2020 21:46:00 -0600 Subject: [petsc-users] TS tutorial ex11 in Fortran In-Reply-To: References: <1EC72DF8-61BD-482B-B1B0-9427AB1ADF74@petsc.dev> <048DC071-33E5-4644-B1C8-DD598248923A@petsc.dev> <87tuspyaug.fsf@jedbrown.org> Message-ID: This is great. 
If you make a branch off of the PETSc git repository with these additions and work on ex11 you can make a merge request and we can run the code easily on all our test systems (for security reasons one of us needs to launch the tests from your MR). https://docs.petsc.org/en/latest/developers/integration/

Barry

> On Dec 15, 2020, at 5:35 AM, Thibault Bridel-Bertomeu wrote:
>
> [Thibault's message and the quoted exchange with Jed Brown, including Jed's Fortran callback example - both shown in full earlier in this thread - elided]

-------------- next part -------------- An HTML attachment was scrubbed...
URL: From jed at jedbrown.org Tue Dec 15 22:10:38 2020 From: jed at jedbrown.org (Jed Brown) Date: Tue, 15 Dec 2020 21:10:38 -0700 Subject: [petsc-users] valgrind with petscmpiexec In-Reply-To: References: Message-ID: <87v9d2e6gx.fsf@jedbrown.org> Barry Smith writes: > No idea. Perhaps petscmpiexec could be modified so it only ran valgrind on the first 10 ranks? Not clear how to do that. Or valgrind should get a MR that removes this small arbitrary limitation on the number of processes. 576 is so 2000 :-) I don't want it stuffed into petscmpiexec, but I routinely run Valgrind or gdb on a subset of ranks mpiexec -n 3 valgrind --track-origins=yes ./app -args : -n 5 ./app -args From thibault.bridelbertomeu at gmail.com Wed Dec 16 00:35:53 2020 From: thibault.bridelbertomeu at gmail.com (Thibault Bridel-Bertomeu) Date: Wed, 16 Dec 2020 07:35:53 +0100 Subject: [petsc-users] TS tutorial ex11 in Fortran In-Reply-To: References: <1EC72DF8-61BD-482B-B1B0-9427AB1ADF74@petsc.dev> <048DC071-33E5-4644-B1C8-DD598248923A@petsc.dev> <87tuspyaug.fsf@jedbrown.org> Message-ID: Hello everyone, Thank you Barry for the feedback. OK, yes I'll work up an MR as soon as I have got something working. By the way, does the fortran-version of the example have to be a single file ? If my push contains a directory with several files (different modules and the main), and the Makefile that goes with it, is that ok ? Thibault Bridel-Bertomeu Le mer. 16 d?c. 2020 ? 04:46, Barry Smith a ?crit : > > This is great. If you make a branch off of the PETSc git repository > with these additions and work on ex11 you can make a merge request and we > can run the code easily on all our test systems (for security reasons one > of use needs to launch the tests from your MR). > https://docs.petsc.org/en/latest/developers/integration/ > > Barry > > > On Dec 15, 2020, at 5:35 AM, Thibault Bridel-Bertomeu < > thibault.bridelbertomeu at gmail.com> wrote: > > Hello everyone, > > So far, I have the wrappers in the files attached to this e-mail. I still > do not know if they work properly - at least the code compiles and the > calls to the wrapped-subroutine do not fail - but I wanted to put this here > in case someone sees something really wrong with it already. > > Thank you again for your help, I'll try to post updates of the F90 version > of ex11 regularly in this thread. > > Stay safe, > > Thibault Bridel-Bertomeu > > Le dim. 13 d?c. 2020 ? 16:39, Jed Brown a ?crit : > >> Thibault Bridel-Bertomeu writes: >> >> > Thank you Mark for your answer. >> > >> > I am not sure what you think could be in the setBC1 routine ? How to >> make >> > the connection with the PetscDS ? >> > >> > On the other hand, I actually found after a while TSMonitorSet has a >> > fortran wrapper, and it does take as arguments two function pointers, >> so I >> > guess it is possible ? Although I am not sure exactly how to play with >> the >> > PetscObjectSetFortranCallback & PetscObjectUseFortranCallback macros - >> > could anybody advise please ? >> >> tsmonitorset_ is a good example to follow. In your file, create one of >> these static structs with a member for each callback. These are IDs that >> will be used as keys for Fortran callbacks and their contexts. The salient >> parts of the file are below. 
>> [Jed's example code, quoted in full at the top of this thread, elided]
-------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at petsc.dev Wed Dec 16 00:47:46 2020 From: bsmith at petsc.dev (Barry Smith) Date: Wed, 16 Dec 2020 00:47:46 -0600 Subject: [petsc-users] TS tutorial ex11 in Fortran In-Reply-To: References: <1EC72DF8-61BD-482B-B1B0-9427AB1ADF74@petsc.dev> <048DC071-33E5-4644-B1C8-DD598248923A@petsc.dev> <87tuspyaug.fsf@jedbrown.org> Message-ID: <1035FEDB-B297-4E71-8F0F-D44C72707105@petsc.dev>
Thibault,
A subdirectory for the example is fine; we have other examples that use subdirectories and multiple files.
Note: even if you don't have something completely working you can still make an MR and list it as a DRAFT request for comments; some other PETSc members, who understand the packages you are using and Fortran better than I do, may be able to help as you develop the code.
Barry
> On Dec 16, 2020, at 12:35 AM, Thibault Bridel-Bertomeu wrote:
>
> Hello everyone,
>
> Thank you Barry for the feedback.
> OK, yes I'll work up an MR as soon as I have got something working. By the way, does the Fortran version of the example have to be a single file? If my push contains a directory with several files (the different modules and the main program), and the Makefile that goes with it, is that OK?
>
> Thibault Bridel-Bertomeu
>
> On Wed, Dec 16, 2020 at 04:46, Barry Smith wrote:
>
> This is great. If you make a branch off of the PETSc git repository with these additions and work on ex11 you can make a merge request and we can run the code easily on all our test systems (for security reasons one of us needs to launch the tests from your MR).
https://docs.petsc.org/en/latest/developers/integration/
> > Barry
> > [the rest of the quoted thread, including Thibault's message and Jed's Fortran callback example, elided]
-------------- next part -------------- An HTML attachment was scrubbed...
URL: From eijkhout at tacc.utexas.edu Wed Dec 16 09:13:28 2020 From: eijkhout at tacc.utexas.edu (Victor Eijkhout) Date: Wed, 16 Dec 2020 15:13:28 +0000 Subject: [petsc-users] Petsc makefile and PETSC_COMPILE variable In-Reply-To: References: <53487151-D8AD-49D5-8827-BFDAF8D8CFA8@tae.com> <1221f54-2e-5fa2-59f2-b9a7f63464a@mcs.anl.gov> <05CDFCFA-7026-4ADE-AACC-3D89B30A7F86@tae.com> Message-ID: On , 2020Dec15, at 21:41, Barry Smith > wrote: So if your code is C you can use ${CC} -c $< -o $@ ${CC_FLAGS} ${CPP_FLAGS} For completeness, what would be the F rule? Victor. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jed at jedbrown.org Wed Dec 16 09:18:55 2020 From: jed at jedbrown.org (Jed Brown) Date: Wed, 16 Dec 2020 08:18:55 -0700 Subject: [petsc-users] Petsc makefile and PETSC_COMPILE variable In-Reply-To: References: <53487151-D8AD-49D5-8827-BFDAF8D8CFA8@tae.com> <1221f54-2e-5fa2-59f2-b9a7f63464a@mcs.anl.gov> <05CDFCFA-7026-4ADE-AACC-3D89B30A7F86@tae.com> Message-ID: <87r1npeq3k.fsf@jedbrown.org> Victor Eijkhout writes: > On , 2020Dec15, at 21:41, Barry Smith > wrote: > > So if your code is C you can use > > > ${CC} -c $< -o $@ ${CC_FLAGS} ${CPP_FLAGS} Makefile.user is intended to be used with the default rules or any similar convention. $ make -f /dev/null -p [snipped] COMPILE.c = $(CC) $(CFLAGS) $(CPPFLAGS) $(TARGET_ARCH) -c OUTPUT_OPTION = -o $@ %.o: %.c # recipe to execute (built-in): $(COMPILE.c) $(OUTPUT_OPTION) $< > For completeness, what would be the F rule? COMPILE.F = $(FC) $(FFLAGS) $(CPPFLAGS) $(TARGET_ARCH) -c %.o: %.F # recipe to execute (built-in): $(COMPILE.F) $(OUTPUT_OPTION) $< From rlmackie862 at gmail.com Wed Dec 16 09:58:27 2020 From: rlmackie862 at gmail.com (Randall Mackie) Date: Wed, 16 Dec 2020 07:58:27 -0800 Subject: [petsc-users] trouble compiling MPICH on cluster Message-ID: Dear PETSc team: I am trying to compile a debug-mpich version of PETSc on a new remote cluster for running valgrind. I?ve done this a thousand times on my laptop and the clusters I normally have access to, and it?s never been a problem. This time, it?s failing on trying to install mpich and according to the configure.log (attached) seems to be failing with the following message: src/binding/cxx/.libs/initcxx.o: In function `__static_initialization_and_destruction_0': /auto/soft1/multiphysics/PETSc/petsc-3.13.3/linux-gnu-mpich-complex-debug/externalpackages/mpich-3.3.2/src/binding/cxx/initcxx.cxx:46: undefined reference to `__dso_handle' /usr/bin/ld: src/binding/cxx/.libs/initcxx.o: relocation R_X86_64_PC32 against undefined hidden symbol `__dso_handle' can not be used when making a shared object /usr/bin/ld: final link failed: Bad value collect2: error: ld returned 1 exit status gmake[2]: *** [lib/libmpicxx.la] Error 1 gmake[2]: *** Waiting for unfinished jobs.... /usr/bin/ld: cannot find -l-L/usr/lib/gcc/x86_64-linux-gnu/4.7 collect2: error: ld returned 1 exit status gmake[2]: *** [lib/libmpifort.la] Error 1 gmake[1]: *** [all-recursive] Error 1 gmake: *** [all] Error 2 We were able to separately compile and install mpich (using the same tar ball) and then use that and compile PETSc, so we have a work-around, but I would prefer to compile them together as I?ve always done. Any ideas as to the issue? Thanks, Randy M. -------------- next part -------------- A non-text attachment was scrubbed... 
Name: configure.log Type: application/octet-stream Size: 2144883 bytes Desc: not available URL: From knepley at gmail.com Wed Dec 16 10:07:58 2020 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 16 Dec 2020 11:07:58 -0500 Subject: [petsc-users] trouble compiling MPICH on cluster In-Reply-To: References: Message-ID: On Wed, Dec 16, 2020 at 10:59 AM Randall Mackie wrote: > Dear PETSc team: > > I am trying to compile a debug-mpich version of PETSc on a new remote > cluster for running valgrind. > > I?ve done this a thousand times on my laptop and the clusters I normally > have access to, and it?s never been a problem. > > This time, it?s failing on trying to install mpich and according to the > configure.log (attached) seems to be failing with the following message: > > src/binding/cxx/.libs/initcxx.o: In function > `__static_initialization_and_destruction_0': > /auto/soft1/multiphysics/PETSc/petsc-3.13.3/linux-gnu-mpich-complex-debug/externalpackages/mpich-3.3.2/src/binding/cxx/initcxx.cxx:46: > undefined reference to `__dso_handle' > /usr/bin/ld: src/binding/cxx/.libs/initcxx.o: relocation R_X86_64_PC32 > against undefined hidden symbol `__dso_handle' can not be used when making > a shared object > /usr/bin/ld: final link failed: Bad value > collect2: error: ld returned 1 exit status > gmake[2]: *** [lib/libmpicxx.la] Error 1 > gmake[2]: *** Waiting for unfinished jobs.... > /usr/bin/ld: cannot find -l-L/usr/lib/gcc/x86_64-linux-gnu/4.7 > collect2: error: ld returned 1 exit status > gmake[2]: *** [lib/libmpifort.la] Error 1 > gmake[1]: *** [all-recursive] Error 1 > gmake: *** [all] Error 2 > > > We were able to separately compile and install mpich (using the same tar > ball) and then use that and compile PETSc, so we have a work-around, but I > would prefer to compile them together as I?ve always done. > > Any ideas as to the issue? > There are a lot of complaints about clock skew on this machine. Not sure if that could have messed up the build. There also seems to be a missing space for arguments -l-L/usr/lib/gcc/x86_64-linux-gnu/4.7 That __dso_handle comes out of the C++ standard library, so maybe also a problem with -libstdc++. I cannot figure it out from the log. Maybe Satish knows. Thanks, Matt > Thanks, > > Randy M. > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From balay at mcs.anl.gov Wed Dec 16 10:38:06 2020 From: balay at mcs.anl.gov (Satish Balay) Date: Wed, 16 Dec 2020 10:38:06 -0600 Subject: [petsc-users] trouble compiling MPICH on cluster In-Reply-To: References: Message-ID: > Configure Options: --configModules=PETSc.Configure --optionsModule=config.compilerOptions --with-clean=1 --with-scalar-type=complex --with-debugging=1 --with-fortran=1 --download-mpich=../external/mpich-3.3.2.tar.gz --with-cc=mpicc --with-fc=mpif90 --with-cxx=mpicc using --downlaod-mpich with a prior install of mpi [i.e --with-cc=mpicc --with-fc=mpif90 --with-cxx=mpicc] does not make sense.. 
This should be: --with-cc=gcc --with-fc=gfortran --with-cxx=mpicc Note: all petsc is doing is build mpich with: Configuring MPICH version 3.3.2 with '--prefix=/auto/soft1/multiphysics/PETSc/petsc-3.13.3/linux-gnu-mpich-complex-debug' 'MAKE=/usr/bin/gmake' '--libdir=/auto/soft1/multiphysics/PETSc/petsc-3.13.3/linux-gnu-mpich-complex-debug/lib' 'CC=mpicc' 'CFLAGS=-fPIC -fstack-protector -g3' 'AR=/usr/bin/ar' 'ARFLAGS=cr' 'CXX=mpicc' 'CXXFLAGS=-fstack-protector -g -fPIC -x c++ -std=gnu++11' 'FFLAGS=-fPIC -ffree-line-length-0 -g' 'FC=mpif90' 'F77=mpif90' 'FCFLAGS=-fPIC -ffree-line-length-0 -g' '--enable-shared' '--with-device=ch3:sock' '--with-pm=hydra' '--enable-fast=no' '--enable-error-messages=all' '--enable-g=meminit' So a manual build with equivalent options [with the above fix - i.e CC=gcc CXX=g++ FC=gfortran F77=gfortran] should also provide equivalent [valgrind clean] MPICH. Satish On Wed, 16 Dec 2020, Randall Mackie wrote: > Dear PETSc team: > > I am trying to compile a debug-mpich version of PETSc on a new remote cluster for running valgrind. > > I?ve done this a thousand times on my laptop and the clusters I normally have access to, and it?s never been a problem. > > This time, it?s failing on trying to install mpich and according to the configure.log (attached) seems to be failing with the following message: > > src/binding/cxx/.libs/initcxx.o: In function `__static_initialization_and_destruction_0': > /auto/soft1/multiphysics/PETSc/petsc-3.13.3/linux-gnu-mpich-complex-debug/externalpackages/mpich-3.3.2/src/binding/cxx/initcxx.cxx:46: undefined reference to `__dso_handle' > /usr/bin/ld: src/binding/cxx/.libs/initcxx.o: relocation R_X86_64_PC32 against undefined hidden symbol `__dso_handle' can not be used when making a shared object > /usr/bin/ld: final link failed: Bad value > collect2: error: ld returned 1 exit status > gmake[2]: *** [lib/libmpicxx.la] Error 1 > gmake[2]: *** Waiting for unfinished jobs.... > /usr/bin/ld: cannot find -l-L/usr/lib/gcc/x86_64-linux-gnu/4.7 > collect2: error: ld returned 1 exit status > gmake[2]: *** [lib/libmpifort.la] Error 1 > gmake[1]: *** [all-recursive] Error 1 > gmake: *** [all] Error 2 > > > We were able to separately compile and install mpich (using the same tar ball) and then use that and compile PETSc, so we have a work-around, but I would prefer to compile them together as I?ve always done. > > Any ideas as to the issue? > > Thanks, > > Randy M. > > From rlmackie862 at gmail.com Wed Dec 16 11:05:08 2020 From: rlmackie862 at gmail.com (Randall Mackie) Date: Wed, 16 Dec 2020 09:05:08 -0800 Subject: [petsc-users] trouble compiling MPICH on cluster In-Reply-To: References: Message-ID: Hi Satish, You are quite right and thank you for spotting that! I had copied another configuration command file and forgot to remove those lines with mpicc and mpif90. All that is necessary is --with-clean=1 \ --with-scalar-type=complex \ --with-debugging=1 \ --with-fortran=1 \ --download-mpich=../external/mpich-3.3.2.tar.gz Much appreciated, Randy > On Dec 16, 2020, at 8:38 AM, Satish Balay wrote: > >> Configure Options: --configModules=PETSc.Configure --optionsModule=config.compilerOptions --with-clean=1 --with-scalar-type=complex --with-debugging=1 --with-fortran=1 --download-mpich=../external/mpich-3.3.2.tar.gz --with-cc=mpicc --with-fc=mpif90 --with-cxx=mpicc > > using --downlaod-mpich with a prior install of mpi [i.e --with-cc=mpicc --with-fc=mpif90 --with-cxx=mpicc] does not make sense.. 
This should be: > > --with-cc=gcc --with-fc=gfortran --with-cxx=mpicc > > > Note: all petsc is doing is build mpich with: > > Configuring MPICH version 3.3.2 with '--prefix=/auto/soft1/multiphysics/PETSc/petsc-3.13.3/linux-gnu-mpich-complex-debug' 'MAKE=/usr/bin/gmake' '--libdir=/auto/soft1/multiphysics/PETSc/petsc-3.13.3/linux-gnu-mpich-complex-debug/lib' 'CC=mpicc' 'CFLAGS=-fPIC -fstack-protector -g3' 'AR=/usr/bin/ar' 'ARFLAGS=cr' 'CXX=mpicc' 'CXXFLAGS=-fstack-protector -g -fPIC -x c++ -std=gnu++11' 'FFLAGS=-fPIC -ffree-line-length-0 -g' 'FC=mpif90' 'F77=mpif90' 'FCFLAGS=-fPIC -ffree-line-length-0 -g' '--enable-shared' '--with-device=ch3:sock' '--with-pm=hydra' '--enable-fast=no' '--enable-error-messages=all' '--enable-g=meminit' > > > So a manual build with equivalent options [with the above fix - i.e CC=gcc CXX=g++ FC=gfortran F77=gfortran] should also provide equivalent [valgrind clean] MPICH. > > Satish > > > > On Wed, 16 Dec 2020, Randall Mackie wrote: > >> Dear PETSc team: >> >> I am trying to compile a debug-mpich version of PETSc on a new remote cluster for running valgrind. >> >> I?ve done this a thousand times on my laptop and the clusters I normally have access to, and it?s never been a problem. >> >> This time, it?s failing on trying to install mpich and according to the configure.log (attached) seems to be failing with the following message: >> >> src/binding/cxx/.libs/initcxx.o: In function `__static_initialization_and_destruction_0': >> /auto/soft1/multiphysics/PETSc/petsc-3.13.3/linux-gnu-mpich-complex-debug/externalpackages/mpich-3.3.2/src/binding/cxx/initcxx.cxx:46: undefined reference to `__dso_handle' >> /usr/bin/ld: src/binding/cxx/.libs/initcxx.o: relocation R_X86_64_PC32 against undefined hidden symbol `__dso_handle' can not be used when making a shared object >> /usr/bin/ld: final link failed: Bad value >> collect2: error: ld returned 1 exit status >> gmake[2]: *** [lib/libmpicxx.la] Error 1 >> gmake[2]: *** Waiting for unfinished jobs.... >> /usr/bin/ld: cannot find -l-L/usr/lib/gcc/x86_64-linux-gnu/4.7 >> collect2: error: ld returned 1 exit status >> gmake[2]: *** [lib/libmpifort.la] Error 1 >> gmake[1]: *** [all-recursive] Error 1 >> gmake: *** [all] Error 2 >> >> >> We were able to separately compile and install mpich (using the same tar ball) and then use that and compile PETSc, so we have a work-around, but I would prefer to compile them together as I?ve always done. >> >> Any ideas as to the issue? >> >> Thanks, >> >> Randy M. >> >> From okoshkarov at tae.com Wed Dec 16 11:09:22 2020 From: okoshkarov at tae.com (Alex Koshkarov) Date: Wed, 16 Dec 2020 17:09:22 +0000 Subject: [petsc-users] Petsc makefile and PETSC_COMPILE variable In-Reply-To: References: <53487151-D8AD-49D5-8827-BFDAF8D8CFA8@tae.com> <1221f54-2e-5fa2-59f2-b9a7f63464a@mcs.anl.gov> <05CDFCFA-7026-4ADE-AACC-3D89B30A7F86@tae.com> Message-ID: <4A772F29-C36E-4D4D-BDFD-66594F542C23@tae.com> Thanks Barry, It makes much more sense! And thanks for clarifying what ?P? indicates. I like this explicit approach over implicit because I am bad with makefile syntax. However, share/petsc/Makefile.user looks much cleaner, but I need to learn how pkg-config works to understand it. Thank you, Alex. 
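As a minimal sketch of the pkg-config pattern mentioned above (following the conventions of share/petsc/Makefile.user; it assumes gnumake and that the pc file is installed as $PETSC_DIR/$PETSC_ARCH/lib/pkgconfig/petsc.pc - the file name and location may differ between PETSc versions):

petsc.pc := $(PETSC_DIR)/$(PETSC_ARCH)/lib/pkgconfig/petsc.pc
CC     := $(shell pkg-config --variable=ccompiler $(petsc.pc))
CFLAGS := $(shell pkg-config --cflags $(petsc.pc))
LDLIBS := $(shell pkg-config --libs $(petsc.pc))

driver: driver.o
	$(CC) -o $@ $^ $(LDLIBS)

pkg-config accepts a path to a .pc file directly; --variable=ccompiler reads the ccompiler entry that petsc.pc exports (visible in the petsc.pc listings quoted below), while --cflags and --libs expand its Cflags: and Libs: lines. With CC and CFLAGS set, gnumake's built-in %.o: %.c rule compiles driver.c.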
From: Barry Smith Date: Tuesday, December 15, 2020 at 7:41 PM To: Alex Koshkarov Cc: petsc-users Subject: Re: [petsc-users] Petsc makefile and PETSC_COMPILE variable Alex, Since you are explicitly defining your rules you might as well just do it completely, so use something like objects/%.o: src/%.c $(DEP) ${PCC} -c $< -o $@ ${PCC_FLAGS} ${PFLAGS} ${CCPPFLAGS} The P indicates either C and C flags or C++ compiler and its flags if configure was run with --with-clanguage=c++ (not recommended). So if your code is C you can use ${CC} -c $< -o $@ ${CC_FLAGS} ${CPP_FLAGS} if C++ use ${CXX} -c $< -o $@ ${CXXPP_FLAGS} ${CXX_FLAGS} With this you don't need the COMPILE macros that are really internal for PETSc's use. Barry We have not been completely successful at getting share/petsc/Makefile.user to be bullet proof yet, but if you can get it to work in your case great. On Dec 15, 2020, at 11:46 AM, Alex Koshkarov > wrote: Thank you! It makes sense to use share/petsc/Makefile.user - I will try to understand it. However, can you please tell me what is the meaning of "_SINGLE" in "PETSC_COMPILE_SINGLE"? Does it mean compile only one source file? Best regards, Alex Koshkarov. On 12/15/20, 9:25 AM, "Satish Balay" > wrote: On Tue, 15 Dec 2020, Alex Koshkarov wrote: Hi All, I have been using trivial makefile (see below) for the code which uses petsc. The makefile relies on variable PETSC_COMPILE which disappeared in new petsc version (absent in 3.14.2, but present in 3.13.4). What would be the right way to fix the makefile? (should I use something like PETSC_COMPILE_SINGLE ?). Yes - this change was a bugfix. Is it a very bad practice to use such makefile? For most use cases the default targets work. However this usage [where sources and obj files are in different dirs] is not covered by them. So - I think using such targets is appropriate. There is also share/petsc/Makefile.user - which attempts to provide a portable way to create user makefiles [that don't rely on internals like PETSC_COMPILE_SINGLE] - but requires gnumake and pkgconfig Satish p.s. sorry if this is a duplicate message, I believe I sent the first one to the wrong address. Thank you very much, Alex Koshkarov. Example of makefile, it assumes sources in ?src? and creats objects in ?objects?: CPP := $(wildcard src/*.c) DEP := $(wildcard src/*.h) OBJ := $(addprefix objects/,$(notdir $(CPP:.c=.o))) include ${PETSC_DIR}/lib/petsc/conf/variables include ${PETSC_DIR}/lib/petsc/conf/rules driver: $(OBJ) -${CLINKER} -o $@ $^ ${PETSC_LIB} ${EXTRALIBS} ${CFLAGS} objects/%.o: src/%.c $(DEP) ${PETSC_COMPILE} -c $< -o $@ -------------- next part -------------- An HTML attachment was scrubbed... URL: From jed at jedbrown.org Wed Dec 16 11:14:26 2020 From: jed at jedbrown.org (Jed Brown) Date: Wed, 16 Dec 2020 10:14:26 -0700 Subject: [petsc-users] Petsc makefile and PETSC_COMPILE variable In-Reply-To: <4A772F29-C36E-4D4D-BDFD-66594F542C23@tae.com> References: <53487151-D8AD-49D5-8827-BFDAF8D8CFA8@tae.com> <1221f54-2e-5fa2-59f2-b9a7f63464a@mcs.anl.gov> <05CDFCFA-7026-4ADE-AACC-3D89B30A7F86@tae.com> <4A772F29-C36E-4D4D-BDFD-66594F542C23@tae.com> Message-ID: <87o8itekr1.fsf@jedbrown.org> Alex Koshkarov writes: > Thanks Barry, > > It makes much more sense! And thanks for clarifying what ?P? indicates. I like this explicit approach over implicit because I am bad with makefile syntax. However, share/petsc/Makefile.user looks much cleaner, but I need to learn how pkg-config works to understand it. Open a petsc.pc to see what is specified there. 
We define a number of extra variables that can help with checking for common compilers/wrappers and how to use RPATH if needed.

prefix=/home/jed/petsc/ompi-optg
exec_prefix=${prefix}
includedir=${prefix}/include
libdir=${prefix}/lib
ccompiler=mpicc
cflags_extra=-fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -O2 -march=native -ffp-contract=fast -g
cflags_dep=-MMD -MP
ldflag_rpath=-Wl,-rpath,
cxxcompiler=mpicxx
cxxflags_extra=-Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O -fPIC
fcompiler=mpif90
fflags_extra=-fPIC -Wall -ffree-line-length-0 -Wno-unused-dummy-argument -g -O

Name: PETSc
Description: Library to solve ODEs and algebraic equations
Version: 3.13.99
Cflags: -I${includedir} -I/home/jed/petsc/include
Libs: -L${libdir} -lpetsc
Libs.private: -L/home/jed/petsc/ompi-optg/lib -L/usr/lib/openmpi -L/usr/lib/gcc/x86_64-pc-linux-gnu/10.1.0 -lHYPRE -lumfpack -lklu -lcholmod -lbtf -lccolamd -lcolamd -lcamd -lamd -lsuitesparseconfig -lsuperlu -lsuperlu_dist -lml -llapack -lblis -ltriangle -lX11 -lexodus -lnetcdf -lpnetcdf -lcgns -lmedC -lmed -lhdf5hl_fortran -lhdf5_fortran -lhdf5_hl -lhdf5 -lparmetis -lmetis -lm -lz -lstdc++ -ldl -lmpi_usempif08 -lmpi_usempi_ignore_tkr -lmpi_mpifh -lmpi -lgfortran -lm -lgfortran -lm -lgcc_s -lquadmath -lpthread -lquadmath -lstdc++ -ldl

From okoshkarov at tae.com Wed Dec 16 16:48:12 2020
From: okoshkarov at tae.com (Alex Koshkarov)
Date: Wed, 16 Dec 2020 22:48:12 +0000
Subject: [petsc-users] Petsc makefile and PETSC_COMPILE variable
In-Reply-To: <87o8itekr1.fsf@jedbrown.org>
References: <53487151-D8AD-49D5-8827-BFDAF8D8CFA8@tae.com> <1221f54-2e-5fa2-59f2-b9a7f63464a@mcs.anl.gov> <05CDFCFA-7026-4ADE-AACC-3D89B30A7F86@tae.com> <4A772F29-C36E-4D4D-BDFD-66594F542C23@tae.com> <87o8itekr1.fsf@jedbrown.org>
Message-ID:

Thanks for this.

I now use a modified version of `share/petsc/Makefile.user`. However, now I am not sure if I do the linking correctly. All my code is pure C, so I use $(OBJ) - a variable with all object files - and

driver: $(OBJ)
	$(LINK.c) $^ $(LDFLAGS) $(LDLIBS) -o $@

I am not sure if I need to pass the variable $(LDFLAGS) or not, since the example in Makefile.user does not; it uses only $(LDLIBS). Also, I cannot find the definition of LINK.c in `make -p` to make sure.

Also, after reading Jed's and my petsc.pc, I noticed that I do not have optimization flags, although I compile PETSc with optimization flags. (I also add an additional -O3 to the compilation rule when compiling my code.) Namely, I add during PETSc configuration/compilation (I use gcc and OpenMPI):

--with-debugging=0 COPTFLAGS='-O3 -march=native -mtune=native' CXXOPTFLAGS='-O3 -march=native -mtune=native' FOPTFLAGS='-O3 -march=native -mtune=native'

So how do I check if my PETSc is compiled with optimizations?
And my petsc.pc is:

prefix=/home/kosh/Documents/NEPIC/libs/petsc-3.14.2/optim
exec_prefix=${prefix}
includedir=${prefix}/include
libdir=${prefix}/lib
ccompiler=mpicc
cflags_extra=-fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden
cflags_dep=-MMD -MP
ldflag_rpath=-Wl,-rpath,
cxxcompiler=mpicxx
cxxflags_extra=-Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -fPIC
fcompiler=mpif90
fflags_extra=-fPIC -Wall -ffree-line-length-0 -Wno-unused-dummy-argument

Name: PETSc
Description: Library to solve ODEs and algebraic equations
Version: 3.14.2
Cflags: -I${includedir} -I/home/kosh/Documents/NEPIC/libs/petsc-3.14.2/include
Libs: -L${libdir} -lpetsc
Libs.private: -L/home/kosh/Documents/NEPIC/libs/petsc-3.14.2/optim/lib -L/usr/lib/openmpi -L/usr/lib/gcc/x86_64-pc-linux-gnu/10.2.0 -lHYPRE -lscalapack -llapack -lblas -lX11 -lhdf5hl_fortran -lhdf5_fortran -lhdf5_hl -lhdf5 -lm -lstdc++ -ldl -lmpi_usempif08 -lmpi_usempi_ignore_tkr -lmpi_mpifh -lmpi -lgfortran -lm -lgfortran -lm -lgcc_s -lquadmath -lpthread -lquadmath -lstdc++ -ldl

Thank you and best regards,
Alex.

On 12/16/20, 9:15 AM, "Jed Brown" wrote:

Alex Koshkarov writes:

> Thanks Barry,
>
> It makes much more sense! And thanks for clarifying what 'P' indicates. I like this explicit approach over the implicit one because I am bad with makefile syntax. However, share/petsc/Makefile.user looks much cleaner, but I need to learn how pkg-config works to understand it.

Open a petsc.pc to see what is specified there. We define a number of extra variables that can help with checking for common compilers/wrappers and how to use RPATH if needed.

[Jed's petsc.pc elided; it is quoted in full above.]

From jed at jedbrown.org Wed Dec 16 16:57:38 2020
From: jed at jedbrown.org (Jed Brown)
Date: Wed, 16 Dec 2020 15:57:38 -0700
Subject: [petsc-users] Petsc makefile and PETSC_COMPILE variable
In-Reply-To:
References: <53487151-D8AD-49D5-8827-BFDAF8D8CFA8@tae.com> <1221f54-2e-5fa2-59f2-b9a7f63464a@mcs.anl.gov> <05CDFCFA-7026-4ADE-AACC-3D89B30A7F86@tae.com> <4A772F29-C36E-4D4D-BDFD-66594F542C23@tae.com> <87o8itekr1.fsf@jedbrown.org>
Message-ID: <87im91e4v1.fsf@jedbrown.org>

Alex Koshkarov writes:

> Thanks for this.
> I now use a modified version of `share/petsc/Makefile.user`. However, now I am not sure if I do the linking correctly. All my code is pure C, so I use
>
> $(OBJ) - a variable with all object files
>
> driver: $(OBJ)
> 	$(LINK.c) $^ $(LDFLAGS) $(LDLIBS) -o $@
>
> I am not sure if I need to pass the variable $(LDFLAGS) or not, since the example in Makefile.user does not; it uses only $(LDLIBS).

It's part of LINK.c:

$ make -p -f /dev/null | fgrep 'LINK.c ='
make: *** No targets.  Stop.
LINK.c = $(CC) $(CFLAGS) $(CPPFLAGS) $(LDFLAGS) $(TARGET_ARCH)

The variables are being extracted according to conventions.

$ make -f share/petsc/Makefile.user print
CC=mpicc
CXX=mpicxx
FC=mpif90
CFLAGS=-fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -O2 -march=native -ffp-contract=fast -g
CXXFLAGS=-Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O -fPIC
FFLAGS=-fPIC -Wall -ffree-line-length-0 -Wno-unused-dummy-argument -g -O
CPPFLAGS=-I/home/jed/petsc/ompi-optg/include -I/home/jed/petsc/include
LDFLAGS=-L/home/jed/petsc/ompi-optg/lib -Wl,-rpath,/home/jed/petsc/ompi-optg/lib
LDLIBS=-lpetsc -lm
CUDAC=
CUDAC_FLAGS=
CUDA_LIB=
CUDA_INCLUDE=

> Also, after reading Jed's and my petsc.pc, I noticed that I do not have optimization flags, although I compile PETSc with optimization flags. (I also add an additional -O3 to the compilation rule when compiling my code.) Namely, I add during PETSc configuration/compilation (I use gcc and OpenMPI):
>
> --with-debugging=0 COPTFLAGS='-O3 -march=native -mtune=native' CXXOPTFLAGS='-O3 -march=native -mtune=native' FOPTFLAGS='-O3 -march=native -mtune=native'
>
> So how do I check if my PETSc is compiled with optimizations?

petsc.pc only provides what is necessary to compile your code with PETSc. There are lots of circumstances in which you might want a debugging build of your application linked with a release PETSc (suppose all your development is unrelated to PETSc).

> And my petsc.pc is:
>
> [petsc.pc elided; it is quoted in full above.]
>
> Thank you and best regards,
> Alex.
>
> On 12/16/20, 9:15 AM, "Jed Brown" wrote:
>
> Alex Koshkarov writes:
>
> > Thanks Barry,
> >
> > It makes much more sense! And thanks for clarifying what 'P' indicates. I like this explicit approach over the implicit one because I am bad with makefile syntax.
> > However, share/petsc/Makefile.user looks much cleaner, but I need to learn how pkg-config works to understand it.
>
> Open a petsc.pc to see what is specified there. We define a number of extra variables that can help with checking for common compilers/wrappers and how to use RPATH if needed.
>
> [Jed's petsc.pc elided; it is quoted in full above.]

From roland.richter at ntnu.no Thu Dec 17 05:59:47 2020
From: roland.richter at ntnu.no (Roland Richter)
Date: Thu, 17 Dec 2020 12:59:47 +0100
Subject: [petsc-users] PetscArraycpy only copies half of the entries of the matrix rows
Message-ID: <8ebbe056-02f2-709e-4bee-8b4d850a321e@ntnu.no>

Dear all,

I wanted to use PetscArraycpy for copying a part of one complex matrix A with a row length of a_len and an offset of a_off into another matrix B with a row length of b_len (smaller than a_len - a_off), using the following code snippet:

    PetscScalar *A_ptr, *B_ptr;
    MatDenseGetArray(A, &A_ptr);
    MatDenseGetArray(B, &B_ptr);
    MatView(A, PETSC_VIEWER_STDOUT_WORLD);
    for(size_t i = 0; i < num_local_rows; ++i) {
        PetscArraycpy(B_ptr + i * b_len, (2 * a_off + A_ptr) + i * a_len, b_len);
    }
    MatAssemblyBegin(B, MAT_FINAL_ASSEMBLY);
    MatAssemblyEnd(B, MAT_FINAL_ASSEMBLY);
MatView(B, PETSC_VIEWER_STDOUT_WORLD);/ When printing the first row of matrix A (with a_len = 128, a_off = 76 and b_len = 26), I get /0.0000000000000000e+00 + 0.0000000000000000e+00i 0.0000000000000000e+00 + 0.0000000000000000e+00i 0.0000000000000000e+00 + 0.0000000000000000e+00i -0.0000000000000000e+00 + 0.0000000000000000e+00i 0.0000000000000000e+00 + 0.0000000000000000e+00i 0.0000000000000000e+00 + 0.0000000000000000e+00i -0.0000000000000000e+00 + 0.0000000000000000e+00i 0.0000000000000000e+00 + 0.0000000000000000e+00i 0.0000000000000000e+00 + 0.0000000000000000e+00i 0.0000000000000000e+00 + 0.0000000000000000e+00i 0.0000000000000000e+00 + 0.0000000000000000e+00i -0.0000000000000000e+00 + 0.0000000000000000e+00i 0.0000000000000000e+00 + 0.0000000000000000e+00i 0.0000000000000000e+00 + 0.0000000000000000e+00i 0.0000000000000000e+00 + 0.0000000000000000e+00i 0.0000000000000000e+00 + 0.0000000000000000e+00i 0.0000000000000000e+00 + 0.0000000000000000e+00i 0.0000000000000000e+00 + 0.0000000000000000e+00i -0.0000000000000000e+00 + 0.0000000000000000e+00i 0.0000000000000000e+00 + 0.0000000000000000e+00i 0.0000000000000000e+00 + 0.0000000000000000e+00i -0.0000000000000000e+00 + 0.0000000000000000e+00i 0.0000000000000000e+00 + 0.0000000000000000e+00i 0.0000000000000000e+00 + 0.0000000000000000e+00i 0.0000000000000000e+00 + 0.0000000000000000e+00i 0.0000000000000000e+00 + 0.0000000000000000e+00i -0.0000000000000000e+00 + 0.0000000000000000e+00i 0.0000000000000000e+00 + 0.0000000000000000e+00i 0.0000000000000000e+00 + 0.0000000000000000e+00i 0.0000000000000000e+00 + 0.0000000000000000e+00i 0.0000000000000000e+00 + 0.0000000000000000e+00i 0.0000000000000000e+00 + 0.0000000000000000e+00i -0.0000000000000000e+00 + 0.0000000000000000e+00i 0.0000000000000000e+00 + 0.0000000000000000e+00i 0.0000000000000000e+00 + 0.0000000000000000e+00i -0.0000000000000000e+00 + 0.0000000000000000e+00i 0.0000000000000000e+00 + 0.0000000000000000e+00i 0.0000000000000000e+00 + 0.0000000000000000e+00i 0.0000000000000000e+00 + 0.0000000000000000e+00i 0.0000000000000000e+00 + 0.0000000000000000e+00i 0.0000000000000000e+00 + 0.0000000000000000e+00i -0.0000000000000000e+00 + 0.0000000000000000e+00i 0.0000000000000000e+00 + 0.0000000000000000e+00i 0.0000000000000000e+00 + 0.0000000000000000e+00i 0.0000000000000000e+00 + 0.0000000000000000e+00i 0.0000000000000000e+00 + 0.0000000000000000e+00i -0.0000000000000000e+00 + 0.0000000000000000e+00i 0.0000000000000000e+00 + 0.0000000000000000e+00i 0.0000000000000000e+00 + 0.0000000000000000e+00i -0.0000000000000000e+00 + 0.0000000000000000e+00i 0.0000000000000000e+00 + 0.0000000000000000e+00i 0.0000000000000000e+00 + 0.0000000000000000e+00i -0.0000000000000000e+00 + 0.0000000000000000e+00i 0.0000000000000000e+00 + 0.0000000000000000e+00i 0.0000000000000000e+00 + 0.0000000000000000e+00i -0.0000000000000000e+00 + 0.0000000000000000e+00i 0.0000000000000000e+00 + 0.0000000000000000e+00i 0.0000000000000000e+00 + 0.0000000000000000e+00i 0.0000000000000000e+00 + 0.0000000000000000e+00i 0.0000000000000000e+00 + 0.0000000000000000e+00i 0.0000000000000000e+00 + 0.0000000000000000e+00i 0.0000000000000000e+00 + 0.0000000000000000e+00i 0.0000000000000000e+00 + 0.0000000000000000e+00i -0.0000000000000000e+00 + 0.0000000000000000e+00i 0.0000000000000000e+00 + 0.0000000000000000e+00i 0.0000000000000000e+00 + 0.0000000000000000e+00i -0.0000000000000000e+00 + 0.0000000000000000e+00i 0.0000000000000000e+00 + 0.0000000000000000e+00i 0.0000000000000000e+00 + 0.0000000000000000e+00i 0.0000000000000000e+00 + 
0.0000000000000000e+00i 0.0000000000000000e+00 + 0.0000000000000000e+00i -0.0000000000000000e+00 + 0.0000000000000000e+00i 0.0000000000000000e+00 + 0.0000000000000000e+00i 0.0000000000000000e+00 + 0.0000000000000000e+00i 0.0000000000000000e+00 + 0.0000000000000000e+00i 0.0000000000000000e+00 + 0.0000000000000000e+00i -0.0000000000000000e+00 + 0.0000000000000000e+00i -7.5118186821231378e-02 + -1.2515848507502547e-01i 5.7593629917958706e+00 + 1.6535197175842331e+00i -6.3062119866941906e+01 + 3.2118985283369987e+01i 1.6228535942636518e+02 + -4.4588492144691378e+02i 8.7350162264986420e+02 + 2.0568440963147814e+03i -7.4258479521622921e+03 + -3.3031631388498363e+03i 2.2699374989663269e+04 + -7.8289291098031481e+03i -2.7846379282467926e+04 + 5.2456793075148809e+04i -3.2554674832896777e+04 + -1.2108819252524960e+05i 1.9430868047197413e+05 + 1.2114559011378702e+05i -3.5831799834334152e+05 + 7.0086227392363056e+04i 3.0028983479603863e+05 + -4.1447894788669585e+05i 7.8224949502036819e+04 + 6.2926756374162808e+05i -5.3474873053744854e+05 + -4.4718914789259754e+05i 6.8111038372267899e+05 + -3.6947593166740131e+04i -4.1287212326113920e+05 + 4.2925417635846150e+05i 7.1098224367113344e+03 + -4.6490743916366581e+05i 2.1807010096419850e+05 + 2.4106178223450572e+05i -2.0304162108015743e+05 + -1.7254769976182859e+04i 9.0164628688356752e+04 + -7.0830186001321214e+04i -8.9050769071193699e+03 + 5.7267241933255813e+04i -1.4789632694550470e+04 + -2.1786332309775924e+04i 1.0491004489879153e+04 + 2.3873712516742830e+03i -3.4183915782335853e+03 + 1.9886861075931499e+03i 3.7807432692260045e+02 + -1.2521184263406540e+03i 0.0000000000000000e+00 + 0.0000000000000000e+00i 0.0000000000000000e+00 + 0.0000000000000000e+00i 0.0000000000000000e+00 + 0.0000000000000000e+00i -0.0000000000000000e+00 + 0.0000000000000000e+00i 0.0000000000000000e+00 + 0.0000000000000000e+00i 0.0000000000000000e+00 + 0.0000000000000000e+00i -0.0000000000000000e+00 + 0.0000000000000000e+00i 0.0000000000000000e+00 + 0.0000000000000000e+00i 0.0000000000000000e+00 + 0.0000000000000000e+00i 0.0000000000000000e+00 + 0.0000000000000000e+00i 0.0000000000000000e+00 + 0.0000000000000000e+00i -0.0000000000000000e+00 + 0.0000000000000000e+00i 0.0000000000000000e+00 + 0.0000000000000000e+00i 0.0000000000000000e+00 + 0.0000000000000000e+00i -0.0000000000000000e+00 + 0.0000000000000000e+00i 0.0000000000000000e+00 + 0.0000000000000000e+00i 0.0000000000000000e+00 + 0.0000000000000000e+00i -0.0000000000000000e+00 + 0.0000000000000000e+00i 0.0000000000000000e+00 + 0.0000000000000000e+00i 0.0000000000000000e+00 + 0.0000000000000000e+00i 0.0000000000000000e+00 + 0.0000000000000000e+00i 0.0000000000000000e+00 + 0.0000000000000000e+00i 0.0000000000000000e+00 + 0.0000000000000000e+00i 0.0000000000000000e+00 + 0.0000000000000000e+00i 0.0000000000000000e+00 + 0.0000000000000000e+00i 0.0000000000000000e+00 + 0.0000000000000000e+00i/ and for the first row of B I get /-0.0000000000000000e+00 + 0.0000000000000000e+00i -7.5118186821231378e-02 + -1.2515848507502547e-01i 5.7593629917958706e+00 + 1.6535197175842331e+00i -6.3062119866941906e+01 + 3.2118985283369987e+01i 1.6228535942636518e+02 + -4.4588492144691378e+02i 8.7350162264986420e+02 + 2.0568440963147814e+03i -7.4258479521622921e+03 + -3.3031631388498363e+03i 2.2699374989663269e+04 + -7.8289291098031481e+03i -2.7846379282467926e+04 + 5.2456793075148809e+04i -3.2554674832896777e+04 + -1.2108819252524960e+05i 1.9430868047197413e+05 + 1.2114559011378702e+05i -3.5831799834334152e+05 + 7.0086227392363056e+04i 
3.0028983479603863e+05 + -4.1447894788669585e+05i 0.0000000000000000e+00 + 0.0000000000000000e+00i 0.0000000000000000e+00 + 0.0000000000000000e+00i 0.0000000000000000e+00 + 0.0000000000000000e+00i 0.0000000000000000e+00 + 0.0000000000000000e+00i 0.0000000000000000e+00 + 0.0000000000000000e+00i 0.0000000000000000e+00 + 0.0000000000000000e+00i 6.9004764357806349e-310 + 6.9004764357806349e-310i 6.9004764365446580e-310 + 6.9004764300000669e-310i 0.0000000000000000e+00 + 0.0000000000000000e+00i 0.0000000000000000e+00 + 0.0000000000000000e+00i 0.0000000000000000e+00 + 0.0000000000000000e+00i 0.0000000000000000e+00 + 0.0000000000000000e+00i 0.0000000000000000e+00 + 0.0000000000000000e+00i

Apparently, only 13 complex values have been copied, and not 26. Moreover, if my source offset is chosen to be (a_off + A_ptr) instead, I will just copy 0-values. When increasing the number of values I would like to copy, nothing changes (except getting a segfault for sufficiently large values).

Why does that happen? And how can I copy all values into the second matrix, and not only half of them?

Another question: Is there a parallel version of that function, to copy all local rows in parallel, or do I have to write it myself, for example by using OpenMP?

Thanks!

Roland

From junchao.zhang at gmail.com Thu Dec 17 10:05:37 2020
From: junchao.zhang at gmail.com (Junchao Zhang)
Date: Thu, 17 Dec 2020 10:05:37 -0600
Subject: [petsc-users] PetscArraycpy only copies half of the entries of the matrix rows
In-Reply-To: <8ebbe056-02f2-709e-4bee-8b4d850a321e@ntnu.no>
References: <8ebbe056-02f2-709e-4bee-8b4d850a321e@ntnu.no>
Message-ID:

MatDense is stored by column. Is that causing the problem?

--Junchao Zhang

On Thu, Dec 17, 2020 at 6:00 AM Roland Richter wrote:

> Dear all,
>
> I wanted to use PetscArraycpy for copying a part of one complex matrix A with a row length of a_len and an offset of a_off into another matrix B with a row length of b_len (smaller than a_len - a_off).
>
> [code snippet and matrix output elided; they are quoted in full in the original message above.]
>
> Apparently, only 13 complex values have been copied, and not 26. [...] Why does that happen? And how can I copy all values into the second matrix, and not only half of them?
>
> Another question: Is there a parallel version of that function, to copy all local rows in parallel, or do I have to write it myself, for example by using OpenMP?
>
> Thanks!
>
> Roland
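To make the column-major point concrete: for a MATDENSE matrix, entry (i,j) of the local array sits at ptr[i + j*lda], with the leading dimension lda obtained from MatDenseGetLDA(). A hedged, untested sketch of a copy that respects that layout - here copying b_len whole columns starting at column a_off, reusing the variable names from the original post - could look as follows; a row slice would instead need a strided inner loop.

    PetscScalar    *A_ptr, *B_ptr;
    PetscInt       lda, ldb;
    PetscErrorCode ierr;

    ierr = MatDenseGetArray(A, &A_ptr);CHKERRQ(ierr);
    ierr = MatDenseGetArray(B, &B_ptr);CHKERRQ(ierr);
    ierr = MatDenseGetLDA(A, &lda);CHKERRQ(ierr);
    ierr = MatDenseGetLDA(B, &ldb);CHKERRQ(ierr);
    for (PetscInt j = 0; j < b_len; ++j) {
      /* column j of B <- column a_off+j of A; columns are contiguous in memory */
      ierr = PetscArraycpy(B_ptr + j*ldb, A_ptr + (a_off + j)*lda, num_local_rows);CHKERRQ(ierr);
    }
    ierr = MatDenseRestoreArray(A, &A_ptr);CHKERRQ(ierr);
    ierr = MatDenseRestoreArray(B, &B_ptr);CHKERRQ(ierr);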
From knepley at gmail.com Thu Dec 17 10:06:49 2020
From: knepley at gmail.com (Matthew Knepley)
Date: Thu, 17 Dec 2020 11:06:49 -0500
Subject: [petsc-users] PetscArraycpy only copies half of the entries of the matrix rows
In-Reply-To: <8ebbe056-02f2-709e-4bee-8b4d850a321e@ntnu.no>
References: <8ebbe056-02f2-709e-4bee-8b4d850a321e@ntnu.no>
Message-ID:

On Thu, Dec 17, 2020 at 7:00 AM Roland Richter wrote:

> Dear all,
>
> I wanted to use PetscArraycpy for copying a part of one complex matrix A with a row length of a_len and an offset of a_off into another matrix B with a row length of b_len (smaller than a_len - a_off).
>
> [code snippet and matrix output elided; they are quoted in full in the original message above.]
>
> Apparently, only 13 complex values have been copied, and not 26. Moreover, if my source offset is chosen to be (a_off + A_ptr) instead, I will just copy 0-values. When increasing the number of values I would like to copy, nothing changes (except getting a segfault for sufficiently large values).
>
> Why does that happen? And how can I copy all values into the second matrix, and not only half of them?

Can you make a minimal example? It looks like it should work. If I can run the code, I can make it work.

> Another question: Is there a parallel version of that function, to copy all local rows in parallel, or do I have to write it myself, for example by using OpenMP?

The copy should be vectorized by the compiler. If you have idle cores waiting for something, you could possibly use OpenMP. However, as Jed points out, the time to fork and join is likely to exceed your speedup.

Thanks,

Matt

> Thanks!
>
> Roland

--
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/
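One further note on the original snippet: PetscArraycpy(dst, src, n) copies n elements of the pointed-to type - here PetscScalar - not n bytes and not n real numbers, so with a complex-scalar build an offset written as 2 * a_off advances twice as many complex entries as a_off would. A hedged illustration, assuming a --with-scalar-type=complex build (required for PETSC_i) and placed inside a function; untested:

    PetscErrorCode ierr;
    PetscScalar    src[4], dst[2];

    src[0] = 1.0 + 2.0*PETSC_i; src[1] = 3.0; src[2] = 4.0; src[3] = 5.0;
    /* copies src[1] and src[2] as two whole complex scalars, not four reals */
    ierr = PetscArraycpy(dst, src + 1, 2);CHKERRQ(ierr);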
From davydden at gmail.com Thu Dec 17 11:16:26 2020
From: davydden at gmail.com (Denis Davydov)
Date: Thu, 17 Dec 2020 18:16:26 +0100
Subject: [petsc-users] Output cell data related to DMDA
Message-ID: <577F1741-2E29-482E-8686-E191B6512B19@gmail.com>

Dear all,

I would like to output cell data (e.g. a conductivity coefficient) in VTK for a DMDA setup.

Given that I know how many elements/cells are owned locally, I hoped that PetscViewerVTKAddField with PETSC_VTK_CELL_FIELD would do the job. However, I am not sure whether the provided vector should be fully distributed (no ghosts)? If not, can I get the required ghosts from a DMDA created with DMDACreate3d?

Ps. I saw just one relevant discussion on the mailing list.

Sincerely,
Denis

From knepley at gmail.com Thu Dec 17 11:58:44 2020
From: knepley at gmail.com (Matthew Knepley)
Date: Thu, 17 Dec 2020 12:58:44 -0500
Subject: [petsc-users] Output cell data related to DMDA
In-Reply-To: <577F1741-2E29-482E-8686-E191B6512B19@gmail.com>
References: <577F1741-2E29-482E-8686-E191B6512B19@gmail.com>
Message-ID:

On Thu, Dec 17, 2020 at 12:18 PM Denis Davydov wrote:

> Given that I know how many elements/cells are owned locally, I hoped that PetscViewerVTKAddField with PETSC_VTK_CELL_FIELD would do the job. However, I am not sure whether the provided vector should be fully distributed (no ghosts)?

I believe that it outputs global vectors, meaning that there are no ghosts.

Thanks,

Matt

--
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/

From salazardetro1 at llnl.gov Thu Dec 17 15:38:27 2020
From: salazardetro1 at llnl.gov (Salazar De Troya, Miguel)
Date: Thu, 17 Dec 2020 21:38:27 +0000
Subject: [petsc-users] Support for full jacobianP in TSSetIJacobianP
Message-ID: <449B7337-595D-439B-8EDB-C719CD2D91BD@llnl.gov>

Hello,

I am working on hooking up TSAdjoint with pyadjoint through the firedrake-ts interface (https://github.com/IvanYashchuk/firedrake-ts). I have done most of the implementation and now I am just testing for corner cases. One of them is when the design variable multiplies the first-derivative term, i.e. the case F(Udot,U,P,t) = G(U,P,t) in https://www.mcs.anl.gov/petsc/petsc-master/docs/manualpages/Sensitivity/TSSetIJacobianP.html . I imagine that one could think of refactoring the 'P' on the left-hand side over to the right-hand side, but this is not trivial when 'P' is a discontinuous field over the domain. I think it would be better to include the case of F(Udot,U,P,t) = G(U,P,t) in TSSetIJacobianP, and I am volunteering to do it. Given the current implementation of TSAdjoint, is this something feasible?

Thanks

Miguel

Miguel A.
Salazar de Troya
Postdoctoral Researcher, Lawrence Livermore National Laboratory
B141 Rm: 1085-5
Ph: 1(925) 422-6411

From hongzhang at anl.gov Thu Dec 17 21:25:24 2020
From: hongzhang at anl.gov (Zhang, Hong)
Date: Fri, 18 Dec 2020 03:25:24 +0000
Subject: [petsc-users] Support for full jacobianP in TSSetIJacobianP
In-Reply-To: <449B7337-595D-439B-8EDB-C719CD2D91BD@llnl.gov>
References: <449B7337-595D-439B-8EDB-C719CD2D91BD@llnl.gov>
Message-ID: <4BA69262-B9E6-4D18-8705-55DEF978F965@anl.gov>

Hi Miguel,

Thank you for the nice work. I do not understand what you propose to do here. What is the obstacle to using the current TSSetIJacobianP() for the corner case you mentioned? Are you considering a case in which the mass matrix is parameterized, e.g. M(p) udot - f(t,u) = g(t,u)?

Thanks,
Hong

On Dec 17, 2020, at 3:38 PM, Salazar De Troya, Miguel via petsc-users wrote:

> [Miguel's message elided; it is quoted in full above.]

From davydden at gmail.com Fri Dec 18 02:29:05 2020
From: davydden at gmail.com (Denis Davydov)
Date: Fri, 18 Dec 2020 09:29:05 +0100
Subject: [petsc-users] Output cell data related to DMDA
In-Reply-To:
References:
Message-ID: <83010917-03B3-4463-8F9C-76BB528CA555@gmail.com>

Hi Matt,

By global vector you mean one created with

VecCreateMPI(..., nel, PETSC_DETERMINE, ...)

? If so, that gives a segfault (even with 1 MPI process) in the user write function, which is just

VecView((Vec)obj, viewer);

which clearly indicates that I misunderstand your comment.

Would you please clarify what PETSc expects as a 'global' vector in the case of cell-based quantities, as opposed to unknowns/fields associated with the DMDA discretization?

Sincerely,
Denis

> On 17.12.2020, at 18:58, Matthew Knepley wrote:
>
> [Matt's reply elided; it is quoted in full above.]
From thibault.bridelbertomeu at gmail.com Fri Dec 18 03:00:13 2020
From: thibault.bridelbertomeu at gmail.com (Thibault Bridel-Bertomeu)
Date: Fri, 18 Dec 2020 10:00:13 +0100
Subject: [petsc-users] TS tutorial ex11 in Fortran
In-Reply-To: <1035FEDB-B297-4E71-8F0F-D44C72707105@petsc.dev>
References: <1EC72DF8-61BD-482B-B1B0-9427AB1ADF74@petsc.dev> <048DC071-33E5-4644-B1C8-DD598248923A@petsc.dev> <87tuspyaug.fsf@jedbrown.org> <1035FEDB-B297-4E71-8F0F-D44C72707105@petsc.dev>
Message-ID:

Hello Barry,

I'll start the MR as soon as possible then, so that specialists can indeed have a look. Do I have to fork PETSc to start an MR, or are the PETSc repo settings such that I can push a branch from the PETSc clone I got?

Thibault

On Wed, Dec 16, 2020 at 07:47, Barry Smith wrote:

> Thibault,
>
> A subdirectory for the example is fine; we have other examples that use subdirectories and multiple files.
>
> Note: even if you don't have something completely working you can still make an MR and list it as a DRAFT request for comments; some other PETSc members who understand the packages you are using and Fortran better than I do may be able to help as you develop the code.
>
> Barry
>
> On Dec 16, 2020, at 12:35 AM, Thibault Bridel-Bertomeu wrote:
>
> Hello everyone,
>
> Thank you Barry for the feedback. OK, yes I'll work up an MR as soon as I have got something working. By the way, does the Fortran version of the example have to be a single file? If my push contains a directory with several files (different modules and the main), and the Makefile that goes with it, is that OK?
>
> Thibault Bridel-Bertomeu
>
> On Wed, Dec 16, 2020 at 04:46, Barry Smith wrote:
>
>> This is great. If you make a branch off of the PETSc git repository with these additions and work on ex11 you can make a merge request and we can run the code easily on all our test systems (for security reasons one of us needs to launch the tests from your MR). https://docs.petsc.org/en/latest/developers/integration/
>>
>> Barry
>>
>>> On Dec 15, 2020, at 5:35 AM, Thibault Bridel-Bertomeu wrote:
>>>
>>> Hello everyone,
>>>
>>> So far, I have the wrappers in the files attached to this e-mail. I still do not know if they work properly - at least the code compiles and the calls to the wrapped subroutines do not fail - but I wanted to put this here in case someone sees something really wrong with it already.
>>>
>>> Thank you again for your help, I'll try to post updates of the F90 version of ex11 regularly in this thread.
>>>
>>> Stay safe,
>>>
>>> Thibault Bridel-Bertomeu
>>>
>>> On Sun, Dec 13, 2020 at 16:39, Jed Brown wrote:
>>> Thibault Bridel-Bertomeu writes:
>>>
>>> > Thank you Mark for your answer.
>>> >
>>> > I am not sure what you think could be in the setBC1 routine ? How to make the connection with the PetscDS ?
>>> >
>>> > On the other hand, I actually found after a while TSMonitorSet has a fortran wrapper, and it does take as arguments two function pointers, so I guess it is possible ?
>>> > Although I am not sure exactly how to play with the PetscObjectSetFortranCallback & PetscObjectUseFortranCallback macros - could anybody advise please ?
>>>
>>> tsmonitorset_ is a good example to follow. In your file, create one of these static structs with a member for each callback. These are IDs that will be used as keys for Fortran callbacks and their contexts. The salient parts of the file are below.
>>>
>>> static struct {
>>>   PetscFortranCallbackId prestep;
>>>   PetscFortranCallbackId poststep;
>>>   PetscFortranCallbackId rhsfunction;
>>>   PetscFortranCallbackId rhsjacobian;
>>>   PetscFortranCallbackId ifunction;
>>>   PetscFortranCallbackId ijacobian;
>>>   PetscFortranCallbackId monitor;
>>>   PetscFortranCallbackId mondestroy;
>>>   PetscFortranCallbackId transform;
>>> #if defined(PETSC_HAVE_F90_2PTR_ARG)
>>>   PetscFortranCallbackId function_pgiptr;
>>> #endif
>>> } _cb;
>>>
>>> /*
>>>   Note ctx is the same as ts so we need to get the Fortran context out of the TS; this gets put in _ctx using the callback ID
>>> */
>>> static PetscErrorCode ourmonitor(TS ts,PetscInt i,PetscReal d,Vec v,void *ctx)
>>> {
>>>   PetscObjectUseFortranCallback(ts,_cb.monitor,(TS*,PetscInt*,PetscReal*,Vec *,void*,PetscErrorCode*),(&ts,&i,&d,&v,_ctx,&ierr));
>>> }
>>>
>>> Then follow as in tsmonitorset_, which sets two callbacks.
>>>
>>> PETSC_EXTERN void tsmonitorset_(TS *ts,void (*func)(TS*,PetscInt*,PetscReal*,Vec*,void*,PetscErrorCode*),void *mctx,void (*d)(void*,PetscErrorCode*),PetscErrorCode *ierr)
>>> {
>>>   CHKFORTRANNULLFUNCTION(d);
>>>   if ((PetscVoidFunction)func == (PetscVoidFunction) tsmonitordefault_) {
>>>     *ierr = TSMonitorSet(*ts,(PetscErrorCode (*)(TS,PetscInt,PetscReal,Vec,void*))TSMonitorDefault,*(PetscViewerAndFormat**)mctx,(PetscErrorCode (*)(void **))PetscViewerAndFormatDestroy);
>>>   } else {
>>>     *ierr = PetscObjectSetFortranCallback((PetscObject)*ts,PETSC_FORTRAN_CALLBACK_CLASS,&_cb.monitor,(PetscVoidFunction)func,mctx);
>>>     *ierr = PetscObjectSetFortranCallback((PetscObject)*ts,PETSC_FORTRAN_CALLBACK_CLASS,&_cb.mondestroy,(PetscVoidFunction)d,mctx);
>>>     *ierr = TSMonitorSet(*ts,ourmonitor,*ts,ourmonitordestroy);
>>>   }
>>> }

From bsmith at petsc.dev Fri Dec 18 03:03:39 2020
From: bsmith at petsc.dev (Barry Smith)
Date: Fri, 18 Dec 2020 03:03:39 -0600
Subject: [petsc-users] Output cell data related to DMDA
In-Reply-To: <83010917-03B3-4463-8F9C-76BB528CA555@gmail.com>
References: <83010917-03B3-4463-8F9C-76BB528CA555@gmail.com>
Message-ID: <961E46DE-12F7-458C-839B-504E61CF73FC@petsc.dev>

> On Dec 18, 2020, at 2:29 AM, Denis Davydov wrote:
>
> Hi Matt,
>
> By global vector you mean one created with VecCreateMPI(..., nel, PETSC_DETERMINE, ...)? If so, that gives a segfault (even with 1 MPI process) in the user write function, which is just VecView((Vec)obj, viewer); which clearly indicates that I misunderstand your comment.
>
> Would you please clarify what PETSc expects as a 'global' vector in the case of cell-based quantities, as opposed to unknowns/fields associated with the DMDA discretization?

Denis,

Not sure what you mean by cell-based, but if your vector is associated with a DMDA you need to create it with DMCreateGlobalVector() to get the proper layout with respect to the DM. If you use VecCreateMPI() it just has a naive 1d layout not associated with the DMDA in any way, so it won't be compatible.
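A hedged sketch of that pattern, with illustrative names - assuming "da" came from DMDACreate3d with one degree of freedom per point and "viewer" is a VTK viewer the field was registered with; untested:

    Vec            cond;
    PetscErrorCode ierr;

    ierr = DMCreateGlobalVector(da, &cond);CHKERRQ(ierr);
    ierr = PetscObjectSetName((PetscObject)cond, "conductivity");CHKERRQ(ierr);
    /* fill the entries, e.g. via DMDAVecGetArray()/DMDAVecRestoreArray() */
    ierr = VecView(cond, viewer);CHKERRQ(ierr);
    ierr = VecDestroy(&cond);CHKERRQ(ierr);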
(Of course we would hope the code would not "crash" with an incompatible vector but just produce a useful error message.)

Barry

> [Denis's message and Matt's reply elided; they are quoted in full above.]

From bsmith at petsc.dev Fri Dec 18 03:16:23 2020
From: bsmith at petsc.dev (Barry Smith)
Date: Fri, 18 Dec 2020 03:16:23 -0600
Subject: [petsc-users] TS tutorial ex11 in Fortran
In-Reply-To:
References: <1EC72DF8-61BD-482B-B1B0-9427AB1ADF74@petsc.dev> <048DC071-33E5-4644-B1C8-DD598248923A@petsc.dev> <87tuspyaug.fsf@jedbrown.org> <1035FEDB-B297-4E71-8F0F-D44C72707105@petsc.dev>
Message-ID: <39E9FBED-6D6B-41E4-983C-58465ACAB4D0@petsc.dev>

Good question. There is a trick to limit the amount of work you need to do with a new fork after you have already made changes with a PETSc clone, but it looks like we do not document this clearly in the webpages. (I couldn't find it.)

Yes, you do need to make a fork, but after you have made the fork on the GitLab website (and have done nothing on your machine), edit the file $PETSC_DIR/.git/config for the clone on your machine.

Locate the line that has url = git at gitlab.com:petsc/petsc.git (this may have an https at the beginning of the line).

Change this line to point to the fork URL instead, with git@ not https; it will be pretty much the same URL but with your user id instead of petsc in the address. Then git push and it will push to your fork.

Now your changes will be in your fork and you can make the MR from your fork URL on GitLab. (In other words, this editing trick converts the PETSc clone on your machine into a PETSc fork.)

I hope I have explained this clearly enough that it goes smoothly.

Barry

> On Dec 18, 2020, at 3:00 AM, Thibault Bridel-Bertomeu wrote:
>
> [Thibault's message and the earlier thread elided; they are quoted in full above.]
> > Barry > > > > >> On Dec 16, 2020, at 12:35 AM, Thibault Bridel-Bertomeu > wrote: >> >> Hello everyone, >> >> Thank you Barry for the feedback. >> OK, yes I'll work up an MR as soon as I have got something working. By the way, does the fortran-version of the example have to be a single file ? If my push contains a directory with several files (different modules and the main), and the Makefile that goes with it, is that ok ? >> >> Thibault Bridel-Bertomeu >> >> >> Le mer. 16 d?c. 2020 ? 04:46, Barry Smith > a ?crit : >> >> This is great. If you make a branch off of the PETSc git repository with these additions and work on ex11 you can make a merge request and we can run the code easily on all our test systems (for security reasons one of use needs to launch the tests from your MR). https://docs.petsc.org/en/latest/developers/integration/ >> >> Barry >> >> >>> On Dec 15, 2020, at 5:35 AM, Thibault Bridel-Bertomeu > wrote: >>> >>> Hello everyone, >>> >>> So far, I have the wrappers in the files attached to this e-mail. I still do not know if they work properly - at least the code compiles and the calls to the wrapped-subroutine do not fail - but I wanted to put this here in case someone sees something really wrong with it already. >>> >>> Thank you again for your help, I'll try to post updates of the F90 version of ex11 regularly in this thread. >>> >>> Stay safe, >>> >>> Thibault Bridel-Bertomeu >>> >>> Le dim. 13 d?c. 2020 ? 16:39, Jed Brown > a ?crit : >>> Thibault Bridel-Bertomeu > writes: >>> >>> > Thank you Mark for your answer. >>> > >>> > I am not sure what you think could be in the setBC1 routine ? How to make >>> > the connection with the PetscDS ? >>> > >>> > On the other hand, I actually found after a while TSMonitorSet has a >>> > fortran wrapper, and it does take as arguments two function pointers, so I >>> > guess it is possible ? Although I am not sure exactly how to play with the >>> > PetscObjectSetFortranCallback & PetscObjectUseFortranCallback macros - >>> > could anybody advise please ? >>> >>> tsmonitorset_ is a good example to follow. In your file, create one of these static structs with a member for each callback. These are IDs that will be used as keys for Fortran callbacks and their contexts. The salient parts of the file are below. >>> >>> static struct { >>> PetscFortranCallbackId prestep; >>> PetscFortranCallbackId poststep; >>> PetscFortranCallbackId rhsfunction; >>> PetscFortranCallbackId rhsjacobian; >>> PetscFortranCallbackId ifunction; >>> PetscFortranCallbackId ijacobian; >>> PetscFortranCallbackId monitor; >>> PetscFortranCallbackId mondestroy; >>> PetscFortranCallbackId transform; >>> #if defined(PETSC_HAVE_F90_2PTR_ARG) >>> PetscFortranCallbackId function_pgiptr; >>> #endif >>> } _cb; >>> >>> /* >>> Note ctx is the same as ts so we need to get the Fortran context out of the TS; this gets put in _ctx using the callback ID >>> */ >>> static PetscErrorCode ourmonitor(TS ts,PetscInt i,PetscReal d,Vec v,void *ctx) >>> { >>> PetscObjectUseFortranCallback(ts,_cb.monitor,(TS*,PetscInt*,PetscReal*,Vec *,void*,PetscErrorCode*),(&ts,&i,&d,&v,_ctx,&ierr)); >>> } >>> >>> Then follow as in tsmonitorset_, which sets two callbacks. 
>>> >>> PETSC_EXTERN void tsmonitorset_(TS *ts,void (*func)(TS*,PetscInt*,PetscReal*,Vec*,void*,PetscErrorCode*),void *mctx,void (*d)(void*,PetscErrorCode*),PetscErrorCode *ierr) >>> { >>> CHKFORTRANNULLFUNCTION(d); >>> if ((PetscVoidFunction)func == (PetscVoidFunction) tsmonitordefault_) { >>> *ierr = TSMonitorSet(*ts,(PetscErrorCode (*)(TS,PetscInt,PetscReal,Vec,void*))TSMonitorDefault,*(PetscViewerAndFormat**)mctx,(PetscErrorCode (*)(void **))PetscViewerAndFormatDestroy); >>> } else { >>> *ierr = PetscObjectSetFortranCallback((PetscObject)*ts,PETSC_FORTRAN_CALLBACK_CLASS,&_cb.monitor,(PetscVoidFunction)func,mctx); >>> *ierr = PetscObjectSetFortranCallback((PetscObject)*ts,PETSC_FORTRAN_CALLBACK_CLASS,&_cb.mondestroy,(PetscVoidFunction)d,mctx); >>> *ierr = TSMonitorSet(*ts,ourmonitor,*ts,ourmonitordestroy); >>> } >>> } >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From thibault.bridelbertomeu at gmail.com Fri Dec 18 04:02:29 2020 From: thibault.bridelbertomeu at gmail.com (Thibault Bridel-Bertomeu) Date: Fri, 18 Dec 2020 11:02:29 +0100 Subject: [petsc-users] TS tutorial ex11 in Fortran In-Reply-To: <39E9FBED-6D6B-41E4-983C-58465ACAB4D0@petsc.dev> References: <1EC72DF8-61BD-482B-B1B0-9427AB1ADF74@petsc.dev> <048DC071-33E5-4644-B1C8-DD598248923A@petsc.dev> <87tuspyaug.fsf@jedbrown.org> <1035FEDB-B297-4E71-8F0F-D44C72707105@petsc.dev> <39E9FBED-6D6B-41E4-983C-58465ACAB4D0@petsc.dev> Message-ID: Aah that is a nice trick, I was getting ready to fork, clone the fork and redo the work, but that worked fine ! Thank you Barry ! The MR will appear in a little while ! Thibault Le ven. 18 d?c. 2020 ? 10:16, Barry Smith a ?crit : > > Good question. There is a trick to limit the amount of work you need to > do with a new fork after you have already made changes with a PETSc clone, > but it looks like we do not document this clearly in the webpages. (I > couldn't find it). > > Yes, you do need to make a fork, but after you have made the fork on the > GitLab website (and have done nothing on your machine) edit the file > $PETSC_DIR/.git/config for your clone on your machine > > Locate the line that has url = git at gitlab.com:petsc/petsc.git (this > may have an https at the beginning of the line) > > Change this line to point to the fork url instead with git@ not https, > which will be pretty much the same URL but with your user id instead of > petsc in the address. Then git push and it will push to your fork. > > Now you changes will be in your fork and you can make the MR from your > fork URL on Gitlab. (In other words this editing trick converts your PETSc > clone on your machine to a PETSc fork). > > I hope I have explained this clearly enough it goes smoothly. > > Barry > > > > On Dec 18, 2020, at 3:00 AM, Thibault Bridel-Bertomeu < > thibault.bridelbertomeu at gmail.com> wrote: > > Hello Barry, > > I'll start the MR as soon as possible then so that specialists can indeed > have a look. Do I have to fork PETSc to start a MR or are PETSc repo > settings such that can I push a branch from the PETSc clone I got ? > > Thibault > > > Le mer. 16 d?c. 2020 ? 07:47, Barry Smith a ?crit : > >> >> Thibault, >> >> A subdirectory for the example is fine; we have other examples that use >> subdirectories and multiple files. 
>> >> Note: even if you don't have something completely working you can still >> make MR and list it as DRAFT request for comments, some other PETSc members >> who understand the packages you are using and Fortran better than I may be >> able to help as you develop the code. >> >> Barry >> >> >> >> >> On Dec 16, 2020, at 12:35 AM, Thibault Bridel-Bertomeu < >> thibault.bridelbertomeu at gmail.com> wrote: >> >> Hello everyone, >> >> Thank you Barry for the feedback. >> OK, yes I'll work up an MR as soon as I have got something working. By >> the way, does the fortran-version of the example have to be a single file ? >> If my push contains a directory with several files (different modules and >> the main), and the Makefile that goes with it, is that ok ? >> >> Thibault Bridel-Bertomeu >> >> >> Le mer. 16 d?c. 2020 ? 04:46, Barry Smith a ?crit : >> >>> >>> This is great. If you make a branch off of the PETSc git repository >>> with these additions and work on ex11 you can make a merge request and we >>> can run the code easily on all our test systems (for security reasons one >>> of use needs to launch the tests from your MR). >>> https://docs.petsc.org/en/latest/developers/integration/ >>> >>> Barry >>> >>> >>> On Dec 15, 2020, at 5:35 AM, Thibault Bridel-Bertomeu < >>> thibault.bridelbertomeu at gmail.com> wrote: >>> >>> Hello everyone, >>> >>> So far, I have the wrappers in the files attached to this e-mail. I >>> still do not know if they work properly - at least the code compiles and >>> the calls to the wrapped-subroutine do not fail - but I wanted to put this >>> here in case someone sees something really wrong with it already. >>> >>> Thank you again for your help, I'll try to post updates of the F90 >>> version of ex11 regularly in this thread. >>> >>> Stay safe, >>> >>> Thibault Bridel-Bertomeu >>> >>> Le dim. 13 d?c. 2020 ? 16:39, Jed Brown a ?crit : >>> >>>> Thibault Bridel-Bertomeu writes: >>>> >>>> > Thank you Mark for your answer. >>>> > >>>> > I am not sure what you think could be in the setBC1 routine ? How to >>>> make >>>> > the connection with the PetscDS ? >>>> > >>>> > On the other hand, I actually found after a while TSMonitorSet has a >>>> > fortran wrapper, and it does take as arguments two function pointers, >>>> so I >>>> > guess it is possible ? Although I am not sure exactly how to play >>>> with the >>>> > PetscObjectSetFortranCallback & PetscObjectUseFortranCallback macros - >>>> > could anybody advise please ? >>>> >>>> tsmonitorset_ is a good example to follow. In your file, create one of >>>> these static structs with a member for each callback. These are IDs that >>>> will be used as keys for Fortran callbacks and their contexts. The salient >>>> parts of the file are below. 
>>>> >>>> static struct { >>>> PetscFortranCallbackId prestep; >>>> PetscFortranCallbackId poststep; >>>> PetscFortranCallbackId rhsfunction; >>>> PetscFortranCallbackId rhsjacobian; >>>> PetscFortranCallbackId ifunction; >>>> PetscFortranCallbackId ijacobian; >>>> PetscFortranCallbackId monitor; >>>> PetscFortranCallbackId mondestroy; >>>> PetscFortranCallbackId transform; >>>> #if defined(PETSC_HAVE_F90_2PTR_ARG) >>>> PetscFortranCallbackId function_pgiptr; >>>> #endif >>>> } _cb; >>>> >>>> /* >>>> Note ctx is the same as ts so we need to get the Fortran context out >>>> of the TS; this gets put in _ctx using the callback ID >>>> */ >>>> static PetscErrorCode ourmonitor(TS ts,PetscInt i,PetscReal d,Vec >>>> v,void *ctx) >>>> { >>>> >>>> PetscObjectUseFortranCallback(ts,_cb.monitor,(TS*,PetscInt*,PetscReal*,Vec >>>> *,void*,PetscErrorCode*),(&ts,&i,&d,&v,_ctx,&ierr)); >>>> } >>>> >>>> Then follow as in tsmonitorset_, which sets two callbacks. >>>> >>>> PETSC_EXTERN void tsmonitorset_(TS *ts,void >>>> (*func)(TS*,PetscInt*,PetscReal*,Vec*,void*,PetscErrorCode*),void >>>> *mctx,void (*d)(void*,PetscErrorCode*),PetscErrorCode *ierr) >>>> { >>>> CHKFORTRANNULLFUNCTION(d); >>>> if ((PetscVoidFunction)func == (PetscVoidFunction) tsmonitordefault_) >>>> { >>>> *ierr = TSMonitorSet(*ts,(PetscErrorCode >>>> (*)(TS,PetscInt,PetscReal,Vec,void*))TSMonitorDefault,*(PetscViewerAndFormat**)mctx,(PetscErrorCode >>>> (*)(void **))PetscViewerAndFormatDestroy); >>>> } else { >>>> *ierr = >>>> PetscObjectSetFortranCallback((PetscObject)*ts,PETSC_FORTRAN_CALLBACK_CLASS,&_cb.monitor,(PetscVoidFunction)func,mctx); >>>> *ierr = >>>> PetscObjectSetFortranCallback((PetscObject)*ts,PETSC_FORTRAN_CALLBACK_CLASS,&_cb.mondestroy,(PetscVoidFunction)d,mctx); >>>> *ierr = TSMonitorSet(*ts,ourmonitor,*ts,ourmonitordestroy); >>>> } >>>> } >>>> >>> >>> >>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From davydden at gmail.com Fri Dec 18 04:05:10 2020 From: davydden at gmail.com (Denis Davydov) Date: Fri, 18 Dec 2020 11:05:10 +0100 Subject: [petsc-users] Output cell data related to DMDA In-Reply-To: <961E46DE-12F7-458C-839B-504E61CF73FC@petsc.dev> References: <961E46DE-12F7-458C-839B-504E61CF73FC@petsc.dev> Message-ID: Hi Barry, What I am after is to output one scalar per cell of DMDA (for example heat conduction on this cell or MPI partitioning of the computation domain). I hope that?s what is meant by PETSC_VTK_CELL_FIELD. My understanding is that DMCreateGlobalVector will create a vector associated with the field/discretization/nodal unknowns/etc (that would be PETSC_VTK_POINT_FIELD?), which is not what I would like to visualize. Could you point me to the right direction to look at? If this is not possible with VTK interface, I am fine to go for other viewer formats (maybe it?s coincidentally easier to visualize in MATLAB). Sincerely, Denis > Am 18.12.2020 um 10:03 schrieb Barry Smith : > > ? > >>> On Dec 18, 2020, at 2:29 AM, Denis Davydov wrote: >>> >>> Hi Matt, >>> >>> By global vector you mean one created with >>> >>> VecCreateMPI(..., nel, PETSC_DETERMINE,...) >>> >>> ? If so, that gives segfault (even with 1 MPI process) in user write function, which is just >>> >>> VecView((Vec)obj,viewer); >>> >>> which clearly indicates that I misunderstand your comment. >>> >>> Would you please clarify what PETSc expect as a ?global? vector in case of cell-based quantities as opposed to unknowns/fields associated with the DMDA discretization? 
>>> >> >> Denis, >> >> Not sure what you mean by cell-based but if your vector is associated with a DMDA you need to create DMCreateGlobalVector() to get the properly layout with respect to the DM. If you use VecCreateMPI() it just has a naive 1d layout not associated with the DMDA in any way so won't be compatible. (Of course we would hope the code would not "crash" with an incompatible vector but just produce a useful error message) >> >> Barry >> >> >> >> >> Sincerely, >> Denis >> >>>> Am 17.12.2020 um 18:58 schrieb Matthew Knepley : >>>> >>> ? >>>> On Thu, Dec 17, 2020 at 12:18 PM Denis Davydov wrote: >>> >>>> Dear all, >>>> >>>> I would like to output cell data (eg conductivity coefficient) in VTK for DMDA setup. >>>> >>>> Given that I know how many elements/cells are owned locally, I hoped that PetscViewerVTKAddField with PETSC_VTK_CELL_FIELD would do the job. >>>> However I am not sure whether provided vector should be fully distributed (no ghosts)? If not, can I get the required ghosts from DMDA created with DMDACreate3D ? >>> >>> I believe that it outputs global vectors, meaning that there are no ghosts. >>> >>> Thanks, >>> >>> Matt >>> >>>> Ps. I saw just one relevant discussion on the mailing list. >>>> >>>> Sincerely, >>>> Denis >>> >>> >>> -- >>> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. >>> -- Norbert Wiener >>> >>> https://www.cse.buffalo.edu/~knepley/ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at petsc.dev Fri Dec 18 04:35:46 2020 From: bsmith at petsc.dev (Barry Smith) Date: Fri, 18 Dec 2020 04:35:46 -0600 Subject: [petsc-users] Output cell data related to DMDA In-Reply-To: References: <961E46DE-12F7-458C-839B-504E61CF73FC@petsc.dev> Message-ID: Denis, This makes sense. I don't know the details of PETSC_VTK_CELL_FIELD or how well they are supported for DMDA (as opposed to DMPLEX). A quick and dirty approach would be to create a new DMDA with a single DOF and use VecStrideGather() to grab that single component into a DMCreateGlobalVector() of the new DMDA from the previous and then pass that new vector to the viewer routine. If the original DMDA is vertex centered and you want computed values on a cell centered then it becomes more complicated. If you truly need a combination of vertex and cell-centered values you might find DMSTAG is more useful for your needs. Barry > On Dec 18, 2020, at 4:05 AM, Denis Davydov wrote: > > Hi Barry, > > What I am after is to output one scalar per cell of DMDA (for example heat conduction on this cell or MPI partitioning of the computation domain). I hope that?s what is meant by PETSC_VTK_CELL_FIELD. > > My understanding is that DMCreateGlobalVector will create a vector associated with the field/discretization/nodal unknowns/etc (that would be PETSC_VTK_POINT_FIELD?), which is not what I would like to visualize. > > Could you point me to the right direction to look at? > > If this is not possible with VTK interface, I am fine to go for other viewer formats (maybe it?s coincidentally easier to visualize in MATLAB). > > Sincerely, > Denis > >> Am 18.12.2020 um 10:03 schrieb Barry Smith : >> >> ? >> >>> On Dec 18, 2020, at 2:29 AM, Denis Davydov > wrote: >>> >>> Hi Matt, >>> >>> By global vector you mean one created with >>> >>> VecCreateMPI(..., nel, PETSC_DETERMINE,...) >>> >>> ? 
If so, that gives segfault (even with 1 MPI process) in user write function, which is just >>> >>> VecView((Vec)obj,viewer); >>> >>> which clearly indicates that I misunderstand your comment. >>> >>> Would you please clarify what PETSc expect as a ?global? vector in case of cell-based quantities as opposed to unknowns/fields associated with the DMDA discretization? >>> >> >> Denis, >> >> Not sure what you mean by cell-based but if your vector is associated with a DMDA you need to create DMCreateGlobalVector() to get the properly layout with respect to the DM. If you use VecCreateMPI() it just has a naive 1d layout not associated with the DMDA in any way so won't be compatible. (Of course we would hope the code would not "crash" with an incompatible vector but just produce a useful error message) >> >> Barry >> >> >> >> >>> Sincerely, >>> Denis >>> >>>> Am 17.12.2020 um 18:58 schrieb Matthew Knepley >: >>>> >>>> ? >>>> On Thu, Dec 17, 2020 at 12:18 PM Denis Davydov > wrote: >>>> Dear all, >>>> >>>> I would like to output cell data (eg conductivity coefficient) in VTK for DMDA setup. >>>> >>>> Given that I know how many elements/cells are owned locally, I hoped that PetscViewerVTKAddField with PETSC_VTK_CELL_FIELD would do the job. >>>> However I am not sure whether provided vector should be fully distributed (no ghosts)? If not, can I get the required ghosts from DMDA created with DMDACreate3D ? >>>> >>>> I believe that it outputs global vectors, meaning that there are no ghosts. >>>> >>>> Thanks, >>>> >>>> Matt >>>> >>>> Ps. I saw just one relevant discussion on the mailing list. >>>> >>>> Sincerely, >>>> Denis >>>> >>>> >>>> -- >>>> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. >>>> -- Norbert Wiener >>>> >>>> https://www.cse.buffalo.edu/~knepley/ >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From davydden at gmail.com Fri Dec 18 05:33:11 2020 From: davydden at gmail.com (Denis Davydov) Date: Fri, 18 Dec 2020 12:33:11 +0100 Subject: [petsc-users] Output cell data related to DMDA In-Reply-To: References: Message-ID: Thanks Barry, > A quick and dirty approach would be to create a new DMDA with a single DOF and I would need something like DG with constant value per cell. Can one create those with DMDA and make sure that MPI partitioning is the same between the two DMDA? > use VecStrideGather() to grab that single component into a DMCreateGlobalVector() of the new DMDA from the previous and then pass that new vector to the viewer routine. If the original DMDA is vertex Yes, original one is vertex centered (Q1), but really only care about the output/visualization of scalars per cell used in the weak form of the original DMDA. I don?t need to combine them anyhow (DMStag-like, if I understand you correctly). Sincerely Denis > centered and you want computed values on a cell centered then it becomes more complicated. If you truly need a combination of vertex and cell-centered values you might find DMSTAG is more useful for your needs. > > Barry > > > > >> On Dec 18, 2020, at 4:05 AM, Denis Davydov wrote: >> >> Hi Barry, >> >> What I am after is to output one scalar per cell of DMDA (for example heat conduction on this cell or MPI partitioning of the computation domain). I hope that?s what is meant by PETSC_VTK_CELL_FIELD. 
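(To make the VecStrideGather() route above concrete, a minimal C sketch -- here "da" is the original multi-dof DMDA, "comp" the component carrying the per-cell scalar, and "viewer" an existing viewer; all three are assumed to come from the calling code:

    DM             da1;
    Vec            vfull,vcomp;
    PetscErrorCode ierr;

    ierr = DMDACreateCompatibleDMDA(da,1,&da1);CHKERRQ(ierr);  /* same grid and partitioning, dof = 1 */
    ierr = DMCreateGlobalVector(da,&vfull);CHKERRQ(ierr);
    ierr = DMCreateGlobalVector(da1,&vcomp);CHKERRQ(ierr);
    /* ... fill vfull ... */
    ierr = VecStrideGather(vfull,comp,vcomp,INSERT_VALUES);CHKERRQ(ierr);  /* pull out component comp */
    ierr = VecView(vcomp,viewer);CHKERRQ(ierr);
    ierr = VecDestroy(&vcomp);CHKERRQ(ierr);
    ierr = VecDestroy(&vfull);CHKERRQ(ierr);
    ierr = DMDestroy(&da1);CHKERRQ(ierr);

Because the dof = 1 DMDA is derived from the original one, the two share the same MPI partitioning, which is the concern raised above. If DMDACreateCompatibleDMDA() is not available in your PETSc version, creating the second DMDA with DMDACreate3d() and identical sizes and ownership ranges achieves the same.)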
>> >> My understanding is that DMCreateGlobalVector will create a vector associated with the field/discretization/nodal unknowns/etc (that would be PETSC_VTK_POINT_FIELD?), which is not what I would like to visualize. >> >> Could you point me to the right direction to look at? >> >> If this is not possible with VTK interface, I am fine to go for other viewer formats (maybe it?s coincidentally easier to visualize in MATLAB). >> >> Sincerely, >> Denis >> >>>> Am 18.12.2020 um 10:03 schrieb Barry Smith : >>>> >>> ? >>> >>>> On Dec 18, 2020, at 2:29 AM, Denis Davydov wrote: >>>> >>>> Hi Matt, >>>> >>>> By global vector you mean one created with >>>> >>>> VecCreateMPI(..., nel, PETSC_DETERMINE,...) >>>> >>>> ? If so, that gives segfault (even with 1 MPI process) in user write function, which is just >>>> >>>> VecView((Vec)obj,viewer); >>>> >>>> which clearly indicates that I misunderstand your comment. >>>> >>>> Would you please clarify what PETSc expect as a ?global? vector in case of cell-based quantities as opposed to unknowns/fields associated with the DMDA discretization? >>>> >>> >>> Denis, >>> >>> Not sure what you mean by cell-based but if your vector is associated with a DMDA you need to create DMCreateGlobalVector() to get the properly layout with respect to the DM. If you use VecCreateMPI() it just has a naive 1d layout not associated with the DMDA in any way so won't be compatible. (Of course we would hope the code would not "crash" with an incompatible vector but just produce a useful error message) >>> >>> Barry >>> >>> >>> >>> >>>> Sincerely, >>>> Denis >>>> >>>>>> Am 17.12.2020 um 18:58 schrieb Matthew Knepley : >>>>>> >>>>> ? >>>>>> On Thu, Dec 17, 2020 at 12:18 PM Denis Davydov wrote: >>>>> >>>>>> Dear all, >>>>>> >>>>>> I would like to output cell data (eg conductivity coefficient) in VTK for DMDA setup. >>>>>> >>>>>> Given that I know how many elements/cells are owned locally, I hoped that PetscViewerVTKAddField with PETSC_VTK_CELL_FIELD would do the job. >>>>>> However I am not sure whether provided vector should be fully distributed (no ghosts)? If not, can I get the required ghosts from DMDA created with DMDACreate3D ? >>>>> >>>>> I believe that it outputs global vectors, meaning that there are no ghosts. >>>>> >>>>> Thanks, >>>>> >>>>> Matt >>>>> >>>>>> Ps. I saw just one relevant discussion on the mailing list. >>>>>> >>>>>> Sincerely, >>>>>> Denis >>>>> >>>>> >>>>> -- >>>>> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. >>>>> -- Norbert Wiener >>>>> >>>>> https://www.cse.buffalo.edu/~knepley/ >>> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at petsc.dev Fri Dec 18 05:46:16 2020 From: bsmith at petsc.dev (Barry Smith) Date: Fri, 18 Dec 2020 05:46:16 -0600 Subject: [petsc-users] Output cell data related to DMDA In-Reply-To: References: Message-ID: <5A8AF049-7FDC-4781-A89D-2B1E089D26F0@petsc.dev> > On Dec 18, 2020, at 5:33 AM, Denis Davydov wrote: > > Thanks Barry, > >> A quick and dirty approach would be to create a new DMDA with a single DOF and > > I would need something like DG with constant value per cell. Can one create those with DMDA and make sure that MPI partitioning is the same between the two DMDA?\ Denis, This is slightly tricky, the DMDA is simple and doesn't directly support relationships between cell-centered and vertex centered. 
The way we recommend handling this is to "cheat" and associate the cell-centered value with the "lower-left" corner of vertex centered values. Then in Matlab, or whatever software you are using, "clip out" the "upper-right" values of the domain from the vector and discard them. So you can create a DMDA with dof of 1 that matches the original DMDA, fill all the values except the upper right values of the domain in your code and then save the vector to a file and in post-processing ignore or remove the extra values. Barry > >> use VecStrideGather() to grab that single component into a DMCreateGlobalVector() of the new DMDA from the previous and then pass that new vector to the viewer routine. If the original DMDA is vertex > > Yes, original one is vertex centered (Q1), but really only care about the output/visualization of scalars per cell used in the weak form of the original DMDA. I don?t need to combine them anyhow (DMStag-like, if I understand you correctly). > > Sincerely > Denis > >> centered and you want computed values on a cell centered then it becomes more complicated. If you truly need a combination of vertex and cell-centered values you might find DMSTAG is more useful for your needs. >> >> Barry >> >> >> >> >>> On Dec 18, 2020, at 4:05 AM, Denis Davydov > wrote: >>> >>> Hi Barry, >>> >>> What I am after is to output one scalar per cell of DMDA (for example heat conduction on this cell or MPI partitioning of the computation domain). I hope that?s what is meant by PETSC_VTK_CELL_FIELD. >>> >>> My understanding is that DMCreateGlobalVector will create a vector associated with the field/discretization/nodal unknowns/etc (that would be PETSC_VTK_POINT_FIELD?), which is not what I would like to visualize. >>> >>> Could you point me to the right direction to look at? >>> >>> If this is not possible with VTK interface, I am fine to go for other viewer formats (maybe it?s coincidentally easier to visualize in MATLAB). >>> >>> Sincerely, >>> Denis >>> >>>> Am 18.12.2020 um 10:03 schrieb Barry Smith >: >>>> >>>> ? >>>> >>>>> On Dec 18, 2020, at 2:29 AM, Denis Davydov > wrote: >>>>> >>>>> Hi Matt, >>>>> >>>>> By global vector you mean one created with >>>>> >>>>> VecCreateMPI(..., nel, PETSC_DETERMINE,...) >>>>> >>>>> ? If so, that gives segfault (even with 1 MPI process) in user write function, which is just >>>>> >>>>> VecView((Vec)obj,viewer); >>>>> >>>>> which clearly indicates that I misunderstand your comment. >>>>> >>>>> Would you please clarify what PETSc expect as a ?global? vector in case of cell-based quantities as opposed to unknowns/fields associated with the DMDA discretization? >>>>> >>>> >>>> Denis, >>>> >>>> Not sure what you mean by cell-based but if your vector is associated with a DMDA you need to create DMCreateGlobalVector() to get the properly layout with respect to the DM. If you use VecCreateMPI() it just has a naive 1d layout not associated with the DMDA in any way so won't be compatible. (Of course we would hope the code would not "crash" with an incompatible vector but just produce a useful error message) >>>> >>>> Barry >>>> >>>> >>>> >>>> >>>>> Sincerely, >>>>> Denis >>>>> >>>>>> Am 17.12.2020 um 18:58 schrieb Matthew Knepley >: >>>>>> >>>>>> ? >>>>>> On Thu, Dec 17, 2020 at 12:18 PM Denis Davydov > wrote: >>>>>> Dear all, >>>>>> >>>>>> I would like to output cell data (eg conductivity coefficient) in VTK for DMDA setup. 
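(A sketch of the "lower-left corner" scheme just described, for a 2d vertex-centered DMDA of M x N points, with M and N as obtained from DMDAGetInfo(); "da" is the original DMDA and cell_value() a hypothetical function returning the scalar for cell (i,j):

    DM             dac;
    Vec            cell;
    PetscScalar    **a;
    PetscInt       i,j,xs,ys,xm,ym;   /* M,N: global grid point counts, from DMDAGetInfo() */
    PetscErrorCode ierr;

    ierr = DMDACreateCompatibleDMDA(da,1,&dac);CHKERRQ(ierr);  /* dof = 1 on the same vertex grid */
    ierr = DMCreateGlobalVector(dac,&cell);CHKERRQ(ierr);
    ierr = VecSet(cell,0.0);CHKERRQ(ierr);                     /* the extra entries stay zero */
    ierr = DMDAVecGetArray(dac,cell,&a);CHKERRQ(ierr);
    ierr = DMDAGetCorners(dac,&xs,&ys,NULL,&xm,&ym,NULL);CHKERRQ(ierr);
    for (j=ys; j<ys+ym; j++)
      for (i=xs; i<xs+xm; i++)
        if (i < M-1 && j < N-1) a[j][i] = cell_value(i,j);     /* cell (i,j) at its lower-left vertex */
    ierr = DMDAVecRestoreArray(dac,cell,&a);CHKERRQ(ierr);
    /* save "cell"; the i = M-1 and j = N-1 entries are the "upper-right"
       values to clip out in post-processing */
)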
>>>>>> >>>>>> Given that I know how many elements/cells are owned locally, I hoped that PetscViewerVTKAddField with PETSC_VTK_CELL_FIELD would do the job. >>>>>> However I am not sure whether provided vector should be fully distributed (no ghosts)? If not, can I get the required ghosts from DMDA created with DMDACreate3D ? >>>>>> >>>>>> I believe that it outputs global vectors, meaning that there are no ghosts. >>>>>> >>>>>> Thanks, >>>>>> >>>>>> Matt >>>>>> >>>>>> Ps. I saw just one relevant discussion on the mailing list. >>>>>> >>>>>> Sincerely, >>>>>> Denis >>>>>> >>>>>> >>>>>> -- >>>>>> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. >>>>>> -- Norbert Wiener >>>>>> >>>>>> https://www.cse.buffalo.edu/~knepley/ >>>> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From davydden at gmail.com Fri Dec 18 09:34:02 2020 From: davydden at gmail.com (Denis Davydov) Date: Fri, 18 Dec 2020 16:34:02 +0100 Subject: [petsc-users] Output cell data related to DMDA In-Reply-To: <5A8AF049-7FDC-4781-A89D-2B1E089D26F0@petsc.dev> References: <5A8AF049-7FDC-4781-A89D-2B1E089D26F0@petsc.dev> Message-ID: <47F424E1-1AE6-49CB-9F1E-DE093AA9D1C1@gmail.com> I see, thank you Barry. I will try to go this 1DoF route to visualize the data. Sincerely, Denis > Am 18.12.2020 um 12:46 schrieb Barry Smith : > > ? > >>> On Dec 18, 2020, at 5:33 AM, Denis Davydov wrote: >>> >>> Thanks Barry, >>> >>>> A quick and dirty approach would be to create a new DMDA with a single DOF and >>> >>> I would need something like DG with constant value per cell. Can one create those with DMDA and make sure that MPI partitioning is the same between the two DMDA?\ >> >> Denis, >> >> This is slightly tricky, the DMDA is simple and doesn't directly support relationships between cell-centered and vertex centered. >> >> The way we recommend handling this is to "cheat" and associate the cell-centered value with the "lower-left" corner of vertex centered values. Then in Matlab, or whatever software you are using, "clip out" the "upper-right" values of the domain from the vector and discard them. So you can create a DMDA with dof of 1 that matches the original DMDA, fill all the values except the upper right values of the domain in your code and then save the vector to a file and in post-processing ignore or remove the extra values. >> >> Barry >> >> >> >>> use VecStrideGather() to grab that single component into a DMCreateGlobalVector() of the new DMDA from the previous and then pass that new vector to the viewer routine. If the original DMDA is vertex >> >> Yes, original one is vertex centered (Q1), but really only care about the output/visualization of scalars per cell used in the weak form of the original DMDA. I don?t need to combine them anyhow (DMStag-like, if I understand you correctly). >> >> Sincerely >> Denis >> >>> centered and you want computed values on a cell centered then it becomes more complicated. If you truly need a combination of vertex and cell-centered values you might find DMSTAG is more useful for your needs. >>> >>> Barry >>> >>> >>> >>> >>>> On Dec 18, 2020, at 4:05 AM, Denis Davydov wrote: >>>> >>>> Hi Barry, >>>> >>>> What I am after is to output one scalar per cell of DMDA (for example heat conduction on this cell or MPI partitioning of the computation domain). I hope that?s what is meant by PETSC_VTK_CELL_FIELD. 
>>>> >>>> My understanding is that DMCreateGlobalVector will create a vector associated with the field/discretization/nodal unknowns/etc (that would be PETSC_VTK_POINT_FIELD?), which is not what I would like to visualize. >>>> >>>> Could you point me to the right direction to look at? >>>> >>>> If this is not possible with VTK interface, I am fine to go for other viewer formats (maybe it?s coincidentally easier to visualize in MATLAB). >>>> >>>> Sincerely, >>>> Denis >>>> >>>>>> Am 18.12.2020 um 10:03 schrieb Barry Smith : >>>>>> >>>>> ? >>>>> >>>>>> On Dec 18, 2020, at 2:29 AM, Denis Davydov wrote: >>>>>> >>>>>> Hi Matt, >>>>>> >>>>>> By global vector you mean one created with >>>>>> >>>>>> VecCreateMPI(..., nel, PETSC_DETERMINE,...) >>>>>> >>>>>> ? If so, that gives segfault (even with 1 MPI process) in user write function, which is just >>>>>> >>>>>> VecView((Vec)obj,viewer); >>>>>> >>>>>> which clearly indicates that I misunderstand your comment. >>>>>> >>>>>> Would you please clarify what PETSc expect as a ?global? vector in case of cell-based quantities as opposed to unknowns/fields associated with the DMDA discretization? >>>>>> >>>>> >>>>> Denis, >>>>> >>>>> Not sure what you mean by cell-based but if your vector is associated with a DMDA you need to create DMCreateGlobalVector() to get the properly layout with respect to the DM. If you use VecCreateMPI() it just has a naive 1d layout not associated with the DMDA in any way so won't be compatible. (Of course we would hope the code would not "crash" with an incompatible vector but just produce a useful error message) >>>>> >>>>> Barry >>>>> >>>>> >>>>> >>>>> >>>>>> Sincerely, >>>>>> Denis >>>>>> >>>>>>>> Am 17.12.2020 um 18:58 schrieb Matthew Knepley : >>>>>>>> >>>>>>> ? >>>>>>>> On Thu, Dec 17, 2020 at 12:18 PM Denis Davydov wrote: >>>>>>> >>>>>>>> Dear all, >>>>>>>> >>>>>>>> I would like to output cell data (eg conductivity coefficient) in VTK for DMDA setup. >>>>>>>> >>>>>>>> Given that I know how many elements/cells are owned locally, I hoped that PetscViewerVTKAddField with PETSC_VTK_CELL_FIELD would do the job. >>>>>>>> However I am not sure whether provided vector should be fully distributed (no ghosts)? If not, can I get the required ghosts from DMDA created with DMDACreate3D ? >>>>>>> >>>>>>> I believe that it outputs global vectors, meaning that there are no ghosts. >>>>>>> >>>>>>> Thanks, >>>>>>> >>>>>>> Matt >>>>>>> >>>>>>>> Ps. I saw just one relevant discussion on the mailing list. >>>>>>>> >>>>>>>> Sincerely, >>>>>>>> Denis >>>>>>> >>>>>>> >>>>>>> -- >>>>>>> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. >>>>>>> -- Norbert Wiener >>>>>>> >>>>>>> https://www.cse.buffalo.edu/~knepley/ >>>>> >>> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From salazardetro1 at llnl.gov Fri Dec 18 10:58:14 2020 From: salazardetro1 at llnl.gov (Salazar De Troya, Miguel) Date: Fri, 18 Dec 2020 16:58:14 +0000 Subject: [petsc-users] Support for full jacobianP in TSSetIJacobianP In-Reply-To: <4BA69262-B9E6-4D18-8705-55DEF978F965@anl.gov> References: <449B7337-595D-439B-8EDB-C719CD2D91BD@llnl.gov> <4BA69262-B9E6-4D18-8705-55DEF978F965@anl.gov> Message-ID: Yes, that is the case I am considering. 
The special case I am concerned about is the following: the heat equation in variational form, in firedrake/UFL notation, reads p*u_t*v*dx + inner(grad(u), grad(v))*dx = 0, where u is the temperature, u_t is its time derivative, v is just the test function, dx is the integration domain and p is the design parameter. If "p" were discontinuous, one can't just factor "p" into the second term, due to the divergence theorem.
Meaning that p*u_t*v*dx + inner(grad(u), grad(v))*dx = 0 is different than u_t*v*dx + inner(1.0 / p * grad(u), grad(v))*dx = 0, which is what ideally one would obtain in order to adapt to the current interface in TSAdjoint. Thanks Miguel From: "Zhang, Hong" > Date: Thursday, December 17, 2020 at 7:25 PM To: "Salazar De Troya, Miguel" > Cc: Satish Balay via petsc-users > Subject: Re: [petsc-users] Support for full jacobianP in TSSetIJacobianP Hi Miguel, Thank you for the nice work. I do not understand what you propose to do here. What is the obstacle to using current TSSetIJacobianP() for the corner case you mentioned? Are you considering a case in which the mass matrix is parameterized, e.g. M(p) udot - f(t,u) = g(t,u) ? Thanks, Hong On Dec 17, 2020, at 3:38 PM, Salazar De Troya, Miguel via petsc-users > wrote: Hello, I am working on hooking up TSAdjoint with pyadjoint through the firedrake-ts interface (https://github.com/IvanYashchuk/firedrake-ts). I have done most of the implementation and now I am just testing for corner cases. One of them is when the design variable is multiplying the first derivative term. It would be the case ofF(Udot,U,P,t) = G(U,P,t) in https://www.mcs.anl.gov/petsc/petsc-master/docs/manualpages/Sensitivity/TSSetIJacobianP.html . I imagine that one could think of refactoring the ?P? in the left hand side to the right hand side, but this is not trivial when ?P? is a discontinuous field over the domain. I think it would be better to include the case of F(Udot,U,P,t) = G(U,P,t) in TSSetIJacobianP and I am volunteering to do it. Given the current implementation of TSAdjoint, is this something feasible? Thanks Miguel Miguel A. Salazar de Troya Postdoctoral Researcher, Lawrence Livermore National Laboratory B141 Rm: 1085-5 Ph: 1(925) 422-6411 -------------- next part -------------- An HTML attachment was scrubbed... URL: From salazardetro1 at llnl.gov Fri Dec 18 18:35:41 2020 From: salazardetro1 at llnl.gov (Salazar De Troya, Miguel) Date: Sat, 19 Dec 2020 00:35:41 +0000 Subject: [petsc-users] Support for full jacobianP in TSSetIJacobianP In-Reply-To: <4BDCA608-0433-424C-B817-80BAD1AD904D@anl.gov> References: <449B7337-595D-439B-8EDB-C719CD2D91BD@llnl.gov> <4BA69262-B9E6-4D18-8705-55DEF978F965@anl.gov> <4BDCA608-0433-424C-B817-80BAD1AD904D@anl.gov> Message-ID: Ok, I was not able to get such case to work in my firedrake-ts implementation. Maybe I am missing something in my code. I looked at the TSAdjoint paper https://arxiv.org/pdf/1912.07696.pdf Equation 2.1 and at the adjoint method for the theta method (Equation 2.15) where the mass matrix is not differentiated w.r.t. the design parameter ?p? and decided to ask the question. Is the actual implementation different from what is in the paper? Thanks Miguel From: "Zhang, Hong" Date: Friday, December 18, 2020 at 3:11 PM To: "Salazar De Troya, Miguel" Cc: Satish Balay via petsc-users Subject: Re: [petsc-users] Support for full jacobianP in TSSetIJacobianP The current interface is general and should be applicable to this case as soon as users can provide IJacobianP, which is dF(Udot,U,P,t)/dP. Were you able to generate it in firedrake? If so, could you provide an example that I can test? Thanks, Hong On Dec 18, 2020, at 10:58 AM, Salazar De Troya, Miguel > wrote: Yes, that is the case I am considering. 
The special case I am concerned about is as following: the heat equation in variational form and in firedrake/UFL notation is as follows: p*u_t*v*dx + inner(grad(u), grad(v))*dx = 0, where u is the temperature, u_t is its time derivative, v is just the test function, dx is the integration domain and p is the design parameter. If ?p? were discontinuous, one can?t just factor ?p? into the second term due to the divergence theorem. Meaning that p*u_t*v*dx + inner(grad(u), grad(v))*dx = 0 is different than u_t*v*dx + inner(1.0 / p * grad(u), grad(v))*dx = 0, which is what ideally one would obtain in order to adapt to the current interface in TSAdjoint. Thanks Miguel From: "Zhang, Hong" > Date: Thursday, December 17, 2020 at 7:25 PM To: "Salazar De Troya, Miguel" > Cc: Satish Balay via petsc-users > Subject: Re: [petsc-users] Support for full jacobianP in TSSetIJacobianP Hi Miguel, Thank you for the nice work. I do not understand what you propose to do here. What is the obstacle to using current TSSetIJacobianP() for the corner case you mentioned? Are you considering a case in which the mass matrix is parameterized, e.g. M(p) udot - f(t,u) = g(t,u) ? Thanks, Hong On Dec 17, 2020, at 3:38 PM, Salazar De Troya, Miguel via petsc-users > wrote: Hello, I am working on hooking up TSAdjoint with pyadjoint through the firedrake-ts interface (https://github.com/IvanYashchuk/firedrake-ts). I have done most of the implementation and now I am just testing for corner cases. One of them is when the design variable is multiplying the first derivative term. It would be the case ofF(Udot,U,P,t) = G(U,P,t) in https://www.mcs.anl.gov/petsc/petsc-master/docs/manualpages/Sensitivity/TSSetIJacobianP.html . I imagine that one could think of refactoring the ?P? in the left hand side to the right hand side, but this is not trivial when ?P? is a discontinuous field over the domain. I think it would be better to include the case of F(Udot,U,P,t) = G(U,P,t) in TSSetIJacobianP and I am volunteering to do it. Given the current implementation of TSAdjoint, is this something feasible? Thanks Miguel Miguel A. Salazar de Troya Postdoctoral Researcher, Lawrence Livermore National Laboratory B141 Rm: 1085-5 Ph: 1(925) 422-6411 -------------- next part -------------- An HTML attachment was scrubbed... URL: From hongzhang at anl.gov Sat Dec 19 12:02:40 2020 From: hongzhang at anl.gov (Zhang, Hong) Date: Sat, 19 Dec 2020 18:02:40 +0000 Subject: [petsc-users] Support for full jacobianP in TSSetIJacobianP In-Reply-To: References: <449B7337-595D-439B-8EDB-C719CD2D91BD@llnl.gov> <4BA69262-B9E6-4D18-8705-55DEF978F965@anl.gov> <4BDCA608-0433-424C-B817-80BAD1AD904D@anl.gov> Message-ID: On Dec 18, 2020, at 6:35 PM, Salazar De Troya, Miguel > wrote: Ok, I was not able to get such case to work in my firedrake-ts implementation. Maybe I am missing something in my code. I looked at the TSAdjoint paper https://arxiv.org/pdf/1912.07696.pdf Equation 2.1 and at the adjoint method for the theta method (Equation 2.15) where the mass matrix is not differentiated w.r.t. the design parameter ?p? and decided to ask the question. For notational brevity, the formula used in the paper does not assume that the mass matrix depends on the parameters p. But it can be easily extended for this case. Is the actual implementation different from what is in the paper? The actual implementation is more general than the formula presented in the paper. 
Note that PETSc TS takes the ODE problem as F(U_t,U,P,t) = G(U,P,t) and does not ask for a mass matrix explicitly from users. When users provide IFunction, which is F(Udot,U,P,t), IJacobian (dF/dU) and IJacobianP (dF/dP) are needed by TSAdjoint to compute the sensitivities. Differentiating the mass matrix (more precisely, the term M*U_t ) is needed when you prepare the call back function IJacobianP. So if we have M(P)*U_t - f(t,U,P) in IFunction, IJacobianP should be M_P*U_t - f_P where U_t is provided by PETSc as an input argument. Thanks, Hong Thanks Miguel From: "Zhang, Hong" > Date: Friday, December 18, 2020 at 3:11 PM To: "Salazar De Troya, Miguel" > Cc: Satish Balay via petsc-users > Subject: Re: [petsc-users] Support for full jacobianP in TSSetIJacobianP The current interface is general and should be applicable to this case as soon as users can provide IJacobianP, which is dF(Udot,U,P,t)/dP. Were you able to generate it in firedrake? If so, could you provide an example that I can test? Thanks, Hong On Dec 18, 2020, at 10:58 AM, Salazar De Troya, Miguel > wrote: Yes, that is the case I am considering. The special case I am concerned about is as following: the heat equation in variational form and in firedrake/UFL notation is as follows: p*u_t*v*dx + inner(grad(u), grad(v))*dx = 0, where u is the temperature, u_t is its time derivative, v is just the test function, dx is the integration domain and p is the design parameter. If ?p? were discontinuous, one can?t just factor ?p? into the second term due to the divergence theorem. Meaning that p*u_t*v*dx + inner(grad(u), grad(v))*dx = 0 is different than u_t*v*dx + inner(1.0 / p * grad(u), grad(v))*dx = 0, which is what ideally one would obtain in order to adapt to the current interface in TSAdjoint. Thanks Miguel From: "Zhang, Hong" > Date: Thursday, December 17, 2020 at 7:25 PM To: "Salazar De Troya, Miguel" > Cc: Satish Balay via petsc-users > Subject: Re: [petsc-users] Support for full jacobianP in TSSetIJacobianP Hi Miguel, Thank you for the nice work. I do not understand what you propose to do here. What is the obstacle to using current TSSetIJacobianP() for the corner case you mentioned? Are you considering a case in which the mass matrix is parameterized, e.g. M(p) udot - f(t,u) = g(t,u) ? Thanks, Hong On Dec 17, 2020, at 3:38 PM, Salazar De Troya, Miguel via petsc-users > wrote: Hello, I am working on hooking up TSAdjoint with pyadjoint through the firedrake-ts interface (https://github.com/IvanYashchuk/firedrake-ts). I have done most of the implementation and now I am just testing for corner cases. One of them is when the design variable is multiplying the first derivative term. It would be the case ofF(Udot,U,P,t) = G(U,P,t) in https://www.mcs.anl.gov/petsc/petsc-master/docs/manualpages/Sensitivity/TSSetIJacobianP.html . I imagine that one could think of refactoring the ?P? in the left hand side to the right hand side, but this is not trivial when ?P? is a discontinuous field over the domain. I think it would be better to include the case of F(Udot,U,P,t) = G(U,P,t) in TSSetIJacobianP and I am volunteering to do it. Given the current implementation of TSAdjoint, is this something feasible? Thanks Miguel Miguel A. Salazar de Troya Postdoctoral Researcher, Lawrence Livermore National Laboratory B141 Rm: 1085-5 Ph: 1(925) 422-6411 -------------- next part -------------- An HTML attachment was scrubbed... 
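(Concretely, for F(Udot,U,P,t) = M(P)*Udot - f(t,U,P) the callback registered with TSSetIJacobianP() has the shape below -- a skeleton only, with the entry computation left as a comment; the signature follows the 2020-era TSSetIJacobianP() man page, and note Hong's Dec 21 follow-up further down on how the incoming shift enters when the mass matrix depends on p:

    /* Jacp: one row per state variable, one column per parameter */
    static PetscErrorCode IJacobianP(TS ts,PetscReal t,Vec U,Vec Udot,PetscReal shift,Mat Jacp,void *ctx)
    {
      const PetscScalar *u,*udot;
      PetscErrorCode    ierr;

      PetscFunctionBeginUser;
      ierr = VecGetArrayRead(U,&u);CHKERRQ(ierr);
      ierr = VecGetArrayRead(Udot,&udot);CHKERRQ(ierr);
      /* entry (i,k) = sum_j (dM_ij/dp_k)*udot_j - df_i/dp_k, set with MatSetValues() */
      ierr = VecRestoreArrayRead(Udot,&udot);CHKERRQ(ierr);
      ierr = VecRestoreArrayRead(U,&u);CHKERRQ(ierr);
      ierr = MatAssemblyBegin(Jacp,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
      ierr = MatAssemblyEnd(Jacp,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
      PetscFunctionReturn(0);
    }

registered once with TSSetIJacobianP(ts,Jacp,IJacobianP,NULL) before TSSolve()/TSAdjointSolve().)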
URL: From sthavishthabr at gmail.com Sun Dec 20 10:03:41 2020 From: sthavishthabr at gmail.com (sthavishtha bhopalam) Date: Sun, 20 Dec 2020 21:33:41 +0530 Subject: [petsc-users] Parsing user-defined variables using YAML Message-ID: Hello PETSc users! I wanted to know how one could parse user-defined variables (not the inbuilt PETSc options) in a PETSc code using YAML? Any suggestions? To my knowledge, the existing YAML parser in PETSc solely reads the PETSc options like snes, ksp, pc etc. as described here ( https://lists.mcs.anl.gov/mailman/htdig/petsc-users/2020-August/042074.html) but is incapable to read user-defined variables, which may be specific to a code. Thanks ------------------------------------------- Regards Sthavishtha -------------- next part -------------- An HTML attachment was scrubbed... URL: From bourdin at lsu.edu Sun Dec 20 10:56:43 2020 From: bourdin at lsu.edu (Blaise A Bourdin) Date: Sun, 20 Dec 2020 16:56:43 +0000 Subject: [petsc-users] Parsing user-defined variables using YAML In-Reply-To: References: Message-ID: <94DE0643-38C2-4B6A-B625-1CF8E782BB50@lsu.edu> Have a look at the options related functions in the system section of the man pages: https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Sys/index.html You can get options from the petsc option database and pass them to your user code. You can also use petscBag to pass a C struct or a fortran derived type to your user code. Regards, Blaise > On Dec 20, 2020, at 10:03 AM, sthavishtha bhopalam wrote: > > Hello PETSc users! > > I wanted to know how one could parse user-defined variables (not the inbuilt PETSc options) in a PETSc code using YAML? Any suggestions? To my knowledge, the existing YAML parser in PETSc solely reads the PETSc options like snes, ksp, pc etc. as described here (https://lists.mcs.anl.gov/mailman/htdig/petsc-users/2020-August/042074.html) but is incapable to read user-defined variables, which may be specific to a code. > > Thanks > ------------------------------------------- > Regards > > Sthavishtha > > > > -- A.K. & Shirley Barton Professor of Mathematics Adjunct Professor of Mechanical Engineering Adjunct of the Center for Computation & Technology Louisiana State University, Lockett Hall Room 344, Baton Rouge, LA 70803, USA Tel. +1 (225) 578 1612, Fax +1 (225) 578 4276 Web http://www.math.lsu.edu/~bourdin From salazardetro1 at llnl.gov Sun Dec 20 15:28:48 2020 From: salazardetro1 at llnl.gov (Salazar De Troya, Miguel) Date: Sun, 20 Dec 2020 21:28:48 +0000 Subject: [petsc-users] Support for full jacobianP in TSSetIJacobianP In-Reply-To: References: <449B7337-595D-439B-8EDB-C719CD2D91BD@llnl.gov> <4BA69262-B9E6-4D18-8705-55DEF978F965@anl.gov> <4BDCA608-0433-424C-B817-80BAD1AD904D@anl.gov> Message-ID: <5EB41847-FE48-4618-90CE-8F8853303BEF@llnl.gov> Hello Hong, Thank you. My apologies for rushing to blame the API instead of looking at my own code. I?ve put together a minimum example in petsc4py that I am attaching to this email. Here I am solving the simple ODE: c * xdot = b * x(t) with initial condition x(0) = a and the cost function J equal to the solution at the final time ?T?, i.e. J = x(T). The analytical solution is x(t) = a * exp(b/c *t). In the example, there is the option to calculate the derivatives w.r.t ?b? or ?c? in the keyword argument ?deriv? passed to ?SimpleODE?. For ?b?, the solver returns the correct derivatives (checked with the analytical expression), but this does not work for ?c?. I might be building the wrong jacobian that I pass to ?setIJacobianP?. 
Could you please take a look at it? Thanks. Miguel From: "Zhang, Hong" Date: Saturday, December 19, 2020 at 10:02 AM To: "Salazar De Troya, Miguel" Cc: Satish Balay via petsc-users Subject: Re: [petsc-users] Support for full jacobianP in TSSetIJacobianP On Dec 18, 2020, at 6:35 PM, Salazar De Troya, Miguel > wrote: Ok, I was not able to get such case to work in my firedrake-ts implementation. Maybe I am missing something in my code. I looked at the TSAdjoint paper https://arxiv.org/pdf/1912.07696.pdf Equation 2.1 and at the adjoint method for the theta method (Equation 2.15) where the mass matrix is not differentiated w.r.t. the design parameter ?p? and decided to ask the question. For notational brevity, the formula used in the paper does not assume that the mass matrix depends on the parameters p. But it can be easily extended for this case. Is the actual implementation different from what is in the paper? The actual implementation is more general than the formula presented in the paper. Note that PETSc TS takes the ODE problem as F(U_t,U,P,t) = G(U,P,t) and does not ask for a mass matrix explicitly from users. When users provide IFunction, which is F(Udot,U,P,t), IJacobian (dF/dU) and IJacobianP (dF/dP) are needed by TSAdjoint to compute the sensitivities. Differentiating the mass matrix (more precisely, the term M*U_t ) is needed when you prepare the call back function IJacobianP. So if we have M(P)*U_t - f(t,U,P) in IFunction, IJacobianP should be M_P*U_t - f_P where U_t is provided by PETSc as an input argument. Thanks, Hong Thanks Miguel From: "Zhang, Hong" > Date: Friday, December 18, 2020 at 3:11 PM To: "Salazar De Troya, Miguel" > Cc: Satish Balay via petsc-users > Subject: Re: [petsc-users] Support for full jacobianP in TSSetIJacobianP The current interface is general and should be applicable to this case as soon as users can provide IJacobianP, which is dF(Udot,U,P,t)/dP. Were you able to generate it in firedrake? If so, could you provide an example that I can test? Thanks, Hong On Dec 18, 2020, at 10:58 AM, Salazar De Troya, Miguel > wrote: Yes, that is the case I am considering. The special case I am concerned about is as following: the heat equation in variational form and in firedrake/UFL notation is as follows: p*u_t*v*dx + inner(grad(u), grad(v))*dx = 0, where u is the temperature, u_t is its time derivative, v is just the test function, dx is the integration domain and p is the design parameter. If ?p? were discontinuous, one can?t just factor ?p? into the second term due to the divergence theorem. Meaning that p*u_t*v*dx + inner(grad(u), grad(v))*dx = 0 is different than u_t*v*dx + inner(1.0 / p * grad(u), grad(v))*dx = 0, which is what ideally one would obtain in order to adapt to the current interface in TSAdjoint. Thanks Miguel From: "Zhang, Hong" > Date: Thursday, December 17, 2020 at 7:25 PM To: "Salazar De Troya, Miguel" > Cc: Satish Balay via petsc-users > Subject: Re: [petsc-users] Support for full jacobianP in TSSetIJacobianP Hi Miguel, Thank you for the nice work. I do not understand what you propose to do here. What is the obstacle to using current TSSetIJacobianP() for the corner case you mentioned? Are you considering a case in which the mass matrix is parameterized, e.g. M(p) udot - f(t,u) = g(t,u) ? Thanks, Hong On Dec 17, 2020, at 3:38 PM, Salazar De Troya, Miguel via petsc-users > wrote: Hello, I am working on hooking up TSAdjoint with pyadjoint through the firedrake-ts interface (https://github.com/IvanYashchuk/firedrake-ts). 
I have done most of the implementation and now I am just testing for corner cases. One of them is when the design variable is multiplying the first derivative term. It would be the case ofF(Udot,U,P,t) = G(U,P,t) in https://www.mcs.anl.gov/petsc/petsc-master/docs/manualpages/Sensitivity/TSSetIJacobianP.html . I imagine that one could think of refactoring the ?P? in the left hand side to the right hand side, but this is not trivial when ?P? is a discontinuous field over the domain. I think it would be better to include the case of F(Udot,U,P,t) = G(U,P,t) in TSSetIJacobianP and I am volunteering to do it. Given the current implementation of TSAdjoint, is this something feasible? Thanks Miguel Miguel A. Salazar de Troya Postdoctoral Researcher, Lawrence Livermore National Laboratory B141 Rm: 1085-5 Ph: 1(925) 422-6411 -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: simple-ode.py Type: text/x-python-script Size: 2442 bytes Desc: simple-ode.py URL: From hongzhang at anl.gov Sun Dec 20 22:53:43 2020 From: hongzhang at anl.gov (Zhang, Hong) Date: Mon, 21 Dec 2020 04:53:43 +0000 Subject: [petsc-users] TSAdjoint multilevel checkpointing running out of memory In-Reply-To: References: Message-ID: Anton, There is a memory leak bug in TSTrajectory for this particular case. It has already been fixed. Can you update your PETSc and retry? Thank you for reporting the issue to us. Best, Hong (Mr.) On Dec 10, 2020, at 5:19 PM, Anton Glazkov > wrote: Dear Matt and Hong, Thank you for your quick replies! In answer to your question Matt, the application fails in the same way as with 5 checkpoints. I don?t believe the RAM capacity to be a problem though because we are running this case on a cluster with 64GB RAM per node, and we anticipate 0.1GB storage requirements for the 4 checkpoints. The case is being run in MPMD mode with the following command: aprun -n 72 /work/e01/e01/chri4903/bin/cascade-ng/checkpoints_gradients ../data/nl_adj_0-chkpts.ini -adjoint -vr "/?/?/?/?/sims/aaX-testcases/10-RotorStator/test/data/ic_0_chkpts.h5:/0000000000/field" -vTarg "/?/?/?/?/sims/aaX-testcases/10-RotorStator/test/data/targ_0_chkpts.h5:/targ/field" -vMet "/?/?/?/?/sims/aaX-testcases/10-RotorStator/test/data/targ_0_chkpts.h5:/metric_diag/field" -ts_trajectory_dirname ./test_directory_0 -ts_trajectory_type memory -ts_trajectory_max_cps_ram 4 -ts_trajectory_max_cps_disk 5000 -ts_trajectory_monitor : -n 80 /?/?/?/?/bin/cascade-ng/checkpoints_gradients ../data/nl_adj_1-chkpts.ini -adjoint -vr "/?/?/?/?/sims/aaX-testcases/10-RotorStator/test/data/ic_1_chkpts.h5:/0000000000/field" -vTarg "/?/?/?/?/sims/aaX-testcases/10-RotorStator/test/data/targ_1_chkpts.h5:/targ/field" -vMet "/?/?/?/?/sims/aaX-testcases/10-RotorStator/test/data/targ_1_chkpts.h5:/metric_diag/field" -ts_trajectory_dirname ./test_directory_1 -ts_trajectory_type memory -ts_trajectory_max_cps_ram 4 -ts_trajectory_max_cps_disk 5000 -ts_trajectory_monitor > log.txt 2> error.txt I have attached the log.txt and error.txt to this email so that you can have a look at these. It seems to look ok until the OOM killer kills the job. 
From: Matthew Knepley
Date: Wednesday, 9 December 2020 at 01:38
To: Zhang, Hong
Cc: Anton Glazkov, petsc-users at mcs.anl.gov
Subject: Re: [petsc-users] TSAdjoint multilevel checkpointing running out of memory

On Tue, Dec 8, 2020 at 6:47 PM Zhang, Hong via petsc-users wrote:

Anton,

TSAdjoint should manage checkpointing automatically, and the number of checkpoints in RAM and disk should not exceed the user-specified values. Can you send us the output for -ts_trajectory_monitor in your case?

One other thing. It is always possible to miscalculate RAM a little. If you set it to 4 checkpoints, does it complete?

Thanks,
Matt

Hong (Mr.)

On Dec 8, 2020, at 3:37 PM, Anton Glazkov wrote:

Good evening,

I'm attempting to run a multilevel checkpointing code on a cluster (i.e. RAM+disk storage with --download-revolve as a configure option) with the options "-ts_trajectory_type memory -ts_trajectory_max_cps_ram 5 -ts_trajectory_max_cps_disk 5000", for example. My question is: if I have 100,000 time points, for example, that need to be evaluated during the forward and adjoint run, does TSAdjoint automatically optimize the checkpointing so that the numbers of checkpoints in RAM and on disk do not exceed these values, or is one of the options ignored? I ask because I have a case that runs correctly with -ts_trajectory_type basic, but runs out of memory when attempting to fill the checkpoints in RAM when running the adjoint (I have verified that 5 checkpoints will actually fit into the available memory). This makes me think that maybe the -ts_trajectory_max_cps_ram 5 option is being ignored?

Best wishes,
Anton

--
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/

From gohardoust at gmail.com Mon Dec 21 12:51:31 2020
From: gohardoust at gmail.com (Mohammad Gohardoust)
Date: Mon, 21 Dec 2020 11:51:31 -0700
Subject: [petsc-users] How to set PC sub type
Message-ID:

Hi,

My question seems to be simple, but I couldn't find the answer to it. I know that for the option -pc_type there is an equivalent PCSetType() function, but how about -sub_pc_type? How can I implement "-pc_type bjacobi -sub_pc_type icc" in my code?

Thanks,
Mohammad

From balay at mcs.anl.gov Mon Dec 21 12:54:49 2020
From: balay at mcs.anl.gov (Satish Balay)
Date: Mon, 21 Dec 2020 12:54:49 -0600 (CST)
Subject: [petsc-users] How to set PC sub type
In-Reply-To:
References:
Message-ID:

Check PCBJacobiGetSubKSP() usage - for an example, see src/ksp/ksp/tutorials/ex7.c

Satish

On Mon, 21 Dec 2020, Mohammad Gohardoust wrote:

> Hi,
>
> My question seems to be simple but I couldn't find the answer to it. I know
> that for the option -pc_type, there is an equivalent "PCSetType" function,
> but how about -sub_pc_type? How can I implement "-pc_type
> bjacobi -sub_pc_type icc" in my code?
>
> Thanks,
> Mohammad
>
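A minimal C sketch of the pattern used in ex7.c: set up the outer solver first, then grab the local sub-KSPs and change their PC type. Here ksp is assumed to be an already-created KSP with its operators set:

    KSP           *subksp;   /* array of local sub-solvers, owned by the PC */
    PC             pc, subpc;
    PetscInt       nlocal, first, i;
    PetscErrorCode ierr;

    ierr = KSPGetPC(ksp,&pc);CHKERRQ(ierr);
    ierr = PCSetType(pc,PCBJACOBI);CHKERRQ(ierr);   /* equivalent of -pc_type bjacobi */
    ierr = KSPSetUp(ksp);CHKERRQ(ierr);             /* sub-KSPs exist only after setup */
    ierr = PCBJacobiGetSubKSP(pc,&nlocal,&first,&subksp);CHKERRQ(ierr);
    for (i = 0; i < nlocal; i++) {                  /* equivalent of -sub_pc_type icc */
      ierr = KSPGetPC(subksp[i],&subpc);CHKERRQ(ierr);
      ierr = PCSetType(subpc,PCICC);CHKERRQ(ierr);
    }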
From bsmith at petsc.dev Mon Dec 21 16:12:06 2020
From: bsmith at petsc.dev (Barry Smith)
Date: Mon, 21 Dec 2020 16:12:06 -0600
Subject: [petsc-users] Parsing user-defined variables using YAML
In-Reply-To: <94DE0643-38C2-4B6A-B625-1CF8E782BB50@lsu.edu>
References: <94DE0643-38C2-4B6A-B625-1CF8E782BB50@lsu.edu>
Message-ID: <39EB2A0C-DBC7-4E6F-85A7-38ADF8DDECBA@petsc.dev>

You can make up your own options and provide them in YAML format, then query for them in your PETSc code using the routines Blaise pointed out. There is no restriction that the arguments need to be only pre-defined PETSc options.

Barry

As documented, the YAML format supported is very simple.

> On Dec 20, 2020, at 10:56 AM, Blaise A Bourdin wrote:
>
> Have a look at the options-related functions in the system section of the man pages: https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Sys/index.html
> You can get options from the PETSc option database and pass them to your user code.
> You can also use PetscBag to pass a C struct or a Fortran derived type to your user code.
>
> Regards,
> Blaise
>
>> On Dec 20, 2020, at 10:03 AM, sthavishtha bhopalam wrote:
>>
>> Hello PETSc users!
>>
>> I wanted to know how one could parse user-defined variables (not the inbuilt PETSc options) in a PETSc code using YAML? Any suggestions? To my knowledge, the existing YAML parser in PETSc solely reads the PETSc options like snes, ksp, pc etc. as described here (https://lists.mcs.anl.gov/mailman/htdig/petsc-users/2020-August/042074.html) but is incapable of reading user-defined variables, which may be specific to a code.
>>
>> Thanks
>> -------------------------------------------
>> Regards
>>
>> Sthavishtha
>
> --
> A.K. & Shirley Barton Professor of Mathematics
> Adjunct Professor of Mechanical Engineering
> Adjunct of the Center for Computation & Technology
> Louisiana State University, Lockett Hall Room 344, Baton Rouge, LA 70803, USA
> Tel. +1 (225) 578 1612, Fax +1 (225) 578 4276 Web http://www.math.lsu.edu/~bourdin

From hongzhang at anl.gov Mon Dec 21 22:16:28 2020
From: hongzhang at anl.gov (Zhang, Hong)
Date: Tue, 22 Dec 2020 04:16:28 +0000
Subject: [petsc-users] Support for full jacobianP in TSSetIJacobianP
In-Reply-To: <5EB41847-FE48-4618-90CE-8F8853303BEF@llnl.gov>
References: <449B7337-595D-439B-8EDB-C719CD2D91BD@llnl.gov> <4BA69262-B9E6-4D18-8705-55DEF978F965@anl.gov> <4BDCA608-0433-424C-B817-80BAD1AD904D@anl.gov> <5EB41847-FE48-4618-90CE-8F8853303BEF@llnl.gov>
Message-ID:

Thank you for providing the example. This is very helpful. Sorry that I was not accurate about what should be in IJacobianP. With the current API, a little hack is needed to get it to work. In IJacobianP, we have to provide shift*M_P*dt if we expand the formula in the paper to accommodate parameterized mass matrices. So I changed your code as follows:

    if self.deriv == "c":
        dt = ts.getTimeStep()  # dt is negative in the backward run
        Jp[0, 0] = -shift*udot[0]*dt

I noticed that there is some problem with the input variable Xdot and have been working on a fix. But as a quick workaround, you can use backward Euler with the following options before the fix is ready:

-ts_type beuler -ts_trajectory_type memory -ts_trajectory_solution_only

Thanks,
Hong
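To make the workaround concrete, a hedged C sketch of the same callback under this pre-fix convention, for the scalar problem F(Udot,U,c,t) = c*Udot - b*U discussed below (so M_P = Udot); the callback signature (TS, t, U, Udot, shift, Jp, ctx) is assumed to match what TSSetIJacobianP() expects:

    /* Sketch of the pre-fix workaround Hong describes: the IJacobianP entry
       holds shift*M_P*dt. For F = c*Udot - b*U the parameterized mass matrix
       gives M_P = Udot, mirroring the petsc4py edit Jp[0,0] = -shift*udot[0]*dt. */
    static PetscErrorCode IJacobianP_c(TS ts, PetscReal t, Vec U, Vec Udot, PetscReal shift, Mat Jp, void *ctx)
    {
      const PetscScalar *udot;
      PetscReal          dt;
      PetscErrorCode     ierr;

      PetscFunctionBeginUser;
      ierr = TSGetTimeStep(ts,&dt);CHKERRQ(ierr);   /* dt is negative in the backward run */
      ierr = VecGetArrayRead(Udot,&udot);CHKERRQ(ierr);
      ierr = MatSetValue(Jp,0,0,-shift*udot[0]*dt,INSERT_VALUES);CHKERRQ(ierr);
      ierr = VecRestoreArrayRead(Udot,&udot);CHKERRQ(ierr);
      ierr = MatAssemblyBegin(Jp,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
      ierr = MatAssemblyEnd(Jp,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
      PetscFunctionReturn(0);
    }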
On Dec 20, 2020, at 3:28 PM, Salazar De Troya, Miguel wrote:

Hello Hong,

Thank you. My apologies for rushing to blame the API instead of looking at my own code. I've put together a minimum example in petsc4py that I am attaching to this email. Here I am solving the simple ODE c * xdot = b * x(t) with initial condition x(0) = a and the cost function J equal to the solution at the final time T, i.e. J = x(T). The analytical solution is x(t) = a * exp(b/c * t). In the example, there is the option to calculate the derivatives w.r.t. b or c via the keyword argument "deriv" passed to "SimpleODE". For b, the solver returns the correct derivatives (checked against the analytical expression), but this does not work for c. I might be building the wrong Jacobian that I pass to setIJacobianP. Could you please take a look at it?

Thanks
Miguel
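For reference, the analytical sensitivities that the script's output can be checked against follow directly from differentiating the exact solution (this is the check Miguel refers to; the expressions below are derived here, not taken from the attachment):

    J = x(T) = a\,e^{bT/c}, \qquad
    \frac{\partial J}{\partial b} = \frac{aT}{c}\,e^{bT/c}, \qquad
    \frac{\partial J}{\partial c} = -\frac{abT}{c^{2}}\,e^{bT/c}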
From sthavishthabr at gmail.com Tue Dec 22 00:05:42 2020
From: sthavishthabr at gmail.com (sthavishtha bhopalam)
Date: Tue, 22 Dec 2020 11:35:42 +0530
Subject: [petsc-users] Parsing user-defined variables using YAML
In-Reply-To: <39EB2A0C-DBC7-4E6F-85A7-38ADF8DDECBA@petsc.dev>
References: <94DE0643-38C2-4B6A-B625-1CF8E782BB50@lsu.edu> <39EB2A0C-DBC7-4E6F-85A7-38ADF8DDECBA@petsc.dev>
Message-ID:

Thank you Barry and Blaise for the suggestions.

-------------------------------------------
Regards

Sthavishtha Bhopalam Rajakumar
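Putting Barry's and Blaise's advice together: once the YAML file has been loaded into the options database, user-defined entries can be queried like any other option. A minimal C sketch; the option names -my_density and -my_material are hypothetical examples, not part of PETSc:

    PetscReal      density = 1.0;       /* default used if the YAML file does not set -my_density */
    char           material[64] = "steel";
    PetscBool      set;
    PetscErrorCode ierr;

    ierr = PetscOptionsGetReal(NULL,NULL,"-my_density",&density,&set);CHKERRQ(ierr);
    ierr = PetscOptionsGetString(NULL,NULL,"-my_material",material,sizeof(material),&set);CHKERRQ(ierr);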
From thibault.bridelbertomeu at gmail.com Tue Dec 22 05:49:40 2020
From: thibault.bridelbertomeu at gmail.com (Thibault Bridel-Bertomeu)
Date: Tue, 22 Dec 2020 12:49:40 +0100
Subject: [petsc-users] TS tutorial ex11 in Fortran
In-Reply-To:
References: <1EC72DF8-61BD-482B-B1B0-9427AB1ADF74@petsc.dev> <048DC071-33E5-4644-B1C8-DD598248923A@petsc.dev> <87tuspyaug.fsf@jedbrown.org> <1035FEDB-B297-4E71-8F0F-D44C72707105@petsc.dev> <39E9FBED-6D6B-41E4-983C-58465ACAB4D0@petsc.dev>
Message-ID:

Dear all,

I have hit two snags while implementing the missing wrappers necessary to transcribe ex11 to Fortran.

First is about the PetscDSAddBoundary wrapper, which I have written as follows:

static PetscErrorCode ourbocofunc(PetscReal time, const PetscReal *c, const PetscReal *n, const PetscScalar *a_xI, const PetscScalar *a_xG, void *ctx)
{
  PetscObjectUseFortranCallback((PetscDS)ctx, bocofunc,
                                (PetscReal*,const PetscReal*,const PetscReal*,const PetscScalar*,const PetscScalar*,void*,PetscErrorCode*),
                                (&time,c,n,a_xI,a_xG,ctx,&ierr));
}
static PetscErrorCode ourbocofunc_time(PetscReal time, const PetscReal *c, const PetscReal *n, const PetscScalar *a_xI, const PetscScalar *a_xG, void *ctx)
{
  PetscObjectUseFortranCallback((PetscDS)ctx, bocofunc_time,
                                (PetscReal*,const PetscReal*,const PetscReal*,const PetscScalar*,const PetscScalar*,void*,PetscErrorCode*),
                                (&time,c,n,a_xI,a_xG,ctx,&ierr));
}
PETSC_EXTERN void petscdsaddboundary_(PetscDS *prob, DMBoundaryConditionType *type, char *name, char *labelname, PetscInt *field, PetscInt *numcomps, PetscInt *comps,
                                      void (*bcFunc)(void),
                                      void (*bcFunc_t)(void),
                                      PetscInt *numids, const PetscInt *ids, void *ctx, PetscErrorCode *ierr,
                                      PETSC_FORTRAN_CHARLEN_T namelen, PETSC_FORTRAN_CHARLEN_T labelnamelen)
{
  char *newname, *newlabelname;
  FIXCHAR(name, namelen, newname);
  FIXCHAR(labelname, labelnamelen, newlabelname);
  *ierr = PetscObjectSetFortranCallback((PetscObject)*prob, PETSC_FORTRAN_CALLBACK_CLASS, &bocofunc, (PetscVoidFunction)bcFunc, ctx);
  *ierr = PetscObjectSetFortranCallback((PetscObject)*prob, PETSC_FORTRAN_CALLBACK_CLASS, &bocofunc_time, (PetscVoidFunction)bcFunc_t, ctx);
  *ierr = PetscDSAddBoundary(*prob, *type, newname, newlabelname, *field, *numcomps, comps,
                             (void (*)(void))ourbocofunc,
                             (void (*)(void))ourbocofunc_time,
                             *numids, ids, *prob);
  FREECHAR(name, newname);
  FREECHAR(labelname, newlabelname);
}

but when I call it in the program, with adequate routines, I obtain the following error:

[0]PETSC ERROR: --------------------- Error Message --------------------------------------------------------------
[0]PETSC ERROR: Corrupt argument: https://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind
[0]PETSC ERROR: Fortran callback not set on this object
[0]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting.
[0]PETSC ERROR: Petsc Development GIT revision: v3.14.2-297-gf36a7edeb8 GIT Date: 2020-12-18 04:42:53 +0000
[0]PETSC ERROR: ../../../bin/eulerian3D on a named macbook-pro-de-thibault.home by tbridel Sun Dec 20 15:05:15 2020
[0]PETSC ERROR: Configure options --with-clean=0 --prefix=/Users/tbridel/Documents/1-CODES/04-PETSC/build --with-make-np=2 --with-windows-graphics=0 --with-debugging=0 --download-fblaslapack --download-mpich-shared=0 --with-x=0 --with-pthread=0 --with-valgrind=0 --PETSC_ARCH=macosx --with-fc=/usr/local/Cellar/open-mpi/4.0.1_2/bin/mpifort --with-cc=/usr/local/Cellar/open-mpi/4.0.1_2/bin/mpicc --with-cxx=/usr/local/Cellar/open-mpi/4.0.1_2/bin/mpic++ --with-openmp=0 --download-hypre=yes --download-sowing=yes --download-metis=yes --download-parmetis=yes --download-triangle=yes --download-tetgen=yes --download-ctetgen=yes --download-p4est=yes --download-zlib=yes --download-c2html=yes --download-eigen=yes --download-pragmatic=yes --with-hdf5-dir=/usr/local/Cellar/hdf5/1.10.5_1 --with-cmake-dir=/usr/local/Cellar/cmake/3.15.3
[0]PETSC ERROR: #1 PetscObjectGetFortranCallback() line 258 in /Users/tbridel/Documents/1-CODES/04-PETSC/src/sys/objects/inherit.c
[0]PETSC ERROR: #2 ourbocofunc() line 141 in /Users/tbridel/Documents/1-CODES/59-EULERIAN3D/sources/petsc_wrapping/wrapper_petsc.c
[0]PETSC ERROR: #3 DMPlexInsertBoundaryValuesRiemann() line 989 in /Users/tbridel/Documents/1-CODES/04-PETSC/src/dm/impls/plex/plexfem.c
[0]PETSC ERROR: #4 DMPlexInsertBoundaryValues_Plex() line 1052 in /Users/tbridel/Documents/1-CODES/04-PETSC/src/dm/impls/plex/plexfem.c
[0]PETSC ERROR: #5 DMPlexInsertBoundaryValues() line 1142 in /Users/tbridel/Documents/1-CODES/04-PETSC/src/dm/impls/plex/plexfem.c
[0]PETSC ERROR: #6 DMPlexComputeResidual_Internal() line 4524 in /Users/tbridel/Documents/1-CODES/04-PETSC/src/dm/impls/plex/plexfem.c
[0]PETSC ERROR: #7 DMPlexTSComputeRHSFunctionFVM() line 74 in /Users/tbridel/Documents/1-CODES/04-PETSC/src/ts/utils/dmplexts.c
[0]PETSC ERROR: #8 ourdmtsrhsfunc() line 186 in /Users/tbridel/Documents/1-CODES/59-EULERIAN3D/sources/petsc_wrapping/wrapper_petsc.c
[0]PETSC ERROR: #9 TSComputeRHSFunction_DMLocal() line 105 in /Users/tbridel/Documents/1-CODES/04-PETSC/src/ts/utils/dmlocalts.c
[0]PETSC ERROR: #10 TSComputeRHSFunction() line 653 in /Users/tbridel/Documents/1-CODES/04-PETSC/src/ts/interface/ts.c
[0]PETSC ERROR: #11 TSSSPStep_RK_3() line 120 in /Users/tbridel/Documents/1-CODES/04-PETSC/src/ts/impls/explicit/ssp/ssp.c
[0]PETSC ERROR: #12 TSStep_SSP() line 208 in /Users/tbridel/Documents/1-CODES/04-PETSC/src/ts/impls/explicit/ssp/ssp.c
[0]PETSC ERROR: #13 TSStep() line 3757 in /Users/tbridel/Documents/1-CODES/04-PETSC/src/ts/interface/ts.c
[0]PETSC ERROR: #14 TSSolve() line 4154 in /Users/tbridel/Documents/1-CODES/04-PETSC/src/ts/interface/ts.c
[0]PETSC ERROR: #15 User provided function() line 0 in User file

Second is about the DMProjectFunction wrapper, which I have written as follows:

static PetscErrorCode ourdmprojfunc(PetscInt dim, PetscReal time, PetscReal x[], PetscInt Nf, PetscScalar u[], void *ctx)
{
  PetscObjectUseFortranCallback((DM)ctx, dmprojfunc,
                                (PetscInt*,PetscReal*,PetscReal*,PetscInt*,PetscScalar*,void*,PetscErrorCode*),
                                (&dim,&time,x,&Nf,u,_ctx,&ierr))
}
PETSC_EXTERN void dmprojectfunction_(DM *dm, PetscReal *time,
                                     void (*func)(PetscInt*,PetscReal*,PetscReal*,PetscInt*,PetscScalar*,void*,PetscErrorCode*),
                                     void *ctx, InsertMode *mode, Vec X, PetscErrorCode *ierr)
{
  PetscErrorCode (*funcarr[1])(PetscInt dim, PetscReal time, PetscReal x[], PetscInt Nf, PetscScalar *u, void *ctx);
  *ierr = PetscObjectSetFortranCallback((PetscObject)*dm, PETSC_FORTRAN_CALLBACK_CLASS, &dmprojfunc, (PetscVoidFunction)func, ctx);
  funcarr[0] = ourdmprojfunc;
  *ierr = DMProjectFunction(*dm, *time, funcarr, &ctx, *mode, X);
}
This time there is no error because I cannot reach this point in the program, but I am not sure anyway how to write this wrapper, especially because of the double pointers that DMProjectFunction takes as arguments.

Does anyone have any idea what could be going wrong with those two wrappers?

Thank you very much in advance!!

Thibault

On Fri, Dec 18, 2020 at 11:02, Thibault Bridel-Bertomeu <thibault.bridelbertomeu at gmail.com> wrote:
> Aah, that is a nice trick. I was getting ready to fork, clone the fork and redo the work, but that worked fine! Thank you Barry!
>
> The MR will appear in a little while!
>
> Thibault
>
> On Fri, Dec 18, 2020 at 10:16, Barry Smith wrote:
>>
>> Good question. There is a trick to limit the amount of work you need to do with a new fork after you have already made changes with a PETSc clone, but it looks like we do not document this clearly in the webpages. (I couldn't find it.)
>>
>> Yes, you do need to make a fork, but after you have made the fork on the GitLab website (and have done nothing on your machine), edit the file $PETSC_DIR/.git/config for your clone on your machine.
>>
>> Locate the line that has url = git at gitlab.com:petsc/petsc.git (this may have an https at the beginning of the line).
>>
>> Change this line to point to the fork url instead, with git@ not https, which will be pretty much the same URL but with your user id instead of petsc in the address. Then git push and it will push to your fork.
>>
>> Now your changes will be in your fork and you can make the MR from your fork URL on GitLab. (In other words, this editing trick converts your PETSc clone on your machine to a PETSc fork.)
>>
>> I hope I have explained this clearly enough that it goes smoothly.
>>
>> Barry
>>
>> On Dec 18, 2020, at 3:00 AM, Thibault Bridel-Bertomeu <thibault.bridelbertomeu at gmail.com> wrote:
>>
>> Hello Barry,
>>
>> I'll start the MR as soon as possible then, so that specialists can indeed have a look. Do I have to fork PETSc to start an MR, or are the PETSc repo settings such that I can push a branch from the PETSc clone I got?
>>
>> Thibault
>>
>> On Wed, Dec 16, 2020 at 07:47, Barry Smith wrote:
>>>
>>> Thibault,
>>>
>>> A subdirectory for the example is fine; we have other examples that use subdirectories and multiple files.
>>>
>>> Note: even if you don't have something completely working, you can still make an MR and list it as a DRAFT request for comments; some other PETSc members who understand the packages you are using and Fortran better than I do may be able to help as you develop the code.
>>>
>>> Barry
>>>
>>> On Dec 16, 2020, at 12:35 AM, Thibault Bridel-Bertomeu <thibault.bridelbertomeu at gmail.com> wrote:
>>>
>>> Hello everyone,
>>>
>>> Thank you Barry for the feedback. OK, yes, I'll work up an MR as soon as I have got something working. By the way, does the Fortran version of the example have to be a single file? If my push contains a directory with several files (different modules and the main), and the Makefile that goes with it, is that ok?
>>>
>>> Thibault Bridel-Bertomeu
>>>
>>> On Wed, Dec 16, 2020 at 04:46, Barry Smith wrote:
>>>>
>>>> This is great.
>>>> If you make a branch off of the PETSc git repository with these additions and work on ex11, you can make a merge request and we can run the code easily on all our test systems (for security reasons, one of us needs to launch the tests from your MR). https://docs.petsc.org/en/latest/developers/integration/
>>>>
>>>> Barry
>>>>
>>>> On Dec 15, 2020, at 5:35 AM, Thibault Bridel-Bertomeu <thibault.bridelbertomeu at gmail.com> wrote:
>>>>
>>>> Hello everyone,
>>>>
>>>> So far, I have the wrappers in the files attached to this e-mail. I still do not know if they work properly - at least the code compiles and the calls to the wrapped subroutines do not fail - but I wanted to put this here in case someone sees something really wrong with it already.
>>>>
>>>> Thank you again for your help. I'll try to post updates of the F90 version of ex11 regularly in this thread.
>>>>
>>>> Stay safe,
>>>>
>>>> Thibault Bridel-Bertomeu
>>>>
>>>> On Sun, Dec 13, 2020 at 16:39, Jed Brown wrote:
>>>>
>>>>> Thibault Bridel-Bertomeu writes:
>>>>>
>>>>> > Thank you Mark for your answer.
>>>>> >
>>>>> > I am not sure what you think could be in the setBC1 routine? How to make the connection with the PetscDS?
>>>>> >
>>>>> > On the other hand, I actually found after a while that TSMonitorSet has a Fortran wrapper, and it does take as arguments two function pointers, so I guess it is possible? Although I am not sure exactly how to play with the PetscObjectSetFortranCallback & PetscObjectUseFortranCallback macros - could anybody advise please?
>>>>>
>>>>> tsmonitorset_ is a good example to follow. In your file, create one of these static structs with a member for each callback. These are IDs that will be used as keys for Fortran callbacks and their contexts. The salient parts of the file are below.
>>>>>
>>>>> static struct {
>>>>>   PetscFortranCallbackId prestep;
>>>>>   PetscFortranCallbackId poststep;
>>>>>   PetscFortranCallbackId rhsfunction;
>>>>>   PetscFortranCallbackId rhsjacobian;
>>>>>   PetscFortranCallbackId ifunction;
>>>>>   PetscFortranCallbackId ijacobian;
>>>>>   PetscFortranCallbackId monitor;
>>>>>   PetscFortranCallbackId mondestroy;
>>>>>   PetscFortranCallbackId transform;
>>>>> #if defined(PETSC_HAVE_F90_2PTR_ARG)
>>>>>   PetscFortranCallbackId function_pgiptr;
>>>>> #endif
>>>>> } _cb;
>>>>>
>>>>> /*
>>>>>   Note ctx is the same as ts so we need to get the Fortran context out of the TS; this gets put in _ctx using the callback ID
>>>>> */
>>>>> static PetscErrorCode ourmonitor(TS ts,PetscInt i,PetscReal d,Vec v,void *ctx)
>>>>> {
>>>>>   PetscObjectUseFortranCallback(ts,_cb.monitor,(TS*,PetscInt*,PetscReal*,Vec*,void*,PetscErrorCode*),(&ts,&i,&d,&v,_ctx,&ierr));
>>>>> }
>>>>>
>>>>> Then follow as in tsmonitorset_, which sets two callbacks.
>>>>> PETSC_EXTERN void tsmonitorset_(TS *ts,void (*func)(TS*,PetscInt*,PetscReal*,Vec*,void*,PetscErrorCode*),void *mctx,void (*d)(void*,PetscErrorCode*),PetscErrorCode *ierr)
>>>>> {
>>>>>   CHKFORTRANNULLFUNCTION(d);
>>>>>   if ((PetscVoidFunction)func == (PetscVoidFunction)tsmonitordefault_) {
>>>>>     *ierr = TSMonitorSet(*ts,(PetscErrorCode (*)(TS,PetscInt,PetscReal,Vec,void*))TSMonitorDefault,*(PetscViewerAndFormat**)mctx,(PetscErrorCode (*)(void **))PetscViewerAndFormatDestroy);
>>>>>   } else {
>>>>>     *ierr = PetscObjectSetFortranCallback((PetscObject)*ts,PETSC_FORTRAN_CALLBACK_CLASS,&_cb.monitor,(PetscVoidFunction)func,mctx);
>>>>>     *ierr = PetscObjectSetFortranCallback((PetscObject)*ts,PETSC_FORTRAN_CALLBACK_CLASS,&_cb.mondestroy,(PetscVoidFunction)d,mctx);
>>>>>     *ierr = TSMonitorSet(*ts,ourmonitor,*ts,ourmonitordestroy);
>>>>>   }
>>>>> }

From salazardetro1 at llnl.gov Tue Dec 22 11:46:40 2020
From: salazardetro1 at llnl.gov (Salazar De Troya, Miguel)
Date: Tue, 22 Dec 2020 17:46:40 +0000
Subject: [petsc-users] Support for full jacobianP in TSSetIJacobianP
References: <449B7337-595D-439B-8EDB-C719CD2D91BD@llnl.gov> <4BA69262-B9E6-4D18-8705-55DEF978F965@anl.gov> <4BDCA608-0433-424C-B817-80BAD1AD904D@anl.gov> <5EB41847-FE48-4618-90CE-8F8853303BEF@llnl.gov>
Message-ID: <23470D92-538D-4796-BB9F-2DDEAD002A58@llnl.gov>

Thanks, Hong. Now it works! I can work with backward Euler for now. With regard to the other two options, I think -ts_trajectory_solution_only is ok because backward Euler does not have intermediate stages. With respect to -ts_trajectory_type memory, can I still do checkpointing to be able to solve larger problems?

I have also noticed that TSComputeIJacobianP() is only used by the theta methods. Are there plans to support higher-order methods?

Miguel

From bsmith at petsc.dev Tue Dec 22 14:20:25 2020
From: bsmith at petsc.dev (Barry Smith)
Date: Tue, 22 Dec 2020 14:20:25 -0600
Subject: [petsc-users] TS tutorial ex11 in Fortran
References: <1EC72DF8-61BD-482B-B1B0-9427AB1ADF74@petsc.dev> <048DC071-33E5-4644-B1C8-DD598248923A@petsc.dev> <87tuspyaug.fsf@jedbrown.org> <1035FEDB-B297-4E71-8F0F-D44C72707105@petsc.dev> <39E9FBED-6D6B-41E4-983C-58465ACAB4D0@petsc.dev>
Message-ID:

> PetscObjectUseFortranCallback((PetscDS)ctx,
> *ierr = PetscObjectSetFortranCallback((PetscObject)*prob

It looks like the problem is that these user-provided functions do not take a PetscDS directly as an argument, so the Fortran callback information cannot be obtained from them.

The manual page for PetscDSAddBoundary() says

- ctx - An optional user context for bcFunc

but then, when it lists the calling sequence for bcFunc, it does not list the ctx as an argument, so either the manual page or the code is wrong.

It looks like you make the ctx be the PetscDS prob argument when you call PetscDSAddBoundary. In principle this sounds like it might work. I think you need to track through the debugger to see if the ctx passed to ourbocofunc() is actually the PetscDS prob variable and, if not, why it is not.

Barry
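A hypothetical two-line instrumentation of the wrapper, as an alternative to stepping through the debugger, to check that the ctx reaching the callback is the same PetscDS that was registered (these print statements are additions for illustration, not part of Thibault's code):

    /* in petscdsaddboundary_(), just before calling PetscDSAddBoundary() */
    *ierr = PetscPrintf(PETSC_COMM_SELF,"registering callbacks on PetscDS %p\n",(void*)*prob);

    /* as the first statement of ourbocofunc(), before PetscObjectUseFortranCallback() */
    (void)PetscPrintf(PETSC_COMM_SELF,"ctx received by ourbocofunc: %p\n",ctx);

If the two addresses match, the callback lookup should find the registered Fortran callbacks; if they differ, the ctx is being replaced somewhere between registration and invocation.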
From hongzhang at anl.gov Tue Dec 22 14:35:02 2020
From: hongzhang at anl.gov (Zhang, Hong)
Date: Tue, 22 Dec 2020 20:35:02 +0000
Subject: [petsc-users] Support for full jacobianP in TSSetIJacobianP
In-Reply-To: <23470D92-538D-4796-BB9F-2DDEAD002A58@llnl.gov>
References: <449B7337-595D-439B-8EDB-C719CD2D91BD@llnl.gov> <4BA69262-B9E6-4D18-8705-55DEF978F965@anl.gov> <4BDCA608-0433-424C-B817-80BAD1AD904D@anl.gov> <5EB41847-FE48-4618-90CE-8F8853303BEF@llnl.gov> <23470D92-538D-4796-BB9F-2DDEAD002A58@llnl.gov>
Message-ID: <07967DE6-483D-40B4-9F48-B3387CED69DB@anl.gov>

Miguel,

You can now use my branch hongzh/support-parameterized-mass-matrix. It may take a few days or weeks to be merged. Your original script should work out of the box with any checkpointing scheme. Nothing needs to be changed. The IJacobianP is simply M_P*U_t.

Hong
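With that branch, the callback reduces to the formula Hong states. A hedged C sketch for the same scalar problem (F = c*Udot - b*U, so M_P = Udot), again assuming the callback signature used by TSSetIJacobianP():

    /* Sketch under the fixed convention: IJacobianP is simply M_P*U_t.
       For F = c*Udot - b*U, dF/dc = Udot, so the single entry is udot[0]. */
    static PetscErrorCode IJacobianP_c(TS ts, PetscReal t, Vec U, Vec Udot, PetscReal shift, Mat Jp, void *ctx)
    {
      const PetscScalar *udot;
      PetscErrorCode     ierr;

      PetscFunctionBeginUser;
      ierr = VecGetArrayRead(Udot,&udot);CHKERRQ(ierr);
      ierr = MatSetValue(Jp,0,0,udot[0],INSERT_VALUES);CHKERRQ(ierr);
      ierr = VecRestoreArrayRead(Udot,&udot);CHKERRQ(ierr);
      ierr = MatAssemblyBegin(Jp,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
      ierr = MatAssemblyEnd(Jp,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
      PetscFunctionReturn(0);
    }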
Thanks Miguel From: "Zhang, Hong" > Date: Thursday, December 17, 2020 at 7:25 PM To: "Salazar De Troya, Miguel" > Cc: Satish Balay via petsc-users > Subject: Re: [petsc-users] Support for full jacobianP in TSSetIJacobianP Hi Miguel, Thank you for the nice work. I do not understand what you propose to do here. What is the obstacle to using current TSSetIJacobianP() for the corner case you mentioned? Are you considering a case in which the mass matrix is parameterized, e.g. M(p) udot - f(t,u) = g(t,u) ? Thanks, Hong On Dec 17, 2020, at 3:38 PM, Salazar De Troya, Miguel via petsc-users > wrote: Hello, I am working on hooking up TSAdjoint with pyadjoint through the firedrake-ts interface (https://github.com/IvanYashchuk/firedrake-ts). I have done most of the implementation and now I am just testing for corner cases. One of them is when the design variable is multiplying the first derivative term. It would be the case ofF(Udot,U,P,t) = G(U,P,t) in https://www.mcs.anl.gov/petsc/petsc-master/docs/manualpages/Sensitivity/TSSetIJacobianP.html . I imagine that one could think of refactoring the ?P? in the left hand side to the right hand side, but this is not trivial when ?P? is a discontinuous field over the domain. I think it would be better to include the case of F(Udot,U,P,t) = G(U,P,t) in TSSetIJacobianP and I am volunteering to do it. Given the current implementation of TSAdjoint, is this something feasible? Thanks Miguel Miguel A. Salazar de Troya Postdoctoral Researcher, Lawrence Livermore National Laboratory B141 Rm: 1085-5 Ph: 1(925) 422-6411 -------------- next part -------------- An HTML attachment was scrubbed... URL: From salazardetro1 at llnl.gov Tue Dec 22 15:16:09 2020 From: salazardetro1 at llnl.gov (Salazar De Troya, Miguel) Date: Tue, 22 Dec 2020 21:16:09 +0000 Subject: [petsc-users] Support for full jacobianP in TSSetIJacobianP In-Reply-To: <07967DE6-483D-40B4-9F48-B3387CED69DB@anl.gov> References: <449B7337-595D-439B-8EDB-C719CD2D91BD@llnl.gov> <4BA69262-B9E6-4D18-8705-55DEF978F965@anl.gov> <4BDCA608-0433-424C-B817-80BAD1AD904D@anl.gov> <5EB41847-FE48-4618-90CE-8F8853303BEF@llnl.gov> <23470D92-538D-4796-BB9F-2DDEAD002A58@llnl.gov> <07967DE6-483D-40B4-9F48-B3387CED69DB@anl.gov> Message-ID: <83F70661-F68A-4D6A-B308-99F024B552A0@llnl.gov> Thanks Hong! Now the petsc4py script I sent works for theta methods as well. I will test it with firedrake-ts soon. Miguel From: "Zhang, Hong" Date: Tuesday, December 22, 2020 at 12:35 PM To: "Salazar De Troya, Miguel" Cc: Satish Balay via petsc-users Subject: Re: [petsc-users] Support for full jacobianP in TSSetIJacobianP Miguel, You can now use my branch hongzh/support-parameterized-mass-matrix. It may take a few days or weeks to be merged. Your original script should work out of box with any checkpointing scheme. Nothing needs to be changed. The IJacobianP is simply M_P*U_t. Hong On Dec 22, 2020, at 11:46 AM, Salazar De Troya, Miguel via petsc-users > wrote: Thanks, Hong. Now it works! I can work with backwards Euler for now. With regards to the other two options, I think -ts_trajectory_solution_only is ok because backwards euler does not have intermediate stage. With respect to -ts_trajectory_type memory, can I still do checkpointing to be able to solve larger problems? I have also noticed that TSComputeIJacobianP() is only used by the theta methods. Are there plans to support higher order methods? 
Miguel

From: "Zhang, Hong" >
Date: Monday, December 21, 2020 at 8:16 PM
To: "Salazar De Troya, Miguel" >
Cc: Satish Balay via petsc-users >
Subject: Re: [petsc-users] Support for full jacobianP in TSSetIJacobianP

Thank you for providing the example. This is very helpful. Sorry that I was not accurate about what should be in IJacobianP. With the current API, a little hack is needed to get it to work. In IJacobianP, we have to provide shift*M_P*dt if we expand the formula in the paper to accommodate parameterized mass matrices. So I changed your code as follows:

if self.deriv == "c":
    dt = ts.getTimeStep()  # dt is negative in the backward run
    Jp[0, 0] = -shift*udot[0]*dt

I noticed that there is some problem with the input variable Xdot and have been working on a fix. But as a quick workaround, you can use backward Euler with the following options before the fix is ready:
-ts_type beuler -ts_trajectory_type memory -ts_trajectory_solution_only

Thanks,
Hong

On Dec 20, 2020, at 3:28 PM, Salazar De Troya, Miguel > wrote:

Hello Hong,

Thank you. My apologies for rushing to blame the API instead of looking at my own code. I've put together a minimum example in petsc4py that I am attaching to this email. [...]

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
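To make the moving parts concrete, here is a minimal petsc4py sketch consistent with the snippets quoted in this thread. The attached simple-ode.py is not reproduced in the archive, so the class name, callback signatures, and setup calls below are assumptions rather than Miguel's actual script:

from petsc4py import PETSc

class SimpleODE:
    """c * xdot = b * x(t), x(0) = a; cost function J = x(T)."""
    def __init__(self, a=1.0, b=1.0, c=2.0, deriv="c"):
        self.a, self.b, self.c, self.deriv = a, b, c, deriv

    def evalIFunction(self, ts, t, x, xdot, f):
        # F(xdot, x, t) = c * xdot - b * x
        f[0] = self.c * xdot[0] - self.b * x[0]
        f.assemble()

    def evalIJacobian(self, ts, t, x, xdot, shift, J, P):
        # dF/dx + shift * dF/dxdot = shift * c - b
        P[0, 0] = shift * self.c - self.b
        P.assemble()

    def evalIJacobianP(self, ts, t, x, xdot, shift, Jp):
        if self.deriv == "b":
            Jp[0, 0] = -x[0]     # dF/db
        else:
            # mass-matrix parameter: with Hong's branch this is just
            # M_P * U_t; before the fix, the shift*dt scaling above applies
            Jp[0, 0] = xdot[0]
        Jp.assemble()

ode = SimpleODE()
f = PETSc.Vec().createSeq(1)
x = PETSc.Vec().createSeq(1)
J = PETSc.Mat().createDense([1, 1]); J.setUp()
Jp = PETSc.Mat().createDense([1, 1]); Jp.setUp()
lam = PETSc.Vec().createSeq(1)   # seeded with dJ/dx(T)
mu = PETSc.Vec().createSeq(1)    # returns dJ/dp

ts = PETSc.TS().create()
ts.setType(ts.Type.BEULER)       # per the workaround above
ts.setIFunction(ode.evalIFunction, f)
ts.setIJacobian(ode.evalIJacobian, J)
ts.setIJacobianP(ode.evalIJacobianP, Jp)
ts.setSaveTrajectory()
ts.setTime(0.0); ts.setTimeStep(0.01); ts.setMaxTime(1.0)

x[0] = ode.a                     # initial condition x(0) = a
ts.solve(x)
lam[0], mu[0] = 1.0, 0.0         # J = x(T) => dJ/dx(T) = 1
ts.setCostGradients([lam], [mu])
ts.adjointSolve()                # mu now holds dJ/dc (or dJ/db)

With a pre-fix PETSc, the "c" branch would instead need Hong's -shift*udot[0]*dt line together with the backward Euler and trajectory options quoted above.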
From h.liu at whu.edu.cn Wed Dec 23 02:30:53 2020
From: h.liu at whu.edu.cn (Hui LIU)
Date: Wed, 23 Dec 2020 16:30:53 +0800
Subject: [petsc-users] mapping from the original ordering to the PETSc/global ordering for the imported unstructured mesh
Message-ID: <000001d6d905$ee5b19e0$cb114da0$@whu.edu.cn>

Dear PETSc maintainers:

I am implementing a CPU-parallel finite element calculation on unstructured grids using the great PETSc platform. The unstructured grids are imported from other open-source software, for example, Gmsh.
Herein we call the node numbering for the mesh generated by Gmsh the original ordering, before distributing the mesh to each processor. After distributing this unstructured mesh to each process, is it possible to find the mapping between the original ordering (before distributing) and the PETSc/global ordering (after distributing)? If yes, how does one extract or find it? If no, could you give me some suggestions for better applying the displacement constraints and load boundary conditions?

Another question is how to distinguish the nodes subject to displacement boundary constraints from the nodes subject to external force loads.

Thank you very much. Looking forward to your reply.

Best regards,

Hui Liu
Wuhan University

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From knepley at gmail.com Wed Dec 23 11:07:23 2020
From: knepley at gmail.com (Matthew Knepley)
Date: Wed, 23 Dec 2020 12:07:23 -0500
Subject: [petsc-users] mapping from the original ordering to the PETSc/global ordering for the imported unstructured mesh
In-Reply-To: <000001d6d905$ee5b19e0$cb114da0$@whu.edu.cn>
References: <000001d6d905$ee5b19e0$cb114da0$@whu.edu.cn>
Message-ID: 

On Wed, Dec 23, 2020 at 11:29 AM Hui LIU wrote:

> Dear PETSc maintainers:
>
> I am implementing a CPU-parallel finite element calculation on unstructured grids using the great PETSc platform.
>
> The unstructured grids are imported from other open-source software, for example, Gmsh.
>
> Herein we call the node numbering for the mesh generated by Gmsh the original ordering, before distributing the mesh to each processor.
>
> After distributing this unstructured mesh to each process, is it possible to find the mapping between the original ordering (before distributing) and the PETSc/global ordering (after distributing)?
>
> If yes, how does one extract or find it?
>

It is possible using https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/DMPLEX/DMPlexNaturalToGlobalBegin.html

However, I do not recommend you do this. It is not scalable and makes your code fragile.

> If no, could you give me some suggestions for better applying the displacement constraints and load boundary conditions?
>

Sure. You should mark your boundaries with a marker that indicates the kind of condition to apply. This is easy in GMsh, and those markers will be translated to DMLabel objects when you import the mesh. These markers will be preserved when the mesh is distributed, or redistributed for load balance.

> Another question is how to distinguish the nodes subject to displacement boundary constraints from the nodes subject to external force loads.
>

Use a different marker.

  Thanks,

     Matt

> Thank you very much. Looking forward to your reply.
>
> Best regards,
>
> Hui Liu
>
> Wuhan University
>

-- 
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
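As a short petsc4py illustration of the marker-based approach Matt describes: the file name mesh.msh and the marker values 1 and 2 are placeholders, and PETSc's Gmsh reader stores physical surfaces in the "Face Sets" label (physical volumes go to "Cell Sets").

from petsc4py import PETSc

dm = PETSc.DMPlex().createFromFile("mesh.msh")  # Gmsh physical groups become DMLabels
dm.distribute(overlap=0)                        # labels survive (re)distribution

# One marker value per kind of boundary condition, assigned in Gmsh:
dirichlet_points = dm.getStratumIS("Face Sets", 1)  # displacement-constrained faces
loaded_points    = dm.getStratumIS("Face Sets", 2)  # faces carrying external force loads

Each rank queries the label locally after distribution, so no mapping back to the pre-distribution ordering is needed.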
From salazardetro1 at llnl.gov Sun Dec 27 17:01:08 2020
From: salazardetro1 at llnl.gov (Salazar De Troya, Miguel)
Date: Sun, 27 Dec 2020 23:01:08 +0000
Subject: [petsc-users] Calculating adjoint of more than one cost function separately
Message-ID: <4EB04688-BF1C-4464-A040-CA2FEDE7968C@llnl.gov>

Hello,

I am interested in calculating the gradients of an optimization problem with one goal function and one constraint function, both of which need TSAdjoint for their adjoints. I'd like to call each of their adjoints in different calls, but it does not seem to be possible without making compromises.

For instance, one could set TSCreateQuadratureTS() and TSSetCostGradients() with different quadratures (and their gradients) for each adjoint call (one at a time). This would evaluate the cost functions in the backward run though, whereas one typically computes the cost functions in a different routine than the adjoint call (like in line-search evaluations).

One could also set TSCreateQuadratureTS() with the goal and the constraint functions to be evaluated in the forward run (as typically done when computing the cost function). The problem would be that the adjoint call now requires two sets of gradients for TSSetCostGradients(), and their adjoints are calculated together, costing twice as much if your routines for the cost and the constraint gradients are separate.

The only solution I can think of is to set TSCreateQuadratureTS() with both the goal and constraint functions in the forward run. Then, in the adjoint calls, reset TSCreateQuadratureTS() with just the cost function I am interested in (either the goal or the constraint) and set just a single TSSetCostGradients(). Will this work? Are there better alternatives?

Even if successful, there is the problem that the trajectory goes back to the beginning when we perform a TSAdjointSolve() call. Subsequent calls to TSAdjointSolve() (for instance for another cost function) are invalid because the trajectory is not set at the end of the simulation. One needs to call the forward problem to bring it back to the end. Is there a quick way to set the trajectory state to the last time step without having to run the forward problem?

I am attaching an example to illustrate this issue. One can uncomment lines 120-122 to obtain the right value of the derivative.

Thanks
Miguel

Miguel A. Salazar de Troya
Postdoctoral Researcher, Lawrence Livermore National Laboratory
B141
Rm: 1085-5
Ph: 1(925) 422-6411
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: simple-ode.py
Type: text/x-python-script
Size: 2995 bytes
Desc: simple-ode.py
URL: 
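In petsc4py, the sequencing Miguel describes might look like the following sketch; ts, u0, T, and the seed vectors are assumed to be set up elsewhere, and this only illustrates the rerun-the-forward workaround, not a recommended design:

# One adjoint per functional, re-running the forward solve before each
# TSAdjointSolve() so the trajectory again ends at the final time.
ts.setSaveTrajectory()

for lam, mu in ((lam_goal, mu_goal), (lam_constraint, mu_constraint)):
    u = u0.copy()
    ts.setTime(0.0)
    ts.setStepNumber(0)          # restart the forward run
    ts.setMaxTime(T)
    ts.solve(u)                  # leaves the trajectory at t = T
    ts.setCostGradients([lam], [mu])
    ts.adjointSolve()            # lam, mu now hold this functional's sensitivities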
From thibault.bridelbertomeu at gmail.com Mon Dec 28 04:32:31 2020
From: thibault.bridelbertomeu at gmail.com (Thibault Bridel-Bertomeu)
Date: Mon, 28 Dec 2020 11:32:31 +0100
Subject: [petsc-users] TS tutorial ex11 in Fortran
In-Reply-To: 
References: <1EC72DF8-61BD-482B-B1B0-9427AB1ADF74@petsc.dev> <048DC071-33E5-4644-B1C8-DD598248923A@petsc.dev> <87tuspyaug.fsf@jedbrown.org> <1035FEDB-B297-4E71-8F0F-D44C72707105@petsc.dev> <39E9FBED-6D6B-41E4-983C-58465ACAB4D0@petsc.dev>
Message-ID: 

Good morning everyone,

Thank you Barry for the answer, it works now!

I am facing (yet) another situation: the TSAdaptRegister function. In the MR on gitlab, Jed mentioned that sometimes, when function pointers are not stored in PETSc objects, one can use stack memory to pass that pointer from Fortran to C. Can anyone develop that idea? Because for TSAdaptRegister, I guess the wrapper would start like:

PETSC_EXTERN void tsadaptregister_(char *sname,
                                   void (*func)(TSAdapt*,PetscErrorCode*),
                                   PetscErrorCode *ierr,
                                   PETSC_FORTRAN_CHARLEN_T snamelen)

but then the C TSAdaptRegister function takes a PetscErrorCode (*func)(TSAdapt) function pointer as argument ... I cannot use any FORTRAN_CALLBACK here since I do not have any object to hook it to, and I could not find a similar situation among the pre-existing wrappers. Does anyone have an idea on how to proceed?

Thanks!!

Thibault

On Tue, Dec 22, 2020 at 21:20, Barry Smith wrote:

>
>    PetscObjectUseFortranCallback((PetscDS)ctx,
>
>    *ierr = PetscObjectSetFortranCallback((PetscObject)*prob
>
>   It looks like the problem is that these user-provided functions do not take a PetscDS directly as an argument, so the Fortran callback information cannot be obtained from them.
>
>   The manual page for PetscDSAddBoundary() says
>
> - ctx - An optional user context for bcFunc
>
> but then when it lists the calling sequence for bcFunc it does not list the ctx as an argument, so either the manual page or the code is wrong.
>
>   It looks like you make the ctx be the PetscDS prob argument when you call PetscDSAddBoundary.
>
>   In principle this sounds like it might work. I think you need to track through the debugger to see if the ctx passed to ourbocofunc() is actually the PetscDS prob variable and, if not, why it is not.
>
>   Barry
>
> On Dec 22, 2020, at 5:49 AM, Thibault Bridel-Bertomeu < > thibault.bridelbertomeu at gmail.com> wrote:
>
> Dear all,
>
> I have hit two snags while implementing the missing wrappers necessary to transcribe ex11 to Fortran.
>
> First is about the PetscDSAddBoundary wrapper, which I have done as follows:
>
> static PetscErrorCode ourbocofunc(PetscReal time, const PetscReal *c, const PetscReal *n, const PetscScalar *a_xI, const PetscScalar *a_xG, void *ctx)
> {
>     PetscObjectUseFortranCallback((PetscDS)ctx, bocofunc,
>                                   (PetscReal*,const PetscReal*,const PetscReal*,const PetscScalar*,const PetscScalar*,void*,PetscErrorCode*),
>                                   (&time,c,n,a_xI,a_xG,ctx,&ierr));
> }
> static PetscErrorCode ourbocofunc_time(PetscReal time, const PetscReal *c, const PetscReal *n, const PetscScalar *a_xI, const PetscScalar *a_xG, void *ctx)
> {
>     PetscObjectUseFortranCallback((PetscDS)ctx, bocofunc_time,
>                                   (PetscReal*,const PetscReal*,const PetscReal*,const PetscScalar*,const PetscScalar*,void*,PetscErrorCode*),
>                                   (&time,c,n,a_xI,a_xG,ctx,&ierr));
> }
> PETSC_EXTERN void petscdsaddboundary_(PetscDS *prob, DMBoundaryConditionType *type, char *name, char *labelname, PetscInt *field, PetscInt *numcomps, PetscInt *comps,
>                                       void (*bcFunc)(void),
>                                       void (*bcFunc_t)(void),
>                                       PetscInt *numids, const PetscInt *ids, void *ctx, PetscErrorCode *ierr,
>                                       PETSC_FORTRAN_CHARLEN_T namelen, PETSC_FORTRAN_CHARLEN_T labelnamelen)
> {
>     char *newname, *newlabelname;
>     FIXCHAR(name, namelen, newname);
>     FIXCHAR(labelname, labelnamelen, newlabelname);
>     *ierr = PetscObjectSetFortranCallback((PetscObject)*prob, PETSC_FORTRAN_CALLBACK_CLASS, &bocofunc, (PetscVoidFunction)bcFunc, ctx);
>     *ierr = PetscObjectSetFortranCallback((PetscObject)*prob, PETSC_FORTRAN_CALLBACK_CLASS, &bocofunc_time, (PetscVoidFunction)bcFunc_t, ctx);
>     *ierr = PetscDSAddBoundary(*prob, *type, newname, newlabelname, *field, *numcomps, comps,
>                                (void (*)(void))ourbocofunc,
>                                (void (*)(void))ourbocofunc_time,
>                                *numids, ids, *prob);
>     FREECHAR(name, newname);
>     FREECHAR(labelname, newlabelname);
> }
>
> but
when I call it in the program, with adequate routines, I obtain the following error:

[0]PETSC ERROR: --------------------- Error Message --------------------------------------------------------------
[0]PETSC ERROR: Corrupt argument: https://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind
[0]PETSC ERROR: Fortran callback not set on this object
[0]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting.
[0]PETSC ERROR: Petsc Development GIT revision: v3.14.2-297-gf36a7edeb8  GIT Date: 2020-12-18 04:42:53 +0000
[0]PETSC ERROR: ../../../bin/eulerian3D on a named macbook-pro-de-thibault.home by tbridel Sun Dec 20 15:05:15 2020
[0]PETSC ERROR: Configure options --with-clean=0 --prefix=/Users/tbridel/Documents/1-CODES/04-PETSC/build --with-make-np=2 --with-windows-graphics=0 --with-debugging=0 --download-fblaslapack --download-mpich-shared=0 --with-x=0 --with-pthread=0 --with-valgrind=0 --PETSC_ARCH=macosx --with-fc=/usr/local/Cellar/open-mpi/4.0.1_2/bin/mpifort --with-cc=/usr/local/Cellar/open-mpi/4.0.1_2/bin/mpicc --with-cxx=/usr/local/Cellar/open-mpi/4.0.1_2/bin/mpic++ --with-openmp=0 --download-hypre=yes --download-sowing=yes --download-metis=yes --download-parmetis=yes --download-triangle=yes --download-tetgen=yes --download-ctetgen=yes --download-p4est=yes --download-zlib=yes --download-c2html=yes --download-eigen=yes --download-pragmatic=yes --with-hdf5-dir=/usr/local/Cellar/hdf5/1.10.5_1 --with-cmake-dir=/usr/local/Cellar/cmake/3.15.3
[0]PETSC ERROR: #1 PetscObjectGetFortranCallback() line 258 in /Users/tbridel/Documents/1-CODES/04-PETSC/src/sys/objects/inherit.c
[0]PETSC ERROR: #2 ourbocofunc() line 141 in /Users/tbridel/Documents/1-CODES/59-EULERIAN3D/sources/petsc_wrapping/wrapper_petsc.c
[0]PETSC ERROR: #3 DMPlexInsertBoundaryValuesRiemann() line 989 in /Users/tbridel/Documents/1-CODES/04-PETSC/src/dm/impls/plex/plexfem.c
[0]PETSC ERROR: #4 DMPlexInsertBoundaryValues_Plex() line 1052 in /Users/tbridel/Documents/1-CODES/04-PETSC/src/dm/impls/plex/plexfem.c
[0]PETSC ERROR: #5 DMPlexInsertBoundaryValues() line 1142 in /Users/tbridel/Documents/1-CODES/04-PETSC/src/dm/impls/plex/plexfem.c
[0]PETSC ERROR: #6 DMPlexComputeResidual_Internal() line 4524 in /Users/tbridel/Documents/1-CODES/04-PETSC/src/dm/impls/plex/plexfem.c
[0]PETSC ERROR: #7 DMPlexTSComputeRHSFunctionFVM() line 74 in /Users/tbridel/Documents/1-CODES/04-PETSC/src/ts/utils/dmplexts.c
[0]PETSC ERROR: #8 ourdmtsrhsfunc() line 186 in /Users/tbridel/Documents/1-CODES/59-EULERIAN3D/sources/petsc_wrapping/wrapper_petsc.c
[0]PETSC ERROR: #9 TSComputeRHSFunction_DMLocal() line 105 in /Users/tbridel/Documents/1-CODES/04-PETSC/src/ts/utils/dmlocalts.c
[0]PETSC ERROR: #10 TSComputeRHSFunction() line 653 in /Users/tbridel/Documents/1-CODES/04-PETSC/src/ts/interface/ts.c
[0]PETSC ERROR: #11 TSSSPStep_RK_3() line 120 in /Users/tbridel/Documents/1-CODES/04-PETSC/src/ts/impls/explicit/ssp/ssp.c
[0]PETSC ERROR: #12 TSStep_SSP() line 208 in /Users/tbridel/Documents/1-CODES/04-PETSC/src/ts/impls/explicit/ssp/ssp.c
[0]PETSC ERROR: #13 TSStep() line 3757 in /Users/tbridel/Documents/1-CODES/04-PETSC/src/ts/interface/ts.c
[0]PETSC ERROR: #14 TSSolve() line 4154 in /Users/tbridel/Documents/1-CODES/04-PETSC/src/ts/interface/ts.c
[0]PETSC ERROR: #15 User provided function() line 0 in User file

> Second is about the DMProjectFunction wrapper, which I have done as follows:
>
> static PetscErrorCode ourdmprojfunc(PetscInt dim, PetscReal time, PetscReal x[], PetscInt Nf, PetscScalar u[], void *ctx)
> {
>     PetscObjectUseFortranCallback((DM)ctx, dmprojfunc,
>                                   (PetscInt*,PetscReal*,PetscReal*,PetscInt*,PetscScalar*,void*,PetscErrorCode*),
>                                   (&dim,&time,x,&Nf,u,_ctx,&ierr))
> }
> PETSC_EXTERN void dmprojectfunction_(DM *dm, PetscReal *time,
>                                      void (*func)(PetscInt*,PetscReal*,PetscReal*,PetscInt*,PetscScalar*,void*,PetscErrorCode*),
>                                      void *ctx, InsertMode *mode, Vec X, PetscErrorCode *ierr)
> {
>     PetscErrorCode (*funcarr[1]) (PetscInt dim, PetscReal time, PetscReal x[], PetscInt Nf, PetscScalar *u, void *ctx);
>     *ierr = PetscObjectSetFortranCallback((PetscObject)*dm, PETSC_FORTRAN_CALLBACK_CLASS, &dmprojfunc, (PetscVoidFunction)func, ctx);
>     funcarr[0] = ourdmprojfunc;
>     *ierr = DMProjectFunction(*dm, *time, funcarr, &ctx, *mode, X);
> }
>
> This time there is no error because I cannot reach this point in the program, but I am not sure anyway how to write this wrapper, especially because of the double pointers that DMProjectFunction takes as arguments.
>
> Does anyone have any idea what could be going wrong with those two wrappers?
>
> Thank you very much in advance!!
>
> Thibault
>
> On Fri, Dec 18, 2020 at 11:02, Thibault Bridel-Bertomeu < > thibault.bridelbertomeu at gmail.com> wrote:
>
>> Aah that is a nice trick, I was getting ready to fork, clone the fork and redo the work, but that worked fine! Thank you Barry!
>>
>> The MR will appear in a little while!
>>
>> Thibault
>>
>> On Fri, Dec 18, 2020 at 10:16, Barry Smith wrote:
>>
>>>    Good question. There is a trick to limit the amount of work you need to do with a new fork after you have already made changes with a PETSc clone, but it looks like we do not document this clearly in the webpages. (I couldn't find it.)
>>>
>>>    Yes, you do need to make a fork, but after you have made the fork on the GitLab website (and have done nothing on your machine) edit the file $PETSC_DIR/.git/config for your clone on your machine.
>>>
>>>    Locate the line that has url = git at gitlab.com:petsc/petsc.git (this may have an https at the beginning of the line).
>>>
>>>    Change this line to point to the fork url instead, with git@ not https, which will be pretty much the same URL but with your user id instead of petsc in the address. Then git push and it will push to your fork.
>>>
>>>    Now your changes will be in your fork and you can make the MR from your fork URL on GitLab. (In other words, this editing trick converts your PETSc clone on your machine to a PETSc fork.)
>>>
>>>    I hope I have explained this clearly enough that it goes smoothly.
>>>
>>>    Barry
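Concretely, the edited stanza in $PETSC_DIR/.git/config would look something like this, with USER standing in for your GitLab username (the fetch line is left as git wrote it):

[remote "origin"]
        url = git@gitlab.com:USER/petsc.git
        fetch = +refs/heads/*:refs/remotes/origin/*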
>>> On Dec 18, 2020, at 3:00 AM, Thibault Bridel-Bertomeu < >>> thibault.bridelbertomeu at gmail.com> wrote:
>>>
>>> Hello Barry,
>>>
>>> I'll start the MR as soon as possible then, so that specialists can indeed have a look. Do I have to fork PETSc to start an MR, or are the PETSc repo settings such that I can push a branch from the PETSc clone I got?
>>>
>>> Thibault
>>>
>>> On Wed, Dec 16, 2020 at 07:47, Barry Smith wrote:
>>>
>>>>    Thibault,
>>>>
>>>>    A subdirectory for the example is fine; we have other examples that use subdirectories and multiple files.
>>>>
>>>>    Note: even if you don't have something completely working you can still make an MR and list it as a DRAFT request for comments; some other PETSc members who understand the packages you are using and Fortran better than I may be able to help as you develop the code.
>>>>
>>>>    Barry
>>>>
>>>> On Dec 16, 2020, at 12:35 AM, Thibault Bridel-Bertomeu < >>>> thibault.bridelbertomeu at gmail.com> wrote:
>>>>
>>>> Hello everyone,
>>>>
>>>> Thank you Barry for the feedback.
>>>> OK, yes, I'll work up an MR as soon as I have got something working. By the way, does the Fortran version of the example have to be a single file? If my push contains a directory with several files (different modules and the main), and the Makefile that goes with it, is that ok?
>>>>
>>>> Thibault Bridel-Bertomeu
>>>>
>>>> On Wed, Dec 16, 2020 at 04:46, Barry Smith wrote:
>>>>
>>>>>    This is great. If you make a branch off of the PETSc git repository with these additions and work on ex11 you can make a merge request and we can run the code easily on all our test systems (for security reasons one of us needs to launch the tests from your MR). https://docs.petsc.org/en/latest/developers/integration/
>>>>>
>>>>>    Barry
>>>>>
>>>>> On Dec 15, 2020, at 5:35 AM, Thibault Bridel-Bertomeu < >>>>> thibault.bridelbertomeu at gmail.com> wrote:
>>>>>
>>>>> Hello everyone,
>>>>>
>>>>> So far, I have the wrappers in the files attached to this e-mail. I still do not know if they work properly - at least the code compiles and the calls to the wrapped subroutines do not fail - but I wanted to put this here in case someone sees something really wrong with it already.
>>>>>>
>>>>>> Thank you again for your help, I'll try to post updates of the F90 version of ex11 regularly in this thread.
>>>>>>
>>>>>> Stay safe,
>>>>>>
>>>>>> Thibault Bridel-Bertomeu
>>>>>>
>>>>>> On Sun, Dec 13, 2020 at 16:39, Jed Brown wrote:
>>>>>>
>>>>>>> Thibault Bridel-Bertomeu writes:
>>>>>>>
>>>>>>> > Thank you Mark for your answer.
>>>>>>> >
>>>>>>> > I am not sure what you think could be in the setBC1 routine? How to make
>>>>>>> > the connection with the PetscDS?
>>>>>>> >
>>>>>>> > On the other hand, I actually found after a while that TSMonitorSet has a
>>>>>>> > Fortran wrapper, and it does take as arguments two function pointers, so I
>>>>>>> > guess it is possible? Although I am not sure exactly how to play with the
>>>>>>> > PetscObjectSetFortranCallback & PetscObjectUseFortranCallback macros -
>>>>>>> > could anybody advise please?
>>>>>>>
>>>>>>> tsmonitorset_ is a good example to follow. In your file, create one of these static structs with a member for each callback. These are IDs that will be used as keys for Fortran callbacks and their contexts. The salient parts of the file are below.
>>>>>>>
>>>>>> static struct {
>>>>>>   PetscFortranCallbackId prestep;
>>>>>>   PetscFortranCallbackId poststep;
>>>>>>   PetscFortranCallbackId rhsfunction;
>>>>>>   PetscFortranCallbackId rhsjacobian;
>>>>>>   PetscFortranCallbackId ifunction;
>>>>>>   PetscFortranCallbackId ijacobian;
>>>>>>   PetscFortranCallbackId monitor;
>>>>>>   PetscFortranCallbackId mondestroy;
>>>>>>   PetscFortranCallbackId transform;
>>>>>> #if defined(PETSC_HAVE_F90_2PTR_ARG)
>>>>>>   PetscFortranCallbackId function_pgiptr;
>>>>>> #endif
>>>>>> } _cb;
>>>>>>
>>>>>> /*
>>>>>>    Note ctx is the same as ts so we need to get the Fortran context
>>>>>>    out of the TS; this gets put in _ctx using the callback ID
>>>>>> */
>>>>>> static PetscErrorCode ourmonitor(TS ts,PetscInt i,PetscReal d,Vec v,void *ctx)
>>>>>> {
>>>>>>   PetscObjectUseFortranCallback(ts,_cb.monitor,(TS*,PetscInt*,PetscReal*,Vec*,void*,PetscErrorCode*),(&ts,&i,&d,&v,_ctx,&ierr));
>>>>>> }
>>>>>>
>>>>>> Then follow as in tsmonitorset_, which sets two callbacks.
>>>>>>
>>>>>> PETSC_EXTERN void tsmonitorset_(TS *ts,void (*func)(TS*,PetscInt*,PetscReal*,Vec*,void*,PetscErrorCode*),void *mctx,void (*d)(void*,PetscErrorCode*),PetscErrorCode *ierr)
>>>>>> {
>>>>>>   CHKFORTRANNULLFUNCTION(d);
>>>>>>   if ((PetscVoidFunction)func == (PetscVoidFunction)tsmonitordefault_) {
>>>>>>     *ierr = TSMonitorSet(*ts,(PetscErrorCode (*)(TS,PetscInt,PetscReal,Vec,void*))TSMonitorDefault,*(PetscViewerAndFormat**)mctx,(PetscErrorCode (*)(void **))PetscViewerAndFormatDestroy);
>>>>>>   } else {
>>>>>>     *ierr = PetscObjectSetFortranCallback((PetscObject)*ts,PETSC_FORTRAN_CALLBACK_CLASS,&_cb.monitor,(PetscVoidFunction)func,mctx);
>>>>>>     *ierr = PetscObjectSetFortranCallback((PetscObject)*ts,PETSC_FORTRAN_CALLBACK_CLASS,&_cb.mondestroy,(PetscVoidFunction)d,mctx);
>>>>>>     *ierr = TSMonitorSet(*ts,ourmonitor,*ts,ourmonitordestroy);
>>>>>>   }
>>>>>> }

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From jed at jedbrown.org Mon Dec 28 08:30:23 2020
From: jed at jedbrown.org (Jed Brown)
Date: Mon, 28 Dec 2020 07:30:23 -0700
Subject: [petsc-users] TS tutorial ex11 in Fortran
In-Reply-To: 
References: <1EC72DF8-61BD-482B-B1B0-9427AB1ADF74@petsc.dev> <048DC071-33E5-4644-B1C8-DD598248923A@petsc.dev> <87tuspyaug.fsf@jedbrown.org> <1035FEDB-B297-4E71-8F0F-D44C72707105@petsc.dev> <39E9FBED-6D6B-41E4-983C-58465ACAB4D0@petsc.dev>
Message-ID: <877dp2c8ao.fsf@jedbrown.org>

Thibault Bridel-Bertomeu writes:

> Good morning everyone,
>
> Thank you Barry for the answer, it works now!
>
> I am facing (yet) another situation: the TSAdaptRegister function.
> In the MR on gitlab, Jed mentioned that sometimes, when function pointers
> are not stored in PETSc objects, one can use stack memory to pass that
> pointer from Fortran to C.

The issue with stack memory is that when it returns, that memory is invalid. You can't use it in this instance.

I think you're going to have problems implementing a TSAdaptCreate_XYZ in Fortran (because the body of that function will need to access private struct members; see below).

I would implement what you need in C, and you can call out to Fortran if you want from inside TSAdaptChoose_YourMethod().
PETSC_EXTERN PetscErrorCode TSAdaptCreate_DSP(TSAdapt adapt)
{
  TSAdapt_DSP    *dsp;
  PetscErrorCode ierr;

  PetscFunctionBegin;
  ierr = PetscNewLog(adapt,&dsp);CHKERRQ(ierr);
  adapt->reject_safety = 1.0; /* unused */

  adapt->data                = (void*)dsp;
  adapt->ops->choose         = TSAdaptChoose_DSP;
  adapt->ops->setfromoptions = TSAdaptSetFromOptions_DSP;
  adapt->ops->destroy        = TSAdaptDestroy_DSP;
  adapt->ops->view           = TSAdaptView_DSP;

  ierr = PetscObjectComposeFunction((PetscObject)adapt,"TSAdaptDSPSetFilter_C",TSAdaptDSPSetFilter_DSP);CHKERRQ(ierr);
  ierr = PetscObjectComposeFunction((PetscObject)adapt,"TSAdaptDSPSetPID_C",TSAdaptDSPSetPID_DSP);CHKERRQ(ierr);

  ierr = TSAdaptDSPSetFilter_DSP(adapt,"PI42");CHKERRQ(ierr);
  ierr = TSAdaptRestart_DSP(adapt);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}

[...]
From thibault.bridelbertomeu at gmail.com Mon Dec 28 11:02:23 2020
From: thibault.bridelbertomeu at gmail.com (Thibault Bridel-Bertomeu)
Date: Mon, 28 Dec 2020 18:02:23 +0100
Subject: [petsc-users] TS tutorial ex11 in Fortran
In-Reply-To: <877dp2c8ao.fsf@jedbrown.org>
References: <1EC72DF8-61BD-482B-B1B0-9427AB1ADF74@petsc.dev> <048DC071-33E5-4644-B1C8-DD598248923A@petsc.dev> <87tuspyaug.fsf@jedbrown.org> <1035FEDB-B297-4E71-8F0F-D44C72707105@petsc.dev> <39E9FBED-6D6B-41E4-983C-58465ACAB4D0@petsc.dev> <877dp2c8ao.fsf@jedbrown.org>
Message-ID: 

Hi Jed,

Thanks for your message. I implemented everything in C as you suggested and it works fine, except for one thing: the ts->vec_sol does not seem to get updated when seen from the C code (on the Fortran side, though, the solution is correct). As a result, the time step (which uses, among other things, the max velocity in the domain) stays at the value it gets from the initial solution. Any idea why ts->vec_sol does not seem to be updated? (I checked that the step number and the time are updated, e.g. when accessed with TSGetTime.)

Cheers,

Thibault

On Mon, Dec 28, 2020 at 15:30, Jed Brown wrote:

[...]
> > PETSC_EXTERN PetscErrorCode TSAdaptCreate_DSP(TSAdapt adapt) > { > TSAdapt_DSP *dsp; > PetscErrorCode ierr; > > PetscFunctionBegin; > ierr = PetscNewLog(adapt,&dsp);CHKERRQ(ierr); > adapt->reject_safety = 1.0; /* unused */ > > adapt->data = (void*)dsp; > adapt->ops->choose = TSAdaptChoose_DSP; > adapt->ops->setfromoptions = TSAdaptSetFromOptions_DSP; > adapt->ops->destroy = TSAdaptDestroy_DSP; > adapt->ops->view = TSAdaptView_DSP; > > ierr = > PetscObjectComposeFunction((PetscObject)adapt,"TSAdaptDSPSetFilter_C",TSAdaptDSPSetFilter_DSP);CHKERRQ(ierr); > ierr = > PetscObjectComposeFunction((PetscObject)adapt,"TSAdaptDSPSetPID_C",TSAdaptDSPSetPID_DSP);CHKERRQ(ierr); > > ierr = TSAdaptDSPSetFilter_DSP(adapt,"PI42");CHKERRQ(ierr); > ierr = TSAdaptRestart_DSP(adapt);CHKERRQ(ierr); > PetscFunctionReturn(0); > } > > > Can anyone develop that idea ? Because for TSAdaptRegister, i guess the > > wrapper would start like : > > > > PETSC_EXTERN void tsadaptregister_(char *sname, > > void > (*func)(TSAdapt*,PetscErrorCode*), > > PetscErrorCode *ierr, > > PETSC_FORTRAN_CHARLEN_T snamelen) > > > > but then the C TSAdaptRegister function takes a PetscErrorCode > > (*func)(TSAdapt) function pointer as argument ... I cannot use any > > FORTRAN_CALLBACK here since I do not have any object to hook it to, and I > > could not find a similar situation among the pre-existing wrappers. Does > > anyone have an idea on how to proceed ? > > > > Thanks !! > > > > Thibault > > > > Le mar. 22 d?c. 2020 ? 21:20, Barry Smith a ?crit : > > > >> > >> PetscObjectUseFortranCallback((PetscDS)ctx, > >> > >> > >> *ierr = PetscObjectSetFortranCallback((PetscObject)*prob > >> > >> > >> It looks like the problem is that these user provided functions do not > >> take a PetscDS directly as an argument so the Fortran callback > information > >> cannot be obtained from them. > >> > >> The manual page for PetscDSAddBoundary() says > >> > >> - ctx - An optional user context for bcFunc > >> > >> but then when it lists the calling sequence for bcFunc it does not list > >> the ctx as an argument, so either the manual page or code is wrong. > >> > >> It looks like you make the ctx be the PetscDS prob argument when you > >> call PetscDSAddBoundary > >> > >> In principle this sounds like it might work. I think you need to track > >> through the debugger to see if the ctx passed to ourbocofunc() is > >> actually the PetscDS prob variable and if not why it is not. > >> > >> Barry > >> > >> > >> On Dec 22, 2020, at 5:49 AM, Thibault Bridel-Bertomeu < > >> thibault.bridelbertomeu at gmail.com> wrote: > >> > >> Dear all, > >> > >> I have hit two snags while implementing the missing wrappers necessary > to > >> transcribe ex11 to Fortran. 
> >> > >> First is about the PetscDSAddBoundary wrapper, that I have done so : > >> > >> static PetscErrorCode ourbocofunc(PetscReal time, const PetscReal *c, > >> const PetscReal *n, const PetscScalar *a_xI, const PetscScalar *a_xG, > void > >> *ctx) > >> { > >> PetscObjectUseFortranCallback((PetscDS)ctx, bocofunc, > >> (PetscReal*,const PetscReal*,const > >> PetscReal*,const PetscScalar*,const PetscScalar*,void*,PetscErrorCode*), > >> (&time,c,n,a_xI,a_xG,ctx,&ierr)); > >> } > >> static PetscErrorCode ourbocofunc_time(PetscReal time, const PetscReal > *c, > >> const PetscReal *n, const PetscScalar *a_xI, const PetscScalar *a_xG, > void > >> *ctx) > >> { > >> PetscObjectUseFortranCallback((PetscDS)ctx, bocofunc_time, > >> (PetscReal*,const PetscReal*,const > >> PetscReal*,const PetscScalar*,const PetscScalar*,void*,PetscErrorCode*), > >> (&time,c,n,a_xI,a_xG,ctx,&ierr)); > >> } > >> PETSC_EXTERN void petscdsaddboundary_(PetscDS *prob, > >> DMBoundaryConditionType *type, char *name, char *labelname, PetscInt > >> *field, PetscInt *numcomps, PetscInt *comps, > >> void (*bcFunc)(void), > >> void (*bcFunc_t)(void), > >> PetscInt *numids, const PetscInt > >> *ids, void *ctx, PetscErrorCode *ierr, > >> PETSC_FORTRAN_CHARLEN_T namelen, > >> PETSC_FORTRAN_CHARLEN_T labelnamelen) > >> { > >> char *newname, *newlabelname; > >> FIXCHAR(name, namelen, newname); > >> FIXCHAR(labelname, labelnamelen, newlabelname); > >> *ierr = PetscObjectSetFortranCallback((PetscObject)*prob, > >> PETSC_FORTRAN_CALLBACK_CLASS, &bocofunc, (PetscVoidFunction)bcFunc, > >> ctx); > >> *ierr = PetscObjectSetFortranCallback((PetscObject)*prob, > >> PETSC_FORTRAN_CALLBACK_CLASS, &bocofunc_time, > (PetscVoidFunction)bcFunc_t, > >> ctx); > >> *ierr = PetscDSAddBoundary(*prob, *type, newname, newlabelname, > >> *field, *numcomps, comps, > >> (void (*)(void))ourbocofunc, > >> (void (*)(void))ourbocofunc_time, > >> *numids, ids, *prob); > >> FREECHAR(name, newname); > >> FREECHAR(labelname, newlabelname); > >> } > >> > >> > >> > >> but when I call it in the program, with adequate routines, I obtain the > >> following error : > >> > >> [0]PETSC ERROR: --------------------- Error Message > --------------------------------------------------------------[0]PETSC > ERROR: Corrupt argument: > https://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind[0]PETSC > ERROR: Fortran callback not set on this object[0]PETSC ERROR: See > https://www.mcs.anl.gov/petsc/documentation/faq.html for trouble > shooting.[0]PETSC ERROR: Petsc Development GIT revision: > v3.14.2-297-gf36a7edeb8 GIT Date: 2020-12-18 04:42:53 +0000[0]PETSC ERROR: > ../../../bin/eulerian3D on a named macbook-pro-de-thibault.home by tbridel > Sun Dec 20 15:05:15 2020[0]PETSC ERROR: Configure options --with-clean=0 > --prefix=/Users/tbridel/Documents/1-CODES/04-PETSC/build --with-make-np=2 > --with-windows-graphics=0 --with-debugging=0 --download-fblaslapack > --download-mpich-shared=0 --with-x=0 --with-pthread=0 --with-valgrind=0 > --PETSC_ARCH=macosx > --with-fc=/usr/local/Cellar/open-mpi/4.0.1_2/bin/mpifort > --with-cc=/usr/local/Cellar/open-mpi/4.0.1_2/bin/mpicc > --with-cxx=/usr/local/Cellar/open-mpi/4.0.1_2/bin/mpic++ --with-openmp=0 > --download-hypre=yes --download-sowing=yes --download-metis=yes > --download-parmetis=yes --download-triangle=yes --download-tetgen=yes > --download-ctetgen=yes --download-p4est=yes --download-zlib=yes > --download-c2html=yes --download-eigen=yes --download-pragmatic=yes > --with-hdf5-dir=/usr/local/Cellar/hdf5/1.10.5_1 > 
--with-cmake-dir=/usr/local/Cellar/cmake/3.15.3[0]PETSC ERROR: #1 > PetscObjectGetFortranCallback() line 258 in > /Users/tbridel/Documents/1-CODES/04-PETSC/src/sys/objects/inherit.c[0]PETSC > ERROR: #2 ourbocofunc() line 141 in > /Users/tbridel/Documents/1-CODES/59-EULERIAN3D/sources/petsc_wrapping/wrapper_petsc.c[0]PETSC > ERROR: #3 DMPlexInsertBoundaryValuesRiemann() line 989 in > /Users/tbridel/Documents/1-CODES/04-PETSC/src/dm/impls/plex/plexfem.c[0]PETSC > ERROR: #4 DMPlexInsertBoundaryValues_Plex() line 1052 in > /Users/tbridel/Documents/1-CODES/04-PETSC/src/dm/impls/plex/plexfem.c[0]PETSC > ERROR: #5 DMPlexInsertBoundaryValues() line 1142 in > /Users/tbridel/Documents/1-CODES/04-PETSC/src/dm/impls/plex/plexfem.c[0]PETSC > ERROR: #6 DMPlexComputeResidual_Internal() line 4524 in > /Users/tbridel/Documents/1-CODES/04-PETSC/src/dm/impls/plex/plexfem.c[0]PETSC > ERROR: #7 DMPlexTSComputeRHSFunctionFVM() line 74 in > /Users/tbridel/Documents/1-CODES/04-PETSC/src/ts/utils/dmplexts.c[0]PETSC > ERROR: #8 ourdmtsrhsfunc() line 186 in > /Users/tbridel/Documents/1-CODES/59-EULERIAN3D/sources/petsc_wrapping/wrapper_petsc.c[0]PETSC > ERROR: #9 TSComputeRHSFunction_DMLocal() line 105 in > /Users/tbridel/Documents/1-CODES/04-PETSC/src/ts/utils/dmlocalts.c[0]PETSC > ERROR: #10 TSComputeRHSFunction() line 653 in > /Users/tbridel/Documents/1-CODES/04-PETSC/src/ts/interface/ts.c[0]PETSC > ERROR: #11 TSSSPStep_RK_3() line 120 in > /Users/tbridel/Documents/1-CODES/04-PETSC/src/ts/impls/explicit/ssp/ssp.c[0]PETSC > ERROR: #12 TSStep_SSP() line 208 in > /Users/tbridel/Documents/1-CODES/04-PETSC/src/ts/impls/explicit/ssp/ssp.c[0]PETSC > ERROR: #13 TSStep() line 3757 in > /Users/tbridel/Documents/1-CODES/04-PETSC/src/ts/interface/ts.c[0]PETSC > ERROR: #14 TSSolve() line 4154 in > /Users/tbridel/Documents/1-CODES/04-PETSC/src/ts/interface/ts.c[0]PETSC > ERROR: #15 User provided function() line 0 in User file > >> > >> > >> Second is about the DMProjectFunction wrapper, that I have done so : > >> > >> static PetscErrorCode ourdmprojfunc(PetscInt dim, PetscReal time, > >> PetscReal x[], PetscInt Nf, PetscScalar u[], void *ctx) > >> { > >> PetscObjectUseFortranCallback((DM)ctx, dmprojfunc, > >> > >> > (PetscInt*,PetscReal*,PetscReal*,PetscInt*,PetscScalar*,void*,PetscErrorCode*), > >> (&dim,&time,x,&Nf,u,_ctx,&ierr)) > >> } > >> PETSC_EXTERN void dmprojectfunction_(DM *dm, PetscReal *time, > >> void > >> > (*func)(PetscInt*,PetscReal*,PetscReal*,PetscInt*,PetscScalar*,void*,PetscErrorCode*), > >> void *ctx, InsertMode *mode, Vec X, > >> PetscErrorCode *ierr) > >> { > >> PetscErrorCode (*funcarr[1]) (PetscInt dim, PetscReal time, > PetscReal > >> x[], PetscInt Nf, PetscScalar *u, void *ctx); > >> *ierr = PetscObjectSetFortranCallback((PetscObject)*dm, > >> PETSC_FORTRAN_CALLBACK_CLASS, &dmprojfunc, (PetscVoidFunction)func, > ctx); > >> funcarr[0] = ourdmprojfunc; > >> *ierr = DMProjectFunction(*dm, *time, funcarr, &ctx, *mode, X); > >> } > >> > >> > >> This time there is no error because I cannot reach this point in the > >> program, but I am not sure anyways how to write this wrapper, especially > >> because of the double pointers that DMProjectFunction takes as > arguments. > >> > >> Does anyone have any idea what could be going wrong with those two > >> wrappers ? > >> > >> Thank you very much in advance !! > >> > >> Thibault > >> > >> Le ven. 18 d?c. 2020 ? 
> >>
> >> On Fri, Dec 18, 2020 at 11:02, Thibault Bridel-Bertomeu <thibault.bridelbertomeu at gmail.com> wrote:
> >>
> >>> Aah that is a nice trick, I was getting ready to fork, clone the fork and
> >>> redo the work, but that worked fine ! Thank you Barry !
> >>>
> >>> The MR will appear in a little while !
> >>>
> >>> Thibault
> >>>
> >>> On Fri, Dec 18, 2020 at 10:16, Barry Smith wrote:
> >>>
> >>>> Good question. There is a trick to limit the amount of work you need
> >>>> to do with a new fork after you have already made changes with a PETSc
> >>>> clone, but it looks like we do not document this clearly in the webpages.
> >>>> (I couldn't find it).
> >>>>
> >>>> Yes, you do need to make a fork, but after you have made the fork on
> >>>> the GitLab website (and have done nothing on your machine) edit the file
> >>>> $PETSC_DIR/.git/config for your clone on your machine
> >>>>
> >>>> Locate the line that has url = git at gitlab.com:petsc/petsc.git (this
> >>>> may have an https at the beginning of the line)
> >>>>
> >>>> Change this line to point to the fork url instead with git@ not
> >>>> https, which will be pretty much the same URL but with your user id instead
> >>>> of petsc in the address. Then git push and it will push to your fork.
> >>>>
> >>>> Now your changes will be in your fork and you can make the MR from your
> >>>> fork URL on Gitlab. (In other words this editing trick converts your PETSc
> >>>> clone on your machine to a PETSc fork).
> >>>>
> >>>> I hope I have explained this clearly enough that it goes smoothly.
> >>>>
> >>>> Barry
> >>>>
> >>>> On Dec 18, 2020, at 3:00 AM, Thibault Bridel-Bertomeu <thibault.bridelbertomeu at gmail.com> wrote:
> >>>>
> >>>> Hello Barry,
> >>>>
> >>>> I'll start the MR as soon as possible then so that specialists can
> >>>> indeed have a look. Do I have to fork PETSc to start a MR or are PETSc repo
> >>>> settings such that I can push a branch from the PETSc clone I got ?
> >>>>
> >>>> Thibault
> >>>>
> >>>> On Wed, Dec 16, 2020 at 07:47, Barry Smith wrote:
> >>>>
> >>>>> Thibault,
> >>>>>
> >>>>> A subdirectory for the example is fine; we have other examples that
> >>>>> use subdirectories and multiple files.
> >>>>>
> >>>>> Note: even if you don't have something completely working you can
> >>>>> still make MR and list it as DRAFT request for comments, some other PETSc
> >>>>> members who understand the packages you are using and Fortran better than I
> >>>>> may be able to help as you develop the code.
> >>>>>
> >>>>> Barry
> >>>>>
> >>>>> On Dec 16, 2020, at 12:35 AM, Thibault Bridel-Bertomeu <thibault.bridelbertomeu at gmail.com> wrote:
> >>>>>
> >>>>> Hello everyone,
> >>>>>
> >>>>> Thank you Barry for the feedback.
> >>>>> OK, yes I'll work up an MR as soon as I have got something working. By
> >>>>> the way, does the fortran-version of the example have to be a single file ?
> >>>>> If my push contains a directory with several files (different modules and
> >>>>> the main), and the Makefile that goes with it, is that ok ?
> >>>>>
> >>>>> Thibault Bridel-Bertomeu
> >>>>>
> >>>>> On Wed, Dec 16, 2020 at 04:46, Barry Smith wrote:
> >>>>>
> >>>>>> This is great. If you make a branch off of the PETSc git repository
> >>>>>> with these additions and work on ex11 you can make a merge request and we
> >>>>>> can run the code easily on all our test systems (for security reasons one
> >>>>>> of us needs to launch the tests from your MR).
> >>>>>> https://docs.petsc.org/en/latest/developers/integration/
> >>>>>>
> >>>>>> Barry
> >>>>>>
> >>>>>> On Dec 15, 2020, at 5:35 AM, Thibault Bridel-Bertomeu <thibault.bridelbertomeu at gmail.com> wrote:
> >>>>>>
> >>>>>> Hello everyone,
> >>>>>>
> >>>>>> So far, I have the wrappers in the files attached to this e-mail. I
> >>>>>> still do not know if they work properly - at least the code compiles and
> >>>>>> the calls to the wrapped-subroutine do not fail - but I wanted to put this
> >>>>>> here in case someone sees something really wrong with it already.
> >>>>>>
> >>>>>> Thank you again for your help, I'll try to post updates of the F90
> >>>>>> version of ex11 regularly in this thread.
> >>>>>>
> >>>>>> Stay safe,
> >>>>>>
> >>>>>> Thibault Bridel-Bertomeu
> >>>>>>
> >>>>>> On Sun, Dec 13, 2020 at 16:39, Jed Brown wrote:
> >>>>>>
> >>>>>>> Thibault Bridel-Bertomeu writes:
> >>>>>>>
> >>>>>>> > Thank you Mark for your answer.
> >>>>>>> >
> >>>>>>> > I am not sure what you think could be in the setBC1 routine ? How to make
> >>>>>>> > the connection with the PetscDS ?
> >>>>>>> >
> >>>>>>> > On the other hand, I actually found after a while TSMonitorSet has a
> >>>>>>> > fortran wrapper, and it does take as arguments two function pointers, so I
> >>>>>>> > guess it is possible ? Although I am not sure exactly how to play with the
> >>>>>>> > PetscObjectSetFortranCallback & PetscObjectUseFortranCallback macros -
> >>>>>>> > could anybody advise please ?
> >>>>>>>
> >>>>>>> tsmonitorset_ is a good example to follow. In your file, create one
> >>>>>>> of these static structs with a member for each callback. These are IDs that
> >>>>>>> will be used as keys for Fortran callbacks and their contexts. The salient
> >>>>>>> parts of the file are below.
> >>>>>>>
> >>>>>>> static struct {
> >>>>>>>   PetscFortranCallbackId prestep;
> >>>>>>>   PetscFortranCallbackId poststep;
> >>>>>>>   PetscFortranCallbackId rhsfunction;
> >>>>>>>   PetscFortranCallbackId rhsjacobian;
> >>>>>>>   PetscFortranCallbackId ifunction;
> >>>>>>>   PetscFortranCallbackId ijacobian;
> >>>>>>>   PetscFortranCallbackId monitor;
> >>>>>>>   PetscFortranCallbackId mondestroy;
> >>>>>>>   PetscFortranCallbackId transform;
> >>>>>>> #if defined(PETSC_HAVE_F90_2PTR_ARG)
> >>>>>>>   PetscFortranCallbackId function_pgiptr;
> >>>>>>> #endif
> >>>>>>> } _cb;
> >>>>>>>
> >>>>>>> /*
> >>>>>>>    Note ctx is the same as ts so we need to get the Fortran context
> >>>>>>>    out of the TS; this gets put in _ctx using the callback ID
> >>>>>>> */
> >>>>>>> static PetscErrorCode ourmonitor(TS ts,PetscInt i,PetscReal d,Vec v,void *ctx)
> >>>>>>> {
> >>>>>>>   PetscObjectUseFortranCallback(ts,_cb.monitor,(TS*,PetscInt*,PetscReal*,Vec*,void*,PetscErrorCode*),(&ts,&i,&d,&v,_ctx,&ierr));
> >>>>>>> }
> >>>>>>>
> >>>>>>> Then follow as in tsmonitorset_, which sets two callbacks.
> >>>>>>>
> >>>>>>> PETSC_EXTERN void tsmonitorset_(TS *ts,void (*func)(TS*,PetscInt*,PetscReal*,Vec*,void*,PetscErrorCode*),void *mctx,void (*d)(void*,PetscErrorCode*),PetscErrorCode *ierr)
> >>>>>>> {
> >>>>>>>   CHKFORTRANNULLFUNCTION(d);
> >>>>>>>   if ((PetscVoidFunction)func == (PetscVoidFunction)tsmonitordefault_) {
> >>>>>>>     *ierr = TSMonitorSet(*ts,(PetscErrorCode (*)(TS,PetscInt,PetscReal,Vec,void*))TSMonitorDefault,*(PetscViewerAndFormat**)mctx,(PetscErrorCode (*)(void **))PetscViewerAndFormatDestroy);
> >>>>>>>   } else {
> >>>>>>>     *ierr = PetscObjectSetFortranCallback((PetscObject)*ts,PETSC_FORTRAN_CALLBACK_CLASS,&_cb.monitor,(PetscVoidFunction)func,mctx);
> >>>>>>>     *ierr = PetscObjectSetFortranCallback((PetscObject)*ts,PETSC_FORTRAN_CALLBACK_CLASS,&_cb.mondestroy,(PetscVoidFunction)d,mctx);
> >>>>>>>     *ierr = TSMonitorSet(*ts,ourmonitor,*ts,ourmonitordestroy);
> >>>>>>>   }
> >>>>>>> }
> >>>>>>
> >>>>>
> >>>>
> >>
-- 
Thibault Bridel-Bertomeu
Eng, MSc, PhD
Research Engineer
CEA/CESTA
33114 LE BARP
Tel.: (+33)557046924
Mob.: (+33)611025322
Mail: thibault.bridelbertomeu at gmail.com
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From thibault.bridelbertomeu at gmail.com  Mon Dec 28 11:07:07 2020
From: thibault.bridelbertomeu at gmail.com (Thibault Bridel-Bertomeu)
Date: Mon, 28 Dec 2020 18:07:07 +0100
Subject: [petsc-users] TS tutorial ex11 in Fortran
In-Reply-To: 
References: <1EC72DF8-61BD-482B-B1B0-9427AB1ADF74@petsc.dev>
	<048DC071-33E5-4644-B1C8-DD598248923A@petsc.dev>
	<87tuspyaug.fsf@jedbrown.org>
	<1035FEDB-B297-4E71-8F0F-D44C72707105@petsc.dev>
	<39E9FBED-6D6B-41E4-983C-58465ACAB4D0@petsc.dev>
	<877dp2c8ao.fsf@jedbrown.org>
Message-ID: 

With the code in attachment it's better. And I set it up in Fortran as follows:

    TS      :: timeS
    TSAdapt :: adapt

    call TSGetAdapt(timeS, adapt, ierr); CHKERRA(ierr)
    call MyTSAdaptRegister(timeS, adapt, gamma, cfl, ierr); CHKERRA(ierr)

Thibault
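The adapt_dt.c attached above is only referenced, not reproduced, in the archive. For orientation, here is a minimal sketch of what a C side answering that Fortran call could look like, following the TSAdaptCreate_DSP pattern Jed quotes below. The names mytsadaptregister_, the "cfl" type string, the TSAdaptCtx_CFL struct and the MyMaxWaveSpeed helper are all assumptions made for this sketch, not Thibault's actual code:

/* Hypothetical sketch of an adapt_dt.c along the lines discussed in this thread */
#include <petscts.h>
#include <petsc/private/tsimpl.h>  /* private header: needed to fill adapt->ops and adapt->data */

typedef struct {
  PetscReal gamma; /* ratio of specific heats for the wave-speed estimate */
  PetscReal cfl;   /* target CFL number, premultiplied by the mesh size */
} TSAdaptCtx_CFL;

/* Hypothetical helper: returns the maximum wave speed over the domain.
   This is where one could call back into Fortran, as Jed suggests below. */
extern PetscErrorCode MyMaxWaveSpeed(Vec X, PetscReal gamma, PetscReal *speed);

static PetscErrorCode TSAdaptChoose_CFL(TSAdapt adapt, TS ts, PetscReal h, PetscInt *next_sc,
                                        PetscReal *next_h, PetscBool *accept,
                                        PetscReal *wlte, PetscReal *wltea, PetscReal *wlter)
{
  TSAdaptCtx_CFL *ctx = (TSAdaptCtx_CFL*)adapt->data;
  Vec             X;
  PetscReal       speed;
  PetscErrorCode  ierr;

  PetscFunctionBegin;
  ierr = TSGetSolution(ts, &X);CHKERRQ(ierr);          /* solution currently carried by the TS */
  ierr = MyMaxWaveSpeed(X, ctx->gamma, &speed);CHKERRQ(ierr);
  *accept  = PETSC_TRUE;                               /* explicit scheme: never reject, only resize */
  *next_sc = 0;                                        /* keep the same scheme */
  *next_h  = ctx->cfl / speed;                         /* dt = CFL * dx / max(|u| + c) */
  *wlte    = -1; *wltea = -1; *wlter = -1;             /* no local error estimate available */
  PetscFunctionReturn(0);
}

static PetscErrorCode TSAdaptDestroy_CFL(TSAdapt adapt)
{
  PetscErrorCode ierr;

  PetscFunctionBegin;
  ierr = PetscFree(adapt->data);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}

static PetscErrorCode TSAdaptCreate_CFL(TSAdapt adapt)
{
  TSAdaptCtx_CFL *ctx;
  PetscErrorCode  ierr;

  PetscFunctionBegin;
  ierr = PetscNew(&ctx);CHKERRQ(ierr);
  adapt->data         = (void*)ctx;
  adapt->ops->choose  = TSAdaptChoose_CFL;
  adapt->ops->destroy = TSAdaptDestroy_CFL;
  PetscFunctionReturn(0);
}

/* Fortran-callable entry point matching the call in the message above; ts is unused here */
PETSC_EXTERN void mytsadaptregister_(TS *ts, TSAdapt *adapt, PetscReal *gamma, PetscReal *cfl,
                                     PetscErrorCode *ierr)
{
  TSAdaptCtx_CFL *ctx;

  *ierr = TSAdaptRegister("cfl", TSAdaptCreate_CFL); if (*ierr) return;
  *ierr = TSAdaptSetType(*adapt, "cfl");             if (*ierr) return;
  ctx = (TSAdaptCtx_CFL*)(*adapt)->data;
  ctx->gamma = *gamma;
  ctx->cfl   = *cfl;
}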
On Mon, Dec 28, 2020 at 18:02, Thibault Bridel-Bertomeu <thibault.bridelbertomeu at gmail.com> wrote:

> Hi Jed,
>
> Thanks for your message.
> I implemented everything in C as you suggested and it works fine except
> for one thing : the ts->vec_sol does not seem to get updated when seen from
> the C code (it is on the Fortran side though the solution is correct).
> As a result, the time step (that uses among other things the max velocity
> in the domain) is always at the value it gets from the initial solution.
> Any idea why ts->vec_sol does not seem to be updated ? (I checked the
> stepnum and time is updated though, when accessed with TSGetTime for
> instance).
>
> Cheers,
> Thibault
>
> On Mon, Dec 28, 2020 at 15:30, Jed Brown wrote:
>
>> Thibault Bridel-Bertomeu writes:
>>
>> > Good morning everyone,
>> >
>> > Thank you Barry for the answer, it works now !
>> >
>> > I am facing (yet) another situation: the TSAdaptRegister function.
>> > In the MR on gitlab, Jed mentioned that sometimes, when function pointers
>> > are not stored in PETSc objects, one can use stack memory to pass that
>> > pointer from fortran to C.
>>
>> The issue with stack memory is that when it returns, that memory is
>> invalid. You can't use it in this instance.
>>
>> I think you're going to have problems implementing a TSAdaptCreate_XYZ in
>> Fortran (because the body of that function will need to access private
>> struct members; see below).
>>
>> I would implement what you need in C and you can call out to Fortran if
>> you want from inside TSAdaptChoose_YourMethod().
>>
>> PETSC_EXTERN PetscErrorCode TSAdaptCreate_DSP(TSAdapt adapt)
>> {
>>   TSAdapt_DSP    *dsp;
>>   PetscErrorCode ierr;
>>
>>   PetscFunctionBegin;
>>   ierr = PetscNewLog(adapt,&dsp);CHKERRQ(ierr);
>>   adapt->reject_safety = 1.0; /* unused */
>>
>>   adapt->data                = (void*)dsp;
>>   adapt->ops->choose         = TSAdaptChoose_DSP;
>>   adapt->ops->setfromoptions = TSAdaptSetFromOptions_DSP;
>>   adapt->ops->destroy        = TSAdaptDestroy_DSP;
>>   adapt->ops->view           = TSAdaptView_DSP;
>>
>>   ierr = PetscObjectComposeFunction((PetscObject)adapt,"TSAdaptDSPSetFilter_C",TSAdaptDSPSetFilter_DSP);CHKERRQ(ierr);
>>   ierr = PetscObjectComposeFunction((PetscObject)adapt,"TSAdaptDSPSetPID_C",TSAdaptDSPSetPID_DSP);CHKERRQ(ierr);
>>
>>   ierr = TSAdaptDSPSetFilter_DSP(adapt,"PI42");CHKERRQ(ierr);
>>   ierr = TSAdaptRestart_DSP(adapt);CHKERRQ(ierr);
>>   PetscFunctionReturn(0);
>> }
>>
>> > Can anyone develop that idea ? Because for TSAdaptRegister, I guess the
>> > wrapper would start like :
>> >
>> > PETSC_EXTERN void tsadaptregister_(char *sname,
>> >                                    void (*func)(TSAdapt*,PetscErrorCode*),
>> >                                    PetscErrorCode *ierr,
>> >                                    PETSC_FORTRAN_CHARLEN_T snamelen)
>> >
>> > but then the C TSAdaptRegister function takes a PetscErrorCode
>> > (*func)(TSAdapt) function pointer as argument ... I cannot use any
>> > FORTRAN_CALLBACK here since I do not have any object to hook it to, and I
>> > could not find a similar situation among the pre-existing wrappers. Does
>> > anyone have an idea on how to proceed ?
>> >
>> > Thanks !!
>> >
>> > Thibault
>> >
>> > On Tue, Dec 22, 2020 at 21:20, Barry Smith wrote:
>> >
>> >> PetscObjectUseFortranCallback((PetscDS)ctx,
>> >>
>> >> *ierr = PetscObjectSetFortranCallback((PetscObject)*prob
>> >>
>> >> It looks like the problem is that these user provided functions do not
>> >> take a PetscDS directly as an argument so the Fortran callback information
>> >> cannot be obtained from them.
>> >>
>> >> The manual page for PetscDSAddBoundary() says
>> >>
>> >> - ctx - An optional user context for bcFunc
>> >>
>> >> but then when it lists the calling sequence for bcFunc it does not list
>> >> the ctx as an argument, so either the manual page or code is wrong.
>> >>
>> >> It looks like you make the ctx be the PetscDS prob argument when you
>> >> call PetscDSAddBoundary
>> >>
>> >> In principle this sounds like it might work. I think you need to track
>> >> through the debugger to see if the ctx passed to ourbocofunc() is
>> >> actually the PetscDS prob variable and if not why it is not.
>> >>
>> >> Barry
>> >>
>> >> On Dec 22, 2020, at 5:49 AM, Thibault Bridel-Bertomeu <thibault.bridelbertomeu at gmail.com> wrote:
>> >>
>> >> Dear all,
>> >>
>> >> I have hit two snags while implementing the missing wrappers necessary to
>> >> transcribe ex11 to Fortran.
>> >>
>> >> [...]
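One way to act on Barry's debugger suggestion above without a debugger, sketched here as an editorial aside rather than code from the thread: add a cheap guard at the top of the C-side trampoline so a wrong context fails with a clear message instead of deep inside PetscObjectGetFortranCallback(). This assumes the bocofunc registration shown earlier in the quoted wrapper:

static PetscErrorCode ourbocofunc(PetscReal time, const PetscReal *c, const PetscReal *n,
                                  const PetscScalar *a_xI, const PetscScalar *a_xG, void *ctx)
{
  PetscClassId id;

  /* If ctx is not the PetscDS the callback was registered on, say so explicitly */
  if (PetscObjectGetClassId((PetscObject)ctx, &id) || id != PETSCDS_CLASSID)
    SETERRQ(PETSC_COMM_SELF, PETSC_ERR_ARG_WRONG, "ctx handed to ourbocofunc() is not the PetscDS it was registered on");
  PetscObjectUseFortranCallback((PetscDS)ctx, bocofunc,
                                (PetscReal*,const PetscReal*,const PetscReal*,const PetscScalar*,const PetscScalar*,void*,PetscErrorCode*),
                                (&time,c,n,a_xI,a_xG,ctx,&ierr));
}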
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: adapt_dt.c
Type: application/octet-stream
Size: 5429 bytes
Desc: not available
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: adapt_dt.h
Type: application/octet-stream
Size: 733 bytes
Desc: not available
URL: 

From jed at jedbrown.org  Mon Dec 28 11:41:48 2020
From: jed at jedbrown.org (Jed Brown)
Date: Mon, 28 Dec 2020 10:41:48 -0700
Subject: [petsc-users] TS tutorial ex11 in Fortran
In-Reply-To: 
References: <1EC72DF8-61BD-482B-B1B0-9427AB1ADF74@petsc.dev>
	<048DC071-33E5-4644-B1C8-DD598248923A@petsc.dev>
	<87tuspyaug.fsf@jedbrown.org>
	<1035FEDB-B297-4E71-8F0F-D44C72707105@petsc.dev>
	<39E9FBED-6D6B-41E4-983C-58465ACAB4D0@petsc.dev>
	<877dp2c8ao.fsf@jedbrown.org>
Message-ID: <8735zpde03.fsf@jedbrown.org>

I think I'm not following something. The time stepper implementation (TSStep_XYZ) is where ts->vec_sol should be updated. The controller doesn't have anything to do with it. What TS are you using and how do you know your RHS is nonzero?

Thibault Bridel-Bertomeu writes:

> Hi Jed,
>
> Thanks for your message.
> I implemented everything in C as you suggested and it works fine except for
> one thing : the ts->vec_sol does not seem to get updated when seen from the
> C code (it is on the Fortran side though the solution is correct).
> As a result, the time step (that uses among other things the max velocity
> in the domain) is always at the value it gets from the initial solution.
> Any idea why ts->vec_sol does not seem to be updated ? (I checked the
> stepnum and time is updated though, when accessed with TSGetTime for
> instance).
>
> Cheers,
> Thibault
>
> [...]
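A quick way to check the second of Jed's questions from the C side, not taken from the thread: hang a small monitor on the TS and print a norm of the vector it is actually advancing. If ts->vec_sol is updated by the stepper, the value printed here changes from step to step:

#include <petscts.h>

/* Minimal diagnostic monitor: prints step, time and ||u||_2 each accepted step */
static PetscErrorCode MonitorSolNorm(TS ts, PetscInt step, PetscReal t, Vec u, void *mctx)
{
  PetscReal      nrm;
  PetscErrorCode ierr;

  PetscFunctionBegin;
  ierr = VecNorm(u, NORM_2, &nrm);CHKERRQ(ierr);
  ierr = PetscPrintf(PetscObjectComm((PetscObject)ts), "step %D  t = %g  ||u|| = %g\n",
                     step, (double)t, (double)nrm);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}

/* usage, before TSSolve: ierr = TSMonitorSet(ts, MonitorSolNorm, NULL, NULL);CHKERRQ(ierr); */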
From thibault.bridelbertomeu at gmail.com  Mon Dec 28 11:48:19 2020
From: thibault.bridelbertomeu at gmail.com (Thibault Bridel-Bertomeu)
Date: Mon, 28 Dec 2020 18:48:19 +0100
Subject: [petsc-users] TS tutorial ex11 in Fortran
In-Reply-To: <8735zpde03.fsf@jedbrown.org>
References: <1EC72DF8-61BD-482B-B1B0-9427AB1ADF74@petsc.dev>
	<048DC071-33E5-4644-B1C8-DD598248923A@petsc.dev>
	<87tuspyaug.fsf@jedbrown.org>
	<1035FEDB-B297-4E71-8F0F-D44C72707105@petsc.dev>
	<39E9FBED-6D6B-41E4-983C-58465ACAB4D0@petsc.dev>
	<877dp2c8ao.fsf@jedbrown.org>
	<8735zpde03.fsf@jedbrown.org>
Message-ID: 

I'm using TSSSP steppers. I know the RHS is not zero because when I output
the solution from the Fortran code in VTK format, it looks as expected and
indeed moves forward with time.
I won't put the code as an attachment, there is a little too much, but the
MR is up to date on the VTK output (at least).

On Mon, Dec 28, 2020 at 18:42, Jed Brown <jed at jedbrown.org> wrote:

> I think I'm not following something. The time stepper implementation
> (TSStep_XYZ) is where ts->vec_sol should be updated. The controller doesn't
> have anything to do with it. What TS are you using and how do you know your
> RHS is nonzero?
>
> [...]
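One more sketch, again not from the thread: if the goal is only a CFL-driven dt, it can also be imposed from a poststep function, which runs after TSStep has committed the updated solution, so it sees the new state regardless of how the TSAdapt question resolves. MyMaxWaveSpeed is the same hypothetical helper as in the earlier sketch, and the constants stand in for gamma and CFL*dx:

#include <petscts.h>

extern PetscErrorCode MyMaxWaveSpeed(Vec X, PetscReal gamma, PetscReal *speed); /* hypothetical */

/* Recompute the step size from the freshly committed solution after each step */
static PetscErrorCode PostStepSetDt(TS ts)
{
  Vec            X;
  PetscReal      speed;
  PetscErrorCode ierr;

  PetscFunctionBegin;
  ierr = TSGetSolution(ts, &X);CHKERRQ(ierr);
  ierr = MyMaxWaveSpeed(X, 1.4, &speed);CHKERRQ(ierr); /* gamma hard-coded for the sketch */
  ierr = TSSetTimeStep(ts, 0.5/speed);CHKERRQ(ierr);   /* 0.5 standing in for CFL*dx */
  PetscFunctionReturn(0);
}

/* usage, before TSSolve: ierr = TSSetPostStep(ts, PostStepSetDt);CHKERRQ(ierr); */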
> >> >>>>> > >> >>>>> Barry > >> >>>>> > >> >>>>> > >> >>>>> > >> >>>>> > >> >>>>> On Dec 16, 2020, at 12:35 AM, Thibault Bridel-Bertomeu < > >> >>>>> thibault.bridelbertomeu at gmail.com> wrote: > >> >>>>> > >> >>>>> Hello everyone, > >> >>>>> > >> >>>>> Thank you Barry for the feedback. > >> >>>>> OK, yes I'll work up an MR as soon as I have got something > working. > >> By > >> >>>>> the way, does the fortran-version of the example have to be a > single > >> file ? > >> >>>>> If my push contains a directory with several files (different > >> modules and > >> >>>>> the main), and the Makefile that goes with it, is that ok ? > >> >>>>> > >> >>>>> Thibault Bridel-Bertomeu > >> >>>>> > >> >>>>> > >> >>>>> Le mer. 16 d?c. 2020 ? 04:46, Barry Smith a > >> ?crit : > >> >>>>> > >> >>>>>> > >> >>>>>> This is great. If you make a branch off of the PETSc git > >> repository > >> >>>>>> with these additions and work on ex11 you can make a merge > request > >> and we > >> >>>>>> can run the code easily on all our test systems (for security > >> reasons one > >> >>>>>> of use needs to launch the tests from your MR). > >> >>>>>> https://docs.petsc.org/en/latest/developers/integration/ > >> >>>>>> > >> >>>>>> Barry > >> >>>>>> > >> >>>>>> > >> >>>>>> On Dec 15, 2020, at 5:35 AM, Thibault Bridel-Bertomeu < > >> >>>>>> thibault.bridelbertomeu at gmail.com> wrote: > >> >>>>>> > >> >>>>>> Hello everyone, > >> >>>>>> > >> >>>>>> So far, I have the wrappers in the files attached to this > e-mail. I > >> >>>>>> still do not know if they work properly - at least the code > >> compiles and > >> >>>>>> the calls to the wrapped-subroutine do not fail - but I wanted to > >> put this > >> >>>>>> here in case someone sees something really wrong with it already. > >> >>>>>> > >> >>>>>> Thank you again for your help, I'll try to post updates of the > F90 > >> >>>>>> version of ex11 regularly in this thread. > >> >>>>>> > >> >>>>>> Stay safe, > >> >>>>>> > >> >>>>>> Thibault Bridel-Bertomeu > >> >>>>>> > >> >>>>>> Le dim. 13 d?c. 2020 ? 16:39, Jed Brown a > ?crit > >> : > >> >>>>>> > >> >>>>>>> Thibault Bridel-Bertomeu > >> writes: > >> >>>>>>> > >> >>>>>>> > Thank you Mark for your answer. > >> >>>>>>> > > >> >>>>>>> > I am not sure what you think could be in the setBC1 routine ? > How > >> >>>>>>> to make > >> >>>>>>> > the connection with the PetscDS ? > >> >>>>>>> > > >> >>>>>>> > On the other hand, I actually found after a while TSMonitorSet > >> has a > >> >>>>>>> > fortran wrapper, and it does take as arguments two function > >> >>>>>>> pointers, so I > >> >>>>>>> > guess it is possible ? Although I am not sure exactly how to > play > >> >>>>>>> with the > >> >>>>>>> > PetscObjectSetFortranCallback & PetscObjectUseFortranCallback > >> >>>>>>> macros - > >> >>>>>>> > could anybody advise please ? > >> >>>>>>> > >> >>>>>>> tsmonitorset_ is a good example to follow. In your file, create > one > >> >>>>>>> of these static structs with a member for each callback. These > are > >> IDs that > >> >>>>>>> will be used as keys for Fortran callbacks and their contexts. > The > >> salient > >> >>>>>>> parts of the file are below. 
> >> >>>>>>> > >> >>>>>>> static struct { > >> >>>>>>> PetscFortranCallbackId prestep; > >> >>>>>>> PetscFortranCallbackId poststep; > >> >>>>>>> PetscFortranCallbackId rhsfunction; > >> >>>>>>> PetscFortranCallbackId rhsjacobian; > >> >>>>>>> PetscFortranCallbackId ifunction; > >> >>>>>>> PetscFortranCallbackId ijacobian; > >> >>>>>>> PetscFortranCallbackId monitor; > >> >>>>>>> PetscFortranCallbackId mondestroy; > >> >>>>>>> PetscFortranCallbackId transform; > >> >>>>>>> #if defined(PETSC_HAVE_F90_2PTR_ARG) > >> >>>>>>> PetscFortranCallbackId function_pgiptr; > >> >>>>>>> #endif > >> >>>>>>> } _cb; > >> >>>>>>> > >> >>>>>>> /* > >> >>>>>>> Note ctx is the same as ts so we need to get the Fortran > context > >> >>>>>>> out of the TS; this gets put in _ctx using the callback ID > >> >>>>>>> */ > >> >>>>>>> static PetscErrorCode ourmonitor(TS ts,PetscInt i,PetscReal > d,Vec > >> >>>>>>> v,void *ctx) > >> >>>>>>> { > >> >>>>>>> > >> >>>>>>> > >> > PetscObjectUseFortranCallback(ts,_cb.monitor,(TS*,PetscInt*,PetscReal*,Vec > >> >>>>>>> *,void*,PetscErrorCode*),(&ts,&i,&d,&v,_ctx,&ierr)); > >> >>>>>>> } > >> >>>>>>> > >> >>>>>>> Then follow as in tsmonitorset_, which sets two callbacks. > >> >>>>>>> > >> >>>>>>> PETSC_EXTERN void tsmonitorset_(TS *ts,void > >> >>>>>>> > (*func)(TS*,PetscInt*,PetscReal*,Vec*,void*,PetscErrorCode*),void > >> >>>>>>> *mctx,void (*d)(void*,PetscErrorCode*),PetscErrorCode *ierr) > >> >>>>>>> { > >> >>>>>>> CHKFORTRANNULLFUNCTION(d); > >> >>>>>>> if ((PetscVoidFunction)func == (PetscVoidFunction) > >> >>>>>>> tsmonitordefault_) { > >> >>>>>>> *ierr = TSMonitorSet(*ts,(PetscErrorCode > >> >>>>>>> > >> > (*)(TS,PetscInt,PetscReal,Vec,void*))TSMonitorDefault,*(PetscViewerAndFormat**)mctx,(PetscErrorCode > >> >>>>>>> (*)(void **))PetscViewerAndFormatDestroy); > >> >>>>>>> } else { > >> >>>>>>> *ierr = > >> >>>>>>> > >> > PetscObjectSetFortranCallback((PetscObject)*ts,PETSC_FORTRAN_CALLBACK_CLASS,&_cb.monitor,(PetscVoidFunction)func,mctx); > >> >>>>>>> *ierr = > >> >>>>>>> > >> > PetscObjectSetFortranCallback((PetscObject)*ts,PETSC_FORTRAN_CALLBACK_CLASS,&_cb.mondestroy,(PetscVoidFunction)d,mctx); > >> >>>>>>> *ierr = TSMonitorSet(*ts,ourmonitor,*ts,ourmonitordestroy); > >> >>>>>>> } > >> >>>>>>> } > >> >>>>>>> > >> >>>>>> > >> >>>>>> > >> >>>>>> > >> >>>>>> > >> >>>>> > >> >>>> > >> >> > >> > > -- > > Thibault Bridel-Bertomeu > > ? > > Eng, MSc, PhD > > Research Engineer > > CEA/CESTA > > 33114 LE BARP > > Tel.: (+33)557046924 > > Mob.: (+33)611025322 > > Mail: thibault.bridelbertomeu at gmail.com > -- Thibault Bridel-Bertomeu ? Eng, MSc, PhD Research Engineer CEA/CESTA 33114 LE BARP Tel.: (+33)557046924 Mob.: (+33)611025322 Mail: thibault.bridelbertomeu at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From thibault.bridelbertomeu at gmail.com Mon Dec 28 11:52:28 2020 From: thibault.bridelbertomeu at gmail.com (Thibault Bridel-Bertomeu) Date: Mon, 28 Dec 2020 18:52:28 +0100 Subject: [petsc-users] TS tutorial ex11 in Fortran In-Reply-To: <8735zpde03.fsf@jedbrown.org> References: <1EC72DF8-61BD-482B-B1B0-9427AB1ADF74@petsc.dev> <048DC071-33E5-4644-B1C8-DD598248923A@petsc.dev> <87tuspyaug.fsf@jedbrown.org> <1035FEDB-B297-4E71-8F0F-D44C72707105@petsc.dev> <39E9FBED-6D6B-41E4-983C-58465ACAB4D0@petsc.dev> <877dp2c8ao.fsf@jedbrown.org> <8735zpde03.fsf@jedbrown.org> Message-ID: However in Fortran, for the VTK output, it is true I do not view ts->vec_sol directly but rather a Vec called sol that I use in the TSSolve(ts, sol) call ... 
but I suppose it should be equivalent, shouldn't it ?

On Mon, Dec 28, 2020 at 18:42, Jed Brown wrote:

> I think I'm not following something. The time stepper implementation
> (TSStep_XYZ) is where ts->vec_sol should be updated. The controller doesn't
> have anything to do with it. What TS are you using and how do you know your
> RHS is nonzero?
>
> Thibault Bridel-Bertomeu writes:
>
> > Hi Jed,
> >
> > Thanks for your message.
> > I implemented everything in C as you suggested and it works fine, except
> > for one thing: the ts->vec_sol does not seem to get updated when seen from
> > the C code (it is updated on the Fortran side, though, and the solution is
> > correct).
> > As a result, the time step (which uses, among other things, the max
> > velocity in the domain) always stays at the value it gets from the initial
> > solution.
> > Any idea why ts->vec_sol does not seem to be updated? (I checked: the step
> > number and the time are updated, e.g. when accessed with TSGetTime.)
> >
> > Cheers,
> > Thibault

--
Thibault Bridel-Bertomeu
Eng, MSc, PhD
Research Engineer
CEA/CESTA
33114 LE BARP
Tel.: (+33)557046924
Mob.: (+33)611025322
Mail: thibault.bridelbertomeu at gmail.com
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
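A note on checking this from the C side (an illustrative sketch, not code from
the thread): a post-step hook can compare what TSGetSolution() returns -- that
is ts->vec_sol -- against the Vec that was handed to TSSolve(), and watch
whether its norm changes as the run advances. The variable sol_from_tssolve
below is a stand-in for the solution Vec created on the Fortran side.

#include <petscts.h>

static Vec sol_from_tssolve;  /* assumed: set to the Vec passed to TSSolve(ts, sol) */

static PetscErrorCode CheckSolutionVec(TS ts)
{
  Vec            u;
  PetscReal      nrm, t;
  PetscInt       step;
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = TSGetSolution(ts, &u);CHKERRQ(ierr);        /* this is ts->vec_sol */
  ierr = VecNorm(u, NORM_2, &nrm);CHKERRQ(ierr);
  ierr = TSGetTime(ts, &t);CHKERRQ(ierr);
  ierr = TSGetStepNumber(ts, &step);CHKERRQ(ierr);
  ierr = PetscPrintf(PETSC_COMM_WORLD,
                     "step %D time %g ||u|| %g same Vec as TSSolve argument: %s\n",
                     step, (double)t, (double)nrm,
                     u == sol_from_tssolve ? "yes" : "no");CHKERRQ(ierr);
  PetscFunctionReturn(0);
}

/* registered once before TSSolve():
   ierr = TSSetPostStep(ts, CheckSolutionVec);CHKERRQ(ierr); */

If the time advances while the norm stays frozen, ts->vec_sol really is not
being updated; if "same Vec" prints "no", two different vectors are in play,
which is what Jed suspects in the reply below.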
From jed at jedbrown.org Mon Dec 28 17:16:47 2020
From: jed at jedbrown.org (Jed Brown)
Date: Mon, 28 Dec 2020 16:16:47 -0700
Subject: [petsc-users] TS tutorial ex11 in Fortran
In-Reply-To: 
References: <048DC071-33E5-4644-B1C8-DD598248923A@petsc.dev>
 <87tuspyaug.fsf@jedbrown.org> <1035FEDB-B297-4E71-8F0F-D44C72707105@petsc.dev>
 <39E9FBED-6D6B-41E4-983C-58465ACAB4D0@petsc.dev>
 <877dp2c8ao.fsf@jedbrown.org> <8735zpde03.fsf@jedbrown.org>
Message-ID: <87wnx1bjxc.fsf@jedbrown.org>

I'd need some form of reproducible test case to debug. It sounds like you're
viewing different vectors, but I probably won't be able to track it down
without one. A gdb session might be most helpful to identify which vectors
are where when you get the unexpected condition.

Thibault Bridel-Bertomeu writes:

> However in Fortran, for the VTK output, it is true I do not view
> ts->vec_sol directly but rather a Vec called sol that I use in the
> TSSolve(ts, sol) call ... but I suppose it should be equivalent, shouldn't
> it ?
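For anyone picking up Jed's gdb suggestion, a bare-bones session could look
like the sketch below. The executable name is taken from the backtrace
earlier in the thread; the rest is generic gdb usage (PETSc's documentation
recommends calling VecView on a vector from inside the debugger in exactly
this way).

gdb --args ./eulerian3D
(gdb) break TSStep                   # stop at every time step (src/ts/interface/ts.c)
(gdb) run
(gdb) print ts->vec_sol              # address of the Vec the TS integrates
(gdb) call VecView(ts->vec_sol, 0)   # dump its current entries
(gdb) continue                       # at the next stop, print and view again;
                                     # entries unchanged between steps confirm the symptom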
From hongzhang at anl.gov Mon Dec 28 20:16:09 2020
From: hongzhang at anl.gov (Zhang, Hong)
Date: Tue, 29 Dec 2020 02:16:09 +0000
Subject: [petsc-users] Calculating adjoint of more than one cost function separately
In-Reply-To: <4EB04688-BF1C-4464-A040-CA2FEDE7968C@llnl.gov>
References: <4EB04688-BF1C-4464-A040-CA2FEDE7968C@llnl.gov>
Message-ID: <6C0BF724-117E-4983-A4FA-0E8D437E4A9D@anl.gov>

On Dec 27, 2020, at 5:01 PM, Salazar De Troya, Miguel via petsc-users wrote:

> Hello,
>
> I am interested in calculating the gradients of an optimization problem
> with one goal function and one constraint function, which need TSAdjoint
> for their adjoints.
> I'd like to call each of their adjoints in different calls, but it does not seem to be possible without making compromises.

If you are calculating the derivatives with respect to the same set of parameters, the adjoints of all cost functionals can be done with a single backward run.

> For instance, one could set TSCreateQuadratureTS() and TSSetCostGradients() with different quadratures (and their gradients) for each adjoint call (one at a time). This would evaluate the cost functions in the backward run though, whereas one typically computes the cost functions in a different routine than the adjoint call (like in line-search evaluations).

The second argument of TSCreateQuadratureTS() allows you to choose whether the quadrature is evaluated in the forward run or in the backward run. The choice typically depends on the optimization algorithm. Some optimization algorithms may expect users to provide an objective function and its gradient as a bundle; in this case, the choice does not make a difference. Some other algorithms may occasionally evaluate the objective function without evaluating its gradient; then evaluating the quadrature in the forward run is definitely the better choice.

> One could also set TSCreateQuadratureTS() with the goal and the constraint functions to be evaluated in the forward run (as is typically done when computing the cost function). The problem would be that the adjoint call now requires two sets of gradients for TSSetCostGradients(), and their adjoints are calculated together, costing twice as much if your routines for the cost and the constraint gradients are separate.

You can put the two sets of gradients in vector arrays and pass them to TSSetCostGradients() together. Only one call to TSAdjointSolve() is needed. See the example src/ts/tutorials/ex20adj.c, where we have two independent cost functionals whose adjoints correspond to lambda[0]/mup[0] and lambda[1]/mup[1], respectively. After performing a TSAdjointSolve(), you will get the gradients of both cost functionals.

> The only solution I can think of is to set TSCreateQuadratureTS() with both the goal and constraint functions in the forward run. Then, in the adjoint calls, reset TSCreateQuadratureTS() with just the cost function I am interested in (either the goal or the constraint) and set just a single TSSetCostGradients(). Will this work? Are there better alternatives?

TSCreateQuadratureTS() is needed only when you have integral terms in the cost functionals. It has nothing to do with the procedure for computing the adjoints of multiple cost functionals simultaneously. Do you have integrals in both the goal and the constraint? If so, you can create one quadrature TS and evaluate both integrals together. For example, you may have r[0] (the first element of the output vector in your cost integrand) for the goal and r[1] for the constraint. Just be careful that the adjoint variables (the arrays lambda[]/mup[]) must be organized in the same order.
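For concreteness, a minimal sketch of the single-backward-run approach described above (hypothetical variable names; it assumes the forward TSSolve() has completed and that trajectory saving was enabled beforehand with TSSetSaveTrajectory()):

  Vec lambda[2], mu[2]; /* one adjoint-seed pair per cost functional */
  /* each lambda[i] has the size of the state, each mu[i] the size of the
     parameters; seed lambda[i] with d(Psi_i)/dy at the final time, zero mu[i] */
  ierr = TSSetCostGradients(ts,2,lambda,mu);CHKERRQ(ierr);
  ierr = TSAdjointSolve(ts);CHKERRQ(ierr);
  /* on return, lambda[0]/mu[0] hold the gradients of the goal and
     lambda[1]/mu[1] those of the constraint */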
> Even if successful, there is the problem that the trajectory goes back to the beginning when we perform a TSAdjointSolve() call. Subsequent calls to TSAdjointSolve() (for instance for another cost function) are invalid because the trajectory is not set at the end of the simulation. One needs to call the forward problem to bring it back to the end. Is there a quick way to set the trajectory state to the last time step without having to run the forward problem? I am attaching an example to illustrate this issue. One can uncomment lines 120-122 to obtain the right value of the derivative.

Most likely you need only one call to TSAdjointSolve(). Reusing the trajectory for multiple calls is also doable, but I doubt you would need it.

Hong (Mr.)

> Thanks
> Miguel
>
> Miguel A. Salazar de Troya
> Postdoctoral Researcher, Lawrence Livermore National Laboratory
> B141
> Rm: 1085-5
> Ph: 1(925) 422-6411

From salazardetro1 at llnl.gov Mon Dec 28 21:31:41 2020
From: salazardetro1 at llnl.gov (Salazar De Troya, Miguel)
Date: Tue, 29 Dec 2020 03:31:41 +0000
Subject: [petsc-users] Calculating adjoint of more than one cost function separately
In-Reply-To: <6C0BF724-117E-4983-A4FA-0E8D437E4A9D@anl.gov>
References: <4EB04688-BF1C-4464-A040-CA2FEDE7968C@llnl.gov> <6C0BF724-117E-4983-A4FA-0E8D437E4A9D@anl.gov>
Message-ID: <8F46B598-8E78-410D-B682-5A531AFB0A08@llnl.gov>

Hello,

Thanks for your response, Hong. I see that all cost functionals can be evaluated in a single backward run. However, I want to do it separately: I want to isolate the evaluation of the gradients for each cost functional. Can you please elaborate on how to reuse the trajectory for multiple calls? Specifically, how do I set the trajectory back to the end so I can call TSAdjointSolve() again?

Miguel
From bsmith at petsc.dev Mon Dec 28 21:39:37 2020
From: bsmith at petsc.dev (Barry Smith)
Date: Mon, 28 Dec 2020 21:39:37 -0600
Subject: [petsc-users] TS tutorial ex11 in Fortran
In-Reply-To:
References: <1EC72DF8-61BD-482B-B1B0-9427AB1ADF74@petsc.dev> <048DC071-33E5-4644-B1C8-DD598248923A@petsc.dev> <87tuspyaug.fsf@jedbrown.org> <1035FEDB-B297-4E71-8F0F-D44C72707105@petsc.dev> <39E9FBED-6D6B-41E4-983C-58465ACAB4D0@petsc.dev> <877dp2c8ao.fsf@jedbrown.org> <8735zpde03.fsf@jedbrown.org>
Message-ID:

> On Dec 28, 2020, at 11:52 AM, Thibault Bridel-Bertomeu <thibault.bridelbertomeu at gmail.com> wrote:
>
> However in Fortran, for the VTK output, it is true I do not view ts->vec_sol directly but rather a Vec called sol that I use in the TSSolve(ts, sol) call ... but I suppose it should be equivalent, shouldn't it?

This can be tricky. As the integrator runs, it may not always be updating the vector you pass into TSSolve(). It is only when TSSolve() returns that the values in sol are guaranteed to match the TS solution. It is better if your VTK output code calls something to get the current solution rather than using your Fortran sol vector.

For example, if you register a monitor function, one of its arguments is always the correct current solution. I think you can also call TSGetSolution() to return a vector that contains the current solution.

Barry
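A minimal sketch of the monitor approach described above (the PetscViewer vtkviewer is assumed to have been created by the user; the callback signature is the standard one expected by TSMonitorSet()):

  static PetscErrorCode MyVTKMonitor(TS ts,PetscInt step,PetscReal time,Vec u,void *ctx)
  {
    PetscErrorCode ierr;
    /* u is the up-to-date solution at this step, unlike the Vec handed to TSSolve() */
    ierr = VecView(u,(PetscViewer)ctx);CHKERRQ(ierr);
    return 0;
  }

  /* registration, e.g. in main(), before TSSolve(): */
  ierr = TSMonitorSet(ts,MyVTKMonitor,(void*)vtkviewer,NULL);CHKERRQ(ierr);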
> On Mon, Dec 28, 2020 at 18:42, Jed Brown wrote:
>
> I think I'm not following something. The time stepper implementation (TSStep_XYZ) is where ts->vec_sol should be updated. The controller doesn't have anything to do with it. What TS are you using, and how do you know your RHS is nonzero?
>
> Thibault Bridel-Bertomeu writes:
>
> > Hi Jed,
> >
> > Thanks for your message. I implemented everything in C as you suggested and it works fine except for one thing: ts->vec_sol does not seem to get updated when seen from the C code (on the Fortran side, though, the solution is correct). As a result, the time step (which uses, among other things, the max velocity in the domain) stays at the value it gets from the initial solution. Any idea why ts->vec_sol does not seem to be updated? (I checked that the step number and the time are updated, though, when accessed with TSGetTime for instance.)
> >
> > Cheers,
> > Thibault
>
> On Mon, Dec 28, 2020 at 15:30, Jed Brown wrote:
>
> Thibault Bridel-Bertomeu writes:
>
> > Good morning everyone,
> >
> > Thank you Barry for the answer, it works now!
> >
> > I am facing (yet) another situation: the TSAdaptRegister function. In the MR on gitlab, Jed mentioned that sometimes, when function pointers are not stored in PETSc objects, one can use stack memory to pass that pointer from Fortran to C.
>
> The issue with stack memory is that when it returns, that memory is invalid. You can't use it in this instance.
>
> I think you're going to have problems implementing a TSAdaptCreate_XYZ in Fortran (because the body of that function will need to access private struct members; see below).
>
> I would implement what you need in C, and you can call out to Fortran if you want from inside TSAdaptChoose_YourMethod().
>
> PETSC_EXTERN PetscErrorCode TSAdaptCreate_DSP(TSAdapt adapt)
> {
>   TSAdapt_DSP    *dsp;
>   PetscErrorCode ierr;
>
>   PetscFunctionBegin;
>   ierr = PetscNewLog(adapt,&dsp);CHKERRQ(ierr);
>   adapt->reject_safety = 1.0; /* unused */
>
>   adapt->data                = (void*)dsp;
>   adapt->ops->choose         = TSAdaptChoose_DSP;
>   adapt->ops->setfromoptions = TSAdaptSetFromOptions_DSP;
>   adapt->ops->destroy        = TSAdaptDestroy_DSP;
>   adapt->ops->view           = TSAdaptView_DSP;
>
>   ierr = PetscObjectComposeFunction((PetscObject)adapt,"TSAdaptDSPSetFilter_C",TSAdaptDSPSetFilter_DSP);CHKERRQ(ierr);
>   ierr = PetscObjectComposeFunction((PetscObject)adapt,"TSAdaptDSPSetPID_C",TSAdaptDSPSetPID_DSP);CHKERRQ(ierr);
>
>   ierr = TSAdaptDSPSetFilter_DSP(adapt,"PI42");CHKERRQ(ierr);
>   ierr = TSAdaptRestart_DSP(adapt);CHKERRQ(ierr);
>   PetscFunctionReturn(0);
> }
>
> > Can anyone develop that idea? Because for TSAdaptRegister, I guess the wrapper would start like:
> >
> > PETSC_EXTERN void tsadaptregister_(char *sname,
> >                                    void (*func)(TSAdapt*,PetscErrorCode*),
> >                                    PetscErrorCode *ierr,
> >                                    PETSC_FORTRAN_CHARLEN_T snamelen)
> >
> > but then the C TSAdaptRegister function takes a PetscErrorCode (*func)(TSAdapt) function pointer as argument ... I cannot use any FORTRAN_CALLBACK here since I do not have any object to hook it to, and I could not find a similar situation among the pre-existing wrappers. Does anyone have an idea on how to proceed?
> >
> > Thanks !!
> >
> > Thibault
>
> On Tue, Dec 22, 2020 at 21:20, Barry Smith wrote:
>
>   PetscObjectUseFortranCallback((PetscDS)ctx,
>
>   *ierr = PetscObjectSetFortranCallback((PetscObject)*prob
>
> It looks like the problem is that these user-provided functions do not take a PetscDS directly as an argument, so the Fortran callback information cannot be obtained from them.
>
> The manual page for PetscDSAddBoundary() says
>
>   ctx - An optional user context for bcFunc
>
> but then when it lists the calling sequence for bcFunc it does not list the ctx as an argument, so either the manual page or the code is wrong.
>
> It looks like you make the ctx be the PetscDS prob argument when you call PetscDSAddBoundary.
>
> In principle this sounds like it might work. I think you need to track through the debugger to see if the ctx passed to ourbocofunc() is actually the PetscDS prob variable, and if not, why it is not.
>
> Barry
>
> On Dec 22, 2020, at 5:49 AM, Thibault Bridel-Bertomeu <thibault.bridelbertomeu at gmail.com> wrote:
>
> Dear all,
>
> I have hit two snags while implementing the missing wrappers necessary to transcribe ex11 to Fortran.
>
> The first is about the PetscDSAddBoundary wrapper, which I have written like so:
>
> static PetscErrorCode ourbocofunc(PetscReal time, const PetscReal *c, const PetscReal *n, const PetscScalar *a_xI, const PetscScalar *a_xG, void *ctx)
> {
>   PetscObjectUseFortranCallback((PetscDS)ctx, bocofunc,
>                                 (PetscReal*,const PetscReal*,const PetscReal*,const PetscScalar*,const PetscScalar*,void*,PetscErrorCode*),
>                                 (&time,c,n,a_xI,a_xG,ctx,&ierr));
> }
> static PetscErrorCode ourbocofunc_time(PetscReal time, const PetscReal *c, const PetscReal *n, const PetscScalar *a_xI, const PetscScalar *a_xG, void *ctx)
> {
>   PetscObjectUseFortranCallback((PetscDS)ctx, bocofunc_time,
>                                 (PetscReal*,const PetscReal*,const PetscReal*,const PetscScalar*,const PetscScalar*,void*,PetscErrorCode*),
>                                 (&time,c,n,a_xI,a_xG,ctx,&ierr));
> }
> PETSC_EXTERN void petscdsaddboundary_(PetscDS *prob, DMBoundaryConditionType *type, char *name, char *labelname, PetscInt *field, PetscInt *numcomps, PetscInt *comps,
>                                       void (*bcFunc)(void),
>                                       void (*bcFunc_t)(void),
>                                       PetscInt *numids, const PetscInt *ids, void *ctx, PetscErrorCode *ierr,
>                                       PETSC_FORTRAN_CHARLEN_T namelen, PETSC_FORTRAN_CHARLEN_T labelnamelen)
> {
>   char *newname, *newlabelname;
>   FIXCHAR(name, namelen, newname);
>   FIXCHAR(labelname, labelnamelen, newlabelname);
>   *ierr = PetscObjectSetFortranCallback((PetscObject)*prob, PETSC_FORTRAN_CALLBACK_CLASS, &bocofunc, (PetscVoidFunction)bcFunc, ctx);
>   *ierr = PetscObjectSetFortranCallback((PetscObject)*prob, PETSC_FORTRAN_CALLBACK_CLASS, &bocofunc_time, (PetscVoidFunction)bcFunc_t, ctx);
>   *ierr = PetscDSAddBoundary(*prob, *type, newname, newlabelname,
>                              *field, *numcomps, comps,
>                              (void (*)(void))ourbocofunc,
>                              (void (*)(void))ourbocofunc_time,
>                              *numids, ids, *prob);
>   FREECHAR(name, newname);
>   FREECHAR(labelname, newlabelname);
> }
>
> but when I call it in the program, with adequate routines, I obtain the following error:
>
> [0]PETSC ERROR: --------------------- Error Message --------------------------------------------------------------
> [0]PETSC ERROR: Corrupt argument: https://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind
> [0]PETSC ERROR: Fortran callback not set on this object
> [0]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting.
> [0]PETSC ERROR: Petsc Development GIT revision: v3.14.2-297-gf36a7edeb8  GIT Date: 2020-12-18 04:42:53 +0000
> [0]PETSC ERROR: ../../../bin/eulerian3D on a named macbook-pro-de-thibault.home by tbridel Sun Dec 20 15:05:15 2020
> [0]PETSC ERROR: Configure options --with-clean=0 --prefix=/Users/tbridel/Documents/1-CODES/04-PETSC/build --with-make-np=2 --with-windows-graphics=0 --with-debugging=0 --download-fblaslapack --download-mpich-shared=0 --with-x=0 --with-pthread=0 --with-valgrind=0 --PETSC_ARCH=macosx --with-fc=/usr/local/Cellar/open-mpi/4.0.1_2/bin/mpifort --with-cc=/usr/local/Cellar/open-mpi/4.0.1_2/bin/mpicc --with-cxx=/usr/local/Cellar/open-mpi/4.0.1_2/bin/mpic++ --with-openmp=0 --download-hypre=yes --download-sowing=yes --download-metis=yes --download-parmetis=yes --download-triangle=yes --download-tetgen=yes --download-ctetgen=yes --download-p4est=yes --download-zlib=yes --download-c2html=yes --download-eigen=yes --download-pragmatic=yes --with-hdf5-dir=/usr/local/Cellar/hdf5/1.10.5_1 --with-cmake-dir=/usr/local/Cellar/cmake/3.15.3
> [0]PETSC ERROR: #1 PetscObjectGetFortranCallback() line 258 in /Users/tbridel/Documents/1-CODES/04-PETSC/src/sys/objects/inherit.c
> [0]PETSC ERROR: #2 ourbocofunc() line 141 in /Users/tbridel/Documents/1-CODES/59-EULERIAN3D/sources/petsc_wrapping/wrapper_petsc.c
> [0]PETSC ERROR: #3 DMPlexInsertBoundaryValuesRiemann() line 989 in /Users/tbridel/Documents/1-CODES/04-PETSC/src/dm/impls/plex/plexfem.c
> [0]PETSC ERROR: #4 DMPlexInsertBoundaryValues_Plex() line 1052 in /Users/tbridel/Documents/1-CODES/04-PETSC/src/dm/impls/plex/plexfem.c
> [0]PETSC ERROR: #5 DMPlexInsertBoundaryValues() line 1142 in /Users/tbridel/Documents/1-CODES/04-PETSC/src/dm/impls/plex/plexfem.c
> [0]PETSC ERROR: #6 DMPlexComputeResidual_Internal() line 4524 in /Users/tbridel/Documents/1-CODES/04-PETSC/src/dm/impls/plex/plexfem.c
> [0]PETSC ERROR: #7 DMPlexTSComputeRHSFunctionFVM() line 74 in /Users/tbridel/Documents/1-CODES/04-PETSC/src/ts/utils/dmplexts.c
> [0]PETSC ERROR: #8 ourdmtsrhsfunc() line 186 in /Users/tbridel/Documents/1-CODES/59-EULERIAN3D/sources/petsc_wrapping/wrapper_petsc.c
> [0]PETSC ERROR: #9 TSComputeRHSFunction_DMLocal() line 105 in /Users/tbridel/Documents/1-CODES/04-PETSC/src/ts/utils/dmlocalts.c
> [0]PETSC ERROR: #10 TSComputeRHSFunction() line 653 in /Users/tbridel/Documents/1-CODES/04-PETSC/src/ts/interface/ts.c
> [0]PETSC ERROR: #11 TSSSPStep_RK_3() line 120 in /Users/tbridel/Documents/1-CODES/04-PETSC/src/ts/impls/explicit/ssp/ssp.c
> [0]PETSC ERROR: #12 TSStep_SSP() line 208 in /Users/tbridel/Documents/1-CODES/04-PETSC/src/ts/impls/explicit/ssp/ssp.c
> [0]PETSC ERROR: #13 TSStep() line 3757 in /Users/tbridel/Documents/1-CODES/04-PETSC/src/ts/interface/ts.c
> [0]PETSC ERROR: #14 TSSolve() line 4154 in /Users/tbridel/Documents/1-CODES/04-PETSC/src/ts/interface/ts.c
> [0]PETSC ERROR: #15 User provided function() line 0 in User file
>
> The second is about the DMProjectFunction wrapper, which I have written like so:
>
> static PetscErrorCode ourdmprojfunc(PetscInt dim, PetscReal time, PetscReal x[], PetscInt Nf, PetscScalar u[], void *ctx)
> {
>   PetscObjectUseFortranCallback((DM)ctx, dmprojfunc,
>                                 (PetscInt*,PetscReal*,PetscReal*,PetscInt*,PetscScalar*,void*,PetscErrorCode*),
>                                 (&dim,&time,x,&Nf,u,_ctx,&ierr))
> }
> PETSC_EXTERN void dmprojectfunction_(DM *dm, PetscReal *time,
>                                      void (*func)(PetscInt*,PetscReal*,PetscReal*,PetscInt*,PetscScalar*,void*,PetscErrorCode*),
>                                      void *ctx, InsertMode *mode, Vec X, PetscErrorCode *ierr)
> {
>   PetscErrorCode (*funcarr[1])(PetscInt dim, PetscReal time, PetscReal x[], PetscInt Nf, PetscScalar *u, void *ctx);
>   *ierr = PetscObjectSetFortranCallback((PetscObject)*dm, PETSC_FORTRAN_CALLBACK_CLASS, &dmprojfunc, (PetscVoidFunction)func, ctx);
>   funcarr[0] = ourdmprojfunc;
>   *ierr = DMProjectFunction(*dm, *time, funcarr, &ctx, *mode, X);
> }
>
> This time there is no error, because I cannot reach this point in the program, but I am not sure anyway how to write this wrapper, especially because of the double pointers that DMProjectFunction takes as arguments.
>
> Does anyone have any idea what could be going wrong with those two wrappers?
>
> Thank you very much in advance !!
>
> Thibault
>
> On Fri, Dec 18, 2020 at 11:02, Thibault Bridel-Bertomeu wrote:
>
> Aah that is a nice trick, I was getting ready to fork, clone the fork and redo the work, but that worked fine! Thank you Barry!
>
> The MR will appear in a little while!
>
> Thibault
>
> On Fri, Dec 18, 2020 at 10:16, Barry Smith wrote:
>
> Good question. There is a trick to limit the amount of work you need to do with a new fork after you have already made changes with a PETSc clone, but it looks like we do not document this clearly in the webpages. (I couldn't find it.)
>
> Yes, you do need to make a fork, but after you have made the fork on the GitLab website (and have done nothing on your machine), edit the file $PETSC_DIR/.git/config for your clone on your machine.
>
> Locate the line that has url = git at gitlab.com:petsc/petsc.git (this may have an https at the beginning of the line).
>
> Change this line to point to the fork url instead, with git@ not https, which will be pretty much the same URL but with your user id instead of petsc in the address. Then git push and it will push to your fork.
>
> Now your changes will be in your fork and you can make the MR from your fork URL on GitLab. (In other words, this editing trick converts your PETSc clone on your machine into a PETSc fork.)
>
> I hope I have explained this clearly enough that it goes smoothly.
>
> Barry
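For reference, the same remote switch can be done from the command line instead of editing .git/config by hand; a small sketch, with your-username standing in for the GitLab account that owns the fork:

  git remote set-url origin git@gitlab.com:your-username/petsc.git
  git remote -v   # check that origin now points at the fork
  git push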
> On Dec 18, 2020, at 3:00 AM, Thibault Bridel-Bertomeu <thibault.bridelbertomeu at gmail.com> wrote:
>
> Hello Barry,
>
> I'll start the MR as soon as possible then, so that specialists can indeed have a look. Do I have to fork PETSc to start an MR, or are the PETSc repo settings such that I can push a branch from the PETSc clone I got?
>
> Thibault
>
> On Wed, Dec 16, 2020 at 07:47, Barry Smith wrote:
>
> Thibault,
>
> A subdirectory for the example is fine; we have other examples that use subdirectories and multiple files.
>
> Note: even if you don't have something completely working, you can still make the MR and list it as a DRAFT request for comments; some other PETSc members, who understand the packages you are using and Fortran better than I, may be able to help as you develop the code.
>
> Barry
From hongzhang at anl.gov Mon Dec 28 22:43:43 2020
From: hongzhang at anl.gov (Zhang, Hong)
Date: Tue, 29 Dec 2020 04:43:43 +0000
Subject: [petsc-users] Calculating adjoint of more than one cost function separately
In-Reply-To: <8F46B598-8E78-410D-B682-5A531AFB0A08@llnl.gov>
References: <4EB04688-BF1C-4464-A040-CA2FEDE7968C@llnl.gov> <6C0BF724-117E-4983-A4FA-0E8D437E4A9D@anl.gov> <8F46B598-8E78-410D-B682-5A531AFB0A08@llnl.gov>
Message-ID: <92E53E99-7596-4C9C-A0D8-23350F11EBF5@anl.gov>

On Dec 28, 2020, at 9:31 PM, Salazar De Troya, Miguel wrote:

> Hello,
>
> Thanks for your response, Hong. I see that all cost functionals are evaluated in a single backward run.

All gradients, not necessarily the cost functionals.

> However, I want to do it separately. I want to isolate the evaluation of the gradients for each cost functional.

What is the motivation for doing multiple TSAdjointSolve() calls in your case? Note that evaluating the gradients in one call is more efficient because you do not have to load the same checkpoints multiple times.

> Can you please elaborate on how to reuse the trajectory for multiple calls? Specifically, how to set the trajectory back to the end so I can call TSAdjointSolve() again?

This is the last thing you want to do. Before each adjoint run, you can reset the TS into the same state as when the forward run ended by specifying the final time, the step size and the step number. You will be limited to using disk (the default option) for checkpointing. Here is an example modified from ex20adj.c:

diff --git a/src/ts/tutorials/ex20adj.c b/src/ts/tutorials/ex20adj.c
index 8ca9e0b7ba..e185bc4721 100644
--- a/src/ts/tutorials/ex20adj.c
+++ b/src/ts/tutorials/ex20adj.c
@@ -277,6 +277,10 @@ int main(int argc,char **argv)
   ierr = TSGetSolveTime(ts,&user.ftime);CHKERRQ(ierr);
   ierr = TSGetStepNumber(ts,&user.steps);CHKERRQ(ierr);

+  for (PetscInt iter=1; iter<3; iter++) {
+  ierr = TSSetTime(ts,user.ftime);CHKERRQ(ierr);
+  ierr = TSSetTimeStep(ts,0.001);CHKERRQ(ierr);
+  ierr = TSSetStepNumber(ts,user.steps);CHKERRQ(ierr);
   /* - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
      Adjoint model starts here
      - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - */
@@ -321,7 +325,7 @@ int main(int argc,char **argv)
   ierr = VecRestoreArray(user.mup[1],&x_ptr);CHKERRQ(ierr);
   ierr = VecRestoreArray(user.lambda[1],&y_ptr);CHKERRQ(ierr);
   ierr = PetscPrintf(PETSC_COMM_WORLD,"\n sensivitity wrt parameters: d[z(tf)]/d[mu]\n%g\n",(double)PetscRealPart(derp));CHKERRQ(ierr);
-
+  }
   /* - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
      Free work space.  All PETSc objects should be destroyed when they
      are no longer needed.

Hong (Mr.)
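Pulled out of the diff, the reset sequence is just the following sketch (user.ftime and user.steps must hold the values recorded at the end of the forward run; 0.001 is this particular example's final step size):

  ierr = TSSetTime(ts,user.ftime);CHKERRQ(ierr);        /* final time of the forward run */
  ierr = TSSetTimeStep(ts,0.001);CHKERRQ(ierr);         /* last step size used */
  ierr = TSSetStepNumber(ts,user.steps);CHKERRQ(ierr);  /* number of forward steps taken */
  ierr = TSAdjointSolve(ts);CHKERRQ(ierr);              /* can now be called again */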
From thibault.bridelbertomeu at gmail.com Tue Dec 29 00:49:47 2020
From: thibault.bridelbertomeu at gmail.com (Thibault Bridel-Bertomeu)
Date: Tue, 29 Dec 2020 07:49:47 +0100
Subject: [petsc-users] TS tutorial ex11 in Fortran
In-Reply-To:
References: <1EC72DF8-61BD-482B-B1B0-9427AB1ADF74@petsc.dev> <048DC071-33E5-4644-B1C8-DD598248923A@petsc.dev> <87tuspyaug.fsf@jedbrown.org> <1035FEDB-B297-4E71-8F0F-D44C72707105@petsc.dev> <39E9FBED-6D6B-41E4-983C-58465ACAB4D0@petsc.dev> <877dp2c8ao.fsf@jedbrown.org> <8735zpde03.fsf@jedbrown.org>
Message-ID:

Oh okay, from a monitor function it always uses the latest up-to-date solution! Then my mistake: when I visualize the VTK it comes from a monitor, so it must be the latest solution and not the Vec sol as I was saying, sorry. I'll check out why, in the C part, TSGetSolution always yields the initial vector rather than the current one!

I'll keep you in the loop! Thanks !!

--
Thibault Bridel-Bertomeu
Eng, MSc, PhD
Research Engineer
CEA/CESTA
33114 LE BARP
Tel.: (+33)557046924
Mob.: (+33)611025322
Mail: thibault.bridelbertomeu at gmail.com
>> >> > In the MR on gitlab, Jed mentioned that sometimes, when function >> pointers >> >> > are not stored in PETSc objects, one can use stack memory to pass >> that >> >> > pointer from fortran to C. >> >> >> >> The issue with stack memory is that when it returns, that memory is >> >> invalid. You can't use it in this instance. >> >> >> >> I think you're going to have problems implementing a TSAdaptCreate_XYZ >> in >> >> Fortran (because the body of that function will need to access private >> >> struct members; see below). >> >> >> >> I would implement what you need in C and you can call out to Fortran if >> >> you want from inside TSAdaptChoose_YourMethod(). >> >> >> >> PETSC_EXTERN PetscErrorCode TSAdaptCreate_DSP(TSAdapt adapt) >> >> { >> >> TSAdapt_DSP *dsp; >> >> PetscErrorCode ierr; >> >> >> >> PetscFunctionBegin; >> >> ierr = PetscNewLog(adapt,&dsp);CHKERRQ(ierr); >> >> adapt->reject_safety = 1.0; /* unused */ >> >> >> >> adapt->data = (void*)dsp; >> >> adapt->ops->choose = TSAdaptChoose_DSP; >> >> adapt->ops->setfromoptions = TSAdaptSetFromOptions_DSP; >> >> adapt->ops->destroy = TSAdaptDestroy_DSP; >> >> adapt->ops->view = TSAdaptView_DSP; >> >> >> >> ierr = >> >> >> PetscObjectComposeFunction((PetscObject)adapt,"TSAdaptDSPSetFilter_C",TSAdaptDSPSetFilter_DSP);CHKERRQ(ierr); >> >> ierr = >> >> >> PetscObjectComposeFunction((PetscObject)adapt,"TSAdaptDSPSetPID_C",TSAdaptDSPSetPID_DSP);CHKERRQ(ierr); >> >> >> >> ierr = TSAdaptDSPSetFilter_DSP(adapt,"PI42");CHKERRQ(ierr); >> >> ierr = TSAdaptRestart_DSP(adapt);CHKERRQ(ierr); >> >> PetscFunctionReturn(0); >> >> } >> >> >> >> > Can anyone develop that idea ? Because for TSAdaptRegister, i guess >> the >> >> > wrapper would start like : >> >> > >> >> > PETSC_EXTERN void tsadaptregister_(char *sname, >> >> > void >> >> (*func)(TSAdapt*,PetscErrorCode*), >> >> > PetscErrorCode *ierr, >> >> > PETSC_FORTRAN_CHARLEN_T snamelen) >> >> > >> >> > but then the C TSAdaptRegister function takes a PetscErrorCode >> >> > (*func)(TSAdapt) function pointer as argument ... I cannot use any >> >> > FORTRAN_CALLBACK here since I do not have any object to hook it to, >> and I >> >> > could not find a similar situation among the pre-existing wrappers. >> Does >> >> > anyone have an idea on how to proceed ? >> >> > >> >> > Thanks !! >> >> > >> >> > Thibault >> >> > >> >> > Le mar. 22 d?c. 2020 ? 21:20, Barry Smith a >> ?crit : >> >> > >> >> >> >> >> >> PetscObjectUseFortranCallback((PetscDS)ctx, >> >> >> >> >> >> >> >> >> *ierr = PetscObjectSetFortranCallback((PetscObject)*prob >> >> >> >> >> >> >> >> >> It looks like the problem is that these user provided functions >> do not >> >> >> take a PetscDS directly as an argument so the Fortran callback >> >> information >> >> >> cannot be obtained from them. >> >> >> >> >> >> The manual page for PetscDSAddBoundary() says >> >> >> >> >> >> - ctx - An optional user context for bcFunc >> >> >> >> >> >> but then when it lists the calling sequence for bcFunc it does not >> list >> >> >> the ctx as an argument, so either the manual page or code is wrong. >> >> >> >> >> >> It looks like you make the ctx be the PetscDS prob argument when >> you >> >> >> call PetscDSAddBoundary >> >> >> >> >> >> In principle this sounds like it might work. I think you need to >> track >> >> >> through the debugger to see if the ctx passed to ourbocofunc() is >> >> >> actually the PetscDS prob variable and if not why it is not. 
>> >> >> >> >> >> Barry >> >> >> >> >> >> >> >> >> On Dec 22, 2020, at 5:49 AM, Thibault Bridel-Bertomeu < >> >> >> thibault.bridelbertomeu at gmail.com> wrote: >> >> >> >> >> >> Dear all, >> >> >> >> >> >> I have hit two snags while implementing the missing wrappers >> necessary >> >> to >> >> >> transcribe ex11 to Fortran. >> >> >> >> >> >> First is about the PetscDSAddBoundary wrapper, that I have done so : >> >> >> >> >> >> static PetscErrorCode ourbocofunc(PetscReal time, const PetscReal >> *c, >> >> >> const PetscReal *n, const PetscScalar *a_xI, const PetscScalar >> *a_xG, >> >> void >> >> >> *ctx) >> >> >> { >> >> >> PetscObjectUseFortranCallback((PetscDS)ctx, bocofunc, >> >> >> (PetscReal*,const PetscReal*,const >> >> >> PetscReal*,const PetscScalar*,const >> PetscScalar*,void*,PetscErrorCode*), >> >> >> (&time,c,n,a_xI,a_xG,ctx,&ierr)); >> >> >> } >> >> >> static PetscErrorCode ourbocofunc_time(PetscReal time, const >> PetscReal >> >> *c, >> >> >> const PetscReal *n, const PetscScalar *a_xI, const PetscScalar >> *a_xG, >> >> void >> >> >> *ctx) >> >> >> { >> >> >> PetscObjectUseFortranCallback((PetscDS)ctx, bocofunc_time, >> >> >> (PetscReal*,const PetscReal*,const >> >> >> PetscReal*,const PetscScalar*,const >> PetscScalar*,void*,PetscErrorCode*), >> >> >> (&time,c,n,a_xI,a_xG,ctx,&ierr)); >> >> >> } >> >> >> PETSC_EXTERN void petscdsaddboundary_(PetscDS *prob, >> >> >> DMBoundaryConditionType *type, char *name, char *labelname, PetscInt >> >> >> *field, PetscInt *numcomps, PetscInt *comps, >> >> >> void (*bcFunc)(void), >> >> >> void (*bcFunc_t)(void), >> >> >> PetscInt *numids, const >> PetscInt >> >> >> *ids, void *ctx, PetscErrorCode *ierr, >> >> >> PETSC_FORTRAN_CHARLEN_T >> namelen, >> >> >> PETSC_FORTRAN_CHARLEN_T labelnamelen) >> >> >> { >> >> >> char *newname, *newlabelname; >> >> >> FIXCHAR(name, namelen, newname); >> >> >> FIXCHAR(labelname, labelnamelen, newlabelname); >> >> >> *ierr = PetscObjectSetFortranCallback((PetscObject)*prob, >> >> >> PETSC_FORTRAN_CALLBACK_CLASS, &bocofunc, >> (PetscVoidFunction)bcFunc, >> >> >> ctx); >> >> >> *ierr = PetscObjectSetFortranCallback((PetscObject)*prob, >> >> >> PETSC_FORTRAN_CALLBACK_CLASS, &bocofunc_time, >> >> (PetscVoidFunction)bcFunc_t, >> >> >> ctx); >> >> >> *ierr = PetscDSAddBoundary(*prob, *type, newname, newlabelname, >> >> >> *field, *numcomps, comps, >> >> >> (void (*)(void))ourbocofunc, >> >> >> (void (*)(void))ourbocofunc_time, >> >> >> *numids, ids, *prob); >> >> >> FREECHAR(name, newname); >> >> >> FREECHAR(labelname, newlabelname); >> >> >> } >> >> >> >> >> >> >> >> >> >> >> >> but when I call it in the program, with adequate routines, I >> obtain the >> >> >> following error : >> >> >> >> >> >> [0]PETSC ERROR: --------------------- Error Message >> >> --------------------------------------------------------------[0]PETSC >> >> ERROR: Corrupt argument: >> >> https://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind[0]PETSC >> >> ERROR: Fortran callback not set on this object[0]PETSC ERROR: See >> >> https://www.mcs.anl.gov/petsc/documentation/faq.html for trouble >> >> shooting.[0]PETSC ERROR: Petsc Development GIT revision: >> >> v3.14.2-297-gf36a7edeb8 GIT Date: 2020-12-18 04:42:53 +0000[0]PETSC >> ERROR: >> >> ../../../bin/eulerian3D on a named macbook-pro-de-thibault.home by >> tbridel >> >> Sun Dec 20 15:05:15 2020[0]PETSC ERROR: Configure options >> --with-clean=0 >> >> --prefix=/Users/tbridel/Documents/1-CODES/04-PETSC/build >> --with-make-np=2 >> >> --with-windows-graphics=0 --with-debugging=0 
--download-fblaslapack >> >> --download-mpich-shared=0 --with-x=0 --with-pthread=0 --with-valgrind=0 >> >> --PETSC_ARCH=macosx >> >> --with-fc=/usr/local/Cellar/open-mpi/4.0.1_2/bin/mpifort >> >> --with-cc=/usr/local/Cellar/open-mpi/4.0.1_2/bin/mpicc >> >> --with-cxx=/usr/local/Cellar/open-mpi/4.0.1_2/bin/mpic++ >> --with-openmp=0 >> >> --download-hypre=yes --download-sowing=yes --download-metis=yes >> >> --download-parmetis=yes --download-triangle=yes --download-tetgen=yes >> >> --download-ctetgen=yes --download-p4est=yes --download-zlib=yes >> >> --download-c2html=yes --download-eigen=yes --download-pragmatic=yes >> >> --with-hdf5-dir=/usr/local/Cellar/hdf5/1.10.5_1 >> >> --with-cmake-dir=/usr/local/Cellar/cmake/3.15.3[0]PETSC ERROR: #1 >> >> PetscObjectGetFortranCallback() line 258 in >> >> >> /Users/tbridel/Documents/1-CODES/04-PETSC/src/sys/objects/inherit.c[0]PETSC >> >> ERROR: #2 ourbocofunc() line 141 in >> >> >> /Users/tbridel/Documents/1-CODES/59-EULERIAN3D/sources/petsc_wrapping/wrapper_petsc.c[0]PETSC >> >> ERROR: #3 DMPlexInsertBoundaryValuesRiemann() line 989 in >> >> >> /Users/tbridel/Documents/1-CODES/04-PETSC/src/dm/impls/plex/plexfem.c[0]PETSC >> >> ERROR: #4 DMPlexInsertBoundaryValues_Plex() line 1052 in >> >> >> /Users/tbridel/Documents/1-CODES/04-PETSC/src/dm/impls/plex/plexfem.c[0]PETSC >> >> ERROR: #5 DMPlexInsertBoundaryValues() line 1142 in >> >> >> /Users/tbridel/Documents/1-CODES/04-PETSC/src/dm/impls/plex/plexfem.c[0]PETSC >> >> ERROR: #6 DMPlexComputeResidual_Internal() line 4524 in >> >> >> /Users/tbridel/Documents/1-CODES/04-PETSC/src/dm/impls/plex/plexfem.c[0]PETSC >> >> ERROR: #7 DMPlexTSComputeRHSFunctionFVM() line 74 in >> >> >> /Users/tbridel/Documents/1-CODES/04-PETSC/src/ts/utils/dmplexts.c[0]PETSC >> >> ERROR: #8 ourdmtsrhsfunc() line 186 in >> >> >> /Users/tbridel/Documents/1-CODES/59-EULERIAN3D/sources/petsc_wrapping/wrapper_petsc.c[0]PETSC >> >> ERROR: #9 TSComputeRHSFunction_DMLocal() line 105 in >> >> >> /Users/tbridel/Documents/1-CODES/04-PETSC/src/ts/utils/dmlocalts.c[0]PETSC >> >> ERROR: #10 TSComputeRHSFunction() line 653 in >> >> /Users/tbridel/Documents/1-CODES/04-PETSC/src/ts/interface/ts.c[0]PETSC >> >> ERROR: #11 TSSSPStep_RK_3() line 120 in >> >> >> /Users/tbridel/Documents/1-CODES/04-PETSC/src/ts/impls/explicit/ssp/ssp.c[0]PETSC >> >> ERROR: #12 TSStep_SSP() line 208 in >> >> >> /Users/tbridel/Documents/1-CODES/04-PETSC/src/ts/impls/explicit/ssp/ssp.c[0]PETSC >> >> ERROR: #13 TSStep() line 3757 in >> >> /Users/tbridel/Documents/1-CODES/04-PETSC/src/ts/interface/ts.c[0]PETSC >> >> ERROR: #14 TSSolve() line 4154 in >> >> /Users/tbridel/Documents/1-CODES/04-PETSC/src/ts/interface/ts.c[0]PETSC >> >> ERROR: #15 User provided function() line 0 in User file >> >> >> >> >> >> >> >> >> Second is about the DMProjectFunction wrapper, that I have done so : >> >> >> >> >> >> static PetscErrorCode ourdmprojfunc(PetscInt dim, PetscReal time, >> >> >> PetscReal x[], PetscInt Nf, PetscScalar u[], void *ctx) >> >> >> { >> >> >> PetscObjectUseFortranCallback((DM)ctx, dmprojfunc, >> >> >> >> >> >> >> >> >> (PetscInt*,PetscReal*,PetscReal*,PetscInt*,PetscScalar*,void*,PetscErrorCode*), >> >> >> (&dim,&time,x,&Nf,u,_ctx,&ierr)) >> >> >> } >> >> >> PETSC_EXTERN void dmprojectfunction_(DM *dm, PetscReal *time, >> >> >> void >> >> >> >> >> >> (*func)(PetscInt*,PetscReal*,PetscReal*,PetscInt*,PetscScalar*,void*,PetscErrorCode*), >> >> >> void *ctx, InsertMode *mode, >> Vec X, >> >> >> PetscErrorCode *ierr) >> >> >> { >> >> >> PetscErrorCode (*funcarr[1]) 
(PetscInt dim, PetscReal time, >> >> PetscReal >> >> >> x[], PetscInt Nf, PetscScalar *u, void *ctx); >> >> >> *ierr = PetscObjectSetFortranCallback((PetscObject)*dm, >> >> >> PETSC_FORTRAN_CALLBACK_CLASS, &dmprojfunc, (PetscVoidFunction)func, >> >> ctx); >> >> >> funcarr[0] = ourdmprojfunc; >> >> >> *ierr = DMProjectFunction(*dm, *time, funcarr, &ctx, *mode, X); >> >> >> } >> >> >> >> >> >> >> >> >> This time there is no error because I cannot reach this point in the >> >> >> program, but I am not sure anyways how to write this wrapper, >> especially >> >> >> because of the double pointers that DMProjectFunction takes as >> >> arguments. >> >> >> >> >> >> Does anyone have any idea what could be going wrong with those two >> >> >> wrappers ? >> >> >> >> >> >> Thank you very much in advance !! >> >> >> >> >> >> Thibault >> >> >> >> >> >> Le ven. 18 d?c. 2020 ? 11:02, Thibault Bridel-Bertomeu < >> >> >> thibault.bridelbertomeu at gmail.com> a ?crit : >> >> >> >> >> >>> Aah that is a nice trick, I was getting ready to fork, clone the >> fork >> >> and >> >> >>> redo the work, but that worked fine ! Thank you Barry ! >> >> >>> >> >> >>> The MR will appear in a little while ! >> >> >>> >> >> >>> Thibault >> >> >>> >> >> >>> >> >> >>> Le ven. 18 d?c. 2020 ? 10:16, Barry Smith a >> ?crit : >> >> >>> >> >> >>>> >> >> >>>> Good question. There is a trick to limit the amount of work you >> >> need >> >> >>>> to do with a new fork after you have already made changes with a >> >> PETSc >> >> >>>> clone, but it looks like we do not document this clearly in the >> >> webpages. >> >> >>>> (I couldn't find it). >> >> >>>> >> >> >>>> Yes, you do need to make a fork, but after you have made the >> fork on >> >> >>>> the GitLab website (and have done nothing on your machine) edit >> the >> >> file >> >> >>>> $PETSC_DIR/.git/config for your clone on your machine >> >> >>>> >> >> >>>> Locate the line that has url = git at gitlab.com:petsc/petsc.git >> >> (this >> >> >>>> may have an https at the beginning of the line) >> >> >>>> >> >> >>>> Change this line to point to the fork url instead with git@ not >> >> >>>> https, which will be pretty much the same URL but with your user >> id >> >> instead >> >> >>>> of petsc in the address. Then git push and it will push to your >> fork. >> >> >>>> >> >> >>>> Now you changes will be in your fork and you can make the MR >> from >> >> your >> >> >>>> fork URL on Gitlab. (In other words this editing trick converts >> your >> >> PETSc >> >> >>>> clone on your machine to a PETSc fork). >> >> >>>> >> >> >>>> I hope I have explained this clearly enough it goes smoothly. >> >> >>>> >> >> >>>> Barry >> >> >>>> >> >> >>>> >> >> >>>> >> >> >>>> On Dec 18, 2020, at 3:00 AM, Thibault Bridel-Bertomeu < >> >> >>>> thibault.bridelbertomeu at gmail.com> wrote: >> >> >>>> >> >> >>>> Hello Barry, >> >> >>>> >> >> >>>> I'll start the MR as soon as possible then so that specialists can >> >> >>>> indeed have a look. Do I have to fork PETSc to start a MR or are >> >> PETSc repo >> >> >>>> settings such that can I push a branch from the PETSc clone I got >> ? >> >> >>>> >> >> >>>> Thibault >> >> >>>> >> >> >>>> >> >> >>>> Le mer. 16 d?c. 2020 ? 07:47, Barry Smith a >> ?crit >> >> : >> >> >>>> >> >> >>>>> >> >> >>>>> Thibault, >> >> >>>>> >> >> >>>>> A subdirectory for the example is fine; we have other examples >> that >> >> >>>>> use subdirectories and multiple files. 
>> >> >>>>> >> >> >>>>> Note: even if you don't have something completely working you >> can >> >> >>>>> still make MR and list it as DRAFT request for comments, some >> other >> >> PETSc >> >> >>>>> members who understand the packages you are using and Fortran >> better >> >> than I >> >> >>>>> may be able to help as you develop the code. >> >> >>>>> >> >> >>>>> Barry >> >> >>>>> >> >> >>>>> >> >> >>>>> >> >> >>>>> >> >> >>>>> On Dec 16, 2020, at 12:35 AM, Thibault Bridel-Bertomeu < >> >> >>>>> thibault.bridelbertomeu at gmail.com> wrote: >> >> >>>>> >> >> >>>>> Hello everyone, >> >> >>>>> >> >> >>>>> Thank you Barry for the feedback. >> >> >>>>> OK, yes I'll work up an MR as soon as I have got something >> working. >> >> By >> >> >>>>> the way, does the fortran-version of the example have to be a >> single >> >> file ? >> >> >>>>> If my push contains a directory with several files (different >> >> modules and >> >> >>>>> the main), and the Makefile that goes with it, is that ok ? >> >> >>>>> >> >> >>>>> Thibault Bridel-Bertomeu >> >> >>>>> >> >> >>>>> >> >> >>>>> Le mer. 16 d?c. 2020 ? 04:46, Barry Smith a >> >> ?crit : >> >> >>>>> >> >> >>>>>> >> >> >>>>>> This is great. If you make a branch off of the PETSc git >> >> repository >> >> >>>>>> with these additions and work on ex11 you can make a merge >> request >> >> and we >> >> >>>>>> can run the code easily on all our test systems (for security >> >> reasons one >> >> >>>>>> of use needs to launch the tests from your MR). >> >> >>>>>> https://docs.petsc.org/en/latest/developers/integration/ >> >> >>>>>> >> >> >>>>>> Barry >> >> >>>>>> >> >> >>>>>> >> >> >>>>>> On Dec 15, 2020, at 5:35 AM, Thibault Bridel-Bertomeu < >> >> >>>>>> thibault.bridelbertomeu at gmail.com> wrote: >> >> >>>>>> >> >> >>>>>> Hello everyone, >> >> >>>>>> >> >> >>>>>> So far, I have the wrappers in the files attached to this >> e-mail. I >> >> >>>>>> still do not know if they work properly - at least the code >> >> compiles and >> >> >>>>>> the calls to the wrapped-subroutine do not fail - but I wanted >> to >> >> put this >> >> >>>>>> here in case someone sees something really wrong with it >> already. >> >> >>>>>> >> >> >>>>>> Thank you again for your help, I'll try to post updates of the >> F90 >> >> >>>>>> version of ex11 regularly in this thread. >> >> >>>>>> >> >> >>>>>> Stay safe, >> >> >>>>>> >> >> >>>>>> Thibault Bridel-Bertomeu >> >> >>>>>> >> >> >>>>>> Le dim. 13 d?c. 2020 ? 16:39, Jed Brown a >> ?crit >> >> : >> >> >>>>>> >> >> >>>>>>> Thibault Bridel-Bertomeu >> >> writes: >> >> >>>>>>> >> >> >>>>>>> > Thank you Mark for your answer. >> >> >>>>>>> > >> >> >>>>>>> > I am not sure what you think could be in the setBC1 routine >> ? How >> >> >>>>>>> to make >> >> >>>>>>> > the connection with the PetscDS ? >> >> >>>>>>> > >> >> >>>>>>> > On the other hand, I actually found after a while >> TSMonitorSet >> >> has a >> >> >>>>>>> > fortran wrapper, and it does take as arguments two function >> >> >>>>>>> pointers, so I >> >> >>>>>>> > guess it is possible ? Although I am not sure exactly how to >> play >> >> >>>>>>> with the >> >> >>>>>>> > PetscObjectSetFortranCallback & PetscObjectUseFortranCallback >> >> >>>>>>> macros - >> >> >>>>>>> > could anybody advise please ? >> >> >>>>>>> >> >> >>>>>>> tsmonitorset_ is a good example to follow. In your file, >> create one >> >> >>>>>>> of these static structs with a member for each callback. These >> are >> >> IDs that >> >> >>>>>>> will be used as keys for Fortran callbacks and their contexts. 
>>>>>>>> The salient parts of the file are below.
>>>>>>>>
>>>>>>>> static struct {
>>>>>>>>   PetscFortranCallbackId prestep;
>>>>>>>>   PetscFortranCallbackId poststep;
>>>>>>>>   PetscFortranCallbackId rhsfunction;
>>>>>>>>   PetscFortranCallbackId rhsjacobian;
>>>>>>>>   PetscFortranCallbackId ifunction;
>>>>>>>>   PetscFortranCallbackId ijacobian;
>>>>>>>>   PetscFortranCallbackId monitor;
>>>>>>>>   PetscFortranCallbackId mondestroy;
>>>>>>>>   PetscFortranCallbackId transform;
>>>>>>>> #if defined(PETSC_HAVE_F90_2PTR_ARG)
>>>>>>>>   PetscFortranCallbackId function_pgiptr;
>>>>>>>> #endif
>>>>>>>> } _cb;
>>>>>>>>
>>>>>>>> /*
>>>>>>>>    Note ctx is the same as ts so we need to get the Fortran context
>>>>>>>>    out of the TS; this gets put in _ctx using the callback ID
>>>>>>>> */
>>>>>>>> static PetscErrorCode ourmonitor(TS ts,PetscInt i,PetscReal d,Vec v,void *ctx)
>>>>>>>> {
>>>>>>>>   PetscObjectUseFortranCallback(ts,_cb.monitor,(TS*,PetscInt*,PetscReal*,Vec*,void*,PetscErrorCode*),(&ts,&i,&d,&v,_ctx,&ierr));
>>>>>>>> }
>>>>>>>>
>>>>>>>> Then follow as in tsmonitorset_, which sets two callbacks.
>>>>>>>>
>>>>>>>> PETSC_EXTERN void tsmonitorset_(TS *ts,void (*func)(TS*,PetscInt*,PetscReal*,Vec*,void*,PetscErrorCode*),void *mctx,void (*d)(void*,PetscErrorCode*),PetscErrorCode *ierr)
>>>>>>>> {
>>>>>>>>   CHKFORTRANNULLFUNCTION(d);
>>>>>>>>   if ((PetscVoidFunction)func == (PetscVoidFunction)tsmonitordefault_) {
>>>>>>>>     *ierr = TSMonitorSet(*ts,(PetscErrorCode (*)(TS,PetscInt,PetscReal,Vec,void*))TSMonitorDefault,*(PetscViewerAndFormat**)mctx,(PetscErrorCode (*)(void**))PetscViewerAndFormatDestroy);
>>>>>>>>   } else {
>>>>>>>>     *ierr = PetscObjectSetFortranCallback((PetscObject)*ts,PETSC_FORTRAN_CALLBACK_CLASS,&_cb.monitor,(PetscVoidFunction)func,mctx);
>>>>>>>>     *ierr = PetscObjectSetFortranCallback((PetscObject)*ts,PETSC_FORTRAN_CALLBACK_CLASS,&_cb.mondestroy,(PetscVoidFunction)d,mctx);
>>>>>>>>     *ierr = TSMonitorSet(*ts,ourmonitor,*ts,ourmonitordestroy);
>>>>>>>>   }
>>>>>>>> }

--
Thibault Bridel-Bertomeu
Eng, MSc, PhD
Research Engineer
CEA/CESTA
33114 LE BARP
Tel.: (+33)557046924
Mob.: (+33)611025322
Mail: thibault.bridelbertomeu at gmail.com
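For the DMProjectFunction wrapper discussed in this thread, a minimal, untested sketch along the same lines might look as follows. It only illustrates the pattern: the callback ID name (_cb.projfunc), the trampoline ourdmprojfunc, and the trick of passing the DM itself as the per-function context are assumptions modeled on the tsmonitorset_ example above, not the actual PETSc sources.

static struct {
  PetscFortranCallbackId projfunc;
} _cb;

/* Trampoline with the signature DMProjectFunction() expects; ctx is the DM
   itself, so the Fortran callback and its user context can be looked up on
   it, exactly as ourmonitor() does with the TS above. */
static PetscErrorCode ourdmprojfunc(PetscInt dim, PetscReal time, const PetscReal x[], PetscInt Nf, PetscScalar *u, void *ctx)
{
  PetscObjectUseFortranCallback((DM)ctx, _cb.projfunc,
    (PetscInt*, PetscReal*, const PetscReal*, PetscInt*, PetscScalar*, void*, PetscErrorCode*),
    (&dim, &time, x, &Nf, u, _ctx, &ierr));
}

PETSC_EXTERN void dmprojectfunction_(DM *dm, PetscReal *time,
  void (*func)(PetscInt*, PetscReal*, const PetscReal*, PetscInt*, PetscScalar*, void*, PetscErrorCode*),
  void *ctx, InsertMode *mode, Vec *X, PetscErrorCode *ierr)
{
  /* DMProjectFunction() takes arrays of function pointers and contexts,
     one entry per field; a single field is assumed here for simplicity. */
  PetscErrorCode (*funcarr[1])(PetscInt, PetscReal, const PetscReal[], PetscInt, PetscScalar*, void*);
  void *ctxarr[1];

  *ierr = PetscObjectSetFortranCallback((PetscObject)*dm, PETSC_FORTRAN_CALLBACK_CLASS, &_cb.projfunc, (PetscVoidFunction)func, ctx);
  if (*ierr) return;
  funcarr[0] = ourdmprojfunc;
  ctxarr[0]  = *dm;  /* pass the DM, not the user context, so the trampoline can find the callback */
  *ierr = DMProjectFunction(*dm, *time, funcarr, ctxarr, *mode, *X);
}

Passing *dm through the context array mirrors Jed's note above that ctx is the object itself; the user's Fortran context is recovered inside the macro as _ctx.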
From salazardetro1 at llnl.gov Tue Dec 29 11:27:23 2020
From: salazardetro1 at llnl.gov (Salazar De Troya, Miguel)
Date: Tue, 29 Dec 2020 17:27:23 +0000
Subject: [petsc-users] Calculating adjoint of more than one cost function separately
In-Reply-To: <92E53E99-7596-4C9C-A0D8-23350F11EBF5@anl.gov>
References: <4EB04688-BF1C-4464-A040-CA2FEDE7968C@llnl.gov> <6C0BF724-117E-4983-A4FA-0E8D437E4A9D@anl.gov> <8F46B598-8E78-410D-B682-5A531AFB0A08@llnl.gov> <92E53E99-7596-4C9C-A0D8-23350F11EBF5@anl.gov>
Message-ID:

Hi Hong,

I wanted to have separate calls to TSAdjointSolve() for each cost functional just for design purposes (separation of concerns). In pyadjoint, there is the ReducedFunctional object that encapsulates the functionality of a single cost functional and its derivative. Now I understand that there is a very compelling reason to calculate all cost functional gradients together (saving on checkpoint loads). Thanks for clarifying that. I will work with that in mind from now on.

Best,
Miguel

From: "Zhang, Hong"
Date: Monday, December 28, 2020 at 8:43 PM
To: "Salazar De Troya, Miguel"
Cc: "Salazar De Troya, Miguel via petsc-users"
Subject: Re: [petsc-users] Calculating adjoint of more than one cost function separately

On Dec 28, 2020, at 9:31 PM, Salazar De Troya, Miguel wrote:

> Hello,
>
> Thanks for your response, Hong. I see that all cost functionals are evaluated in a single backward run.

All gradients, not necessarily the cost functionals.

> However, I want to do it separately. I want to isolate the evaluation of the gradients for each cost functional.

What is the motivation for doing multiple TSAdjointSolve() calls in your case? Note that evaluating the gradients in one call is more efficient, because you do not have to load the same checkpoints multiple times.

> Can you please elaborate on how to reuse the trajectory for multiple calls? Specifically, how do I set the trajectory back to the end so I can call TSAdjointSolve() again?

This is the last thing you want to do. Before each adjoint run, you can reset the TS into the same state as when the forward run ended by specifying the final time, the step size and the step number. You will be limited to using disk (the default option) for checkpointing. Here is an example modified from ex20adj.c:

diff --git a/src/ts/tutorials/ex20adj.c b/src/ts/tutorials/ex20adj.c
index 8ca9e0b7ba..e185bc4721 100644
--- a/src/ts/tutorials/ex20adj.c
+++ b/src/ts/tutorials/ex20adj.c
@@ -277,6 +277,10 @@ int main(int argc,char **argv)
   ierr = TSGetSolveTime(ts,&user.ftime);CHKERRQ(ierr);
   ierr = TSGetStepNumber(ts,&user.steps);CHKERRQ(ierr);

+  for (PetscInt iter=1; iter<3; iter++) {
+  ierr = TSSetTime(ts,user.ftime);CHKERRQ(ierr);
+  ierr = TSSetTimeStep(ts,0.001);CHKERRQ(ierr);
+  ierr = TSSetStepNumber(ts,user.steps);CHKERRQ(ierr);
   /* - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
      Adjoint model starts here
      - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - */
@@ -321,7 +325,7 @@ int main(int argc,char **argv)
   ierr = VecRestoreArray(user.mup[1],&x_ptr);CHKERRQ(ierr);
   ierr = VecRestoreArray(user.lambda[1],&y_ptr);CHKERRQ(ierr);
   ierr = PetscPrintf(PETSC_COMM_WORLD,"\n sensitivity wrt parameters: d[z(tf)]/d[mu]\n%g\n",(double)PetscRealPart(derp));CHKERRQ(ierr);
-
+  }
   /* - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
      Free work space. All PETSc objects should be destroyed when they are no longer needed.
      - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - */

Hong (Mr.)
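Outside diff form, the rewind pattern Hong sketches might read as below; the variable names and the 0.001 step size are illustrative and must match the values at the end of your own forward run, and disk checkpointing (the default) is assumed.

/* Restore the TS to its end-of-forward-run state so that TSAdjointSolve()
   can be called more than once. */
PetscReal ftime;
PetscInt  steps, i;
ierr = TSGetSolveTime(ts, &ftime);CHKERRQ(ierr);
ierr = TSGetStepNumber(ts, &steps);CHKERRQ(ierr);
for (i = 0; i < 2; i++) {                         /* one pass per adjoint solve */
  ierr = TSSetTime(ts, ftime);CHKERRQ(ierr);
  ierr = TSSetTimeStep(ts, 0.001);CHKERRQ(ierr);  /* last forward step size (illustrative) */
  ierr = TSSetStepNumber(ts, steps);CHKERRQ(ierr);
  /* select the cost gradients for this pass via TSSetCostGradients(), then */
  ierr = TSAdjointSolve(ts);CHKERRQ(ierr);
}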
> Miguel

From: "Zhang, Hong"
Date: Monday, December 28, 2020 at 6:16 PM
To: "Salazar De Troya, Miguel"
Cc: "Salazar De Troya, Miguel via petsc-users"
Subject: Re: [petsc-users] Calculating adjoint of more than one cost function separately

On Dec 27, 2020, at 5:01 PM, Salazar De Troya, Miguel via petsc-users wrote:

> Hello,
>
> I am interested in calculating the gradients of an optimization problem with one goal and one constraint function, both of which need TSAdjoint for their adjoints. I'd like to call each of their adjoints in different calls, but it does not seem to be possible without making compromises.

If you are calculating the derivatives with respect to the same set of parameters, the adjoints of all cost functionals can be done with a single backward run.

> For instance, one could set TSCreateQuadratureTS() and TSSetCostGradients() with different quadratures (and their gradients) for each adjoint call (one at a time). This would evaluate the cost functions in the backward run, though, whereas one typically computes the cost functions in a different routine than the adjoint call (as in line-search evaluations).

The second argument of TSCreateQuadratureTS() allows you to choose whether the quadrature is evaluated in the forward run or in the backward run. The choice typically depends on the optimization algorithm. Some optimization algorithms may expect users to provide the objective function and its gradient as a bundle; in this case, the choice does not make a difference. Some other algorithms may occasionally evaluate the objective function without evaluating its gradient; then evaluating the quadrature in the forward run is definitely the better choice.

> One could also set TSCreateQuadratureTS() with the goal and the constraint functions to be evaluated in the forward run (as typically done when computing the cost function). The problem would be that the adjoint call now requires two sets of gradients for TSSetCostGradients(), and their adjoints are calculated together, costing twice as much if your routines for the cost and the constraint gradients are separate.

You can put the two sets of gradients in vector arrays and pass them to TSSetCostGradients() together. Only one call to TSAdjointSolve() is needed. See the example src/ts/tutorials/ex20adj.c, where we have two independent cost functionals, and their adjoints correspond to lambda[0]/mup[0] and lambda[1]/mup[1] respectively. After performing a TSAdjointSolve(), you get the gradients for both cost functionals.

> The only solution I can think of is to set TSCreateQuadratureTS() with both the goal and constraint functions in the forward run. Then, in the adjoint calls, reset TSCreateQuadratureTS() with just the cost function I am interested in (either the goal or the constraint) and set just a single TSSetCostGradients(). Will this work? Are there better alternatives?

TSCreateQuadratureTS() is needed only when you have integral terms in the cost functionals. It has nothing to do with the procedure for computing the adjoints of multiple cost functionals simultaneously. Do you have integrals in both the goal and the constraint? If so, you can create one quadrature TS and evaluate both integrals together. For example, you may have r[0] (the first element of the output vector in your cost integrand) for the goal and r[1] for the constraint. Just be careful that the adjoint variables (the arrays lambda[]/mup[]) are organized in the same order.
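As a rough sketch of what Hong describes, one quadrature TS can carry both integrals, with r[0] for the goal and r[1] for the constraint; the integrand expressions and names below are purely illustrative.

/* Combined cost integrand: R has one entry per cost functional. */
static PetscErrorCode CostIntegrand(TS ts, PetscReal t, Vec U, Vec R, void *ctx)
{
  const PetscScalar *u;
  PetscScalar       *r;
  PetscErrorCode    ierr;

  ierr = VecGetArrayRead(U, &u);CHKERRQ(ierr);
  ierr = VecGetArray(R, &r);CHKERRQ(ierr);
  r[0] = u[0]*u[0];   /* goal integrand (illustrative) */
  r[1] = u[0]*u[1];   /* constraint integrand (illustrative) */
  ierr = VecRestoreArray(R, &r);CHKERRQ(ierr);
  ierr = VecRestoreArrayRead(U, &u);CHKERRQ(ierr);
  return 0;
}

/* In main(): the second argument chooses forward (PETSC_TRUE) or backward
   (PETSC_FALSE) evaluation of the quadrature, as discussed above. */
TS quadts;
ierr = TSCreateQuadratureTS(ts, PETSC_TRUE, &quadts);CHKERRQ(ierr);
ierr = TSSetRHSFunction(quadts, NULL, CostIntegrand, NULL);CHKERRQ(ierr);

The adjoint vectors passed to TSSetCostGradients() then follow the same ordering: lambda[0]/mup[0] for the goal, lambda[1]/mup[1] for the constraint.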
> Even if successful, there is the problem that the trajectory goes back to the beginning when we perform a TSAdjointSolve() call. Subsequent calls to TSAdjointSolve() (for instance, for another cost function) are invalid because the trajectory is not set at the end of the simulation. One needs to run the forward problem to bring it back to the end. Is there a quick way to set the trajectory state to the last time step without having to run the forward problem? I am attaching an example to illustrate this issue. One can uncomment lines 120-122 to obtain the right value of the derivative.

Most likely you need only one call to TSAdjointSolve(). Reusing the trajectory for multiple calls is also doable, but I doubt you will need it.

Hong (Mr.)

> Thanks
> Miguel
>
> Miguel A. Salazar de Troya
> Postdoctoral Researcher, Lawrence Livermore National Laboratory
> B141
> Rm: 1085-5
> Ph: 1(925) 422-6411

From cebau.mail at gmail.com Tue Dec 29 12:32:18 2020
From: cebau.mail at gmail.com (C B)
Date: Tue, 29 Dec 2020 12:32:18 -0600
Subject: [petsc-users] Files saved with PetscViewerBinaryOpen - Binary file format ?
Message-ID:

Hello PETSc community!

I would like to read, in another code, a matrix / rhs / solution saved by PETSc. I am dealing with large matrices, and for precision I would like to use the binary format.

To save a matrix I am using the following lines (please let me know if I should make any changes); I am using an old version of PETSc, 2.1.5, for backward compatibility.

ierr = PetscViewerBinaryOpen(PETSC_COMM_WORLD,filename,PETSC_BINARY_CREATE,&petsc_view);CHKERRQ(ierr);
ierr = VecView(solut,petsc_view);CHKERRQ(ierr);
ierr = PetscViewerDestroy(petsc_view);CHKERRQ(ierr);

And to save a vector:

ierr = PetscViewerBinaryOpen(PETSC_COMM_WORLD, filename, PETSC_BINARY_CREATE, &petsc_view);CHKERRQ(ierr);
ierr = VecView(solut, petsc_view);CHKERRQ(ierr);
ierr = PetscViewerDestroy(petsc_view);CHKERRQ(ierr);

The files are saved without any problems, and now I need to write the code to read the binary files. Please, would anyone point me to the format of these binary files and/or a code snippet to take as an example?

Thank you very much in advance!
Cheers

From zhuzhaoni2017 at hnu.edu.cn Tue Dec 29 02:51:44 2020
From: zhuzhaoni2017 at hnu.edu.cn (Zhaoni Zhu)
Date: Tue, 29 Dec 2020 16:51:44 +0800
Subject: [petsc-users] Error occurred when version change
Message-ID:

Dear Sir or Madam:

Hello, I am a new PETSc learner. There is some confusion when I use different versions of PETSc. For some reasons, I have to change my code from version 3.6.3 to 3.4.5. Newton's method in version 3.6.3 converged successfully with the time step size dt=100, while with the same dt=100 and almost the same code, Newton's method in version 3.4.5 failed to converge. The debug test indicated that there may be something wrong during the line search process. I have tried to search for the changes between the different versions but failed. Could you please give me some suggestions? Thank you so much.

Best regards,

Zhaoni Zhu

A non-text attachment was scrubbed...
Name: Error_report.jpg
Type: application/octet-stream
Size: 185821 bytes

From hzhang at mcs.anl.gov Tue Dec 29 12:54:27 2020
From: hzhang at mcs.anl.gov (Zhang, Hong)
Date: Tue, 29 Dec 2020 18:54:27 +0000
Subject: [petsc-users] Error occurred when version change
Message-ID:

Zhaoni,

petsc-3.4 - Public Release, May 13, 2013 - i.e., it is a version released 7 years ago. Why do you have to use such an old version?

Hong

From bsmith at petsc.dev Tue Dec 29 13:12:31 2020
From: bsmith at petsc.dev (Barry Smith)
Date: Tue, 29 Dec 2020 13:12:31 -0600
Subject: [petsc-users] Error occurred when version change
Message-ID: <881D8FA1-FECC-4053-A671-71D87CD8B0B1@petsc.dev>

It looks like both converged in your image, just at different rates? Note that the two sides used different numbers of linear iterations; my guess is that some default behavior of the linear solvers changed between those two ancient releases. First make sure both are using identical linear solvers. Then run both with -pc_type lu: do they then have the same SNES convergence?

Barry

From bsmith at petsc.dev Tue Dec 29 13:15:44 2020
From: bsmith at petsc.dev (Barry Smith)
Date: Tue, 29 Dec 2020 13:15:44 -0600
Subject: [petsc-users] Files saved with PetscViewerBinaryOpen - Binary file format ?
Message-ID: <68191AFF-AE03-4EF7-8F0C-AFCCB017541C@petsc.dev>

https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Vec/VecLoad.html and https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatLoad.html define the simple format.

There are loaders for MATLAB and Python distributed with PETSc, so for those two languages you do not need to write your own custom loaders.

Barry
From cebau.mail at gmail.com Tue Dec 29 15:07:40 2020
From: cebau.mail at gmail.com (C B)
Date: Tue, 29 Dec 2020 15:07:40 -0600
Subject: [petsc-users] Files saved with PetscViewerBinaryOpen - Binary file format ?
In-Reply-To: <68191AFF-AE03-4EF7-8F0C-AFCCB017541C@petsc.dev>
References: <68191AFF-AE03-4EF7-8F0C-AFCCB017541C@petsc.dev>
Message-ID:

Barry,

Thank you very much for your quick response. The two links that you included clearly describe the formats. I have to write a reader for another code, so if anyone has done this in a small code, any code snippet would be appreciated.

Cheers,
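For what it's worth, below is a minimal, untested sketch of such a reader for a Vec file in plain C, based only on the format described at the links above. PETSc writes the files big-endian regardless of platform, and 32-bit PetscInt is assumed here; the class id is 1211214 for Vec (and 1211216 for Mat) in recent PETSc, but this is worth verifying against the first integer of your own files, especially with a release as old as 2.1.5.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <stdint.h>

/* Read a big-endian 32-bit integer. */
static int32_t read_be_int32(FILE *f)
{
  unsigned char b[4];
  if (fread(b, 1, 4, f) != 4) { perror("fread"); exit(1); }
  return (int32_t)(((uint32_t)b[0] << 24) | (b[1] << 16) | (b[2] << 8) | b[3]);
}

/* Read a big-endian IEEE 64-bit double. */
static double read_be_double(FILE *f)
{
  unsigned char b[8];
  uint64_t      v = 0;
  double        d;
  int           i;
  if (fread(b, 1, 8, f) != 8) { perror("fread"); exit(1); }
  for (i = 0; i < 8; i++) v = (v << 8) | b[i];
  memcpy(&d, &v, sizeof d);  /* reinterpret the 64 bits as a double */
  return d;
}

int main(int argc, char **argv)
{
  FILE   *f;
  int32_t classid, n, i;
  double *vals;

  if (argc < 2) { fprintf(stderr, "usage: %s vecfile\n", argv[0]); return 1; }
  f = fopen(argv[1], "rb");
  if (!f) { perror("fopen"); return 1; }

  classid = read_be_int32(f);  /* should be VEC_FILE_CLASSID, e.g. 1211214 */
  n       = read_be_int32(f);  /* global vector length */
  vals    = malloc((size_t)n * sizeof *vals);
  for (i = 0; i < n; i++) vals[i] = read_be_double(f);

  printf("classid %d, length %d, first entry %g\n", classid, n, vals[0]);
  free(vals);
  fclose(f);
  return 0;
}

A Mat file follows the same conventions but with a longer header (class id, rows, columns, total number of nonzeros, then the per-row nonzero counts, the column indices, and finally the values), per the MatLoad page linked above.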
From zhuzhaoni2017 at hnu.edu.cn Tue Dec 29 20:41:00 2020
From: zhuzhaoni2017 at hnu.edu.cn (Zhaoni Zhu)
Date: Wed, 30 Dec 2020 10:41:00 +0800
Subject: [petsc-users] Error occurred when version change
In-Reply-To: <881D8FA1-FECC-4053-A671-71D87CD8B0B1@petsc.dev>
References: <881D8FA1-FECC-4053-A671-71D87CD8B0B1@petsc.dev>
Message-ID:

Dear Smith,

Thank you so much for your prompt reply! I have confirmed that the parameters of the linear solver in these two versions are the same, and I also ran the code with -pc_type lu. It fails to converge after the first two time steps, showing "line search fails" (I have printed the running results from the 3.6.3 and 3.4.5 versions in the attached files). I wonder if there are some internal functions or parameters that changed between versions, leading to the convergence failure in version 3.4.5?

Best regards,
Zhaoni Zhu

------------------ Original ------------------
From: "Barry Smith"

A non-text attachment was scrubbed...
Name: 3.4.5.docx
Type: application/octet-stream
Size: 15000 bytes

A non-text attachment was scrubbed...
Name: 3.6.3.docx
Type: application/octet-stream
Size: 14499 bytes

From bsmith at petsc.dev Tue Dec 29 21:13:25 2020
From: bsmith at petsc.dev (Barry Smith)
Date: Tue, 29 Dec 2020 21:13:25 -0600
Subject: [petsc-users] Error occurred when version change
In-Reply-To:
References: <881D8FA1-FECC-4053-A671-71D87CD8B0B1@petsc.dev>
Message-ID:

So the really old version fails but a slightly newer, still really old version succeeds? I'm sorry, but a lot has changed since that time and we really have no way of remembering what could have changed around then that might cause this.

Why can't you use the 3.6.3 version? Whatever code you are using, you could upgrade it to use 3.6.3 instead of 3.4.5. I actually urge you to upgrade to 3.14, our latest release; if it is C code, the upgrade should be relatively straightforward; if it is Fortran code, then yes, it will be a bit more painful, but it shouldn't take more than a day. And once that painful day is over, all the pain of working with a really old version of PETSc will be gone for good.

Barry
From knepley at gmail.com Wed Dec 30 08:44:13 2020
From: knepley at gmail.com (Matthew Knepley)
Date: Wed, 30 Dec 2020 09:44:13 -0500
Subject: [petsc-users] Error occurred when version change
In-Reply-To:
References: <881D8FA1-FECC-4053-A671-71D87CD8B0B1@petsc.dev>
Message-ID:

On Tue, Dec 29, 2020 at 10:17 PM Barry Smith wrote:

I agree with all this. However, we cannot say anything, even about the prior version, without the output of

-snes_view -snes_monitor -snes_linesearch_monitor -ksp_monitor_true_residual

for each run.

Thanks,

Matt
--
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/
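For reference, the diagnostics Matt asks for might be collected from each PETSc build with a command along these lines, where ./app stands in for your own executable; all of the options are standard PETSc options, and Barry's -pc_type lu suggestion is included to make the two linear solves identical:

./app -snes_view -snes_monitor -snes_linesearch_monitor -ksp_monitor_true_residual -pc_type lu > run-3.4.5.log 2>&1

Running this once per build and comparing the logs should show where the two SNES solves first diverge.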