From mmolinos at us.es  Mon Jul 1 00:43:30 2024
From: mmolinos at us.es (MIGUEL MOLINOS PEREZ)
Date: Mon, 1 Jul 2024 05:43:30 +0000
Subject: [petsc-users] Doubt about TSMonitorSolutionVTK
In-Reply-To: 
References: <2067D58E-F041-429F-8ABE-B19DD9F733C2@petsc.dev>
Message-ID: <5A92C3DB-471D-4F32-86AE-FE9B3DD9C4D9@us.es>

Dear Matthew,

Sorry for the late response.

Yes, I get output when I run the example mentioned by Barry.

The output directory should not be an issue, since the exact same configuration works for hdf5 but not for vtk/vts/vtu.

I've been doing some tests and now I think this issue might be related to the fact that the output vector was generated using a SWARM discretization. Is this possible?

Best,
Miguel

On Jun 27, 2024, at 4:59 AM, Matthew Knepley wrote:

Do you get output when you run an example with that option? Is it possible that your current working directory is not what you expect? Maybe try putting in an absolute path.

Thanks,

Matt

On Wed, Jun 26, 2024 at 5:30 PM MIGUEL MOLINOS PEREZ wrote:

Sorry, I did not put petsc-users at mcs.anl.gov in cc on my reply.

Miguel

On Jun 24, 2024, at 6:39 PM, MIGUEL MOLINOS PEREZ wrote:

Thank you Barry,

This is exactly how I did it the first time.

Miguel

On Jun 24, 2024, at 6:37 PM, Barry Smith wrote:

See, for example, the bottom of src/ts/tutorials/ex26.c that uses -ts_monitor_solution_vtk 'foo-%03d.vts'

On Jun 24, 2024, at 8:47 PM, MIGUEL MOLINOS PEREZ wrote:

Dear all,

I want to monitor the results at each iteration of TS using vtk format. To do so, I add the following lines to my Monitor function:

char vts_File_Name[MAXC];
PetscCall(PetscSNPrintf(vts_File_Name, sizeof(vts_File_Name), "./xi-MgHx-hcp-cube-x5-x5-x5-TS-BE-%i.vtu", step));
PetscCall(TSMonitorSolutionVTK(ts, step, time, X, (void*)vts_File_Name));

My script compiles and executes without any warning/error messages. However, no output files are produced at the end of the simulation. I've also tried the option "-ts_monitor_solution_vtk", but I got no results either.

I can't find any similar example on the petsc website and I don't see what I am doing wrong. Could somebody point me in the right direction?

Thanks,
Miguel

--
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From meator.dev at gmail.com  Mon Jul 1 04:21:19 2024
From: meator.dev at gmail.com (meator)
Date: Mon, 1 Jul 2024 11:21:19 +0200
Subject: [petsc-users] Weird handling of compiler flags by the build system
Message-ID: 

Hello, I am currently packaging PETSc and have noticed some peculiar behavior when attempting to override CFLAGS, CXXFLAGS, and FFLAGS.
Firstly, when calling ./configure with CFLAGS="args" as its argument, it seems to completely override the flag detection system in config/BuildSystem/config/compilerOptions.py (and presumably other places). Same holds true for CXXFLAGS and FFLAGS. This is not desirable for me. With the underlying compiler detected as GCC, I would normally get -fvisibility=hidden and a bunch of extra warnings. These all get lost when CFLAGS or CXXFLAGS is overridden. Is there a way to simply append flags instead of overriding them? Losing flags such as -fvisibility=hidden can severely affect the compiled library. Other build systems usually differentiate between mandatory flags that will get included no matter what and overridable flags which may be replaced. It looks like such a system even exists in PETSc, because -fPIC gets included in C flags even when CFLAGS is set. I understand that the situation here is a bit more complex, as PETSc must support a plethora of compilers which use differing flags and compiler wrappers like mpicc are prevalent. I assume that the same issue occurs for FFLAGS. I do not know Fortran, so I cannot evaluate the risks of omitting flags that are included when FFLAGS isn't specified, but I assume that some of the "lost" flags are also of importance. Another issue, which I find more severe, is that the overridden flags get included in cflags_extra, cxxflags_extra, and fflags_extra of the generated pkg-config file. This is highly undesirable because the flags used to build PETSc should not be used for compiling user programs. My package template provides flags like -ffile-prefix-map, which make sense when PETSc is being built in a fake destdir to be packaged but do not make sense for user programs. The pkg-config file generated by PETSc is something I've never seen before. It took me a considerable amount of time to comprehend the extra keys set there, but I now understand that they are used in the sample Makefile and CMake build definition file. I am not 100% certain why this system is in place. Why does the pkg-config file need to provide extra flags and set compilers? I've seen no other pkg-config file which does such things. Is there a way to fix the pkg-config file (apart from manually removing cflags_extra, cxxflags_extra, and fflags_extra from the .pc file)? Thanks in advance -------------- next part -------------- A non-text attachment was scrubbed... Name: OpenPGP_0x1A14CB3464CBE5BF.asc Type: application/pgp-keys Size: 6275 bytes Desc: OpenPGP public key URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: OpenPGP_signature.asc Type: application/pgp-signature Size: 659 bytes Desc: OpenPGP digital signature URL: From knepley at gmail.com Mon Jul 1 07:09:44 2024 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 1 Jul 2024 08:09:44 -0400 Subject: [petsc-users] Doubt about TSMonitorSolutionVTK In-Reply-To: <5A92C3DB-471D-4F32-86AE-FE9B3DD9C4D9@us.es> References: <2067D58E-F041-429F-8ABE-B19DD9F733C2@petsc.dev> <5A92C3DB-471D-4F32-86AE-FE9B3DD9C4D9@us.es> Message-ID: On Mon, Jul 1, 2024 at 1:43?AM MIGUEL MOLINOS PEREZ wrote: > Dear Matthey, > > Sorry for the late response. > > Yes, I get output when I run the example mentioned by Barry. > > The output directory should not be an issue since with the exact same > configuration works for hdf5 but not for vtk/vts/vtu. > > I?ve been doing some tests and now I think this issue might be related to > the fact that the output vector was generated using a SWARM discretization. > Is this possible? 
> Yes, there is no VTK viewer for Swarm. We have been moving away from VTK format, which is bulky and not very expressive, into our own HDF5 and CGNS. When we use HDF5, we have a script to generate an XDMF file, telling Paraview how to view it. I agree that this is annoying. Currently, we are moving toward PyVista, which can read our HDF5 files directly (and also work directly with running PETSc), although this is not done yet. Thanks, Matt > Best, > Miguel > > On Jun 27, 2024, at 4:59?AM, Matthew Knepley wrote: > > Do you get output when you run an example with that option? Is it possible > that your current working directory is not what you expect? Maybe try > putting in an absolute path. > > Thanks, > > Matt > > On Wed, Jun 26, 2024 at 5:30?PM MIGUEL MOLINOS PEREZ > wrote: > >> Sorry, I did not put in cc petsc-users@ mcs. anl. gov my replay. Miguel >> On Jun 24, 2024, at 6: 39 PM, MIGUEL MOLINOS PEREZ >> wrote: Thank you Barry, This is exactly how I did it the first time. Miguel >> On Jun 24, 2024, at 6: 37 >> ZjQcmQRYFpfptBannerStart >> This Message Is From an External Sender >> This message came from outside your organization. >> >> ZjQcmQRYFpfptBannerEnd >> Sorry, I did not put in cc petsc-users at mcs.anl.gov my replay. >> >> Miguel >> >> On Jun 24, 2024, at 6:39?PM, MIGUEL MOLINOS PEREZ wrote: >> >> Thank you Barry, >> >> This is exactly how I did it the first time. >> >> Miguel >> >> On Jun 24, 2024, at 6:37?PM, Barry Smith wrote: >> >> >> See, for example, the bottom of src/ts/tutorials/ex26.c that uses >> -ts_monitor_*solution_vtk* 'foo-%03d.vts' >> >> >> On Jun 24, 2024, at 8:47?PM, MIGUEL MOLINOS PEREZ wrote: >> >> This Message Is From an External Sender >> This message came from outside your organization. >> Dear all, >> >> I want to monitor the results at each iteration of TS using vtk format. >> To do so, I add the following lines to my Monitor function: >> >> char vts_File_Name[MAXC]; >> PetscCall(PetscSNPrintf(vts_File_Name, sizeof(vts_File_Name), >> "./xi-MgHx-hcp-cube-x5-x5-x5-TS-BE-%i.vtu", step)); >> PetscCall(TSMonitorSolutionVTK(ts, step, time, X, (void*)vts_File_Name)); >> >> My script compiles and executes without any sort of warning/error >> messages. However, no output files are produced at the end of the >> simulation. I?ve also tried the option ?-ts_monitor_solution_vtk >> ?, but I got no results as well. >> >> I can?t find any similar example on the petsc website and I don?t see >> what I am doing wrong. Could somebody point me to the right direction? >> >> Thanks, >> Miguel >> >> >> >> >> > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!Z7mz0y2UTXd_x6juHCeo7JisCZjgURW-1JAShrF2hePo3YnESyhFi9psugjCeGNce_91dMHtb2KJEe1KXx1t$ > > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!Z7mz0y2UTXd_x6juHCeo7JisCZjgURW-1JAShrF2hePo3YnESyhFi9psugjCeGNce_91dMHtb2KJEe1KXx1t$ -------------- next part -------------- An HTML attachment was scrubbed... 
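For anyone who lands on this thread with the same problem, here is a minimal sketch of the HDF5-based monitoring that Matt describes above. This is not code from this thread: the monitor name and the file-name pattern are illustrative, and it assumes PETSc was configured with HDF5 support (e.g. --download-hdf5 or --with-hdf5).

#include <petscts.h>
#include <petscviewerhdf5.h>

/* Write the TS solution to one HDF5 file per time step. */
static PetscErrorCode MonitorSolutionHDF5(TS ts, PetscInt step, PetscReal time, Vec X, void *ctx)
{
  char        filename[PETSC_MAX_PATH_LEN];
  PetscViewer viewer;

  PetscFunctionBeginUser;
  /* one file per step, e.g. solution-000.h5, solution-001.h5, ... */
  PetscCall(PetscSNPrintf(filename, sizeof(filename), "solution-%03" PetscInt_FMT ".h5", step));
  PetscCall(PetscViewerHDF5Open(PetscObjectComm((PetscObject)ts), filename, FILE_MODE_WRITE, &viewer));
  PetscCall(VecView(X, viewer));
  PetscCall(PetscViewerDestroy(&viewer));
  PetscFunctionReturn(PETSC_SUCCESS);
}

The monitor is registered once after the TS is created, with PetscCall(TSMonitorSet(ts, MonitorSolutionHDF5, NULL, NULL)). As Matt notes above, Paraview then needs an accompanying XDMF description (or a PyVista workflow) to interpret the HDF5 contents.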
URL: From knepley at gmail.com Mon Jul 1 07:27:20 2024 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 1 Jul 2024 08:27:20 -0400 Subject: [petsc-users] Question regarding naming of fieldsplit splits In-Reply-To: References: Message-ID: On Fri, Jun 28, 2024 at 4:05?AM Blauth, Sebastian < sebastian.blauth at itwm.fraunhofer.de> wrote: > Hello everyone, > > > > I have a question regarding the naming convention using PETSc?s > PCFieldsplit. I have been following > https://urldefense.us/v3/__https://lists.mcs.anl.gov/pipermail/petsc-users/2019-January/037262.html__;!!G_uCfscf7eWS!bGTaf64ibyuvBn-Qy-UQpxjLdOqFq44f6kBHzEDsXKc0htzQNw1MabtoK463uwb95Pupw_BcLMNwOHdcKldy$ > to create a DMShell with FEniCS in order to customize PCFieldsplit for my > application. > > I am using the following options, which work nicely for me: > > > > -ksp_type fgmres > > -pc_type fieldsplit > > -pc_fieldsplit_0_fields 0, 1 > > -pc_fieldsplit_1_fields 2 > > -pc_fieldsplit_type additive > > -fieldsplit_0_ksp_type fgmres > > -fieldsplit_0_pc_type fieldsplit > > -fieldsplit_0_pc_fieldsplit_type schur > > -fieldsplit_0_pc_fieldsplit_schur_fact_type full > > -fieldsplit_0_pc_fieldsplit_schur_precondition selfp > > -fieldsplit_0_fieldsplit_u_ksp_type preonly > > -fieldsplit_0_fieldsplit_u_pc_type lu > > -fieldsplit_0_fieldsplit_p_ksp_type cg > > -fieldsplit_0_fieldsplit_p_ksp_rtol 1e-14 > > -fieldsplit_0_fieldsplit_p_ksp_atol 1e-30 > > -fieldsplit_0_fieldsplit_p_pc_type icc > > -fieldsplit_0_ksp_rtol 1e-14 > > -fieldsplit_0_ksp_atol 1e-30 > > -fieldsplit_0_ksp_monitor_true_residual > > -fieldsplit_c_ksp_type preonly > > -fieldsplit_c_pc_type lu > > -ksp_view > By default, we use the field names, but you can prevent this by specifying the fields by hand, so -fieldsplit_0_pc_fieldsplit_0_fields 0 -fieldsplit_0_pc_fieldsplit_1_fields 1 should remove the 'u' and 'p' fieldnames. It is somewhat hacky, but I think easier to remember than some extra option. Thanks, Matt > Note that this is just an academic example (sorry for the low solver > tolerances) to test the approach, consisting of a Stokes equation and some > concentration equation (which is not even coupled to Stokes, just for > testing). > > Completely analogous to > https://urldefense.us/v3/__https://lists.mcs.anl.gov/pipermail/petsc-users/2019-January/037262.html__;!!G_uCfscf7eWS!bGTaf64ibyuvBn-Qy-UQpxjLdOqFq44f6kBHzEDsXKc0htzQNw1MabtoK463uwb95Pupw_BcLMNwOHdcKldy$ , > I translate my IS?s to a PETSc Section, which is then supplied to a DMShell > and assigned to a KSP. I am not so familiar with the code or how / why this > works, but it seems to do so perfectly. I name my sections with petsc4py > using > > > > section.setFieldName(0, "u") > > section.setFieldName(1, "p") > > section.setFieldName(2, "c") > > > > However, this is also reflected in the way I can access the fieldsplit > options from the command line. My question is: Is there any way of not > using the FieldNames specified in python but use the index of the field as > defined with ?-pc_fieldsplit_0_fields 0, 1? and ?-pc_fieldsplit_1_fields > 2?, i.e., instead of the prefix ?fieldsplit_0_fieldsplit_u? I want to write > ?fieldsplit_0_fieldsplit_0?, instead of ?fieldsplit_0_fieldsplit_p? I want > to use ?fieldsplit_0_fieldsplit_1?, and instead of ?fieldsplit_c? I want to > use ?fieldsplit_1?. 
Just changing the names of the fields to > > > > section.setFieldName(0, "0") > > section.setFieldName(1, "1") > > section.setFieldName(2, "2") > > > > does obviously not work as expected, as it works for velocity and > pressure, but not for the concentration ? the prefix there is then > ?fieldsplit_2? and not ?fieldsplit_1?. In the docs, I have found > https://urldefense.us/v3/__https://petsc.org/main/manualpages/PC/PCFieldSplitSetFields/__;!!G_uCfscf7eWS!bGTaf64ibyuvBn-Qy-UQpxjLdOqFq44f6kBHzEDsXKc0htzQNw1MabtoK463uwb95Pupw_BcLMNwOD6iRa_k$ which seems > to suggest that the fieldname can potentially be supplied, but I don?t see > how to do so from the command line. Also, for the sake of completeness, I > attach the output of the solve with ?-ksp_view? below. > > > > Thanks a lot in advance and best regards, > > Sebastian > > > > > > The output of ksp_view is the following: > > KSP Object: 1 MPI processes > > type: fgmres > > restart=30, using Classical (unmodified) Gram-Schmidt > Orthogonalization with no iterative refinement > > happy breakdown tolerance 1e-30 > > maximum iterations=10000, initial guess is zero > > tolerances: relative=1e-05, absolute=1e-11, divergence=10000. > > right preconditioning > > using UNPRECONDITIONED norm type for convergence test > > PC Object: 1 MPI processes > > type: fieldsplit > > FieldSplit with ADDITIVE composition: total splits = 2 > > Solver info for each split is in the following KSP objects: > > Split number 0 Defined by IS > > KSP Object: (fieldsplit_0_) 1 MPI processes > > type: fgmres > > restart=30, using Classical (unmodified) Gram-Schmidt > Orthogonalization with no iterative refinement > > happy breakdown tolerance 1e-30 > > maximum iterations=10000, initial guess is zero > > tolerances: relative=1e-14, absolute=1e-30, divergence=10000. > > right preconditioning > > using UNPRECONDITIONED norm type for convergence test > > PC Object: (fieldsplit_0_) 1 MPI processes > > type: fieldsplit > > FieldSplit with Schur preconditioner, factorization FULL > > Preconditioner for the Schur complement formed from Sp, an assembled > approximation to S, which uses A00's diagonal's inverse > > Split info: > > Split number 0 Defined by IS > > Split number 1 Defined by IS > > KSP solver for A00 block > > KSP Object: (fieldsplit_0_fieldsplit_u_) 1 MPI processes > > type: preonly > > maximum iterations=10000, initial guess is zero > > tolerances: relative=1e-05, absolute=1e-50, divergence=10000. 
> > left preconditioning > > using NONE norm type for convergence test > > PC Object: (fieldsplit_0_fieldsplit_u_) 1 MPI processes > > type: lu > > out-of-place factorization > > tolerance for zero pivot 2.22045e-14 > > matrix ordering: nd > > factor fill ratio given 5., needed 3.92639 > > Factored matrix follows: > > Mat Object: 1 MPI processes > > type: seqaij > > rows=4290, cols=4290 > > package used to perform factorization: petsc > > total: nonzeros=375944, allocated nonzeros=375944 > > using I-node routines: found 2548 nodes, limit used is > 5 > > linear system matrix = precond matrix: > > Mat Object: (fieldsplit_0_fieldsplit_u_) 1 MPI processes > > type: seqaij > > rows=4290, cols=4290 > > total: nonzeros=95748, allocated nonzeros=95748 > > total number of mallocs used during MatSetValues calls=0 > > using I-node routines: found 3287 nodes, limit used is 5 > > KSP solver for S = A11 - A10 inv(A00) A01 > > KSP Object: (fieldsplit_0_fieldsplit_p_) 1 MPI processes > > type: cg > > maximum iterations=10000, initial guess is zero > > tolerances: relative=1e-14, absolute=1e-30, divergence=10000. > > left preconditioning > > using PRECONDITIONED norm type for convergence test > > PC Object: (fieldsplit_0_fieldsplit_p_) 1 MPI processes > > type: icc > > out-of-place factorization > > 0 levels of fill > > tolerance for zero pivot 2.22045e-14 > > using Manteuffel shift [POSITIVE_DEFINITE] > > matrix ordering: natural > > factor fill ratio given 1., needed 1. > > Factored matrix follows: > > Mat Object: 1 MPI processes > > type: seqsbaij > > rows=561, cols=561 > > package used to perform factorization: petsc > > total: nonzeros=5120, allocated nonzeros=5120 > > block size is 1 > > linear system matrix followed by preconditioner matrix: > > Mat Object: (fieldsplit_0_fieldsplit_p_) 1 MPI processes > > type: schurcomplement > > rows=561, cols=561 > > Schur complement A11 - A10 inv(A00) A01 > > A11 > > Mat Object: (fieldsplit_0_fieldsplit_p_) 1 MPI processes > > type: seqaij > > rows=561, cols=561 > > total: nonzeros=3729, allocated nonzeros=3729 > > total number of mallocs used during MatSetValues calls=0 > > not using I-node routines > > A10 > > Mat Object: 1 MPI processes > > type: seqaij > > rows=561, cols=4290 > > total: nonzeros=19938, allocated nonzeros=19938 > > total number of mallocs used during MatSetValues calls=0 > > not using I-node routines > > KSP of A00 > > KSP Object: (fieldsplit_0_fieldsplit_u_) 1 MPI processes > > type: preonly > > maximum iterations=10000, initial guess is zero > > tolerances: relative=1e-05, absolute=1e-50, > divergence=10000. 
> > left preconditioning > > using NONE norm type for convergence test > > PC Object: (fieldsplit_0_fieldsplit_u_) 1 MPI processes > > type: lu > > out-of-place factorization > > tolerance for zero pivot 2.22045e-14 > > matrix ordering: nd > > factor fill ratio given 5., needed 3.92639 > > Factored matrix follows: > > Mat Object: 1 MPI processes > > type: seqaij > > rows=4290, cols=4290 > > package used to perform factorization: petsc > > total: nonzeros=375944, allocated nonzeros=375944 > > using I-node routines: found 2548 nodes, limit > used is 5 > > linear system matrix = precond matrix: > > Mat Object: (fieldsplit_0_fieldsplit_u_) 1 MPI processes > > type: seqaij > > rows=4290, cols=4290 > > total: nonzeros=95748, allocated nonzeros=95748 > > total number of mallocs used during MatSetValues > calls=0 > > using I-node routines: found 3287 nodes, limit used > is 5 > > A01 > > Mat Object: 1 MPI processes > > type: seqaij > > rows=4290, cols=561 > > total: nonzeros=19938, allocated nonzeros=19938 > > total number of mallocs used during MatSetValues calls=0 > > using I-node routines: found 3287 nodes, limit used is > 5 > > Mat Object: 1 MPI processes > > type: seqaij > > rows=561, cols=561 > > total: nonzeros=9679, allocated nonzeros=9679 > > total number of mallocs used during MatSetValues calls=0 > > not using I-node routines > > linear system matrix = precond matrix: > > Mat Object: (fieldsplit_0_) 1 MPI processes > > type: seqaij > > rows=4851, cols=4851 > > total: nonzeros=139353, allocated nonzeros=139353 > > total number of mallocs used during MatSetValues calls=0 > > using I-node routines: found 3830 nodes, limit used is 5 > > Split number 1 Defined by IS > > KSP Object: (fieldsplit_c_) 1 MPI processes > > type: preonly > > maximum iterations=10000, initial guess is zero > > tolerances: relative=1e-05, absolute=1e-50, divergence=10000. > > left preconditioning > > using NONE norm type for convergence test > > PC Object: (fieldsplit_c_) 1 MPI processes > > type: lu > > out-of-place factorization > > tolerance for zero pivot 2.22045e-14 > > matrix ordering: nd > > factor fill ratio given 5., needed 4.24323 > > Factored matrix follows: > > Mat Object: 1 MPI processes > > type: seqaij > > rows=561, cols=561 > > package used to perform factorization: petsc > > total: nonzeros=15823, allocated nonzeros=15823 > > not using I-node routines > > linear system matrix = precond matrix: > > Mat Object: (fieldsplit_c_) 1 MPI processes > > type: seqaij > > rows=561, cols=561 > > total: nonzeros=3729, allocated nonzeros=3729 > > total number of mallocs used during MatSetValues calls=0 > > not using I-node routines > > linear system matrix = precond matrix: > > Mat Object: 1 MPI processes > > type: seqaij > > rows=5412, cols=5412 > > total: nonzeros=190416, allocated nonzeros=190416 > > total number of mallocs used during MatSetValues calls=0 > > using I-node routines: found 3833 nodes, limit used is 5 > > > > -- > > Dr. Sebastian Blauth > > Fraunhofer-Institut f?r > > Techno- und Wirtschaftsmathematik ITWM > > Abteilung Transportvorg?nge > > Fraunhofer-Platz 1, 67663 Kaiserslautern > > Telefon: +49 631 31600-4968 > > sebastian.blauth at itwm.fraunhofer.de > > https://urldefense.us/v3/__https://www.itwm.fraunhofer.de__;!!G_uCfscf7eWS!bGTaf64ibyuvBn-Qy-UQpxjLdOqFq44f6kBHzEDsXKc0htzQNw1MabtoK463uwb95Pupw_BcLMNwOFn7-WhZ$ > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. 
-- Norbert Wiener https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!bGTaf64ibyuvBn-Qy-UQpxjLdOqFq44f6kBHzEDsXKc0htzQNw1MabtoK463uwb95Pupw_BcLMNwOHccy8aP$ -------------- next part -------------- An HTML attachment was scrubbed... URL: From sebastian.blauth at itwm.fraunhofer.de Mon Jul 1 08:48:29 2024 From: sebastian.blauth at itwm.fraunhofer.de (Blauth, Sebastian) Date: Mon, 1 Jul 2024 13:48:29 +0000 Subject: [petsc-users] Question regarding naming of fieldsplit splits In-Reply-To: References: Message-ID: Dear Matt, thanks a lot for your help. Unfortunately, for me these extra options do not have any effect, I still get the ?u? and ?p? fieldnames. Also, this would not help me to get rid of the ?c? fieldname ? on that level of the fieldsplit I am basically using your approach already, and still it does show up. The output of the -ksp_view is unchanged, so that I do not attach it here again. Maybe I misunderstood you? Thanks for the help and best regards, Sebastian -- Dr. Sebastian Blauth Fraunhofer-Institut f?r Techno- und Wirtschaftsmathematik ITWM Abteilung Transportvorg?nge Fraunhofer-Platz 1, 67663 Kaiserslautern Telefon: +49 631 31600-4968 sebastian.blauth at itwm.fraunhofer.de https://www.itwm.fraunhofer.de From: Matthew Knepley Sent: Monday, July 1, 2024 2:27 PM To: Blauth, Sebastian Cc: petsc-users at mcs.anl.gov Subject: Re: [petsc-users] Question regarding naming of fieldsplit splits On Fri, Jun 28, 2024 at 4:05?AM Blauth, Sebastian > wrote: Hello everyone, I have a question regarding the naming convention using PETSc?s PCFieldsplit. I have been following https://lists.mcs.anl.gov/pipermail/petsc-users/2019-January/037262.html to create a DMShell with FEniCS in order to customize PCFieldsplit for my application. I am using the following options, which work nicely for me: -ksp_type fgmres -pc_type fieldsplit -pc_fieldsplit_0_fields 0, 1 -pc_fieldsplit_1_fields 2 -pc_fieldsplit_type additive -fieldsplit_0_ksp_type fgmres -fieldsplit_0_pc_type fieldsplit -fieldsplit_0_pc_fieldsplit_type schur -fieldsplit_0_pc_fieldsplit_schur_fact_type full -fieldsplit_0_pc_fieldsplit_schur_precondition selfp -fieldsplit_0_fieldsplit_u_ksp_type preonly -fieldsplit_0_fieldsplit_u_pc_type lu -fieldsplit_0_fieldsplit_p_ksp_type cg -fieldsplit_0_fieldsplit_p_ksp_rtol 1e-14 -fieldsplit_0_fieldsplit_p_ksp_atol 1e-30 -fieldsplit_0_fieldsplit_p_pc_type icc -fieldsplit_0_ksp_rtol 1e-14 -fieldsplit_0_ksp_atol 1e-30 -fieldsplit_0_ksp_monitor_true_residual -fieldsplit_c_ksp_type preonly -fieldsplit_c_pc_type lu -ksp_view By default, we use the field names, but you can prevent this by specifying the fields by hand, so -fieldsplit_0_pc_fieldsplit_0_fields 0 -fieldsplit_0_pc_fieldsplit_1_fields 1 should remove the 'u' and 'p' fieldnames. It is somewhat hacky, but I think easier to remember than some extra option. Thanks, Matt Note that this is just an academic example (sorry for the low solver tolerances) to test the approach, consisting of a Stokes equation and some concentration equation (which is not even coupled to Stokes, just for testing). Completely analogous to https://lists.mcs.anl.gov/pipermail/petsc-users/2019-January/037262.html, I translate my IS?s to a PETSc Section, which is then supplied to a DMShell and assigned to a KSP. I am not so familiar with the code or how / why this works, but it seems to do so perfectly. 
I name my sections with petsc4py using section.setFieldName(0, "u") section.setFieldName(1, "p") section.setFieldName(2, "c") However, this is also reflected in the way I can access the fieldsplit options from the command line. My question is: Is there any way of not using the FieldNames specified in python but use the index of the field as defined with ?-pc_fieldsplit_0_fields 0, 1? and ?-pc_fieldsplit_1_fields 2?, i.e., instead of the prefix ?fieldsplit_0_fieldsplit_u? I want to write ?fieldsplit_0_fieldsplit_0?, instead of ?fieldsplit_0_fieldsplit_p? I want to use ?fieldsplit_0_fieldsplit_1?, and instead of ?fieldsplit_c? I want to use ?fieldsplit_1?. Just changing the names of the fields to section.setFieldName(0, "0") section.setFieldName(1, "1") section.setFieldName(2, "2") does obviously not work as expected, as it works for velocity and pressure, but not for the concentration ? the prefix there is then ?fieldsplit_2? and not ?fieldsplit_1?. In the docs, I have found https://petsc.org/main/manualpages/PC/PCFieldSplitSetFields/ which seems to suggest that the fieldname can potentially be supplied, but I don?t see how to do so from the command line. Also, for the sake of completeness, I attach the output of the solve with ?-ksp_view? below. Thanks a lot in advance and best regards, Sebastian The output of ksp_view is the following: KSP Object: 1 MPI processes type: fgmres restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement happy breakdown tolerance 1e-30 maximum iterations=10000, initial guess is zero tolerances: relative=1e-05, absolute=1e-11, divergence=10000. right preconditioning using UNPRECONDITIONED norm type for convergence test PC Object: 1 MPI processes type: fieldsplit FieldSplit with ADDITIVE composition: total splits = 2 Solver info for each split is in the following KSP objects: Split number 0 Defined by IS KSP Object: (fieldsplit_0_) 1 MPI processes type: fgmres restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement happy breakdown tolerance 1e-30 maximum iterations=10000, initial guess is zero tolerances: relative=1e-14, absolute=1e-30, divergence=10000. right preconditioning using UNPRECONDITIONED norm type for convergence test PC Object: (fieldsplit_0_) 1 MPI processes type: fieldsplit FieldSplit with Schur preconditioner, factorization FULL Preconditioner for the Schur complement formed from Sp, an assembled approximation to S, which uses A00's diagonal's inverse Split info: Split number 0 Defined by IS Split number 1 Defined by IS KSP solver for A00 block KSP Object: (fieldsplit_0_fieldsplit_u_) 1 MPI processes type: preonly maximum iterations=10000, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000. 
left preconditioning using NONE norm type for convergence test PC Object: (fieldsplit_0_fieldsplit_u_) 1 MPI processes type: lu out-of-place factorization tolerance for zero pivot 2.22045e-14 matrix ordering: nd factor fill ratio given 5., needed 3.92639 Factored matrix follows: Mat Object: 1 MPI processes type: seqaij rows=4290, cols=4290 package used to perform factorization: petsc total: nonzeros=375944, allocated nonzeros=375944 using I-node routines: found 2548 nodes, limit used is 5 linear system matrix = precond matrix: Mat Object: (fieldsplit_0_fieldsplit_u_) 1 MPI processes type: seqaij rows=4290, cols=4290 total: nonzeros=95748, allocated nonzeros=95748 total number of mallocs used during MatSetValues calls=0 using I-node routines: found 3287 nodes, limit used is 5 KSP solver for S = A11 - A10 inv(A00) A01 KSP Object: (fieldsplit_0_fieldsplit_p_) 1 MPI processes type: cg maximum iterations=10000, initial guess is zero tolerances: relative=1e-14, absolute=1e-30, divergence=10000. left preconditioning using PRECONDITIONED norm type for convergence test PC Object: (fieldsplit_0_fieldsplit_p_) 1 MPI processes type: icc out-of-place factorization 0 levels of fill tolerance for zero pivot 2.22045e-14 using Manteuffel shift [POSITIVE_DEFINITE] matrix ordering: natural factor fill ratio given 1., needed 1. Factored matrix follows: Mat Object: 1 MPI processes type: seqsbaij rows=561, cols=561 package used to perform factorization: petsc total: nonzeros=5120, allocated nonzeros=5120 block size is 1 linear system matrix followed by preconditioner matrix: Mat Object: (fieldsplit_0_fieldsplit_p_) 1 MPI processes type: schurcomplement rows=561, cols=561 Schur complement A11 - A10 inv(A00) A01 A11 Mat Object: (fieldsplit_0_fieldsplit_p_) 1 MPI processes type: seqaij rows=561, cols=561 total: nonzeros=3729, allocated nonzeros=3729 total number of mallocs used during MatSetValues calls=0 not using I-node routines A10 Mat Object: 1 MPI processes type: seqaij rows=561, cols=4290 total: nonzeros=19938, allocated nonzeros=19938 total number of mallocs used during MatSetValues calls=0 not using I-node routines KSP of A00 KSP Object: (fieldsplit_0_fieldsplit_u_) 1 MPI processes type: preonly maximum iterations=10000, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000. 
left preconditioning using NONE norm type for convergence test PC Object: (fieldsplit_0_fieldsplit_u_) 1 MPI processes type: lu out-of-place factorization tolerance for zero pivot 2.22045e-14 matrix ordering: nd factor fill ratio given 5., needed 3.92639 Factored matrix follows: Mat Object: 1 MPI processes type: seqaij rows=4290, cols=4290 package used to perform factorization: petsc total: nonzeros=375944, allocated nonzeros=375944 using I-node routines: found 2548 nodes, limit used is 5 linear system matrix = precond matrix: Mat Object: (fieldsplit_0_fieldsplit_u_) 1 MPI processes type: seqaij rows=4290, cols=4290 total: nonzeros=95748, allocated nonzeros=95748 total number of mallocs used during MatSetValues calls=0 using I-node routines: found 3287 nodes, limit used is 5 A01 Mat Object: 1 MPI processes type: seqaij rows=4290, cols=561 total: nonzeros=19938, allocated nonzeros=19938 total number of mallocs used during MatSetValues calls=0 using I-node routines: found 3287 nodes, limit used is 5 Mat Object: 1 MPI processes type: seqaij rows=561, cols=561 total: nonzeros=9679, allocated nonzeros=9679 total number of mallocs used during MatSetValues calls=0 not using I-node routines linear system matrix = precond matrix: Mat Object: (fieldsplit_0_) 1 MPI processes type: seqaij rows=4851, cols=4851 total: nonzeros=139353, allocated nonzeros=139353 total number of mallocs used during MatSetValues calls=0 using I-node routines: found 3830 nodes, limit used is 5 Split number 1 Defined by IS KSP Object: (fieldsplit_c_) 1 MPI processes type: preonly maximum iterations=10000, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000. left preconditioning using NONE norm type for convergence test PC Object: (fieldsplit_c_) 1 MPI processes type: lu out-of-place factorization tolerance for zero pivot 2.22045e-14 matrix ordering: nd factor fill ratio given 5., needed 4.24323 Factored matrix follows: Mat Object: 1 MPI processes type: seqaij rows=561, cols=561 package used to perform factorization: petsc total: nonzeros=15823, allocated nonzeros=15823 not using I-node routines linear system matrix = precond matrix: Mat Object: (fieldsplit_c_) 1 MPI processes type: seqaij rows=561, cols=561 total: nonzeros=3729, allocated nonzeros=3729 total number of mallocs used during MatSetValues calls=0 not using I-node routines linear system matrix = precond matrix: Mat Object: 1 MPI processes type: seqaij rows=5412, cols=5412 total: nonzeros=190416, allocated nonzeros=190416 total number of mallocs used during MatSetValues calls=0 using I-node routines: found 3833 nodes, limit used is 5 -- Dr. Sebastian Blauth Fraunhofer-Institut f?r Techno- und Wirtschaftsmathematik ITWM Abteilung Transportvorg?nge Fraunhofer-Platz 1, 67663 Kaiserslautern Telefon: +49 631 31600-4968 sebastian.blauth at itwm.fraunhofer.de https://www.itwm.fraunhofer.de -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: smime.p7s Type: application/pkcs7-signature Size: 7943 bytes Desc: not available URL: From bsmith at petsc.dev Mon Jul 1 09:14:55 2024 From: bsmith at petsc.dev (Barry Smith) Date: Mon, 1 Jul 2024 10:14:55 -0400 Subject: [petsc-users] Weird handling of compiler flags by the build system In-Reply-To: References: Message-ID: <339A8F90-6B61-42BB-B7AB-AB16B905C154@petsc.dev> An HTML attachment was scrubbed... URL: From knepley at gmail.com Mon Jul 1 09:30:11 2024 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 1 Jul 2024 10:30:11 -0400 Subject: [petsc-users] Question regarding naming of fieldsplit splits In-Reply-To: References: Message-ID: On Mon, Jul 1, 2024 at 9:48?AM Blauth, Sebastian < sebastian.blauth at itwm.fraunhofer.de> wrote: > Dear Matt, > > > > thanks a lot for your help. Unfortunately, for me these extra options do > not have any effect, I still get the ?u? and ?p? fieldnames. Also, this > would not help me to get rid of the ?c? fieldname ? on that level of the > fieldsplit I am basically using your approach already, and still it does > show up. The output of the -ksp_view is unchanged, so that I do not attach > it here again. Maybe I misunderstood you? > Oh, we make an exception for single fields, since we think you would want to use the name. I have to make an extra option to shut off naming. Thanks, Matt > Thanks for the help and best regards, > > Sebastian > > > > -- > > Dr. Sebastian Blauth > > Fraunhofer-Institut f?r > > Techno- und Wirtschaftsmathematik ITWM > > Abteilung Transportvorg?nge > > Fraunhofer-Platz 1, 67663 Kaiserslautern > > Telefon: +49 631 31600-4968 > > sebastian.blauth at itwm.fraunhofer.de > > https://urldefense.us/v3/__https://www.itwm.fraunhofer.de__;!!G_uCfscf7eWS!e9SeVBzwCGTmMl3gOc_WG4S_zL5JNMZYyiUGhfrhQAN_re34sVynQzfQyxsY8DUyFdPf1HHTM-Pas_5OTyRd$ > > > > *From:* Matthew Knepley > *Sent:* Monday, July 1, 2024 2:27 PM > *To:* Blauth, Sebastian > *Cc:* petsc-users at mcs.anl.gov > *Subject:* Re: [petsc-users] Question regarding naming of fieldsplit > splits > > > > On Fri, Jun 28, 2024 at 4:05?AM Blauth, Sebastian < > sebastian.blauth at itwm.fraunhofer.de> wrote: > > Hello everyone, > > > > I have a question regarding the naming convention using PETSc?s > PCFieldsplit. I have been following > https://urldefense.us/v3/__https://lists.mcs.anl.gov/pipermail/petsc-users/2019-January/037262.html__;!!G_uCfscf7eWS!e9SeVBzwCGTmMl3gOc_WG4S_zL5JNMZYyiUGhfrhQAN_re34sVynQzfQyxsY8DUyFdPf1HHTM-Paswa3c8E2$ > to create a DMShell with FEniCS in order to customize PCFieldsplit for my > application. 
> > I am using the following options, which work nicely for me: > > > > -ksp_type fgmres > > -pc_type fieldsplit > > -pc_fieldsplit_0_fields 0, 1 > > -pc_fieldsplit_1_fields 2 > > -pc_fieldsplit_type additive > > -fieldsplit_0_ksp_type fgmres > > -fieldsplit_0_pc_type fieldsplit > > -fieldsplit_0_pc_fieldsplit_type schur > > -fieldsplit_0_pc_fieldsplit_schur_fact_type full > > -fieldsplit_0_pc_fieldsplit_schur_precondition selfp > > -fieldsplit_0_fieldsplit_u_ksp_type preonly > > -fieldsplit_0_fieldsplit_u_pc_type lu > > -fieldsplit_0_fieldsplit_p_ksp_type cg > > -fieldsplit_0_fieldsplit_p_ksp_rtol 1e-14 > > -fieldsplit_0_fieldsplit_p_ksp_atol 1e-30 > > -fieldsplit_0_fieldsplit_p_pc_type icc > > -fieldsplit_0_ksp_rtol 1e-14 > > -fieldsplit_0_ksp_atol 1e-30 > > -fieldsplit_0_ksp_monitor_true_residual > > -fieldsplit_c_ksp_type preonly > > -fieldsplit_c_pc_type lu > > -ksp_view > > > > By default, we use the field names, but you can prevent this by specifying > the fields by hand, so > > > > -fieldsplit_0_pc_fieldsplit_0_fields 0 > -fieldsplit_0_pc_fieldsplit_1_fields 1 > > > > should remove the 'u' and 'p' fieldnames. It is somewhat hacky, but I > think easier to remember than > > some extra option. > > > > Thanks, > > > > Matt > > > > Note that this is just an academic example (sorry for the low solver > tolerances) to test the approach, consisting of a Stokes equation and some > concentration equation (which is not even coupled to Stokes, just for > testing). > > Completely analogous to > https://urldefense.us/v3/__https://lists.mcs.anl.gov/pipermail/petsc-users/2019-January/037262.html__;!!G_uCfscf7eWS!e9SeVBzwCGTmMl3gOc_WG4S_zL5JNMZYyiUGhfrhQAN_re34sVynQzfQyxsY8DUyFdPf1HHTM-Paswa3c8E2$ , > I translate my IS?s to a PETSc Section, which is then supplied to a DMShell > and assigned to a KSP. I am not so familiar with the code or how / why this > works, but it seems to do so perfectly. I name my sections with petsc4py > using > > > > section.setFieldName(0, "u") > > section.setFieldName(1, "p") > > section.setFieldName(2, "c") > > > > However, this is also reflected in the way I can access the fieldsplit > options from the command line. My question is: Is there any way of not > using the FieldNames specified in python but use the index of the field as > defined with ?-pc_fieldsplit_0_fields 0, 1? and ?-pc_fieldsplit_1_fields > 2?, i.e., instead of the prefix ?fieldsplit_0_fieldsplit_u? I want to write > ?fieldsplit_0_fieldsplit_0?, instead of ?fieldsplit_0_fieldsplit_p? I want > to use ?fieldsplit_0_fieldsplit_1?, and instead of ?fieldsplit_c? I want to > use ?fieldsplit_1?. Just changing the names of the fields to > > > > section.setFieldName(0, "0") > > section.setFieldName(1, "1") > > section.setFieldName(2, "2") > > > > does obviously not work as expected, as it works for velocity and > pressure, but not for the concentration ? the prefix there is then > ?fieldsplit_2? and not ?fieldsplit_1?. In the docs, I have found > https://urldefense.us/v3/__https://petsc.org/main/manualpages/PC/PCFieldSplitSetFields/__;!!G_uCfscf7eWS!e9SeVBzwCGTmMl3gOc_WG4S_zL5JNMZYyiUGhfrhQAN_re34sVynQzfQyxsY8DUyFdPf1HHTM-Pas1RiSjwn$ which seems > to suggest that the fieldname can potentially be supplied, but I don?t see > how to do so from the command line. Also, for the sake of completeness, I > attach the output of the solve with ?-ksp_view? below. 
> > > > Thanks a lot in advance and best regards, > > Sebastian > > > > > > The output of ksp_view is the following: > > KSP Object: 1 MPI processes > > type: fgmres > > restart=30, using Classical (unmodified) Gram-Schmidt > Orthogonalization with no iterative refinement > > happy breakdown tolerance 1e-30 > > maximum iterations=10000, initial guess is zero > > tolerances: relative=1e-05, absolute=1e-11, divergence=10000. > > right preconditioning > > using UNPRECONDITIONED norm type for convergence test > > PC Object: 1 MPI processes > > type: fieldsplit > > FieldSplit with ADDITIVE composition: total splits = 2 > > Solver info for each split is in the following KSP objects: > > Split number 0 Defined by IS > > KSP Object: (fieldsplit_0_) 1 MPI processes > > type: fgmres > > restart=30, using Classical (unmodified) Gram-Schmidt > Orthogonalization with no iterative refinement > > happy breakdown tolerance 1e-30 > > maximum iterations=10000, initial guess is zero > > tolerances: relative=1e-14, absolute=1e-30, divergence=10000. > > right preconditioning > > using UNPRECONDITIONED norm type for convergence test > > PC Object: (fieldsplit_0_) 1 MPI processes > > type: fieldsplit > > FieldSplit with Schur preconditioner, factorization FULL > > Preconditioner for the Schur complement formed from Sp, an assembled > approximation to S, which uses A00's diagonal's inverse > > Split info: > > Split number 0 Defined by IS > > Split number 1 Defined by IS > > KSP solver for A00 block > > KSP Object: (fieldsplit_0_fieldsplit_u_) 1 MPI processes > > type: preonly > > maximum iterations=10000, initial guess is zero > > tolerances: relative=1e-05, absolute=1e-50, divergence=10000. > > left preconditioning > > using NONE norm type for convergence test > > PC Object: (fieldsplit_0_fieldsplit_u_) 1 MPI processes > > type: lu > > out-of-place factorization > > tolerance for zero pivot 2.22045e-14 > > matrix ordering: nd > > factor fill ratio given 5., needed 3.92639 > > Factored matrix follows: > > Mat Object: 1 MPI processes > > type: seqaij > > rows=4290, cols=4290 > > package used to perform factorization: petsc > > total: nonzeros=375944, allocated nonzeros=375944 > > using I-node routines: found 2548 nodes, limit used is > 5 > > linear system matrix = precond matrix: > > Mat Object: (fieldsplit_0_fieldsplit_u_) 1 MPI processes > > type: seqaij > > rows=4290, cols=4290 > > total: nonzeros=95748, allocated nonzeros=95748 > > total number of mallocs used during MatSetValues calls=0 > > using I-node routines: found 3287 nodes, limit used is 5 > > KSP solver for S = A11 - A10 inv(A00) A01 > > KSP Object: (fieldsplit_0_fieldsplit_p_) 1 MPI processes > > type: cg > > maximum iterations=10000, initial guess is zero > > tolerances: relative=1e-14, absolute=1e-30, divergence=10000. > > left preconditioning > > using PRECONDITIONED norm type for convergence test > > PC Object: (fieldsplit_0_fieldsplit_p_) 1 MPI processes > > type: icc > > out-of-place factorization > > 0 levels of fill > > tolerance for zero pivot 2.22045e-14 > > using Manteuffel shift [POSITIVE_DEFINITE] > > matrix ordering: natural > > factor fill ratio given 1., needed 1. 
> > Factored matrix follows: > > Mat Object: 1 MPI processes > > type: seqsbaij > > rows=561, cols=561 > > package used to perform factorization: petsc > > total: nonzeros=5120, allocated nonzeros=5120 > > block size is 1 > > linear system matrix followed by preconditioner matrix: > > Mat Object: (fieldsplit_0_fieldsplit_p_) 1 MPI processes > > type: schurcomplement > > rows=561, cols=561 > > Schur complement A11 - A10 inv(A00) A01 > > A11 > > Mat Object: (fieldsplit_0_fieldsplit_p_) 1 MPI processes > > type: seqaij > > rows=561, cols=561 > > total: nonzeros=3729, allocated nonzeros=3729 > > total number of mallocs used during MatSetValues calls=0 > > not using I-node routines > > A10 > > Mat Object: 1 MPI processes > > type: seqaij > > rows=561, cols=4290 > > total: nonzeros=19938, allocated nonzeros=19938 > > total number of mallocs used during MatSetValues calls=0 > > not using I-node routines > > KSP of A00 > > KSP Object: (fieldsplit_0_fieldsplit_u_) 1 MPI processes > > type: preonly > > maximum iterations=10000, initial guess is zero > > tolerances: relative=1e-05, absolute=1e-50, > divergence=10000. > > left preconditioning > > using NONE norm type for convergence test > > PC Object: (fieldsplit_0_fieldsplit_u_) 1 MPI processes > > type: lu > > out-of-place factorization > > tolerance for zero pivot 2.22045e-14 > > matrix ordering: nd > > factor fill ratio given 5., needed 3.92639 > > Factored matrix follows: > > Mat Object: 1 MPI processes > > type: seqaij > > rows=4290, cols=4290 > > package used to perform factorization: petsc > > total: nonzeros=375944, allocated nonzeros=375944 > > using I-node routines: found 2548 nodes, limit > used is 5 > > linear system matrix = precond matrix: > > Mat Object: (fieldsplit_0_fieldsplit_u_) 1 MPI processes > > type: seqaij > > rows=4290, cols=4290 > > total: nonzeros=95748, allocated nonzeros=95748 > > total number of mallocs used during MatSetValues > calls=0 > > using I-node routines: found 3287 nodes, limit used > is 5 > > A01 > > Mat Object: 1 MPI processes > > type: seqaij > > rows=4290, cols=561 > > total: nonzeros=19938, allocated nonzeros=19938 > > total number of mallocs used during MatSetValues calls=0 > > using I-node routines: found 3287 nodes, limit used is > 5 > > Mat Object: 1 MPI processes > > type: seqaij > > rows=561, cols=561 > > total: nonzeros=9679, allocated nonzeros=9679 > > total number of mallocs used during MatSetValues calls=0 > > not using I-node routines > > linear system matrix = precond matrix: > > Mat Object: (fieldsplit_0_) 1 MPI processes > > type: seqaij > > rows=4851, cols=4851 > > total: nonzeros=139353, allocated nonzeros=139353 > > total number of mallocs used during MatSetValues calls=0 > > using I-node routines: found 3830 nodes, limit used is 5 > > Split number 1 Defined by IS > > KSP Object: (fieldsplit_c_) 1 MPI processes > > type: preonly > > maximum iterations=10000, initial guess is zero > > tolerances: relative=1e-05, absolute=1e-50, divergence=10000. 
> > left preconditioning > > using NONE norm type for convergence test > > PC Object: (fieldsplit_c_) 1 MPI processes > > type: lu > > out-of-place factorization > > tolerance for zero pivot 2.22045e-14 > > matrix ordering: nd > > factor fill ratio given 5., needed 4.24323 > > Factored matrix follows: > > Mat Object: 1 MPI processes > > type: seqaij > > rows=561, cols=561 > > package used to perform factorization: petsc > > total: nonzeros=15823, allocated nonzeros=15823 > > not using I-node routines > > linear system matrix = precond matrix: > > Mat Object: (fieldsplit_c_) 1 MPI processes > > type: seqaij > > rows=561, cols=561 > > total: nonzeros=3729, allocated nonzeros=3729 > > total number of mallocs used during MatSetValues calls=0 > > not using I-node routines > > linear system matrix = precond matrix: > > Mat Object: 1 MPI processes > > type: seqaij > > rows=5412, cols=5412 > > total: nonzeros=190416, allocated nonzeros=190416 > > total number of mallocs used during MatSetValues calls=0 > > using I-node routines: found 3833 nodes, limit used is 5 > > > > -- > > Dr. Sebastian Blauth > > Fraunhofer-Institut f?r > > Techno- und Wirtschaftsmathematik ITWM > > Abteilung Transportvorg?nge > > Fraunhofer-Platz 1, 67663 Kaiserslautern > > Telefon: +49 631 31600-4968 > > sebastian.blauth at itwm.fraunhofer.de > > https://urldefense.us/v3/__https://www.itwm.fraunhofer.de__;!!G_uCfscf7eWS!e9SeVBzwCGTmMl3gOc_WG4S_zL5JNMZYyiUGhfrhQAN_re34sVynQzfQyxsY8DUyFdPf1HHTM-Pas_5OTyRd$ > > > > > > > -- > > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > > > https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!e9SeVBzwCGTmMl3gOc_WG4S_zL5JNMZYyiUGhfrhQAN_re34sVynQzfQyxsY8DUyFdPf1HHTM-Pas81IL8s1$ > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!e9SeVBzwCGTmMl3gOc_WG4S_zL5JNMZYyiUGhfrhQAN_re34sVynQzfQyxsY8DUyFdPf1HHTM-Pas81IL8s1$ -------------- next part -------------- An HTML attachment was scrubbed... URL: From maitri.ksh at gmail.com Mon Jul 1 09:37:26 2024 From: maitri.ksh at gmail.com (maitri ksh) Date: Mon, 1 Jul 2024 17:37:26 +0300 Subject: [petsc-users] Issue with Exporting and Reading Complex Vectors and Magnitude Vectors in PETSc and MATLAB Message-ID: I need to export complex vectors data from PETSc and read them in MATLAB. However, I am encountering some issues with the data format and interpretation in MATLAB. 
code-snippet of the vector data export section: // Assemble the vectors before exporting ierr = VecAssemblyBegin(f); CHKERRQ(ierr); ierr = VecAssemblyEnd(f); CHKERRQ(ierr); PetscViewer viewerf; // Save the complex vectors to binary files ierr = PetscViewerBinaryOpen(PETSC_COMM_WORLD, "output_f.dat", FILE_MODE_WRITE, &viewerf); CHKERRQ(ierr); ierr = VecView(f, viewerf); CHKERRQ(ierr); ierr = PetscViewerDestroy(&viewerf); CHKERRQ(ierr); // Create vectors to store the magnitudes Vec f_magnitude; ierr = VecDuplicate(f, &f_magnitude); CHKERRQ(ierr); // Get local portion of the vectors const PetscScalar *f_array; PetscScalar *f_magnitude_array; PetscInt n_local; ierr = VecGetLocalSize(f, &n_local); CHKERRQ(ierr); ierr = VecGetArrayRead(f, &f_array); CHKERRQ(ierr); ierr = VecGetArray(f_magnitude, &f_magnitude_array); CHKERRQ(ierr); // Compute the magnitude for each element for (int i = 0; i < n_local; i++) { f_magnitude_array[i] = PetscAbsScalar(f_array[i]); } // Restore arrays ierr = VecRestoreArrayRead(f, &f_array); CHKERRQ(ierr); ierr = VecRestoreArray(f_magnitude, &f_magnitude_array); CHKERRQ(ierr); // Assemble the magnitude vectors ierr = VecAssemblyBegin(f_magnitude); CHKERRQ(ierr); ierr = VecAssemblyEnd(f_magnitude); CHKERRQ(ierr); // Save the magnitude vectors to binary files PetscViewer viewerfmag; ierr = PetscViewerBinaryOpen(PETSC_COMM_WORLD, "output_f_mag.dat", FILE_MODE_WRITE, &viewerfmag); CHKERRQ(ierr); ierr = VecView(f_magnitude, viewerfmag); CHKERRQ(ierr); ierr = PetscViewerDestroy(&viewerfmag); CHKERRQ(ierr); In MATLAB, I am using petscBinaryRead to read the data. The complex vectors are read, but only the real part is available. The magnitude vectors are however read as alternating zero and non-zero elements. What went wrong? How can I export the data correctly to MATLAB-accessible format? (I have not configured PETSc with Matlab as I was encountering library conflict issues) -------------- next part -------------- An HTML attachment was scrubbed... URL: From pierre at joliv.et Mon Jul 1 09:40:24 2024 From: pierre at joliv.et (Pierre Jolivet) Date: Mon, 1 Jul 2024 16:40:24 +0200 Subject: [petsc-users] Issue with Exporting and Reading Complex Vectors and Magnitude Vectors in PETSc and MATLAB In-Reply-To: References: Message-ID: > On 1 Jul 2024, at 4:37?PM, maitri ksh wrote: > > This Message Is From an External Sender > This message came from outside your organization. > I need to export complex vectors data from PETSc and read them in MATLAB. However, I am encountering some issues with the data format and interpretation in MATLAB. 
> > code-snippet of the vector data export section: > // Assemble the vectors before exporting > ierr = VecAssemblyBegin(f); CHKERRQ(ierr); > ierr = VecAssemblyEnd(f); CHKERRQ(ierr); > > PetscViewer viewerf; > // Save the complex vectors to binary files > ierr = PetscViewerBinaryOpen(PETSC_COMM_WORLD, "output_f.dat", FILE_MODE_WRITE, &viewerf); CHKERRQ(ierr); > ierr = VecView(f, viewerf); CHKERRQ(ierr); > ierr = PetscViewerDestroy(&viewerf); CHKERRQ(ierr); > > // Create vectors to store the magnitudes > Vec f_magnitude; > ierr = VecDuplicate(f, &f_magnitude); CHKERRQ(ierr); > > // Get local portion of the vectors > const PetscScalar *f_array; > PetscScalar *f_magnitude_array; > PetscInt n_local; > > ierr = VecGetLocalSize(f, &n_local); CHKERRQ(ierr); > ierr = VecGetArrayRead(f, &f_array); CHKERRQ(ierr); > ierr = VecGetArray(f_magnitude, &f_magnitude_array); CHKERRQ(ierr); > > // Compute the magnitude for each element > for (int i = 0; i < n_local; i++) { > f_magnitude_array[i] = PetscAbsScalar(f_array[i]); > } > > // Restore arrays > ierr = VecRestoreArrayRead(f, &f_array); CHKERRQ(ierr); > ierr = VecRestoreArray(f_magnitude, &f_magnitude_array); CHKERRQ(ierr); > > // Assemble the magnitude vectors > ierr = VecAssemblyBegin(f_magnitude); CHKERRQ(ierr); > ierr = VecAssemblyEnd(f_magnitude); CHKERRQ(ierr); > > // Save the magnitude vectors to binary files > PetscViewer viewerfmag; > ierr = PetscViewerBinaryOpen(PETSC_COMM_WORLD, "output_f_mag.dat", FILE_MODE_WRITE, &viewerfmag); CHKERRQ(ierr); > ierr = VecView(f_magnitude, viewerfmag); CHKERRQ(ierr); > ierr = PetscViewerDestroy(&viewerfmag); CHKERRQ(ierr); > > In MATLAB, I am using petscBinaryRead to read the data. The complex vectors are read, but only the real part is available. The magnitude vectors are however read as alternating zero and non-zero elements. What went wrong? > How can I export the data correctly to MATLAB-accessible format? (I have not configured PETSc with Matlab as I was encountering library conflict issues) How are you reading complex-valued binary files in MATLAB? It is not the same as real-valued files, in particular, you must add the parameters 'complex', true to the PetscBinaryRead() call, type help PetscBinaryRead in the MATLAB console. Thanks, Pierre -------------- next part -------------- An HTML attachment was scrubbed... URL: From meator.dev at gmail.com Mon Jul 1 09:43:26 2024 From: meator.dev at gmail.com (meator) Date: Mon, 1 Jul 2024 16:43:26 +0200 Subject: [petsc-users] Weird handling of compiler flags by the build system In-Reply-To: <339A8F90-6B61-42BB-B7AB-AB16B905C154@petsc.dev> References: <339A8F90-6B61-42BB-B7AB-AB16B905C154@petsc.dev> Message-ID: <44e6eb41-ac6c-4fd3-beac-da33341c4a09@gmail.com> Thank you for your reply! On 7/1/24 4:14 PM, Barry Smith wrote: > We have had well over a decade of debates on this issue. I would like to see a CFLAGS+=extra_flags option but that has been resisted. Instead Satish can tell you how to get what you want. This is unfortunate. I assume that patching the buildsystem or some other trickery will be necessary if what you're saying is true. >> Is there a way to fix the pkg-config file (apart from manually removing cflags_extra, cxxflags_extra, and fflags_extra from the .pc file)? > > These are there so people can see EXACTLY what flags were used when PETSc was compiled. They are not intended for people using pkg-config to use PETSc when building their package. What is the harm in having these extra flags in the pkgconfig file? 
Here is an excerpt from /usr/share/petsc/Makefile.user (this is a template Makefile supplied with PETSc for use in custom projects): > # Additional libraries that support pkg-config can be added to the list of PACKAGES below. > PACKAGES := $(petsc.pc) > > CC := $(shell pkg-config --variable=ccompiler $(PACKAGES)) > CXX := $(shell pkg-config --variable=cxxcompiler $(PACKAGES)) > FC := $(shell pkg-config --variable=fcompiler $(PACKAGES)) > CFLAGS_OTHER := $(shell pkg-config --cflags-only-other $(PACKAGES)) > CFLAGS := $(shell pkg-config --variable=cflags_extra $(PACKAGES)) $(CFLAGS_OTHER) > CXXFLAGS := $(shell pkg-config --variable=cxxflags_extra $(PACKAGES)) $(CFLAGS_OTHER) > FFLAGS := $(shell pkg-config --variable=fflags_extra $(PACKAGES)) > CPPFLAGS := $(shell pkg-config --cflags-only-I $(PACKAGES)) > LDFLAGS := $(shell pkg-config --libs-only-L --libs-only-other $(PACKAGES)) > LDFLAGS += $(patsubst -L%, $(shell pkg-config --variable=ldflag_rpath $(PACKAGES))%, $(shell pkg-config --libs-only-L $(PACKAGES))) > LDLIBS := $(shell pkg-config --libs-only-l $(PACKAGES)) -lm > CUDAC := $(shell pkg-config --variable=cudacompiler $(PACKAGES)) > CUDAC_FLAGS := $(shell pkg-config --variable=cudaflags_extra $(PACKAGES)) > CUDA_LIB := $(shell pkg-config --variable=cudalib $(PACKAGES)) > CUDA_INCLUDE := $(shell pkg-config --variable=cudainclude $(PACKAGES)) CFLAGS of user projects get initialized to cflags_extra for people who use the official recommended Makefile template. This is not tolerable because the flags used for building PETSc may be incompatible with unrelated projects that depend on PETSc. The build environment of the user program may be very different from the one used to build PETSc itself. Users trying to build their custom programs depending on PETSc will likely not want flags that were used to build PETSc in a fake destdir in chrooted system while building the PETSc package. -------------- next part -------------- A non-text attachment was scrubbed... Name: OpenPGP_0x1A14CB3464CBE5BF.asc Type: application/pgp-keys Size: 6275 bytes Desc: OpenPGP public key URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: OpenPGP_signature.asc Type: application/pgp-signature Size: 659 bytes Desc: OpenPGP digital signature URL: From stefano.zampini at gmail.com Mon Jul 1 10:23:34 2024 From: stefano.zampini at gmail.com (Stefano Zampini) Date: Mon, 1 Jul 2024 17:23:34 +0200 Subject: [petsc-users] Weird handling of compiler flags by the build system In-Reply-To: <44e6eb41-ac6c-4fd3-beac-da33341c4a09@gmail.com> References: <339A8F90-6B61-42BB-B7AB-AB16B905C154@petsc.dev> <44e6eb41-ac6c-4fd3-beac-da33341c4a09@gmail.com> Message-ID: Il giorno lun 1 lug 2024 alle ore 16:43 meator ha scritto: > Thank you for your reply! > > On 7/1/24 4:14 PM, Barry Smith wrote: > > We have had well over a decade of debates on this issue. I would > like to see a CFLAGS+=extra_flags option but that has been resisted. > Instead Satish can tell you how to get what you want. > > This is unfortunate. I assume that patching the buildsystem or some > other trickery will be necessary if what you're saying is true. > > I admit it is not so standard, but Satish always opposed this. No need to hack the buildsystem, just use COPTFLAGS, CXXOPTFLAGS and FOPTFLAGS > >> Is there a way to fix the pkg-config file (apart from manually removing > cflags_extra, cxxflags_extra, and fflags_extra from the .pc file)? > > > > These are there so people can see EXACTLY what flags were used when > PETSc was compiled. 
They are not intended for people using pkg-config to > use PETSc when building their package. What is the harm in having these > extra flags in the pkgconfig file? > I don't think you should use Makefile.user. That is there as a sort of template/placeholder. The extra variables are not included in a standard usage of pkgconfig, so I don't think this issue is "severe" $ pkg-config --cflags PETSc.pc # standard usage -I/Users/szampini/Devel/petsc/arch-debug/include -I/Users/szampini/Devel/petsc/include $ pkg-config --variable=cflags_extra PETSc.pc # non standard -fPIC -Wall -Wwrite-strings -Wno-unknown-pragmas -fstack-protector -fno-stack-check -Qunused-arguments -fvisibility=hidden -g3 -O0 > Here is an excerpt from /usr/share/petsc/Makefile.user (this is a > template Makefile supplied with PETSc for use in custom projects): > > > # Additional libraries that support pkg-config can be added to the > list of PACKAGES below. > > PACKAGES := $(petsc.pc) > > > > CC := $(shell pkg-config --variable=ccompiler $(PACKAGES)) > > CXX := $(shell pkg-config --variable=cxxcompiler $(PACKAGES)) > > FC := $(shell pkg-config --variable=fcompiler $(PACKAGES)) > > CFLAGS_OTHER := $(shell pkg-config --cflags-only-other $(PACKAGES)) > > CFLAGS := $(shell pkg-config --variable=cflags_extra $(PACKAGES)) > $(CFLAGS_OTHER) > > CXXFLAGS := $(shell pkg-config --variable=cxxflags_extra $(PACKAGES)) > $(CFLAGS_OTHER) > > FFLAGS := $(shell pkg-config --variable=fflags_extra $(PACKAGES)) > > CPPFLAGS := $(shell pkg-config --cflags-only-I $(PACKAGES)) > > LDFLAGS := $(shell pkg-config --libs-only-L --libs-only-other > $(PACKAGES)) > > LDFLAGS += $(patsubst -L%, $(shell pkg-config --variable=ldflag_rpath > $(PACKAGES))%, $(shell pkg-config --libs-only-L $(PACKAGES))) > > LDLIBS := $(shell pkg-config --libs-only-l $(PACKAGES)) -lm > > CUDAC := $(shell pkg-config --variable=cudacompiler $(PACKAGES)) > > CUDAC_FLAGS := $(shell pkg-config --variable=cudaflags_extra > $(PACKAGES)) > > CUDA_LIB := $(shell pkg-config --variable=cudalib $(PACKAGES)) > > CUDA_INCLUDE := $(shell pkg-config --variable=cudainclude $(PACKAGES)) > > CFLAGS of user projects get initialized to cflags_extra for people who > use the official recommended Makefile template. This is not tolerable > because the flags used for building PETSc may be incompatible with > unrelated projects that depend on PETSc. The build environment of the > user program may be very different from the one used to build PETSc > itself. Users trying to build their custom programs depending on PETSc > will likely not want flags that were used to build PETSc in a fake > destdir in chrooted system while building the PETSc package. > -- Stefano -------------- next part -------------- An HTML attachment was scrubbed... URL: From balay.anl at fastmail.org Mon Jul 1 10:39:09 2024 From: balay.anl at fastmail.org (Satish Balay) Date: Mon, 1 Jul 2024 10:39:09 -0500 (CDT) Subject: [petsc-users] Weird handling of compiler flags by the build system In-Reply-To: References: <339A8F90-6B61-42BB-B7AB-AB16B905C154@petsc.dev> <44e6eb41-ac6c-4fd3-beac-da33341c4a09@gmail.com> Message-ID: <1ed1017c-c96c-5244-f195-e08f125f394b@fastmail.org> An HTML attachment was scrubbed... 
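To make the distinction concrete, a minimal sketch of the two configure styles discussed in this thread; the flag values below are placeholders rather than recommendations:

# CFLAGS passed to configure replaces the flags PETSc would otherwise detect,
# so options such as -fvisibility=hidden and the extra warnings are lost:
$ ./configure CFLAGS='-O2 -ffile-prefix-map=/build=.'

# COPTFLAGS/CXXOPTFLAGS/FOPTFLAGS only set the optimization flags and leave
# the detected base flags in place:
$ ./configure COPTFLAGS='-O2' CXXOPTFLAGS='-O2' FOPTFLAGS='-O2'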
URL: From meator.dev at gmail.com Mon Jul 1 11:08:45 2024 From: meator.dev at gmail.com (meator) Date: Mon, 1 Jul 2024 18:08:45 +0200 Subject: [petsc-users] Weird handling of compiler flags by the build system In-Reply-To: References: <339A8F90-6B61-42BB-B7AB-AB16B905C154@petsc.dev> <44e6eb41-ac6c-4fd3-beac-da33341c4a09@gmail.com> Message-ID: On 7/1/24 5:23 PM, Stefano Zampini wrote: > This is unfortunate. I assume that patching the buildsystem or some > other trickery will be necessary if what you're saying is true. > > I admit it is not so standard, but Satish always opposed this. > No need to hack the buildsystem, just use?COPTFLAGS, CXXOPTFLAGS and > FOPTFLAGS I will try using *OPTFLAGS, thanks! > I don't think you should use Makefile.user. That is there as a sort of > template/placeholder. The extra variables are not included in a standard > usage of pkgconfig, so I don't think this issue is "severe" This is not a choice I get to make. The users of my package may choose to use PETSc however they want. And /usr/share/petsc/Makefile.user or /usr/share/petsc/CMakeLists.txt are officially supported ways of using PETSc, so deciding to ignore these use cases and leaving junk flags in /usr/lib/pkgconfig/petsc.pc is not tolerable for me. -------------- next part -------------- A non-text attachment was scrubbed... Name: OpenPGP_0x1A14CB3464CBE5BF.asc Type: application/pgp-keys Size: 6275 bytes Desc: OpenPGP public key URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: OpenPGP_signature.asc Type: application/pgp-signature Size: 659 bytes Desc: OpenPGP digital signature URL: From balay.anl at fastmail.org Mon Jul 1 11:17:01 2024 From: balay.anl at fastmail.org (Satish Balay) Date: Mon, 1 Jul 2024 11:17:01 -0500 (CDT) Subject: [petsc-users] Weird handling of compiler flags by the build system In-Reply-To: References: Message-ID: <95879d83-28e9-bfbb-8039-cfac024d452f@fastmail.org> An HTML attachment was scrubbed... URL: From balay.anl at fastmail.org Mon Jul 1 11:22:29 2024 From: balay.anl at fastmail.org (Satish Balay) Date: Mon, 1 Jul 2024 11:22:29 -0500 (CDT) Subject: [petsc-users] Weird handling of compiler flags by the build system In-Reply-To: References: <339A8F90-6B61-42BB-B7AB-AB16B905C154@petsc.dev> <44e6eb41-ac6c-4fd3-beac-da33341c4a09@gmail.com> Message-ID: <1931593e-ad96-d497-2c5b-34af831e6e3f@fastmail.org> An HTML attachment was scrubbed... URL: From bsmith at petsc.dev Mon Jul 1 11:37:36 2024 From: bsmith at petsc.dev (Barry Smith) Date: Mon, 1 Jul 2024 12:37:36 -0400 Subject: [petsc-users] Weird handling of compiler flags by the build system In-Reply-To: References: <339A8F90-6B61-42BB-B7AB-AB16B905C154@petsc.dev> <44e6eb41-ac6c-4fd3-beac-da33341c4a09@gmail.com> Message-ID: I have added two MR to hopefully improve PETSc usability based on your issues https://urldefense.us/v3/__https://gitlab.com/petsc/petsc/-/merge_requests/7663__;!!G_uCfscf7eWS!efrDwkgUkc276It23vyi4kUxZ3ieab1AgseqAOCvE-K9-nLNjd6aad4rmdaRExARms_zfeExsNtm5lRx_b5ezSM$ ? Add information to template Makefile.user indicating what parts can easily be... (!7663) ? Merge requests ? PETSc / petsc ? GitLab gitlab.com to clarify in Makefile.User how to remove the PETSc build compiler flags and https://urldefense.us/v3/__https://gitlab.com/petsc/petsc/-/merge_requests/7662__;!!G_uCfscf7eWS!efrDwkgUkc276It23vyi4kUxZ3ieab1AgseqAOCvE-K9-nLNjd6aad4rmdaRExARms_zfeExsNtm5lRxbVNm9-g$ ? Additional documentation in configure for CFLAGS and friends (!7662) ? Merge requests ? PETSc / petsc ? 
GitLab gitlab.com additional clarification in the docs for CFLAGS and friends and how they work Barry > On Jul 1, 2024, at 12:08?PM, meator wrote: > > On 7/1/24 5:23 PM, Stefano Zampini wrote: >> This is unfortunate. I assume that patching the buildsystem or some >> other trickery will be necessary if what you're saying is true. >> I admit it is not so standard, but Satish always opposed this. >> No need to hack the buildsystem, just use COPTFLAGS, CXXOPTFLAGS and FOPTFLAGS > > I will try using *OPTFLAGS, thanks! > >> I don't think you should use Makefile.user. That is there as a sort of template/placeholder. The extra variables are not included in a standard usage of pkgconfig, so I don't think this issue is "severe" > > This is not a choice I get to make. The users of my package may choose to use PETSc however they want. And /usr/share/petsc/Makefile.user or /usr/share/petsc/CMakeLists.txt are officially supported ways of using PETSc, so deciding to ignore these use cases and leaving junk flags in /usr/lib/pkgconfig/petsc.pc is not tolerable for me. > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: PETSc_RBG-logo.png Type: image/png Size: 7210 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: PETSc_RBG-logo.png Type: image/png Size: 7210 bytes Desc: not available URL: From stefano.zampini at gmail.com Mon Jul 1 11:53:21 2024 From: stefano.zampini at gmail.com (Stefano Zampini) Date: Mon, 1 Jul 2024 18:53:21 +0200 Subject: [petsc-users] Weird handling of compiler flags by the build system In-Reply-To: References: <339A8F90-6B61-42BB-B7AB-AB16B905C154@petsc.dev> <44e6eb41-ac6c-4fd3-beac-da33341c4a09@gmail.com> Message-ID: > > > > This is not a choice I get to make. The users of my package may choose > to use PETSc however they want. And /usr/share/petsc/Makefile.user or > /usr/share/petsc/CMakeLists.txt are officially supported ways of using > PETSc, so deciding to ignore these use cases and leaving junk flags in > /usr/lib/pkgconfig/petsc.pc is not tolerable for me. > Since you said "your package", I have a few questions: - why do your users need to know how to compile with PETSc? - Shouldn't this be handled by you? - And also, can't you tell your users how to use PETSc? (for example not to use the Makefile.user) > And /usr/share/petsc/Makefile.user or /usr/share/petsc/CMakeLists.txt are officially supported ways of using PETSc, Those two files are not the "officially supported ways of using PETSc." They are examples of how to set up compilations using PETSc. > leaving junk flags in /usr/lib/pkgconfig/petsc.pc is not tolerable for me. Those are not junk flags, since they are not part of the pkg config standard https://urldefense.us/v3/__https://people.freedesktop.org/*dbn/pkg-config-guide.html__;fg!!G_uCfscf7eWS!d-as0YaTKRi7ARzjsQl9IU8ltIJCAw9yJy_g1VbAzfXYDaNPjDiC6o_WhtrhA8K7o1Lm5IAbSM--LkimyiQtut-CuKCpx6A$ -- Stefano -------------- next part -------------- An HTML attachment was scrubbed... 
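For a user program the standard pkg-config queries are sufficient and never pick up the *_extra variables; a minimal sketch, assuming PETSc.pc (or petsc.pc) is on PKG_CONFIG_PATH, the program is app.c, and cc stands for the same (MPI) compiler PETSc was built with:

# only the -I paths, -L paths and -l libraries from the .pc file are used here
$ cc `pkg-config --cflags PETSc` app.c `pkg-config --libs PETSc` -o app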
URL: From meator.dev at gmail.com Mon Jul 1 14:09:57 2024 From: meator.dev at gmail.com (meator) Date: Mon, 1 Jul 2024 21:09:57 +0200 Subject: [petsc-users] Weird handling of compiler flags by the build system resolved In-Reply-To: <1931593e-ad96-d497-2c5b-34af831e6e3f@fastmail.org> References: <339A8F90-6B61-42BB-B7AB-AB16B905C154@petsc.dev> <44e6eb41-ac6c-4fd3-beac-da33341c4a09@gmail.com> <1931593e-ad96-d497-2c5b-34af831e6e3f@fastmail.org> Message-ID: On 7/1/24 6:22 PM, Satish Balay wrote: > And for your desired use case: "I need to specify flags to PETSc build - and they should not bleed to dependent pkgs" I've indicated one mode for this. > > i.e - do not specify them with configure - only list them with make. > > ./configure > > make CFLAGS='only-for-petsc-library-build' Thank you for your answers! A combination of this approach and the *OPTFLAGS approach mentioned before allowed me to pass all the relevant flags while building PETSc without overriding preset ones and without polluting the pkg-config file. PETSc build flags will now not bleed to dependent pkgs. I was also not aware of some of the flags which you have mentioned (--with-visibility, --with-pic). I have now made use of these. -------------- next part -------------- A non-text attachment was scrubbed... Name: OpenPGP_0x1A14CB3464CBE5BF.asc Type: application/pgp-keys Size: 6275 bytes Desc: OpenPGP public key URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: OpenPGP_signature.asc Type: application/pgp-signature Size: 659 bytes Desc: OpenPGP digital signature URL: From meator.dev at gmail.com Mon Jul 1 14:53:43 2024 From: meator.dev at gmail.com (meator) Date: Mon, 1 Jul 2024 21:53:43 +0200 Subject: [petsc-users] Weird handling of compiler flags by the build system In-Reply-To: References: <339A8F90-6B61-42BB-B7AB-AB16B905C154@petsc.dev> <44e6eb41-ac6c-4fd3-beac-da33341c4a09@gmail.com> Message-ID: <67986226-bc57-4bce-85e4-6b710cd96f92@gmail.com> On 7/1/24 6:53 PM, Stefano Zampini wrote:> Since you said? "your package", I have a few questions: > > - why do your users need to know how to compile with?PETSc? I provide two main packages: petsc and petsc-devel. petsc-devel contains header files, symlinks to dynamic library & more. petsc-devel should also be fully usable for personal development of PETSc dependent programs. > - Shouldn't this be handled?by you? I compile PETSc. That's what the users of my package will get from me. But they may link with it however they wish. And since the Makefile method is official & provided by upstream, I don't see a reason why I shouldn't take care in supporting it. The extra pkg-config variables are not standard and most systems integrating with PETSc using pkg-config will simply ignore these, but that isn't true for the Makefile.user. > - And also, can't you tell your users how to?use PETSc? (for example not > to use the Makefile.user) This is a decision I leave to upstream. As a packager, I try to not alter the functionality of programs & libraries I package, because that would make my packages less trustworthy. If I'd believed that Makefile.user should be removed (which I don't), I would first write on this mailing list requesting its removal from PETSc or I would create an issue/MR on GitLab. > > And /usr/share/petsc/Makefile.user or /usr/share/petsc/CMakeLists.txt > are officially supported ways of using PETSc, > Those two files are not the "officially supported?ways of using PETSc." > They are examples of how to set up?compilations using PETSc. 
I do not view these two files as the only officially supported ways of linking with PETSc. If these files weren't part of PETSc, I wouldn't be compelled to support them. If that was the case, I would simply delete the offending lines in the pkg-config file. But these files are example build definition files provided by PETSc meant to be used in projects which depend on PETSc. Users of my package may or may not choose to use them. > > > leaving junk flags in /usr/lib/pkgconfig/petsc.pc is not tolerable > for me. > > Those are not junk flags, since they are not part of the pkg config standard > https://people.freedesktop.org/~dbn/pkg-config-guide.html > By "junk flags" I meant flags that were used when building PETSc for the petsc package. These flags got put into cflags_extra, which is then used in user projects thanks to Makefile.user. -------------- next part -------------- A non-text attachment was scrubbed... Name: OpenPGP_0x1A14CB3464CBE5BF.asc Type: application/pgp-keys Size: 6275 bytes Desc: OpenPGP public key URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: OpenPGP_signature.asc Type: application/pgp-signature Size: 659 bytes Desc: OpenPGP digital signature URL: From maitri.ksh at gmail.com Tue Jul 2 03:59:38 2024 From: maitri.ksh at gmail.com (maitri ksh) Date: Tue, 2 Jul 2024 11:59:38 +0300 Subject: [petsc-users] Issue with Exporting and Reading Complex Vectors and Magnitude Vectors in PETSc and MATLAB In-Reply-To: References: Message-ID: A variable 'arecomplex' which is by default set to false inside PetscBinaryRead() overrides 'complex', true in PetscBinaryRead(file, "complex", true) thus giving real valued output unless one manually changes arecomplex=true inside the function. On Mon, Jul 1, 2024 at 5:40?PM Pierre Jolivet wrote: > > > On 1 Jul 2024, at 4:37?PM, maitri ksh wrote: > > This Message Is From an External Sender > This message came from outside your organization. > I need to export complex vectors data from PETSc and read them in MATLAB. > However, I am encountering some issues with the data format and > interpretation in MATLAB. 
> > code-snippet of the vector data export section: > // Assemble the vectors before exporting > ierr = VecAssemblyBegin(f); CHKERRQ(ierr); > ierr = VecAssemblyEnd(f); CHKERRQ(ierr); > > PetscViewer viewerf; > // Save the complex vectors to binary files > ierr = PetscViewerBinaryOpen(PETSC_COMM_WORLD, "output_f.dat", > FILE_MODE_WRITE, &viewerf); CHKERRQ(ierr); > ierr = VecView(f, viewerf); CHKERRQ(ierr); > ierr = PetscViewerDestroy(&viewerf); CHKERRQ(ierr); > > // Create vectors to store the magnitudes > Vec f_magnitude; > ierr = VecDuplicate(f, &f_magnitude); CHKERRQ(ierr); > > // Get local portion of the vectors > const PetscScalar *f_array; > PetscScalar *f_magnitude_array; > PetscInt n_local; > > ierr = VecGetLocalSize(f, &n_local); CHKERRQ(ierr); > ierr = VecGetArrayRead(f, &f_array); CHKERRQ(ierr); > ierr = VecGetArray(f_magnitude, &f_magnitude_array); CHKERRQ(ierr); > > // Compute the magnitude for each element > for (int i = 0; i < n_local; i++) { > f_magnitude_array[i] = PetscAbsScalar(f_array[i]); > } > > // Restore arrays > ierr = VecRestoreArrayRead(f, &f_array); CHKERRQ(ierr); > ierr = VecRestoreArray(f_magnitude, &f_magnitude_array); CHKERRQ(ierr); > > // Assemble the magnitude vectors > ierr = VecAssemblyBegin(f_magnitude); CHKERRQ(ierr); > ierr = VecAssemblyEnd(f_magnitude); CHKERRQ(ierr); > > // Save the magnitude vectors to binary files > PetscViewer viewerfmag; > ierr = PetscViewerBinaryOpen(PETSC_COMM_WORLD, "output_f_mag.dat", > FILE_MODE_WRITE, &viewerfmag); CHKERRQ(ierr); > ierr = VecView(f_magnitude, viewerfmag); CHKERRQ(ierr); > ierr = PetscViewerDestroy(&viewerfmag); CHKERRQ(ierr); > > In MATLAB, I am using petscBinaryRead to read the data. The complex > vectors are read, but only the real part is available. The magnitude > vectors are however read as alternating zero and non-zero elements. What > went wrong? > How can I export the data correctly to MATLAB-accessible format? (I have > not configured PETSc with Matlab as I was encountering library conflict > issues) > > > > How are you reading complex-valued binary files in MATLAB? > It is not the same as real-valued files, in particular, you must add the > parameters 'complex', true to the PetscBinaryRead() call, type help > PetscBinaryRead in the MATLAB console. > > Thanks, > Pierre > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pierre.jolivet at lip6.fr Tue Jul 2 04:09:50 2024 From: pierre.jolivet at lip6.fr (Pierre Jolivet) Date: Tue, 2 Jul 2024 11:09:50 +0200 Subject: [petsc-users] Issue with Exporting and Reading Complex Vectors and Magnitude Vectors in PETSc and MATLAB In-Reply-To: References: Message-ID: <8E6BD489-E27E-445C-9590-DD8B81DFCC0E@lip6.fr> > On 2 Jul 2024, at 10:59?AM, maitri ksh wrote: > > A variable 'arecomplex' which is by default set to false inside PetscBinaryRead() overrides 'complex', true in PetscBinaryRead(file, "complex", true) thus giving real valued output unless one manually changes arecomplex=true inside the function. No, you do not need to edit this function. I?m not a MATLAB expert, but I guess there is a difference between ?complex", true (what you are using) and 'complex', true (what I told you to use). Thanks, Pierre > On Mon, Jul 1, 2024 at 5:40?PM Pierre Jolivet > wrote: >> >> >>> On 1 Jul 2024, at 4:37?PM, maitri ksh > wrote: >>> >>> This Message Is From an External Sender >>> This message came from outside your organization. >>> I need to export complex vectors data from PETSc and read them in MATLAB. 
However, I am encountering some issues with the data format and interpretation in MATLAB. >>> >>> code-snippet of the vector data export section: >>> // Assemble the vectors before exporting >>> ierr = VecAssemblyBegin(f); CHKERRQ(ierr); >>> ierr = VecAssemblyEnd(f); CHKERRQ(ierr); >>> >>> PetscViewer viewerf; >>> // Save the complex vectors to binary files >>> ierr = PetscViewerBinaryOpen(PETSC_COMM_WORLD, "output_f.dat", FILE_MODE_WRITE, &viewerf); CHKERRQ(ierr); >>> ierr = VecView(f, viewerf); CHKERRQ(ierr); >>> ierr = PetscViewerDestroy(&viewerf); CHKERRQ(ierr); >>> >>> // Create vectors to store the magnitudes >>> Vec f_magnitude; >>> ierr = VecDuplicate(f, &f_magnitude); CHKERRQ(ierr); >>> >>> // Get local portion of the vectors >>> const PetscScalar *f_array; >>> PetscScalar *f_magnitude_array; >>> PetscInt n_local; >>> >>> ierr = VecGetLocalSize(f, &n_local); CHKERRQ(ierr); >>> ierr = VecGetArrayRead(f, &f_array); CHKERRQ(ierr); >>> ierr = VecGetArray(f_magnitude, &f_magnitude_array); CHKERRQ(ierr); >>> >>> // Compute the magnitude for each element >>> for (int i = 0; i < n_local; i++) { >>> f_magnitude_array[i] = PetscAbsScalar(f_array[i]); >>> } >>> >>> // Restore arrays >>> ierr = VecRestoreArrayRead(f, &f_array); CHKERRQ(ierr); >>> ierr = VecRestoreArray(f_magnitude, &f_magnitude_array); CHKERRQ(ierr); >>> >>> // Assemble the magnitude vectors >>> ierr = VecAssemblyBegin(f_magnitude); CHKERRQ(ierr); >>> ierr = VecAssemblyEnd(f_magnitude); CHKERRQ(ierr); >>> >>> // Save the magnitude vectors to binary files >>> PetscViewer viewerfmag; >>> ierr = PetscViewerBinaryOpen(PETSC_COMM_WORLD, "output_f_mag.dat", FILE_MODE_WRITE, &viewerfmag); CHKERRQ(ierr); >>> ierr = VecView(f_magnitude, viewerfmag); CHKERRQ(ierr); >>> ierr = PetscViewerDestroy(&viewerfmag); CHKERRQ(ierr); >>> >>> In MATLAB, I am using petscBinaryRead to read the data. The complex vectors are read, but only the real part is available. The magnitude vectors are however read as alternating zero and non-zero elements. What went wrong? >>> How can I export the data correctly to MATLAB-accessible format? (I have not configured PETSc with Matlab as I was encountering library conflict issues) >> >> >> How are you reading complex-valued binary files in MATLAB? >> It is not the same as real-valued files, in particular, you must add the parameters 'complex', true to the PetscBinaryRead() call, type help PetscBinaryRead in the MATLAB console. >> >> Thanks, >> Pierre >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From sebastian.blauth at itwm.fraunhofer.de Tue Jul 2 04:46:37 2024 From: sebastian.blauth at itwm.fraunhofer.de (Blauth, Sebastian) Date: Tue, 2 Jul 2024 09:46:37 +0000 Subject: [petsc-users] Question regarding naming of fieldsplit splits In-Reply-To: References: Message-ID: Hi Matt, thanks fort he answer and clarification. Then I?ll work around this issue in python, where I set the options. Best, Sebastian -- Dr. Sebastian Blauth Fraunhofer-Institut f?r Techno- und Wirtschaftsmathematik ITWM Abteilung Transportvorg?nge Fraunhofer-Platz 1, 67663 Kaiserslautern Telefon: +49 631 31600-4968 sebastian.blauth at itwm.fraunhofer.de https://www.itwm.fraunhofer.de From: Matthew Knepley Sent: Monday, July 1, 2024 4:30 PM To: Blauth, Sebastian Cc: petsc-users at mcs.anl.gov Subject: Re: [petsc-users] Question regarding naming of fieldsplit splits On Mon, Jul 1, 2024 at 9:48?AM Blauth, Sebastian > wrote: Dear Matt, thanks a lot for your help. 
Unfortunately, for me these extra options do not have any effect, I still get the ?u? and ?p? fieldnames. Also, this would not help me to get rid of the ?c? fieldname ? on that level of the fieldsplit I am basically using your approach already, and still it does show up. The output of the -ksp_view is unchanged, so that I do not attach it here again. Maybe I misunderstood you? Oh, we make an exception for single fields, since we think you would want to use the name. I have to make an extra option to shut off naming. Thanks, Matt Thanks for the help and best regards, Sebastian -- Dr. Sebastian Blauth Fraunhofer-Institut f?r Techno- und Wirtschaftsmathematik ITWM Abteilung Transportvorg?nge Fraunhofer-Platz 1, 67663 Kaiserslautern Telefon: +49 631 31600-4968 sebastian.blauth at itwm.fraunhofer.de https://www.itwm.fraunhofer.de From: Matthew Knepley > Sent: Monday, July 1, 2024 2:27 PM To: Blauth, Sebastian > Cc: petsc-users at mcs.anl.gov Subject: Re: [petsc-users] Question regarding naming of fieldsplit splits On Fri, Jun 28, 2024 at 4:05?AM Blauth, Sebastian > wrote: Hello everyone, I have a question regarding the naming convention using PETSc?s PCFieldsplit. I have been following https://lists.mcs.anl.gov/pipermail/petsc-users/2019-January/037262.html to create a DMShell with FEniCS in order to customize PCFieldsplit for my application. I am using the following options, which work nicely for me: -ksp_type fgmres -pc_type fieldsplit -pc_fieldsplit_0_fields 0, 1 -pc_fieldsplit_1_fields 2 -pc_fieldsplit_type additive -fieldsplit_0_ksp_type fgmres -fieldsplit_0_pc_type fieldsplit -fieldsplit_0_pc_fieldsplit_type schur -fieldsplit_0_pc_fieldsplit_schur_fact_type full -fieldsplit_0_pc_fieldsplit_schur_precondition selfp -fieldsplit_0_fieldsplit_u_ksp_type preonly -fieldsplit_0_fieldsplit_u_pc_type lu -fieldsplit_0_fieldsplit_p_ksp_type cg -fieldsplit_0_fieldsplit_p_ksp_rtol 1e-14 -fieldsplit_0_fieldsplit_p_ksp_atol 1e-30 -fieldsplit_0_fieldsplit_p_pc_type icc -fieldsplit_0_ksp_rtol 1e-14 -fieldsplit_0_ksp_atol 1e-30 -fieldsplit_0_ksp_monitor_true_residual -fieldsplit_c_ksp_type preonly -fieldsplit_c_pc_type lu -ksp_view By default, we use the field names, but you can prevent this by specifying the fields by hand, so -fieldsplit_0_pc_fieldsplit_0_fields 0 -fieldsplit_0_pc_fieldsplit_1_fields 1 should remove the 'u' and 'p' fieldnames. It is somewhat hacky, but I think easier to remember than some extra option. Thanks, Matt Note that this is just an academic example (sorry for the low solver tolerances) to test the approach, consisting of a Stokes equation and some concentration equation (which is not even coupled to Stokes, just for testing). Completely analogous to https://lists.mcs.anl.gov/pipermail/petsc-users/2019-January/037262.html, I translate my IS?s to a PETSc Section, which is then supplied to a DMShell and assigned to a KSP. I am not so familiar with the code or how / why this works, but it seems to do so perfectly. I name my sections with petsc4py using section.setFieldName(0, "u") section.setFieldName(1, "p") section.setFieldName(2, "c") However, this is also reflected in the way I can access the fieldsplit options from the command line. My question is: Is there any way of not using the FieldNames specified in python but use the index of the field as defined with ?-pc_fieldsplit_0_fields 0, 1? and ?-pc_fieldsplit_1_fields 2?, i.e., instead of the prefix ?fieldsplit_0_fieldsplit_u? I want to write ?fieldsplit_0_fieldsplit_0?, instead of ?fieldsplit_0_fieldsplit_p? 
I want to use ?fieldsplit_0_fieldsplit_1?, and instead of ?fieldsplit_c? I want to use ?fieldsplit_1?. Just changing the names of the fields to section.setFieldName(0, "0") section.setFieldName(1, "1") section.setFieldName(2, "2") does obviously not work as expected, as it works for velocity and pressure, but not for the concentration ? the prefix there is then ?fieldsplit_2? and not ?fieldsplit_1?. In the docs, I have found https://petsc.org/main/manualpages/PC/PCFieldSplitSetFields/ which seems to suggest that the fieldname can potentially be supplied, but I don?t see how to do so from the command line. Also, for the sake of completeness, I attach the output of the solve with ?-ksp_view? below. Thanks a lot in advance and best regards, Sebastian The output of ksp_view is the following: KSP Object: 1 MPI processes type: fgmres restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement happy breakdown tolerance 1e-30 maximum iterations=10000, initial guess is zero tolerances: relative=1e-05, absolute=1e-11, divergence=10000. right preconditioning using UNPRECONDITIONED norm type for convergence test PC Object: 1 MPI processes type: fieldsplit FieldSplit with ADDITIVE composition: total splits = 2 Solver info for each split is in the following KSP objects: Split number 0 Defined by IS KSP Object: (fieldsplit_0_) 1 MPI processes type: fgmres restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement happy breakdown tolerance 1e-30 maximum iterations=10000, initial guess is zero tolerances: relative=1e-14, absolute=1e-30, divergence=10000. right preconditioning using UNPRECONDITIONED norm type for convergence test PC Object: (fieldsplit_0_) 1 MPI processes type: fieldsplit FieldSplit with Schur preconditioner, factorization FULL Preconditioner for the Schur complement formed from Sp, an assembled approximation to S, which uses A00's diagonal's inverse Split info: Split number 0 Defined by IS Split number 1 Defined by IS KSP solver for A00 block KSP Object: (fieldsplit_0_fieldsplit_u_) 1 MPI processes type: preonly maximum iterations=10000, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000. left preconditioning using NONE norm type for convergence test PC Object: (fieldsplit_0_fieldsplit_u_) 1 MPI processes type: lu out-of-place factorization tolerance for zero pivot 2.22045e-14 matrix ordering: nd factor fill ratio given 5., needed 3.92639 Factored matrix follows: Mat Object: 1 MPI processes type: seqaij rows=4290, cols=4290 package used to perform factorization: petsc total: nonzeros=375944, allocated nonzeros=375944 using I-node routines: found 2548 nodes, limit used is 5 linear system matrix = precond matrix: Mat Object: (fieldsplit_0_fieldsplit_u_) 1 MPI processes type: seqaij rows=4290, cols=4290 total: nonzeros=95748, allocated nonzeros=95748 total number of mallocs used during MatSetValues calls=0 using I-node routines: found 3287 nodes, limit used is 5 KSP solver for S = A11 - A10 inv(A00) A01 KSP Object: (fieldsplit_0_fieldsplit_p_) 1 MPI processes type: cg maximum iterations=10000, initial guess is zero tolerances: relative=1e-14, absolute=1e-30, divergence=10000. 
left preconditioning using PRECONDITIONED norm type for convergence test PC Object: (fieldsplit_0_fieldsplit_p_) 1 MPI processes type: icc out-of-place factorization 0 levels of fill tolerance for zero pivot 2.22045e-14 using Manteuffel shift [POSITIVE_DEFINITE] matrix ordering: natural factor fill ratio given 1., needed 1. Factored matrix follows: Mat Object: 1 MPI processes type: seqsbaij rows=561, cols=561 package used to perform factorization: petsc total: nonzeros=5120, allocated nonzeros=5120 block size is 1 linear system matrix followed by preconditioner matrix: Mat Object: (fieldsplit_0_fieldsplit_p_) 1 MPI processes type: schurcomplement rows=561, cols=561 Schur complement A11 - A10 inv(A00) A01 A11 Mat Object: (fieldsplit_0_fieldsplit_p_) 1 MPI processes type: seqaij rows=561, cols=561 total: nonzeros=3729, allocated nonzeros=3729 total number of mallocs used during MatSetValues calls=0 not using I-node routines A10 Mat Object: 1 MPI processes type: seqaij rows=561, cols=4290 total: nonzeros=19938, allocated nonzeros=19938 total number of mallocs used during MatSetValues calls=0 not using I-node routines KSP of A00 KSP Object: (fieldsplit_0_fieldsplit_u_) 1 MPI processes type: preonly maximum iterations=10000, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000. left preconditioning using NONE norm type for convergence test PC Object: (fieldsplit_0_fieldsplit_u_) 1 MPI processes type: lu out-of-place factorization tolerance for zero pivot 2.22045e-14 matrix ordering: nd factor fill ratio given 5., needed 3.92639 Factored matrix follows: Mat Object: 1 MPI processes type: seqaij rows=4290, cols=4290 package used to perform factorization: petsc total: nonzeros=375944, allocated nonzeros=375944 using I-node routines: found 2548 nodes, limit used is 5 linear system matrix = precond matrix: Mat Object: (fieldsplit_0_fieldsplit_u_) 1 MPI processes type: seqaij rows=4290, cols=4290 total: nonzeros=95748, allocated nonzeros=95748 total number of mallocs used during MatSetValues calls=0 using I-node routines: found 3287 nodes, limit used is 5 A01 Mat Object: 1 MPI processes type: seqaij rows=4290, cols=561 total: nonzeros=19938, allocated nonzeros=19938 total number of mallocs used during MatSetValues calls=0 using I-node routines: found 3287 nodes, limit used is 5 Mat Object: 1 MPI processes type: seqaij rows=561, cols=561 total: nonzeros=9679, allocated nonzeros=9679 total number of mallocs used during MatSetValues calls=0 not using I-node routines linear system matrix = precond matrix: Mat Object: (fieldsplit_0_) 1 MPI processes type: seqaij rows=4851, cols=4851 total: nonzeros=139353, allocated nonzeros=139353 total number of mallocs used during MatSetValues calls=0 using I-node routines: found 3830 nodes, limit used is 5 Split number 1 Defined by IS KSP Object: (fieldsplit_c_) 1 MPI processes type: preonly maximum iterations=10000, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000. 
left preconditioning using NONE norm type for convergence test PC Object: (fieldsplit_c_) 1 MPI processes type: lu out-of-place factorization tolerance for zero pivot 2.22045e-14 matrix ordering: nd factor fill ratio given 5., needed 4.24323 Factored matrix follows: Mat Object: 1 MPI processes type: seqaij rows=561, cols=561 package used to perform factorization: petsc total: nonzeros=15823, allocated nonzeros=15823 not using I-node routines linear system matrix = precond matrix: Mat Object: (fieldsplit_c_) 1 MPI processes type: seqaij rows=561, cols=561 total: nonzeros=3729, allocated nonzeros=3729 total number of mallocs used during MatSetValues calls=0 not using I-node routines linear system matrix = precond matrix: Mat Object: 1 MPI processes type: seqaij rows=5412, cols=5412 total: nonzeros=190416, allocated nonzeros=190416 total number of mallocs used during MatSetValues calls=0 using I-node routines: found 3833 nodes, limit used is 5 -- Dr. Sebastian Blauth Fraunhofer-Institut f?r Techno- und Wirtschaftsmathematik ITWM Abteilung Transportvorg?nge Fraunhofer-Platz 1, 67663 Kaiserslautern Telefon: +49 631 31600-4968 sebastian.blauth at itwm.fraunhofer.de https://www.itwm.fraunhofer.de -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 7943 bytes Desc: not available URL: From bsmith at petsc.dev Tue Jul 2 09:24:20 2024 From: bsmith at petsc.dev (Barry Smith) Date: Tue, 2 Jul 2024 10:24:20 -0400 Subject: [petsc-users] Issue with Exporting and Reading Complex Vectors and Magnitude Vectors in PETSc and MATLAB In-Reply-To: References: Message-ID: <7F2E07C9-D1E7-45D7-AE76-15A1FEB05B82@petsc.dev> arecomplex = false; tnargin = nargin; for l=1:nargin-2 if ischar(varargin{l}) && strcmpi(varargin{l},'indices') tnargin = min(l,tnargin-1); indices = varargin{l+1}; end if ischar(varargin{l}) && strcmpi(varargin{l},'precision') tnargin = min(l,tnargin-1); precision = varargin{l+1}; end if ischar(varargin{l}) && strcmpi(varargin{l},'cell') tnargin = min(l,tnargin-1); arecell = varargin{l+1}; end if ischar(varargin{l}) && strcmpi(varargin{l},'complex') <======== finds any argument that is 'complex' tnargin = min(l,tnargin-1); arecomplex = varargin{l+1}; <======== sets arecomplex to the next argument in the list end end If you are still having trouble you can use the Matlab debugger to step through this code to see why this check is not triggered. > On Jul 2, 2024, at 4:59?AM, maitri ksh wrote: > > This Message Is From an External Sender > This message came from outside your organization. > A variable 'arecomplex' which is by default set to false inside PetscBinaryRead() overrides 'complex', true in PetscBinaryRead(file, "complex", true) thus giving real valued output unless one manually changes arecomplex=true inside the function. 
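For reference, a minimal sketch of the intended read call with plain single quotes, assuming $PETSC_DIR/share/petsc/matlab is on the MATLAB path and that output_f.dat was written with VecView and a binary viewer as in the snippet above:

% returns the vector as a complex MATLAB array; no edit of PetscBinaryRead.m is needed
f = PetscBinaryRead('output_f.dat', 'complex', true);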
> > On Mon, Jul 1, 2024 at 5:40?PM Pierre Jolivet > wrote: >> >> >>> On 1 Jul 2024, at 4:37?PM, maitri ksh > wrote: >>> >>> This Message Is From an External Sender >>> This message came from outside your organization. >>> I need to export complex vectors data from PETSc and read them in MATLAB. However, I am encountering some issues with the data format and interpretation in MATLAB. >>> >>> code-snippet of the vector data export section: >>> // Assemble the vectors before exporting >>> ierr = VecAssemblyBegin(f); CHKERRQ(ierr); >>> ierr = VecAssemblyEnd(f); CHKERRQ(ierr); >>> >>> PetscViewer viewerf; >>> // Save the complex vectors to binary files >>> ierr = PetscViewerBinaryOpen(PETSC_COMM_WORLD, "output_f.dat", FILE_MODE_WRITE, &viewerf); CHKERRQ(ierr); >>> ierr = VecView(f, viewerf); CHKERRQ(ierr); >>> ierr = PetscViewerDestroy(&viewerf); CHKERRQ(ierr); >>> >>> // Create vectors to store the magnitudes >>> Vec f_magnitude; >>> ierr = VecDuplicate(f, &f_magnitude); CHKERRQ(ierr); >>> >>> // Get local portion of the vectors >>> const PetscScalar *f_array; >>> PetscScalar *f_magnitude_array; >>> PetscInt n_local; >>> >>> ierr = VecGetLocalSize(f, &n_local); CHKERRQ(ierr); >>> ierr = VecGetArrayRead(f, &f_array); CHKERRQ(ierr); >>> ierr = VecGetArray(f_magnitude, &f_magnitude_array); CHKERRQ(ierr); >>> >>> // Compute the magnitude for each element >>> for (int i = 0; i < n_local; i++) { >>> f_magnitude_array[i] = PetscAbsScalar(f_array[i]); >>> } >>> >>> // Restore arrays >>> ierr = VecRestoreArrayRead(f, &f_array); CHKERRQ(ierr); >>> ierr = VecRestoreArray(f_magnitude, &f_magnitude_array); CHKERRQ(ierr); >>> >>> // Assemble the magnitude vectors >>> ierr = VecAssemblyBegin(f_magnitude); CHKERRQ(ierr); >>> ierr = VecAssemblyEnd(f_magnitude); CHKERRQ(ierr); >>> >>> // Save the magnitude vectors to binary files >>> PetscViewer viewerfmag; >>> ierr = PetscViewerBinaryOpen(PETSC_COMM_WORLD, "output_f_mag.dat", FILE_MODE_WRITE, &viewerfmag); CHKERRQ(ierr); >>> ierr = VecView(f_magnitude, viewerfmag); CHKERRQ(ierr); >>> ierr = PetscViewerDestroy(&viewerfmag); CHKERRQ(ierr); >>> >>> In MATLAB, I am using petscBinaryRead to read the data. The complex vectors are read, but only the real part is available. The magnitude vectors are however read as alternating zero and non-zero elements. What went wrong? >>> How can I export the data correctly to MATLAB-accessible format? (I have not configured PETSc with Matlab as I was encountering library conflict issues) >> >> >> How are you reading complex-valued binary files in MATLAB? >> It is not the same as real-valued files, in particular, you must add the parameters 'complex', true to the PetscBinaryRead() call, type help PetscBinaryRead in the MATLAB console. >> >> Thanks, >> Pierre >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mmolinos at us.es Tue Jul 2 14:50:28 2024 From: mmolinos at us.es (MIGUEL MOLINOS PEREZ) Date: Tue, 2 Jul 2024 19:50:28 +0000 Subject: [petsc-users] Doubt about TSMonitorSolutionVTK In-Reply-To: References: <2067D58E-F041-429F-8ABE-B19DD9F733C2@petsc.dev> <5A92C3DB-471D-4F32-86AE-FE9B3DD9C4D9@us.es> Message-ID: <1240DED1-C2AF-4F0C-A28C-BDE04DA31940@us.es> Dear Matthew, thank you. That makes much more sense. Is the script you mention available for download? Thanks, Miguel On Jul 1, 2024, at 5:09?AM, Matthew Knepley wrote: On Mon, Jul 1, 2024 at 1:43?AM MIGUEL MOLINOS PEREZ > wrote: Dear Matthey, Sorry for the late response. Yes, I get output when I run the example mentioned by Barry. 
The output directory should not be an issue since with the exact same configuration works for hdf5 but not for vtk/vts/vtu. I?ve been doing some tests and now I think this issue might be related to the fact that the output vector was generated using a SWARM discretization. Is this possible? Yes, there is no VTK viewer for Swarm. We have been moving away from VTK format, which is bulky and not very expressive, into our own HDF5 and CGNS. When we use HDF5, we have a script to generate an XDMF file, telling Paraview how to view it. I agree that this is annoying. Currently, we are moving toward PyVista, which can read our HDF5 files directly (and also work directly with running PETSc), although this is not done yet. Thanks, Matt Best, Miguel On Jun 27, 2024, at 4:59?AM, Matthew Knepley > wrote: Do you get output when you run an example with that option? Is it possible that your current working directory is not what you expect? Maybe try putting in an absolute path. Thanks, Matt On Wed, Jun 26, 2024 at 5:30?PM MIGUEL MOLINOS PEREZ > wrote: This Message Is From an External Sender This message came from outside your organization. Sorry, I did not put in cc petsc-users at mcs.anl.gov my replay. Miguel On Jun 24, 2024, at 6:39?PM, MIGUEL MOLINOS PEREZ > wrote: Thank you Barry, This is exactly how I did it the first time. Miguel On Jun 24, 2024, at 6:37?PM, Barry Smith > wrote: See, for example, the bottom of src/ts/tutorials/ex26.c that uses -ts_monitor_solution_vtk 'foo-%03d.vts' On Jun 24, 2024, at 8:47?PM, MIGUEL MOLINOS PEREZ > wrote: This Message Is From an External Sender This message came from outside your organization. Dear all, I want to monitor the results at each iteration of TS using vtk format. To do so, I add the following lines to my Monitor function: char vts_File_Name[MAXC]; PetscCall(PetscSNPrintf(vts_File_Name, sizeof(vts_File_Name), "./xi-MgHx-hcp-cube-x5-x5-x5-TS-BE-%i.vtu", step)); PetscCall(TSMonitorSolutionVTK(ts, step, time, X, (void*)vts_File_Name)); My script compiles and executes without any sort of warning/error messages. However, no output files are produced at the end of the simulation. I?ve also tried the option ?-ts_monitor_solution_vtk ?, but I got no results as well. I can?t find any similar example on the petsc website and I don?t see what I am doing wrong. Could somebody point me to the right direction? Thanks, Miguel -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!ZBJTLkBwKCzZpoUtnOqnP5XE-CuqlbazIyAHhsZrmtHitb0jVKuF8W7dj3vnvgNuLUUejAR9rOTnkxcLH8giXA$ -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!ZBJTLkBwKCzZpoUtnOqnP5XE-CuqlbazIyAHhsZrmtHitb0jVKuF8W7dj3vnvgNuLUUejAR9rOTnkxcLH8giXA$ -------------- next part -------------- An HTML attachment was scrubbed... 
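As a rough sketch of the HDF5 route described above, assuming PETSc was configured with HDF5 support; the file name pattern and variable names are illustrative, with ts, step and X taken from the same Monitor signature as in the original snippet:

// needs #include <petscviewerhdf5.h> in addition to the TS headers
// inside the TS Monitor: write the solution to HDF5 instead of VTK
PetscViewer viewer;
char h5_File_Name[PETSC_MAX_PATH_LEN];
PetscCall(PetscSNPrintf(h5_File_Name, sizeof(h5_File_Name), "./xi-step-%04d.h5", (int)step));
PetscCall(PetscViewerHDF5Open(PetscObjectComm((PetscObject)ts), h5_File_Name, FILE_MODE_WRITE, &viewer));
PetscCall(VecView(X, viewer));
PetscCall(PetscViewerDestroy(&viewer));

The resulting .h5 file can then be run through the XDMF-generating script mentioned in the next message so that Paraview can open it.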
URL: From knepley at gmail.com Tue Jul 2 14:57:11 2024 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 2 Jul 2024 15:57:11 -0400 Subject: [petsc-users] Doubt about TSMonitorSolutionVTK In-Reply-To: <1240DED1-C2AF-4F0C-A28C-BDE04DA31940@us.es> References: <2067D58E-F041-429F-8ABE-B19DD9F733C2@petsc.dev> <5A92C3DB-471D-4F32-86AE-FE9B3DD9C4D9@us.es> <1240DED1-C2AF-4F0C-A28C-BDE04DA31940@us.es> Message-ID: On Tue, Jul 2, 2024 at 3:50?PM MIGUEL MOLINOS PEREZ wrote: > Dear Matthew, thank you. That makes much more sense. Is the script you > mention available for download? > $PETSC_DIR/lib/petsc/bin/petsc-_gen_xdmf.py Thanks, Matt > Thanks, > Miguel > > On Jul 1, 2024, at 5:09?AM, Matthew Knepley wrote: > > On Mon, Jul 1, 2024 at 1:43?AM MIGUEL MOLINOS PEREZ > wrote: > >> Dear Matthey, >> >> Sorry for the late response. >> >> Yes, I get output when I run the example mentioned by Barry. >> >> The output directory should not be an issue since with the exact same >> configuration works for hdf5 but not for vtk/vts/vtu. >> >> I?ve been doing some tests and now I think this issue might be related to >> the fact that the output vector was generated using a SWARM discretization. >> Is this possible? >> > > Yes, there is no VTK viewer for Swarm. We have been moving away from VTK > format, which is bulky and not very expressive, into our own HDF5 and CGNS. > When we use HDF5, we have a script to generate an XDMF file, telling > Paraview how to view it. I agree that this is annoying. Currently, we are > moving toward PyVista, which can read our HDF5 files directly (and also > work directly with running PETSc), although this is not done yet. > > Thanks, > > Matt > > >> Best, >> Miguel >> >> On Jun 27, 2024, at 4:59?AM, Matthew Knepley wrote: >> >> Do you get output when you run an example with that option? Is it >> possible that your current working directory is not what you expect? Maybe >> try putting in an absolute path. >> >> Thanks, >> >> Matt >> >> On Wed, Jun 26, 2024 at 5:30?PM MIGUEL MOLINOS PEREZ >> wrote: >> >>> This Message Is From an External Sender >>> This message came from outside your organization. >>> >>> Sorry, I did not put in cc petsc-users at mcs.anl.gov my replay. >>> >>> Miguel >>> >>> On Jun 24, 2024, at 6:39?PM, MIGUEL MOLINOS PEREZ >>> wrote: >>> >>> Thank you Barry, >>> >>> This is exactly how I did it the first time. >>> >>> Miguel >>> >>> On Jun 24, 2024, at 6:37?PM, Barry Smith wrote: >>> >>> >>> See, for example, the bottom of src/ts/tutorials/ex26.c that uses >>> -ts_monitor_*solution_vtk* 'foo-%03d.vts' >>> >>> >>> On Jun 24, 2024, at 8:47?PM, MIGUEL MOLINOS PEREZ >>> wrote: >>> >>> This Message Is From an External Sender >>> This message came from outside your organization. >>> Dear all, >>> >>> I want to monitor the results at each iteration of TS using vtk format. >>> To do so, I add the following lines to my Monitor function: >>> >>> char vts_File_Name[MAXC]; >>> PetscCall(PetscSNPrintf(vts_File_Name, sizeof(vts_File_Name), >>> "./xi-MgHx-hcp-cube-x5-x5-x5-TS-BE-%i.vtu", step)); >>> PetscCall(TSMonitorSolutionVTK(ts, step, time, X, (void*)vts_File_Name >>> )); >>> >>> My script compiles and executes without any sort of warning/error >>> messages. However, no output files are produced at the end of the >>> simulation. I?ve also tried the option ?-ts_monitor_solution_vtk >>> ?, but I got no results as well. >>> >>> I can?t find any similar example on the petsc website and I don?t see >>> what I am doing wrong. Could somebody point me to the right direction? 
>>> >>> Thanks, >>> Miguel >>> >>> >>> >>> >>> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> >> https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!c6MlkyvZFc8zu-D_Chh2KKned2NifwM1VkXSb9uEq_whB9rDSDhCEZNpbHt3eGv_MQCI6aR4dEHgyZlOgYlv$ >> >> >> >> > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!c6MlkyvZFc8zu-D_Chh2KKned2NifwM1VkXSb9uEq_whB9rDSDhCEZNpbHt3eGv_MQCI6aR4dEHgyZlOgYlv$ > > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!c6MlkyvZFc8zu-D_Chh2KKned2NifwM1VkXSb9uEq_whB9rDSDhCEZNpbHt3eGv_MQCI6aR4dEHgyZlOgYlv$ -------------- next part -------------- An HTML attachment was scrubbed... URL: From dr_hwang2022 at outlook.com Thu Jul 4 20:43:32 2024 From: dr_hwang2022 at outlook.com (dr hwang) Date: Fri, 5 Jul 2024 01:43:32 +0000 Subject: [petsc-users] make check error using Intel MKL Message-ID: Dear support team, I want to install the petsc-3.18.5 in my ubuntu20.04 with compiler "Intel parallel studio 2019", but I met some error when I execute the "make check". Below is my steps and relevant errors. Firstly, in my ~/.bashrc, I have exported the PETSC_DIR=/home/hwang/archive/petsc-3.18.5 PETSC_ARCH=linux-gnu-intel and make it source. (1) tar zxvf petsc-3.18.5.tar.gz (2) ./configure PETSC_ARCH=linux-gnu-intel --prefix=/home/hwang/software/petsc-3.18.5 \ --with-cc=mpiicc --with-cxx=mpiicpc --with-fc=mpiifort --with-blaslapack-dir=${MKLROOT}/lib/intel64 --with-clean (3) make PETSC_DIR=/home/hwang/archive/petsc-3.18.5 PETSC_ARCH=linux-gnu-intel all (4) make PETSC_DIR=/home/hwang/archive/petsc-3.18.5 PETSC_ARCH=linux-gnu-intel install (5) make PETSC_DIR=/home/hwang/software/petsc-3.18.5 PETSC_ARCH="" check the step (5) finally threw the error like below: "ld: /home/hwang/software/petsc-3.19.6/lib/libpetsc.so: undefined reference to `__builtin_is_constant_evaluated' make[4]: *** [/home/hwang/software/petsc-3.19.6/lib/petsc/conf/rules:216: ex5f] Error 1 Possible error running Fortran example src/snes/tutorials/ex5f with 1 MPI process See https://urldefense.us/v3/__https://petsc.org/release/faq/__;!!G_uCfscf7eWS!eqXOT36R1yIYfqMp-oF8yWf52fuq7J3L7CbkswjYcLeO7fQEIVvCYwGmvixTX1I4JJlq_XDHZ46Smrc0UZff2-GXD8YTHw$ [proxy:0:0 at DESKTOP-AM9CLNS] HYD_spawn (../../../../../src/pm/i_hydra/libhydra/spawn/intel/hydra_spawn.c:117): execvp error on file ./ex5f (No such file or directory) ld: /home/hwang/software/petsc-3.19.6/lib/libpetsc.so: undefined reference to `__builtin_is_constant_evaluated' make[4]: *** [: ex19] Error 1 Possible error running C/C++ src/snes/tutorials/ex19 with 1 MPI process See https://urldefense.us/v3/__https://petsc.org/release/faq/__;!!G_uCfscf7eWS!eqXOT36R1yIYfqMp-oF8yWf52fuq7J3L7CbkswjYcLeO7fQEIVvCYwGmvixTX1I4JJlq_XDHZ46Smrc0UZff2-GXD8YTHw$ " How can I solved it? Could you please help me. Best regards, -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From balay.anl at fastmail.org Thu Jul 4 23:18:30 2024 From: balay.anl at fastmail.org (Satish Balay) Date: Thu, 4 Jul 2024 23:18:30 -0500 (CDT) Subject: Re: [petsc-users] make check error using Intel MKL In-Reply-To: References: Message-ID: <720bf0fa-3750-1377-af69-559903de8e73@fastmail.org> An HTML attachment was scrubbed... URL: From balay.anl at fastmail.org Thu Jul 4 23:31:36 2024 From: balay.anl at fastmail.org (Satish Balay) Date: Thu, 4 Jul 2024 23:31:36 -0500 (CDT) Subject: Re: [petsc-users] Problem about compiling PETSc-3.21.2 under Cygwin In-Reply-To: <854C9B5E-1FF5-40B9-B45C-A61EACA2EE94@gmail.com> References: <8E45A797-EC22-41B4-9222-5389EEAFCB64@gmail.com> <73B3587D-BE73-4DE3-8E89-6F395FC3F849@petsc.dev> <21e32b88-aed2-a618-3e3c-dca47c6bc456@fastmail.org> <5627D31E-5225-47CA-B337-A08E74C29D4A@gmail.com> <552dde2a-782a-5238-4897-18736ac9e94a@fastmail.org> <7620557F-4CB0-4E6A-91AF-B3C47DC1BCDD@gmail.com> <365c3d40-0f77-1158-1759-bb4c4e2b1dda@fastmail.org> <9d7974dd-22ba-49e8-d96d-d69cba5653bd@fastmail.org> <854C9B5E-1FF5-40B9-B45C-A61EACA2EE94@gmail.com> Message-ID: <04467439-dfcf-c312-2d2c-d2fe9e1ad2b1@fastmail.org> An HTML attachment was scrubbed...
URL: From dr_hwang2022 at outlook.com Fri Jul 5 01:10:02 2024 From: dr_hwang2022 at outlook.com (dr hwang) Date: Fri, 5 Jul 2024 06:10:02 +0000 Subject: [petsc-users] make check error using Intel MKL In-Reply-To: <720bf0fa-3750-1377-af69-559903de8e73@fastmail.org> References: <720bf0fa-3750-1377-af69-559903de8e73@fastmail.org> Message-ID: thanks for your answer, i am using gcc-9 in my ubuntu20.04?if gcc9 compatible with intel parallel studio 2019? ??Outlook for Android ________________________________ From: Satish Balay Sent: Friday, July 5, 2024 12:18:30 PM To: dr hwang Cc: petsc-users at mcs.anl.gov Subject: Re: [petsc-users] make check error using Intel MKL Intel compilers require a compatible gcc/g++ compiler. gcc-11 on ubuntu20.04 might be too new for "Intel parallel studio 2019" So you'll need an older gcc - perhaps gcc-7 or gcc-8 for this version of Intel compiler. Alternative: if you do not need to use petsc from c++ - you can try the following and see if it works: --with-cxx=0 Satish On Fri, 5 Jul 2024, dr hwang wrote: > Dear support team, > > I want to install the petsc-3.18.5 in my ubuntu20.04 with compiler "Intel parallel studio 2019", but I met some error when I execute the "make check". Below is my steps and relevant errors. > Firstly, in my ~/.bashrc, I have exported the PETSC_DIR=/home/hwang/archive/petsc-3.18.5 PETSC_ARCH=linux-gnu-intel and make it source. > > (1) tar zxvf petsc-3.18.5.tar.gz > > > (2) ./configure PETSC_ARCH=linux-gnu-intel --prefix=/home/hwang/software/petsc-3.18.5 \ > > --with-cc=mpiicc --with-cxx=mpiicpc --with-fc=mpiifort --with-blaslapack-dir=${MKLROOT}/lib/intel64 --with-clean > > > (3) make PETSC_DIR=/home/hwang/archive/petsc-3.18.5 PETSC_ARCH=linux-gnu-intel all > > > (4) make PETSC_DIR=/home/hwang/archive/petsc-3.18.5 PETSC_ARCH=linux-gnu-intel install > > > > (5) make PETSC_DIR=/home/hwang/software/petsc-3.18.5 PETSC_ARCH="" check > > > the step (5) finally threw the error like below: > "ld: /home/hwang/software/petsc-3.19.6/lib/libpetsc.so: undefined reference to `__builtin_is_constant_evaluated' > make[4]: *** [/home/hwang/software/petsc-3.19.6/lib/petsc/conf/rules:216: ex5f] Error 1 > Possible error running Fortran example src/snes/tutorials/ex5f with 1 MPI process > See https://urldefense.us/v3/__https://petsc.org/release/faq/__;!!G_uCfscf7eWS!eqXOT36R1yIYfqMp-oF8yWf52fuq7J3L7CbkswjYcLeO7fQEIVvCYwGmvixTX1I4JJlq_XDHZ46Smrc0UZff2-GXD8YTHw$ > [proxy:0:0 at DESKTOP-AM9CLNS] HYD_spawn (../../../../../src/pm/i_hydra/libhydra/spawn/intel/hydra_spawn.c:117): execvp error on file ./ex5f (No such file or directory) > > > ld: /home/hwang/software/petsc-3.19.6/lib/libpetsc.so: undefined reference to `__builtin_is_constant_evaluated' > make[4]: *** [: ex19] Error 1 > Possible error running C/C++ src/snes/tutorials/ex19 with 1 MPI process > See https://urldefense.us/v3/__https://petsc.org/release/faq/__;!!G_uCfscf7eWS!eqXOT36R1yIYfqMp-oF8yWf52fuq7J3L7CbkswjYcLeO7fQEIVvCYwGmvixTX1I4JJlq_XDHZ46Smrc0UZff2-GXD8YTHw$ " > > > How can I solved it? Could you please help me. > > Best regards, > -------------- next part -------------- An HTML attachment was scrubbed... 
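A minimal sketch of the --with-cxx=0 route from Satish's reply, adapted from the configure line quoted in this thread and assuming the C++ interface of PETSc is not needed:

$ ./configure PETSC_ARCH=linux-gnu-intel --prefix=/home/hwang/software/petsc-3.18.5 \
    --with-cc=mpiicc --with-cxx=0 --with-fc=mpiifort \
    --with-blaslapack-dir=${MKLROOT}/lib/intel64

If C++ is needed, the other route is to install an older gcc/g++ (for example gcc-7 or gcc-8, as suggested above) and make sure the Intel 2019 compilers pick that toolchain up instead of gcc-9.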
URL: From ligang0309 at gmail.com Fri Jul 5 02:24:59 2024 From: ligang0309 at gmail.com (Gang Li) Date: Fri, 5 Jul 2024 15:24:59 +0800 Subject: [petsc-users] =?utf-8?q?Problem_about_compiling_PETSc-3=2E21=2E2?= =?utf-8?q?_under_Cygwin?= In-Reply-To: <04467439-dfcf-c312-2d2c-d2fe9e1ad2b1@fastmail.org> References: <8E45A797-EC22-41B4-9222-5389EEAFCB64@gmail.com> <73B3587D-BE73-4DE3-8E89-6F395FC3F849@petsc.dev> <21e32b88-aed2-a618-3e3c-dca47c6bc456@fastmail.org> <5627D31E-5225-47CA-B337-A08E74C29D4A@gmail.com> <552dde2a-782a-5238-4897-18736ac9e94a@fastmail.org> <7620557F-4CB0-4E6A-91AF-B3C47DC1BCDD@gmail.com> <365c3d40-0f77-1158-1759-bb4c4e2b1dda@fastmail.org> <9d7974dd-22ba-49e8-d96d-d69cba5653bd@fastmail.org> <854C9B5E-1FF5-40B9-B45C-A61EACA2EE94@gmail.com> <04467439-dfcf-c312-2d2c-d2fe9e1ad2b1@fastmail.org> Message-ID: <8DF7DF99-2882-4C59-B6FC-AD753016A1E3@gmail.com> Hi Satish, Thanks. Problem fixed. Gang ---- Replied Message ---- FromSatish BalayDate7/5/2024 12:31ToGang LiCcpetsc-usersSubjectRe: [petsc-users] Problem about compiling PETSc-3.21.2 under Cygwin $ ./configure --with-cc=win32fe_icl --with-fc=win32fe_ifort --with-cxx=win32fe_icl \ --with-precision=double --with-scalar-type=complex \ --with-shared-libraries=0 \ --with-mpi=0 \ --with-blaslapack-lib='-L/cygdrive/c/PROGRA~2/INTELS~1/COMPIL~2/windows/mkl/lib/intel64 mkl_intel_lp64_dll.lib mkl_sequential_dll.lib mkl_core_dll.lib' C Compiler: /cygdrive/c/Users/gli/Desktop/PETSc/petsc-3.21.3/lib/petsc/bin/win32fe/win32fe_icl -Qstd=c99 -MT -Z7 -Od Version: Win32 Development Tool Front End, version 1.11.4 Fri, Sep 10, 2021 6:33:40 PM\nIntel(R) C++ Intel(R) 64 Compiler for applications running on Intel(R) 64, Version 17.0.8.275 Build 20180907 object_pool.cxx C:\Users\gli\Desktop\PETSc\PETSC-~1.3\src\sys\objects\cxx\object_pool.cxx(330): error: no instance of function template "Petsc::util::construct_at" matches the argument list Its likely the current petsc/c++ code is incompatible with this version of Intel C++ compiler [and you need c++ for this complex build - otherwise you could disable c++ with --with-cxx=0] The alternative is to use a newer version of MS Compilers - or OneAPI compilers. 
Satish On Sun, 30 Jun 2024, Gang Li wrote: Hi Satish, I met another issue when make the lib: gli at WROKSTATION-OFFICE308 /cygdrive/c/Users/gli/Desktop/PETSc $ tar -xzf petsc-3.21.3.tar.gz gli at WROKSTATION-OFFICE308 /cygdrive/c/Users/gli/Desktop/PETSc $ cd petsc-3.21.3 gli at WROKSTATION-OFFICE308 /cygdrive/c/Users/gli/Desktop/PETSc/petsc-3.21.3 $ cd petsc-3.21.3 -bash: cd: petsc-3.21.3: No such file or directory gli at WROKSTATION-OFFICE308 /cygdrive/c/Users/gli/Desktop/PETSc/petsc-3.21.3 $ cygpath -u `cygpath -ms '/cygdrive/c/Program Files (x86)/IntelSWTools/compilers_and_libraries/windows/mkl/lib/intel64'` /cygdrive/c/PROGRA~2/INTELS~1/COMPIL~2/windows/mkl/lib/intel64 gli at WROKSTATION-OFFICE308 /cygdrive/c/Users/gli/Desktop/PETSc/petsc-3.21.3 $ ./configure --with-cc=win32fe_icl --with-fc=win32fe_ifort --with-cxx=win32fe_icl \ --with-precision=double --with-scalar-type=complex \ --with-shared-libraries=0 \ --with-mpi=0 \ --with-blaslapack-lib='-L/cygdrive/c/PROGRA~2/INTELS~1/COMPIL~2/windows/mkl/lib/intel64 mkl_intel_lp64_dll.lib mkl_sequential_dll.lib mkl_core_dll.lib' ================================================================================ Configuring PETSc to compile on your system ================================================================================ Compilers: C Compiler: /cygdrive/c/Users/gli/Desktop/PETSc/petsc-3.21.3/lib/petsc/bin/win32fe/win32fe_icl -Qstd=c99 -MT -Z7 -Od Version: Win32 Development Tool Front End, version 1.11.4 Fri, Sep 10, 2021 6:33:40 PM\nIntel(R) C++ Intel(R) 64 Compiler for applications running on Intel(R) 64, Version 17.0.8.275 Build 20180907 C++ Compiler: /cygdrive/c/Users/gli/Desktop/PETSc/petsc-3.21.3/lib/petsc/bin/win32fe/win32fe_icl -MT -GR -EHsc -Z7 -Od -Qstd=c++14 -TP Version: Win32 Development Tool Front End, version 1.11.4 Fri, Sep 10, 2021 6:33:40 PM\nIntel(R) C++ Intel(R) 64 Compiler for applications running on Intel(R) 64, Version 17.0.8.275 Build 20180907 Fortran Compiler: /cygdrive/c/Users/gli/Desktop/PETSc/petsc-3.21.3/lib/petsc/bin/win32fe/win32fe_ifort -MT -Z7 -Od -fpp Version: Win32 Development Tool Front End, version 1.11.4 Fri, Sep 10, 2021 6:33:40 PM\nIntel(R) Visual Fortran Intel(R) 64 Compiler for applications running on Intel(R) 64, Version 17.0.8.275 Build 20180907 Linkers: Static linker: /cygdrive/c/Users/gli/Desktop/PETSc/petsc-3.21.3/lib/petsc/bin/win32fe/win32fe_lib -a BlasLapack: Intel MKL Version: 20170004 Libraries: -L/cygdrive/c/PROGRA~2/INTELS~1/COMPIL~2/windows/mkl/lib/intel64 mkl_intel_lp64_dll.lib mkl_sequential_dll.lib mkl_core_dll.lib Unknown if this uses OpenMP (try export OMP_NUM_THREADS=<1-4> yourprogram -log_view) uses 4 byte integers MPI: Version: PETSc MPIUNI uniprocessor MPI replacement mpiexec: ${PETSC_DIR}/lib/petsc/bin/petsc-mpiexec.uni python: Executable: /usr/bin/python3 mkl_sparse: Unknown if this uses OpenMP (try export OMP_NUM_THREADS=<1-4> yourprogram -log_view) mkl_sparse_optimize: Unknown if this uses OpenMP (try export OMP_NUM_THREADS=<1-4> yourprogram -log_view) PETSc: Language used to compile PETSc: C PETSC_ARCH: arch-mswin-c-debug PETSC_DIR: /cygdrive/c/Users/gli/Desktop/PETSc/petsc-3.21.3 Prefix: Scalar type: complex Precision: double Integer size: 4 bytes Single library: yes Shared libraries: no Memory alignment from malloc(): 16 bytes Using GNU make: /usr/bin/make xxx=======================================================================================xxx Configure stage complete. 
Now build PETSc libraries with: make PETSC_DIR=/cygdrive/c/Users/gli/Desktop/PETSc/petsc-3.21.3 PETSC_ARCH=arch-mswin-c-debug all xxx=======================================================================================xxx gli at WROKSTATION-OFFICE308 /cygdrive/c/Users/gli/Desktop/PETSc/petsc-3.21.3 $ make make[2]: Entering directory '/cygdrive/c/Users/gli/Desktop/PETSc/petsc-3.21.3' ========================================== See documentation/faq.html and documentation/bugreporting.html for help with installation problems. Please send EVERYTHING printed out below when reporting problems. Please check the mailing list archives and consider subscribing. https://urldefense.us/v3/__https://petsc.org/release/community/mailing/__;!!G_uCfscf7eWS!drRoCJiI5IcVVrrjYlGWO1leUL5hjFHVfTGJtV0Smxkw6N7wTSeO5I3sGNYcF_DVCZjpoTfUtIbHzDqPEqUK3Mfi$ ========================================== Starting make run on WROKSTATION-OFFICE308 at Sun, 30 Jun 2024 13:11:53 +0800 Machine characteristics: CYGWIN_NT-10.0-19045 WROKSTATION-OFFICE308 3.5.3-1.x86_64 2024-04-03 17:25 UTC x86_64 Cygwin ----------------------------------------- Using PETSc directory: /cygdrive/c/Users/gli/Desktop/PETSc/petsc-3.21.3 Using PETSc arch: arch-mswin-c-debug ----------------------------------------- PETSC_VERSION_RELEASE 1 PETSC_VERSION_MAJOR 3 PETSC_VERSION_MINOR 21 PETSC_VERSION_SUBMINOR 3 PETSC_VERSION_DATE "Jun 28, 2024" PETSC_VERSION_GIT "v3.21.3" PETSC_VERSION_DATE_GIT "2024-06-28 11:53:00 -0500" ----------------------------------------- Using configure Options: --with-cc=win32fe_icl --with-fc=win32fe_ifort --with-cxx=win32fe_icl --with-precision=double --with-scalar-type=complex --with-shared-libraries=0 --with-mpi=0 --with-blaslapack-lib="-L/cygdrive/c/PROGRA~2/INTELS~1/COMPIL~2/windows/mkl/lib/intel64 mkl_intel_lp64_dll.lib mkl_sequential_dll.lib mkl_core_dll.lib" Using configuration flags: #define MPI_Comm_create_errhandler(p_err_fun,p_errhandler) MPI_Errhandler_create((p_err_fun),(p_errhandler)) #define MPI_Comm_set_errhandler(comm,p_errhandler) MPI_Errhandler_set((comm),(p_errhandler)) #define MPI_Type_create_struct(count,lens,displs,types,newtype) MPI_Type_struct((count),(lens),(displs),(types),(newtype)) #define PETSC_ARCH "arch-mswin-c-debug" #define PETSC_ATTRIBUTEALIGNED(size) #define PETSC_BLASLAPACK_CAPS 1 #define PETSC_CANNOT_START_DEBUGGER 1 #define PETSC_CLANGUAGE_C 1 #define PETSC_CXX_RESTRICT __restrict #define PETSC_DEPRECATED_ENUM_BASE(string_literal_why) #define PETSC_DEPRECATED_FUNCTION_BASE(string_literal_why) __declspec(deprecated(string_literal_why)) #define PETSC_DEPRECATED_MACRO_BASE(string_literal_why) PETSC_DEPRECATED_MACRO_BASE_(GCC warning string_literal_why) #define PETSC_DEPRECATED_MACRO_BASE_(why) _Pragma(#why) #define PETSC_DEPRECATED_OBJECT_BASE(string_literal_why) __declspec(deprecated(string_literal_why)) #define PETSC_DEPRECATED_TYPEDEF_BASE(string_literal_why) #define PETSC_DIR "C:\\Users\\gli\\Desktop\\PETSc\\petsc-3.21.3" #define PETSC_DIR_SEPARATOR '\\' #define PETSC_FORTRAN_CHARLEN_T int #define PETSC_FORTRAN_TYPE_INITIALIZE = -2 #define PETSC_FUNCTION_NAME_C __func__ #define PETSC_FUNCTION_NAME_CXX __func__ #define PETSC_HAVE_ACCESS 1 #define PETSC_HAVE_ATOLL 1 #define PETSC_HAVE_BUILTIN_EXPECT 1 #define PETSC_HAVE_C99_COMPLEX 1 #define PETSC_HAVE_CLOCK 1 #define PETSC_HAVE_CLOSESOCKET 1 #define PETSC_HAVE_CXX 1 #define PETSC_HAVE_CXX_ATOMIC 1 #define PETSC_HAVE_CXX_COMPLEX 1 #define PETSC_HAVE_CXX_COMPLEX_FIX 1 #define PETSC_HAVE_CXX_DIALECT_CXX11 1 #define PETSC_HAVE_CXX_DIALECT_CXX14 
1 #define PETSC_HAVE_DIRECT_H 1 #define PETSC_HAVE_DOS_H 1 #define PETSC_HAVE_DOUBLE_ALIGN_MALLOC 1 #define PETSC_HAVE_ERF 1 #define PETSC_HAVE_FCNTL_H 1 #define PETSC_HAVE_FENV_H 1 #define PETSC_HAVE_FE_VALUES 1 #define PETSC_HAVE_FLOAT_H 1 #define PETSC_HAVE_FORTRAN_CAPS 1 #define PETSC_HAVE_FORTRAN_FLUSH 1 #define PETSC_HAVE_FORTRAN_FREE_LINE_LENGTH_NONE 1 #define PETSC_HAVE_FORTRAN_TYPE_STAR 1 #define PETSC_HAVE_FREELIBRARY 1 #define PETSC_HAVE_GETCOMPUTERNAME 1 #define PETSC_HAVE_GETCWD 1 #define PETSC_HAVE_GETLASTERROR 1 #define PETSC_HAVE_GETPROCADDRESS 1 #define PETSC_HAVE_GET_USER_NAME 1 #define PETSC_HAVE_IMMINTRIN_H 1 #define PETSC_HAVE_INTTYPES_H 1 #define PETSC_HAVE_IO_H 1 #define PETSC_HAVE_ISINF 1 #define PETSC_HAVE_ISNAN 1 #define PETSC_HAVE_ISNORMAL 1 #define PETSC_HAVE_LARGE_INTEGER_U 1 #define PETSC_HAVE_LGAMMA 1 #define PETSC_HAVE_LOADLIBRARY 1 #define PETSC_HAVE_LOG2 1 #define PETSC_HAVE_LSEEK 1 #define PETSC_HAVE_MALLOC_H 1 #define PETSC_HAVE_MEMMOVE 1 #define PETSC_HAVE_MKL_LIBS 1 #define PETSC_HAVE_MKL_SPARSE 1 #define PETSC_HAVE_MKL_SPARSE_OPTIMIZE 1 #define PETSC_HAVE_MPIUNI 1 #define PETSC_HAVE_O_BINARY 1 #define PETSC_HAVE_PACKAGES ":blaslapack:mathlib:mkl_sparse:mkl_sparse_optimize:mpi:" #define PETSC_HAVE_RAND 1 #define PETSC_HAVE_SETJMP_H 1 #define PETSC_HAVE_SETLASTERROR 1 #define PETSC_HAVE_STDATOMIC_H 1 #define PETSC_HAVE_STDINT_H 1 #define PETSC_HAVE_STRICMP 1 #define PETSC_HAVE_SYS_TYPES_H 1 #define PETSC_HAVE_TAU_PERFSTUBS 1 #define PETSC_HAVE_TGAMMA 1 #define PETSC_HAVE_TIME 1 #define PETSC_HAVE_TIME_H 1 #define PETSC_HAVE_TMPNAM_S 1 #define PETSC_HAVE_VA_COPY 1 #define PETSC_HAVE_VSNPRINTF 1 #define PETSC_HAVE_WINDOWSX_H 1 #define PETSC_HAVE_WINDOWS_COMPILERS 1 #define PETSC_HAVE_WINDOWS_H 1 #define PETSC_HAVE_WINSOCK2_H 1 #define PETSC_HAVE_WS2TCPIP_H 1 #define PETSC_HAVE_WSAGETLASTERROR 1 #define PETSC_HAVE_XMMINTRIN_H 1 #define PETSC_HAVE__ACCESS 1 #define PETSC_HAVE__GETCWD 1 #define PETSC_HAVE__LSEEK 1 #define PETSC_HAVE__MKDIR 1 #define PETSC_HAVE__SLEEP 1 #define PETSC_HAVE___INT64 1 #define PETSC_INTPTR_T intptr_t #define PETSC_INTPTR_T_FMT "#" PRIxPTR #define PETSC_IS_COLORING_MAX USHRT_MAX #define PETSC_IS_COLORING_VALUE_TYPE short #define PETSC_IS_COLORING_VALUE_TYPE_F integer2 #define PETSC_LEVEL1_DCACHE_LINESIZE 32 #define PETSC_LIB_DIR "/cygdrive/c/Users/gli/Desktop/PETSc/petsc-3.21.3/arch-mswin-c-debug/lib" #define PETSC_MAX_PATH_LEN 4096 #define PETSC_MEMALIGN 16 #define PETSC_MISSING_GETLINE 1 #define PETSC_MISSING_SIGALRM 1 #define PETSC_MISSING_SIGBUS 1 #define PETSC_MISSING_SIGCHLD 1 #define PETSC_MISSING_SIGCONT 1 #define PETSC_MISSING_SIGHUP 1 #define PETSC_MISSING_SIGKILL 1 #define PETSC_MISSING_SIGPIPE 1 #define PETSC_MISSING_SIGQUIT 1 #define PETSC_MISSING_SIGSTOP 1 #define PETSC_MISSING_SIGSYS 1 #define PETSC_MISSING_SIGTRAP 1 #define PETSC_MISSING_SIGTSTP 1 #define PETSC_MISSING_SIGURG 1 #define PETSC_MISSING_SIGUSR1 1 #define PETSC_MISSING_SIGUSR2 1 #define PETSC_MPICC_SHOW "Unavailable" #define PETSC_MPIU_IS_COLORING_VALUE_TYPE MPI_UNSIGNED_SHORT #define PETSC_NEEDS_UTYPE_TYPEDEFS 1 #define PETSC_OMAKE "/usr/bin/make --no-print-directory" #define PETSC_PREFETCH_HINT_NTA _MM_HINT_NTA #define PETSC_PREFETCH_HINT_T0 _MM_HINT_T0 #define PETSC_PREFETCH_HINT_T1 _MM_HINT_T1 #define PETSC_PREFETCH_HINT_T2 _MM_HINT_T2 #define PETSC_PYTHON_EXE "/usr/bin/python3" #define PETSC_Prefetch(a,b,c) _mm_prefetch((const char*)(a),(c)) #define PETSC_REPLACE_DIR_SEPARATOR '/' #define PETSC_SIGNAL_CAST #define PETSC_SIZEOF_INT 4 #define 
PETSC_SIZEOF_LONG 4 #define PETSC_SIZEOF_LONG_LONG 8 #define PETSC_SIZEOF_SIZE_T 8 #define PETSC_SIZEOF_VOID_P 8 #define PETSC_SLSUFFIX "" #define PETSC_UINTPTR_T uintptr_t #define PETSC_UINTPTR_T_FMT "#" PRIxPTR #define PETSC_UNUSED #define PETSC_USE_AVX512_KERNELS 1 #define PETSC_USE_BACKWARD_LOOP 1 #define PETSC_USE_COMPLEX 1 #define PETSC_USE_CTABLE 1 #define PETSC_USE_DEBUG 1 #define PETSC_USE_DEBUGGER "gdb" #define PETSC_USE_DMLANDAU_2D 1 #define PETSC_USE_FORTRAN_BINDINGS 1 #define PETSC_USE_INFO 1 #define PETSC_USE_ISATTY 1 #define PETSC_USE_LOG 1 #define PETSC_USE_MICROSOFT_TIME 1 #define PETSC_USE_PROC_FOR_SIZE 1 #define PETSC_USE_REAL_DOUBLE 1 #define PETSC_USE_SINGLE_LIBRARY 1 #define PETSC_USE_WINDOWS_GRAPHICS 1 #define PETSC_USING_64BIT_PTR 1 #define PETSC_USING_F2003 1 #define PETSC_USING_F90FREEFORM 1 #define PETSC__BSD_SOURCE 1 #define PETSC__DEFAULT_SOURCE 1 #define R_OK 04 #define S_ISDIR(a) (((a)&_S_IFMT) == _S_IFDIR) #define S_ISREG(a) (((a)&_S_IFMT) == _S_IFREG) #define W_OK 02 #define X_OK 01 #define _USE_MATH_DEFINES 1 ----------------------------------------- Using C compile: /cygdrive/c/Users/gli/Desktop/PETSc/petsc-3.21.3/lib/petsc/bin/win32fe/win32fe_icl -o .o -c -Qstd=c99 -MT -Z7 -Od mpicc -show: Unavailable C compiler version: Win32 Development Tool Front End, version 1.11.4 Fri, Sep 10, 2021 6:33:40 PM Intel(R) C++ Intel(R) 64 Compiler for applications running on Intel(R) 64, Version 17.0.8.275 Build 20180907 Using C++ compile: /cygdrive/c/Users/gli/Desktop/PETSc/petsc-3.21.3/lib/petsc/bin/win32fe/win32fe_icl -o .o -c -MT -GR -EHsc -Z7 -Od -Qstd=c++14 -TP -I/cygdrive/c/Users/gli/Desktop/PETSc/petsc-3.21.3/include -I/cygdrive/c/Users/gli/Desktop/PETSc/petsc-3.21.3/arch-mswin-c-debug/include mpicxx -show: Unavailable C++ compiler version: Win32 Development Tool Front End, version 1.11.4 Fri, Sep 10, 2021 6:33:40 PM Intel(R) C++ Intel(R) 64 Compiler for applications running on Intel(R) 64, Version 17.0.8.275 Build 20180907 Using Fortran compile: /cygdrive/c/Users/gli/Desktop/PETSc/petsc-3.21.3/lib/petsc/bin/win32fe/win32fe_ifort -o .o -c -MT -Z7 -Od -fpp -I/cygdrive/c/Users/gli/Desktop/PETSc/petsc-3.21.3/include -I/cygdrive/c/Users/gli/Desktop/PETSc/petsc-3.21.3/arch-mswin-c-debug/include mpif90 -show: Unavailable Fortran compiler version: Win32 Development Tool Front End, version 1.11.4 Fri, Sep 10, 2021 6:33:40 PM Intel(R) Visual Fortran Intel(R) 64 Compiler for applications running on Intel(R) 64, Version 17.0.8.275 Build 20180907 ----------------------------------------- Using C/C++ linker: /cygdrive/c/Users/gli/Desktop/PETSc/petsc-3.21.3/lib/petsc/bin/win32fe/win32fe_icl Using C/C++ flags: -Qwd10161 -Qstd=c99 -MT -Z7 -Od Using Fortran linker: /cygdrive/c/Users/gli/Desktop/PETSc/petsc-3.21.3/lib/petsc/bin/win32fe/win32fe_ifort Using Fortran flags: -MT -Z7 -Od -fpp ----------------------------------------- Using system modules: Using mpi.h: mpiuni ----------------------------------------- Using libraries: -L/cygdrive/c/Users/gli/Desktop/PETSc/petsc-3.21.3/arch-mswin-c-debug/lib -L/cygdrive/c/PROGRA~2/INTELS~1/COMPIL~2/windows/mkl/lib/intel64 -lpetsc mkl_intel_lp64_dll.lib mkl_sequential_dll.lib mkl_core_dll.lib Gdi32.lib User32.lib Advapi32.lib Kernel32.lib Ws2_32.lib ------------------------------------------ Using mpiexec: /cygdrive/c/Users/gli/Desktop/PETSc/petsc-3.21.3/lib/petsc/bin/petsc-mpiexec.uni ------------------------------------------ Using MAKE: /usr/bin/make Default MAKEFLAGS: MAKE_NP:24 MAKE_LOAD:48.0 MAKEFLAGS: --no-print-directory -- 
PETSC_ARCH=arch-mswin-c-debug PETSC_DIR=/cygdrive/c/Users/gli/Desktop/PETSc/petsc-3.21.3 ========================================== /usr/bin/make --print-directory -f gmakefile -j24 -l48.0 --output-sync=recurse V= libs make[3]: Entering directory '/cygdrive/c/Users/gli/Desktop/PETSc/petsc-3.21.3' /usr/bin/python3 ./config/gmakegen.py --petsc-arch=arch-mswin-c-debug CC arch-mswin-c-debug/obj/src/sys/error/pstack.o pstack.c CC arch-mswin-c-debug/obj/src/sys/error/signal.o signal.c CC arch-mswin-c-debug/obj/src/sys/fileio/fwd.o fwd.c CC arch-mswin-c-debug/obj/src/sys/fileio/ghome.o ghome.c CC arch-mswin-c-debug/obj/src/sys/fileio/grpath.o grpath.c CC arch-mswin-c-debug/obj/src/sys/fileio/mpiuopen.o mpiuopen.c CC arch-mswin-c-debug/obj/src/sys/fileio/mprint.o mprint.c CC arch-mswin-c-debug/obj/src/sys/fileio/rpath.o rpath.c CC arch-mswin-c-debug/obj/src/sys/fileio/smatlab.o smatlab.c CC arch-mswin-c-debug/obj/src/sys/fileio/sysio.o sysio.c CC arch-mswin-c-debug/obj/src/sys/objects/garbage.o garbage.c CC arch-mswin-c-debug/obj/src/sys/objects/gcomm.o gcomm.c CC arch-mswin-c-debug/obj/src/sys/objects/gcookie.o gcookie.c CC arch-mswin-c-debug/obj/src/sys/objects/gtype.o gtype.c CC arch-mswin-c-debug/obj/src/sys/objects/inherit.o inherit.c CC arch-mswin-c-debug/obj/src/sys/objects/init.o init.c CC arch-mswin-c-debug/obj/src/sys/objects/olist.o olist.c CC arch-mswin-c-debug/obj/src/sys/objects/options.o options.c CC arch-mswin-c-debug/obj/src/sys/objects/package.o package.c CC arch-mswin-c-debug/obj/src/sys/objects/pgname.o pgname.c CC arch-mswin-c-debug/obj/src/sys/objects/optionsyaml.o optionsyaml.c C:\Users\gli\Desktop\PETSc\PETSC-~1.3\src\sys\objects\optionsyaml.c(297): warning #161: unrecognized #pragma #pragma GCC diagnostic push ^ C:\Users\gli\Desktop\PETSc\PETSC-~1.3\src\sys\objects\optionsyaml.c(298): warning #161: unrecognized #pragma #pragma GCC diagnostic ignored "-Wsign-conversion" ^ C:\Users\gli\Desktop\PETSc\PETSC-~1.3\src\sys\objects\optionsyaml.c(300): warning #161: unrecognized #pragma #pragma GCC diagnostic pop ^ CC arch-mswin-c-debug/obj/src/sys/objects/pname.o pname.c CC arch-mswin-c-debug/obj/src/sys/objects/pinit.o pinit.c CC arch-mswin-c-debug/obj/src/sys/objects/prefix.o prefix.c CC arch-mswin-c-debug/obj/src/sys/objects/ptype.o ptype.c CC arch-mswin-c-debug/obj/src/sys/objects/tagm.o tagm.c CC arch-mswin-c-debug/obj/src/sys/objects/subcomm.o subcomm.c CC arch-mswin-c-debug/obj/src/sys/objects/state.o state.c CC arch-mswin-c-debug/obj/src/sys/objects/version.o version.c CC arch-mswin-c-debug/obj/src/vec/vec/interface/veccreate.o veccreate.c CC arch-mswin-c-debug/obj/src/vec/vec/interface/vecreg.o vecreg.c CC arch-mswin-c-debug/obj/src/vec/vec/interface/vecregall.o vecregall.c CC arch-mswin-c-debug/obj/src/vec/vec/interface/rvector.o rvector.c CC arch-mswin-c-debug/obj/src/vec/vec/interface/vector.o vector.c CC arch-mswin-c-debug/obj/src/vec/vec/utils/vecglvis.o vecglvis.c CC arch-mswin-c-debug/obj/src/vec/vec/utils/vecs.o vecs.c CC arch-mswin-c-debug/obj/src/vec/vec/utils/vecio.o vecio.c CC arch-mswin-c-debug/obj/src/vec/vec/utils/vecstash.o vecstash.c CC arch-mswin-c-debug/obj/src/vec/vec/utils/vsection.o vsection.c CC arch-mswin-c-debug/obj/src/vec/vec/utils/vinv.o vinv.c CC arch-mswin-c-debug/obj/src/mat/graphops/coarsen/scoarsen.o scoarsen.c CC arch-mswin-c-debug/obj/src/mat/impls/aij/seq/fdaij.o fdaij.c CC arch-mswin-c-debug/obj/src/mat/impls/aij/seq/ij.o ij.c CC arch-mswin-c-debug/obj/src/mat/impls/aij/seq/matmatmatmult.o matmatmatmult.c CC 
arch-mswin-c-debug/obj/src/mat/impls/aij/seq/inode2.o inode2.c CC arch-mswin-c-debug/obj/src/mat/impls/aij/seq/inode.o inode.c CC arch-mswin-c-debug/obj/src/mat/impls/aij/seq/matrart.o matrart.c CC arch-mswin-c-debug/obj/src/mat/impls/aij/seq/matmatmult.o matmatmult.c CC arch-mswin-c-debug/obj/src/mat/impls/aij/seq/matptap.o matptap.c CC arch-mswin-c-debug/obj/src/mat/impls/aij/seq/mattransposematmult.o mattransposematmult.c CC arch-mswin-c-debug/obj/src/mat/impls/aij/seq/symtranspose.o symtranspose.c CC arch-mswin-c-debug/obj/src/mat/impls/baij/seq/baijsolv.o baijsolv.c CC arch-mswin-c-debug/obj/src/mat/impls/baij/seq/baijsolvnat1.o baijsolvnat1.c CC arch-mswin-c-debug/obj/src/mat/impls/baij/seq/baijsolvnat14.o baijsolvnat14.c CC arch-mswin-c-debug/obj/src/mat/impls/baij/seq/baijsolvnat11.o baijsolvnat11.c CC arch-mswin-c-debug/obj/src/mat/impls/baij/seq/baijsolvnat15.o baijsolvnat15.c CC arch-mswin-c-debug/obj/src/mat/impls/baij/seq/baijsolvnat2.o baijsolvnat2.c CC arch-mswin-c-debug/obj/src/mat/impls/baij/seq/baijsolvnat3.o baijsolvnat3.c CC arch-mswin-c-debug/obj/src/mat/impls/baij/seq/baijsolvnat4.o baijsolvnat4.c CC arch-mswin-c-debug/obj/src/mat/impls/baij/seq/baijsolvnat5.o baijsolvnat5.c CC arch-mswin-c-debug/obj/src/mat/impls/baij/seq/baijsolvnat6.o baijsolvnat6.c CC arch-mswin-c-debug/obj/src/mat/impls/baij/seq/baijsolvtran1.o baijsolvtran1.c CC arch-mswin-c-debug/obj/src/mat/impls/baij/seq/baijsolvnat7.o baijsolvnat7.c CC arch-mswin-c-debug/obj/src/mat/impls/baij/seq/baijsolvtran2.o baijsolvtran2.c CC arch-mswin-c-debug/obj/src/mat/impls/baij/seq/baijsolvtran3.o baijsolvtran3.c CC arch-mswin-c-debug/obj/src/mat/impls/baij/seq/baijsolvtran4.o baijsolvtran4.c CC arch-mswin-c-debug/obj/src/mat/impls/baij/seq/baijsolvtran5.o baijsolvtran5.c CC arch-mswin-c-debug/obj/src/mat/impls/baij/seq/baijsolvtran6.o baijsolvtran6.c CC arch-mswin-c-debug/obj/src/mat/impls/baij/seq/baijsolvtran7.o baijsolvtran7.c CC arch-mswin-c-debug/obj/src/mat/impls/baij/seq/baijsolvtrann.o baijsolvtrann.c CC arch-mswin-c-debug/obj/src/mat/impls/baij/seq/baijsolvtrannat1.o baijsolvtrannat1.c CC arch-mswin-c-debug/obj/src/mat/impls/baij/seq/baijsolvtrannat2.o baijsolvtrannat2.c CC arch-mswin-c-debug/obj/src/mat/impls/baij/seq/baijsolvtrannat4.o baijsolvtrannat4.c CC arch-mswin-c-debug/obj/src/mat/impls/baij/seq/baijsolvtrannat3.o baijsolvtrannat3.c CC arch-mswin-c-debug/obj/src/mat/impls/baij/seq/baijsolvtrannat5.o baijsolvtrannat5.c CC arch-mswin-c-debug/obj/src/mat/impls/baij/seq/baijsolvtrannat6.o baijsolvtrannat6.c CC arch-mswin-c-debug/obj/src/mat/impls/baij/seq/dgedi.o dgedi.c CC arch-mswin-c-debug/obj/src/mat/impls/baij/seq/dgefa.o dgefa.c CC arch-mswin-c-debug/obj/src/mat/impls/baij/seq/baijsolvtrannat7.o baijsolvtrannat7.c CC arch-mswin-c-debug/obj/src/mat/impls/baij/seq/dgefa2.o dgefa2.c CC arch-mswin-c-debug/obj/src/mat/impls/baij/seq/dgefa4.o dgefa4.c CC arch-mswin-c-debug/obj/src/mat/impls/baij/seq/dgefa3.o dgefa3.c CC arch-mswin-c-debug/obj/src/mat/impls/baij/seq/dgefa5.o dgefa5.c CC arch-mswin-c-debug/obj/src/mat/impls/baij/seq/dgefa6.o dgefa6.c CC arch-mswin-c-debug/obj/src/mat/impls/baij/seq/dgefa7.o dgefa7.c CC arch-mswin-c-debug/obj/src/mat/interface/matnull.o matnull.c CC arch-mswin-c-debug/obj/src/mat/interface/matproduct.o matproduct.c CC arch-mswin-c-debug/obj/src/mat/interface/matreg.o matreg.c CC arch-mswin-c-debug/obj/src/mat/interface/matregis.o matregis.c CC arch-mswin-c-debug/obj/src/mat/interface/matrix.o matrix.c CC arch-mswin-c-debug/obj/src/dm/impls/da/gr1.o gr1.c CC 
arch-mswin-c-debug/obj/src/dm/impls/da/gr2.o gr2.c CC arch-mswin-c-debug/obj/src/dm/impls/da/grglvis.o grglvis.c CC arch-mswin-c-debug/obj/src/dm/impls/da/grvtk.o grvtk.c CC arch-mswin-c-debug/obj/src/dm/impls/swarm/swarm_migrate.o swarm_migrate.c CC arch-mswin-c-debug/obj/src/dm/impls/swarm/swarm.o swarm.c CC arch-mswin-c-debug/obj/src/dm/impls/swarm/swarmpic.o swarmpic.c CC arch-mswin-c-debug/obj/src/dm/impls/swarm/swarmpic_da.o swarmpic_da.c CC arch-mswin-c-debug/obj/src/dm/impls/swarm/swarmpic_plex.o swarmpic_plex.c CC arch-mswin-c-debug/obj/src/dm/impls/swarm/swarmpic_sort.o swarmpic_sort.c CC arch-mswin-c-debug/obj/src/dm/impls/swarm/swarmpic_view.o swarmpic_view.c CC arch-mswin-c-debug/obj/src/ksp/ksp/impls/gmres/gmpre.o gmpre.c CC arch-mswin-c-debug/obj/src/ksp/ksp/impls/gmres/gmreig.o gmreig.c CC arch-mswin-c-debug/obj/src/ksp/ksp/impls/gmres/gmres.o gmres.c CC arch-mswin-c-debug/obj/src/ksp/ksp/impls/gmres/gmres2.o gmres2.c CC arch-mswin-c-debug/obj/src/ksp/ksp/interface/iguess.o iguess.c CC arch-mswin-c-debug/obj/src/ksp/ksp/interface/itcl.o itcl.c CC arch-mswin-c-debug/obj/src/ksp/ksp/interface/itcreate.o itcreate.c CC arch-mswin-c-debug/obj/src/ksp/ksp/interface/itregis.o itregis.c CC arch-mswin-c-debug/obj/src/ksp/ksp/interface/iterativ.o iterativ.c CC arch-mswin-c-debug/obj/src/ksp/ksp/interface/itres.o itres.c CC arch-mswin-c-debug/obj/src/ksp/ksp/interface/itfunc.o itfunc.c CC arch-mswin-c-debug/obj/src/ksp/ksp/interface/xmon.o xmon.c CC arch-mswin-c-debug/obj/src/ksp/pc/impls/mg/gdsw.o gdsw.c CC arch-mswin-c-debug/obj/src/ksp/pc/impls/mg/mgfunc.o mgfunc.c CC arch-mswin-c-debug/obj/src/ksp/pc/impls/mg/mg.o mg.c CC arch-mswin-c-debug/obj/src/ksp/pc/impls/mg/mgadapt.o mgadapt.c CC arch-mswin-c-debug/obj/src/ksp/pc/impls/mg/smg.o smg.c CC arch-mswin-c-debug/obj/src/snes/interface/snesj.o snesj.c CC arch-mswin-c-debug/obj/src/snes/interface/snesj2.o snesj2.c CC arch-mswin-c-debug/obj/src/snes/interface/snesob.o snesob.c CC arch-mswin-c-debug/obj/src/snes/interface/snespc.o snespc.c CC arch-mswin-c-debug/obj/src/snes/interface/snesregi.o snesregi.c CC arch-mswin-c-debug/obj/src/snes/interface/snesut.o snesut.c CC arch-mswin-c-debug/obj/src/snes/interface/snes.o snes.c CC arch-mswin-c-debug/obj/src/ts/interface/tscreate.o tscreate.c CC arch-mswin-c-debug/obj/src/ts/interface/tseig.o tseig.c CC arch-mswin-c-debug/obj/src/ts/interface/tshistory.o tshistory.c CC arch-mswin-c-debug/obj/src/ts/interface/tsreg.o tsreg.c CC arch-mswin-c-debug/obj/src/ts/interface/tsmon.o tsmon.c CC arch-mswin-c-debug/obj/src/ts/interface/ts.o ts.c CC arch-mswin-c-debug/obj/src/ts/interface/tsregall.o tsregall.c CC arch-mswin-c-debug/obj/src/ts/interface/tsrhssplit.o tsrhssplit.c CC arch-mswin-c-debug/obj/src/ts/utils/dmplexts.o dmplexts.c CC arch-mswin-c-debug/obj/src/ts/utils/tsconvest.o tsconvest.c CC arch-mswin-c-debug/obj/src/ts/utils/dmts.o dmts.c FC arch-mswin-c-debug/obj/src/sys/mpiuni/f90-mod/mpiunimod.o FC arch-mswin-c-debug/obj/src/sys/f90-src/fsrc/f90_fwrap.o FC arch-mswin-c-debug/obj/src/sys/fsrc/somefort.o CXX arch-mswin-c-debug/obj/src/sys/dll/cxx/demangle.o demangle.cxx CXX arch-mswin-c-debug/obj/src/sys/objects/device/impls/host/hostcontext.o hostcontext.cxx CXX arch-mswin-c-debug/obj/src/sys/objects/cxx/object_pool.o object_pool.cxx C:\Users\gli\Desktop\PETSc\PETSC-~1.3\src\sys\objects\cxx\object_pool.cxx(330): error: no instance of function template "Petsc::util::construct_at" matches the argument list argument types are: (Petsc::memory::PoolAllocator::AllocationHeader *, 
Petsc::memory::PoolAllocator::size_type, Petsc::memory::PoolAllocator::align_type) PetscCallCXX(base_ptr = reinterpret_cast(util::construct_at(reinterpret_cast(base_ptr), size, align))); ^ C:\Users\gli\Desktop\PETSc\PETSC-~1.3\include\petsc/private/cpp/memory.hpp(77): note: this candidate was rejected because at least one template argument could not be deduced inline constexpr T *construct_at(T *ptr, Args &&...args) noexcept(std::is_nothrow_constructible::value) ^ compilation aborted for C:\Users\gli\Desktop\PETSc\PETSC-~1.3\src\sys\objects\cxx\object_pool.cxx (code 2) make[3]: *** [gmakefile:203: arch-mswin-c-debug/obj/src/sys/objects/cxx/object_pool.o] Error 2 make[3]: *** Waiting for unfinished jobs.... CXX arch-mswin-c-debug/obj/src/sys/objects/device/impls/host/hostdevice.o hostdevice.cxx CXX arch-mswin-c-debug/obj/src/sys/objects/device/interface/dcontext.o dcontext.cxx C:\Users\gli\Desktop\PETSc\PETSC-~1.3\src\sys\objects\device\INTERF~1\petscdevice_interface_internal.hpp(47): error: defaulted default constructor cannot be constexpr because the corresponding implicitly declared default constructor would not be constexpr constexpr _n_WeakContext() noexcept = default; ^ compilation aborted for C:\Users\gli\Desktop\PETSc\PETSC-~1.3\src\sys\objects\device\INTERF~1\dcontext.cxx (code 2) make[3]: *** [gmakefile:203: arch-mswin-c-debug/obj/src/sys/objects/device/interface/dcontext.o] Error 2 CXX arch-mswin-c-debug/obj/src/sys/objects/device/interface/global_dcontext.o global_dcontext.cxx C:\Users\gli\Desktop\PETSc\PETSC-~1.3\src\sys\objects\device\INTERF~1\petscdevice_interface_internal.hpp(47): error: defaulted default constructor cannot be constexpr because the corresponding implicitly declared default constructor would not be constexpr constexpr _n_WeakContext() noexcept = default; ^ compilation aborted for C:\Users\gli\Desktop\PETSc\PETSC-~1.3\src\sys\objects\device\INTERF~1\global_dcontext.cxx (code 2) make[3]: *** [gmakefile:203: arch-mswin-c-debug/obj/src/sys/objects/device/interface/global_dcontext.o] Error 2 CXX arch-mswin-c-debug/obj/src/sys/objects/device/interface/device.o device.cxx C:\Users\gli\Desktop\PETSc\PETSC-~1.3\src\sys\objects\device\INTERF~1\petscdevice_interface_internal.hpp(47): error: defaulted default constructor cannot be constexpr because the corresponding implicitly declared default constructor would not be constexpr constexpr _n_WeakContext() noexcept = default; ^ compilation aborted for C:\Users\gli\Desktop\PETSc\PETSC-~1.3\src\sys\objects\device\INTERF~1\device.cxx (code 2) make[3]: *** [gmakefile:203: arch-mswin-c-debug/obj/src/sys/objects/device/interface/device.o] Error 2 CXX arch-mswin-c-debug/obj/src/sys/objects/device/interface/mark_dcontext.o mark_dcontext.cxx C:\Users\gli\Desktop\PETSc\PETSC-~1.3\src\sys\objects\device\INTERF~1\petscdevice_interface_internal.hpp(47): error: defaulted default constructor cannot be constexpr because the corresponding implicitly declared default constructor would not be constexpr constexpr _n_WeakContext() noexcept = default; ^ compilation aborted for C:\Users\gli\Desktop\PETSc\PETSC-~1.3\src\sys\objects\device\INTERF~1\mark_dcontext.cxx (code 2) make[3]: *** [gmakefile:203: arch-mswin-c-debug/obj/src/sys/objects/device/interface/mark_dcontext.o] Error 2 make[3]: Leaving directory '/cygdrive/c/Users/gli/Desktop/PETSc/petsc-3.21.3' make[2]: *** [/cygdrive/c/Users/gli/Desktop/PETSc/petsc-3.21.3/lib/petsc/conf/rules_doc.mk:5: libs] Error 2 make[2]: Leaving directory '/cygdrive/c/Users/gli/Desktop/PETSc/petsc-3.21.3' 
**************************ERROR************************************* Error during compile, check arch-mswin-c-debug/lib/petsc/conf/make.log Send it and arch-mswin-c-debug/lib/petsc/conf/configure.log to petsc-maint at mcs.anl.gov ******************************************************************** make[1]: *** [makefile:44: all] Error 1 make: *** [GNUmakefile:9: all] Error 2 gli at WROKSTATION-OFFICE308 /cygdrive/c/Users/gli/Desktop/PETSc/petsc-3.21.3 $ Sincerely, Gang ---- Replied Message ---- FromGang LiDate6/30/2024 12:43Topetsc-usersSubjectRe: [petsc-users] Problem about compiling PETSc-3.21.2 under Cygwin Hi Satish, Thanks for your help. I find the problem. I uninstall the Perl software under windows now the configure works.? Sincerely, Gang ---- Replied Message ---- FromSatish BalayDate6/28/2024 13:51Topetsc-usersCcGang LiSubjectRe: [petsc-users] Problem about compiling PETSc-3.21.2 under Cygwin Here is what I get Satish ---- balay at petsc-win01 /cygdrive/e/balay $ wget -q https://urldefense.us/v3/__https://web.cels.anl.gov/projects/petsc/download/release-snapshots/petsc-3.21.2.tar.gz__;!!G_uCfscf7eWS!drRoCJiI5IcVVrrjYlGWO1leUL5hjFHVfTGJtV0Smxkw6N7wTSeO5I3sGNYcF_DVCZjpoTfUtIbHzDqPEqlDiEPK$ balay at petsc-win01 /cygdrive/e/balay $ tar -xzf petsc-3.21.2.tar.gz balay at petsc-win01 /cygdrive/e/balay $ cd petsc-3.21.2 balay at petsc-win01 /cygdrive/e/balay/petsc-3.21.2 $ ./configure --with-cc=win32fe_icl --with-fc=win32fe_ifort --with-cxx=win32fe_icl --with-precision=double --with-scalar-type=complex --with-shared-libraries=0 --with-mpi=0 '--with-blaslapack-lib=-L/cygdrive/c/PROGRA~2/Intel/oneAPI/mkl/latest/lib/intel64 mkl_intel_lp64_dll.lib mkl_sequential_dll.lib mkl_core_dll.lib' ============================================================================================= Configuring PETSc to compile on your system ============================================================================================= Compilers: C Compiler: /cygdrive/e/balay/petsc-3.21.2/lib/petsc/bin/win32fe/win32fe_icl -Qstd=c99 -MT -Z7 -Od Version: Win32 Development Tool Front End, version 1.11.4 Fri, Sep 10, 2021 6:33:40 PM\nIntel(R) C++ Intel(R) 64 Compiler Classic for applications running on Intel(R) 64, Version 2021.6.0 Build 20220226_000000 C++ Compiler: /cygdrive/e/balay/petsc-3.21.2/lib/petsc/bin/win32fe/win32fe_icl -MT -GR -EHsc -Z7 -Od -Qstd=c++17 -TP Version: Win32 Development Tool Front End, version 1.11.4 Fri, Sep 10, 2021 6:33:40 PM\nIntel(R) C++ Intel(R) 64 Compiler Classic for applications running on Intel(R) 64, Version 2021.6.0 Build 20220226_000000 Fortran Compiler: /cygdrive/e/balay/petsc-3.21.2/lib/petsc/bin/win32fe/win32fe_ifort -MT -Z7 -Od -fpp Version: Win32 Development Tool Front End, version 1.11.4 Fri, Sep 10, 2021 6:33:40 PM\nIntel(R) Fortran Intel(R) 64 Compiler Classic for applications running on Intel(R) 64, Version 2021.6.0 Build 20220226_000000 Linkers: Static linker: /cygdrive/e/balay/petsc-3.21.2/lib/petsc/bin/win32fe/win32fe_lib -a BlasLapack: Libraries: -L/cygdrive/c/PROGRA~2/Intel/oneAPI/mkl/latest/lib/intel64 mkl_intel_lp64_dll.lib mkl_sequential_dll.lib mkl_core_dll.lib Unknown if this uses OpenMP (try export OMP_NUM_THREADS=<1-4> yourprogram -log_view) uses 4 byte integers MPI: Version: PETSc MPIUNI uniprocessor MPI replacement mpiexec: ${PETSC_DIR}/lib/petsc/bin/petsc-mpiexec.uni python: Executable: /usr/bin/python3 cmake: Version: 3.20.0 Executable: /usr/bin/cmake bison: Version: 3.8 Executable: /usr/bin/bison PETSc: Language used to compile PETSc: C PETSC_ARCH: 
arch-mswin-c-debug PETSC_DIR: /cygdrive/e/balay/petsc-3.21.2 Prefix: Scalar type: complex Precision: double Integer size: 4 bytes Single library: yes Shared libraries: no Memory alignment from malloc(): 16 bytes Using GNU make: /usr/bin/make xxx=======================================================================================xxx Configure stage complete. Now build PETSc libraries with: make PETSC_DIR=/cygdrive/e/balay/petsc-3.21.2 PETSC_ARCH=arch-mswin-c-debug all xxx=======================================================================================xxx balay at petsc-win01 /cygdrive/e/balay/petsc-3.21.2 $ ls -l lib/petsc/conf/ total 135 -rw-r--r--+ 1 balay Domain Users 391 Mar 29 08:59 bfort-base.txt -rw-r--r--+ 1 balay Domain Users 877 Mar 29 08:59 bfort-mpi.txt -rw-r--r--+ 1 balay Domain Users 5735 Mar 29 19:34 bfort-petsc.txt -rw-rw-r--+ 1 balay Domain Users 136 Jun 28 00:33 petscvariables -rw-r--r--+ 1 balay Domain Users 13140 May 29 14:34 rules -rw-r--r--+ 1 balay Domain Users 613 Mar 29 19:34 rules_doc.mk -rw-r--r--+ 1 balay Domain Users 16516 May 29 14:06 rules_util.mk -rw-r--r--+ 1 balay Domain Users 119 Mar 29 08:59 test -rw-r--r--+ 1 balay Domain Users 71503 Mar 29 08:59 uncrustify.cfg -rw-r--r--+ 1 balay Domain Users 4769 Mar 29 19:34 variables balay at petsc-win01 /cygdrive/e/balay/petsc-3.21.2 $ make ========================================== See documentation/faq.html and documentation/bugreporting.html for help with installation problems. Please send EVERYTHING printed out below when reporting problems. Please check the mailing list archives and consider subscribing. https://urldefense.us/v3/__https://petsc.org/release/community/mailing/__;!!G_uCfscf7eWS!drRoCJiI5IcVVrrjYlGWO1leUL5hjFHVfTGJtV0Smxkw6N7wTSeO5I3sGNYcF_DVCZjpoTfUtIbHzDqPEqUK3Mfi$ ========================================== Starting make run on petsc-win01 at Fri, 28 Jun 2024 00:34:15 -0500 Machine characteristics: CYGWIN_NT-10.0 petsc-win01 3.2.0(0.340/5/3) 2021-03-29 08:42 x86_64 Cygwin ----------------------------------------- Using PETSc directory: /cygdrive/e/balay/petsc-3.21.2 Using PETSc arch: arch-mswin-c-debug ----------------------------------------- PETSC_VERSION_RELEASE 1 PETSC_VERSION_MAJOR 3 PETSC_VERSION_MINOR 21 PETSC_VERSION_SUBMINOR 2 PETSC_VERSION_DATE "May 29, 2024" PETSC_VERSION_GIT "v3.21.2" PETSC_VERSION_DATE_GIT "2024-05-29 14:05:28 -0500" ----------------------------------------- Using configure Options: --with-cc=win32fe_icl --with-fc=win32fe_ifort --with-cxx=win32fe_icl --with-precision=double --with-scalar-type=complex --with-shared-libraries=0 --with-mpi=0 --with-blaslapack-lib="-L/cygdrive/c/PROGRA~2/Intel/oneAPI/mkl/latest/lib/intel64 mkl_intel_lp64_dll.lib mkl_sequential_dll.lib mkl_core_dll.lib" Using configuration flags: #define MPI_Comm_create_errhandler(p_err_fun,p_errhandler) MPI_Errhandler_create((p_err_fun),(p_errhandler)) #define MPI_Comm_set_errhandler(comm,p_errhandler) MPI_Errhandler_set((comm),(p_errhandler)) #define MPI_Type_create_struct(count,lens,displs,types,newtype) MPI_Type_struct((count),(lens),(displs),(types),(newtype)) #define PETSC_ARCH "arch-mswin-c-debug" #define PETSC_ATTRIBUTEALIGNED(size) #define PETSC_BLASLAPACK_CAPS 1 #define PETSC_CANNOT_START_DEBUGGER 1 #define PETSC_CLANGUAGE_C 1 #define PETSC_CXX_RESTRICT __restrict #define PETSC_DEPRECATED_ENUM_BASE(string_literal_why) #define PETSC_DEPRECATED_FUNCTION_BASE(string_literal_why) __declspec(deprecated(string_literal_why)) #define PETSC_DEPRECATED_MACRO_BASE(string_literal_why) 
PETSC_DEPRECATED_MACRO_BASE_(GCC warning string_literal_why) #define PETSC_DEPRECATED_MACRO_BASE_(why) _Pragma(#why) #define PETSC_DEPRECATED_OBJECT_BASE(string_literal_why) __declspec(deprecated(string_literal_why)) #define PETSC_DEPRECATED_TYPEDEF_BASE(string_literal_why) #define PETSC_DIR "E:\\balay\\petsc-3.21.2" #define PETSC_DIR_SEPARATOR '\\' #define PETSC_FORTRAN_CHARLEN_T int #define PETSC_FORTRAN_TYPE_INITIALIZE = -2 #define PETSC_FUNCTION_NAME_C __func__ #define PETSC_FUNCTION_NAME_CXX __func__ #define PETSC_HAVE_ACCESS 1 #define PETSC_HAVE_ATOLL 1 #define PETSC_HAVE_BUILTIN_EXPECT 1 #define PETSC_HAVE_C99_COMPLEX 1 #define PETSC_HAVE_CLOCK 1 #define PETSC_HAVE_CLOSESOCKET 1 #define PETSC_HAVE_CXX 1 #define PETSC_HAVE_CXX_COMPLEX 1 #define PETSC_HAVE_CXX_COMPLEX_FIX 1 #define PETSC_HAVE_CXX_DIALECT_CXX11 1 #define PETSC_HAVE_CXX_DIALECT_CXX14 1 #define PETSC_HAVE_CXX_DIALECT_CXX17 1 #define PETSC_HAVE_DIRECT_H 1 #define PETSC_HAVE_DOS_H 1 #define PETSC_HAVE_DOUBLE_ALIGN_MALLOC 1 #define PETSC_HAVE_ERF 1 #define PETSC_HAVE_FCNTL_H 1 #define PETSC_HAVE_FENV_H 1 #define PETSC_HAVE_FE_VALUES 1 #define PETSC_HAVE_FLOAT_H 1 #define PETSC_HAVE_FORTRAN_CAPS 1 #define PETSC_HAVE_FORTRAN_FLUSH 1 #define PETSC_HAVE_FORTRAN_FREE_LINE_LENGTH_NONE 1 #define PETSC_HAVE_FORTRAN_TYPE_STAR 1 #define PETSC_HAVE_FREELIBRARY 1 #define PETSC_HAVE_GETCOMPUTERNAME 1 #define PETSC_HAVE_GETCWD 1 #define PETSC_HAVE_GETLASTERROR 1 #define PETSC_HAVE_GETPROCADDRESS 1 #define PETSC_HAVE_GET_USER_NAME 1 #define PETSC_HAVE_IMMINTRIN_H 1 #define PETSC_HAVE_INTTYPES_H 1 #define PETSC_HAVE_IO_H 1 #define PETSC_HAVE_ISINF 1 #define PETSC_HAVE_ISNAN 1 #define PETSC_HAVE_ISNORMAL 1 #define PETSC_HAVE_LARGE_INTEGER_U 1 #define PETSC_HAVE_LGAMMA 1 #define PETSC_HAVE_LOADLIBRARY 1 #define PETSC_HAVE_LOG2 1 #define PETSC_HAVE_LSEEK 1 #define PETSC_HAVE_MALLOC_H 1 #define PETSC_HAVE_MEMMOVE 1 #define PETSC_HAVE_MKL_LIBS 1 #define PETSC_HAVE_MPIUNI 1 #define PETSC_HAVE_O_BINARY 1 #define PETSC_HAVE_PACKAGES ":blaslapack:mathlib:mpi:" #define PETSC_HAVE_RAND 1 #define PETSC_HAVE_SETJMP_H 1 #define PETSC_HAVE_SETLASTERROR 1 #define PETSC_HAVE_SNPRINTF 1 #define PETSC_HAVE_STDINT_H 1 #define PETSC_HAVE_STRICMP 1 #define PETSC_HAVE_SYS_TYPES_H 1 #define PETSC_HAVE_TAU_PERFSTUBS 1 #define PETSC_HAVE_TGAMMA 1 #define PETSC_HAVE_TIME 1 #define PETSC_HAVE_TIME_H 1 #define PETSC_HAVE_TMPNAM_S 1 #define PETSC_HAVE_VA_COPY 1 #define PETSC_HAVE_VSNPRINTF 1 #define PETSC_HAVE_WINDOWSX_H 1 #define PETSC_HAVE_WINDOWS_COMPILERS 1 #define PETSC_HAVE_WINDOWS_H 1 #define PETSC_HAVE_WINSOCK2_H 1 #define PETSC_HAVE_WS2TCPIP_H 1 #define PETSC_HAVE_WSAGETLASTERROR 1 #define PETSC_HAVE_XMMINTRIN_H 1 #define PETSC_HAVE__ACCESS 1 #define PETSC_HAVE__GETCWD 1 #define PETSC_HAVE__LSEEK 1 #define PETSC_HAVE__MKDIR 1 #define PETSC_HAVE__SLEEP 1 #define PETSC_HAVE__SNPRINTF 1 #define PETSC_HAVE___INT64 1 #define PETSC_INTPTR_T intptr_t #define PETSC_INTPTR_T_FMT "#" PRIxPTR #define PETSC_IS_COLORING_MAX USHRT_MAX #define PETSC_IS_COLORING_VALUE_TYPE short #define PETSC_IS_COLORING_VALUE_TYPE_F integer2 #define PETSC_LEVEL1_DCACHE_LINESIZE 32 #define PETSC_LIB_DIR "/cygdrive/e/balay/petsc-3.21.2/arch-mswin-c-debug/lib" #define PETSC_MAX_PATH_LEN 4096 #define PETSC_MEMALIGN 16 #define PETSC_MISSING_GETLINE 1 #define PETSC_MISSING_SIGALRM 1 #define PETSC_MISSING_SIGBUS 1 #define PETSC_MISSING_SIGCHLD 1 #define PETSC_MISSING_SIGCONT 1 #define PETSC_MISSING_SIGHUP 1 #define PETSC_MISSING_SIGKILL 1 #define PETSC_MISSING_SIGPIPE 1 #define 
PETSC_MISSING_SIGQUIT 1 #define PETSC_MISSING_SIGSTOP 1 #define PETSC_MISSING_SIGSYS 1 #define PETSC_MISSING_SIGTRAP 1 #define PETSC_MISSING_SIGTSTP 1 #define PETSC_MISSING_SIGURG 1 #define PETSC_MISSING_SIGUSR1 1 #define PETSC_MISSING_SIGUSR2 1 #define PETSC_MPICC_SHOW "Unavailable" #define PETSC_MPIU_IS_COLORING_VALUE_TYPE MPI_UNSIGNED_SHORT #define PETSC_NEEDS_UTYPE_TYPEDEFS 1 #define PETSC_OMAKE "/usr/bin/make --no-print-directory" #define PETSC_PREFETCH_HINT_NTA _MM_HINT_NTA #define PETSC_PREFETCH_HINT_T0 _MM_HINT_T0 #define PETSC_PREFETCH_HINT_T1 _MM_HINT_T1 #define PETSC_PREFETCH_HINT_T2 _MM_HINT_T2 #define PETSC_PYTHON_EXE "/usr/bin/python3" #define PETSC_Prefetch(a,b,c) _mm_prefetch((const char*)(a),(c)) #define PETSC_REPLACE_DIR_SEPARATOR '/' #define PETSC_SIGNAL_CAST #define PETSC_SIZEOF_INT 4 #define PETSC_SIZEOF_LONG 4 #define PETSC_SIZEOF_LONG_LONG 8 #define PETSC_SIZEOF_SIZE_T 8 #define PETSC_SIZEOF_VOID_P 8 #define PETSC_SLSUFFIX "" #define PETSC_UINTPTR_T uintptr_t #define PETSC_UINTPTR_T_FMT "#" PRIxPTR #define PETSC_UNUSED #define PETSC_USE_AVX512_KERNELS 1 #define PETSC_USE_BACKWARD_LOOP 1 #define PETSC_USE_COMPLEX 1 #define PETSC_USE_CTABLE 1 #define PETSC_USE_DEBUG 1 #define PETSC_USE_DEBUGGER "gdb" #define PETSC_USE_DMLANDAU_2D 1 #define PETSC_USE_FORTRAN_BINDINGS 1 #define PETSC_USE_INFO 1 #define PETSC_USE_ISATTY 1 #define PETSC_USE_LOG 1 #define PETSC_USE_MICROSOFT_TIME 1 #define PETSC_USE_PROC_FOR_SIZE 1 #define PETSC_USE_REAL_DOUBLE 1 #define PETSC_USE_SINGLE_LIBRARY 1 #define PETSC_USE_WINDOWS_GRAPHICS 1 #define PETSC_USING_64BIT_PTR 1 #define PETSC_USING_F2003 1 #define PETSC_USING_F90FREEFORM 1 #define PETSC__BSD_SOURCE 1 #define PETSC__DEFAULT_SOURCE 1 #define R_OK 04 #define S_ISDIR(a) (((a)&_S_IFMT) == _S_IFDIR) #define S_ISREG(a) (((a)&_S_IFMT) == _S_IFREG) #define W_OK 02 #define X_OK 01 #define _USE_MATH_DEFINES 1 ----------------------------------------- Using C compile: /cygdrive/e/balay/petsc-3.21.2/lib/petsc/bin/win32fe/win32fe_icl -o .o -c -Qstd=c99 -MT -Z7 -Od mpicc -show: Unavailable C compiler version: Win32 Development Tool Front End, version 1.11.4 Fri, Sep 10, 2021 6:33:40 PM Intel(R) C++ Intel(R) 64 Compiler Classic for applications running on Intel(R) 64, Version 2021.6.0 Build 20220226_000000 Using C++ compile: /cygdrive/e/balay/petsc-3.21.2/lib/petsc/bin/win32fe/win32fe_icl -o .o -c -MT -GR -EHsc -Z7 -Od -Qstd=c++17 -TP -I/cygdrive/e/balay/petsc-3.21.2/include -I/cygdrive/e/balay/petsc-3.21.2/arch-mswin-c-debug/include mpicxx -show: Unavailable C++ compiler version: Win32 Development Tool Front End, version 1.11.4 Fri, Sep 10, 2021 6:33:40 PM Intel(R) C++ Intel(R) 64 Compiler Classic for applications running on Intel(R) 64, Version 2021.6.0 Build 20220226_000000 Using Fortran compile: /cygdrive/e/balay/petsc-3.21.2/lib/petsc/bin/win32fe/win32fe_ifort -o .o -c -MT -Z7 -Od -fpp -I/cygdrive/e/balay/petsc-3.21.2/include -I/cygdrive/e/balay/petsc-3.21.2/arch-mswin-c-debug/include mpif90 -show: Unavailable Fortran compiler version: Win32 Development Tool Front End, version 1.11.4 Fri, Sep 10, 2021 6:33:40 PM Intel(R) Fortran Intel(R) 64 Compiler Classic for applications running on Intel(R) 64, Version 2021.6.0 Build 20220226_000000 ----------------------------------------- Using C/C++ linker: /cygdrive/e/balay/petsc-3.21.2/lib/petsc/bin/win32fe/win32fe_icl Using C/C++ flags: -Qwd10161 -Qstd=c99 -MT -Z7 -Od Using Fortran linker: /cygdrive/e/balay/petsc-3.21.2/lib/petsc/bin/win32fe/win32fe_ifort Using Fortran flags: -MT -Z7 -Od -fpp 
----------------------------------------- Using system modules: Using mpi.h: mpiuni ----------------------------------------- Using libraries: -L/cygdrive/e/balay/petsc-3.21.2/arch-mswin-c-debug/lib -L/cygdrive/c/PROGRA~2/Intel/oneAPI/mkl/latest/lib/intel64 -lpetsc mkl_intel_lp64_dll.lib mkl_sequential_dll.lib mkl_core_dll.lib Gdi32.lib User32.lib Advapi32.lib Kernel32.lib Ws2_32.lib ------------------------------------------ Using mpiexec: /cygdrive/e/balay/petsc-3.21.2/lib/petsc/bin/petsc-mpiexec.uni ------------------------------------------ Using MAKE: /usr/bin/make Default MAKEFLAGS: MAKE_NP:10 MAKE_LOAD:18.0 MAKEFLAGS: --no-print-directory -- PETSC_ARCH=arch-mswin-c-debug PETSC_DIR=/cygdrive/e/balay/petsc-3.21.2 ========================================== /usr/bin/make --print-directory -f gmakefile -j10 -l18.0 --output-sync=recurse V= libs /usr/bin/python3 ./config/gmakegen.py --petsc-arch=arch-mswin-c-debug CC arch-mswin-c-debug/obj/src/vec/vec/interface/veccreate.o veccreate.c CC arch-mswin-c-debug/obj/src/vec/vec/interface/vecreg.o vecreg.c CC arch-mswin-c-debug/obj/src/vec/vec/interface/vecregall.o vecregall.c CC arch-mswin-c-debug/obj/src/vec/vec/interface/vector.o vector.c CC arch-mswin-c-debug/obj/src/vec/vec/utils/vecglvis.o vecglvis.c CC arch-mswin-c-debug/obj/src/vec/vec/interface/rvector.o rvector.c CC arch-mswin-c-debug/obj/src/vec/vec/utils/vecs.o vecs.c CC arch-mswin-c-debug/obj/src/vec/vec/utils/vecio.o vecio.c CC arch-mswin-c-debug/obj/src/vec/vec/utils/vecstash.o vecstash.c CC arch-mswin-c-debug/obj/src/vec/vec/utils/vsection.o vsection.c CC arch-mswin-c-debug/obj/src/vec/vec/utils/vinv.o vinv.c CC arch-mswin-c-debug/obj/src/mat/graphops/coarsen/scoarsen.o scoarsen.c CC arch-mswin-c-debug/obj/src/mat/impls/aij/seq/fdaij.o fdaij.c CC arch-mswin-c-debug/obj/src/mat/impls/aij/seq/ij.o ij.c CC arch-mswin-c-debug/obj/src/mat/impls/aij/seq/inode2.o inode2.c CC arch-mswin-c-debug/obj/src/mat/impls/aij/seq/matrart.o matrart.c CC arch-mswin-c-debug/obj/src/mat/impls/aij/seq/mattransposematmult.o taoshell.c CC arch-mswin-c-debug/obj/src/tao/snes/taosnes.o taosnes.c CC arch-mswin-c-debug/obj/src/tao/util/ftn-auto/tao_utilf.o tao_utilf.c CC arch-mswin-c-debug/obj/src/tao/python/ftn-custom/zpythontaof.o zpythontaof.c CC arch-mswin-c-debug/obj/src/tao/util/tao_util.o tao_util.c FC arch-mswin-c-debug/obj/src/sys/f90-mod/petscsysmod.o FC arch-mswin-c-debug/obj/src/sys/mpiuni/fsrc/somempifort.o FC arch-mswin-c-debug/obj/src/sys/objects/f2003-src/fsrc/optionenum.o FC arch-mswin-c-debug/obj/src/vec/f90-mod/petscvecmod.o FC arch-mswin-c-debug/obj/src/sys/classes/bag/f2003-src/fsrc/bagenum.o FC arch-mswin-c-debug/obj/src/mat/f90-mod/petscmatmod.o FC arch-mswin-c-debug/obj/src/dm/f90-mod/petscdmmod.o FC arch-mswin-c-debug/obj/src/dm/f90-mod/petscdmswarmmod.o FC arch-mswin-c-debug/obj/src/dm/f90-mod/petscdmplexmod.o FC arch-mswin-c-debug/obj/src/dm/f90-mod/petscdmdamod.o FC arch-mswin-c-debug/obj/src/ksp/f90-mod/petsckspdefmod.o CC arch-mswin-c-debug/obj/src/tao/python/pythontao.o pythontao.c FC arch-mswin-c-debug/obj/src/ksp/f90-mod/petscpcmod.o FC arch-mswin-c-debug/obj/src/ksp/f90-mod/petsckspmod.o FC arch-mswin-c-debug/obj/src/snes/f90-mod/petscsnesmod.o FC arch-mswin-c-debug/obj/src/ts/f90-mod/petsctsmod.o FC arch-mswin-c-debug/obj/src/tao/f90-mod/petsctaomod.o AR arch-mswin-c-debug/lib/libpetsc.lib ========================================= Now to check if the libraries are working do: make PETSC_DIR=/cygdrive/e/balay/petsc-3.21.2 PETSC_ARCH=arch-mswin-c-debug check 
========================================= balay at petsc-win01 /cygdrive/e/balay/petsc-3.21.2 $ make check Running PETSc check examples to verify correct installation Using PETSC_DIR=/cygdrive/e/balay/petsc-3.21.2 and PETSC_ARCH=arch-mswin-c-debug C/C++ example src/snes/tutorials/ex19 run successfully with 1 MPI process Fortran example src/snes/tutorials/ex5f run successfully with 1 MPI process Completed PETSc check examples balay at petsc-win01 /cygdrive/e/balay/petsc-3.21.2 $ -------------- next part -------------- An HTML attachment was scrubbed... URL: From miguel.salazar at corintis.com Fri Jul 5 02:29:17 2024 From: miguel.salazar at corintis.com (Miguel Angel Salazar de Troya) Date: Fri, 5 Jul 2024 09:29:17 +0200 Subject: [petsc-users] Strategies for coupled nonlinear problems Message-ID: Hello, I have the Navier-Stokes equation coupled with a convection-diffusion equation for the temperature. It is a two-way coupling because the viscosity depends on the temperature. One way to solve this is with some kind of fixed point iteration scheme, where I solve each equation separately in a loop until I see convergence. I am aware this is not possible directly at the SNES level. Is there something that one can do using PCFIELDSPLIT? I would like to assemble my fully coupled system and play with the solver options to get some kind of fixed-point iteration scheme. I would like to avoid having to build two separate SNES solvers, one per equation. Any reference on techniques to solve this type of coupled system is welcome. Best, Miguel -------------- next part -------------- An HTML attachment was scrubbed... URL: From balay.anl at fastmail.org Fri Jul 5 10:08:19 2024 From: balay.anl at fastmail.org (Satish Balay) Date: Fri, 5 Jul 2024 10:08:19 -0500 (CDT) Subject: [petsc-users] make check error using Intel MKL In-Reply-To: References: <720bf0fa-3750-1377-af69-559903de8e73@fastmail.org> Message-ID: <1ab7848e-96df-bb61-75af-2fb64ad24942@fastmail.org> An HTML attachment was scrubbed... URL: From mfadams at lbl.gov Sat Jul 6 08:05:18 2024 From: mfadams at lbl.gov (Mark Adams) Date: Sat, 6 Jul 2024 09:05:18 -0400 Subject: [petsc-users] Strategies for coupled nonlinear problems In-Reply-To: References: Message-ID: Hi Miguel, PCFIELDSPLIT is indeed what you want. There may be some PETSc examples that could help guide algorithm selection, but I would just start by getting the PCFIELDSPLIT infrastructure in place and running with the default solver, which is simple block Gauss-Seidel iteration on the two block system And you want to look in your field to see what other people have done with systems like yours and then see about how to construct them in PCFIELDSPLIT. Thanks, Mark On Fri, Jul 5, 2024 at 3:29?AM Miguel Angel Salazar de Troya < miguel.salazar at corintis.com> wrote: > Hello, I have the Navier-Stokes equation coupled with a > convection-diffusion equation for the temperature. It is a two-way coupling > because the viscosity depends on the temperature. One way to solve this is > with some kind of fixed point iteration > ZjQcmQRYFpfptBannerStart > This Message Is From an External Sender > This message came from outside your organization. > > ZjQcmQRYFpfptBannerEnd > Hello, > > I have the Navier-Stokes equation coupled with a convection-diffusion > equation for the temperature. It is a two-way coupling because the > viscosity depends on the temperature. 
One way to solve this is with some > kind of fixed point iteration scheme, where I solve each equation > separately in a loop until I see convergence. I am aware this is not > possible directly at the SNES level. Is there something that one can do > using PCFIELDSPLIT? I would like to assemble my fully coupled system and > play with the solver options to get some kind of fixed-point iteration > scheme. I would like to avoid having to build two separate SNES solvers, > one per equation. Any reference on techniques to solve this type of coupled > system is welcome. > > Best, > Miguel > -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Sat Jul 6 08:33:03 2024 From: knepley at gmail.com (Matthew Knepley) Date: Sat, 6 Jul 2024 09:33:03 -0400 Subject: [petsc-users] Strategies for coupled nonlinear problems In-Reply-To: References: Message-ID: On Fri, Jul 5, 2024 at 3:29?AM Miguel Angel Salazar de Troya < miguel.salazar at corintis.com> wrote: > Hello, I have the Navier-Stokes equation coupled with a > convection-diffusion equation for the temperature. It is a two-way coupling > because the viscosity depends on the temperature. One way to solve this is > with some kind of fixed point iteration > ZjQcmQRYFpfptBannerStart > This Message Is From an External Sender > This message came from outside your organization. > > ZjQcmQRYFpfptBannerEnd > Hello, > > I have the Navier-Stokes equation coupled with a convection-diffusion > equation for the temperature. It is a two-way coupling because the > viscosity depends on the temperature. One way to solve this is with some > kind of fixed point iteration scheme, where I solve each equation > separately in a loop until I see convergence. I am aware this is not > possible directly at the SNES level. Is there something that one can do > using PCFIELDSPLIT? I would like to assemble my fully coupled system and > play with the solver options to get some kind of fixed-point iteration > scheme. I would like to avoid having to build two separate SNES solvers, > one per equation. Any reference on techniques to solve this type of coupled > system is welcome. > Hi Miguel, I have a branch https://urldefense.us/v3/__https://gitlab.com/petsc/petsc/-/tree/knepley/feature-snes-fieldsplit?ref_type=heads__;!!G_uCfscf7eWS!cqtcQdi0PKTs4S7KpYIdusFz-Sr1TBcqFksEpoLFWkYiP_DAZlbbQdGCNTEQxScvJW1Tm0fsMaqh1YxTAA-_$ that will allow you to do exactly what you want to do. However, there are caveats. In order to have SNES do this, it needs a way to selectively reassemble subproblems. I assume you are using Firedrake, so this will not work. I would definitely be willing to work with those guys to get this going, introducing callbacks, just as we did on the FieldSplit case. Thanks, Matt > Best, > Miguel > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!cqtcQdi0PKTs4S7KpYIdusFz-Sr1TBcqFksEpoLFWkYiP_DAZlbbQdGCNTEQxScvJW1Tm0fsMaqh1RE4uZs7$ -------------- next part -------------- An HTML attachment was scrubbed... 
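To make the PCFIELDSPLIT suggestion concrete, a command-line sketch (assuming the two blocks - flow and temperature - are already known to the preconditioner, e.g. through a DM or PCFieldSplitSetIS; the inner solver choices below are placeholders, not recommendations from this thread) could look like:

  -pc_type fieldsplit -pc_fieldsplit_type multiplicative
  -fieldsplit_0_ksp_type gmres -fieldsplit_0_pc_type ilu
  -fieldsplit_1_ksp_type gmres -fieldsplit_1_pc_type ilu

Here "multiplicative" is the block Gauss-Seidel-style sweep over the two blocks (the PCFIELDSPLIT default Mark refers to), while "additive" gives the block-Jacobi analogue; -snes_view or -ksp_view shows how the splits were actually set up. Note that this splits the linearized (Jacobian) solve inside each Newton step - the nonlinear, SNES-level splitting is what the branch mentioned above is aimed at.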
URL: From miguel.salazar at corintis.com Mon Jul 8 05:14:25 2024 From: miguel.salazar at corintis.com (Miguel Angel Salazar de Troya) Date: Mon, 8 Jul 2024 12:14:25 +0200 Subject: [petsc-users] Strategies for coupled nonlinear problems In-Reply-To: References: Message-ID: Thanks Adam and Matt, Matt, can I get away with just using PCFIELDSPLIT? Or do I need the SNESFIELDSPLIT? Though it looks like the block Gauss-Seidel is only implemented in serial ( https://urldefense.us/v3/__https://petsc.org/main/manual/ksp/*block-jacobi-and-overlapping-additive-schwarz-preconditioners__;Iw!!G_uCfscf7eWS!fCLvWkRLjtRlx5jckypIplIxnk7AjY_owXIPyfK59pJPLsB9d6F_GYPmQ5koBgIZp7GpV37w_YXjt2j63gEg01pqR7FKTe-u$ ) On a more theoretical note, I have the impression that the convergence failures of the Newton-Raphson method for this kind of problem is ultimately due to a lack of a diagonally dominant Jacobian. I have not found any reference so I might be wrong. Best, Miguel On Sat, Jul 6, 2024 at 3:33?PM Matthew Knepley wrote: > On Fri, Jul 5, 2024 at 3:29?AM Miguel Angel Salazar de Troya < > miguel.salazar at corintis.com> wrote: > >> Hello, I have the Navier-Stokes equation coupled with a >> convection-diffusion equation for the temperature. It is a two-way coupling >> because the viscosity depends on the temperature. One way to solve this is >> with some kind of fixed point iteration >> ZjQcmQRYFpfptBannerStart >> This Message Is From an External Sender >> This message came from outside your organization. >> >> ZjQcmQRYFpfptBannerEnd >> Hello, >> >> I have the Navier-Stokes equation coupled with a convection-diffusion >> equation for the temperature. It is a two-way coupling because the >> viscosity depends on the temperature. One way to solve this is with some >> kind of fixed point iteration scheme, where I solve each equation >> separately in a loop until I see convergence. I am aware this is not >> possible directly at the SNES level. Is there something that one can do >> using PCFIELDSPLIT? I would like to assemble my fully coupled system and >> play with the solver options to get some kind of fixed-point iteration >> scheme. I would like to avoid having to build two separate SNES solvers, >> one per equation. Any reference on techniques to solve this type of coupled >> system is welcome. >> > > Hi Miguel, > > I have a branch > > > https://urldefense.us/v3/__https://gitlab.com/petsc/petsc/-/tree/knepley/feature-snes-fieldsplit?ref_type=heads__;!!G_uCfscf7eWS!fCLvWkRLjtRlx5jckypIplIxnk7AjY_owXIPyfK59pJPLsB9d6F_GYPmQ5koBgIZp7GpV37w_YXjt2j63gEg01pqR6cHYMfY$ > > that will allow you to do exactly what you want to do. However, there are > caveats. In order to have SNES do this, it needs a way to selectively > reassemble subproblems. I assume you are using Firedrake, so this will not > work. I would definitely be willing to work with those guys to get > this going, introducing callbacks, just as we did on the FieldSplit case. > > Thanks, > > Matt > > >> Best, >> Miguel >> > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!fCLvWkRLjtRlx5jckypIplIxnk7AjY_owXIPyfK59pJPLsB9d6F_GYPmQ5koBgIZp7GpV37w_YXjt2j63gEg01pqRzG84SwL$ > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From knepley at gmail.com Mon Jul 8 06:42:56 2024 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 8 Jul 2024 07:42:56 -0400 Subject: [petsc-users] Strategies for coupled nonlinear problems In-Reply-To: References: Message-ID: On Mon, Jul 8, 2024 at 6:14?AM Miguel Angel Salazar de Troya < miguel.salazar at corintis.com> wrote: > Thanks Adam and Matt, > > Matt, can I get away with just using PCFIELDSPLIT? Or do I need the > SNESFIELDSPLIT? Though it looks like the block Gauss-Seidel is only > implemented in serial ( > https://urldefense.us/v3/__https://petsc.org/main/manual/ksp/*block-jacobi-and-overlapping-additive-schwarz-preconditioners__;Iw!!G_uCfscf7eWS!eLmDWSrulgDcLMhEC5MITvrmcOrDVcAOy95wwGeNzgl7fvAnsX_ldsB3qVD5ArIV-jCyIHEPt3Po_GnSYekO$ > ) > You can do what you want for the linear problem, but that will probably not help. The best thing I know of for this kind of nonlinear coupling is now called primal-dual Newton, a name which I am not wild about. It is discussed here (https://urldefense.us/v3/__https://core.ac.uk/download/pdf/211337815.pdf__;!!G_uCfscf7eWS!eLmDWSrulgDcLMhEC5MITvrmcOrDVcAOy95wwGeNzgl7fvAnsX_ldsB3qVD5ArIV-jCyIHEPt3Po_DT_42uJ$ ) and originated in reference [33] from that thesis. My aim was to allow these kinds of solvers with that branch. > On a more theoretical note, I have the impression that the convergence > failures of the Newton-Raphson method for this kind of problem is > ultimately due to a lack of a diagonally dominant Jacobian. I have not > found any reference so I might be wrong. > I would say that the dominant direction for momentum hides the direction for improvement of the coefficient. Thanks, Matt > Best, > Miguel > > On Sat, Jul 6, 2024 at 3:33?PM Matthew Knepley wrote: > >> On Fri, Jul 5, 2024 at 3:29?AM Miguel Angel Salazar de Troya < >> miguel.salazar at corintis.com> wrote: >> >>> Hello, I have the Navier-Stokes equation coupled with a >>> convection-diffusion equation for the temperature. It is a two-way coupling >>> because the viscosity depends on the temperature. One way to solve this is >>> with some kind of fixed point iteration >>> ZjQcmQRYFpfptBannerStart >>> This Message Is From an External Sender >>> This message came from outside your organization. >>> >>> ZjQcmQRYFpfptBannerEnd >>> Hello, >>> >>> I have the Navier-Stokes equation coupled with a convection-diffusion >>> equation for the temperature. It is a two-way coupling because the >>> viscosity depends on the temperature. One way to solve this is with some >>> kind of fixed point iteration scheme, where I solve each equation >>> separately in a loop until I see convergence. I am aware this is not >>> possible directly at the SNES level. Is there something that one can do >>> using PCFIELDSPLIT? I would like to assemble my fully coupled system and >>> play with the solver options to get some kind of fixed-point iteration >>> scheme. I would like to avoid having to build two separate SNES solvers, >>> one per equation. Any reference on techniques to solve this type of coupled >>> system is welcome. >>> >> >> Hi Miguel, >> >> I have a branch >> >> >> https://urldefense.us/v3/__https://gitlab.com/petsc/petsc/-/tree/knepley/feature-snes-fieldsplit?ref_type=heads__;!!G_uCfscf7eWS!eLmDWSrulgDcLMhEC5MITvrmcOrDVcAOy95wwGeNzgl7fvAnsX_ldsB3qVD5ArIV-jCyIHEPt3Po_ESUOIOo$ >> >> that will allow you to do exactly what you want to do. However, there are >> caveats. In order to have SNES do this, it needs a way to selectively >> reassemble subproblems. 
I assume you are using Firedrake, so this will >> not work. I would definitely be willing to work with those guys to get >> this going, introducing callbacks, just as we did on the FieldSplit case. >> >> Thanks, >> >> Matt >> >> >>> Best, >>> Miguel >>> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> >> https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!eLmDWSrulgDcLMhEC5MITvrmcOrDVcAOy95wwGeNzgl7fvAnsX_ldsB3qVD5ArIV-jCyIHEPt3Po_DKriL_s$ >> >> > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!eLmDWSrulgDcLMhEC5MITvrmcOrDVcAOy95wwGeNzgl7fvAnsX_ldsB3qVD5ArIV-jCyIHEPt3Po_DKriL_s$ -------------- next part -------------- An HTML attachment was scrubbed... URL: From FERRANJ2 at my.erau.edu Mon Jul 8 21:28:33 2024 From: FERRANJ2 at my.erau.edu (Ferrand, Jesus A.) Date: Tue, 9 Jul 2024 02:28:33 +0000 Subject: [petsc-users] What exactly is the GlobalToNatural PetscSF of DMPlex/DM? Message-ID: Dear PETSc team: Greetings. I keep working on mesh I/O utilities using DMPlex. Specifically for the output stage, I need a solid grasp on the global numbers and ideally how to set them into the DMPlex during an input operation and carrying the global numbers through API calls to DMPlexDistribute() or DMPlexMigrate() and hopefully also through some of the mesh adaption APIs. I was wondering if the GlobalToNatural PetscSF manages these global numbers. The next most useful object is the PointSF, but to me, it seems to only help establish DAG point ownership, not DAG point global indices. Otherwise, I have been working with the IS obtained from DMPlexGetPointNumbering() and manually determining global stratum sizes, offsets, and numbers by looking at the signs of the involuted index list that comes with that IS. It's working for now (I can monolithically write meshes to CGNS in parallel), but it is resulting in repetitive code that I will need for another mesh format that I want to support. Sincerely: J.A. Ferrand Embry-Riddle Aeronautical University - Daytona Beach - FL Ph.D. Candidate, Aerospace Engineering M.Sc. Aerospace Engineering B.Sc. Aerospace Engineering B.Sc. Computational Mathematics Phone: (386)-843-1829 Email(s): ferranj2 at my.erau.edu jesus.ferrand at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From sebastian.blauth at itwm.fraunhofer.de Tue Jul 9 02:47:41 2024 From: sebastian.blauth at itwm.fraunhofer.de (Blauth, Sebastian) Date: Tue, 9 Jul 2024 07:47:41 +0000 Subject: [petsc-users] Questions on TAO and gradient norm / inner products Message-ID: Hello, I have some questions regarding TAO and the use the gradient norm. First, I want to use a custom inner product for the optimization in TAO (for computing the gradient norm and, e.g., in the double loop of a quasi-Newton method). I have seen that there is the method TAOSetGradientNorm https://petsc.org/release/manualpages/Tao/TaoSetGradientNorm/ which seems to do this. According to the petsc4py docs https://petsc.org/release/petsc4py/reference/petsc4py.PETSc.TAO.html#petsc4p y.PETSc.TAO.setGradientNorm, this should do what I want. 
However, the method does not always seem to perform correctly: When I use it with "-tao_type lmvm", it really seems to work and gives the correct scaling of the residual in the default TAO monitor. However, when I use, e.g., "-tao_type bqnls", "-tao_type cg", or "-tao_type bncg", the (initial) residual is the same as it is when I do not use the TAOSetGradientNorm. However, there seem to be some slight internal changes (at least for the bqnls), as the number of iterations to reach the tolerance changes from 15 without TAOSetGradientNorm to 17 with TAOSetGradientNorm. For the context: Here, I am trying to solve a PDE constrained optimal control problem, which I tackle in a reduced fashion (using a reduced cost functional which results in an unconstrained optimization using the adjoint approach). For this, I would like to use the L2 inner product induced by the FEM discretization, so the L2 mass matrix. Moreover, I noticed that the performance of "-tao_type lmvm" and "-tao_type bqnls" as well as "-tao_type cg" and "-tao_type bncg" are drastically different for the same unconstrained problem. I would have expected that the algorithms are (more or less) identical for that case. Is this to be expected? Finally, I would like to use TAO for solving PDE constrained shape optimization problems. To do so, I would need to be able to specify the inner product used in the solver (see the above part) and this inner product would need to change in each iteration. Is it possible to do this with TAO? And could anyone give me some hints how to do so in python with petsc4py? Thanks a lot in advance, Sebastian -- Dr. Sebastian Blauth Fraunhofer-Institut für Techno- und Wirtschaftsmathematik ITWM Abteilung Transportvorgänge Fraunhofer-Platz 1, 67663 Kaiserslautern Telefon: +49 631 31600-4968 sebastian.blauth at itwm.fraunhofer.de https://www.itwm.fraunhofer.de -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
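A minimal sketch of the TaoSetGradientNorm usage discussed above, written against the C API (the petsc4py call referenced in the question is the analogous tao.setGradientNorm(M)). The mass matrix M, the objective/gradient callback, and the user context are assumed to exist elsewhere; whether every TAO type actually honors this norm is exactly the open question in this thread:

    #include <petsctao.h>

    /* Sketch: take TAO gradient norms in the M-inner product (e.g. an L2 mass
       matrix). M, obj and ctx are assumptions, not from the thread. */
    static PetscErrorCode SolveWithMassNorm(Vec x, Mat M,
                                            PetscErrorCode (*obj)(Tao, Vec, PetscReal *, Vec, void *),
                                            void *ctx)
    {
      Tao tao;

      PetscFunctionBeginUser;
      PetscCall(TaoCreate(PETSC_COMM_WORLD, &tao));
      PetscCall(TaoSetType(tao, TAOLMVM));
      PetscCall(TaoSetSolution(tao, x));
      PetscCall(TaoSetObjectiveAndGradient(tao, NULL, obj, ctx));
      PetscCall(TaoSetGradientNorm(tao, M)); /* gradient norms measured as sqrt(g^T M g) */
      PetscCall(TaoSetFromOptions(tao));
      PetscCall(TaoSolve(tao));
      PetscCall(TaoDestroy(&tao));
      PetscFunctionReturn(PETSC_SUCCESS);
    }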
Name: smime.p7s Type: application/pkcs7-signature Size: 7943 bytes Desc: not available URL: From bsmith at petsc.dev Tue Jul 9 08:32:00 2024 From: bsmith at petsc.dev (Barry Smith) Date: Tue, 9 Jul 2024 09:32:00 -0400 Subject: [petsc-users] Questions on TAO and gradient norm / inner products In-Reply-To: References: Message-ID: From $ git grep TaoGradientNorm bound/impls/blmvm/blmvm.c: PetscCall(TaoGradientNorm(tao, tao->gradient, NORM_2, &gnorm)); bound/impls/blmvm/blmvm.c: PetscCall(TaoGradientNorm(tao, tao->gradient, NORM_2, &gnorm)); bound/impls/bnk/bnk.c: PetscCall(TaoGradientNorm(tao, tao->gradient, NORM_2, &bnk->gnorm)); bound/impls/bnk/bnk.c: PetscCall(TaoGradientNorm(tao, tao->gradient, NORM_2, &bnk->gnorm)); bound/impls/bnk/bnls.c: PetscCall(TaoGradientNorm(tao, tao->gradient, NORM_2, &bnk->gnorm)); bound/impls/bnk/bntl.c: PetscCall(TaoGradientNorm(tao, tao->gradient, NORM_2, &bnk->gnorm)); bound/impls/bnk/bntl.c: PetscCall(TaoGradientNorm(tao, tao->gradient, NORM_2, &bnk->gnorm)); bound/impls/bnk/bntr.c: PetscCall(TaoGradientNorm(tao, tao->gradient, NORM_2, &bnk->gnorm)); interface/taosolver.c:.seealso: [](ch_tao), `Tao`, `TaoGetGradientNorm()`, `TaoGradientNorm()` interface/taosolver.c:.seealso: [](ch_tao), `Tao`, `TaoSetGradientNorm()`, `TaoGradientNorm()` interface/taosolver.c: TaoGradientNorm - Compute the norm using the `NormType`, the user has selected interface/taosolver.c:PetscErrorCode TaoGradientNorm(Tao tao, Vec gradient, NormType type, PetscReal *gnorm) unconstrained/impls/lmvm/lmvm.c: PetscCall(TaoGradientNorm(tao, tao->gradient, NORM_2, &gnorm)); unconstrained/impls/lmvm/lmvm.c: PetscCall(TaoGradientNorm(tao, tao->gradient, NORM_2, &gnorm)); unconstrained/impls/nls/nls.c: PetscCall(TaoGradientNorm(tao, tao->gradient, NORM_2, &gnorm)); unconstrained/impls/nls/nls.c: PetscCall(TaoGradientNorm(tao, tao->gradient, NORM_2, &gnorm)); unconstrained/impls/nls/nls.c: PetscCall(TaoGradientNorm(tao, tao->gradient, NORM_2, &gnorm)); unconstrained/impls/ntr/ntr.c: PetscCall(TaoGradientNorm(tao, tao->gradient, NORM_2, &gnorm)); unconstrained/impls/ntr/ntr.c: PetscCall(TaoGradientNorm(tao, tao->gradient, NORM_2, &gnorm)); unconstrained/impls/ntr/ntr.c: PetscCall(TaoGradientNorm(tao, tao->gradient, NORM_2, &gnorm)); it appears only some of the algorithm implementations use the norm you provide. While git grep VecNorm indicates many places where it is not used. Likely some of the other algorithm implementations could be easily "fixed" to support by changing the norm computed. But I am not an expert on the algorithms and don't know if all algorithms can mathematically support a user provided norm. You are welcome to take a stab at making the change in an MR, or do you have a simple test problem with a mass matrix we can use to fix the "missing" implementations? Barry > On Jul 9, 2024, at 3:47?AM, Blauth, Sebastian wrote: > > Hello, > > I have some questions regarding TAO and the use the gradient norm. > > First, I want to use a custom inner product for the optimization in TAO (for computing the gradient norm and, e.g., in the double loop of a quasi-Newton method). I have seen that there is the method TAOSetGradientNorm https://urldefense.us/v3/__https://petsc.org/release/manualpages/Tao/TaoSetGradientNorm/__;!!G_uCfscf7eWS!YYY6wdyL1EFFToD3UIx3H-GCsAk3BjMjp0gbO1zDgnDDViycNJ0OgClf3PxjdA7J-xzTSOrACClqwdPuNgYUsmo$ which seems to do this. 
According to the petsc4py docs https://urldefense.us/v3/__https://petsc.org/release/petsc4py/reference/petsc4py.PETSc.TAO.html*petsc4py.PETSc.TAO.setGradientNorm__;Iw!!G_uCfscf7eWS!YYY6wdyL1EFFToD3UIx3H-GCsAk3BjMjp0gbO1zDgnDDViycNJ0OgClf3PxjdA7J-xzTSOrACClqwdPutG_lawA$ , this should do what I want. However, the method does not always seem to perform correctly: When I use it with ?-tao_type lmvm?, it really seems to work and gives the correct scaling of the residual in the default TAO monitor. However, when I use, e.g., ?-tao_type bqnls?, ?-tao_type cg?, or ?-tao_type bncg?, the (initial) residual is the same as it is when I do not use the TAOSetGradientNorm. However, there seem to be some slight internal changes (at least for the bqnls), as the number of iterations to reach the tolerance changes from 15 without TAOSetGradientNorm to 17 with TAOSetGradientNorm. > > For the context: Here, I am trying to solve a PDE constrained optimal control problem, which I tackle in a reduced fashion (using a reduced cost functional which results in an unconstrained optimization using the adjoint approach). For this, I would like to use the L2 inner product induced by the FEM discretization, so the L2 mass matrix. > > Moreover, I noticed that the performance of ?-tao_type lmvm? and ?-tao_type bqnls? as well as ?-tao_type cg? and ?-tao_type bncg? are drastically different for the same unconstrained problem. I would have expected that the algorithms are (more or less) identical for that case. Is this to be expected? > > Finally, I would like to use TAO for solving PDE constrained shape optimization problems. To do so, I would need to be able to specify the inner product used in the solver (see the above part) and this inner product would need to change in each iteration. Is it possible to do this with TAO? And could anyone give me some hints how to do so in python with petsc4py? > > Thanks a lot in advance, > Sebastian > > > > -- > Dr. Sebastian Blauth > Fraunhofer-Institut f?r > Techno- und Wirtschaftsmathematik ITWM > Abteilung Transportvorg?nge > Fraunhofer-Platz 1, 67663 Kaiserslautern > Telefon: +49 631 31600-4968 > sebastian.blauth at itwm.fraunhofer.de > https://urldefense.us/v3/__https://www.itwm.fraunhofer.de__;!!G_uCfscf7eWS!YYY6wdyL1EFFToD3UIx3H-GCsAk3BjMjp0gbO1zDgnDDViycNJ0OgClf3PxjdA7J-xzTSOrACClqwdPuQJOW8S8$ -------------- next part -------------- An HTML attachment was scrubbed... URL: From miguel.salazar at corintis.com Tue Jul 9 10:34:09 2024 From: miguel.salazar at corintis.com (Miguel Angel Salazar de Troya) Date: Tue, 9 Jul 2024 17:34:09 +0200 Subject: [petsc-users] Strategies for coupled nonlinear problems In-Reply-To: References: Message-ID: Are there other alternative methods that might be easier to implement? Miguel On Mon, Jul 8, 2024 at 1:43?PM Matthew Knepley wrote: > On Mon, Jul 8, 2024 at 6:14?AM Miguel Angel Salazar de Troya < > miguel.salazar at corintis.com> wrote: > >> Thanks Adam and Matt, >> >> Matt, can I get away with just using PCFIELDSPLIT? Or do I need the >> SNESFIELDSPLIT? Though it looks like the block Gauss-Seidel is only >> implemented in serial ( >> https://urldefense.us/v3/__https://petsc.org/main/manual/ksp/*block-jacobi-and-overlapping-additive-schwarz-preconditioners__;Iw!!G_uCfscf7eWS!bSoUE_2lViJzEHKf5CEFph-9dqm5ZtOB6QjWVEk4zIyGBukbkcoEGiVzHu84pF637kvsxFyFQoUPYjIUFM5eZZ51NH3TTKGi$ >> ) >> > > You can do what you want for the linear problem, but that will probably > not help. 
The best thing I know of for this kind of nonlinear coupling is > now called primal-dual Newton, a name which I am not wild about. It is > discussed here (https://urldefense.us/v3/__https://core.ac.uk/download/pdf/211337815.pdf__;!!G_uCfscf7eWS!bSoUE_2lViJzEHKf5CEFph-9dqm5ZtOB6QjWVEk4zIyGBukbkcoEGiVzHu84pF637kvsxFyFQoUPYjIUFM5eZZ51NFSFi9w1$ ) and > originated in reference [33] from that thesis. My aim was to allow these > kinds of solvers with that branch. > > >> On a more theoretical note, I have the impression that the convergence >> failures of the Newton-Raphson method for this kind of problem is >> ultimately due to a lack of a diagonally dominant Jacobian. I have not >> found any reference so I might be wrong. >> > > I would say that the dominant direction for momentum hides the direction > for improvement of the coefficient. > > Thanks, > > Matt > > >> Best, >> Miguel >> >> On Sat, Jul 6, 2024 at 3:33?PM Matthew Knepley wrote: >> >>> On Fri, Jul 5, 2024 at 3:29?AM Miguel Angel Salazar de Troya < >>> miguel.salazar at corintis.com> wrote: >>> >>>> Hello, I have the Navier-Stokes equation coupled with a >>>> convection-diffusion equation for the temperature. It is a two-way coupling >>>> because the viscosity depends on the temperature. One way to solve this is >>>> with some kind of fixed point iteration >>>> ZjQcmQRYFpfptBannerStart >>>> This Message Is From an External Sender >>>> This message came from outside your organization. >>>> >>>> ZjQcmQRYFpfptBannerEnd >>>> Hello, >>>> >>>> I have the Navier-Stokes equation coupled with a convection-diffusion >>>> equation for the temperature. It is a two-way coupling because the >>>> viscosity depends on the temperature. One way to solve this is with some >>>> kind of fixed point iteration scheme, where I solve each equation >>>> separately in a loop until I see convergence. I am aware this is not >>>> possible directly at the SNES level. Is there something that one can do >>>> using PCFIELDSPLIT? I would like to assemble my fully coupled system and >>>> play with the solver options to get some kind of fixed-point iteration >>>> scheme. I would like to avoid having to build two separate SNES solvers, >>>> one per equation. Any reference on techniques to solve this type of coupled >>>> system is welcome. >>>> >>> >>> Hi Miguel, >>> >>> I have a branch >>> >>> >>> https://urldefense.us/v3/__https://gitlab.com/petsc/petsc/-/tree/knepley/feature-snes-fieldsplit?ref_type=heads__;!!G_uCfscf7eWS!bSoUE_2lViJzEHKf5CEFph-9dqm5ZtOB6QjWVEk4zIyGBukbkcoEGiVzHu84pF637kvsxFyFQoUPYjIUFM5eZZ51NA5DeA9R$ >>> >>> that will allow you to do exactly what you want to do. However, there >>> are caveats. In order to have SNES do this, it needs a way to selectively >>> reassemble subproblems. I assume you are using Firedrake, so this will >>> not work. I would definitely be willing to work with those guys to get >>> this going, introducing callbacks, just as we did on the FieldSplit case. >>> >>> Thanks, >>> >>> Matt >>> >>> >>>> Best, >>>> Miguel >>>> >>> >>> >>> -- >>> What most experimenters take for granted before they begin their >>> experiments is infinitely more interesting than any results to which their >>> experiments lead. 
>>> -- Norbert Wiener >>> >>> https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!bSoUE_2lViJzEHKf5CEFph-9dqm5ZtOB6QjWVEk4zIyGBukbkcoEGiVzHu84pF637kvsxFyFQoUPYjIUFM5eZZ51NBJa7cKb$ >>> >>> >> > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!bSoUE_2lViJzEHKf5CEFph-9dqm5ZtOB6QjWVEk4zIyGBukbkcoEGiVzHu84pF637kvsxFyFQoUPYjIUFM5eZZ51NBJa7cKb$ > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From andrsd at gmail.com Tue Jul 9 14:18:35 2024 From: andrsd at gmail.com (David Andrs) Date: Tue, 9 Jul 2024 13:18:35 -0600 Subject: [petsc-users] Example with Label-restricted field variables Message-ID: Hi! I am looking for an example that would show how to set an auxiliary field variable restricted to a Label. The set up I am interested in would be with DMPlex, PetscFE, PetscDS. All examples I found set the aux. fields on the whole domain. I would like to set such an aux. field to 2 different kinds (I am not sure if the setup would be any different) of Labels: 1. a cell set (for cell set restricted forcing function) 2. a face set (for boundary conditions) With regards, David -------------- next part -------------- An HTML attachment was scrubbed... URL: From qiyuelu1 at gmail.com Tue Jul 9 16:38:46 2024 From: qiyuelu1 at gmail.com (Qiyue Lu) Date: Tue, 9 Jul 2024 16:38:46 -0500 Subject: [petsc-users] How to add additional cpp flags in the makefile while compiling a code? Message-ID: Hello, I am trying to compile a *.cpp code under PETSc environment. An additional flag is needed to link the metis partition library. ############## # This line is necessary for linking, remember need this flag in both compiler options and linking options LDFLAGS= -fopenmp -lpmix -lmetis # This line is required for using some 'helper' function like cudaErrorCheck from samples suite CXXFLAGS += -Wl,-rpath,/sw/spack/deltas11-2023-03/apps/linux-rhel8-zen/gcc-8.5.0/metis-5.1.0-v5iddu2/lib # This two lines are required for using PETSc include ${PETSC_DIR}/lib/petsc/conf/variables include ${PETSC_DIR}/lib/petsc/conf/rules ############## However, adding "CXXFLAGS += -Wl,-rpath, path_to_lib" in the makefile under the same directory, is not working. And CPPFLAGS and CXXPPFLAGS won't work either. No errors while compiling, but cannot find the metis.o while running the binary. Another confusing point is, in the compilation flash screen, LDFLAGS options can be found, but CXXFLAGS cannot. Could you please inform me how to add additional flags in the makefile? Thanks, Qiyue Lu -------------- next part -------------- An HTML attachment was scrubbed... URL: From balay.anl at fastmail.org Tue Jul 9 18:25:14 2024 From: balay.anl at fastmail.org (Satish Balay) Date: Tue, 9 Jul 2024 18:25:14 -0500 (CDT) Subject: [petsc-users] How to add additional cpp flags in the makefile while compiling a code? In-Reply-To: References: Message-ID: An HTML attachment was scrubbed... URL: From balay.anl at fastmail.org Tue Jul 9 18:34:33 2024 From: balay.anl at fastmail.org (Satish Balay) Date: Tue, 9 Jul 2024 18:34:33 -0500 (CDT) Subject: [petsc-users] How to add additional cpp flags in the makefile while compiling a code? In-Reply-To: References: Message-ID: An HTML attachment was scrubbed... 
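Since the Label-restricted auxiliary field question above is not answered elsewhere in this digest, here is a rough sketch of one common way to set it up, under assumptions not stated in the question: the label name "Cell Sets", the label value 1, and the coefficient function are placeholders, and the exact signature of DMSetAuxiliaryVec varies between PETSc versions, so this should be checked against your release:

    #include <petscdmplex.h>
    #include <petscfe.h>

    /* Sketch: auxiliary field restricted to a cell-set label. Names, the label
       value, and the projected function are placeholders. */
    static PetscErrorCode SetupCellSetAux(DM dm, PetscFE feAux,
                                          PetscErrorCode (*nu)(PetscInt, PetscReal, const PetscReal[], PetscInt, PetscScalar *, void *))
    {
      DM      dmAux, coordDM;
      DMLabel label;
      Vec     aux;
      PetscErrorCode (*funcs[1])(PetscInt, PetscReal, const PetscReal[], PetscInt, PetscScalar *, void *) = {nu};

      PetscFunctionBeginUser;
      PetscCall(DMGetCoordinateDM(dm, &coordDM));
      PetscCall(DMClone(dm, &dmAux));
      PetscCall(DMSetCoordinateDM(dmAux, coordDM));
      PetscCall(DMGetLabel(dm, "Cell Sets", &label));
      PetscCall(DMAddField(dmAux, label, (PetscObject)feAux)); /* restrict the field to the label's support */
      PetscCall(DMCreateDS(dmAux));
      PetscCall(DMCreateLocalVector(dmAux, &aux));
      PetscCall(DMProjectFunctionLocal(dmAux, 0.0, funcs, NULL, INSERT_ALL_VALUES, aux));
      PetscCall(DMSetAuxiliaryVec(dm, label, 1, 0, aux)); /* keyed by (label, value, part); 1 is a placeholder cell-set id */
      PetscCall(VecDestroy(&aux));
      PetscCall(DMDestroy(&dmAux));
      PetscFunctionReturn(PETSC_SUCCESS);
    }

For boundary data the same pattern would apply with the "Face Sets" label and the appropriate face-set value.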
URL: From liufield at gmail.com Wed Jul 10 09:20:10 2024 From: liufield at gmail.com (neil liu) Date: Wed, 10 Jul 2024 10:20:10 -0400 Subject: [petsc-users] About the face orientation. Message-ID: Dear Petsc developers, I am checking the face orientations for DMPLEX. I found the following rule for face 0 (edge ordering / orientation):
0 1 2 -> 0
1 0 2 -> -1
2 1 0 -> -2
0 2 1 -> -3
How about -4 or -5? I am trying a simple mesh, therefore didn't show any face with orientation -4 or -5. Does this also apply to all other faces? E.g., for face 1 (edge ordering / orientation): 3 4 0 -> 0, 4 3 0 -> -1, ..... Thanks, Xiaodong -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris at resfrac.com Wed Jul 10 16:01:31 2024 From: chris at resfrac.com (Chris Hewson) Date: Wed, 10 Jul 2024 15:01:31 -0600 Subject: [petsc-users] MKL Pardiso returning NAN values Message-ID: Hi There, We have a matrix that is singular and trying to solve it. We first use an iterative solve with KSPBCGS, the solution vector is nan values and the converged reason from PETSc of KSP_DIVERGED_NANORINF, that's great and what I would expect. Sometimes in our program we redo a failed solve using the MKL Pardiso solver, when the same matrix and vectors get put into that solver which is a KSPPREONLY, I get KSP_CONVERGED_ITS as a converged reason and solution vector with nan values in it. Stepping through the PETSc calls, I see that the external call to Pardiso doesn't return an error for this, so not really the fault of PETSc, but curious if y'all have seen this before or a solution/workaround to this? *Chris Hewson* Senior Reservoir Simulation Engineer ResFrac +1.587.575.9792 -------------- next part -------------- An HTML attachment was scrubbed... URL: From mfadams at lbl.gov Thu Jul 11 09:37:52 2024 From: mfadams at lbl.gov (Mark Adams) Date: Thu, 11 Jul 2024 10:37:52 -0400 Subject: [petsc-users] MKL Pardiso returning NAN values In-Reply-To: References: Message-ID: Not sure I understand the question, but if you use '-ksp_type richardson' then you get a convergence check. Then you might say '-ksp_rtol 1e-1 -ksp_max_it 1' just to check if it failed with converged_reason. Just requires one residual calculation. Mark On Wed, Jul 10, 2024 at 5:02 PM Chris Hewson wrote: > Hi There, > > We have a matrix that is singular and trying to solve it. We first use an > iterative solve with KSPBCGS, the solution vector is nan values and the > converged reason from PETSc of KSP_DIVERGED_NANORINF, that's great and what > I would expect. > > Sometimes in our program we redo a failed solve using the MKL Pardiso > solver, when the same matrix and vectors get put into that solver which is > a KSPPREONLY, I get KSP_CONVERGED_ITS as a converged reason and solution > vector with nan values in it. > > Stepping through the PETSc calls, I see that the external call to Pardiso > doesn't return an error for this, so not really the fault of PETSc, but > curious if y'all have seen this before or a solution/workaround to this? 
> > *Chris Hewson* > Senior Reservoir Simulation Engineer > ResFrac > +1.587.575.9792 > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mpovolot at gmail.com Thu Jul 11 12:55:47 2024 From: mpovolot at gmail.com (Michael Povolotskyi) Date: Thu, 11 Jul 2024 13:55:47 -0400 Subject: [petsc-users] question on matrix preallocation Message-ID: <7d67209f-0937-48e2-987a-fda17ea21e61@gmail.com> An HTML attachment was scrubbed... URL: From qiyuelu1 at gmail.com Thu Jul 11 14:28:24 2024 From: qiyuelu1 at gmail.com (Qiyue Lu) Date: Thu, 11 Jul 2024 14:28:24 -0500 Subject: [petsc-users] How to add additional cpp flags in the makefile while compiling a code? In-Reply-To: References: Message-ID: Thanks, it works with appending to LDFLAGS. May I know how to add additional CPP options? It seems manually adding to CXXFLAGS, CPPFLAGS and CXXPPFLAGS won't work in the makefile. Qiyue Lu On Tue, Jul 9, 2024 at 6:34?PM Satish Balay wrote: > And sometimes you might need to list these additional libraries/options > after PETSc libraries - in the link command. One way: > > balay at pj01:~/test$ cat makefile > > include ${PETSC_DIR}/lib/petsc/conf/variables > include ${PETSC_DIR}/lib/petsc/conf/rules > LDLIBS += -foobar > > balay at pj01:~/test$ make ex2 > mpicxx -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas > -Wno-lto-type-mismatch -Wno-psabi -fstack-protector -fvisibility=hidden -g > -O0 -I/home/balay/petsc/include > -I/home/balay/petsc/arch-linux-c-debug/include ex2.cpp > -Wl,-rpath,/home/balay/petsc/arch-linux-c-debug/lib > -L/home/balay/petsc/arch-linux-c-debug/lib > -Wl,-rpath,/software/mpich-4.1.1/lib -L/software/mpich-4.1.1/lib > -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/13 > -L/usr/lib/gcc/x86_64-redhat-linux/13 -lpetsc -llapack -lblas -ltriangle > -lm -lX11 -lmpifort -lmpi -lgfortran -lm -lgfortran -lm -lgcc_s -lquadmath > -lstdc++ -lquadmath -foobar -o ex2 > g++: error: unrecognized command-line option ?-foobar? > make: *** [: ex2] Error 1 > > Satish > > On Tue, 9 Jul 2024, Satish Balay wrote: > > > It depends on your makefile - and how you are over-riding the targets. > 1. Why add -rpath to CXXFLAGS, and not LDFLAGS? The following work for me > with the default makefile format used by petsc examples > > balay@ pj01: ~/test$ ls ex2. cpp makefile > > ZjQcmQRYFpfptBannerStart > > This Message Is From an External Sender > > This message came from outside your organization. > > > > ZjQcmQRYFpfptBannerEnd > > > > It depends on your makefile - and how you are over-riding the targets. > > > > 1. Why add -rpath to CXXFLAGS, and not LDFLAGS? 
> > > > The following work for me with the default makefile format used by petsc > examples > > > > balay at pj01:~/test$ ls > > ex2.cpp makefile > > balay at pj01:~/test$ cat makefile > > > > include ${PETSC_DIR}/lib/petsc/conf/variables > > include ${PETSC_DIR}/lib/petsc/conf/rules > > > > balay at pj01:~/test$ make ex2 > > mpicxx -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas > -Wno-lto-type-mismatch -Wno-psabi -fstack-protector -fvisibility=hidden -g > -O0 -I/home/balay/petsc/include -I/home/balay/petsc/arch-lin > > ux-c-debug/include ex2.cpp > -Wl,-rpath,/home/balay/petsc/arch-linux-c-debug/lib > -L/home/balay/petsc/arch-linux-c-debug/lib > -Wl,-rpath,/software/mpich-4.1.1/lib -L/software/mpich-4.1.1/lib > -Wl,-rpath,/usr > > /lib/gcc/x86_64-redhat-linux/13 -L/usr/lib/gcc/x86_64-redhat-linux/13 > -lpetsc -llapack -lblas -ltriangle -lm -lX11 -lmpifort -lmpi -lgfortran -lm > -lgfortran -lm -lgcc_s -lquadmath -lstdc++ -lquadmath -o ex2 > > balay at pj01:~/test$ make clean > > balay at pj01:~/test$ make ex2 LDFLAGS=-foobar > > mpicxx -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas > -Wno-lto-type-mismatch -Wno-psabi -fstack-protector -fvisibility=hidden -g > -O0 -I/home/balay/petsc/include -I/home/balay/petsc/arch-lin > > ux-c-debug/include -foobar ex2.cpp > -Wl,-rpath,/home/balay/petsc/arch-linux-c-debug/lib > -L/home/balay/petsc/arch-linux-c-debug/lib > -Wl,-rpath,/software/mpich-4.1.1/lib -L/software/mpich-4.1.1/lib -Wl,-rpa > > th,/usr/lib/gcc/x86_64-redhat-linux/13 > -L/usr/lib/gcc/x86_64-redhat-linux/13 -lpetsc -llapack -lblas -ltriangle > -lm -lX11 -lmpifort -lmpi -lgfortran -lm -lgfortran -lm -lgcc_s -lquadmath > -lstdc++ -lquadmath > > -o ex2 > > g++: error: unrecognized command-line option ?-foobar? > > make: *** [: ex2] Error 1 > > balay at pj01:~/test$ make ex2 CXXFLAGS=-foobar > > mpicxx -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas > -Wno-lto-type-mismatch -Wno-psabi -fstack-protector -fvisibility=hidden -g > -O0 -foobar -I/home/balay/petsc/include -I/home/balay/petsc/a > > rch-linux-c-debug/include ex2.cpp > -Wl,-rpath,/home/balay/petsc/arch-linux-c-debug/lib > -L/home/balay/petsc/arch-linux-c-debug/lib > -Wl,-rpath,/software/mpich-4.1.1/lib -L/software/mpich-4.1.1/lib -Wl,-rpa > > th,/usr/lib/gcc/x86_64-redhat-linux/13 > -L/usr/lib/gcc/x86_64-redhat-linux/13 -lpetsc -llapack -lblas -ltriangle > -lm -lX11 -lmpifort -lmpi -lgfortran -lm -lgfortran -lm -lgcc_s -lquadmath > -lstdc++ -lquadmath > > -o ex2 > > g++: error: unrecognized command-line option ?-foobar? > > make: *** [: ex2] Error 1 > > balay at pj01:~/test$ > > > > Satish > > > > On Tue, 9 Jul 2024, Qiyue Lu wrote: > > > > > Hello, > > > I am trying to compile a *.cpp code under PETSc environment. An > additional > > > flag is needed to link the metis partition library. 
> > > ############## > > > # This line is necessary for linking, remember need this flag in both > > > compiler options and linking options > > > LDFLAGS= -fopenmp -lpmix -lmetis > > > > > > # This line is required for using some 'helper' function like > > > cudaErrorCheck from samples suite > > > CXXFLAGS += > > > > -Wl,-rpath,/sw/spack/deltas11-2023-03/apps/linux-rhel8-zen/gcc-8.5.0/metis-5.1.0-v5iddu2/lib > > > # This two lines are required for using PETSc > > > include ${PETSC_DIR}/lib/petsc/conf/variables > > > include ${PETSC_DIR}/lib/petsc/conf/rules > > > ############## > > > > > > However, adding "CXXFLAGS += -Wl,-rpath, path_to_lib" in the makefile > under > > > the same directory, is not working. And CPPFLAGS and CXXPPFLAGS won't > work > > > either. No errors while compiling, but cannot find the metis.o while > > > running the binary. Another confusing point is, in the compilation > flash > > > screen, LDFLAGS options can be found, but CXXFLAGS cannot. > > > > > > Could you please inform me how to add additional flags in the makefile? > > > > > > Thanks, > > > Qiyue Lu > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From qiyuelu1 at gmail.com Thu Jul 11 14:42:20 2024 From: qiyuelu1 at gmail.com (Qiyue Lu) Date: Thu, 11 Jul 2024 14:42:20 -0500 Subject: [petsc-users] How to add additional cpp flags in the makefile while compiling a code? In-Reply-To: References: Message-ID: Please ignore the previous thread. By using NVCC, I can append additional options to CXXPPFLAGS. Thanks On Thu, Jul 11, 2024 at 2:28?PM Qiyue Lu wrote: > Thanks, it works with appending to LDFLAGS. > May I know how to add additional CPP options? It seems manually adding to > CXXFLAGS, CPPFLAGS and CXXPPFLAGS won't work in the makefile. > > Qiyue Lu > > On Tue, Jul 9, 2024 at 6:34?PM Satish Balay > wrote: > >> And sometimes you might need to list these additional libraries/options >> after PETSc libraries - in the link command. One way: >> >> balay at pj01:~/test$ cat makefile >> >> include ${PETSC_DIR}/lib/petsc/conf/variables >> include ${PETSC_DIR}/lib/petsc/conf/rules >> LDLIBS += -foobar >> >> balay at pj01:~/test$ make ex2 >> mpicxx -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas >> -Wno-lto-type-mismatch -Wno-psabi -fstack-protector -fvisibility=hidden -g >> -O0 -I/home/balay/petsc/include >> -I/home/balay/petsc/arch-linux-c-debug/include ex2.cpp >> -Wl,-rpath,/home/balay/petsc/arch-linux-c-debug/lib >> -L/home/balay/petsc/arch-linux-c-debug/lib >> -Wl,-rpath,/software/mpich-4.1.1/lib -L/software/mpich-4.1.1/lib >> -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/13 >> -L/usr/lib/gcc/x86_64-redhat-linux/13 -lpetsc -llapack -lblas -ltriangle >> -lm -lX11 -lmpifort -lmpi -lgfortran -lm -lgfortran -lm -lgcc_s -lquadmath >> -lstdc++ -lquadmath -foobar -o ex2 >> g++: error: unrecognized command-line option ?-foobar? >> make: *** [: ex2] Error 1 >> >> Satish >> >> On Tue, 9 Jul 2024, Satish Balay wrote: >> >> > It depends on your makefile - and how you are over-riding the targets. >> 1. Why add -rpath to CXXFLAGS, and not LDFLAGS? The following work for me >> with the default makefile format used by petsc examples >> > balay@ pj01: ~/test$ ls ex2. cpp makefile >> > ZjQcmQRYFpfptBannerStart >> > This Message Is From an External Sender >> > This message came from outside your organization. >> > >> > ZjQcmQRYFpfptBannerEnd >> > >> > It depends on your makefile - and how you are over-riding the targets. >> > >> > 1. 
Why add -rpath to CXXFLAGS, and not LDFLAGS? >> > >> > The following work for me with the default makefile format used by >> petsc examples >> > >> > balay at pj01:~/test$ ls >> > ex2.cpp makefile >> > balay at pj01:~/test$ cat makefile >> > >> > include ${PETSC_DIR}/lib/petsc/conf/variables >> > include ${PETSC_DIR}/lib/petsc/conf/rules >> > >> > balay at pj01:~/test$ make ex2 >> > mpicxx -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas >> -Wno-lto-type-mismatch -Wno-psabi -fstack-protector -fvisibility=hidden -g >> -O0 -I/home/balay/petsc/include -I/home/balay/petsc/arch-lin >> > ux-c-debug/include ex2.cpp >> -Wl,-rpath,/home/balay/petsc/arch-linux-c-debug/lib >> -L/home/balay/petsc/arch-linux-c-debug/lib >> -Wl,-rpath,/software/mpich-4.1.1/lib -L/software/mpich-4.1.1/lib >> -Wl,-rpath,/usr >> > /lib/gcc/x86_64-redhat-linux/13 -L/usr/lib/gcc/x86_64-redhat-linux/13 >> -lpetsc -llapack -lblas -ltriangle -lm -lX11 -lmpifort -lmpi -lgfortran -lm >> -lgfortran -lm -lgcc_s -lquadmath -lstdc++ -lquadmath -o ex2 >> > balay at pj01:~/test$ make clean >> > balay at pj01:~/test$ make ex2 LDFLAGS=-foobar >> > mpicxx -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas >> -Wno-lto-type-mismatch -Wno-psabi -fstack-protector -fvisibility=hidden -g >> -O0 -I/home/balay/petsc/include -I/home/balay/petsc/arch-lin >> > ux-c-debug/include -foobar ex2.cpp >> -Wl,-rpath,/home/balay/petsc/arch-linux-c-debug/lib >> -L/home/balay/petsc/arch-linux-c-debug/lib >> -Wl,-rpath,/software/mpich-4.1.1/lib -L/software/mpich-4.1.1/lib -Wl,-rpa >> > th,/usr/lib/gcc/x86_64-redhat-linux/13 >> -L/usr/lib/gcc/x86_64-redhat-linux/13 -lpetsc -llapack -lblas -ltriangle >> -lm -lX11 -lmpifort -lmpi -lgfortran -lm -lgfortran -lm -lgcc_s -lquadmath >> -lstdc++ -lquadmath >> > -o ex2 >> > g++: error: unrecognized command-line option ?-foobar? >> > make: *** [: ex2] Error 1 >> > balay at pj01:~/test$ make ex2 CXXFLAGS=-foobar >> > mpicxx -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas >> -Wno-lto-type-mismatch -Wno-psabi -fstack-protector -fvisibility=hidden -g >> -O0 -foobar -I/home/balay/petsc/include -I/home/balay/petsc/a >> > rch-linux-c-debug/include ex2.cpp >> -Wl,-rpath,/home/balay/petsc/arch-linux-c-debug/lib >> -L/home/balay/petsc/arch-linux-c-debug/lib >> -Wl,-rpath,/software/mpich-4.1.1/lib -L/software/mpich-4.1.1/lib -Wl,-rpa >> > th,/usr/lib/gcc/x86_64-redhat-linux/13 >> -L/usr/lib/gcc/x86_64-redhat-linux/13 -lpetsc -llapack -lblas -ltriangle >> -lm -lX11 -lmpifort -lmpi -lgfortran -lm -lgfortran -lm -lgcc_s -lquadmath >> -lstdc++ -lquadmath >> > -o ex2 >> > g++: error: unrecognized command-line option ?-foobar? >> > make: *** [: ex2] Error 1 >> > balay at pj01:~/test$ >> > >> > Satish >> > >> > On Tue, 9 Jul 2024, Qiyue Lu wrote: >> > >> > > Hello, >> > > I am trying to compile a *.cpp code under PETSc environment. An >> additional >> > > flag is needed to link the metis partition library. 
>> > > ############## >> > > # This line is necessary for linking, remember need this flag in both >> > > compiler options and linking options >> > > LDFLAGS= -fopenmp -lpmix -lmetis >> > > >> > > # This line is required for using some 'helper' function like >> > > cudaErrorCheck from samples suite >> > > CXXFLAGS += >> > > >> -Wl,-rpath,/sw/spack/deltas11-2023-03/apps/linux-rhel8-zen/gcc-8.5.0/metis-5.1.0-v5iddu2/lib >> > > # This two lines are required for using PETSc >> > > include ${PETSC_DIR}/lib/petsc/conf/variables >> > > include ${PETSC_DIR}/lib/petsc/conf/rules >> > > ############## >> > > >> > > However, adding "CXXFLAGS += -Wl,-rpath, path_to_lib" in the makefile >> under >> > > the same directory, is not working. And CPPFLAGS and CXXPPFLAGS won't >> work >> > > either. No errors while compiling, but cannot find the metis.o while >> > > running the binary. Another confusing point is, in the compilation >> flash >> > > screen, LDFLAGS options can be found, but CXXFLAGS cannot. >> > > >> > > Could you please inform me how to add additional flags in the >> makefile? >> > > >> > > Thanks, >> > > Qiyue Lu >> > > >> > >> > >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From balay.anl at fastmail.org Thu Jul 11 14:45:33 2024 From: balay.anl at fastmail.org (Satish Balay) Date: Thu, 11 Jul 2024 14:45:33 -0500 (CDT) Subject: [petsc-users] How to add additional cpp flags in the makefile while compiling a code? In-Reply-To: References: Message-ID: <48ddb164-3e9e-b7a4-8d44-d06571ac3464@fastmail.org> An HTML attachment was scrubbed... URL: From bsmith at petsc.dev Thu Jul 11 14:53:13 2024 From: bsmith at petsc.dev (Barry Smith) Date: Thu, 11 Jul 2024 15:53:13 -0400 Subject: [petsc-users] MKL Pardiso returning NAN values In-Reply-To: References: Message-ID: We don't by default, for example, check the norm of the solution returned by an external direct solver due to the added expense. We do check the error condition returned by MKL Pardiso with PetscCheck(mat_mkl_pardiso->err >= 0, PETSC_COMM_SELF, PETSC_ERR_LIB, "Error reported by MKL PARDISO: err=%d. Please check manual", mat_mkl_pardiso->err); except (I don't know why) in MatDestroy_MKL_PARDISO() So are you saying that MKL_Padiso is returning inf/nan in the solution but a return code less than or equal to zero? Is this expected behavior of MKL Pardiso? > On Jul 10, 2024, at 5:01?PM, Chris Hewson wrote: > > This Message Is From an External Sender > This message came from outside your organization. > Hi There, > > We have a matrix that is singular and trying to solve it. We first use an iterative solve with KSPBCGS, the solution vector is nan values and the converged reason from PETSc of KSP_DIVERGED_NANORINF, that's great and what I would expect. > > Sometimes in our program we redo a failed solve using the MKL Pardiso solver, when the same matrix and vectors get put into that solver which is a KSPPREONLY, I get KSP_CONVERGED_ITS as a converged reason and solution vector with nan values in it. > > Stepping through the PETSc calls, I see that the external call to Pardiso doesn't return an error for this, so not really the fault of PETSc, but curious if y'all have seen this before or a solution/workaround to this? > > Chris Hewson > Senior Reservoir Simulation Engineer > ResFrac > +1.587.575.9792 -------------- next part -------------- An HTML attachment was scrubbed... 
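Since MKL Pardiso apparently returns a non-negative error code here, one way to catch the Inf/NaN on the caller's side is to check the residual of the direct solve explicitly. A minimal sketch, assuming A, b, x and the PREONLY ksp are the objects from this thread and nothing else:

    #include <petscksp.h>

    /* Sketch: explicit Inf/NaN check on the residual after a PREONLY direct solve. */
    static PetscErrorCode SolveAndCheck(KSP ksp, Mat A, Vec b, Vec x)
    {
      Vec       r;
      PetscReal nrm;

      PetscFunctionBeginUser;
      PetscCall(KSPSolve(ksp, b, x));
      PetscCall(VecDuplicate(b, &r));
      PetscCall(MatMult(A, x, r));    /* r = A x */
      PetscCall(VecAYPX(r, -1.0, b)); /* r = b - A x */
      PetscCall(VecNorm(r, NORM_2, &nrm));
      PetscCheck(!PetscIsInfOrNanReal(nrm), PetscObjectComm((PetscObject)ksp), PETSC_ERR_FP,
                 "Direct solve produced Inf/NaN (residual norm %g)", (double)nrm);
      PetscCall(VecDestroy(&r));
      PetscFunctionReturn(PETSC_SUCCESS);
    }

Mark's '-ksp_type richardson -ksp_max_it 1' suggestion achieves the same effect through KSP's own convergence test, at the cost of one extra residual evaluation.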
URL: From bsmith at petsc.dev Thu Jul 11 14:55:11 2024 From: bsmith at petsc.dev (Barry Smith) Date: Thu, 11 Jul 2024 15:55:11 -0400 Subject: [petsc-users] question on matrix preallocation In-Reply-To: <7d67209f-0937-48e2-987a-fda17ea21e61@gmail.com> References: <7d67209f-0937-48e2-987a-fda17ea21e61@gmail.com> Message-ID: MatGetInfo() is the programmatic interface used to get this information. You can also run a proggram with -info and grep for malloc. Barry > On Jul 11, 2024, at 1:55?PM, Michael Povolotskyi wrote: > > This Message Is From an External Sender > This message came from outside your organization. > Hello, > > is there an option in PETSC that allows to check at run time if a sparse > matrix has been preallocated correctly? I remember there was something > like that is the older versions, but cannot find it now. > > The goal is to get rid of any possible time overhead due to dynamic > preallocation. > > Thank you, > > Michael. > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mpovolot at gmail.com Thu Jul 11 15:02:10 2024 From: mpovolot at gmail.com (Michael Povolotskyi) Date: Thu, 11 Jul 2024 16:02:10 -0400 Subject: [petsc-users] question on matrix preallocation In-Reply-To: References: <7d67209f-0937-48e2-987a-fda17ea21e61@gmail.com> Message-ID: <211dd26d-5f40-460a-96ec-eebd96abaa70@gmail.com> Thank you, let me clarify my question. Imagine that I have a sparse matrix, and the number of non zero entries that I specified is too small. I know that I can insert values in it but it will be slow. I remember there was a way to make PETSC to throw an error if a number of non zero elements per row was bigger that was preallocated. Then I could fix my algorithm. Is this functionality available with the current version? Michael. On 7/11/2024 3:55 PM, Barry Smith wrote: > > ??MatGetInfo() is the programmatic interface used to get this > information. ?You can also run a proggram with -info and grep for malloc. > > ? Barry > > >> On Jul 11, 2024, at 1:55?PM, Michael Povolotskyi >> wrote: >> >> This Message Is From an External Sender >> This message came from outside your organization. >> Hello, >> >> is there an option in PETSC that allows to check at run time if a sparse >> matrix has been preallocated correctly? I remember there was something >> like that is the older versions, but cannot find it now. >> >> The goal is to get rid of any possible time overhead due to dynamic >> preallocation. >> >> Thank you, >> >> Michael. >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris at resfrac.com Thu Jul 11 15:03:08 2024 From: chris at resfrac.com (Chris Hewson) Date: Thu, 11 Jul 2024 14:03:08 -0600 Subject: [petsc-users] MKL Pardiso returning NAN values In-Reply-To: References: Message-ID: So are you saying that MKL_Padiso is returning inf/nan in the solution but a return code less than or equal to zero? - that's correct Is this expected behavior of MKL Pardiso? - from what I can see, that does appear to be the expected behavior of Pardiso. I mean I guess it has solved the system correctly. Thanks Mark for the suggestion, that seems like a reasonable solution. Chris On Thu, Jul 11, 2024, 13:53 Barry Smith wrote: > > We don't by default, for example, check the norm of the solution > returned by an external direct solver due to the added expense. 
> > We do check the error condition returned by MKL Pardiso with > > PetscCheck(mat_mkl_pardiso->err >= 0, PETSC_COMM_SELF, PETSC_ERR_LIB, > "Error reported by MKL PARDISO: err=%d. Please check manual", > mat_mkl_pardiso->err); > > except (I don't know why) in MatDestroy_MKL_PARDISO() > > So are you saying that MKL_Padiso is returning inf/nan in the solution but > a return code less than or equal to zero? Is this expected behavior of MKL > Pardiso? > > > > On Jul 10, 2024, at 5:01?PM, Chris Hewson wrote: > > This Message Is From an External Sender > This message came from outside your organization. > Hi There, > > We have a matrix that is singular and trying to solve it. We first use an > iterative solve with KSPBCGS, the solution vector is nan values and the > converged reason from PETSc of KSP_DIVERGED_NANORINF, that's great and what > I would expect. > > Sometimes in our program we redo a failed solve using the MKL Pardiso > solver, when the same matrix and vectors get put into that solver which is > a KSPPREONLY, I get KSP_CONVERGED_ITS as a converged reason and solution > vector with nan values in it. > > Stepping through the PETSc calls, I see that the external call to Pardiso > doesn't return an error for this, so not really the fault of PETSc, but > curious if y'all have seen this before or a solution/workaround to this? > > *Chris Hewson* > Senior Reservoir Simulation Engineer > ResFrac > +1.587.575.9792 > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at petsc.dev Thu Jul 11 15:04:37 2024 From: bsmith at petsc.dev (Barry Smith) Date: Thu, 11 Jul 2024 16:04:37 -0400 Subject: [petsc-users] question on matrix preallocation In-Reply-To: <211dd26d-5f40-460a-96ec-eebd96abaa70@gmail.com> References: <7d67209f-0937-48e2-987a-fda17ea21e61@gmail.com> <211dd26d-5f40-460a-96ec-eebd96abaa70@gmail.com> Message-ID: <7512109B-B7E1-4E5D-9AED-A87BCE74926B@petsc.dev> By default, if you preallocate but not enough, it will automatically error unless you call MatSetOption(mat,MAT_NEW_NONZERO_ALLOCATION_ERR,PETSC_FALSE); > On Jul 11, 2024, at 4:02?PM, Michael Povolotskyi wrote: > > Thank you, > > let me clarify my question. > > Imagine that I have a sparse matrix, and the number of non zero entries that I specified is too small. > > I know that I can insert values in it but it will be slow. > > I remember there was a way to make PETSC to throw an error if a number of non zero elements per row was bigger that was preallocated. Then I could fix my algorithm. Is this functionality available with the current version? > > Michael. > > On 7/11/2024 3:55 PM, Barry Smith wrote: >> >> MatGetInfo() is the programmatic interface used to get this information. You can also run a proggram with -info and grep for malloc. >> >> Barry >> >> >>> On Jul 11, 2024, at 1:55?PM, Michael Povolotskyi wrote: >>> >>> This Message Is From an External Sender >>> This message came from outside your organization. >>> Hello, >>> >>> is there an option in PETSC that allows to check at run time if a sparse >>> matrix has been preallocated correctly? I remember there was something >>> like that is the older versions, but cannot find it now. >>> >>> The goal is to get rid of any possible time overhead due to dynamic >>> preallocation. >>> >>> Thank you, >>> >>> Michael. >>> >>> >> -------------- next part -------------- An HTML attachment was scrubbed... 
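A small sketch tying together the two pieces mentioned in this thread, MAT_NEW_NONZERO_ALLOCATION_ERR and MatGetInfo. The matrix A and its assembly loop are assumed to exist elsewhere:

    #include <petscmat.h>

    /* Sketch: fail fast on entries outside the preallocation, then report how many
       mallocs assembly needed (0 means the preallocation was sufficient). */
    static PetscErrorCode CheckPreallocation(Mat A)
    {
      MatInfo info;

      PetscFunctionBeginUser;
      PetscCall(MatSetOption(A, MAT_NEW_NONZERO_ALLOCATION_ERR, PETSC_TRUE));
      /* ... MatSetValues() loop, MatAssemblyBegin()/MatAssemblyEnd() ... */
      PetscCall(MatGetInfo(A, MAT_LOCAL, &info));
      PetscCall(PetscPrintf(PETSC_COMM_SELF, "mallocs during assembly: %g\n", info.mallocs));
      PetscFunctionReturn(PETSC_SUCCESS);
    }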
URL: From mpovolot at gmail.com Thu Jul 11 15:51:58 2024 From: mpovolot at gmail.com (Michael Povolotskyi) Date: Thu, 11 Jul 2024 16:51:58 -0400 Subject: [petsc-users] question on matrix preallocation In-Reply-To: <7512109B-B7E1-4E5D-9AED-A87BCE74926B@petsc.dev> References: <7d67209f-0937-48e2-987a-fda17ea21e61@gmail.com> <211dd26d-5f40-460a-96ec-eebd96abaa70@gmail.com> <7512109B-B7E1-4E5D-9AED-A87BCE74926B@petsc.dev> Message-ID: <18b8020e-4cac-4a66-9a57-2ee21c2e364d@gmail.com> Thanks a lot. Is this a new feature? It seems to me that 10 years ago the default behavior was different. Michael. On 7/11/2024 4:04 PM, Barry Smith wrote: > > ?By default, if you preallocate but not enough, it will automatically > error unless you call > MatSetOption(mat,MAT_NEW_NONZERO_ALLOCATION_ERR,PETSC_FALSE); > > >> On Jul 11, 2024, at 4:02?PM, Michael Povolotskyi >> wrote: >> >> Thank you, >> >> let me clarify my question. >> >> Imagine that I have a sparse matrix, and the number of non zero >> entries that I specified is too small. >> >> I know that I can insert values in it but it will be slow. >> >> I remember there was a way to make PETSC to throw an error if a >> number of non zero elements per row was bigger that was preallocated. >> Then I could fix my algorithm. Is this functionality available with >> the current version? >> >> Michael. >> >> On 7/11/2024 3:55 PM, Barry Smith wrote: >>> >>> ??MatGetInfo() is the programmatic interface used to get this >>> information. ?You can also run a proggram with -info and grep for >>> malloc. >>> >>> ? Barry >>> >>> >>>> On Jul 11, 2024, at 1:55?PM, Michael Povolotskyi >>>> wrote: >>>> >>>> This Message Is From an External Sender >>>> This message came from outside your organization. >>>> Hello, >>>> >>>> is there an option in PETSC that allows to check at run time if a sparse >>>> matrix has been preallocated correctly? I remember there was something >>>> like that is the older versions, but cannot find it now. >>>> >>>> The goal is to get rid of any possible time overhead due to dynamic >>>> preallocation. >>>> >>>> Thank you, >>>> >>>> Michael. >>>> >>>> >>> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at petsc.dev Thu Jul 11 16:09:57 2024 From: bsmith at petsc.dev (Barry Smith) Date: Thu, 11 Jul 2024 17:09:57 -0400 Subject: [petsc-users] question on matrix preallocation In-Reply-To: <18b8020e-4cac-4a66-9a57-2ee21c2e364d@gmail.com> References: <7d67209f-0937-48e2-987a-fda17ea21e61@gmail.com> <211dd26d-5f40-460a-96ec-eebd96abaa70@gmail.com> <7512109B-B7E1-4E5D-9AED-A87BCE74926B@petsc.dev> <18b8020e-4cac-4a66-9a57-2ee21c2e364d@gmail.com> Message-ID: <5050735A-069E-4DA4-B19F-5706AB22EE34@petsc.dev> The default behavior previously was the flip of the current behavior. By the way, we also now have much better performance if you do not preallocate (not as good as with perfect preallocation, but much better than in ancient history; you simply do not preallocate to get this behavior). Barry > On Jul 11, 2024, at 4:51?PM, Michael Povolotskyi wrote: > > Thanks a lot. > > Is this a new feature? It seems to me that 10 years ago the default behavior was different. > > Michael. > > On 7/11/2024 4:04 PM, Barry Smith wrote: >> >> By default, if you preallocate but not enough, it will automatically error unless you call MatSetOption(mat,MAT_NEW_NONZERO_ALLOCATION_ERR,PETSC_FALSE); >> >> >>> On Jul 11, 2024, at 4:02?PM, Michael Povolotskyi wrote: >>> >>> Thank you, >>> >>> let me clarify my question. 
>>> >>> Imagine that I have a sparse matrix, and the number of non zero entries that I specified is too small. >>> >>> I know that I can insert values in it but it will be slow. >>> >>> I remember there was a way to make PETSC to throw an error if a number of non zero elements per row was bigger that was preallocated. Then I could fix my algorithm. Is this functionality available with the current version? >>> >>> Michael. >>> >>> On 7/11/2024 3:55 PM, Barry Smith wrote: >>>> >>>> MatGetInfo() is the programmatic interface used to get this information. You can also run a proggram with -info and grep for malloc. >>>> >>>> Barry >>>> >>>> >>>>> On Jul 11, 2024, at 1:55?PM, Michael Povolotskyi wrote: >>>>> >>>>> This Message Is From an External Sender >>>>> This message came from outside your organization. >>>>> Hello, >>>>> >>>>> is there an option in PETSC that allows to check at run time if a sparse >>>>> matrix has been preallocated correctly? I remember there was something >>>>> like that is the older versions, but cannot find it now. >>>>> >>>>> The goal is to get rid of any possible time overhead due to dynamic >>>>> preallocation. >>>>> >>>>> Thank you, >>>>> >>>>> Michael. >>>>> >>>>> >>>> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From mpovolot at gmail.com Thu Jul 11 16:15:07 2024 From: mpovolot at gmail.com (Michael Povolotskyi) Date: Thu, 11 Jul 2024 17:15:07 -0400 Subject: [petsc-users] question on matrix preallocation In-Reply-To: <5050735A-069E-4DA4-B19F-5706AB22EE34@petsc.dev> References: <7d67209f-0937-48e2-987a-fda17ea21e61@gmail.com> <211dd26d-5f40-460a-96ec-eebd96abaa70@gmail.com> <7512109B-B7E1-4E5D-9AED-A87BCE74926B@petsc.dev> <18b8020e-4cac-4a66-9a57-2ee21c2e364d@gmail.com> <5050735A-069E-4DA4-B19F-5706AB22EE34@petsc.dev> Message-ID: <5cce8849-7ca4-4c5a-a58a-1e06e728e0b2@gmail.com> Thanks a lot, I'm using petsc since 2004 On 7/11/2024 5:09 PM, Barry Smith wrote: > > ? ?The default behavior previously was the flip of the current > behavior. By the way, we also now have much better performance if you > do not preallocate (not as good as with perfect preallocation, but > much better than in ancient history; you simply do not preallocate to > get this behavior). > > ? Barry > > >> On Jul 11, 2024, at 4:51?PM, Michael Povolotskyi >> wrote: >> >> Thanks a lot. >> >> Is this a new feature? It seems to me that 10 years ago the default >> behavior was different. >> >> Michael. >> >> On 7/11/2024 4:04 PM, Barry Smith wrote: >>> >>> ?By default, if you preallocate but not enough, it will >>> automatically error unless you call >>> MatSetOption(mat,MAT_NEW_NONZERO_ALLOCATION_ERR,PETSC_FALSE); >>> >>> >>>> On Jul 11, 2024, at 4:02?PM, Michael Povolotskyi >>>> wrote: >>>> >>>> Thank you, >>>> >>>> let me clarify my question. >>>> >>>> Imagine that I have a sparse matrix, and the number of non zero >>>> entries that I specified is too small. >>>> >>>> I know that I can insert values in it but it will be slow. >>>> >>>> I remember there was a way to make PETSC to throw an error if a >>>> number of non zero elements per row was bigger that was >>>> preallocated. Then I could fix my algorithm. Is this functionality >>>> available with the current version? >>>> >>>> Michael. >>>> >>>> On 7/11/2024 3:55 PM, Barry Smith wrote: >>>>> >>>>> ??MatGetInfo() is the programmatic interface used to get this >>>>> information. ?You can also run a proggram with -info and grep for >>>>> malloc. >>>>> >>>>> ? 
Barry >>>>> >>>>> >>>>>> On Jul 11, 2024, at 1:55?PM, Michael Povolotskyi >>>>>> wrote: >>>>>> >>>>>> This Message Is From an External Sender >>>>>> This message came from outside your organization. >>>>>> Hello, >>>>>> >>>>>> is there an option in PETSC that allows to check at run time if a sparse >>>>>> matrix has been preallocated correctly? I remember there was something >>>>>> like that is the older versions, but cannot find it now. >>>>>> >>>>>> The goal is to get rid of any possible time overhead due to dynamic >>>>>> preallocation. >>>>>> >>>>>> Thank you, >>>>>> >>>>>> Michael. >>>>>> >>>>>> >>>>> >>> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Thu Jul 11 19:32:07 2024 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 11 Jul 2024 20:32:07 -0400 Subject: [petsc-users] What exactly is the GlobalToNatural PetscSF of DMPlex/DM? In-Reply-To: References: Message-ID: On Mon, Jul 8, 2024 at 10:28?PM Ferrand, Jesus A. wrote: > Dear PETSc team: Greetings. I keep working on mesh I/O utilities using > DMPlex. Specifically for the output stage, I need a solid grasp on the > global numbers and ideally how to set them into the DMPlex during an input > operation and carrying > ZjQcmQRYFpfptBannerStart > This Message Is From an External Sender > This message came from outside your organization. > > ZjQcmQRYFpfptBannerEnd > Dear PETSc team: > > Greetings. > I keep working on mesh I/O utilities using DMPlex. > Specifically for the output stage, I need a solid grasp on the global > numbers and ideally how to set them into the DMPlex during an input > operation and carrying the global numbers through API calls to > DMPlexDistribute() or DMPlexMigrate() and hopefully also through some of > the mesh adaption APIs. I was wondering if the GlobalToNatural PetscSF > manages these global numbers. The next most useful object is the PointSF, > but to me, it seems to only help establish DAG point ownership, not DAG > point global indices. > This is a good question, and gets at a design point of Plex. I don't believe global numbers are the "right" way to talk about mesh points, or even a very useful way to do it, for several reasons. Plex is designed to run just fine without any global numbers. It can, of course, produce them on command, as many people remain committed to their existence. Thus, the first idea is that global numbers should not be stored, since they can always be created on command very cheaply. It is much more costly to write global numbers to disk, or pull them through memory, than compute them. The second idea is that we use a combination of local numbers, namely (rank, point num) pairs, and PetscSF objects to establish sharing relations for parallel meshes. Global numbering is a particular traversal of a mesh, running over the locally owned parts of each mesh in local order. Thus an SF + a local order = a global order, and the local order is provided by the point numbering. The third idea is that a "natural" order is just the global order in which a mesh is first fed to Plex. When I redistribute and reorder for good performance, I keep track of a PetscSF that can map the mesh back to the original order in which it was provided. I see this as an unneeded expense, but many many people want output written in the original order (mostly because processing tools are so poor). This management is what we mean by GlobalToNatural. 
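To make the GlobalToNatural management described above concrete, a rough sketch of the usual calling sequence; the mesh creation, the field layout, and the global vectors are assumed to come from the application:

    #include <petscdmplex.h>

    /* Sketch: keep the "natural" (file) ordering available across distribution. */
    static PetscErrorCode DistributeKeepingNaturalOrder(DM *dm)
    {
      DM dmDist;

      PetscFunctionBeginUser;
      PetscCall(DMSetUseNatural(*dm, PETSC_TRUE));      /* must precede distribution */
      PetscCall(DMPlexDistribute(*dm, 0, NULL, &dmDist));
      if (dmDist) { /* NULL on a single rank */
        PetscCall(DMDestroy(dm));
        *dm = dmDist;
      }
      PetscFunctionReturn(PETSC_SUCCESS);
    }

    /* Later, map a global vector on the distributed DM back to the original
       (natural) order for output; nvec is a global vector of matching size. */
    static PetscErrorCode MapToNaturalOrder(DM dm, Vec gvec, Vec nvec)
    {
      PetscFunctionBeginUser;
      PetscCall(DMPlexGlobalToNaturalBegin(dm, gvec, nvec));
      PetscCall(DMPlexGlobalToNaturalEnd(dm, gvec, nvec));
      PetscFunctionReturn(PETSC_SUCCESS);
    }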
> Otherwise, I have been working with the IS obtained from > DMPlexGetPointNumbering() and manually determining global stratum sizes, > offsets, and numbers by looking at the signs of the involuted index list > that comes with that IS. It's working for now (I can monolithically write > meshes to CGNS in parallel), but it is resulting in repetitive code that I > will need for another mesh format that I want to support. > What is repetitive? It should be able to be automated. Thanks, Matt > Sincerely: > > *J.A. Ferrand* > > Embry-Riddle Aeronautical University - Daytona Beach - FL > Ph.D. Candidate, Aerospace Engineering > > M.Sc. Aerospace Engineering > > B.Sc. Aerospace Engineering > > B.Sc. Computational Mathematics > > > *Phone:* (386)-843-1829 > > *Email(s):* ferranj2 at my.erau.edu > > jesus.ferrand at gmail.com > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!Zcybs3rxgbG35ciZiIHB3TY07Qnjd1sD0HzJVDWwr-OuDyXtVjDJ8WbIMS4LRixsMUZwLGtwsznQ8MeOalw3$ -------------- next part -------------- An HTML attachment was scrubbed... URL: From ivanluthfi5 at gmail.com Thu Jul 11 21:11:48 2024 From: ivanluthfi5 at gmail.com (Ivan Luthfi) Date: Fri, 12 Jul 2024 10:11:48 +0800 Subject: [petsc-users] Asking about warning unused variable Message-ID: Hello, I have declare a variable in my petsc code like the following: PetscInt mg_level = 2, finest; but i get warning. The warning is, (line185) warning: 'variable 'finest' set but not used [-Wunused-but-set-variable] PetscInt mg_level=2, finest; But actually it is used in line 200 as: (line 200) finest = mg_level - 1; Can you help me what should I do? -- Best regards, Ivan Luthfi Ihwani -------------- next part -------------- An HTML attachment was scrubbed... URL: From balay.anl at fastmail.org Thu Jul 11 21:22:56 2024 From: balay.anl at fastmail.org (Satish Balay) Date: Thu, 11 Jul 2024 21:22:56 -0500 (CDT) Subject: [petsc-users] Asking about warning unused variable In-Reply-To: References: Message-ID: <5e904349-ab53-065c-9070-23870b7d4f84@fastmail.org> An HTML attachment was scrubbed... URL: From ivanluthfi5 at gmail.com Thu Jul 11 22:51:01 2024 From: ivanluthfi5 at gmail.com (Ivan Luthfi) Date: Fri, 12 Jul 2024 11:51:01 +0800 Subject: [petsc-users] error: 'DMDA_BOUNDARY_NONE' Message-ID: Hello there, I am trying to update a very old petsc code. When i run it i get the following error. FormFunction.c:14:47: error: 'DMDA_BOUNDARY_NONE' undeclared (first use in this function); did you mean 'DM_BOUNDARY_NONE'? ierr = DMDACreate2d(PETSC_COMM_WORLD, DMDA_BOUNDARY_NONE, DMDA_BOUNDARY_NONE, DMDA_STENCIL_BOX, -(up->Lx+1), -(up->Ly+1), PETSC_DECIDE, PETSC_DECIDE, 1, 1, 0, 0, &up->fine);CHKERRQ(ierr); what is the solution for this? -------------- next part -------------- An HTML attachment was scrubbed... URL: From pierre at joliv.et Fri Jul 12 01:00:48 2024 From: pierre at joliv.et (Pierre Jolivet) Date: Fri, 12 Jul 2024 08:00:48 +0200 Subject: [petsc-users] error: 'DMDA_BOUNDARY_NONE' In-Reply-To: References: Message-ID: <5109884D-37DB-46E9-9C07-5171F0AABC47@joliv.et> > On 12 Jul 2024, at 5:51?AM, Ivan Luthfi wrote: > > This Message Is From an External Sender > This message came from outside your organization. > Hello there, > I am trying to update a very old petsc code. > > When i run it i get the following error. 
> > FormFunction.c:14:47: error: 'DMDA_BOUNDARY_NONE' undeclared (first use in this function); did you mean 'DM_BOUNDARY_NONE'? > > ierr = DMDACreate2d(PETSC_COMM_WORLD, DMDA_BOUNDARY_NONE, DMDA_BOUNDARY_NONE, DMDA_STENCIL_BOX, -(up->Lx+1), > -(up->Ly+1), PETSC_DECIDE, PETSC_DECIDE, 1, 1, 0, 0, &up->fine);CHKERRQ(ierr); > > > what is the solution for this? Have you tried the compiler suggestion (did you mean 'DM_BOUNDARY_NONE') ? Thanks, Pierre -------------- next part -------------- An HTML attachment was scrubbed... URL: From ivanluthfi5 at gmail.com Fri Jul 12 01:26:04 2024 From: ivanluthfi5 at gmail.com (Ivan Luthfi) Date: Fri, 12 Jul 2024 14:26:04 +0800 Subject: [petsc-users] Warning [-Wformat-overflow] Message-ID: I have warning: FormatFunction.c: In function 'ComputeStiffnessMatrix': FormatFunction.c:128:46: warning: '%d' directive writing between 1 and 11 bytes into a region of size between 0 and 99 [-Wformat-overflow] (line128) sprintf(filename, "%sc%d_N%dM%d_permeability.log", up->problem_description, up->problem_flag, up->Nx, up->Mx) do you know why this warning appear? and how to fix it. -- Best regards, -- -------------- next part -------------- An HTML attachment was scrubbed... URL: From ivanluthfi5 at gmail.com Fri Jul 12 02:16:06 2024 From: ivanluthfi5 at gmail.com (Ivan Luthfi) Date: Fri, 12 Jul 2024 15:16:06 +0800 Subject: [petsc-users] Help me for compiling my Code Message-ID: I try to compile my code, but i get this error. Anyone can help me? Here is my terminal: $make bin_MsFEM_poisson2D_DMDA mpicc -o bin_MsFEM_poisson2D_DMDA MsFEM_poisson2D_DMDA.o UserParameter.o FormFunction.o MsFEM.o PCMsFEM.o /home/ivan/petsc/opt-3.21.2/lib/libpetsc.so \ /home/ivan/petsc/opt-3.21.2/lib/libsuperlu_dist.so \ /home/ivan/petsc/opt-3.21.2/lib/libparmetis.so \ /home/ivan/petsc/opt-3.21.2/lib/libmetis.so \ /usr/lib64/atlas/liblapack.a /usr/lib64/libblas.so.3 /usr/bin/ld: cannot find /usr/lib64/atlas/liblapack.a: No such file or directory /usr/bin/ld: cannot find /usr/lib64/libblas.so.3: No such file or directory collect2: error: ld returned 1 exit status make: *** [makefile:18: bin_MsFEM_poisson2D_DMDA] Error 1 -- Best regards, Ivan Luthfi Ihwani -------------- next part -------------- An HTML attachment was scrubbed... URL: From ivanluthfi5 at gmail.com Fri Jul 12 03:41:02 2024 From: ivanluthfi5 at gmail.com (Ivan Luthfi) Date: Fri, 12 Jul 2024 16:41:02 +0800 Subject: [petsc-users] Incompatible pointer type Message-ID: I get a warning about an incompatible pointer type when compile a code, anyone know how to fix this? make bin_MsFEM_poisson2D_DMDA mpicc -Wall -c PCMsFEM.c -isystem/home/ivan/petsc/opt-3.21.2/include PCMsFEM.c: In function ?PCCreate_MsFEM?: PCMsFEM.c:59:33: warning: assignment to ?PetscErrorCode (*)(struct _p_PC *, PetscOptionItems *)? {aka ?int (*)(struct _p_PC *, struct _p_PetscOptionItems *)?} from incompatible pointer type ?PetscErrorCode (*)(struct _p_PC *)? 
{aka ?int (*)(struct _p_PC *)?} [-Wincompatible-pointer-types] 59 | pc->ops->setfromoptions = PCSetFromOptions_MsFEM; | ^ mpicc -o bin_MsFEM_poisson2D_DMDA MsFEM_poisson2D_DMDA.o UserParameter.o FormFunction.o MsFEM.o PCMsFEM.o /home/ivan/petsc/opt-3.21.2/lib/libpetsc.so \ /home/ivan/petsc/opt-3.21.2/lib/libsuperlu_dist.so \ /home/ivan/petsc/opt-3.21.2/lib/libparmetis.so \ /home/ivan/petsc/opt-3.21.2/lib/libmetis.so \ /usr/lib64/atlas/liblapack.a /usr/lib64/libblas.so.3 /usr/bin/ld: cannot find /usr/lib64/atlas/liblapack.a: No such file or directory /usr/bin/ld: cannot find /usr/lib64/libblas.so.3: No such file or directory collect2: error: ld returned 1 exit status make: *** [makefile:18: bin_MsFEM_poisson2D_DMDA] Error 1 -- Best regards, -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Fri Jul 12 05:53:21 2024 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 12 Jul 2024 06:53:21 -0400 Subject: [petsc-users] Warning [-Wformat-overflow] In-Reply-To: References: Message-ID: On Fri, Jul 12, 2024 at 2:26?AM Ivan Luthfi wrote: > I have warning: FormatFunction. c: In function 'ComputeStiffnessMatrix': > FormatFunction. c: 128: 46: warning: '%d' directive writing between 1 and > 11 bytes into a region of size between 0 and 99 [-Wformat-overflow] > (line128) sprintf(filename, > ZjQcmQRYFpfptBannerStart > This Message Is From an External Sender > This message came from outside your organization. > > ZjQcmQRYFpfptBannerEnd > I have warning: > > FormatFunction.c: In function 'ComputeStiffnessMatrix': > FormatFunction.c:128:46: warning: '%d' directive writing between 1 and 11 > bytes into a region of size between 0 and 99 [-Wformat-overflow] > (line128) sprintf(filename, "%sc%d_N%dM%d_permeability.log", > up->problem_description, up->problem_flag, up->Nx, up->Mx) > > do you know why this warning appear? and how to fix it. > You are writing a PetscInt into the spot for an int. You can 1) Cast to int: sprintf(filename, "%sc%d_N%dM%d_permeability.log", up->problem_description, up->problem_flag, (int)up->Nx, (int)up->Mx) 2) Use the custom format sprintf(filename, "%sc%d_N%" PetscInt_FMT "M%" PetscInt_FMT "_permeability.log", up->problem_description, up->problem_flag, up->Nx, up->Mx) Thanks, Matt > -- > Best regards, > -- > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!YNtOwQCCJYLu6JV0ph8gXnJeFhrQXAiTdZyhV_C0bBut_pZLBhsZw-xORAasjijkRzFe_0udBt4lKebIWfMb$ -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Fri Jul 12 05:57:44 2024 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 12 Jul 2024 06:57:44 -0400 Subject: [petsc-users] Help me for compiling my Code In-Reply-To: References: Message-ID: On Fri, Jul 12, 2024 at 3:16?AM Ivan Luthfi wrote: > I try to compile my code, but i get this error. Anyone can help me? Here > is my terminal: $make bin_MsFEM_poisson2D_DMDA mpicc -o > bin_MsFEM_poisson2D_DMDA MsFEM_poisson2D_DMDA. o UserParameter. o > FormFunction. o MsFEM. o PCMsFEM. o /home/ivan/petsc/opt-3. 21. > 2/lib/libpetsc. so > ZjQcmQRYFpfptBannerStart > This Message Is From an External Sender > This message came from outside your organization. > > ZjQcmQRYFpfptBannerEnd > I try to compile my code, but i get this error. Anyone can help me? 
> > Here is my terminal: > > $make bin_MsFEM_poisson2D_DMDA > > mpicc -o bin_MsFEM_poisson2D_DMDA MsFEM_poisson2D_DMDA.o UserParameter.o > FormFunction.o MsFEM.o PCMsFEM.o > /home/ivan/petsc/opt-3.21.2/lib/libpetsc.so \ > /home/ivan/petsc/opt-3.21.2/lib/libsuperlu_dist.so \ > /home/ivan/petsc/opt-3.21.2/lib/libparmetis.so \ > /home/ivan/petsc/opt-3.21.2/lib/libmetis.so \ > /usr/lib64/atlas/liblapack.a /usr/lib64/libblas.so.3 > /usr/bin/ld: cannot find /usr/lib64/atlas/liblapack.a: No such file or > directory > /usr/bin/ld: cannot find /usr/lib64/libblas.so.3: No such file or directory > collect2: error: ld returned 1 exit status > make: *** [makefile:18: bin_MsFEM_poisson2D_DMDA] Error 1 > You are specifying libraries that do not exist. Do not do this. You can use the PETSc Makefiles to build this, as described in the manual: https://urldefense.us/v3/__https://petsc.org/main/manual/getting_started/*sec-writing-application-codes__;Iw!!G_uCfscf7eWS!dR3K5-koJJZxrylQZPi1wyTnMeuWa7qeAM46G7OwoWYcDHB_OHBNiTMPGfh-JeAxS5XzHp7hBitgk6cOgnFL$ under the section "For adding PETSc to an existing application" THanks, Matt > -- > Best regards, > > Ivan Luthfi Ihwani > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!dR3K5-koJJZxrylQZPi1wyTnMeuWa7qeAM46G7OwoWYcDHB_OHBNiTMPGfh-JeAxS5XzHp7hBitgk16BIEMn$ -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Fri Jul 12 05:59:47 2024 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 12 Jul 2024 06:59:47 -0400 Subject: [petsc-users] Incompatible pointer type In-Reply-To: References: Message-ID: On Fri, Jul 12, 2024 at 4:41?AM Ivan Luthfi wrote: > I get a warning about an incompatible pointer type when compile a code, > anyone know how to fix this? make bin_MsFEM_poisson2D_DMDA mpicc -Wall -c > PCMsFEM. c -isystem/home/ivan/petsc/opt-3. 21. 2/include PCMsFEM. c: In > function ?PCCreate_MsFEM?: PCMsFEM. c: 59: 33: > ZjQcmQRYFpfptBannerStart > This Message Is From an External Sender > This message came from outside your organization. > > ZjQcmQRYFpfptBannerEnd > I get a warning about an incompatible pointer type when compile a code, > anyone know how to fix this? > > make bin_MsFEM_poisson2D_DMDA > mpicc -Wall -c PCMsFEM.c -isystem/home/ivan/petsc/opt-3.21.2/include > PCMsFEM.c: In function ?PCCreate_MsFEM?: > PCMsFEM.c:59:33: warning: assignment to ?PetscErrorCode (*)(struct _p_PC > *, PetscOptionItems *)? {aka ?int (*)(struct _p_PC *, struct > _p_PetscOptionItems *)?} from incompatible pointer type ?PetscErrorCode > (*)(struct _p_PC *)? 
{aka ?int (*)(struct _p_PC *)?} > [-Wincompatible-pointer-types] > 59 | pc->ops->setfromoptions = PCSetFromOptions_MsFEM; > | ^ > mpicc -o bin_MsFEM_poisson2D_DMDA MsFEM_poisson2D_DMDA.o UserParameter.o > FormFunction.o MsFEM.o PCMsFEM.o > /home/ivan/petsc/opt-3.21.2/lib/libpetsc.so \ > /home/ivan/petsc/opt-3.21.2/lib/libsuperlu_dist.so \ > /home/ivan/petsc/opt-3.21.2/lib/libparmetis.so \ > /home/ivan/petsc/opt-3.21.2/lib/libmetis.so \ > /usr/lib64/atlas/liblapack.a /usr/lib64/libblas.so.3 > /usr/bin/ld: cannot find /usr/lib64/atlas/liblapack.a: No such file or > directory > /usr/bin/ld: cannot find /usr/lib64/libblas.so.3: No such file or directory > collect2: error: ld returned 1 exit status > make: *** [makefile:18: bin_MsFEM_poisson2D_DMDA] Error 1 > As the error message says, you are missing the second argument in your function. Here is an example from PETSc itself: https://urldefense.us/v3/__https://petsc.org/main/src/ksp/pc/impls/jacobi/jacobi.c.html*PCSetFromOptions_Jacobi__;Iw!!G_uCfscf7eWS!YNTtUlBhxN1oRmv-wQa9_XiOB4c3ylJzvdw0j3lk02txhvFCujXDCzk2N1vy3r2K9JIIBbZ-f_3xM62LxM13$ Thanks, Matt > -- > Best regards, > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!YNTtUlBhxN1oRmv-wQa9_XiOB4c3ylJzvdw0j3lk02txhvFCujXDCzk2N1vy3r2K9JIIBbZ-f_3xM93D47Rc$ -------------- next part -------------- An HTML attachment was scrubbed... URL: From sebastian.blauth at itwm.fraunhofer.de Fri Jul 12 06:08:23 2024 From: sebastian.blauth at itwm.fraunhofer.de (Blauth, Sebastian) Date: Fri, 12 Jul 2024 11:08:23 +0000 Subject: [petsc-users] Questions on TAO and gradient norm / inner products In-Reply-To: References: Message-ID: Dear Barry, thanks for the clarification. Oh, that?s unfortunate, that not all TAO algorithms use the supplied matrix for the norm (and then probably also not for computing inner products in, e.g., the limited memory formulas). I fear that I don?t have sufficient time at the moment to make a MR. I could, however, provide some ?minimal? example where the behavior is shown. However, that example would be using petsc4py as I am only familiar with that and I would use the fenics FEM package to define the matrices. Would this be okay? And if that?s the case, should I post the example here or at the petsc gitlab? Best regards, Sebastian -- Dr. 
Sebastian Blauth Fraunhofer-Institut f?r Techno- und Wirtschaftsmathematik ITWM Abteilung Transportvorg?nge Fraunhofer-Platz 1, 67663 Kaiserslautern Telefon: +49 631 31600-4968 sebastian.blauth at itwm.fraunhofer.de https://www.itwm.fraunhofer.de From: Barry Smith Sent: Tuesday, July 9, 2024 3:32 PM To: Blauth, Sebastian ; Munson, Todd ; toby Isaac Cc: petsc-users at mcs.anl.gov Subject: Re: [petsc-users] Questions on TAO and gradient norm / inner products From $ git grep TaoGradientNorm bound/impls/blmvm/blmvm.c: PetscCall(TaoGradientNorm(tao, tao->gradient, NORM_2, &gnorm)); bound/impls/blmvm/blmvm.c: PetscCall(TaoGradientNorm(tao, tao->gradient, NORM_2, &gnorm)); bound/impls/bnk/bnk.c: PetscCall(TaoGradientNorm(tao, tao->gradient, NORM_2, &bnk->gnorm)); bound/impls/bnk/bnk.c: PetscCall(TaoGradientNorm(tao, tao->gradient, NORM_2, &bnk->gnorm)); bound/impls/bnk/bnls.c: PetscCall(TaoGradientNorm(tao, tao->gradient, NORM_2, &bnk->gnorm)); bound/impls/bnk/bntl.c: PetscCall(TaoGradientNorm(tao, tao->gradient, NORM_2, &bnk->gnorm)); bound/impls/bnk/bntl.c: PetscCall(TaoGradientNorm(tao, tao->gradient, NORM_2, &bnk->gnorm)); bound/impls/bnk/bntr.c: PetscCall(TaoGradientNorm(tao, tao->gradient, NORM_2, &bnk->gnorm)); interface/taosolver.c:.seealso: [](ch_tao), `Tao`, `TaoGetGradientNorm()`, `TaoGradientNorm()` interface/taosolver.c:.seealso: [](ch_tao), `Tao`, `TaoSetGradientNorm()`, `TaoGradientNorm()` interface/taosolver.c: TaoGradientNorm - Compute the norm using the `NormType`, the user has selected interface/taosolver.c:PetscErrorCode TaoGradientNorm(Tao tao, Vec gradient, NormType type, PetscReal *gnorm) unconstrained/impls/lmvm/lmvm.c: PetscCall(TaoGradientNorm(tao, tao->gradient, NORM_2, &gnorm)); unconstrained/impls/lmvm/lmvm.c: PetscCall(TaoGradientNorm(tao, tao->gradient, NORM_2, &gnorm)); unconstrained/impls/nls/nls.c: PetscCall(TaoGradientNorm(tao, tao->gradient, NORM_2, &gnorm)); unconstrained/impls/nls/nls.c: PetscCall(TaoGradientNorm(tao, tao->gradient, NORM_2, &gnorm)); unconstrained/impls/nls/nls.c: PetscCall(TaoGradientNorm(tao, tao->gradient, NORM_2, &gnorm)); unconstrained/impls/ntr/ntr.c: PetscCall(TaoGradientNorm(tao, tao->gradient, NORM_2, &gnorm)); unconstrained/impls/ntr/ntr.c: PetscCall(TaoGradientNorm(tao, tao->gradient, NORM_2, &gnorm)); unconstrained/impls/ntr/ntr.c: PetscCall(TaoGradientNorm(tao, tao->gradient, NORM_2, &gnorm)); it appears only some of the algorithm implementations use the norm you provide. While git grep VecNorm indicates many places where it is not used. Likely some of the other algorithm implementations could be easily "fixed" to support by changing the norm computed. But I am not an expert on the algorithms and don't know if all algorithms can mathematically support a user provided norm. You are welcome to take a stab at making the change in an MR, or do you have a simple test problem with a mass matrix we can use to fix the "missing" implementations? Barry On Jul 9, 2024, at 3:47?AM, Blauth, Sebastian > wrote: Hello, I have some questions regarding TAO and the use the gradient norm. First, I want to use a custom inner product for the optimization in TAO (for computing the gradient norm and, e.g., in the double loop of a quasi-Newton method). I have seen that there is the method TAOSetGradientNorm https://petsc.org/release/manualpages/Tao/TaoSetGradientNorm/ which seems to do this. According to the petsc4py docs https://petsc.org/release/petsc4py/reference/petsc4py.PETSc.TAO.html#petsc4py.PETSc.TAO.setGradientNorm, this should do what I want. 
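For concreteness, a minimal C-level sketch of this, with M standing for the assembled L2 mass matrix and the rest of the Tao setup omitted:

Mat M;   /* assembled L2 mass matrix (assumed symmetric positive definite) */
/* ... assemble M, create tao, set objective and gradient callbacks ... */
PetscCall(TaoSetGradientNorm(tao, M));  /* gradient norm becomes the M-weighted norm */
PetscCall(TaoSolve(tao));

(In petsc4py, the setGradientNorm method from the docs linked above plays the same role.)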
However, the method does not always seem to perform correctly: When I use it with ?-tao_type lmvm?, it really seems to work and gives the correct scaling of the residual in the default TAO monitor. However, when I use, e.g., ?-tao_type bqnls?, ?-tao_type cg?, or ?-tao_type bncg?, the (initial) residual is the same as it is when I do not use the TAOSetGradientNorm. However, there seem to be some slight internal changes (at least for the bqnls), as the number of iterations to reach the tolerance changes from 15 without TAOSetGradientNorm to 17 with TAOSetGradientNorm. For the context: Here, I am trying to solve a PDE constrained optimal control problem, which I tackle in a reduced fashion (using a reduced cost functional which results in an unconstrained optimization using the adjoint approach). For this, I would like to use the L2 inner product induced by the FEM discretization, so the L2 mass matrix. Moreover, I noticed that the performance of ?-tao_type lmvm? and ?-tao_type bqnls? as well as ?-tao_type cg? and ?-tao_type bncg? are drastically different for the same unconstrained problem. I would have expected that the algorithms are (more or less) identical for that case. Is this to be expected? Finally, I would like to use TAO for solving PDE constrained shape optimization problems. To do so, I would need to be able to specify the inner product used in the solver (see the above part) and this inner product would need to change in each iteration. Is it possible to do this with TAO? And could anyone give me some hints how to do so in python with petsc4py? Thanks a lot in advance, Sebastian -- Dr. Sebastian Blauth Fraunhofer-Institut f?r Techno- und Wirtschaftsmathematik ITWM Abteilung Transportvorg?nge Fraunhofer-Platz 1, 67663 Kaiserslautern Telefon: +49 631 31600-4968 sebastian.blauth at itwm.fraunhofer.de https://www.itwm.fraunhofer.de -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 7943 bytes Desc: not available URL: From knepley at gmail.com Fri Jul 12 06:29:30 2024 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 12 Jul 2024 07:29:30 -0400 Subject: [petsc-users] Questions on TAO and gradient norm / inner products In-Reply-To: References: Message-ID: On Fri, Jul 12, 2024 at 7:25?AM Blauth, Sebastian < sebastian.blauth at itwm.fraunhofer.de> wrote: > Dear Barry, > > > > thanks for the clarification. Oh, that?s unfortunate, that not all TAO > algorithms use the supplied matrix for the norm (and then probably also not > for computing inner products in, e.g., the limited memory formulas). > > > > I fear that I don?t have sufficient time at the moment to make a MR. I > could, however, provide some ?minimal? example where the behavior is shown. > However, that example would be using petsc4py as I am only familiar with > that and I would use the fenics FEM package to define the matrices. Would > this be okay? > Yes > And if that?s the case, should I post the example here or at the petsc > gitlab? > Either place is fine. Gitlab makes it easier to track. Thanks, Matt > Best regards, > > Sebastian > > > > -- > > Dr. 
Sebastian Blauth > > Fraunhofer-Institut f?r > > Techno- und Wirtschaftsmathematik ITWM > > Abteilung Transportvorg?nge > > Fraunhofer-Platz 1, 67663 Kaiserslautern > > Telefon: +49 631 31600-4968 > > sebastian.blauth at itwm.fraunhofer.de > > https://urldefense.us/v3/__https://www.itwm.fraunhofer.de__;!!G_uCfscf7eWS!aSNMLW5E3ZNlZwOAfQJMnwBM_4sDBJ8jXhjmfgZFQJJJFSw-7QaqXFgtPRpKudragLDd8ONBV5pJxKWFVczr$ > > > > *From:* Barry Smith > *Sent:* Tuesday, July 9, 2024 3:32 PM > *To:* Blauth, Sebastian ; Munson, > Todd ; toby Isaac > *Cc:* petsc-users at mcs.anl.gov > *Subject:* Re: [petsc-users] Questions on TAO and gradient norm / inner > products > > > > > > From > > > > $ git grep TaoGradientNorm > > bound/impls/blmvm/blmvm.c: PetscCall(*TaoGradientNorm*(tao, > tao->gradient, NORM_2, &gnorm)); > > bound/impls/blmvm/blmvm.c: PetscCall(*TaoGradientNorm*(tao, > tao->gradient, NORM_2, &gnorm)); > > bound/impls/bnk/bnk.c: PetscCall(*TaoGradientNorm*(tao, tao->gradient, > NORM_2, &bnk->gnorm)); > > bound/impls/bnk/bnk.c: PetscCall(*TaoGradientNorm*(tao, > tao->gradient, NORM_2, &bnk->gnorm)); > > bound/impls/bnk/bnls.c: PetscCall(*TaoGradientNorm*(tao, > tao->gradient, NORM_2, &bnk->gnorm)); > > bound/impls/bnk/bntl.c: PetscCall(*TaoGradientNorm*(tao, > tao->gradient, NORM_2, &bnk->gnorm)); > > bound/impls/bnk/bntl.c: PetscCall(*TaoGradientNorm*(tao, > tao->gradient, NORM_2, &bnk->gnorm)); > > bound/impls/bnk/bntr.c: PetscCall(*TaoGradientNorm*(tao, > tao->gradient, NORM_2, &bnk->gnorm)); > > interface/taosolver.c:.seealso: [](ch_tao), `Tao`, > `TaoGetGradientNorm()`, `*TaoGradientNorm*()` > > interface/taosolver.c:.seealso: [](ch_tao), `Tao`, > `TaoSetGradientNorm()`, `*TaoGradientNorm*()` > > interface/taosolver.c: *TaoGradientNorm* - Compute the norm using the > `NormType`, the user has selected > > interface/taosolver.c:PetscErrorCode *TaoGradientNorm*(Tao tao, Vec > gradient, NormType type, PetscReal *gnorm) > > unconstrained/impls/lmvm/lmvm.c: PetscCall(*TaoGradientNorm*(tao, > tao->gradient, NORM_2, &gnorm)); > > unconstrained/impls/lmvm/lmvm.c: PetscCall(*TaoGradientNorm*(tao, > tao->gradient, NORM_2, &gnorm)); > > unconstrained/impls/nls/nls.c: PetscCall(*TaoGradientNorm*(tao, > tao->gradient, NORM_2, &gnorm)); > > unconstrained/impls/nls/nls.c: PetscCall(*TaoGradientNorm*(tao, > tao->gradient, NORM_2, &gnorm)); > > unconstrained/impls/nls/nls.c: PetscCall(*TaoGradientNorm*(tao, > tao->gradient, NORM_2, &gnorm)); > > unconstrained/impls/ntr/ntr.c: PetscCall(*TaoGradientNorm*(tao, > tao->gradient, NORM_2, &gnorm)); > > unconstrained/impls/ntr/ntr.c: PetscCall(*TaoGradientNorm*(tao, > tao->gradient, NORM_2, &gnorm)); > > unconstrained/impls/ntr/ntr.c: PetscCall(*TaoGradientNorm*(tao, > tao->gradient, NORM_2, &gnorm)); > > > > it appears only some of the algorithm implementations use the norm you > provide. While > > > > git grep VecNorm > > > > indicates many places where it is not used. Likely some of the other > algorithm implementations could be easily "fixed" to support by > > changing the norm computed. But I am not an expert on the algorithms and > don't know if all algorithms can mathematically support a user provided > norm. > > > > You are welcome to take a stab at making the change in an MR, or do you > have a simple test problem with a mass matrix we can use to fix the > > "missing" implementations? 
> > > > Barry > > > > > > > > > > > > On Jul 9, 2024, at 3:47?AM, Blauth, Sebastian < > sebastian.blauth at itwm.fraunhofer.de> wrote: > > > > Hello, > > > > I have some questions regarding TAO and the use the gradient norm. > > > > First, I want to use a custom inner product for the optimization in TAO > (for computing the gradient norm and, e.g., in the double loop of a > quasi-Newton method). I have seen that there is the method > TAOSetGradientNorm > https://urldefense.us/v3/__https://petsc.org/release/manualpages/Tao/TaoSetGradientNorm/__;!!G_uCfscf7eWS!aSNMLW5E3ZNlZwOAfQJMnwBM_4sDBJ8jXhjmfgZFQJJJFSw-7QaqXFgtPRpKudragLDd8ONBV5pJxDTBKkKq$ which seems > to do this. According to the petsc4py docs > https://urldefense.us/v3/__https://petsc.org/release/petsc4py/reference/petsc4py.PETSc.TAO.html*petsc4py.PETSc.TAO.setGradientNorm__;Iw!!G_uCfscf7eWS!aSNMLW5E3ZNlZwOAfQJMnwBM_4sDBJ8jXhjmfgZFQJJJFSw-7QaqXFgtPRpKudragLDd8ONBV5pJxNddlrzE$ , > this should do what I want. However, the method does not always seem to > perform correctly: When I use it with ?-tao_type lmvm?, it really seems to > work and gives the correct scaling of the residual in the default TAO > monitor. However, when I use, e.g., ?-tao_type bqnls?, ?-tao_type cg?, or > ?-tao_type bncg?, the (initial) residual is the same as it is when I do not > use the TAOSetGradientNorm. However, there seem to be some slight internal > changes (at least for the bqnls), as the number of iterations to reach the > tolerance changes from 15 without TAOSetGradientNorm to 17 with > TAOSetGradientNorm. > > > > For the context: Here, I am trying to solve a PDE constrained optimal > control problem, which I tackle in a reduced fashion (using a reduced cost > functional which results in an unconstrained optimization using the adjoint > approach). For this, I would like to use the L2 inner product induced by > the FEM discretization, so the L2 mass matrix. > > > > Moreover, I noticed that the performance of ?-tao_type lmvm? and > ?-tao_type bqnls? as well as ?-tao_type cg? and ?-tao_type bncg? are > drastically different for the same unconstrained problem. I would have > expected that the algorithms are (more or less) identical for that case. Is > this to be expected? > > > > Finally, I would like to use TAO for solving PDE constrained shape > optimization problems. To do so, I would need to be able to specify the > inner product used in the solver (see the above part) and this inner > product would need to change in each iteration. Is it possible to do this > with TAO? And could anyone give me some hints how to do so in python with > petsc4py? > > > > Thanks a lot in advance, > > Sebastian > > > > > > > > -- > > Dr. Sebastian Blauth > > Fraunhofer-Institut f?r > > Techno- und Wirtschaftsmathematik ITWM > > Abteilung Transportvorg?nge > > Fraunhofer-Platz 1, 67663 Kaiserslautern > > Telefon: +49 631 31600-4968 > > sebastian.blauth at itwm.fraunhofer.de > > https://urldefense.us/v3/__https://www.itwm.fraunhofer.de__;!!G_uCfscf7eWS!aSNMLW5E3ZNlZwOAfQJMnwBM_4sDBJ8jXhjmfgZFQJJJFSw-7QaqXFgtPRpKudragLDd8ONBV5pJxKWFVczr$ > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!aSNMLW5E3ZNlZwOAfQJMnwBM_4sDBJ8jXhjmfgZFQJJJFSw-7QaqXFgtPRpKudragLDd8ONBV5pJxCQ5b78V$ -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From bsmith at petsc.dev Fri Jul 12 08:40:51 2024 From: bsmith at petsc.dev (Barry Smith) Date: Fri, 12 Jul 2024 09:40:51 -0400 Subject: [petsc-users] Warning [-Wformat-overflow] In-Reply-To: References: Message-ID: <8DFDFFF9-C665-4BEB-816A-548AAC7212DF@petsc.dev> Also ensure filename is a sufficiently long string to hold any possible output from the format. > On Jul 12, 2024, at 6:53?AM, Matthew Knepley wrote: > > This Message Is From an External Sender > This message came from outside your organization. > On Fri, Jul 12, 2024 at 2:26?AM Ivan Luthfi > wrote: >> This Message Is From an External Sender >> This message came from outside your organization. >> >> I have warning: >> >> FormatFunction.c: In function 'ComputeStiffnessMatrix': >> FormatFunction.c:128:46: warning: '%d' directive writing between 1 and 11 bytes into a region of size between 0 and 99 [-Wformat-overflow] >> (line128) sprintf(filename, "%sc%d_N%dM%d_permeability.log", up->problem_description, up->problem_flag, up->Nx, up->Mx) >> >> do you know why this warning appear? and how to fix it. > > You are writing a PetscInt into the spot for an int. You can > > 1) Cast to int: > > sprintf(filename, "%sc%d_N%dM%d_permeability.log", up->problem_description, up->problem_flag, (int)up->Nx, (int)up->Mx) > > 2) Use the custom format > > sprintf(filename, "%sc%d_N%" PetscInt_FMT "M%" PetscInt_FMT "_permeability.log", up->problem_description, up->problem_flag, up->Nx, up->Mx) > > Thanks, > > Matt > >> -- >> Best regards, >> -- > > > -- > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. > -- Norbert Wiener > > https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!dMw87FGcY2KugOvdL79R4oeCp3123y3KJM7PolI9-vEhud2d9hXWYW_Us21eQGtkLa2GeY8mG9eiTO7R0_03y2w$ -------------- next part -------------- An HTML attachment was scrubbed... URL: From mlohry at gmail.com Fri Jul 12 15:09:59 2024 From: mlohry at gmail.com (Mark Lohry) Date: Fri, 12 Jul 2024 16:09:59 -0400 Subject: [petsc-users] finite difference jacobian errors when given non-constant initial condition In-Reply-To: References: Message-ID: The root cause of this turned out to be that I took code that i had historically used for JFNK solves via TS and SNESSolve interfaces. Here instead of calling SNESSolve I only wanted to compute the jacobian using what i had set up as to be computed via finite differences via coloring. when calling SNESSolve i confirmed it does generate the expected matrix, but if instead i directly call SNESComputeJacobian(ctx.snes_, petscsoln, ctx.JPre_, ctx.JPre_); // this is computing without coloring This appears to produce a matrix that is erroneous in some spots due to not using coloring. Is there some other way to get the jacobian only? i tried to follow similar steps as what is in the actual SNESSolve but i seem to have botched it somehow. On Sun, Apr 21, 2024 at 3:36?PM Zou, Ling wrote: > The other symptom is the same: > > - Using coloring, finite differencing respects the specified non-zero > pattern, but gives wrong (very large) Jacobian entries (J_ij) > - Using dense matrix assumption, finite differencing does not respect > the non-zero pattern determined by your numeric, which is a clear sign of > residual function code bug (your residual function does not respect your > numeric). 
> > -Ling > > > > *From: *petsc-users on behalf of Zou, > Ling via petsc-users > *Date: *Sunday, April 21, 2024 at 2:28 PM > *To: *Mark Lohry > *Cc: *PETSc > *Subject: *Re: [petsc-users] finite difference jacobian errors when given > non-constant initial condition > > Very interesting. > > I happened to encounter something very similar a couple of days ago, > which, of course, was due to a code bug I introduced. The code bug was in > the residual function. I used a local vector to track ?heat flux?, which > should be zero-ed out at the beginning of each residual function > evaluation. I did not zero it, and I observed very similar results, the > Jacobian is completely wrong, with large values (J_ij keeps increasing > after each iteration), and non-zero values are observed in locations which > should be perfect zero. The symptom is very much like what you are seeing > here. I suspect a similar bug. (Maybe you forgot zero the coefficients of > P1 re-construction? Using constant value 1, reconstructed dphi/dx = 0, so > however many iterations, still zero). > > > > -Ling > > > > *From: *Mark Lohry > *Date: *Sunday, April 21, 2024 at 12:35 PM > *To: *Zou, Ling > *Cc: *PETSc > *Subject: *Re: [petsc-users] finite difference jacobian errors when given > non-constant initial condition > > The coloring I'm fairly confident is correct -- I use the same process for > 3D unstructured grids and everything seems to work. The residual function > is also validated. As a test I did as you suggested -- assume the matrix is > dense -- and > > ZjQcmQRYFpfptBannerStart > > *This Message Is From an External Sender * > > This message came from outside your organization. > > > > ZjQcmQRYFpfptBannerEnd > > The coloring I'm fairly confident is correct -- I use the same process for > 3D unstructured grids and everything seems to work. The residual function > is also validated. > > > > As a test I did as you suggested -- assume the matrix is dense -- and I > get the same bad results, just now the zero blocks are filled. > > > > Assuming dense, giving it a constant vector, all is good: > > > > 4.23516e-16 -1.10266 0.31831 -0.0852909 0 > 0 -0.31831 1.18795 > 1.10266 -4.23516e-16 -1.18795 0.31831 0 > 0 0.0852909 -0.31831 > -0.31831 1.18795 2.11758e-16 -1.10266 0.31831 > -0.0852909 0 0 > 0.0852909 -0.31831 1.10266 -4.23516e-16 -1.18795 > 0.31831 0 0 > 0 0 -0.31831 1.18795 2.11758e-16 > -1.10266 0.31831 -0.0852909 > 0 0 0.0852909 -0.31831 1.10266 > -4.23516e-16 -1.18795 0.31831 > 0.31831 -0.0852909 0 0 -0.31831 > 1.18795 4.23516e-16 -1.10266 > > > > > > Assuming dense, giving it sin(x), all is bad: > > > > -1.76177e+08 -6.07287e+07 -6.07287e+07 -1.76177e+08 1.76177e+08 > 6.07287e+07 6.07287e+07 1.76177e+08 > -1.31161e+08 -4.52116e+07 -4.52116e+07 -1.31161e+08 1.31161e+08 > 4.52116e+07 4.52116e+07 1.31161e+08 > 1.31161e+08 4.52116e+07 4.52116e+07 1.31161e+08 -1.31161e+08 > -4.52116e+07 -4.52116e+07 -1.31161e+08 > 1.76177e+08 6.07287e+07 6.07287e+07 1.76177e+08 -1.76177e+08 > -6.07287e+07 -6.07287e+07 -1.76177e+08 > 1.76177e+08 6.07287e+07 6.07287e+07 1.76177e+08 -1.76177e+08 > -6.07287e+07 -6.07287e+07 -1.76177e+08 > 1.31161e+08 4.52116e+07 4.52116e+07 1.31161e+08 -1.31161e+08 > -4.52116e+07 -4.52116e+07 -1.31161e+08 > -1.31161e+08 -4.52116e+07 -4.52116e+07 -1.31161e+08 1.31161e+08 > 4.52116e+07 4.52116e+07 1.31161e+08 > -1.76177e+08 -6.07287e+07 -6.07287e+07 -1.76177e+08 1.76177e+08 > 6.07287e+07 6.07287e+07 1.76177e+08 > > > > Scratching my head over here... 
I've been using these routines > successfully for years in much more complex code. > > > > On Sun, Apr 21, 2024 at 12:36?PM Zou, Ling wrote: > > Edit: > > - how do you do the coloring when using PETSc finite differencing? An > incorrect coloring may give you wrong Jacobian. For debugging purpose, > the simplest way to avoid an incorrect coloring is to assume the matrix is > dense (slow but error proofing). If the numeric converges as expected, > then fine tune your coloring to make it right and fast. > > > > > > *From: *petsc-users on behalf of Zou, > Ling via petsc-users > *Date: *Sunday, April 21, 2024 at 11:29 AM > *To: *Mark Lohry , PETSc > *Subject: *Re: [petsc-users] finite difference jacobian errors when given > non-constant initial condition > > Hi Mark, I am working on a project having similar numeric you have, > one-dimensional finite volume method with second-order slope limiter TVD, > and PETSc finite differencing gives perfect Jacobian even for complex > problems. > > So, I tend to believe that your implementation may have some problem. Some > lessons I learned during my code development: > > > > - how do you do the coloring when using PETSc finite differencing? An > incorrect coloring may give you wrong Jacobian. The simplest way to avoid > an incorrect coloring is to assume the matrix is dense (slow but error > proofing). > - Residual function evaluation not correctly implemented can also lead > to incorrect Jacobian. In your case, you may want to take a careful look at > the order of execution, when to update your unknown vector, when to perform > P1 reconstruction, and when to evaluate the residual. > > > > -Ling > > > > *From: *petsc-users on behalf of Mark > Lohry > *Date: *Saturday, April 20, 2024 at 1:35 PM > *To: *PETSc > *Subject: *[petsc-users] finite difference jacobian errors when given > non-constant initial condition > > I have a 1-dimensional P1 discontinuous Galerkin discretization of the > linear advection equation with 4 cells and periodic boundaries on > [-pi,+pi]. I'm comparing the results from SNESComputeJacobian with a > hand-written Jacobian. Being linear, > > ZjQcmQRYFpfptBannerStart > > *This Message Is From an External Sender * > > This message came from outside your organization. > > > > ZjQcmQRYFpfptBannerEnd > > I have a 1-dimensional P1 discontinuous Galerkin discretization of the > linear advection equation with 4 cells and periodic boundaries on > [-pi,+pi]. I'm comparing the results from SNESComputeJacobian with a > hand-written Jacobian. Being linear, the Jacobian should be > constant/independent of the solution. > > > > When I set the initial condition passed to SNESComputeJacobian as some > constant, say f(x)=1 or 0, the petsc finite difference jacobian agrees with > my hand coded-version. But when I pass it some non-constant value, e.g. > f(x)=sin(x), something goes horribly wrong in the petsc jacobian. > Implementing my own rudimentary finite difference approximation (similar to > how I thought petsc computes it) it returns the correct jacobian to > expected error. Any idea what could be going on? 
> > > > Analytically computed Jacobian: > > 4.44089e-16 -1.10266 0.31831 -0.0852909 0 > 0 -0.31831 1.18795 > 1.10266 -4.44089e-16 -1.18795 0.31831 0 > 0 0.0852909 -0.31831 > -0.31831 1.18795 4.44089e-16 -1.10266 0.31831 > -0.0852909 0 0 > 0.0852909 -0.31831 1.10266 -4.44089e-16 -1.18795 > 0.31831 0 0 > 0 0 -0.31831 1.18795 4.44089e-16 > -1.10266 0.31831 -0.0852909 > 0 0 0.0852909 -0.31831 1.10266 > -4.44089e-16 -1.18795 0.31831 > 0.31831 -0.0852909 0 0 -0.31831 > 1.18795 4.44089e-16 -1.10266 > -1.18795 0.31831 0 0 0.0852909 > -0.31831 1.10266 -4.44089e-16 > > > > > > petsc finite difference jacobian when given f(x)=1: > > 4.44089e-16 -1.10266 0.31831 -0.0852909 0 > 0 -0.31831 1.18795 > 1.10266 -4.44089e-16 -1.18795 0.31831 0 > 0 0.0852909 -0.31831 > -0.31831 1.18795 4.44089e-16 -1.10266 0.31831 > -0.0852909 0 0 > 0.0852909 -0.31831 1.10266 -4.44089e-16 -1.18795 > 0.31831 0 0 > 0 0 -0.31831 1.18795 4.44089e-16 > -1.10266 0.31831 -0.0852909 > 0 0 0.0852909 -0.31831 1.10266 > -4.44089e-16 -1.18795 0.31831 > 0.31831 -0.0852909 0 0 -0.31831 > 1.18795 4.44089e-16 -1.10266 > -1.18795 0.31831 0 0 0.0852909 > -0.31831 1.10266 -4.44089e-16 > > > > petsc finite difference jacobian when given f(x) = sin(x): > > -1.65547e+08 -3.31856e+08 -1.25427e+09 4.4844e+08 0 > 0 1.03206e+08 7.86375e+07 > 9.13788e+07 1.83178e+08 6.92336e+08 -2.4753e+08 0 > 0 -5.69678e+07 -4.34064e+07 > 3.7084e+07 7.43387e+07 2.80969e+08 -1.00455e+08 -5.0384e+07 > -2.99747e+07 0 0 > 3.7084e+07 7.43387e+07 2.80969e+08 -1.00455e+08 -5.0384e+07 > -2.99747e+07 0 0 > 0 0 2.80969e+08 -1.00455e+08 -5.0384e+07 > -2.99747e+07 -2.31191e+07 -1.76155e+07 > 0 0 2.80969e+08 -1.00455e+08 -5.0384e+07 > -2.99747e+07 -2.31191e+07 -1.76155e+07 > 9.13788e+07 1.83178e+08 0 0 -1.24151e+08 > -7.38608e+07 -5.69678e+07 -4.34064e+07 > -1.65547e+08 -3.31856e+08 0 0 2.24919e+08 > 1.3381e+08 1.03206e+08 7.86375e+07 > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at petsc.dev Fri Jul 12 18:01:27 2024 From: bsmith at petsc.dev (Barry Smith) Date: Fri, 12 Jul 2024 19:01:27 -0400 Subject: [petsc-users] finite difference jacobian errors when given non-constant initial condition In-Reply-To: References: Message-ID: So long as you called SNESSetJacobian(snes,A,A,SNESComputeJacobianDefaultColor); and the matrix A has the correct nonzero pattern it should work (I assume you are not attaching a DM to the SNES?). > On Jul 12, 2024, at 4:09?PM, Mark Lohry wrote: > > This Message Is From an External Sender > This message came from outside your organization. > The root cause of this turned out to be that I took code that i had historically used for JFNK solves via TS and SNESSolve interfaces. Here instead of calling SNESSolve I only wanted to compute the jacobian using what i had set up as to be computed via finite differences via coloring. when calling SNESSolve i confirmed it does generate the expected matrix, but if instead i directly call > > SNESComputeJacobian(ctx.snes_, petscsoln, ctx.JPre_, ctx.JPre_); // this is computing without coloring > > This appears to produce a matrix that is erroneous in some spots due to not using coloring. Is there some other way to get the jacobian only? i tried to follow similar steps as what is in the actual SNESSolve but i seem to have botched it somehow. 
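For reference, a minimal sketch of the sequence described above, using the ctx.* and petscsoln names from the quoted code; it assumes the residual is registered with SNESSetFunction() and that JPre_ is preallocated with its final nonzero pattern:

/* Compute only the coloring-based finite-difference Jacobian, without SNESSolve() */
PetscCall(SNESSetJacobian(ctx.snes_, ctx.JPre_, ctx.JPre_, SNESComputeJacobianDefaultColor, NULL));
PetscCall(SNESSetUp(ctx.snes_));
PetscCall(SNESComputeJacobian(ctx.snes_, petscsoln, ctx.JPre_, ctx.JPre_));

With SNESComputeJacobianDefaultColor set as the Jacobian callback (and a NULL context), the coloring is derived from the nonzero pattern of the matrix, so the direct SNESComputeJacobian() call then uses coloring just as a full SNESSolve() would.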
> > On Sun, Apr 21, 2024 at 3:36?PM Zou, Ling > wrote: >> The other symptom is the same: >> >> Using coloring, finite differencing respects the specified non-zero pattern, but gives wrong (very large) Jacobian entries (J_ij) >> Using dense matrix assumption, finite differencing does not respect the non-zero pattern determined by your numeric, which is a clear sign of residual function code bug (your residual function does not respect your numeric). >> -Ling >> >> >> >> From: petsc-users > on behalf of Zou, Ling via petsc-users > >> Date: Sunday, April 21, 2024 at 2:28 PM >> To: Mark Lohry > >> Cc: PETSc > >> Subject: Re: [petsc-users] finite difference jacobian errors when given non-constant initial condition >> >> Very interesting. >> >> I happened to encounter something very similar a couple of days ago, which, of course, was due to a code bug I introduced. The code bug was in the residual function. I used a local vector to track ?heat flux?, which should be zero-ed out at the beginning of each residual function evaluation. I did not zero it, and I observed very similar results, the Jacobian is completely wrong, with large values (J_ij keeps increasing after each iteration), and non-zero values are observed in locations which should be perfect zero. The symptom is very much like what you are seeing here. I suspect a similar bug. (Maybe you forgot zero the coefficients of P1 re-construction? Using constant value 1, reconstructed dphi/dx = 0, so however many iterations, still zero). >> >> >> >> -Ling >> >> >> >> From: Mark Lohry > >> Date: Sunday, April 21, 2024 at 12:35 PM >> To: Zou, Ling > >> Cc: PETSc > >> Subject: Re: [petsc-users] finite difference jacobian errors when given non-constant initial condition >> >> The coloring I'm fairly confident is correct -- I use the same process for 3D unstructured grids and everything seems to work. The residual function is also validated. As a test I did as you suggested -- assume the matrix is dense -- and >> >> ZjQcmQRYFpfptBannerStart >> >> This Message Is From an External Sender >> >> This message came from outside your organization. >> >> >> >> ZjQcmQRYFpfptBannerEnd >> >> The coloring I'm fairly confident is correct -- I use the same process for 3D unstructured grids and everything seems to work. The residual function is also validated. >> >> >> >> As a test I did as you suggested -- assume the matrix is dense -- and I get the same bad results, just now the zero blocks are filled. 
>> >> >> >> Assuming dense, giving it a constant vector, all is good: >> >> >> >> 4.23516e-16 -1.10266 0.31831 -0.0852909 0 0 -0.31831 1.18795 >> 1.10266 -4.23516e-16 -1.18795 0.31831 0 0 0.0852909 -0.31831 >> -0.31831 1.18795 2.11758e-16 -1.10266 0.31831 -0.0852909 0 0 >> 0.0852909 -0.31831 1.10266 -4.23516e-16 -1.18795 0.31831 0 0 >> 0 0 -0.31831 1.18795 2.11758e-16 -1.10266 0.31831 -0.0852909 >> 0 0 0.0852909 -0.31831 1.10266 -4.23516e-16 -1.18795 0.31831 >> 0.31831 -0.0852909 0 0 -0.31831 1.18795 4.23516e-16 -1.10266 >> >> >> >> >> >> Assuming dense, giving it sin(x), all is bad: >> >> >> >> -1.76177e+08 -6.07287e+07 -6.07287e+07 -1.76177e+08 1.76177e+08 6.07287e+07 6.07287e+07 1.76177e+08 >> -1.31161e+08 -4.52116e+07 -4.52116e+07 -1.31161e+08 1.31161e+08 4.52116e+07 4.52116e+07 1.31161e+08 >> 1.31161e+08 4.52116e+07 4.52116e+07 1.31161e+08 -1.31161e+08 -4.52116e+07 -4.52116e+07 -1.31161e+08 >> 1.76177e+08 6.07287e+07 6.07287e+07 1.76177e+08 -1.76177e+08 -6.07287e+07 -6.07287e+07 -1.76177e+08 >> 1.76177e+08 6.07287e+07 6.07287e+07 1.76177e+08 -1.76177e+08 -6.07287e+07 -6.07287e+07 -1.76177e+08 >> 1.31161e+08 4.52116e+07 4.52116e+07 1.31161e+08 -1.31161e+08 -4.52116e+07 -4.52116e+07 -1.31161e+08 >> -1.31161e+08 -4.52116e+07 -4.52116e+07 -1.31161e+08 1.31161e+08 4.52116e+07 4.52116e+07 1.31161e+08 >> -1.76177e+08 -6.07287e+07 -6.07287e+07 -1.76177e+08 1.76177e+08 6.07287e+07 6.07287e+07 1.76177e+08 >> >> >> >> Scratching my head over here... I've been using these routines successfully for years in much more complex code. >> >> >> >> On Sun, Apr 21, 2024 at 12:36?PM Zou, Ling > wrote: >> >> Edit: >> >> how do you do the coloring when using PETSc finite differencing? An incorrect coloring may give you wrong Jacobian. For debugging purpose, the simplest way to avoid an incorrect coloring is to assume the matrix is dense (slow but error proofing). If the numeric converges as expected, then fine tune your coloring to make it right and fast. >> >> >> >> >> From: petsc-users > on behalf of Zou, Ling via petsc-users > >> Date: Sunday, April 21, 2024 at 11:29 AM >> To: Mark Lohry >, PETSc > >> Subject: Re: [petsc-users] finite difference jacobian errors when given non-constant initial condition >> >> Hi Mark, I am working on a project having similar numeric you have, one-dimensional finite volume method with second-order slope limiter TVD, and PETSc finite differencing gives perfect Jacobian even for complex problems. >> >> So, I tend to believe that your implementation may have some problem. Some lessons I learned during my code development: >> >> >> >> how do you do the coloring when using PETSc finite differencing? An incorrect coloring may give you wrong Jacobian. The simplest way to avoid an incorrect coloring is to assume the matrix is dense (slow but error proofing). >> Residual function evaluation not correctly implemented can also lead to incorrect Jacobian. In your case, you may want to take a careful look at the order of execution, when to update your unknown vector, when to perform P1 reconstruction, and when to evaluate the residual. >> >> >> -Ling >> >> >> >> From: petsc-users > on behalf of Mark Lohry > >> Date: Saturday, April 20, 2024 at 1:35 PM >> To: PETSc > >> Subject: [petsc-users] finite difference jacobian errors when given non-constant initial condition >> >> I have a 1-dimensional P1 discontinuous Galerkin discretization of the linear advection equation with 4 cells and periodic boundaries on [-pi,+pi]. 
I'm comparing the results from SNESComputeJacobian with a hand-written Jacobian. Being linear, >> >> ZjQcmQRYFpfptBannerStart >> >> This Message Is From an External Sender >> >> This message came from outside your organization. >> >> >> >> ZjQcmQRYFpfptBannerEnd >> >> I have a 1-dimensional P1 discontinuous Galerkin discretization of the linear advection equation with 4 cells and periodic boundaries on [-pi,+pi]. I'm comparing the results from SNESComputeJacobian with a hand-written Jacobian. Being linear, the Jacobian should be constant/independent of the solution. >> >> >> >> When I set the initial condition passed to SNESComputeJacobian as some constant, say f(x)=1 or 0, the petsc finite difference jacobian agrees with my hand coded-version. But when I pass it some non-constant value, e.g. f(x)=sin(x), something goes horribly wrong in the petsc jacobian. Implementing my own rudimentary finite difference approximation (similar to how I thought petsc computes it) it returns the correct jacobian to expected error. Any idea what could be going on? >> >> >> >> Analytically computed Jacobian: >> >> 4.44089e-16 -1.10266 0.31831 -0.0852909 0 0 -0.31831 1.18795 >> 1.10266 -4.44089e-16 -1.18795 0.31831 0 0 0.0852909 -0.31831 >> -0.31831 1.18795 4.44089e-16 -1.10266 0.31831 -0.0852909 0 0 >> 0.0852909 -0.31831 1.10266 -4.44089e-16 -1.18795 0.31831 0 0 >> 0 0 -0.31831 1.18795 4.44089e-16 -1.10266 0.31831 -0.0852909 >> 0 0 0.0852909 -0.31831 1.10266 -4.44089e-16 -1.18795 0.31831 >> 0.31831 -0.0852909 0 0 -0.31831 1.18795 4.44089e-16 -1.10266 >> -1.18795 0.31831 0 0 0.0852909 -0.31831 1.10266 -4.44089e-16 >> >> >> >> >> >> petsc finite difference jacobian when given f(x)=1: >> >> 4.44089e-16 -1.10266 0.31831 -0.0852909 0 0 -0.31831 1.18795 >> 1.10266 -4.44089e-16 -1.18795 0.31831 0 0 0.0852909 -0.31831 >> -0.31831 1.18795 4.44089e-16 -1.10266 0.31831 -0.0852909 0 0 >> 0.0852909 -0.31831 1.10266 -4.44089e-16 -1.18795 0.31831 0 0 >> 0 0 -0.31831 1.18795 4.44089e-16 -1.10266 0.31831 -0.0852909 >> 0 0 0.0852909 -0.31831 1.10266 -4.44089e-16 -1.18795 0.31831 >> 0.31831 -0.0852909 0 0 -0.31831 1.18795 4.44089e-16 -1.10266 >> -1.18795 0.31831 0 0 0.0852909 -0.31831 1.10266 -4.44089e-16 >> >> >> >> petsc finite difference jacobian when given f(x) = sin(x): >> >> -1.65547e+08 -3.31856e+08 -1.25427e+09 4.4844e+08 0 0 1.03206e+08 7.86375e+07 >> 9.13788e+07 1.83178e+08 6.92336e+08 -2.4753e+08 0 0 -5.69678e+07 -4.34064e+07 >> 3.7084e+07 7.43387e+07 2.80969e+08 -1.00455e+08 -5.0384e+07 -2.99747e+07 0 0 >> 3.7084e+07 7.43387e+07 2.80969e+08 -1.00455e+08 -5.0384e+07 -2.99747e+07 0 0 >> 0 0 2.80969e+08 -1.00455e+08 -5.0384e+07 -2.99747e+07 -2.31191e+07 -1.76155e+07 >> 0 0 2.80969e+08 -1.00455e+08 -5.0384e+07 -2.99747e+07 -2.31191e+07 -1.76155e+07 >> 9.13788e+07 1.83178e+08 0 0 -1.24151e+08 -7.38608e+07 -5.69678e+07 -4.34064e+07 >> -1.65547e+08 -3.31856e+08 0 0 2.24919e+08 1.3381e+08 1.03206e+08 7.86375e+07 >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From ivanluthfi5 at gmail.com Sat Jul 13 04:53:51 2024 From: ivanluthfi5 at gmail.com (Ivan Luthfi) Date: Sat, 13 Jul 2024 17:53:51 +0800 Subject: [petsc-users] Help me for compiling my Code In-Reply-To: References: Message-ID: Hi Mr. Knepley, I already copy and edit the makefile shared by PETSC. 
And here is the modification i made to compile my codes: app : MsFEM_poisson2D_DMDA.o UserParameter.o FormFunction.o MsFEM.o PCMsFEM.o $(LINK.C) -o $@ $^ $(LDLIBS) MsFEM_poisson2D_DMDA.o: MsFEM_poisson2D_DMDA.c $(LINK.C) -o $@ $^ $(LDLIBS) UserParameter.o: UserParameter.c $(LINK.C) -o $@ $^ $(LDLIBS) FormFunction.o: FormFunction.c $(LINK.C) -o $@ $^ $(LDLIBS) MsFEM.o: MsFEM.c $(LINK.C) -o $@ $^ $(LDLIBS) PCMsFEM.o: PCMsFEM.c $(LINK.c) -o $@ $^ $(LDLIBS) clean: rm -rf app *.o However, after I compile it by using "make app" it said that i have "*** missing separator. Stop. " in line 24, which is in that first "$(LINK.C) -o ........". What is wrong from my makefile? Pada Jum, 12 Jul 2024 pukul 18.57 Matthew Knepley menulis: > On Fri, Jul 12, 2024 at 3:16?AM Ivan Luthfi wrote: > >> I try to compile my code, but i get this error. Anyone can help me? Here >> is my terminal: $make bin_MsFEM_poisson2D_DMDA mpicc -o >> bin_MsFEM_poisson2D_DMDA MsFEM_poisson2D_DMDA. o UserParameter. o >> FormFunction. o MsFEM. o PCMsFEM. o /home/ivan/petsc/opt-3. 21. >> 2/lib/libpetsc. so >> ZjQcmQRYFpfptBannerStart >> This Message Is From an External Sender >> This message came from outside your organization. >> >> ZjQcmQRYFpfptBannerEnd >> I try to compile my code, but i get this error. Anyone can help me? >> >> Here is my terminal: >> >> $make bin_MsFEM_poisson2D_DMDA >> >> mpicc -o bin_MsFEM_poisson2D_DMDA MsFEM_poisson2D_DMDA.o UserParameter.o >> FormFunction.o MsFEM.o PCMsFEM.o >> /home/ivan/petsc/opt-3.21.2/lib/libpetsc.so \ >> /home/ivan/petsc/opt-3.21.2/lib/libsuperlu_dist.so \ >> /home/ivan/petsc/opt-3.21.2/lib/libparmetis.so \ >> /home/ivan/petsc/opt-3.21.2/lib/libmetis.so \ >> /usr/lib64/atlas/liblapack.a /usr/lib64/libblas.so.3 >> /usr/bin/ld: cannot find /usr/lib64/atlas/liblapack.a: No such file or >> directory >> /usr/bin/ld: cannot find /usr/lib64/libblas.so.3: No such file or >> directory >> collect2: error: ld returned 1 exit status >> make: *** [makefile:18: bin_MsFEM_poisson2D_DMDA] Error 1 >> > > You are specifying libraries that do not exist. Do not do this. You can > use the PETSc Makefiles to build > this, as described in the manual: > > > https://urldefense.us/v3/__https://petsc.org/main/manual/getting_started/*sec-writing-application-codes__;Iw!!G_uCfscf7eWS!aywTBmwveIMzChLO3PC6clvS78cZUC_7xMDMvnoHJZcLrpBUjAjMHGljcIDKjFtgJTRkfdfaW-YgEW8bUxu7RSJD2w$ > > under the section "For adding PETSc to an existing application" > > THanks, > > Matt > > >> -- >> Best regards, >> >> Ivan Luthfi Ihwani >> > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!aywTBmwveIMzChLO3PC6clvS78cZUC_7xMDMvnoHJZcLrpBUjAjMHGljcIDKjFtgJTRkfdfaW-YgEW8bUxtdSlUYlw$ > > -- Best regards, Ivan Luthfi Ihwani -- Ivan Luthfi Ihwani Mobile: 08979341681 -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Sat Jul 13 06:12:33 2024 From: knepley at gmail.com (Matthew Knepley) Date: Sat, 13 Jul 2024 07:12:33 -0400 Subject: [petsc-users] Help me for compiling my Code In-Reply-To: References: Message-ID: On Sat, Jul 13, 2024 at 5:54?AM Ivan Luthfi wrote: > Hi Mr. Knepley, > I already copy and edit the makefile shared by PETSC. 
And here is the > modification i made to compile my codes: > > app : MsFEM_poisson2D_DMDA.o UserParameter.o FormFunction.o MsFEM.o > PCMsFEM.o > $(LINK.C) -o $@ $^ $(LDLIBS) > > MsFEM_poisson2D_DMDA.o: MsFEM_poisson2D_DMDA.c > $(LINK.C) -o $@ $^ $(LDLIBS) > > UserParameter.o: UserParameter.c > $(LINK.C) -o $@ $^ $(LDLIBS) > > FormFunction.o: FormFunction.c > $(LINK.C) -o $@ $^ $(LDLIBS) > > MsFEM.o: MsFEM.c > $(LINK.C) -o $@ $^ $(LDLIBS) > > PCMsFEM.o: PCMsFEM.c > $(LINK.c) -o $@ $^ $(LDLIBS) > > clean: > rm -rf app *.o > > However, after I compile it by using "make app" it said that i have "*** > missing separator. Stop. " in line 24, which is in that first "$(LINK.C) -o > ........". What is wrong from my makefile? > You used spaces instead of a tab at the beginning of that line. Thanks, Matt > > Pada Jum, 12 Jul 2024 pukul 18.57 Matthew Knepley > menulis: > >> On Fri, Jul 12, 2024 at 3:16?AM Ivan Luthfi >> wrote: >> >>> I try to compile my code, but i get this error. Anyone can help me? Here >>> is my terminal: $make bin_MsFEM_poisson2D_DMDA mpicc -o >>> bin_MsFEM_poisson2D_DMDA MsFEM_poisson2D_DMDA. o UserParameter. o >>> FormFunction. o MsFEM. o PCMsFEM. o /home/ivan/petsc/opt-3. 21. >>> 2/lib/libpetsc. so >>> ZjQcmQRYFpfptBannerStart >>> This Message Is From an External Sender >>> This message came from outside your organization. >>> >>> ZjQcmQRYFpfptBannerEnd >>> I try to compile my code, but i get this error. Anyone can help me? >>> >>> Here is my terminal: >>> >>> $make bin_MsFEM_poisson2D_DMDA >>> >>> mpicc -o bin_MsFEM_poisson2D_DMDA MsFEM_poisson2D_DMDA.o UserParameter.o >>> FormFunction.o MsFEM.o PCMsFEM.o >>> /home/ivan/petsc/opt-3.21.2/lib/libpetsc.so \ >>> /home/ivan/petsc/opt-3.21.2/lib/libsuperlu_dist.so \ >>> /home/ivan/petsc/opt-3.21.2/lib/libparmetis.so \ >>> /home/ivan/petsc/opt-3.21.2/lib/libmetis.so \ >>> /usr/lib64/atlas/liblapack.a /usr/lib64/libblas.so.3 >>> /usr/bin/ld: cannot find /usr/lib64/atlas/liblapack.a: No such file or >>> directory >>> /usr/bin/ld: cannot find /usr/lib64/libblas.so.3: No such file or >>> directory >>> collect2: error: ld returned 1 exit status >>> make: *** [makefile:18: bin_MsFEM_poisson2D_DMDA] Error 1 >>> >> >> You are specifying libraries that do not exist. Do not do this. You can >> use the PETSc Makefiles to build >> this, as described in the manual: >> >> >> https://urldefense.us/v3/__https://petsc.org/main/manual/getting_started/*sec-writing-application-codes__;Iw!!G_uCfscf7eWS!ZW-eOAeaybSu_nIZ2uX_NF-5zi5d5HL2RaW4WqSwuqjQVbSRMhNuMD1TMXk2GtRguFX9bJci4NMQdGQGJye6$ >> >> under the section "For adding PETSc to an existing application" >> >> THanks, >> >> Matt >> >> >>> -- >>> Best regards, >>> >>> Ivan Luthfi Ihwani >>> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> >> https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!ZW-eOAeaybSu_nIZ2uX_NF-5zi5d5HL2RaW4WqSwuqjQVbSRMhNuMD1TMXk2GtRguFX9bJci4NMQdKQPlJPn$ >> >> > > > -- > Best regards, > > Ivan Luthfi Ihwani > > -- > Ivan Luthfi Ihwani > Mobile: 08979341681 > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. 
-- Norbert Wiener https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!ZW-eOAeaybSu_nIZ2uX_NF-5zi5d5HL2RaW4WqSwuqjQVbSRMhNuMD1TMXk2GtRguFX9bJci4NMQdKQPlJPn$ -------------- next part -------------- An HTML attachment was scrubbed... URL: From balay.anl at fastmail.org Sat Jul 13 09:47:26 2024 From: balay.anl at fastmail.org (Satish Balay) Date: Sat, 13 Jul 2024 09:47:26 -0500 (CDT) Subject: [petsc-users] Help me for compiling my Code In-Reply-To: References: Message-ID: <268a33c6-152b-9305-f0af-8d277910f90d@fastmail.org> An HTML attachment was scrubbed... URL: From FERRANJ2 at my.erau.edu Sat Jul 13 15:39:37 2024 From: FERRANJ2 at my.erau.edu (Ferrand, Jesus A.) Date: Sat, 13 Jul 2024 20:39:37 +0000 Subject: [petsc-users] [EXTERNAL] Re: What exactly is the GlobalToNatural PetscSF of DMPlex/DM? In-Reply-To: References: Message-ID: Matt: Thank you for the reply. The bulk of it makes a lot of sense. Yes! That need to keep track of the original mesh numbers (AKA "Natural") is what I find pressing for my research group. Awesome! I was separately keeping track of these numbers using a PetscSection that I was inputting into DMSetLocalSection() but of the coordinate DM, not the plex. It is good to know the "correct" way to do it. "What is repetitive? It should be able to be automated." Absolutely as the intrinsic process is ubiquitous between mesh formats. What I meant by "repetitive" is the information that is reused by different API calls (namely, global stratum sizes, and local point numbers corresponding to owned DAG points). I need to define a struct to bookkeep this. It's not really an issue, rather a minor annoyance (for me). I need the stratum sizes to offset DMPlex numbering cells in range [0,nCell) and vertices ranging in [nCell,nCell+nVert) to other mesh numberings where cells range from [1, nCell] and vertices range from [1, nVert]. In my experience, this information is needed at least three (3) times, during coordinate writes, during element connectivity writes, and during DMLabel writes for BC's and other labelled data. This information I determine using a code snippet like this: PetscCall(PetscObjectGetComm((PetscObject)plex,&mpiComm)); PetscCallMPI(MPI_Comm_rank(mpiComm,&mpiRank)); PetscCallMPI(MPI_Comm_size(mpiComm,&mpiCommSize)); PetscCall(DMPlexCreatePointNumbering(plex,&GlobalNumberIS)); PetscCall(ISGetIndices(GlobalNumberIS,&IdxPtr)); PetscCall(DMPlexGetDepth(plex,&Depth)); PetscCall(PetscMalloc3(// Depth,&LocalIdxPtrPtr,//Indices in the local stratum to owned points. Depth,&pOwnedPtr,//Number of points in the local stratum that are owned. Depth,&GlobalStratumSizePtr//Global stratum size. )); for(PetscInt jj = 0;jj < Depth;jj++){ PetscCall(DMPlexGetDepthStratum(plex,jj,&pStart,&pEnd)); pOwnedPtr[jj] = 0; for(PetscInt ii = pStart;ii < pEnd;ii++){ if(IdxPtr[ii] >= 0) pOwnedPtr[jj]++; } PetscCallMPI(MPI_Allreduce(&pOwnedPtr[jj],&GlobalStratumSizePtr[jj],1,MPIU_INT,MPI_MAX,mpiComm)); PetscCall(PetscMalloc1(pOwnedPtr[jj],&LocalIdxPtrPtr[jj])); kk = 0; for(PetscInt ii = pStart;ii < pEnd; ii++){ if(IdxPtr[ii] >= 0){ LocalIdxPtrPtr[jj][kk] = ii; kk++; } } } PetscCall(ISRestoreIndices(GlobalNumberIS,&IdxPtr)); PetscCall(ISDestroy(&GlobalNumberIS)); ________________________________ From: Matthew Knepley Sent: Thursday, July 11, 2024 8:32 PM To: Ferrand, Jesus A. Cc: petsc-users at mcs.anl.gov Subject: [EXTERNAL] Re: [petsc-users] What exactly is the GlobalToNatural PetscSF of DMPlex/DM? 
CAUTION: This email originated outside of Embry-Riddle Aeronautical University. Do not click links or open attachments unless you recognize the sender and know the content is safe. On Mon, Jul 8, 2024 at 10:28?PM Ferrand, Jesus A. > wrote: This Message Is From an External Sender This message came from outside your organization. Dear PETSc team: Greetings. I keep working on mesh I/O utilities using DMPlex. Specifically for the output stage, I need a solid grasp on the global numbers and ideally how to set them into the DMPlex during an input operation and carrying the global numbers through API calls to DMPlexDistribute() or DMPlexMigrate() and hopefully also through some of the mesh adaption APIs. I was wondering if the GlobalToNatural PetscSF manages these global numbers. The next most useful object is the PointSF, but to me, it seems to only help establish DAG point ownership, not DAG point global indices. This is a good question, and gets at a design point of Plex. I don't believe global numbers are the "right" way to talk about mesh points, or even a very useful way to do it, for several reasons. Plex is designed to run just fine without any global numbers. It can, of course, produce them on command, as many people remain committed to their existence. Thus, the first idea is that global numbers should not be stored, since they can always be created on command very cheaply. It is much more costly to write global numbers to disk, or pull them through memory, than compute them. The second idea is that we use a combination of local numbers, namely (rank, point num) pairs, and PetscSF objects to establish sharing relations for parallel meshes. Global numbering is a particular traversal of a mesh, running over the locally owned parts of each mesh in local order. Thus an SF + a local order = a global order, and the local order is provided by the point numbering. The third idea is that a "natural" order is just the global order in which a mesh is first fed to Plex. When I redistribute and reorder for good performance, I keep track of a PetscSF that can map the mesh back to the original order in which it was provided. I see this as an unneeded expense, but many many people want output written in the original order (mostly because processing tools are so poor). This management is what we mean by GlobalToNatural. Otherwise, I have been working with the IS obtained from DMPlexGetPointNumbering() and manually determining global stratum sizes, offsets, and numbers by looking at the signs of the involuted index list that comes with that IS. It's working for now (I can monolithically write meshes to CGNS in parallel), but it is resulting in repetitive code that I will need for another mesh format that I want to support. What is repetitive? It should be able to be automated. Thanks, Matt Sincerely: J.A. Ferrand Embry-Riddle Aeronautical University - Daytona Beach - FL Ph.D. Candidate, Aerospace Engineering M.Sc. Aerospace Engineering B.Sc. Aerospace Engineering B.Sc. Computational Mathematics Phone: (386)-843-1829 Email(s): ferranj2 at my.erau.edu jesus.ferrand at gmail.com -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!cOc1e73UmcgVF_64Q8DkGuEXm_3Lk1ZXoBu6H6FnWzrEQyMfXwYg-ZEB1SEFgw06kHuPQglPr-avz-FygftLKEv-s4I$ -------------- next part -------------- An HTML attachment was scrubbed... 
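To make the GlobalToNatural bookkeeping described above concrete, here is a minimal sketch of the usual calling sequence (an illustration only, not code from this thread; it assumes a Plex dm that already has a local PetscSection set, since the natural ordering is recorded while the mesh is redistributed):

DM  dmDist = NULL;
Vec g, nat;

PetscCall(DMSetUseNatural(dm, PETSC_TRUE));         /* remember the order the mesh was first provided in */
PetscCall(DMPlexDistribute(dm, 0, NULL, &dmDist));  /* redistribution also records the global-to-natural SF */
if (dmDist) { PetscCall(DMDestroy(&dm)); dm = dmDist; }
PetscCall(DMGetGlobalVector(dm, &g));
/* ... fill g in the redistributed (global) ordering ... */
PetscCall(VecDuplicate(g, &nat));                   /* assumption: a vector to receive the natural ordering */
PetscCall(DMPlexGlobalToNaturalBegin(dm, g, nat));  /* nat now follows the original ("natural") ordering */
PetscCall(DMPlexGlobalToNaturalEnd(dm, g, nat));
PetscCall(VecDestroy(&nat));
PetscCall(DMRestoreGlobalVector(dm, &g));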
URL: From knepley at gmail.com Sat Jul 13 15:51:21 2024 From: knepley at gmail.com (Matthew Knepley) Date: Sat, 13 Jul 2024 16:51:21 -0400 Subject: [petsc-users] [EXTERNAL] Re: What exactly is the GlobalToNatural PetscSF of DMPlex/DM? In-Reply-To: References: Message-ID: On Sat, Jul 13, 2024 at 4:39?PM Ferrand, Jesus A. wrote: > Matt: > > Thank you for the reply. > The bulk of it makes a lot of sense. > Yes! That need to keep track of the original mesh numbers (AKA "Natural") > is what I find pressing for my research group. > Awesome! I was separately keeping track of these numbers using a > PetscSection that I was inputting into DMSetLocalSection() but of the > coordinate DM, not the plex. > It is good to know the "correct" way to do it. > > "What is repetitive? It should be able to be automated." > > Absolutely as the intrinsic process is ubiquitous between mesh formats. > What I meant by "repetitive" is the information that is reused by > different API calls (namely, global stratum sizes, and local point numbers > corresponding to owned DAG points). > I need to define a struct to bookkeep this. It's not really an issue, > rather a minor annoyance (for me). > I need the stratum sizes to offset DMPlex numbering cells in range > [0,nCell) and vertices ranging in [nCell,nCell+nVert) to other mesh > numberings where cells range from [1, nCell] and vertices range from [1, > nVert]. In my experience, this information is needed at least three (3) > times, during coordinate writes, during element connectivity writes, and > during DMLabel writes for BC's and other labelled data. > This is a good point, and I think supports my argument that these formats are insane. What you point you below is that the format demands a completely artificial division of points when writing. I don't do this when writing HDF5. This division can be recovered in linear time completely locally after a read, so I think by any metric is it crazy to put it in the file. However, I recognize that supporting previous formats is a good thing, so I do not complain too loudly :) Thanks, Matt > This information I determine using a code snippet like this: > PetscCall(PetscObjectGetComm((PetscObject)plex,&mpiComm)); > PetscCallMPI(MPI_Comm_rank(mpiComm,&mpiRank)); > PetscCallMPI(MPI_Comm_size(mpiComm,&mpiCommSize)); > PetscCall(DMPlexCreatePointNumbering(plex,&GlobalNumberIS)); > PetscCall(ISGetIndices(GlobalNumberIS,&IdxPtr)); > PetscCall(DMPlexGetDepth(plex,&Depth)); > PetscCall(PetscMalloc3(// > Depth,&LocalIdxPtrPtr,//Indices in the local stratum to owned points. > Depth,&pOwnedPtr,//Number of points in the local stratum that are owned. > Depth,&GlobalStratumSizePtr//Global stratum size. > )); > for(PetscInt jj = 0;jj < Depth;jj++){ > PetscCall(DMPlexGetDepthStratum(plex,jj,&pStart,&pEnd)); > pOwnedPtr[jj] = 0; > for(PetscInt ii = pStart;ii < pEnd;ii++){ > if(IdxPtr[ii] >= 0) pOwnedPtr[jj]++; > } > > PetscCallMPI(MPI_Allreduce(&pOwnedPtr[jj],&GlobalStratumSizePtr[jj],1,MPIU_INT,MPI_MAX,mpiComm)); > PetscCall(PetscMalloc1(pOwnedPtr[jj],&LocalIdxPtrPtr[jj])); > kk = 0; > for(PetscInt ii = pStart;ii < pEnd; ii++){ > if(IdxPtr[ii] >= 0){ > LocalIdxPtrPtr[jj][kk] = ii; > kk++; > } > } > } > PetscCall(ISRestoreIndices(GlobalNumberIS,&IdxPtr)); > PetscCall(ISDestroy(&GlobalNumberIS)); > ------------------------------ > *From:* Matthew Knepley > *Sent:* Thursday, July 11, 2024 8:32 PM > *To:* Ferrand, Jesus A. 
> *Cc:* petsc-users at mcs.anl.gov > *Subject:* [EXTERNAL] Re: [petsc-users] What exactly is the > GlobalToNatural PetscSF of DMPlex/DM? > > *CAUTION:* This email originated outside of Embry-Riddle Aeronautical > University. Do not click links or open attachments unless you recognize the > sender and know the content is safe. > > On Mon, Jul 8, 2024 at 10:28?PM Ferrand, Jesus A. > wrote: > > This Message Is From an External Sender > This message came from outside your organization. > > Dear PETSc team: > > Greetings. > I keep working on mesh I/O utilities using DMPlex. > Specifically for the output stage, I need a solid grasp on the global > numbers and ideally how to set them into the DMPlex during an input > operation and carrying the global numbers through API calls to > DMPlexDistribute() or DMPlexMigrate() and hopefully also through some of > the mesh adaption APIs. I was wondering if the GlobalToNatural PetscSF > manages these global numbers. The next most useful object is the PointSF, > but to me, it seems to only help establish DAG point ownership, not DAG > point global indices. > > > This is a good question, and gets at a design point of Plex. I don't > believe global numbers are the "right" way to talk about mesh points, or > even a very useful way to do it, for several reasons. Plex is designed to > run just fine without any global numbers. It can, of course, produce > them on command, as many people remain committed to their existence. > > Thus, the first idea is that global numbers should not be stored, since > they can always be created on command very cheaply. It is much more > costly to write global numbers to disk, or pull them through memory, than > compute them. > > The second idea is that we use a combination of local numbers, namely > (rank, point num) pairs, and PetscSF objects to establish sharing relations > for parallel meshes. Global numbering is a particular traversal of a mesh, > running over the locally owned parts of each mesh in local order. Thus an > SF + a local order = a global order, and the local order is provided by the > point numbering. > > The third idea is that a "natural" order is just the global order in which > a mesh is first fed to Plex. When I redistribute and reorder for good > performance, I keep track of a PetscSF that can map the mesh back to the > original order in which it was provided. I see this as an unneeded expense, > but many many people want output written in the original order (mostly > because processing tools are so poor). This management is what we mean by > GlobalToNatural. > > > Otherwise, I have been working with the IS obtained from > DMPlexGetPointNumbering() and manually determining global stratum sizes, > offsets, and numbers by looking at the signs of the involuted index list > that comes with that IS. It's working for now (I can monolithically write > meshes to CGNS in parallel), but it is resulting in repetitive code that I > will need for another mesh format that I want to support. > > > What is repetitive? It should be able to be automated. > > Thanks, > > Matt > > > Sincerely: > > *J.A. Ferrand* > > Embry-Riddle Aeronautical University - Daytona Beach - FL > Ph.D. Candidate, Aerospace Engineering > > M.Sc. Aerospace Engineering > > B.Sc. Aerospace Engineering > > B.Sc. 
Computational Mathematics > > > *Phone:* (386)-843-1829 > > *Email(s):* ferranj2 at my.erau.edu > > jesus.ferrand at gmail.com > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!bLSa5HDys-IGRjJaDJJCuDR-HrJx0HChqgMn-bRU4SNdeVJHOS_DDTqpyPcVlgdSHj-_cfoWf8BgbaCVtjAL$ > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!bLSa5HDys-IGRjJaDJJCuDR-HrJx0HChqgMn-bRU4SNdeVJHOS_DDTqpyPcVlgdSHj-_cfoWf8BgbaCVtjAL$ -------------- next part -------------- An HTML attachment was scrubbed... URL: From ivanluthfi5 at gmail.com Sun Jul 14 03:54:44 2024 From: ivanluthfi5 at gmail.com (Ivan Luthfi) Date: Sun, 14 Jul 2024 16:54:44 +0800 Subject: [petsc-users] Error in using KSPSOperators Message-ID: Hi there, I have an issue in compiling my code. Here is the warning: MsFEM_poisson2D_DMDA.c:159:65: error: cannot convert 'MatStructure' to 'Mat' {aka '_p_Mat*'} 159 | ierr = KSPSetOperators(ksp_direct,up.Af,DIFFERENT_NONZERO_PATTERN);CHKERRQ(ierr); Please help me -- Best regards, -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Sun Jul 14 07:00:01 2024 From: knepley at gmail.com (Matthew Knepley) Date: Sun, 14 Jul 2024 08:00:01 -0400 Subject: [petsc-users] [petsc-maint] Error in using KSPSOperators In-Reply-To: References: Message-ID: On Sun, Jul 14, 2024 at 4:55 AM Ivan Luthfi wrote: > Hi there, I have an issue in compiling my code. Here is the warning: > > MsFEM_poisson2D_DMDA.c:159:65: error: cannot convert 'MatStructure' to > 'Mat' {aka '_p_Mat*'} > 159 | ierr = > KSPSetOperators(ksp_direct,up.Af,DIFFERENT_NONZERO_PATTERN);CHKERRQ(ierr); > > We removed the MatStructure flag in this call many years ago. The structure is now inferred automatically. Thanks, Matt > Please help me > -- > Best regards, > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!brkfTnuKK7Tio9g62K-XaKr_oEVo9jMQ5WVmR9pS2s7WZ5hJIEsUHwPAMXX0r368HQnLIeVYuGXDXGg-pObs$ -------------- next part -------------- An HTML attachment was scrubbed... URL: From alex.sp2408 at gmail.com Sun Jul 14 17:04:14 2024 From: alex.sp2408 at gmail.com (Alex Sp) Date: Mon, 15 Jul 2024 01:04:14 +0300 Subject: [petsc-users] Matrix assembly takes too long Message-ID: Hello, I am trying to solve the problem of the 3D fluid flow around a cylinder using an in-house finite element Fortran code. I want to apply periodic boundary conditions in the neutral direction (the z direction which coincides with the axis of the cylinder).
The method works fine and it gives correct results. However the assembly of the matrix for the Newton Raphson method takes too long. The whole process takes about 10 minutes for a single Newton iteration. For the same system size it takes only 10 seconds if instead of periodic conditions I set wall boundaries (no slip, no penetration). What is also strange is that the very first matrix assembly is quite normal (takes about 30 seconds). What could be the reason for this? It seems like some matrix entries are allocated anew every time. However this happens only when I apply periodicity. I do not do any matrix preallocation. Thanks in advance, Alex -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at petsc.dev Sun Jul 14 17:36:44 2024 From: bsmith at petsc.dev (Barry Smith) Date: Sun, 14 Jul 2024 18:36:44 -0400 Subject: [petsc-users] Matrix assembly takes too long In-Reply-To: References: Message-ID: <4ABBC112-877E-48F6-A2DA-2FC5D37C594E@petsc.dev> > On Jul 14, 2024, at 6:04?PM, Alex Sp wrote: > > This Message Is From an External Sender > This message came from outside your organization. > Hello, > > I am trying to solve the problem of the 3D fluid flow around a cylinder using an in-house finite element Fortran code. I want to apply periodic boundary conditions in the neutral direction (the z direction which coincides with the axis of the cylinder). The method works fine and it gives correct results. However the assembly of the matrix for the Newton Raphson method takes too long. The whole process takes about 10 minutes for a single Newton iteration. For the same system size it takes only 10 seconds if instead of periodic conditions I set wall boundaries (no slip, no penetration). What is also strange is that the very first matrix assembly is quite normal (takes about 30 seconds). Does the matrix non-zero structure change between the first and later matrix assemblies? Your behavior is generally a result of new non-zero locations being entered into the matrix after the first assembly is done. What version of PETSc are you using? You can run with the option -info and grep for mallocs to see if additional mallocs are needed > What could be the reason for this? It seems like some matrix entries are allocated anew every time. However this happens only when I apply periodicity. I do not do any matrix preallocation. > > Thanks in advance, > Alex -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at petsc.dev Mon Jul 15 08:27:27 2024 From: bsmith at petsc.dev (Barry Smith) Date: Mon, 15 Jul 2024 09:27:27 -0400 Subject: [petsc-users] Matrix assembly takes too long In-Reply-To: References: <4ABBC112-877E-48F6-A2DA-2FC5D37C594E@petsc.dev> Message-ID: Call MatSetOption(mat,MAT_KEEP_NONZERO_PATTERN,PETSC_TRUE); Before calling MatZeroRows() the first time, insert the periodic condition extra column value into the matrix for each periodic row so the matrix "has room" for that extra value. With this change, there should never be any mallocs, and the process should be pretty fast. Barry > On Jul 15, 2024, at 5:19?AM, Alex Sp wrote: > > I am using versions 3.19 and 3.20. > > Indeed I get extra mallocs during MatSetValues(). The problem is somewhere in the imposition of periodicity. Since the condition for any unknown u is: u(z=-W) - u(z=+W) = 0, what I do is to store in all processes the global node numbers in these boundaries. Then I call MatZeroRows() for the rows corresponding to (z=-W) and set 1 in the diagonal. 
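A minimal sketch of the recipe Barry gives above, written in C for brevity (the same calls exist from Fortran); here A is the Jacobian, and row_m / col_p are made-up global indices for a node on z = -W and its periodic partner column on z = +W. The point is that the coupling entry gets a slot during the very first assembly, so the later MatZeroRows()/MatSetValues() calls never allocate anything new:

/* once, during the first assembly */
PetscCall(MatSetOption(A, MAT_KEEP_NONZERO_PATTERN, PETSC_TRUE));
PetscCall(MatSetValue(A, row_m, col_p, 0.0, INSERT_VALUES));   /* reserve room for the periodic entry */
PetscCall(MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY));
PetscCall(MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY));

/* every Newton iteration: the slot already exists, so no extra mallocs */
PetscCall(MatZeroRows(A, 1, &row_m, 1.0, NULL, NULL));
PetscCall(MatSetValue(A, row_m, col_p, -1.0, INSERT_VALUES));
PetscCall(MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY));
PetscCall(MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY));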
Finally, I call MatSetValues() for the same rows and the columns corresponding to (z=+W) to insert the -1 value. I suspect that this creates the problem but why does it keep adding mallocs after the first assembly? > Here is part of the -info output: > > [0] PetscCommDuplicate(): Using internal PETSc communicator 1140850688 -2080374781 > [0] PetscCommDuplicate(): Using internal PETSc communicator 1140850689 -2080374778 > [0] PetscCommDuplicate(): Using internal PETSc communicator 1140850689 -2080374778 > [0] VecAssemblyBegin_MPI_BTS(): Stash has 11980 entries, uses 3 mallocs. > [0] VecAssemblyBegin_MPI_BTS(): Block-Stash has 0 entries, uses 0 mallocs. > [0] MatAssemblyBegin_MPIAIJ(): Stash has 5529600 entries, uses 0 mallocs. > [0] VecAssemblyBegin_MPI_BTS(): Stash has 0 entries, uses 0 mallocs. > [0] VecAssemblyBegin_MPI_BTS(): Block-Stash has 0 entries, uses 0 mallocs. > [0] MatAssemblyBegin_MPIAIJ(): Stash has 0 entries, uses 0 mallocs. > [0] MatAssemblyEnd_SeqAIJ(): Matrix size: 13835 X 36985; storage space: 1990 unneeded,2343675 used > [0] MatAssemblyEnd_SeqAIJ(): Number of mallocs during MatSetValues() is 20339 > [0] MatAssemblyEnd_SeqAIJ(): Maximum nonzeros in any row is 330 > [0] MatCheckCompressedRow(): Found the ratio (num_zerorows 0)/(num_localrows 13835) < 0.6. Do not use CompressedRow routines. > > On Mon, 15 Jul 2024 at 01:36, Barry Smith > wrote: >> >> >>> On Jul 14, 2024, at 6:04?PM, Alex Sp > wrote: >>> >>> This Message Is From an External Sender >>> This message came from outside your organization. >>> Hello, >>> >>> I am trying to solve the problem of the 3D fluid flow around a cylinder using an in-house finite element Fortran code. I want to apply periodic boundary conditions in the neutral direction (the z direction which coincides with the axis of the cylinder). The method works fine and it gives correct results. However the assembly of the matrix for the Newton Raphson method takes too long. The whole process takes about 10 minutes for a single Newton iteration. For the same system size it takes only 10 seconds if instead of periodic conditions I set wall boundaries (no slip, no penetration). What is also strange is that the very first matrix assembly is quite normal (takes about 30 seconds). >> >> Does the matrix non-zero structure change between the first and later matrix assemblies? Your behavior is generally a result of new non-zero locations being entered into the matrix after the first assembly is done. >> >> What version of PETSc are you using? >> >> You can run with the option -info and grep for mallocs to see if additional mallocs are needed >> >> >> >>> What could be the reason for this? It seems like some matrix entries are allocated anew every time. However this happens only when I apply periodicity. I do not do any matrix preallocation. >>> >>> Thanks in advance, >>> Alex >> > > > -- > Alexandros Spyridakis > PhD Candidate > Department of Chemical Engineering, University of Patras, Greece > Laboratory of Fluid Mechanics and Rheology -------------- next part -------------- An HTML attachment was scrubbed... URL: From ivanluthfi5 at gmail.com Tue Jul 16 08:54:25 2024 From: ivanluthfi5 at gmail.com (Ivan Luthfi) Date: Tue, 16 Jul 2024 21:54:25 +0800 Subject: [petsc-users] Many warning in my code Message-ID: Hello guys, I am still trying to compile my old multigrid code. But i get so many warning, one of those warning is like this: MsFEM_poisson2D_DMDA.c: In function ?int main(int, char**)?: MsFEM_poisson2D_DMDA.c:185:48: warning: variable ?finest? 
set but not used [-Wunused-but-set-variable] 185 | PetscInt mg_level = 2, finest; | ^~~~~~ MsFEM_poisson2D_DMDA.c:56:44: warning: variable ?pi? set but not used [-Wunused-but-set-variable] 56 | PetscScalar a,b,c,d,dt,pi; | ^~ MsFEM_poisson2D_DMDA.c:57:35: warning: variable ?Lx? set but not used [-Wunused-but-set-variable] 57 | PetscInt M,Lx,Ly,Nx,Ny,Mx,My; | ^~ MsFEM_poisson2D_DMDA.c:57:38: warning: variable ?Ly? set but not used [-Wunused-but-set-variable] 57 | PetscInt M,Lx,Ly,Nx,Ny,Mx,My; | ^~ MsFEM_poisson2D_DMDA.c:58:33: warning: variable ?hx? set but not used [-Wunused-but-set-variable] 58 | PetscScalar hx,hy,Hx,Hy; | ^~ MsFEM_poisson2D_DMDA.c:58:36: warning: variable ?hy? set but not used [-Wunused-but-set-variable] 58 | PetscScalar hx,hy,Hx,Hy; | ^~ MsFEM_poisson2D_DMDA.c:58:39: warning: variable ?Hx? set but not used [-Wunused-but-set-variable] 58 | PetscScalar hx,hy,Hx,Hy; | ^~ MsFEM_poisson2D_DMDA.c:58:42: warning: variable ?Hy? set but not used [-Wunused-but-set-variable] 58 | PetscScalar hx,hy,Hx,Hy; | ^~ MsFEM_poisson2D_DMDA.c:60:58: warning: variable ?Nondimensionalization? set but not used [-Wunused-but-set-variable] 60 | PetscInt Compute_finegridsolution,Nondimensionalization; | ^~~~~~~~~~~~~~~~~~~~~ MsFEM_poisson2D_DMDA.c:283:54: warning: ?%d? directive writing between 1 and 11 bytes into a region of size between 0 and 99 Can you guys help me to fix or solve this warning in order to get the code run smoothly. Please help -- Best regards, Ivan Luthfi Ihwani -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Tue Jul 16 09:12:50 2024 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 16 Jul 2024 10:12:50 -0400 Subject: [petsc-users] Many warning in my code In-Reply-To: References: Message-ID: We cannot see your code. The warning says that you give a value to a variable, but then never use it. We cannot tell if that is true without looking at the code. Thanks, Matt On Tue, Jul 16, 2024 at 9:54?AM Ivan Luthfi wrote: > Hello guys, I am still trying to compile my old multigrid code. But i get > so many warning, one of those warning is like this: MsFEM_poisson2D_DMDA. > c: In function ?int main(int, char**)?: MsFEM_poisson2D_DMDA. c: 185: 48: > warning: variable ?finest? > ZjQcmQRYFpfptBannerStart > This Message Is From an External Sender > This message came from outside your organization. > > ZjQcmQRYFpfptBannerEnd > Hello guys, > I am still trying to compile my old multigrid code. But i get so many > warning, one of those warning is like this: > MsFEM_poisson2D_DMDA.c: In function ?int main(int, char**)?: > MsFEM_poisson2D_DMDA.c:185:48: warning: variable ?finest? set but not used > [-Wunused-but-set-variable] > 185 | PetscInt mg_level = 2, finest; > | ^~~~~~ > MsFEM_poisson2D_DMDA.c:56:44: warning: variable ?pi? set but not used > [-Wunused-but-set-variable] > 56 | PetscScalar a,b,c,d,dt,pi; > | ^~ > MsFEM_poisson2D_DMDA.c:57:35: warning: variable ?Lx? set but not used > [-Wunused-but-set-variable] > 57 | PetscInt M,Lx,Ly,Nx,Ny,Mx,My; > | ^~ > MsFEM_poisson2D_DMDA.c:57:38: warning: variable ?Ly? set but not used > [-Wunused-but-set-variable] > 57 | PetscInt M,Lx,Ly,Nx,Ny,Mx,My; > | ^~ > MsFEM_poisson2D_DMDA.c:58:33: warning: variable ?hx? set but not used > [-Wunused-but-set-variable] > 58 | PetscScalar hx,hy,Hx,Hy; > | ^~ > MsFEM_poisson2D_DMDA.c:58:36: warning: variable ?hy? set but not used > [-Wunused-but-set-variable] > 58 | PetscScalar hx,hy,Hx,Hy; > | ^~ > MsFEM_poisson2D_DMDA.c:58:39: warning: variable ?Hx? 
set but not used > [-Wunused-but-set-variable] > 58 | PetscScalar hx,hy,Hx,Hy; > | ^~ > MsFEM_poisson2D_DMDA.c:58:42: warning: variable ?Hy? set but not used > [-Wunused-but-set-variable] > 58 | PetscScalar hx,hy,Hx,Hy; > | ^~ > MsFEM_poisson2D_DMDA.c:60:58: warning: variable ?Nondimensionalization? > set but not used [-Wunused-but-set-variable] > 60 | PetscInt > Compute_finegridsolution,Nondimensionalization; > | > ^~~~~~~~~~~~~~~~~~~~~ > MsFEM_poisson2D_DMDA.c:283:54: warning: ?%d? directive writing between 1 > and 11 bytes into a region of size between 0 and 99 > > > Can you guys help me to fix or solve this warning in order to get the code > run smoothly. Please help > > > -- > Best regards, > > Ivan Luthfi Ihwani > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!awUD7DouA4Cjqw1gYsc2V7I8PVc9ojQXaCMFCcPwBUMEXEYkc9N6wUcfv_EYtLHhF_rlH_rNmpGvlGD9ZfYP$ -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at petsc.dev Tue Jul 16 11:30:21 2024 From: bsmith at petsc.dev (Barry Smith) Date: Tue, 16 Jul 2024 12:30:21 -0400 Subject: [petsc-users] Many warning in my code In-Reply-To: References: Message-ID: <29EF47D4-65F2-4EDD-BB55-8F822C596789@petsc.dev> Recent versions of compilers provide a great deal more warnings and errors than older versions for "sloppy" code, such as assigning a variable but then never using it. The compiler doesn't know if you made a mistake like misspelling a variable name or forgetting to use a variable so it prints warnings that previous versions of the compilers did not. The best way to proceed is to go through all the warnings and "fix" your code to prevent the warnings from appearing. It is also possible to tell the compiler to not warn you about some of the problems with extra compiler options (for example passing the compiler flag -Wno-unused-but-set-variable) but that is not a good solution, best to fix the code. Barry > On Jul 16, 2024, at 9:54?AM, Ivan Luthfi wrote: > > This Message Is From an External Sender > This message came from outside your organization. > Hello guys, > I am still trying to compile my old multigrid code. But i get so many warning, one of those warning is like this: > MsFEM_poisson2D_DMDA.c: In function ?int main(int, char**)?: > MsFEM_poisson2D_DMDA.c:185:48: warning: variable ?finest? set but not used [-Wunused-but-set-variable] > 185 | PetscInt mg_level = 2, finest; > | ^~~~~~ > MsFEM_poisson2D_DMDA.c:56:44: warning: variable ?pi? set but not used [-Wunused-but-set-variable] > 56 | PetscScalar a,b,c,d,dt,pi; > | ^~ > MsFEM_poisson2D_DMDA.c:57:35: warning: variable ?Lx? set but not used [-Wunused-but-set-variable] > 57 | PetscInt M,Lx,Ly,Nx,Ny,Mx,My; > | ^~ > MsFEM_poisson2D_DMDA.c:57:38: warning: variable ?Ly? set but not used [-Wunused-but-set-variable] > 57 | PetscInt M,Lx,Ly,Nx,Ny,Mx,My; > | ^~ > MsFEM_poisson2D_DMDA.c:58:33: warning: variable ?hx? set but not used [-Wunused-but-set-variable] > 58 | PetscScalar hx,hy,Hx,Hy; > | ^~ > MsFEM_poisson2D_DMDA.c:58:36: warning: variable ?hy? set but not used [-Wunused-but-set-variable] > 58 | PetscScalar hx,hy,Hx,Hy; > | ^~ > MsFEM_poisson2D_DMDA.c:58:39: warning: variable ?Hx? set but not used [-Wunused-but-set-variable] > 58 | PetscScalar hx,hy,Hx,Hy; > | ^~ > MsFEM_poisson2D_DMDA.c:58:42: warning: variable ?Hy? 
set but not used [-Wunused-but-set-variable] > 58 | PetscScalar hx,hy,Hx,Hy; > | ^~ > MsFEM_poisson2D_DMDA.c:60:58: warning: variable ?Nondimensionalization? set but not used [-Wunused-but-set-variable] > 60 | PetscInt Compute_finegridsolution,Nondimensionalization; > | ^~~~~~~~~~~~~~~~~~~~~ > MsFEM_poisson2D_DMDA.c:283:54: warning: ?%d? directive writing between 1 and 11 bytes into a region of size between 0 and 99 > > > Can you guys help me to fix or solve this warning in order to get the code run smoothly. Please help > > > -- > Best regards, > > Ivan Luthfi Ihwani -------------- next part -------------- An HTML attachment was scrubbed... URL: From ivanluthfi5 at gmail.com Thu Jul 18 02:07:34 2024 From: ivanluthfi5 at gmail.com (Ivan Luthfi) Date: Thu, 18 Jul 2024 15:07:34 +0800 Subject: [petsc-users] Warning and Error in Makefile Message-ID: Hi friend, I get many warning (but its ok for the warning). However I didnt get the result of my code when i compile it, is there any possible mistake in my makefile? can you please help me? The attached file is my error and warning (error in makefile line 27) , and the makefile . Best regards, -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- MsFEM_poisson2D_DMDA.c: In function ?int main(int, char**)?: MsFEM_poisson2D_DMDA.c:184:48: warning: variable ?finest? set but not used [-Wunused-but-set-variable] 184 | PetscInt mg_level = 2, finest; | ^~~~~~ MsFEM_poisson2D_DMDA.c:55:44: warning: variable ?pi? set but not used [-Wunused-but-set-variable] 55 | PetscScalar a,b,c,d,dt,pi; | ^~ MsFEM_poisson2D_DMDA.c:56:35: warning: variable ?Lx? set but not used [-Wunused-but-set-variable] 56 | PetscInt M,Lx,Ly,Nx,Ny,Mx,My; | ^~ MsFEM_poisson2D_DMDA.c:56:38: warning: variable ?Ly? set but not used [-Wunused-but-set-variable] 56 | PetscInt M,Lx,Ly,Nx,Ny,Mx,My; | ^~ MsFEM_poisson2D_DMDA.c:57:33: warning: variable ?hx? set but not used [-Wunused-but-set-variable] 57 | PetscScalar hx,hy,Hx,Hy; | ^~ MsFEM_poisson2D_DMDA.c:57:36: warning: variable ?hy? set but not used [-Wunused-but-set-variable] 57 | PetscScalar hx,hy,Hx,Hy; | ^~ MsFEM_poisson2D_DMDA.c:57:39: warning: variable ?Hx? set but not used [-Wunused-but-set-variable] 57 | PetscScalar hx,hy,Hx,Hy; | ^~ MsFEM_poisson2D_DMDA.c:57:42: warning: variable ?Hy? set but not used [-Wunused-but-set-variable] 57 | PetscScalar hx,hy,Hx,Hy; | ^~ MsFEM_poisson2D_DMDA.c:59:58: warning: variable ?Nondimensionalization? set but not used [-Wunused-but-set-variable] 59 | PetscInt Compute_finegridsolution,Nondimensionalization; | ^~~~~~~~~~~~~~~~~~~~~ MsFEM_poisson2D_DMDA.c:282:54: warning: ?%d? directive writing between 1 and 11 bytes into a region of size between 0 and 99 [-Wformat-overflow=] 282 | sprintf(filename,"%sc%d_N%dM%d_Bf.log",up.problem_description,up.problem_flag,up.Nx,up.Mx); | ^~ MsFEM_poisson2D_DMDA.c:282:40: note: ?sprintf? output between 15 and 144 bytes into a destination of size 100 282 | sprintf(filename,"%sc%d_N%dM%d_Bf.log",up.problem_description,up.problem_flag,up.Nx,up.Mx); | ~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ MsFEM_poisson2D_DMDA.c:288:55: warning: ?%d? directive writing between 1 and 11 bytes into a region of size between 0 and 99 [-Wformat-overflow=] 288 | sprintf(filename, "%sc%d_Nx%dNy%dMx%dMy%d_Bf.log",up.problem_description,up.problem_flag,up.Nx,up.Ny,up.Mx,up.My); | ^~ MsFEM_poisson2D_DMDA.c:288:40: note: ?sprintf? 
output between 23 and 172 bytes into a destination of size 100 288 | sprintf(filename, "%sc%d_Nx%dNy%dMx%dMy%d_Bf.log",up.problem_description,up.problem_flag,up.Nx,up.Ny,up.Mx,up.My); | ~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ MsFEM_poisson2D_DMDA.c:286:55: warning: ?%d? directive writing between 1 and 11 bytes into a region of size between 0 and 99 [-Wformat-overflow=] 286 | sprintf(filename, "%sc%d_Nx%dNy%dM%d_Bf.log",up.problem_description,up.problem_flag,up.Nx,up.Ny,up.Mx); | ^~ MsFEM_poisson2D_DMDA.c:286:40: note: ?sprintf? output between 19 and 158 bytes into a destination of size 100 286 | sprintf(filename, "%sc%d_Nx%dNy%dM%d_Bf.log",up.problem_description,up.problem_flag,up.Nx,up.Ny,up.Mx); | ~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ MsFEM_poisson2D_DMDA.c:284:54: warning: ?%d? directive writing between 1 and 11 bytes into a region of size between 0 and 99 [-Wformat-overflow=] 284 | sprintf(filename,"%sc%d_N%dMx%dMy%d_Bf.log",up.problem_description,up.problem_flag,up.Nx,up.Mx,up.My); | ^~ MsFEM_poisson2D_DMDA.c:284:40: note: ?sprintf? output between 19 and 158 bytes into a destination of size 100 284 | sprintf(filename,"%sc%d_N%dMx%dMy%d_Bf.log",up.problem_description,up.problem_flag,up.Nx,up.Mx,up.My); | ~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ MsFEM_poisson2D_DMDA.c:302:54: warning: ?%d? directive writing between 1 and 11 bytes into a region of size between 0 and 99 [-Wformat-overflow=] 302 | sprintf(filename,"%sc%d_N%dM%d_final_xf.log",up.problem_description,up.problem_flag,up.Nx,up.Mx); | ^~ MsFEM_poisson2D_DMDA.c:302:40: note: ?sprintf? output between 21 and 150 bytes into a destination of size 100 302 | sprintf(filename,"%sc%d_N%dM%d_final_xf.log",up.problem_description,up.problem_flag,up.Nx,up.Mx); | ~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ MsFEM_poisson2D_DMDA.c:308:54: warning: ?%d? directive writing between 1 and 11 bytes into a region of size between 0 and 99 [-Wformat-overflow=] 308 | sprintf(filename,"%sc%d_Nx%dNy%dMx%dMy%d_final_xf.log",up.problem_description,up.problem_flag,up.Nx,up.Ny,up.Mx,up.My); | ^~ MsFEM_poisson2D_DMDA.c:308:40: note: ?sprintf? output between 29 and 178 bytes into a destination of size 100 308 | sprintf(filename,"%sc%d_Nx%dNy%dMx%dMy%d_final_xf.log",up.problem_description,up.problem_flag,up.Nx,up.Ny,up.Mx,up.My); | ~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ MsFEM_poisson2D_DMDA.c:306:54: warning: ?%d? directive writing between 1 and 11 bytes into a region of size between 0 and 99 [-Wformat-overflow=] 306 | sprintf(filename,"%sc%d_Nx%dNy%dM%d_final_xf.log",up.problem_description,up.problem_flag,up.Nx,up.Ny,up.Mx); | ^~ MsFEM_poisson2D_DMDA.c:306:40: note: ?sprintf? output between 25 and 164 bytes into a destination of size 100 306 | sprintf(filename,"%sc%d_Nx%dNy%dM%d_final_xf.log",up.problem_description,up.problem_flag,up.Nx,up.Ny,up.Mx); | ~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ MsFEM_poisson2D_DMDA.c:304:54: warning: ?%d? 
directive writing between 1 and 11 bytes into a region of size between 0 and 99 [-Wformat-overflow=] 304 | sprintf(filename,"%sc%d_N%dMx%dMy%d_final_xf.log",up.problem_description,up.problem_flag,up.Nx,up.Mx,up.My); | ^~ MsFEM_poisson2D_DMDA.c:304:40: note: ?sprintf? output between 25 and 164 bytes into a destination of size 100 304 | sprintf(filename,"%sc%d_N%dMx%dMy%d_final_xf.log",up.problem_description,up.problem_flag,up.Nx,up.Mx,up.My); | ~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ /usr/bin/ld: /tmp/cc4ILnLc.o: in function `main': /home/ivan/MsFEMbub/MsFEM_poisson2D_DMDA.c:69: undefined reference to `SetUserParameter(UserParameter*)' /usr/bin/ld: /home/ivan/MsFEMbub/MsFEM_poisson2D_DMDA.c:93: undefined reference to `CreateComputeTools(UserParameter*)' /usr/bin/ld: /home/ivan/MsFEMbub/MsFEM_poisson2D_DMDA.c:102: undefined reference to `ComputeExactsolution(UserParameter*)' /usr/bin/ld: /home/ivan/MsFEMbub/MsFEM_poisson2D_DMDA.c:120: undefined reference to `MsFEMCreate(MsFEM*, _p_DM*, _p_DM*, _p_IS*, _p_IS*, double, double, double, double, int)' /usr/bin/ld: /home/ivan/MsFEMbub/MsFEM_poisson2D_DMDA.c:121: undefined reference to `MsFEMSetFromOptions(MsFEM*)' /usr/bin/ld: /home/ivan/MsFEMbub/MsFEM_poisson2D_DMDA.c:122: undefined reference to `MsFEMSetOperators(MsFEM*, _p_Mat*)' /usr/bin/ld: /home/ivan/MsFEMbub/MsFEM_poisson2D_DMDA.c:124: undefined reference to `MsFEMSolve(MsFEM, _p_Vec*, _p_Vec*, _p_Vec*, _p_Vec**, _p_Vec*)' /usr/bin/ld: /home/ivan/MsFEMbub/MsFEM_poisson2D_DMDA.c:125: undefined reference to `MsFEMSolve(MsFEM, _p_Vec*, _p_Vec*, _p_Vec*, _p_Vec**, _p_Vec*)' /usr/bin/ld: /home/ivan/MsFEMbub/MsFEM_poisson2D_DMDA.c:129: undefined reference to `MsFEMSolve(MsFEM, _p_Vec*, _p_Vec*, _p_Vec*, _p_Vec**, _p_Vec*)' /usr/bin/ld: /home/ivan/MsFEMbub/MsFEM_poisson2D_DMDA.c:130: undefined reference to `MsFEMSolve(MsFEM, _p_Vec*, _p_Vec*, _p_Vec*, _p_Vec**, _p_Vec*)' /usr/bin/ld: /home/ivan/MsFEMbub/MsFEM_poisson2D_DMDA.c:132: undefined reference to `MsFEMDestroy(MsFEM*)' /usr/bin/ld: /home/ivan/MsFEMbub/MsFEM_poisson2D_DMDA.c:259: undefined reference to `PCContextCreate_MsFEM(_p_KSP*, _p_DM*, _p_DM*, _p_IS*, _p_IS*, double, double, double, double, int)' /usr/bin/ld: /home/ivan/MsFEMbub/MsFEM_poisson2D_DMDA.c:317: undefined reference to `DestroyBoundary(UserParameter*)' /usr/bin/ld: /home/ivan/MsFEMbub/MsFEM_poisson2D_DMDA.c:320: undefined reference to `ComputeMassMatrix(UserParameter*)' /usr/bin/ld: /home/ivan/MsFEMbub/MsFEM_poisson2D_DMDA.c:322: undefined reference to `ConstructInitialCondition(UserParameter*)' /usr/bin/ld: /home/ivan/MsFEMbub/MsFEM_poisson2D_DMDA.c:324: undefined reference to `ConstructOperator(UserParameter*)' /usr/bin/ld: /home/ivan/MsFEMbub/MsFEM_poisson2D_DMDA.c:325: undefined reference to `SetMatrixBoundaryCondition(_p_Mat**, _p_DM*, _p_IS*)' /usr/bin/ld: /home/ivan/MsFEMbub/MsFEM_poisson2D_DMDA.c:339: undefined reference to `MsFEMCreate(MsFEM*, _p_DM*, _p_DM*, _p_IS*, _p_IS*, double, double, double, double, int)' /usr/bin/ld: /home/ivan/MsFEMbub/MsFEM_poisson2D_DMDA.c:340: undefined reference to `MsFEMSetFromOptions(MsFEM*)' /usr/bin/ld: /home/ivan/MsFEMbub/MsFEM_poisson2D_DMDA.c:341: undefined reference to `MsFEMSetOperators(MsFEM*, _p_Mat*)' /usr/bin/ld: /home/ivan/MsFEMbub/MsFEM_poisson2D_DMDA.c:353: undefined reference to `DestroyBoundary(UserParameter*)' /usr/bin/ld: /home/ivan/MsFEMbub/MsFEM_poisson2D_DMDA.c:361: undefined reference to `ConstrctBoundary(UserParameter*, _p_IS**, _p_Vec**, int, 
int)' /usr/bin/ld: /home/ivan/MsFEMbub/MsFEM_poisson2D_DMDA.c:362: undefined reference to `ConstrctBoundary(UserParameter*, _p_IS**, _p_Vec**, int, int)' /usr/bin/ld: /home/ivan/MsFEMbub/MsFEM_poisson2D_DMDA.c:363: undefined reference to `ComputeBf(UserParameter*)' /usr/bin/ld: /home/ivan/MsFEMbub/MsFEM_poisson2D_DMDA.c:367: undefined reference to `ConstructRHS(UserParameter*)' /usr/bin/ld: /home/ivan/MsFEMbub/MsFEM_poisson2D_DMDA.c:368: undefined reference to `SetVectorBoundaryCondition(_p_Vec**, _p_DM*, _p_IS*, _p_Vec*)' /usr/bin/ld: /home/ivan/MsFEMbub/MsFEM_poisson2D_DMDA.c:379: undefined reference to `SetVectorBoundaryCondition(_p_Vec**, _p_DM*, _p_IS*, _p_Vec*)' /usr/bin/ld: /home/ivan/MsFEMbub/MsFEM_poisson2D_DMDA.c:383: undefined reference to `MsFEMSolve(MsFEM, _p_Vec*, _p_Vec*, _p_Vec*, _p_Vec**, _p_Vec*)' /usr/bin/ld: /home/ivan/MsFEMbub/MsFEM_poisson2D_DMDA.c:387: undefined reference to `DestroyBoundary(UserParameter*)' /usr/bin/ld: /home/ivan/MsFEMbub/MsFEM_poisson2D_DMDA.c:391: undefined reference to `MsFEMDestroy(MsFEM*)' /usr/bin/ld: /home/ivan/MsFEMbub/MsFEM_poisson2D_DMDA.c:464: undefined reference to `DestroyComputeTools(UserParameter*)' /usr/bin/ld: /tmp/cc4ILnLc.o: in function `FormMatrix(UserParameter*)': /home/ivan/MsFEMbub/MsFEM_poisson2D_DMDA.c:488: undefined reference to `LoadPermeabilityFile(UserParameter*)' /usr/bin/ld: /home/ivan/MsFEMbub/MsFEM_poisson2D_DMDA.c:490: undefined reference to `ConstrctBoundary(UserParameter*, _p_IS**, _p_Vec**, int, int)' /usr/bin/ld: /home/ivan/MsFEMbub/MsFEM_poisson2D_DMDA.c:491: undefined reference to `ConstrctBoundary(UserParameter*, _p_IS**, _p_Vec**, int, int)' /usr/bin/ld: /home/ivan/MsFEMbub/MsFEM_poisson2D_DMDA.c:495: undefined reference to `ComputeStiffMatrix(UserParameter*)' /usr/bin/ld: /home/ivan/MsFEMbub/MsFEM_poisson2D_DMDA.c:507: undefined reference to `SetMatrixBoundaryCondition(_p_Mat**, _p_DM*, _p_IS*)' /usr/bin/ld: /home/ivan/MsFEMbub/MsFEM_poisson2D_DMDA.c:515: undefined reference to `ComputeBf(UserParameter*)' /usr/bin/ld: /home/ivan/MsFEMbub/MsFEM_poisson2D_DMDA.c:516: undefined reference to `SetVectorBoundaryCondition(_p_Vec**, _p_DM*, _p_IS*, _p_Vec*)' /usr/bin/ld: /home/ivan/MsFEMbub/MsFEM_poisson2D_DMDA.c:518: undefined reference to `SetVectorBoundaryCondition(_p_Vec**, _p_DM*, _p_IS*, _p_Vec*)' /usr/bin/ld: /home/ivan/MsFEMbub/MsFEM_poisson2D_DMDA.c:522: undefined reference to `ComputeFinegridsolution(UserParameter*)' /usr/bin/ld: /home/ivan/MsFEMbub/MsFEM_poisson2D_DMDA.c:524: undefined reference to `SetInitialGuess(UserParameter*)' /usr/bin/ld: /home/ivan/MsFEMbub/MsFEM_poisson2D_DMDA.c:525: undefined reference to `SetVectorBoundaryCondition(_p_Vec**, _p_DM*, _p_IS*, _p_Vec*)' /usr/bin/ld: /tmp/cc4ILnLc.o: in function `PrintRE(UserParameter*, int*)': /home/ivan/MsFEMbub/MsFEM_poisson2D_DMDA.c:543: undefined reference to `Com_Residual(UserParameter*, _p_Vec**)' /usr/bin/ld: /home/ivan/MsFEMbub/MsFEM_poisson2D_DMDA.c:545: undefined reference to `Com_Error(UserParameter*, _p_Vec**)' /usr/bin/ld: /home/ivan/MsFEMbub/MsFEM_poisson2D_DMDA.c:546: undefined reference to `Com_Error(UserParameter*, _p_Vec**)' /usr/bin/ld: /home/ivan/MsFEMbub/MsFEM_poisson2D_DMDA.c:550: undefined reference to `Com_Error(UserParameter*, _p_Vec**)' collect2: error: ld returned 1 exit status make: *** [makefile:27: MsFEM_poisson2D_DMDA.o] Error 1 -------------- next part -------------- A non-text attachment was scrubbed... 
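For what it is worth, undefined references like the ones above usually mean the object files that define those symbols were never handed to the linker (or were compiled with a different language linkage). One possible shape for a makefile that leans on PETSc's own build rules and links every object is sketched below; this is an illustration only, the makefile Matt attaches in the next message is the one actually offered in this thread, PETSC_DIR/PETSC_ARCH must be set to your installation, and the recipe line starts with a tab:

include ${PETSC_DIR}/lib/petsc/conf/variables
include ${PETSC_DIR}/lib/petsc/conf/rules

OBJS = MsFEM_poisson2D_DMDA.o UserParameter.o FormFunction.o MsFEM.o PCMsFEM.o

app: $(OBJS)
	-${CLINKER} -o $@ $(OBJS) ${PETSC_LIB}

Here the compile rules for the .o files, and the CLINKER and PETSC_LIB variables, come from the two included PETSc conf files.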
Name: makefile
Type: application/octet-stream
Size: 1900 bytes
Desc: not available
URL:

From knepley at gmail.com  Thu Jul 18 03:19:35 2024
From: knepley at gmail.com (Matthew Knepley)
Date: Thu, 18 Jul 2024 04:19:35 -0400
Subject: [petsc-users] Warning and Error in Makefile
In-Reply-To:
References:
Message-ID:

On Thu, Jul 18, 2024 at 3:07 AM Ivan Luthfi wrote:

> Hi friend,
>
> I get many warnings (but it's ok for the warnings). However I didn't get the
> result of my code when I compile it, is there any possible mistake in my
> makefile? Can you please help me?
>
> The attached file is my error and warning (error in makefile line 27),
> and the makefile.

Try this makefile.

  Thanks,

     Matt

> Best regards,

--
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener

https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!Ya_y__cU_okKfbiRLeVRQcZnpP-bUnDxsDbIoF8uy3PMC76NSOHWbBuUcKvcrtyXwRZOgsmVYgTguq3UHtKu$
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: makefile
Type: application/octet-stream
Size: 1596 bytes
Desc: not available
URL:

From edofersan at gmail.com  Thu Jul 18 09:30:15 2024
From: edofersan at gmail.com (Eduardo Fernandez)
Date: Thu, 18 Jul 2024 16:30:15 +0200
Subject: [petsc-users] Question about DMSwarmMigrate
Message-ID:

Dear PETSc community,

I have some questions about the DMSWARM structure and a request for suggestions in order to speed up a task that I require in my code.

I am developing a numerical method that is similar to the PIC, MPM, etc.; however, all cells of the DM mesh must contain particles. In this method, about 10 particles are included in each cell (element), particles are displaced at each time step, and carry information that must be projected onto/from the mesh.

Searching on the internet I came across the DMSWARM structure. Indeed, the structure allows easy initialization of a cloud of particles, easy detection of the element that holds a specific particle and easy particle-mesh projection. So it turns out to be quite convenient for my purpose. However, in a test implementation I performed (based on the PETSc examples), I notice a significant bottleneck (IMHO) in the DMSwarmMigrate function (line 198 of the attached code). Given that I am not sure if my implementation is correct or optimal, I am attaching a sample code. Here, some extra info: 3D implementation (tetra and hexa); cpu, mpich & open mpi; DMPlex (from gmsh file, also attached).

The following tables are obtained with the attached code using 9329 tetra elements and 128085 particles (but the desired goal is to reach millions of elements and particles).

TABLE 1 : using the same mesh and particles

                    processors:        1        2        4        8       16
 num. of particles per process:   128000    64000    32000    16000     8000
 time per DMSwarmMigrate call:    200 s.    90 s.    17 s.     5 s.     3 s.

TABLE 2 : using the same amount of processors = 16 (all cases)

                      elements:     9329    18730    45432
                     particles:   128085   260554   631430
 time per DMSwarmMigrate call:      3 s.    13 s.    88 s.

My questions:

- Is this performance of DMSwarmMigrate normal? (I have the feeling that it is quite time consuming, e.g. when compared to the time spent by KSPSolve in linear elasticity using an iterative solver).

- The DMSwarm functionalities I use are: create a particle field, associate it to a DMPlex, and obtain Vecs that are dynamic in size as a result of the migration. ---> Is the implementation of the attached code optimal for the purpose I mention? Note that I did some tests with the DMSwarmSetLocalSizes function but saw no difference. Also, I set DMSWARM_MIGRATE_BASIC, which is faster but did not relocate particles.

If the answer to both questions is YES, I would appreciate any suggestions on how I can reach my target (millions of particles).

Thank you very much for taking the time to read this email.

Kind regards,
Eduardo.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: problem.cpp
Type: text/x-c++src
Size: 9218 bytes
Desc: not available
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: geometry_tetra.geo
Type: application/octet-stream
Size: 1833 bytes
Desc: not available
URL:

From liufield at gmail.com  Thu Jul 18 15:53:11 2024
From: liufield at gmail.com (neil liu)
Date: Thu, 18 Jul 2024 16:53:11 -0400
Subject: [petsc-users] Out of memory issue related to KSP.
Message-ID:

Dear PETSc team,

I am trying to solve a complex linear system with PETSc KSP. When I commented out this piece of code, no errors came out. Will my coding part affect KSP? I am using a direct solver, -pc_type LU.

PetscErrorCode ElementInfo::solveLinearSystem( ){
  PetscFunctionBegin;
  KSP ksp;
  KSPCreate(PETSC_COMM_WORLD, &ksp);
  KSPSetType(ksp, KSPFGMRES);
  KSPSetOperators(ksp, A, A);
  KSPSetFromOptions(ksp);
  KSPSolve(ksp, b, x);
  KSPDestroy(&ksp);
  PetscFunctionReturn(PETSC_SUCCESS);
}

The output with -malloc_test and -malloc_view is attached. It shows the following errors,

Line 5 in the attached file:

[0]PETSC ERROR: --------------------- Error Message --------------------------------------------------------------
[0]PETSC ERROR: Out of memory. This could be due to allocating
[0]PETSC ERROR: too large an object or bleeding by not properly
[0]PETSC ERROR: destroying unneeded objects.
[0] Maximum memory PetscMalloc()ed 19474398848 maximum size of entire process 740352000
(This only used 20% of my 64 GB memory.)

Line 111 in the attached file:

[0]PETSC ERROR: Memory requested 18446744069642786816
(This is too big.)

[0]PETSC ERROR: See https://urldefense.us/v3/__https://petsc.org/release/faq/__;!!G_uCfscf7eWS!ewaycO7hltMdcVaaqzKPkHOnR0ZWqvS6IBWoUl2SPh2YM875bucnoBPzOb64iTZFmHL94VlaV3i5LEim3RbpQg$ for trouble shooting.
[0]PETSC ERROR: Petsc Release Version 3.21.1, unknown [0]PETSC ERROR: ./app on a arch-linux-c-debug named kirin.remcominc.com by xiaodong.liu Thu Jul 18 16:11:52 2024 [0]PETSC ERROR: Configure options --with-cc=gcc --with-fc=gfortran --with-cxx=g++ --download-fblaslapack --download-mpich --with-scalar-type=complex --download-triangle [0]PETSC ERROR: #1 PetscMallocAlign() at /home/xiaod -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- [0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- [0]PETSC ERROR: Out of memory. This could be due to allocating [0]PETSC ERROR: too large an object or bleeding by not properly [0]PETSC ERROR: destroying unneeded objects. [0] Maximum memory PetscMalloc()ed 19474398848 maximum size of entire process 740352000 [0] Memory usage sorted by function [0] 13 416 DMAddLabel() [0] 5 25280 DMCreate() [0] 4 64 DMCreateDS() [0] 3 2832 DMCreate_Plex() [0] 7 224 DMDSEnlarge_Static() [0] 3 96 DMFieldEnlarge_Static() [0] 3 144 DMGenerateRegister() [0] 1 2445728 DMGetLocalToGlobalMapping() [0] 23 1526176 DMGetWorkArray() [0] 13 8112 DMLabelCreate() [0] 20 384 DMLabelDuplicate() [0] 11 2518704 DMLabelMakeValid_Private() [0] 120 2144 DMLabelNewStratum() [0] 1 64960 DMPlexCreateGmsh() [0] 5458 174656 DMPlexGetFullJoin() [0] 4 3834352 DMPlexInterpolateFaces_Internal() [0] 3 832880 DMPlexSetCellType() [0] 6 13431984 DMPlexSymmetrize() [0] 6 20201056 DMSetUp_Plex() [0] 1 453232 ExtractEdgesDoffromFaces() [0] 10 1071856 GmshBufferGet() [0] 10 2154336 GmshBufferSizeGet() [0] 2 20969728 GmshElementsCreate() [0] 5 19520 GmshEntitiesCreate() [0] 1 112 GmshMeshCreate() [0] 4 1494112 GmshNodesCreate() [0] 1 64960 GmshReadElements() [0] 1 64960 GmshReadNodes() [0] 3 112 GmshReadPhysicalNames() [0] 60 48960 ISCreate() [0] 9 144 ISCreate_General() [0] 51 816 ISCreate_Stride() [0] 4 7447008 ISGeneralSetIndices_General() [0] 3 381360 ISGetIndices_Stride() [0] 1 2445728 ISInvertPermutation_General() [0] 1 608 ISLocalToGlobalMappingCreate() [0] 2 4891456 ISLocalToGlobalMappingGetIndices() [0] 6 12338464 ISSetPermutation() [0] 1 32 KSPConvergedDefaultCreate() [0] 1 1552 KSPCreate() [0] 1 240 KSPCreate_FGMRES() [0] 2 512 KSPSetUp_FGMRES() [0] 8 33440 KSPSetUp_GMRES() [0] 3 531953744 MatAssemblyEnd_Seq_Hash() [0] 5 14720 MatCreate() [0] 1 2445744 MatCreateColInode_Private() [0] 5 8320 MatCreate_SeqAIJ() [0] 4 5111104 MatGetOrdering_ND() [0] 5 32000672 MatGetRowIJ_SeqAIJ_Inode_Symmetric() [0] 3 6169232 MatInodeAdjustForInodes_SeqAIJ_Inode() [0] 5 14674448 MatLUFactorSymbolic_SeqAIJ() [0] 1 2445728 MatMarkDiagonal_SeqAIJ() [0] 8 256 MatRegisterRootName() [0] 1 2445744 MatSeqAIJCheckInode() [0] 6 539290928 MatSeqAIJSetPreallocation_SeqAIJ() [0] 1 2445728 MatSetUp_Seq_Hash() [0] 10 576 MatSolverTypeRegister() [0] 1 784 PCCreate() [0] 1 176 PCCreate_LU() [0] 5 79408 PetscBTCreate() [0] 14 336 PetscChunkBufferCreate() [0] 4 96 PetscCommDuplicate() [0] 3 1536 PetscContainerCreate() [0] 7 7280 PetscDSCreate() [0] 20 320 PetscDSEnlarge_Static() [0] 8 18774400096 PetscFreeSpaceGet() [0] 97 1552 PetscFunctionListCreate_Private() [0] 97 1552 PetscFunctionListDLAllPush_Private() [0] 2 528 PetscIntStackCreate() [0] 134 10720 PetscLayoutCreate() [0] 66 1056 PetscLayoutSetUp() [0] 2 2064 PetscLogClassArrayCreate() [0] 2 2064 PetscLogEventArrayCreate() [0] 2 12288 PetscLogEventArrayRecapacity() [0] 1 32 PetscLogRegistryCreate() [0] 2 80 PetscLogStageArrayCreate() [0] 1 48 
PetscLogStateCreate() [0] 8 640 PetscObjectComposedDataIncrease_() [0] 2 576 PetscObjectListAdd() [0] 9 208 PetscOptionsGetEList() [0] 1 16 PetscOptionsHelpPrintedCreate() [0] 3 1776 PetscPartitionerCreate() [0] 3 96 PetscPartitionerCreate_Simple() [0] 1 32 PetscPushSignalHandler() [0] 13 13312 PetscSFCreate() [0] 13 8528 PetscSectionCreate() [0] 24 23368576 PetscSectionSetChart() [0] 3 64 PetscSectionSetFieldComponents() [0] 15 240 PetscSectionSetNumFields() [0] 19 2545504 PetscSegBufferAlloc_Private() [0] 6 20144 PetscSegBufferCreate() [0] 1394 30160 PetscStrallocpy() [0] 6 13072 PetscStrreplace() [0] 6 960 PetscTimSort() [0] 12 12487008 PetscTimSortResizeBuffer_Private() [0] 1 688 PetscViewerCreate() [0] 1 96 PetscViewerCreate_ASCII() [0] 14 5488 PetscWeakFormCreate() [0] 3 9456 VecCreate() [0] 9 28368 VecCreateWithLayout_Private() [0] 4 30128256 VecCreate_Seq() [0] 12 768 VecCreate_Seq_Private() [0] 4 78266448 VecDuplicateVecs_Seq_GEMV() [0] 2 144 setMatrix() [0]PETSC ERROR: Memory requested 18446744069642786816 [0]PETSC ERROR: See https://petsc.org/release/faq/ for trouble shooting. [0]PETSC ERROR: Petsc Release Version 3.21.1, unknown [0]PETSC ERROR: ./app on a arch-linux-c-debug named kirin.remcominc.com by xiaodong.liu Thu Jul 18 16:11:52 2024 [0]PETSC ERROR: Configure options --with-cc=gcc --with-fc=gfortran --with-cxx=g++ --download-fblaslapack --download-mpich --with-scalar-type=complex --download-triangle [0]PETSC ERROR: #1 PetscMallocAlign() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/sys/memory/mal.c:53 [0]PETSC ERROR: #2 PetscTrMallocDefault() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/sys/memory/mtr.c:175 [0]PETSC ERROR: #3 PetscMallocA() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/sys/memory/mal.c:421 [0]PETSC ERROR: #4 MatLUFactorSymbolic_SeqAIJ() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/mat/impls/aij/seq/aijfact.c:383 [0]PETSC ERROR: #5 MatLUFactorSymbolic() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/mat/interface/matrix.c:3200 [0]PETSC ERROR: #6 PCSetUp_LU() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/ksp/pc/impls/factor/lu/lu.c:87 [0]PETSC ERROR: #7 PCSetUp() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/ksp/pc/interface/precon.c:1079 [0]PETSC ERROR: #8 KSPSetUp() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/ksp/ksp/interface/itfunc.c:415 [0]PETSC ERROR: #9 KSPSolve_Private() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/ksp/ksp/interface/itfunc.c:831 [0]PETSC ERROR: #10 KSPSolve() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/ksp/ksp/interface/itfunc.c:1078 Ampere edge # 19 Ampere Current (-0.0000000000000000,-0.0000000000000000) Summary of Memory Usage in PETSc Maximum (over computational time) process memory: total 1.9486e+10 max 1.9486e+10 min 1.9486e+10 Current process memory: total 1.8928e+10 max 1.8928e+10 min 1.8928e+10 Maximum (over computational time) space PetscMalloc()ed: total 1.9474e+10 max 1.9474e+10 min 1.9474e+10 Current space PetscMalloc()ed: total 1.8792e+10 max 1.8792e+10 min 1.8792e+10 [ 0] 16 bytes [0] PetscCommBuildTwoSided_Allreduce() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/sys/utils/mpits.c:149 [0] PetscCommBuildTwoSided() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/sys/utils/mpits.c:273 [0] PetscSFSetUp_Basic() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/vec/is/sf/impls/basic/sfbasic.c:200 [0] PetscSFSetUp() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/vec/is/sf/interface/sf.c:296 [0] 
VecScatterCreate() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/vec/is/sf/interface/vscat.c:1093 [0] VecScatterCreateToAll() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/vec/is/sf/interface/vscat.c:1165 [ 0] 8589934192 bytes [0] PetscFreeSpaceGet() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/mat/utils/freespace.c:9 [0] MatLUFactorSymbolic_SeqAIJ() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/mat/impls/aij/seq/aijfact.c:366 [0] MatLUFactorSymbolic() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/mat/interface/matrix.c:3200 [0] PCSetUp_LU() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/ksp/pc/impls/factor/lu/lu.c:87 [0] PCSetUp() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/ksp/pc/interface/precon.c:1079 [0] KSPSetUp() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/ksp/ksp/interface/itfunc.c:415 [0] KSPSolve_Private() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/ksp/ksp/interface/itfunc.c:831 [0] KSPSolve() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/ksp/ksp/interface/itfunc.c:1078 [0] solveLinearSystem() at /home/xiaodong.liu/FEM3D_exp/src/ElementInfo.cpp:418 [0] main() at /home/xiaodong.liu/FEM3D_exp/src/main.cpp:24 [ 0] 48 bytes [0] PetscFreeSpaceGet() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/mat/utils/freespace.c:8 [0] MatLUFactorSymbolic_SeqAIJ() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/mat/impls/aij/seq/aijfact.c:366 [0] MatLUFactorSymbolic() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/mat/interface/matrix.c:3200 [0] PCSetUp_LU() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/ksp/pc/impls/factor/lu/lu.c:87 [0] PCSetUp() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/ksp/pc/interface/precon.c:1079 [0] KSPSetUp() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/ksp/ksp/interface/itfunc.c:415 [0] KSPSolve_Private() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/ksp/ksp/interface/itfunc.c:831 [0] KSPSolve() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/ksp/ksp/interface/itfunc.c:1078 [0] solveLinearSystem() at /home/xiaodong.liu/FEM3D_exp/src/ElementInfo.cpp:418 [0] main() at /home/xiaodong.liu/FEM3D_exp/src/main.cpp:24 [ 0] 8589934192 bytes [0] PetscFreeSpaceGet() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/mat/utils/freespace.c:9 [0] MatLUFactorSymbolic_SeqAIJ() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/mat/impls/aij/seq/aijfact.c:366 [0] MatLUFactorSymbolic() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/mat/interface/matrix.c:3200 [0] PCSetUp_LU() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/ksp/pc/impls/factor/lu/lu.c:87 [0] PCSetUp() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/ksp/pc/interface/precon.c:1079 [0] KSPSetUp() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/ksp/ksp/interface/itfunc.c:415 [0] KSPSolve_Private() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/ksp/ksp/interface/itfunc.c:831 [0] KSPSolve() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/ksp/ksp/interface/itfunc.c:1078 [0] solveLinearSystem() at /home/xiaodong.liu/FEM3D_exp/src/ElementInfo.cpp:418 [0] main() at /home/xiaodong.liu/FEM3D_exp/src/main.cpp:24 [ 0] 48 bytes [0] PetscFreeSpaceGet() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/mat/utils/freespace.c:8 [0] MatLUFactorSymbolic_SeqAIJ() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/mat/impls/aij/seq/aijfact.c:366 [0] MatLUFactorSymbolic() at 
/home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/mat/interface/matrix.c:3200 [0] PCSetUp_LU() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/ksp/pc/impls/factor/lu/lu.c:87 [0] PCSetUp() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/ksp/pc/interface/precon.c:1079 [0] KSPSetUp() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/ksp/ksp/interface/itfunc.c:415 [0] KSPSolve_Private() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/ksp/ksp/interface/itfunc.c:831 [0] KSPSolve() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/ksp/ksp/interface/itfunc.c:1078 [0] solveLinearSystem() at /home/xiaodong.liu/FEM3D_exp/src/ElementInfo.cpp:418 [0] main() at /home/xiaodong.liu/FEM3D_exp/src/main.cpp:24 [ 0] 1085901456 bytes [0] PetscFreeSpaceGet() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/mat/utils/freespace.c:9 [0] MatLUFactorSymbolic_SeqAIJ() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/mat/impls/aij/seq/aijfact.c:366 [0] MatLUFactorSymbolic() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/mat/interface/matrix.c:3200 [0] PCSetUp_LU() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/ksp/pc/impls/factor/lu/lu.c:87 [0] PCSetUp() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/ksp/pc/interface/precon.c:1079 [0] KSPSetUp() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/ksp/ksp/interface/itfunc.c:415 [0] KSPSolve_Private() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/ksp/ksp/interface/itfunc.c:831 [0] KSPSolve() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/ksp/ksp/interface/itfunc.c:1078 [0] solveLinearSystem() at /home/xiaodong.liu/FEM3D_exp/src/ElementInfo.cpp:418 [0] main() at /home/xiaodong.liu/FEM3D_exp/src/main.cpp:24 [ 0] 48 bytes [0] PetscFreeSpaceGet() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/mat/utils/freespace.c:8 [0] MatLUFactorSymbolic_SeqAIJ() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/mat/impls/aij/seq/aijfact.c:366 [0] MatLUFactorSymbolic() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/mat/interface/matrix.c:3200 [0] PCSetUp_LU() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/ksp/pc/impls/factor/lu/lu.c:87 [0] PCSetUp() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/ksp/pc/interface/precon.c:1079 [0] KSPSetUp() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/ksp/ksp/interface/itfunc.c:415 [0] KSPSolve_Private() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/ksp/ksp/interface/itfunc.c:831 [0] KSPSolve() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/ksp/ksp/interface/itfunc.c:1078 [0] solveLinearSystem() at /home/xiaodong.liu/FEM3D_exp/src/ElementInfo.cpp:418 [0] main() at /home/xiaodong.liu/FEM3D_exp/src/main.cpp:24 [ 0] 508630064 bytes [0] PetscFreeSpaceGet() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/mat/utils/freespace.c:9 [0] MatLUFactorSymbolic_SeqAIJ() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/mat/impls/aij/seq/aijfact.c:329 [0] MatLUFactorSymbolic() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/mat/interface/matrix.c:3200 [0] PCSetUp_LU() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/ksp/pc/impls/factor/lu/lu.c:87 [0] PCSetUp() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/ksp/pc/interface/precon.c:1079 [0] KSPSetUp() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/ksp/ksp/interface/itfunc.c:415 [0] KSPSolve_Private() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/ksp/ksp/interface/itfunc.c:831 [0] KSPSolve() at 
/home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/ksp/ksp/interface/itfunc.c:1078 [0] solveLinearSystem() at /home/xiaodong.liu/FEM3D_exp/src/ElementInfo.cpp:418 [0] main() at /home/xiaodong.liu/FEM3D_exp/src/main.cpp:24 [ 0] 48 bytes [0] PetscFreeSpaceGet() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/mat/utils/freespace.c:8 [0] MatLUFactorSymbolic_SeqAIJ() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/mat/impls/aij/seq/aijfact.c:329 [0] MatLUFactorSymbolic() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/mat/interface/matrix.c:3200 [0] PCSetUp_LU() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/ksp/pc/impls/factor/lu/lu.c:87 [0] PCSetUp() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/ksp/pc/interface/precon.c:1079 [0] KSPSetUp() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/ksp/ksp/interface/itfunc.c:415 [0] KSPSolve_Private() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/ksp/ksp/interface/itfunc.c:831 [0] KSPSolve() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/ksp/ksp/interface/itfunc.c:1078 [0] solveLinearSystem() at /home/xiaodong.liu/FEM3D_exp/src/ElementInfo.cpp:418 [0] main() at /home/xiaodong.liu/FEM3D_exp/src/main.cpp:24 [ 0] 2445744 bytes [0] MatLUFactorSymbolic_SeqAIJ() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/mat/impls/aij/seq/aijfact.c:324 [0] MatLUFactorSymbolic() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/mat/interface/matrix.c:3200 [0] PCSetUp_LU() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/ksp/pc/impls/factor/lu/lu.c:87 [0] PCSetUp() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/ksp/pc/interface/precon.c:1079 [0] KSPSetUp() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/ksp/ksp/interface/itfunc.c:415 [0] KSPSolve_Private() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/ksp/ksp/interface/itfunc.c:831 [0] KSPSolve() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/ksp/ksp/interface/itfunc.c:1078 [0] solveLinearSystem() at /home/xiaodong.liu/FEM3D_exp/src/ElementInfo.cpp:418 [0] main() at /home/xiaodong.liu/FEM3D_exp/src/main.cpp:24 [ 0] 4891472 bytes [0] MatLUFactorSymbolic_SeqAIJ() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/mat/impls/aij/seq/aijfact.c:324 [0] MatLUFactorSymbolic() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/mat/interface/matrix.c:3200 [0] PCSetUp_LU() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/ksp/pc/impls/factor/lu/lu.c:87 [0] PCSetUp() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/ksp/pc/interface/precon.c:1079 [0] KSPSetUp() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/ksp/ksp/interface/itfunc.c:415 [0] KSPSolve_Private() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/ksp/ksp/interface/itfunc.c:831 [0] KSPSolve() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/ksp/ksp/interface/itfunc.c:1078 [0] solveLinearSystem() at /home/xiaodong.liu/FEM3D_exp/src/ElementInfo.cpp:418 [0] main() at /home/xiaodong.liu/FEM3D_exp/src/main.cpp:24 [ 0] 76432 bytes [0] MatLUFactorSymbolic_SeqAIJ() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/mat/impls/aij/seq/aijfact.c:322 [0] MatLUFactorSymbolic() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/mat/interface/matrix.c:3200 [0] PCSetUp_LU() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/ksp/pc/impls/factor/lu/lu.c:87 [0] PCSetUp() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/ksp/pc/interface/precon.c:1079 [0] KSPSetUp() at 
/home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/ksp/ksp/interface/itfunc.c:415 [0] KSPSolve_Private() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/ksp/ksp/interface/itfunc.c:831 [0] KSPSolve() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/ksp/ksp/interface/itfunc.c:1078 [0] solveLinearSystem() at /home/xiaodong.liu/FEM3D_exp/src/ElementInfo.cpp:418 [0] main() at /home/xiaodong.liu/FEM3D_exp/src/main.cpp:24 [ 0] 2445744 bytes [0] MatLUFactorSymbolic_SeqAIJ() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/mat/impls/aij/seq/aijfact.c:322 [0] MatLUFactorSymbolic() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/mat/interface/matrix.c:3200 [0] PCSetUp_LU() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/ksp/pc/impls/factor/lu/lu.c:87 [0] PCSetUp() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/ksp/pc/interface/precon.c:1079 [0] KSPSetUp() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/ksp/ksp/interface/itfunc.c:415 [0] KSPSolve_Private() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/ksp/ksp/interface/itfunc.c:831 [0] KSPSolve() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/ksp/ksp/interface/itfunc.c:1078 [0] solveLinearSystem() at /home/xiaodong.liu/FEM3D_exp/src/ElementInfo.cpp:418 [0] main() at /home/xiaodong.liu/FEM3D_exp/src/main.cpp:24 [ 0] 2445744 bytes [0] MatLUFactorSymbolic_SeqAIJ() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/mat/impls/aij/seq/aijfact.c:317 [0] MatLUFactorSymbolic() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/mat/interface/matrix.c:3200 [0] PCSetUp_LU() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/ksp/pc/impls/factor/lu/lu.c:87 [0] PCSetUp() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/ksp/pc/interface/precon.c:1079 [0] KSPSetUp() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/ksp/ksp/interface/itfunc.c:415 [0] KSPSolve_Private() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/ksp/ksp/interface/itfunc.c:831 [0] KSPSolve() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/ksp/ksp/interface/itfunc.c:1078 [0] solveLinearSystem() at /home/xiaodong.liu/FEM3D_exp/src/ElementInfo.cpp:418 [0] main() at /home/xiaodong.liu/FEM3D_exp/src/main.cpp:24 [ 0] 2445744 bytes [0] MatLUFactorSymbolic_SeqAIJ() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/mat/impls/aij/seq/aijfact.c:316 [0] MatLUFactorSymbolic() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/mat/interface/matrix.c:3200 [0] PCSetUp_LU() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/ksp/pc/impls/factor/lu/lu.c:87 [0] PCSetUp() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/ksp/pc/interface/precon.c:1079 [0] KSPSetUp() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/ksp/ksp/interface/itfunc.c:415 [0] KSPSolve_Private() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/ksp/ksp/interface/itfunc.c:831 [0] KSPSolve() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/ksp/ksp/interface/itfunc.c:1078 [0] solveLinearSystem() at /home/xiaodong.liu/FEM3D_exp/src/ElementInfo.cpp:418 [0] main() at /home/xiaodong.liu/FEM3D_exp/src/main.cpp:24 [ 0] 16 bytes [0] PetscLayoutSetUp() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/vec/is/utils/pmap.c:247 [0] PetscLayoutCreateFromSizes() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/vec/is/utils/pmap.c:107 [0] ISGeneralSetIndices_General() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/vec/is/is/impls/general/general.c:569 [0] ISGeneralSetIndices() at 
/home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/vec/is/is/impls/general/general.c:559 [0] ISCreateGeneral() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/vec/is/is/impls/general/general.c:530 [0] ISInvertPermutation_General() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/vec/is/is/impls/general/general.c:156 [0] ISInvertPermutation() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/vec/is/is/interface/index.c:1096 [0] MatLUFactorSymbolic_SeqAIJ() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/mat/impls/aij/seq/aijfact.c:311 [0] MatLUFactorSymbolic() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/mat/interface/matrix.c:3200 [0] PCSetUp_LU() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/ksp/pc/impls/factor/lu/lu.c:87 [0] PCSetUp() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/ksp/pc/interface/precon.c:1079 [0] KSPSetUp() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/ksp/ksp/interface/itfunc.c:415 [0] KSPSolve_Private() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/ksp/ksp/interface/itfunc.c:831 [0] KSPSolve() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/ksp/ksp/interface/itfunc.c:1078 [0] solveLinearSystem() at /home/xiaodong.liu/FEM3D_exp/src/ElementInfo.cpp:418 [0] main() at /home/xiaodong.liu/FEM3D_exp/src/main.cpp:24 [ 0] 80 bytes [0] PetscLayoutCreate() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/vec/is/utils/pmap.c:53 [0] PetscLayoutCreateFromSizes() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/vec/is/utils/pmap.c:103 [0] ISGeneralSetIndices_General() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/vec/is/is/impls/general/general.c:569 [0] ISGeneralSetIndices() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/vec/is/is/impls/general/general.c:559 [0] ISCreateGeneral() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/vec/is/is/impls/general/general.c:530 [0] ISInvertPermutation_General() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/vec/is/is/impls/general/general.c:156 [0] ISInvertPermutation() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/vec/is/is/interface/index.c:1096 [0] MatLUFactorSymbolic_SeqAIJ() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/mat/impls/aij/seq/aijfact.c:311 [0] MatLUFactorSymbolic() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/mat/interface/matrix.c:3200 [0] PCSetUp_LU() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/ksp/pc/impls/factor/lu/lu.c:87 [0] PCSetUp() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/ksp/pc/interface/precon.c:1079 [0] KSPSetUp() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/ksp/ksp/interface/itfunc.c:415 [0] KSPSolve_Private() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/ksp/ksp/interface/itfunc.c:831 [0] KSPSolve() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/ksp/ksp/interface/itfunc.c:1078 [0] solveLinearSystem() at /home/xiaodong.liu/FEM3D_exp/src/ElementInfo.cpp:418 [0] main() at /home/xiaodong.liu/FEM3D_exp/src/main.cpp:24 [ 0] 16 bytes [0] PetscStrallocpy() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/include/petscstring.h:151 [0] PetscObjectChangeTypeName() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/sys/objects/pname.c:134 [0] ISSetType() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/vec/is/is/interface/isreg.c:78 [0] ISCreateGeneral() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/vec/is/is/impls/general/general.c:529 [0] ISInvertPermutation_General() at 
/home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/vec/is/is/impls/general/general.c:156 [0] ISInvertPermutation() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/vec/is/is/interface/index.c:1096 [0] MatLUFactorSymbolic_SeqAIJ() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/mat/impls/aij/seq/aijfact.c:311 [0] MatLUFactorSymbolic() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/mat/interface/matrix.c:3200 [0] PCSetUp_LU() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/ksp/pc/impls/factor/lu/lu.c:87 [0] PCSetUp() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/ksp/pc/interface/precon.c:1079 [0] KSPSetUp() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/ksp/ksp/interface/itfunc.c:415 [0] KSPSolve_Private() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/ksp/ksp/interface/itfunc.c:831 [0] KSPSolve() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/ksp/ksp/interface/itfunc.c:1078 [0] solveLinearSystem() at /home/xiaodong.liu/FEM3D_exp/src/ElementInfo.cpp:418 [0] main() at /home/xiaodong.liu/FEM3D_exp/src/main.cpp:24 [ 0] 16 bytes [0] PetscStrallocpy() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/include/petscstring.h:151 [0] PetscHMapFuncInsert_Private() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/sys/dll/reg.c:240 [0] PetscFunctionListAdd_Private() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/sys/dll/reg.c:299 [0] PetscObjectComposeFunction_Private() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/sys/objects/inherit.c:795 [0] ISCreate_General() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/vec/is/is/impls/general/general.c:702 [0] ISSetType() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/vec/is/is/interface/isreg.c:77 [0] ISCreateGeneral() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/vec/is/is/impls/general/general.c:529 [0] ISInvertPermutation_General() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/vec/is/is/impls/general/general.c:156 [0] ISInvertPermutation() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/vec/is/is/interface/index.c:1096 [0] MatLUFactorSymbolic_SeqAIJ() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/mat/impls/aij/seq/aijfact.c:311 [0] MatLUFactorSymbolic() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/mat/interface/matrix.c:3200 [0] PCSetUp_LU() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/ksp/pc/impls/factor/lu/lu.c:87 [0] PCSetUp() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/ksp/pc/interface/precon.c:1079 [0] KSPSetUp() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/ksp/ksp/interface/itfunc.c:415 [0] KSPSolve_Private() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/ksp/ksp/interface/itfunc.c:831 [0] KSPSolve() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/ksp/ksp/interface/itfunc.c:1078 [0] solveLinearSystem() at /home/xiaodong.liu/FEM3D_exp/src/ElementInfo.cpp:418 [0] main() at /home/xiaodong.liu/FEM3D_exp/src/main.cpp:24 [ 0] 32 bytes [0] PetscStrallocpy() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/include/petscstring.h:151 [0] PetscHMapFuncInsert_Private() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/sys/dll/reg.c:240 [0] PetscFunctionListAdd_Private() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/sys/dll/reg.c:299 [0] PetscObjectComposeFunction_Private() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/sys/objects/inherit.c:795 [0] ISCreate_General() at 
/home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/vec/is/is/impls/general/general.c:701 [0] ISSetType() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/vec/is/is/interface/isreg.c:77 [0] ISCreateGeneral() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/vec/is/is/impls/general/general.c:529 [0] ISInvertPermutation_General() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/vec/is/is/impls/general/general.c:156 [0] ISInvertPermutation() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/vec/is/is/interface/index.c:1096 [0] MatLUFactorSymbolic_SeqAIJ() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/mat/impls/aij/seq/aijfact.c:311 [0] MatLUFactorSymbolic() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/mat/interface/matrix.c:3200 [0] PCSetUp_LU() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/ksp/pc/impls/factor/lu/lu.c:87 [0] PCSetUp() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/ksp/pc/interface/precon.c:1079 [0] KSPSetUp() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/ksp/ksp/interface/itfunc.c:415 [0] KSPSolve_Private() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/ksp/ksp/interface/itfunc.c:831 [0] KSPSolve() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/ksp/ksp/interface/itfunc.c:1078 [0] solveLinearSystem() at /home/xiaodong.liu/FEM3D_exp/src/ElementInfo.cpp:418 [0] main() at /home/xiaodong.liu/FEM3D_exp/src/main.cpp:24 [ 0] 32 bytes [0] PetscStrallocpy() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/include/petscstring.h:151 [0] PetscHMapFuncInsert_Private() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/sys/dll/reg.c:240 [0] PetscFunctionListAdd_Private() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/sys/dll/reg.c:299 [0] PetscObjectComposeFunction_Private() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/sys/objects/inherit.c:795 [0] ISCreate_General() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/vec/is/is/impls/general/general.c:700 [0] ISSetType() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/vec/is/is/interface/isreg.c:77 [0] ISCreateGeneral() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/vec/is/is/impls/general/general.c:529 [0] ISInvertPermutation_General() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/vec/is/is/impls/general/general.c:156 [0] ISInvertPermutation() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/vec/is/is/interface/index.c:1096 [0] MatLUFactorSymbolic_SeqAIJ() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/mat/impls/aij/seq/aijfact.c:311 [0] MatLUFactorSymbolic() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/mat/interface/matrix.c:3200 [0] PCSetUp_LU() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/ksp/pc/impls/factor/lu/lu.c:87 [0] PCSetUp() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/ksp/pc/interface/precon.c:1079 [0] KSPSetUp() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/ksp/ksp/interface/itfunc.c:415 [0] KSPSolve_Private() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/ksp/ksp/interface/itfunc.c:831 [0] KSPSolve() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/ksp/ksp/interface/itfunc.c:1078 [0] solveLinearSystem() at /home/xiaodong.liu/FEM3D_exp/src/ElementInfo.cpp:418 [0] main() at /home/xiaodong.liu/FEM3D_exp/src/main.cpp:24 [ 0] 32 bytes [0] PetscStrallocpy() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/include/petscstring.h:151 [0] PetscHMapFuncInsert_Private() at 
/home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/sys/dll/reg.c:240 [0] PetscFunctionListAdd_Private() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/sys/dll/reg.c:299 [0] PetscObjectComposeFunction_Private() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/sys/objects/inherit.c:795 [0] ISCreate_General() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/vec/is/is/impls/general/general.c:699 [0] ISSetType() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/vec/is/is/interface/isreg.c:77 [0] ISCreateGeneral() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/vec/is/is/impls/general/general.c:529 [0] ISInvertPermutation_General() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/vec/is/is/impls/general/general.c:156 [0] ISInvertPermutation() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/vec/is/is/interface/index.c:1096 [0] MatLUFactorSymbolic_SeqAIJ() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/mat/impls/aij/seq/aijfact.c:311 [0] MatLUFactorSymbolic() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/mat/interface/matrix.c:3200 [0] PCSetUp_LU() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/ksp/pc/impls/factor/lu/lu.c:87 [0] PCSetUp() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/ksp/pc/interface/precon.c:1079 [0] KSPSetUp() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/ksp/ksp/interface/itfunc.c:415 [0] KSPSolve_Private() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/ksp/ksp/interface/itfunc.c:831 [0] KSPSolve() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/ksp/ksp/interface/itfunc.c:1078 [0] solveLinearSystem() at /home/xiaodong.liu/FEM3D_exp/src/ElementInfo.cpp:418 [0] main() at /home/xiaodong.liu/FEM3D_exp/src/main.cpp:24 [ 0] 16 bytes [0] PetscFunctionListDLAllPush_Private() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/sys/dll/reg.c:189 [0] PetscFunctionListCreate_Private() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/sys/dll/reg.c:259 [0] PetscFunctionListAdd_Private() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/sys/dll/reg.c:298 [0] PetscObjectComposeFunction_Private() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/sys/objects/inherit.c:795 [0] ISCreate_General() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/vec/is/is/impls/general/general.c:699 [0] ISSetType() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/vec/is/is/interface/isreg.c:77 [0] ISCreateGeneral() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/vec/is/is/impls/general/general.c:529 [0] ISInvertPermutation_General() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/vec/is/is/impls/general/general.c:156 [0] ISInvertPermutation() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/vec/is/is/interface/index.c:1096 [0] MatLUFactorSymbolic_SeqAIJ() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/mat/impls/aij/seq/aijfact.c:311 [0] MatLUFactorSymbolic() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/mat/interface/matrix.c:3200 [0] PCSetUp_LU() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/ksp/pc/impls/factor/lu/lu.c:87 [0] PCSetUp() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/ksp/pc/interface/precon.c:1079 [0] KSPSetUp() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/ksp/ksp/interface/itfunc.c:415 [0] KSPSolve_Private() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/ksp/ksp/interface/itfunc.c:831 [0] KSPSolve() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/ksp/ksp/interface/itfunc.c:1078 [0] 
solveLinearSystem() at /home/xiaodong.liu/FEM3D_exp/src/ElementInfo.cpp:418 [0] main() at /home/xiaodong.liu/FEM3D_exp/src/main.cpp:24 [ 0] 16 bytes [0] PetscFunctionListCreate_Private() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/sys/dll/reg.c:257 [0] PetscFunctionListAdd_Private() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/sys/dll/reg.c:298 [0] PetscObjectComposeFunction_Private() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/sys/objects/inherit.c:795 [0] ISCreate_General() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/vec/is/is/impls/general/general.c:699 [0] ISSetType() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/vec/is/is/interface/isreg.c:77 [0] ISCreateGeneral() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/vec/is/is/impls/general/general.c:529 [0] ISInvertPermutation_General() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/vec/is/is/impls/general/general.c:156 [0] ISInvertPermutation() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/vec/is/is/interface/index.c:1096 [0] MatLUFactorSymbolic_SeqAIJ() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/mat/impls/aij/seq/aijfact.c:311 [0] MatLUFactorSymbolic() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/mat/interface/matrix.c:3200 [0] PCSetUp_LU() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/ksp/pc/impls/factor/lu/lu.c:87 [0] PCSetUp() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/ksp/pc/interface/precon.c:1079 [0] KSPSetUp() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/ksp/ksp/interface/itfunc.c:415 [0] KSPSolve_Private() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/ksp/ksp/interface/itfunc.c:831 [0] KSPSolve() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/ksp/ksp/interface/itfunc.c:1078 [0] solveLinearSystem() at /home/xiaodong.liu/FEM3D_exp/src/ElementInfo.cpp:418 [0] main() at /home/xiaodong.liu/FEM3D_exp/src/main.cpp:24 [ 0] 16 bytes [0] ISCreate_General() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/vec/is/is/impls/general/general.c:696 [0] ISSetType() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/vec/is/is/interface/isreg.c:77 [0] ISCreateGeneral() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/vec/is/is/impls/general/general.c:529 [0] ISInvertPermutation_General() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/vec/is/is/impls/general/general.c:156 [0] ISInvertPermutation() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/vec/is/is/interface/index.c:1096 [0] MatLUFactorSymbolic_SeqAIJ() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/mat/impls/aij/seq/aijfact.c:311 [0] MatLUFactorSymbolic() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/mat/interface/matrix.c:3200 [0] PCSetUp_LU() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/ksp/pc/impls/factor/lu/lu.c:87 [0] PCSetUp() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/ksp/pc/interface/precon.c:1079 [0] KSPSetUp() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/ksp/ksp/interface/itfunc.c:415 [0] KSPSolve_Private() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/ksp/ksp/interface/itfunc.c:831 [0] KSPSolve() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/ksp/ksp/interface/itfunc.c:1078 [0] solveLinearSystem() at /home/xiaodong.liu/FEM3D_exp/src/ElementInfo.cpp:418 [0] main() at /home/xiaodong.liu/FEM3D_exp/src/main.cpp:24 [ 0] 816 bytes [0] ISCreate() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/vec/is/is/interface/isreg.c:33 [0] 
ISCreateGeneral() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/vec/is/is/impls/general/general.c:528 [0] ISInvertPermutation_General() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/vec/is/is/impls/general/general.c:156 [0] ISInvertPermutation() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/vec/is/is/interface/index.c:1096 [0] MatLUFactorSymbolic_SeqAIJ() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/mat/impls/aij/seq/aijfact.c:311 [0] MatLUFactorSymbolic() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/mat/interface/matrix.c:3200 [0] PCSetUp_LU() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/ksp/pc/impls/factor/lu/lu.c:87 [0] PCSetUp() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/ksp/pc/interface/precon.c:1079 [0] KSPSetUp() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/ksp/ksp/interface/itfunc.c:415 [0] KSPSolve_Private() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/ksp/ksp/interface/itfunc.c:831 [0] KSPSolve() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/ksp/ksp/interface/itfunc.c:1078 [0] solveLinearSystem() at /home/xiaodong.liu/FEM3D_exp/src/ElementInfo.cpp:418 [0] main() at /home/xiaodong.liu/FEM3D_exp/src/main.cpp:24 [ 0] 2445728 bytes [0] ISInvertPermutation_General() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/vec/is/is/impls/general/general.c:154 [0] ISInvertPermutation() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/vec/is/is/interface/index.c:1096 [0] MatLUFactorSymbolic_SeqAIJ() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/mat/impls/aij/seq/aijfact.c:311 [0] MatLUFactorSymbolic() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/mat/interface/matrix.c:3200 [0] PCSetUp_LU() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/ksp/pc/impls/factor/lu/lu.c:87 [0] PCSetUp() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/ksp/pc/interface/precon.c:1079 [0] KSPSetUp() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/ksp/ksp/interface/itfunc.c:415 [0] KSPSolve_Private() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/ksp/ksp/interface/itfunc.c:831 [0] KSPSolve() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/ksp/ksp/interface/itfunc.c:1078 [0] solveLinearSystem() at /home/xiaodong.liu/FEM3D_exp/src/ElementInfo.cpp:418 [0] main() at /home/xiaodong.liu/FEM3D_exp/src/main.cpp:24 [ 0] 16 bytes [0] PetscCommDuplicate() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/sys/objects/tagm.c:227 [0] PetscHeaderCreate_Private() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/sys/objects/inherit.c:51 [0] PetscHeaderCreate_Function() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/sys/objects/inherit.c:26 [0] PetscDSCreate() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/dm/dt/interface/dtds.c:687 [0] DMCreate() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/dm/interface/dm.c:96 [0] DMPlexCreateGmsh() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/dm/impls/plex/plexgmsh.c:1545 [0] main() at /home/xiaodong.liu/FEM3D_exp/src/main.cpp:24 [ 0] 32 bytes [0] PetscCommDuplicate() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/sys/objects/tagm.c:223 [0] PetscHeaderCreate_Private() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/sys/objects/inherit.c:51 [0] PetscHeaderCreate_Function() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/sys/objects/inherit.c:26 [0] PetscDSCreate() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/dm/dt/interface/dtds.c:687 [0] DMCreate() at 
/home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/dm/interface/dm.c:96 [0] DMPlexCreateGmsh() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/dm/impls/plex/plexgmsh.c:1545 [0] main() at /home/xiaodong.liu/FEM3D_exp/src/main.cpp:24 [0] Maximum memory PetscMalloc()ed 19474398848 maximum size of entire process 19485659136 [0] Memory usage sorted by function [0] 13 416 DMAddLabel() [0] 5 25280 DMCreate() [0] 4 64 DMCreateDS() [0] 3 2832 DMCreate_Plex() [0] 7 224 DMDSEnlarge_Static() [0] 3 96 DMFieldEnlarge_Static() [0] 3 144 DMGenerateRegister() [0] 1 2445728 DMGetLocalToGlobalMapping() [0] 23 1526176 DMGetWorkArray() [0] 13 8112 DMLabelCreate() [0] 20 384 DMLabelDuplicate() [0] 12 2518784 DMLabelMakeValid_Private() [0] 120 2144 DMLabelNewStratum() [0] 1 64960 DMPlexCreateGmsh() [0] 5458 174656 DMPlexGetFullJoin() [0] 4 3834352 DMPlexInterpolateFaces_Internal() [0] 3 832880 DMPlexSetCellType() [0] 6 13431984 DMPlexSymmetrize() [0] 6 20201056 DMSetUp_Plex() [0] 1 453232 ExtractEdgesDoffromFaces() [0] 10 1071856 GmshBufferGet() [0] 10 2154336 GmshBufferSizeGet() [0] 2 20969728 GmshElementsCreate() [0] 5 19520 GmshEntitiesCreate() [0] 1 112 GmshMeshCreate() [0] 4 1494112 GmshNodesCreate() [0] 1 64960 GmshReadElements() [0] 1 64960 GmshReadNodes() [0] 3 112 GmshReadPhysicalNames() [0] 62 50592 ISCreate() [0] 10 160 ISCreate_General() [0] 52 832 ISCreate_Stride() [0] 4 7447008 ISGeneralSetIndices_General() [0] 5 5272816 ISGetIndices_Stride() [0] 1 2445728 ISInvertPermutation_General() [0] 1 608 ISLocalToGlobalMappingCreate() [0] 3 7337184 ISLocalToGlobalMappingGetIndices() [0] 6 12338464 ISSetPermutation() [0] 1 32 KSPConvergedDefaultCreate() [0] 1 1552 KSPCreate() [0] 1 240 KSPCreate_FGMRES() [0] 2 512 KSPSetUp_FGMRES() [0] 8 33440 KSPSetUp_GMRES() [0] 3 531953744 MatAssemblyEnd_Seq_Hash() [0] 5 14720 MatCreate() [0] 1 2445744 MatCreateColInode_Private() [0] 5 8320 MatCreate_SeqAIJ() [0] 4 5111104 MatGetOrdering_ND() [0] 5 32000672 MatGetRowIJ_SeqAIJ_Inode_Symmetric() [0] 3 6169232 MatInodeAdjustForInodes_SeqAIJ_Inode() [0] 5 14674448 MatLUFactorSymbolic_SeqAIJ() [0] 1 2445728 MatMarkDiagonal_SeqAIJ() [0] 8 256 MatRegisterRootName() [0] 1 2445744 MatSeqAIJCheckInode() [0] 6 539290928 MatSeqAIJSetPreallocation_SeqAIJ() [0] 1 2445728 MatSetUp_Seq_Hash() [0] 10 576 MatSolverTypeRegister() [0] 1 784 PCCreate() [0] 1 176 PCCreate_LU() [0] 5 79408 PetscBTCreate() [0] 14 336 PetscChunkBufferCreate() [0] 1 16 PetscCommBuildTwoSided_Allreduce() [0] 6 144 PetscCommDuplicate() [0] 3 1536 PetscContainerCreate() [0] 7 7280 PetscDSCreate() [0] 20 320 PetscDSEnlarge_Static() [0] 8 18774400096 PetscFreeSpaceGet() [0] 100 1600 PetscFunctionListCreate_Private() [0] 100 1600 PetscFunctionListDLAllPush_Private() [0] 2 528 PetscIntStackCreate() [0] 140 11200 PetscLayoutCreate() [0] 70 1120 PetscLayoutSetUp() [0] 2 2064 PetscLogClassArrayCreate() [0] 2 2064 PetscLogEventArrayCreate() [0] 2 12288 PetscLogEventArrayRecapacity() [0] 1 32 PetscLogRegistryCreate() [0] 2 80 PetscLogStageArrayCreate() [0] 1 48 PetscLogStateCreate() [0] 10 800 PetscObjectComposedDataIncrease_() [0] 2 576 PetscObjectListAdd() [0] 10 224 PetscOptionsGetEList() [0] 1 16 PetscOptionsHelpPrintedCreate() [0] 3 1776 PetscPartitionerCreate() [0] 3 96 PetscPartitionerCreate_Simple() [0] 1 32 PetscPushSignalHandler() [0] 14 14336 PetscSFCreate() [0] 1 144 PetscSFCreate_Basic() [0] 1 832 PetscSFLinkCreate_MPI() [0] 8 4891552 PetscSFSetUpRanks() [0] 4 2445776 PetscSFSetUp_Basic() [0] 13 8528 PetscSectionCreate() [0] 24 23368576 
PetscSectionSetChart() [0] 3 64 PetscSectionSetFieldComponents() [0] 15 240 PetscSectionSetNumFields() [0] 19 2545504 PetscSegBufferAlloc_Private() [0] 6 20144 PetscSegBufferCreate() [0] 1420 30864 PetscStrallocpy() [0] 12 26144 PetscStrreplace() [0] 6 960 PetscTimSort() [0] 12 12487008 PetscTimSortResizeBuffer_Private() [0] 1 16 PetscViewerASCIIOpen() [0] 2 1376 PetscViewerCreate() [0] 2 192 PetscViewerCreate_ASCII() [0] 14 5488 PetscWeakFormCreate() [0] 5 15760 VecCreate() [0] 9 28368 VecCreateWithLayout_Private() [0] 1 272 VecCreate_MPI_Private() [0] 5 39911168 VecCreate_Seq() [0] 13 832 VecCreate_Seq_Private() [0] 4 78266448 VecDuplicateVecs_Seq_GEMV() [0] 2 7337184 VecScatterCreate() [0] 2 32 VecStashCreate_Private() [0] 2 144 calcAmpereCurrent() [0] 2 144 setMatrix() From mfadams at lbl.gov Thu Jul 18 15:58:19 2024 From: mfadams at lbl.gov (Mark Adams) Date: Thu, 18 Jul 2024 16:58:19 -0400 Subject: [petsc-users] [petsc-maint] Out of memory issue related to KSP. In-Reply-To: References: Message-ID: How big is your matrix? On Thu, Jul 18, 2024 at 4:53?PM neil liu wrote: > Dear Pestc team, I am trying to solve a complex linear system by Petsc > KSP. When I committed out this piece code, no errors came out. Will my > coding part affect ksp? I am using a direct solver, -pc_type LU. > PetscErrorCode ElementInfo: : solveLinearSystem( > ZjQcmQRYFpfptBannerStart > This Message Is From an External Sender > This message came from outside your organization. > > ZjQcmQRYFpfptBannerEnd > Dear Pestc team, > > I am trying to solve a complex linear system by Petsc KSP. When I > committed out this piece code, no errors came out. Will my coding part > affect ksp? I am using a direct solver, -pc_type LU. > > *PetscErrorCode ElementInfo::solveLinearSystem( ){ * > *PetscFunctionBegin; * > *KSP ksp; * > *KSPCreate(PETSC_COMM_WORLD, &ksp); * > *KSPSetType(ksp, KSPFGMRES); * > *KSPSetOperators(ksp, A, A); * > *KSPSetFromOptions(ksp); KSPSolve(ksp, b, x); * > *KSPDestroy(&ksp); * > *PetscFunctionReturn(PETSC_SUCCESS);* > * }* > > The output with -malloc_test and -malloc_view is attached. It shows the > following errors, > *Line 5 in the attached file * > [0]PETSC ERROR: --------------------- Error Message > -------------------------------------------------------------- > [0]PETSC ERROR: Out of memory. This could be due to allocating > [0]PETSC ERROR: too large an object or bleeding by not properly > [0]PETSC ERROR: destroying unneeded objects. > [0] Maximum memory PetscMalloc()ed 19474398848 maximum size of entire > process 740352000 (*This only used 20% of my 64Gb memory .*) > *Line 111 in the attached file* > [0]PETSC ERROR: Memory requested *18446744069642786816 * (*This is too > big.)* > [0]PETSC ERROR: See https://urldefense.us/v3/__https://petsc.org/release/faq/__;!!G_uCfscf7eWS!aay4f2-4GIzJEBJpIcqH0Yr7D7_mWjpDVo_xgrhc6gIx6SGtJ9cliIKUYKq_IqoLvI6YDhX3BBPGXCq3Zchzdso$ > for > trouble shooting. > [0]PETSC ERROR: Petsc Release Version 3.21.1, unknown > [0]PETSC ERROR: ./app on a arch-linux-c-debug named kirin.remcominc.com > by > xiaodong.liu Thu Jul 18 16:11:52 2024 > [0]PETSC ERROR: Configure options --with-cc=gcc --with-fc=gfortran > --with-cxx=g++ --download-fblaslapack --download-mpich > --with-scalar-type=complex --download-triangle > [0]PETSC ERROR: #1 PetscMallocAlign() at /home/xiaod > -------------- next part -------------- An HTML attachment was scrubbed... 
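For reference, a minimal sketch of the solve routine quoted above with PETSc's error-checking macros added, so an allocation failure inside PCSetUp/MatLUFactorSymbolic is reported at the call that triggers it instead of only through -malloc_view at the end of the run. The Mat A and Vec b, x arguments stand in for the poster's class members and the routine name is kept only for readability; this is a sketch, not the poster's actual code.

#include <petscksp.h>

/* Sketch: same solve as in the post, but every PETSc call is wrapped in
   PetscCall() so errors (including out-of-memory during factorization)
   propagate with a full traceback. */
static PetscErrorCode solveLinearSystem(Mat A, Vec b, Vec x)
{
  KSP ksp;

  PetscFunctionBeginUser;
  PetscCall(KSPCreate(PETSC_COMM_WORLD, &ksp));
  PetscCall(KSPSetType(ksp, KSPFGMRES));
  PetscCall(KSPSetOperators(ksp, A, A));
  PetscCall(KSPSetFromOptions(ksp)); /* lets -ksp_type, -pc_type, etc. take effect */
  PetscCall(KSPSolve(ksp, b, x));
  PetscCall(KSPDestroy(&ksp));
  PetscFunctionReturn(PETSC_SUCCESS);
}

Because KSPSetFromOptions() is called, the solver can be switched at run time, e.g. -ksp_view to confirm what is actually being used, or -pc_type lu -pc_factor_mat_solver_type mumps for the external direct solver suggested later in the thread (assuming PETSc was configured with --download-mumps).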
URL: From liufield at gmail.com Thu Jul 18 17:23:14 2024 From: liufield at gmail.com (neil liu) Date: Thu, 18 Jul 2024 18:23:14 -0400 Subject: [petsc-users] [petsc-maint] Out of memory issue related to KSP. In-Reply-To: References: Message-ID: Thanks, mark. I am using a sparse matrix from a second-oder vector basis FEM. Will this sparse matrix still use a memory similar to a dense matrix? Thanks, Xiaodong On Thu, Jul 18, 2024 at 6:00?PM Mark Adams wrote: > keep on the list. > > > > > On Thu, Jul 18, 2024 at 5:06?PM neil liu wrote: > >> The matrix size (complex) is 611,432 x 611,432. >> > > A dense matrix of that size uses 3 terabytes to store 8*(611,432)^2 bytes. > > That is too big. > > Mark > > >> >> On Thu, Jul 18, 2024 at 4:58?PM Mark Adams wrote: >> >>> How big is your matrix? >>> >>> On Thu, Jul 18, 2024 at 4:53?PM neil liu wrote: >>> >>>> Dear Pestc team, I am trying to solve a complex linear system by Petsc >>>> KSP. When I committed out this piece code, no errors came out. Will my >>>> coding part affect ksp? I am using a direct solver, -pc_type LU. >>>> PetscErrorCode ElementInfo: : solveLinearSystem( >>>> ZjQcmQRYFpfptBannerStart >>>> This Message Is From an External Sender >>>> This message came from outside your organization. >>>> >>>> ZjQcmQRYFpfptBannerEnd >>>> Dear Pestc team, >>>> >>>> I am trying to solve a complex linear system by Petsc KSP. When I >>>> committed out this piece code, no errors came out. Will my coding part >>>> affect ksp? I am using a direct solver, -pc_type LU. >>>> >>>> *PetscErrorCode ElementInfo::solveLinearSystem( ){ * >>>> *PetscFunctionBegin; * >>>> *KSP ksp; * >>>> *KSPCreate(PETSC_COMM_WORLD, &ksp); * >>>> *KSPSetType(ksp, KSPFGMRES); * >>>> *KSPSetOperators(ksp, A, A); * >>>> *KSPSetFromOptions(ksp); KSPSolve(ksp, b, x); * >>>> *KSPDestroy(&ksp); * >>>> *PetscFunctionReturn(PETSC_SUCCESS);* >>>> * }* >>>> >>>> The output with -malloc_test and -malloc_view is attached. It shows the >>>> following errors, >>>> *Line 5 in the attached file * >>>> [0]PETSC ERROR: --------------------- Error Message >>>> -------------------------------------------------------------- >>>> [0]PETSC ERROR: Out of memory. This could be due to allocating >>>> [0]PETSC ERROR: too large an object or bleeding by not properly >>>> [0]PETSC ERROR: destroying unneeded objects. >>>> [0] Maximum memory PetscMalloc()ed 19474398848 maximum size of entire >>>> process 740352000 (*This only used 20% of my 64Gb memory .*) >>>> *Line 111 in the attached file* >>>> [0]PETSC ERROR: Memory requested *18446744069642786816 * (*This is too >>>> big.)* >>>> [0]PETSC ERROR: See https://urldefense.us/v3/__https://petsc.org/release/faq/__;!!G_uCfscf7eWS!frXqnQME2vz-dsM2ohLke9bFaRKeV5R43otxvpWSX4XCA-Ln5mVS2v7CQa2AXWb971drSs1ICwejJ00Vj-zFFw$ >>>> for >>>> trouble shooting. >>>> [0]PETSC ERROR: Petsc Release Version 3.21.1, unknown >>>> [0]PETSC ERROR: ./app on a arch-linux-c-debug named kirin.remcominc.com >>>> by >>>> xiaodong.liu Thu Jul 18 16:11:52 2024 >>>> [0]PETSC ERROR: Configure options --with-cc=gcc --with-fc=gfortran >>>> --with-cxx=g++ --download-fblaslapack --download-mpich >>>> --with-scalar-type=complex --download-triangle >>>> [0]PETSC ERROR: #1 PetscMallocAlign() at /home/xiaod >>>> >>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at petsc.dev Thu Jul 18 17:57:14 2024 From: bsmith at petsc.dev (Barry Smith) Date: Thu, 18 Jul 2024 18:57:14 -0400 Subject: [petsc-users] [petsc-maint] Out of memory issue related to KSP. 
In-Reply-To: References: Message-ID: <6A4B6715-4A87-4B89-A0F3-562200C86F9F@petsc.dev> Sparse matrix factorizations generally take much less memory than dense factorizations. How much memory they require depends on the number of nonzeros in the original matrix and the nonzero pattern. The amount of memory required for the factorization can easily be 5 to 10 times that of the original matrix. You can start with smaller problems to "get a feeling" for how much memory is required for the factorization and then gradually increase the problem size to see what is achievable. For large problems with a direct solver, you will want to use MUMPS (./configure --download-mumps) and run it on multiple compute nodes. Iterative solvers generally use much less memory and scale better for larger problems but are problem specific. Barry > On Jul 18, 2024, at 6:23?PM, neil liu wrote: > > This Message Is From an External Sender > This message came from outside your organization. > Thanks, mark. > I am using a sparse matrix from a second-oder vector basis FEM. > Will this sparse matrix still use a memory similar to a dense matrix? > > Thanks, > > Xiaodong > > > On Thu, Jul 18, 2024 at 6:00?PM Mark Adams > wrote: >> keep on the list. >> >> >> >> >> On Thu, Jul 18, 2024 at 5:06?PM neil liu > wrote: >>> The matrix size (complex) is 611,432 x 611,432. >> >> A dense matrix of that size uses 3 terabytes to store 8*(611,432)^2 bytes. >> >> That is too big. >> >> Mark >> >>> >>> On Thu, Jul 18, 2024 at 4:58?PM Mark Adams > wrote: >>>> How big is your matrix? >>>> >>>> On Thu, Jul 18, 2024 at 4:53?PM neil liu > wrote: >>>>> This Message Is From an External Sender >>>>> This message came from outside your organization. >>>>> >>>>> Dear Pestc team, >>>>> >>>>> I am trying to solve a complex linear system by Petsc KSP. When I committed out this piece code, no errors came out. Will my coding part affect ksp? I am using a direct solver, -pc_type LU. >>>>> >>>>> PetscErrorCode ElementInfo::solveLinearSystem( ){ >>>>> PetscFunctionBegin; >>>>> KSP ksp; >>>>> KSPCreate(PETSC_COMM_WORLD, &ksp); >>>>> KSPSetType(ksp, KSPFGMRES); >>>>> KSPSetOperators(ksp, A, A); >>>>> KSPSetFromOptions(ksp); KSPSolve(ksp, b, x); >>>>> KSPDestroy(&ksp); >>>>> PetscFunctionReturn(PETSC_SUCCESS); >>>>> } >>>>> >>>>> The output with -malloc_test and -malloc_view is attached. It shows the following errors, >>>>> Line 5 in the attached file >>>>> [0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- >>>>> [0]PETSC ERROR: Out of memory. This could be due to allocating >>>>> [0]PETSC ERROR: too large an object or bleeding by not properly >>>>> [0]PETSC ERROR: destroying unneeded objects. >>>>> [0] Maximum memory PetscMalloc()ed 19474398848 maximum size of entire process 740352000 (This only used 20% of my 64Gb memory .) >>>>> Line 111 in the attached file >>>>> [0]PETSC ERROR: Memory requested 18446744069642786816 (This is too big.) >>>>> [0]PETSC ERROR: See https://urldefense.us/v3/__https://petsc.org/release/faq/__;!!G_uCfscf7eWS!cW4LxgQlFsBzBuRLqMZqdc1Snpka_OgCdlsFdPhw-JYpAdIrHQq4j9v2koyQZhBYxB8OuX7c_s2R6gBnSElpzQs$ for trouble shooting. 
>>>>> [0]PETSC ERROR: Petsc Release Version 3.21.1, unknown >>>>> [0]PETSC ERROR: ./app on a arch-linux-c-debug named kirin.remcominc.com by xiaodong.liu Thu Jul 18 16:11:52 2024 >>>>> [0]PETSC ERROR: Configure options --with-cc=gcc --with-fc=gfortran --with-cxx=g++ --download-fblaslapack --download-mpich --with-scalar-type=complex --download-triangle >>>>> [0]PETSC ERROR: #1 PetscMallocAlign() at /home/xiaod -------------- next part -------------- An HTML attachment was scrubbed... URL: From marcos.vanella at nist.gov Fri Jul 19 11:20:23 2024 From: marcos.vanella at nist.gov (Vanella, Marcos (Fed)) Date: Fri, 19 Jul 2024 16:20:23 +0000 Subject: [petsc-users] compilation error with latest petsc source Message-ID: Hi, I did an update and compiled PETSc in Frontier with gnu compilers. When compiling my code with PETSc I see this new error pop up: Building mpich_gnu_frontier ftn -c -m64 -O2 -g -std=f2018 -frecursive -ffpe-summary=none -fall-intrinsics -cpp -DGITHASH_PP=\"FDS-6.9.1-894-g0b77ae0-FireX\" -DGITDATE_PP=\""Thu Jul 11 16:05:44 2024 -0400\"" -DBUILDDATE_PP=\""Jul 19, 2024 12:13:39\"" -DWITH_PETSC -I"/autofs/nccs-svm1_home1/vanellam/Software/petsc/include/" -I"/autofs/nccs-svm1_home1/vanellam/Software/petsc/arch-linux-frontier-opt-gcc2/include" -fopenmp ../../Source/pres.f90 ../../Source/pres.f90:2799:65: 2799 | CALL MATCREATESEQAIJ(PETSC_COMM_SELF,ZM%NUNKH,ZM%NUNKH,NNZ_7PT_H,PETSC_NULL_INTEGER,ZM%PETSC_MZ%A_H,PETSC_IERR) | 1 Error: Rank mismatch in argument ?e? at (1) (rank-1 and scalar) It seems the use of PETSC_NULL_INTEGER is causing an issue now. From the PETSc docs this entry is nnz which can be an array or NULL. Has there been any change on the API for this routine? Thanks, Marcos PS: I see some other erros in calls to PETSc routines, same type. -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at petsc.dev Fri Jul 19 11:42:11 2024 From: bsmith at petsc.dev (Barry Smith) Date: Fri, 19 Jul 2024 12:42:11 -0400 Subject: [petsc-users] compilation error with latest petsc source In-Reply-To: References: Message-ID: <75693A33-1C2B-4CB3-ACC3-C13EE4EAC838@petsc.dev> We made some superficial changes to the Fortran API to better support Fortran and its error checking. See the bottom of https://urldefense.us/v3/__https://petsc.org/main/changes/dev/__;!!G_uCfscf7eWS!ZE1LvDb2DSdDMK8nW0mqRHwzlc2NYRl5HME44w0td8MbAimMxM27NcCtuq_2ENFLXVCmBo5lMqctZHYCvO4zHj8$ Basically, you have to respect Fortran's pickiness about passing the correct dimension (or lack of dimension) of arguments. In the error below, you need to pass PETSC_NULL_INTEGER_ARRAY > On Jul 19, 2024, at 12:20?PM, Vanella, Marcos (Fed) via petsc-users wrote: > > This Message Is From an External Sender > This message came from outside your organization. > Hi, I did an update and compiled PETSc in Frontier with gnu compilers. 
When compiling my code with PETSc I see this new error pop up: > > Building mpich_gnu_frontier > ftn -c -m64 -O2 -g -std=f2018 -frecursive -ffpe-summary=none -fall-intrinsics -cpp -DGITHASH_PP=\"FDS-6.9.1-894-g0b77ae0-FireX\" -DGITDATE_PP=\""Thu Jul 11 16:05:44 2024 -0400\"" -DBUILDDATE_PP=\""Jul 19, 2024 12:13:39\"" -DWITH_PETSC -I"/autofs/nccs-svm1_home1/vanellam/Software/petsc/include/" -I"/autofs/nccs-svm1_home1/vanellam/Software/petsc/arch-linux-frontier-opt-gcc2/include" -fopenmp ../../Source/pres.f90 > ../../Source/pres.f90:2799:65: > > 2799 | CALL MATCREATESEQAIJ(PETSC_COMM_SELF,ZM%NUNKH,ZM%NUNKH,NNZ_7PT_H,PETSC_NULL_INTEGER,ZM%PETSC_MZ%A_H,PETSC_IERR) > | 1 > Error: Rank mismatch in argument ?e? at (1) (rank-1 and scalar) > > It seems the use of PETSC_NULL_INTEGER is causing an issue now. From the PETSc docs this entry is nnz which can be an array or NULL. Has there been any change on the API for this routine? > > Thanks, > Marcos > > PS: I see some other erros in calls to PETSc routines, same type. -------------- next part -------------- An HTML attachment was scrubbed... URL: From marcos.vanella at nist.gov Fri Jul 19 11:54:17 2024 From: marcos.vanella at nist.gov (Vanella, Marcos (Fed)) Date: Fri, 19 Jul 2024 16:54:17 +0000 Subject: [petsc-users] compilation error with latest petsc source In-Reply-To: <75693A33-1C2B-4CB3-ACC3-C13EE4EAC838@petsc.dev> References: <75693A33-1C2B-4CB3-ACC3-C13EE4EAC838@petsc.dev> Message-ID: Thank you Barry! We'll address the change accordingly. M ________________________________ From: Barry Smith Sent: Friday, July 19, 2024 12:42 PM To: Vanella, Marcos (Fed) Cc: petsc-users at mcs.anl.gov Subject: Re: [petsc-users] compilation error with latest petsc source We made some superficial changes to the Fortran API to better support Fortran and its error checking. See the bottom of https://urldefense.us/v3/__https://petsc.org/main/changes/dev/__;!!G_uCfscf7eWS!fvKGCmdEepa3CkZntgFVJTKrPciDP3wDtRZU6A2jj-B3-NyAEZbbRB5VVVReh2fRbnJe_nQc6O_iWBLTr1wVtzaQBp1YWXy3$ Basically, you have to respect Fortran's pickiness about passing the correct dimension (or lack of dimension) of arguments. In the error below, you need to pass PETSC_NULL_INTEGER_ARRAY On Jul 19, 2024, at 12:20?PM, Vanella, Marcos (Fed) via petsc-users wrote: This Message Is From an External Sender This message came from outside your organization. Hi, I did an update and compiled PETSc in Frontier with gnu compilers. When compiling my code with PETSc I see this new error pop up: Building mpich_gnu_frontier ftn -c -m64 -O2 -g -std=f2018 -frecursive -ffpe-summary=none -fall-intrinsics -cpp -DGITHASH_PP=\"FDS-6.9.1-894-g0b77ae0-FireX\" -DGITDATE_PP=\""Thu Jul 11 16:05:44 2024 -0400\"" -DBUILDDATE_PP=\""Jul 19, 2024 12:13:39\"" -DWITH_PETSC -I"/autofs/nccs-svm1_home1/vanellam/Software/petsc/include/" -I"/autofs/nccs-svm1_home1/vanellam/Software/petsc/arch-linux-frontier-opt-gcc2/include" -fopenmp ../../Source/pres.f90 ../../Source/pres.f90:2799:65: 2799 | CALL MATCREATESEQAIJ(PETSC_COMM_SELF,ZM%NUNKH,ZM%NUNKH,NNZ_7PT_H,PETSC_NULL_INTEGER,ZM%PETSC_MZ%A_H,PETSC_IERR) | 1 Error: Rank mismatch in argument ?e? at (1) (rank-1 and scalar) It seems the use of PETSC_NULL_INTEGER is causing an issue now. From the PETSc docs this entry is nnz which can be an array or NULL. Has there been any change on the API for this routine? Thanks, Marcos PS: I see some other erros in calls to PETSc routines, same type. -------------- next part -------------- An HTML attachment was scrubbed... 
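For reference, a sketch of the change Barry describes applied to the call quoted above; only the null argument changes, and the ZM% names are the poster's own:

! With the updated Fortran interface, an array argument that is not being
! supplied must be the array-valued null object, not the scalar null.
CALL MATCREATESEQAIJ(PETSC_COMM_SELF,ZM%NUNKH,ZM%NUNKH,NNZ_7PT_H,PETSC_NULL_INTEGER_ARRAY,ZM%PETSC_MZ%A_H,PETSC_IERR)

The other rank-mismatch errors mentioned in the PS should follow the same pattern: wherever an array argument is omitted, the matching *_ARRAY null object listed in the changes page linked above replaces the old scalar null.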
URL: From marcos.vanella at nist.gov Fri Jul 19 14:37:13 2024 From: marcos.vanella at nist.gov (Vanella, Marcos (Fed)) Date: Fri, 19 Jul 2024 19:37:13 +0000 Subject: [petsc-users] compilation error with latest petsc source In-Reply-To: References: <75693A33-1C2B-4CB3-ACC3-C13EE4EAC838@petsc.dev> Message-ID: Hi Barry, with the changes in place for my fortran calls I'm now picking up the following error running PC + gamg preconditioner and mpiaijkokkos, kokkos vec: [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: Caught signal number 7 BUS: Bus Error, possibly illegal memory access [0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger [0]PETSC ERROR: or see https://urldefense.us/v3/__https://petsc.org/release/faq/*valgrind__;Iw!!G_uCfscf7eWS!c9ba-qn-H5Nd7u-gSlKeT4yx78fRpyYpoTFvBgzvaq6SJkJPQ0MzGeR_DjeB4_rn-1VhQT1FBxHQNruqNJ4dGIXVWyyt_tvV$ and https://urldefense.us/v3/__https://petsc.org/release/faq/__;!!G_uCfscf7eWS!c9ba-qn-H5Nd7u-gSlKeT4yx78fRpyYpoTFvBgzvaq6SJkJPQ0MzGeR_DjeB4_rn-1VhQT1FBxHQNruqNJ4dGIXVW2O-znOO$ [0]PETSC ERROR: --------------------- Stack Frames ------------------------------------ [0]PETSC ERROR: The line numbers in the error traceback are not always exact. [0]PETSC ERROR: #1 MPI function [0]PETSC ERROR: #2 PetscSFLinkFinishCommunication_Default() at /autofs/nccs-svm1_home1/vanellam/Software/petsc/src/vec/is/sf/impls/basic/sfmpi.c:13 [0]PETSC ERROR: #3 PetscSFLinkFinishCommunication() at /autofs/nccs-svm1_home1/vanellam/Software/petsc/include/../src/vec/is/sf/impls/basic/sfpack.h:291 [0]PETSC ERROR: #4 PetscSFBcastEnd_Basic() at /autofs/nccs-svm1_home1/vanellam/Software/petsc/src/vec/is/sf/impls/basic/sfbasic.c:373 [0]PETSC ERROR: #5 PetscSFBcastEnd() at /autofs/nccs-svm1_home1/vanellam/Software/petsc/src/vec/is/sf/interface/sf.c:1540 [0]PETSC ERROR: #6 VecScatterEnd_Internal() at /autofs/nccs-svm1_home1/vanellam/Software/petsc/src/vec/is/sf/interface/vscat.c:95 [0]PETSC ERROR: #7 VecScatterEnd() at /autofs/nccs-svm1_home1/vanellam/Software/petsc/src/vec/is/sf/interface/vscat.c:1352 [0]PETSC ERROR: #8 MatDiagonalScale_MPIAIJ() at /autofs/nccs-svm1_home1/vanellam/Software/petsc/src/mat/impls/aij/mpi/mpiaij.c:1990 [0]PETSC ERROR: #9 MatDiagonalScale() at /autofs/nccs-svm1_home1/vanellam/Software/petsc/src/mat/interface/matrix.c:5691 [0]PETSC ERROR: #10 MatCreateGraph_Simple_AIJ() at /autofs/nccs-svm1_home1/vanellam/Software/petsc/src/mat/impls/aij/mpi/mpiaij.c:8026 [0]PETSC ERROR: #11 MatCreateGraph() at /autofs/nccs-svm1_home1/vanellam/Software/petsc/src/mat/interface/matrix.c:11426 [0]PETSC ERROR: #12 PCGAMGCreateGraph_AGG() at /autofs/nccs-svm1_home1/vanellam/Software/petsc/src/ksp/pc/impls/gamg/agg.c:663 [0]PETSC ERROR: #13 PCGAMGCreateGraph() at /autofs/nccs-svm1_home1/vanellam/Software/petsc/src/ksp/pc/impls/gamg/gamg.c:2041 [0]PETSC ERROR: #14 PCSetUp_GAMG() at /autofs/nccs-svm1_home1/vanellam/Software/petsc/src/ksp/pc/impls/gamg/gamg.c:695 [0]PETSC ERROR: #15 PCSetUp() at /autofs/nccs-svm1_home1/vanellam/Software/petsc/src/ksp/pc/interface/precon.c:1077 [0]PETSC ERROR: #16 KSPSetUp() at /autofs/nccs-svm1_home1/vanellam/Software/petsc/src/ksp/ksp/interface/itfunc.c:415 [0]PETSC ERROR: #17 KSPSolve_Private() at /autofs/nccs-svm1_home1/vanellam/Software/petsc/src/ksp/ksp/interface/itfunc.c:826 [0]PETSC ERROR: #18 KSPSolve() at /autofs/nccs-svm1_home1/vanellam/Software/petsc/src/ksp/ksp/interface/itfunc.c:1073 MPICH ERROR [Rank 0] [job id 2109802.0] [Fri Jul 19 15:31:18 2024] [frontier03726] - 
Abort(59) (rank 0 in comm 0): application called MPI_Abort(MPI_COMM_WORLD, 59) - process 0 I setup PETSc with gnu compilers like this in Frontier: ./configure COPTFLAGS="-O2" CXXOPTFLAGS="-O2" FOPTFLAGS="-O2" FCOPTFLAGS="-O2" HIPOPTFLAGS="-O2 --offload-arch=gfx90a" --with-debugging=1 --with-cc=cc --with-cxx=CC --with-fc=ftn --with-hip --with-hip-arch=gfx908 --with-hipc=hipcc --LIBS="-L${MPICH_DIR}/lib -lmpi ${CRAY_XPMEM_POST_LINK_OPTS} -lxpmem ${PE_MPICH_GTL_DIR_amd_gfx90a} ${PE_MPICH_GTL_LIBS_amd_gfx90a}" --download-kokkos --download-kokkos-kernels --download-hypre --download-suitesparse --download-cmake --force Have you guys come across this before? Thank you for your time, Marcos ________________________________ From: Vanella, Marcos (Fed) Sent: Friday, July 19, 2024 12:54 PM To: Barry Smith Cc: petsc-users at mcs.anl.gov ; Patel, Saumil Sudhir Subject: Re: [petsc-users] compilation error with latest petsc source Thank you Barry! We'll address the change accordingly. M ________________________________ From: Barry Smith Sent: Friday, July 19, 2024 12:42 PM To: Vanella, Marcos (Fed) Cc: petsc-users at mcs.anl.gov Subject: Re: [petsc-users] compilation error with latest petsc source We made some superficial changes to the Fortran API to better support Fortran and its error checking. See the bottom of https://urldefense.us/v3/__https://petsc.org/main/changes/dev/__;!!G_uCfscf7eWS!c9ba-qn-H5Nd7u-gSlKeT4yx78fRpyYpoTFvBgzvaq6SJkJPQ0MzGeR_DjeB4_rn-1VhQT1FBxHQNruqNJ4dGIXVW9CAoatv$ Basically, you have to respect Fortran's pickiness about passing the correct dimension (or lack of dimension) of arguments. In the error below, you need to pass PETSC_NULL_INTEGER_ARRAY On Jul 19, 2024, at 12:20?PM, Vanella, Marcos (Fed) via petsc-users wrote: This Message Is From an External Sender This message came from outside your organization. Hi, I did an update and compiled PETSc in Frontier with gnu compilers. When compiling my code with PETSc I see this new error pop up: Building mpich_gnu_frontier ftn -c -m64 -O2 -g -std=f2018 -frecursive -ffpe-summary=none -fall-intrinsics -cpp -DGITHASH_PP=\"FDS-6.9.1-894-g0b77ae0-FireX\" -DGITDATE_PP=\""Thu Jul 11 16:05:44 2024 -0400\"" -DBUILDDATE_PP=\""Jul 19, 2024 12:13:39\"" -DWITH_PETSC -I"/autofs/nccs-svm1_home1/vanellam/Software/petsc/include/" -I"/autofs/nccs-svm1_home1/vanellam/Software/petsc/arch-linux-frontier-opt-gcc2/include" -fopenmp ../../Source/pres.f90 ../../Source/pres.f90:2799:65: 2799 | CALL MATCREATESEQAIJ(PETSC_COMM_SELF,ZM%NUNKH,ZM%NUNKH,NNZ_7PT_H,PETSC_NULL_INTEGER,ZM%PETSC_MZ%A_H,PETSC_IERR) | 1 Error: Rank mismatch in argument ?e? at (1) (rank-1 and scalar) It seems the use of PETSC_NULL_INTEGER is causing an issue now. From the PETSc docs this entry is nnz which can be an array or NULL. Has there been any change on the API for this routine? Thanks, Marcos PS: I see some other erros in calls to PETSc routines, same type. -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at petsc.dev Fri Jul 19 18:58:47 2024 From: bsmith at petsc.dev (Barry Smith) Date: Fri, 19 Jul 2024 19:58:47 -0400 Subject: [petsc-users] compilation error with latest petsc source In-Reply-To: References: <75693A33-1C2B-4CB3-ACC3-C13EE4EAC838@petsc.dev> Message-ID: It is unlikely, though, of course, possible that the problem comes from the Fortran code. Is there any way to ./configure/build the code in the same way on another system that is easier to debug for? Or with less options on Frontier? 
(For example without the optimization flags and the extra -lxpmem etc?) and see if it still crashes in the same way? Frontier is very flaky. Barry > On Jul 19, 2024, at 3:37?PM, Vanella, Marcos (Fed) wrote: > > Hi Barry, with the changes in place for my fortran calls I'm now picking up the following error running PC + gamg preconditioner and mpiaijkokkos, kokkos vec: > > [0]PETSC ERROR: ------------------------------------------------------------------------ > [0]PETSC ERROR: Caught signal number 7 BUS: Bus Error, possibly illegal memory access > [0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger > [0]PETSC ERROR: or see https://urldefense.us/v3/__https://petsc.org/release/faq/*valgrind__;Iw!!G_uCfscf7eWS!eN32mKT-DWEElShhgK8OZIc3vOOsf_Mdz0zToUJkoNBD1nYonVt8s8ERpHIlut7cq7wLaBIV8INywSoF5L-ds58$ and https://urldefense.us/v3/__https://petsc.org/release/faq/__;!!G_uCfscf7eWS!eN32mKT-DWEElShhgK8OZIc3vOOsf_Mdz0zToUJkoNBD1nYonVt8s8ERpHIlut7cq7wLaBIV8INywSoFnOnytYM$ > [0]PETSC ERROR: --------------------- Stack Frames ------------------------------------ > [0]PETSC ERROR: The line numbers in the error traceback are not always exact. > [0]PETSC ERROR: #1 MPI function > [0]PETSC ERROR: #2 PetscSFLinkFinishCommunication_Default() at /autofs/nccs-svm1_home1/vanellam/Software/petsc/src/vec/is/sf/impls/basic/sfmpi.c:13 > [0]PETSC ERROR: #3 PetscSFLinkFinishCommunication() at /autofs/nccs-svm1_home1/vanellam/Software/petsc/include/../src/vec/is/sf/impls/basic/sfpack.h:291 > [0]PETSC ERROR: #4 PetscSFBcastEnd_Basic() at /autofs/nccs-svm1_home1/vanellam/Software/petsc/src/vec/is/sf/impls/basic/sfbasic.c:373 > [0]PETSC ERROR: #5 PetscSFBcastEnd() at /autofs/nccs-svm1_home1/vanellam/Software/petsc/src/vec/is/sf/interface/sf.c:1540 > [0]PETSC ERROR: #6 VecScatterEnd_Internal() at /autofs/nccs-svm1_home1/vanellam/Software/petsc/src/vec/is/sf/interface/vscat.c:95 > [0]PETSC ERROR: #7 VecScatterEnd() at /autofs/nccs-svm1_home1/vanellam/Software/petsc/src/vec/is/sf/interface/vscat.c:1352 > [0]PETSC ERROR: #8 MatDiagonalScale_MPIAIJ() at /autofs/nccs-svm1_home1/vanellam/Software/petsc/src/mat/impls/aij/mpi/mpiaij.c:1990 > [0]PETSC ERROR: #9 MatDiagonalScale() at /autofs/nccs-svm1_home1/vanellam/Software/petsc/src/mat/interface/matrix.c:5691 > [0]PETSC ERROR: #10 MatCreateGraph_Simple_AIJ() at /autofs/nccs-svm1_home1/vanellam/Software/petsc/src/mat/impls/aij/mpi/mpiaij.c:8026 > [0]PETSC ERROR: #11 MatCreateGraph() at /autofs/nccs-svm1_home1/vanellam/Software/petsc/src/mat/interface/matrix.c:11426 > [0]PETSC ERROR: #12 PCGAMGCreateGraph_AGG() at /autofs/nccs-svm1_home1/vanellam/Software/petsc/src/ksp/pc/impls/gamg/agg.c:663 > [0]PETSC ERROR: #13 PCGAMGCreateGraph() at /autofs/nccs-svm1_home1/vanellam/Software/petsc/src/ksp/pc/impls/gamg/gamg.c:2041 > [0]PETSC ERROR: #14 PCSetUp_GAMG() at /autofs/nccs-svm1_home1/vanellam/Software/petsc/src/ksp/pc/impls/gamg/gamg.c:695 > [0]PETSC ERROR: #15 PCSetUp() at /autofs/nccs-svm1_home1/vanellam/Software/petsc/src/ksp/pc/interface/precon.c:1077 > [0]PETSC ERROR: #16 KSPSetUp() at /autofs/nccs-svm1_home1/vanellam/Software/petsc/src/ksp/ksp/interface/itfunc.c:415 > [0]PETSC ERROR: #17 KSPSolve_Private() at /autofs/nccs-svm1_home1/vanellam/Software/petsc/src/ksp/ksp/interface/itfunc.c:826 > [0]PETSC ERROR: #18 KSPSolve() at /autofs/nccs-svm1_home1/vanellam/Software/petsc/src/ksp/ksp/interface/itfunc.c:1073 > MPICH ERROR [Rank 0] [job id 2109802.0] [Fri Jul 19 15:31:18 2024] [frontier03726] - Abort(59) (rank 0 in comm 0): application called 
MPI_Abort(MPI_COMM_WORLD, 59) - process 0 > > I setup PETSc with gnu compilers like this in Frontier: > ./configure COPTFLAGS="-O2" CXXOPTFLAGS="-O2" FOPTFLAGS="-O2" FCOPTFLAGS="-O2" HIPOPTFLAGS="-O2 --offload-arch=gfx90a" --with-debugging=1 --with-cc=cc --with-cxx=CC --with-fc=ftn --with-hip --with-hip-arch=gfx908 --with-hipc=hipcc --LIBS="-L${MPICH_DIR}/lib -lmpi ${CRAY_XPMEM_POST_LINK_OPTS} -lxpmem ${PE_MPICH_GTL_DIR_amd_gfx90a} ${PE_MPICH_GTL_LIBS_amd_gfx90a}" --download-kokkos --download-kokkos-kernels --download-hypre --download-suitesparse --download-cmake --force > > Have you guys come across this before? Thank you for your time, > Marcos > > From: Vanella, Marcos (Fed) > > Sent: Friday, July 19, 2024 12:54 PM > To: Barry Smith > > Cc: petsc-users at mcs.anl.gov >; Patel, Saumil Sudhir > > Subject: Re: [petsc-users] compilation error with latest petsc source > > Thank you Barry! We'll address the change accordingly. > M > From: Barry Smith > > Sent: Friday, July 19, 2024 12:42 PM > To: Vanella, Marcos (Fed) > > Cc: petsc-users at mcs.anl.gov > > Subject: Re: [petsc-users] compilation error with latest petsc source > > > We made some superficial changes to the Fortran API to better support Fortran and its error checking. See the bottom of https://urldefense.us/v3/__https://petsc.org/main/changes/dev/__;!!G_uCfscf7eWS!eN32mKT-DWEElShhgK8OZIc3vOOsf_Mdz0zToUJkoNBD1nYonVt8s8ERpHIlut7cq7wLaBIV8INywSoFtLbsQb4$ > > Basically, you have to respect Fortran's pickiness about passing the correct dimension (or lack of dimension) of arguments. In the error below, you need to pass PETSC_NULL_INTEGER_ARRAY > > > > > >> On Jul 19, 2024, at 12:20?PM, Vanella, Marcos (Fed) via petsc-users > wrote: >> >> This Message Is From an External Sender >> This message came from outside your organization. >> Hi, I did an update and compiled PETSc in Frontier with gnu compilers. When compiling my code with PETSc I see this new error pop up: >> >> Building mpich_gnu_frontier >> ftn -c -m64 -O2 -g -std=f2018 -frecursive -ffpe-summary=none -fall-intrinsics -cpp -DGITHASH_PP=\"FDS-6.9.1-894-g0b77ae0-FireX\" -DGITDATE_PP=\""Thu Jul 11 16:05:44 2024 -0400\"" -DBUILDDATE_PP=\""Jul 19, 2024 12:13:39\"" -DWITH_PETSC -I"/autofs/nccs-svm1_home1/vanellam/Software/petsc/include/" -I"/autofs/nccs-svm1_home1/vanellam/Software/petsc/arch-linux-frontier-opt-gcc2/include" -fopenmp ../../Source/pres.f90 >> ../../Source/pres.f90:2799:65: >> >> 2799 | CALL MATCREATESEQAIJ(PETSC_COMM_SELF,ZM%NUNKH,ZM%NUNKH,NNZ_7PT_H,PETSC_NULL_INTEGER,ZM%PETSC_MZ%A_H,PETSC_IERR) >> | 1 >> Error: Rank mismatch in argument ?e? at (1) (rank-1 and scalar) >> >> It seems the use of PETSC_NULL_INTEGER is causing an issue now. From the PETSc docs this entry is nnz which can be an array or NULL. Has there been any change on the API for this routine? >> >> Thanks, >> Marcos >> >> PS: I see some other erros in calls to PETSc routines, same type. -------------- next part -------------- An HTML attachment was scrubbed... URL: From marcos.vanella at nist.gov Fri Jul 19 19:12:42 2024 From: marcos.vanella at nist.gov (Vanella, Marcos (Fed)) Date: Sat, 20 Jul 2024 00:12:42 +0000 Subject: [petsc-users] compilation error with latest petsc source In-Reply-To: References: <75693A33-1C2B-4CB3-ACC3-C13EE4EAC838@petsc.dev> Message-ID: I can try on other systems. I'll get back to you on this. Running in the CPU with mpiaij and vec mpi works correctly in Frontier. 
Thank you, M ________________________________ From: Barry Smith Sent: Friday, July 19, 2024 7:58 PM To: Vanella, Marcos (Fed) Cc: petsc-users at mcs.anl.gov ; Patel, Saumil Sudhir Subject: Re: [petsc-users] compilation error with latest petsc source It is unlikely, though, of course, possible that the problem comes from the Fortran code. Is there any way to ./configure/build the code in the same way on another system that is easier to debug for? Or with less options on Frontier? (For example without the optimization flags and the extra -lxpmem etc?) and see if it still crashes in the same way? Frontier is very flaky. Barry On Jul 19, 2024, at 3:37?PM, Vanella, Marcos (Fed) wrote: Hi Barry, with the changes in place for my fortran calls I'm now picking up the following error running PC + gamg preconditioner and mpiaijkokkos, kokkos vec: [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: Caught signal number 7 BUS: Bus Error, possibly illegal memory access [0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger [0]PETSC ERROR: or see https://urldefense.us/v3/__https://petsc.org/release/faq/*valgrind__;Iw!!G_uCfscf7eWS!daJnYwttia2Bg9KKsNib1kyA1jOn2_4XG0YWVvYP72NswC9nJAvmCg63dDlMaA8hNmxlpr1kvCTSahSxkF6EdZAW5_Fg4KOp$ and https://urldefense.us/v3/__https://petsc.org/release/faq/__;!!G_uCfscf7eWS!daJnYwttia2Bg9KKsNib1kyA1jOn2_4XG0YWVvYP72NswC9nJAvmCg63dDlMaA8hNmxlpr1kvCTSahSxkF6EdZAW5_q4WhLE$ [0]PETSC ERROR: --------------------- Stack Frames ------------------------------------ [0]PETSC ERROR: The line numbers in the error traceback are not always exact. [0]PETSC ERROR: #1 MPI function [0]PETSC ERROR: #2 PetscSFLinkFinishCommunication_Default() at /autofs/nccs-svm1_home1/vanellam/Software/petsc/src/vec/is/sf/impls/basic/sfmpi.c:13 [0]PETSC ERROR: #3 PetscSFLinkFinishCommunication() at /autofs/nccs-svm1_home1/vanellam/Software/petsc/include/../src/vec/is/sf/impls/basic/sfpack.h:291 [0]PETSC ERROR: #4 PetscSFBcastEnd_Basic() at /autofs/nccs-svm1_home1/vanellam/Software/petsc/src/vec/is/sf/impls/basic/sfbasic.c:373 [0]PETSC ERROR: #5 PetscSFBcastEnd() at /autofs/nccs-svm1_home1/vanellam/Software/petsc/src/vec/is/sf/interface/sf.c:1540 [0]PETSC ERROR: #6 VecScatterEnd_Internal() at /autofs/nccs-svm1_home1/vanellam/Software/petsc/src/vec/is/sf/interface/vscat.c:95 [0]PETSC ERROR: #7 VecScatterEnd() at /autofs/nccs-svm1_home1/vanellam/Software/petsc/src/vec/is/sf/interface/vscat.c:1352 [0]PETSC ERROR: #8 MatDiagonalScale_MPIAIJ() at /autofs/nccs-svm1_home1/vanellam/Software/petsc/src/mat/impls/aij/mpi/mpiaij.c:1990 [0]PETSC ERROR: #9 MatDiagonalScale() at /autofs/nccs-svm1_home1/vanellam/Software/petsc/src/mat/interface/matrix.c:5691 [0]PETSC ERROR: #10 MatCreateGraph_Simple_AIJ() at /autofs/nccs-svm1_home1/vanellam/Software/petsc/src/mat/impls/aij/mpi/mpiaij.c:8026 [0]PETSC ERROR: #11 MatCreateGraph() at /autofs/nccs-svm1_home1/vanellam/Software/petsc/src/mat/interface/matrix.c:11426 [0]PETSC ERROR: #12 PCGAMGCreateGraph_AGG() at /autofs/nccs-svm1_home1/vanellam/Software/petsc/src/ksp/pc/impls/gamg/agg.c:663 [0]PETSC ERROR: #13 PCGAMGCreateGraph() at /autofs/nccs-svm1_home1/vanellam/Software/petsc/src/ksp/pc/impls/gamg/gamg.c:2041 [0]PETSC ERROR: #14 PCSetUp_GAMG() at /autofs/nccs-svm1_home1/vanellam/Software/petsc/src/ksp/pc/impls/gamg/gamg.c:695 [0]PETSC ERROR: #15 PCSetUp() at /autofs/nccs-svm1_home1/vanellam/Software/petsc/src/ksp/pc/interface/precon.c:1077 [0]PETSC ERROR: #16 KSPSetUp() at 
/autofs/nccs-svm1_home1/vanellam/Software/petsc/src/ksp/ksp/interface/itfunc.c:415 [0]PETSC ERROR: #17 KSPSolve_Private() at /autofs/nccs-svm1_home1/vanellam/Software/petsc/src/ksp/ksp/interface/itfunc.c:826 [0]PETSC ERROR: #18 KSPSolve() at /autofs/nccs-svm1_home1/vanellam/Software/petsc/src/ksp/ksp/interface/itfunc.c:1073 MPICH ERROR [Rank 0] [job id 2109802.0] [Fri Jul 19 15:31:18 2024] [frontier03726] - Abort(59) (rank 0 in comm 0): application called MPI_Abort(MPI_COMM_WORLD, 59) - process 0 I setup PETSc with gnu compilers like this in Frontier: ./configure COPTFLAGS="-O2" CXXOPTFLAGS="-O2" FOPTFLAGS="-O2" FCOPTFLAGS="-O2" HIPOPTFLAGS="-O2 --offload-arch=gfx90a" --with-debugging=1 --with-cc=cc --with-cxx=CC --with-fc=ftn --with-hip --with-hip-arch=gfx908 --with-hipc=hipcc --LIBS="-L${MPICH_DIR}/lib -lmpi ${CRAY_XPMEM_POST_LINK_OPTS} -lxpmem ${PE_MPICH_GTL_DIR_amd_gfx90a} ${PE_MPICH_GTL_LIBS_amd_gfx90a}" --download-kokkos --download-kokkos-kernels --download-hypre --download-suitesparse --download-cmake --force Have you guys come across this before? Thank you for your time, Marcos ________________________________ From: Vanella, Marcos (Fed) > Sent: Friday, July 19, 2024 12:54 PM To: Barry Smith > Cc: petsc-users at mcs.anl.gov >; Patel, Saumil Sudhir > Subject: Re: [petsc-users] compilation error with latest petsc source Thank you Barry! We'll address the change accordingly. M ________________________________ From: Barry Smith > Sent: Friday, July 19, 2024 12:42 PM To: Vanella, Marcos (Fed) > Cc: petsc-users at mcs.anl.gov > Subject: Re: [petsc-users] compilation error with latest petsc source We made some superficial changes to the Fortran API to better support Fortran and its error checking. See the bottom of https://urldefense.us/v3/__https://petsc.org/main/changes/dev/__;!!G_uCfscf7eWS!daJnYwttia2Bg9KKsNib1kyA1jOn2_4XG0YWVvYP72NswC9nJAvmCg63dDlMaA8hNmxlpr1kvCTSahSxkF6EdZAW52HSty8Q$ Basically, you have to respect Fortran's pickiness about passing the correct dimension (or lack of dimension) of arguments. In the error below, you need to pass PETSC_NULL_INTEGER_ARRAY On Jul 19, 2024, at 12:20?PM, Vanella, Marcos (Fed) via petsc-users > wrote: This Message Is From an External Sender This message came from outside your organization. Hi, I did an update and compiled PETSc in Frontier with gnu compilers. When compiling my code with PETSc I see this new error pop up: Building mpich_gnu_frontier ftn -c -m64 -O2 -g -std=f2018 -frecursive -ffpe-summary=none -fall-intrinsics -cpp -DGITHASH_PP=\"FDS-6.9.1-894-g0b77ae0-FireX\" -DGITDATE_PP=\""Thu Jul 11 16:05:44 2024 -0400\"" -DBUILDDATE_PP=\""Jul 19, 2024 12:13:39\"" -DWITH_PETSC -I"/autofs/nccs-svm1_home1/vanellam/Software/petsc/include/" -I"/autofs/nccs-svm1_home1/vanellam/Software/petsc/arch-linux-frontier-opt-gcc2/include" -fopenmp ../../Source/pres.f90 ../../Source/pres.f90:2799:65: 2799 | CALL MATCREATESEQAIJ(PETSC_COMM_SELF,ZM%NUNKH,ZM%NUNKH,NNZ_7PT_H,PETSC_NULL_INTEGER,ZM%PETSC_MZ%A_H,PETSC_IERR) | 1 Error: Rank mismatch in argument ?e? at (1) (rank-1 and scalar) It seems the use of PETSC_NULL_INTEGER is causing an issue now. From the PETSc docs this entry is nnz which can be an array or NULL. Has there been any change on the API for this routine? Thanks, Marcos PS: I see some other erros in calls to PETSc routines, same type. -------------- next part -------------- An HTML attachment was scrubbed... 
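[For reference, a minimal sketch of the Fortran change Barry describes above: wherever an argument is declared as an array (here the optional per-row nonzero counts of MatCreateSeqAIJ), the scalar null PETSC_NULL_INTEGER has to be replaced by PETSC_NULL_INTEGER_ARRAY. The variable names are copied from Marcos's snippet and stand in for whatever the application actually uses.

! rejected by the stricter Fortran interfaces: scalar null passed for an array argument
CALL MATCREATESEQAIJ(PETSC_COMM_SELF, ZM%NUNKH, ZM%NUNKH, NNZ_7PT_H, PETSC_NULL_INTEGER, ZM%PETSC_MZ%A_H, PETSC_IERR)

! accepted: the array-valued null for the optional nnz argument
CALL MATCREATESEQAIJ(PETSC_COMM_SELF, ZM%NUNKH, ZM%NUNKH, NNZ_7PT_H, PETSC_NULL_INTEGER_ARRAY, ZM%PETSC_MZ%A_H, PETSC_IERR)

The same substitution applies to the other calls that report the rank-mismatch error.]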
URL: From ivanluthfi5 at gmail.com Mon Jul 22 03:25:54 2024 From: ivanluthfi5 at gmail.com (Ivan Luthfi) Date: Mon, 22 Jul 2024 16:25:54 +0800 Subject: [petsc-users] configure petsc with X windows Message-ID: Hi there, I want to configure PETSc with X windows, but i get this notification. Unable to find x in default locations! Perhaps you can specify with --with-x-dir= If you do not want X, then give --with-x=0 I already find my X11 path location and put it to --with-x-dir= X11 path.. But it fail. Do you guys have solution? -- Best regards, Ivan Luthfi Ihwani -------------- next part -------------- An HTML attachment was scrubbed... URL: From konstantin.murusidze at math.msu.ru Mon Jul 22 06:37:08 2024 From: konstantin.murusidze at math.msu.ru (=?utf-8?B?0JrQvtC90YHRgtCw0L3RgtC40L0g0JzRg9GA0YPRgdC40LTQt9C1?=) Date: Mon, 22 Jul 2024 14:37:08 +0300 Subject: [petsc-users] (no subject) Message-ID: <441311721646454@mail.yandex.ru> An HTML attachment was scrubbed... URL: From balay.anl at fastmail.org Mon Jul 22 09:16:44 2024 From: balay.anl at fastmail.org (Satish Balay) Date: Mon, 22 Jul 2024 09:16:44 -0500 (CDT) Subject: [petsc-users] configure petsc with X windows In-Reply-To: References: Message-ID: An HTML attachment was scrubbed... URL: From balay.anl at fastmail.org Mon Jul 22 09:18:28 2024 From: balay.anl at fastmail.org (Satish Balay) Date: Mon, 22 Jul 2024 09:18:28 -0500 (CDT) Subject: [petsc-users] configure petsc with X windows In-Reply-To: References: Message-ID: An HTML attachment was scrubbed... URL: From bsmith at petsc.dev Mon Jul 22 09:22:03 2024 From: bsmith at petsc.dev (Barry Smith) Date: Mon, 22 Jul 2024 10:22:03 -0400 Subject: [petsc-users] (no subject) In-Reply-To: <441311721646454@mail.yandex.ru> References: <441311721646454@mail.yandex.ru> Message-ID: Run with -ksp_monitor_true_residual -ksp_converged_reason -ksp_view to see why it is stopping at 38 iterations. Barry > On Jul 22, 2024, at 7:37?AM, ?????????? ????????? wrote: > > This Message Is From an External Sender > This message came from outside your organization. > Good afternoon. I am a student at the Faculty of Mathematics and for my course work I need to solve SLAE with a relative accuracy of 1e-8 or more. To do this, I created the function PetscCall(KSPSetTolerances(ksp, 1.e-8, PETSC_DEFAULT, PETSC_DEFAULT, 100000));. But in the end, only 38 iterations were made and the relative norm ||Ax-b||/||b|| it turns out 4.54011. If you reply to my email, I can give you more information about the solver settings. -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthew.thomas1 at anu.edu.au Mon Jul 22 20:18:36 2024 From: matthew.thomas1 at anu.edu.au (Matthew Thomas) Date: Tue, 23 Jul 2024 01:18:36 +0000 Subject: [petsc-users] Memory usage scaling with number of processors Message-ID: <757672CD-521F-4CA1-A5E0-913D3BB2A0E3@anu.edu.au> An HTML attachment was scrubbed... URL: From bsmith at petsc.dev Mon Jul 22 21:27:08 2024 From: bsmith at petsc.dev (Barry Smith) Date: Mon, 22 Jul 2024 22:27:08 -0400 Subject: [petsc-users] Memory usage scaling with number of processors In-Reply-To: <757672CD-521F-4CA1-A5E0-913D3BB2A0E3@anu.edu.au> References: <757672CD-521F-4CA1-A5E0-913D3BB2A0E3@anu.edu.au> Message-ID: <0105FD9A-8430-4C77-A5A3-B3E8B94F5385@petsc.dev> Send the code. > On Jul 22, 2024, at 9:18?PM, Matthew Thomas via petsc-users wrote: > > This Message Is From an External Sender > This message came from outside your organization. 
> Hello, > > I am using petsc and slepc to solve an eigenvalue problem for sparse matrices. When I run my code with double the number of processors, the memory usage also doubles. > > I am able to reproduce this behaviour with ex1 of slepc?s hands on exercises. > > The issue is occurring with petsc not with slepc as this still occurs when I remove the solve step and just create and assemble the petsc matrix. > > With n=100000, this uses ~1Gb with 8 processors, but ~5Gb with 40 processors. > > This was done with petsc 3.21.3, on linux compiled with Intel using Intel-MPI > > Is this the expected behaviour? If not, how can I bug fix this? > > > Thanks, > Matt -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Tue Jul 23 06:24:10 2024 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 23 Jul 2024 07:24:10 -0400 Subject: [petsc-users] Memory usage scaling with number of processors In-Reply-To: <0105FD9A-8430-4C77-A5A3-B3E8B94F5385@petsc.dev> References: <757672CD-521F-4CA1-A5E0-913D3BB2A0E3@anu.edu.au> <0105FD9A-8430-4C77-A5A3-B3E8B94F5385@petsc.dev> Message-ID: Also, you could run with -mat_view ::ascii_info_detail and send the output for both cases. The storage of matrix values is not redundant, so something else is going on. First, what communicator do you use for the matrix, and what partitioning? Thanks, Matt On Mon, Jul 22, 2024 at 10:27?PM Barry Smith wrote: > Send the code. On Jul 22, 2024, at 9: 18 PM, Matthew Thomas via > petsc-users wrote: ? ? ? ? ? ? ? ? ? ? ? ? ? > ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? > ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? > ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? > ZjQcmQRYFpfptBannerStart > This Message Is From an External Sender > This message came from outside your organization. > > ZjQcmQRYFpfptBannerEnd > > Send the code. > > On Jul 22, 2024, at 9:18?PM, Matthew Thomas via petsc-users < > petsc-users at mcs.anl.gov> wrote: > > This Message Is From an External Sender > This message came from outside your organization. > > Hello, > > I am using petsc and slepc to solve an eigenvalue problem for sparse matrices. When I run my code with double the number of processors, the memory usage also doubles. > > I am able to reproduce this behaviour with ex1 of slepc?s hands on exercises. > > The issue is occurring with petsc not with slepc as this still occurs when I remove the solve step and just create and assemble the petsc matrix. > > With n=100000, this uses ~1Gb with 8 processors, but ~5Gb with 40 processors. > > This was done with petsc 3.21.3, on linux compiled with Intel using Intel-MPI > > Is this the expected behaviour? If not, how can I bug fix this? > > > Thanks, > Matt > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!bBuFOzIbQmePGNYKEiglz1pFB-m95_3tE7Dv1DS5LMTtblIFQltGEJC3V0Vyw3OVtQGdNMEF7g-pCek2kf6P$ -------------- next part -------------- An HTML attachment was scrubbed... 
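[For completeness, a sketch of how the requested output, together with PETSc's own memory accounting, could be collected for the two runs; the executable name and problem size follow the earlier messages and are otherwise placeholders:

mpiexec -n 8  ./ex1 -n 100000 -mat_view ::ascii_info_detail -log_view -memory_view
mpiexec -n 40 ./ex1 -n 100000 -mat_view ::ascii_info_detail -log_view -memory_view

-memory_view prints a summary of process memory and PETSc-allocated memory at PetscFinalize, which helps separate PETSc allocations from memory held elsewhere, for example by the MPI library itself.]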
URL: From matthew.thomas1 at anu.edu.au Mon Jul 22 22:32:51 2024 From: matthew.thomas1 at anu.edu.au (Matthew Thomas) Date: Tue, 23 Jul 2024 03:32:51 +0000 Subject: [petsc-users] Memory usage scaling with number of processors In-Reply-To: <0105FD9A-8430-4C77-A5A3-B3E8B94F5385@petsc.dev> References: <757672CD-521F-4CA1-A5E0-913D3BB2A0E3@anu.edu.au> <0105FD9A-8430-4C77-A5A3-B3E8B94F5385@petsc.dev> Message-ID: <19738382-1080-4E2B-A202-3FEA1C4151A5@anu.edu.au> Hi Barry, The minimal example is shown below. #include int main(int argc,char **argv) { Mat A; /* problem matrix */ PetscInt n=100000,i,Istart,Iend; PetscFunctionBeginUser; PetscCall(SlepcInitialize(&argc,&argv,(char*)0,help)); PetscCall(PetscOptionsGetInt(NULL,NULL,"-n",&n,NULL)); PetscCall(MatCreate(PETSC_COMM_WORLD,&A)); PetscCall(MatSetSizes(A,PETSC_DECIDE,PETSC_DECIDE,n,n)); PetscCall(MatSetFromOptions(A)); PetscCall(MatGetOwnershipRange(A,&Istart,&Iend)); for (i=Istart;i0) PetscCall(MatSetValue(A,i,i-1,-1.0,INSERT_VALUES)); if (i wrote: You don't often get email from bsmith at petsc.dev. Learn why this is important Send the code. On Jul 22, 2024, at 9:18?PM, Matthew Thomas via petsc-users wrote: This Message Is From an External Sender This message came from outside your organization. Hello, I am using petsc and slepc to solve an eigenvalue problem for sparse matrices. When I run my code with double the number of processors, the memory usage also doubles. I am able to reproduce this behaviour with ex1 of slepc?s hands on exercises. The issue is occurring with petsc not with slepc as this still occurs when I remove the solve step and just create and assemble the petsc matrix. With n=100000, this uses ~1Gb with 8 processors, but ~5Gb with 40 processors. This was done with petsc 3.21.3, on linux compiled with Intel using Intel-MPI Is this the expected behaviour? If not, how can I bug fix this? Thanks, Matt -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthew.thomas1 at anu.edu.au Tue Jul 23 19:02:33 2024 From: matthew.thomas1 at anu.edu.au (Matthew Thomas) Date: Wed, 24 Jul 2024 00:02:33 +0000 Subject: [petsc-users] Memory usage scaling with number of processors In-Reply-To: References: <757672CD-521F-4CA1-A5E0-913D3BB2A0E3@anu.edu.au> <0105FD9A-8430-4C77-A5A3-B3E8B94F5385@petsc.dev> Message-ID: Hello Matt, I have attached the output with mat_view for 8 and 40 processors. I am unsure what is meant by the matrix communicator and the partitioning. I am using the default behaviour in every case. How can I find this information? I have attached the log view as well if that helps. Thanks, Matt On 23 Jul 2024, at 9:24?PM, Matthew Knepley wrote: You don't often get email from knepley at gmail.com. Learn why this is important Also, you could run with -mat_view ::ascii_info_detail and send the output for both cases. The storage of matrix values is not redundant, so something else is going on. First, what communicator do you use for the matrix, and what partitioning? Thanks, Matt On Mon, Jul 22, 2024 at 10:27?PM Barry Smith > wrote: This Message Is From an External Sender This message came from outside your organization. Send the code. On Jul 22, 2024, at 9:18?PM, Matthew Thomas via petsc-users > wrote: This Message Is From an External Sender This message came from outside your organization. Hello, I am using petsc and slepc to solve an eigenvalue problem for sparse matrices. When I run my code with double the number of processors, the memory usage also doubles. 
I am able to reproduce this behaviour with ex1 of slepc?s hands on exercises. The issue is occurring with petsc not with slepc as this still occurs when I remove the solve step and just create and assemble the petsc matrix. With n=100000, this uses ~1Gb with 8 processors, but ~5Gb with 40 processors. This was done with petsc 3.21.3, on linux compiled with Intel using Intel-MPI Is this the expected behaviour? If not, how can I bug fix this? Thanks, Matt -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!defNF55JDHADXFMCPrlWVnASGb8l1sxXg5-10IVx4Ff5FFmO2N003z0BQ80cCU3clrwdPmEGeMWVUhdzckDhFG0VKlPduQ6gjvc$ -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: mat_view_8.txt URL: -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: mat_view_40.txt URL: -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: log_view.txt URL: From yongzhong.li at mail.utoronto.ca Tue Jul 23 21:04:06 2024 From: yongzhong.li at mail.utoronto.ca (Yongzhong Li) Date: Wed, 24 Jul 2024 02:04:06 +0000 Subject: [petsc-users] MKL installation can't be used to configure PETSc Message-ID: Dear PETSc developers, Recently, when I configured the PETSc with blas/lapack provided by Intel MKL, I got the following error message, [ 1%] Performing configure step for 'external_petsc' ============================================================================================= Configuring PETSc to compile on your system ============================================================================================= ============================================================================================= ***** WARNING ***** Found environment variable: MAKEFLAGS=s. Ignoring it! Use "./configure MAKEFLAGS=$MAKEFLAGS" if you really want to use this value ============================================================================================= TESTING: checkLib from config.packages.BlasLapack(config/BuildSystem/config/packages/BlasLapack.py:114) ********************************************************************************************* UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log for details): --------------------------------------------------------------------------------------------- You set a value for --with-blaslapack-dir=, but /tool/pandora64/.package/oneMKL-2021.3/mkl cannot be used ********************************************************************************************* make[2]: *** [CMakeFiles/external_petsc.dir/build.make:92: external/builds/petsc-3.21.0/src/external_petsc-stamp/external_petsc-configure] Error 1 make[1]: *** [CMakeFiles/Makefile2:114: CMakeFiles/external_petsc.dir/all] Error 2 make: *** [Makefile:91: all] Error 2 However, I think I was using the correct MKL root address to configure PETSc. I have attached the configure.log file, could you help me look at where might be wrong? Thanks! 
Yongzhong ----------------------------------------------------------- Yongzhong Li PhD student | Electromagnetics Group Department of Electrical & Computer Engineering University of Toronto https://urldefense.us/v3/__http://www.modelics.org__;!!G_uCfscf7eWS!aa4LdUX0JV3i0ce3Jp20GLg521rd7L25vZSpFhCmlvTUN8EZFjfbk-IsfiTOD3IpCfXCB_zHA8nh1WMIkiHosbwZgv8PzmzPEec$ -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: configure.log Type: application/octet-stream Size: 1056342 bytes Desc: configure.log URL: From balay.anl at fastmail.org Tue Jul 23 21:43:38 2024 From: balay.anl at fastmail.org (Satish Balay) Date: Tue, 23 Jul 2024 21:43:38 -0500 (CDT) Subject: [petsc-users] MKL installation can't be used to configure PETSc In-Reply-To: References: Message-ID: An HTML attachment was scrubbed... URL: From knepley at gmail.com Wed Jul 24 05:41:04 2024 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 24 Jul 2024 06:41:04 -0400 Subject: [petsc-users] Memory usage scaling with number of processors In-Reply-To: References: <757672CD-521F-4CA1-A5E0-913D3BB2A0E3@anu.edu.au> <0105FD9A-8430-4C77-A5A3-B3E8B94F5385@petsc.dev> Message-ID: On Tue, Jul 23, 2024 at 8:02?PM Matthew Thomas wrote: > Hello Matt, > > I have attached the output with mat_view for 8 and 40 processors. > > I am unsure what is meant by the matrix communicator and the partitioning. > I am using the default behaviour in every case. How can I find this > information? > This shows that the matrix is taking the same amount of memory for 8 and 40 procs, so that is not your problem. Also, it is a very small amount of memory: 100K rows x 3 nz/row x 8 bytes/nz = 2.4 MB and 50% overhead for indexing, so something under 4MB. I am not sure what is taking up the rest of the memory, but I do not think it is PETSc from the log you included. Thanks, Matt > I have attached the log view as well if that helps. > > Thanks, > Matt > > > > > On 23 Jul 2024, at 9:24?PM, Matthew Knepley wrote: > > You don't often get email from knepley at gmail.com. Learn why this is > important > Also, you could run with > > -mat_view ::ascii_info_detail > > and send the output for both cases. The storage of matrix values is not > redundant, so something else is > going on. First, what communicator do you use for the matrix, and what > partitioning? > > Thanks, > > Matt > > On Mon, Jul 22, 2024 at 10:27?PM Barry Smith wrote: > > This Message Is From an External Sender > This message came from outside your organization. > > > Send the code. > > On Jul 22, 2024, at 9:18?PM, Matthew Thomas via petsc-users < > petsc-users at mcs.anl.gov> wrote: > > This Message Is From an External Sender > This message came from outside your organization. > > Hello, > > I am using petsc and slepc to solve an eigenvalue problem for sparse matrices. When I run my code with double the number of processors, the memory usage also doubles. > > I am able to reproduce this behaviour with ex1 of slepc?s hands on exercises. > > The issue is occurring with petsc not with slepc as this still occurs when I remove the solve step and just create and assemble the petsc matrix. > > With n=100000, this uses ~1Gb with 8 processors, but ~5Gb with 40 processors. > > This was done with petsc 3.21.3, on linux compiled with Intel using Intel-MPI > > Is this the expected behaviour? If not, how can I bug fix this? 
> > > Thanks, > Matt > > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!b_JFRb7MxmdPHCjjuC42vps0Cvkz5tuUTRRK-Yh20xdmpvEHr2guqznV0TGVXhEiNnXVEZeCCPSlW-rDI1i4$ > > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!b_JFRb7MxmdPHCjjuC42vps0Cvkz5tuUTRRK-Yh20xdmpvEHr2guqznV0TGVXhEiNnXVEZeCCPSlW-rDI1i4$ -------------- next part -------------- An HTML attachment was scrubbed... URL: From srvenkat at utexas.edu Wed Jul 24 13:44:56 2024 From: srvenkat at utexas.edu (Sreeram R Venkat) Date: Wed, 24 Jul 2024 11:44:56 -0700 Subject: [petsc-users] Dense Matrix Factorization/Solve Message-ID: I have an SPD dense matrix of size NxN, where N can range from 10^4-10^5. Are there any Cholesky factorization/solve routines for it in PETSc (or in any of the external libraries)? If possible, I want to use GPU acceleration with 1 or more GPUs. The matrix type can be MATSEQDENSE/MATMPIDENSE or MATSEQDENSECUDA/MATMPIDENSECUDA accordingly. If it is possible to do the factorization beforehand and store it to do the triangular solves later, that would be great. Thanks, Sreeram -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at petsc.dev Wed Jul 24 14:07:56 2024 From: bsmith at petsc.dev (Barry Smith) Date: Wed, 24 Jul 2024 15:07:56 -0400 Subject: [petsc-users] Dense Matrix Factorization/Solve In-Reply-To: References: Message-ID: <94067889-0D86-4FCD-A661-4AEE45B1E2DA@petsc.dev> For one MPI rank, it looks like you can use -pc_type cholesky -pc_factor_mat_solver_type cupm though it is not documented in https://urldefense.us/v3/__https://petsc.org/release/overview/linear_solve_table/*direct-solvers__;Iw!!G_uCfscf7eWS!YpFrRe8Wul8hbJjnWia9KlpTHLeU2HBIpo45YA5ZnmqISNTy0txGndaBOsORw3xw3Q0Uhvq0Bsb5eJhCKlCe9Bk$ Of if you also ./configure --download-kokkos --download-kokkos-kernels you can use -pc_factor_mat_solver_type kokkos if you also this may also work for multiple GPUs but that is not documented in the table either (Junchao) Nor are sparse Kokkos or CUDA stuff documented (if they exist) in the table. Barry > On Jul 24, 2024, at 2:44?PM, Sreeram R Venkat wrote: > > This Message Is From an External Sender > This message came from outside your organization. > I have an SPD dense matrix of size NxN, where N can range from 10^4-10^5. Are there any Cholesky factorization/solve routines for it in PETSc (or in any of the external libraries)? If possible, I want to use GPU acceleration with 1 or more GPUs. The matrix type can be MATSEQDENSE/MATMPIDENSE or MATSEQDENSECUDA/MATMPIDENSECUDA accordingly. If it is possible to do the factorization beforehand and store it to do the triangular solves later, that would be great. > > Thanks, > Sreeram -------------- next part -------------- An HTML attachment was scrubbed... 
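[To make Barry's suggestion concrete, a minimal single-rank sketch of setting this up programmatically. The "cupm" solver type is taken from his message and, as he notes, is undocumented, so treat it as tentative; N, b and x are placeholders, MATSEQDENSECUDA requires a CUDA-enabled PETSc build, and MATSEQDENSE can be substituted for a CPU-only run. With KSPPREONLY the Cholesky factorization is computed once at KSPSetUp and each later KSPSolve only performs the triangular solves, which matches the factor-once / solve-many use case.

Mat A;
KSP ksp;
PC  pc;

PetscCall(MatCreate(PETSC_COMM_SELF, &A));
PetscCall(MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, N, N));
PetscCall(MatSetType(A, MATSEQDENSECUDA));        /* dense matrix stored on the GPU */
PetscCall(MatSetUp(A));
/* ... fill A (SPD) with MatSetValues() or MatDenseGetArrayWrite(), then ... */
PetscCall(MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY));
PetscCall(MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY));

PetscCall(KSPCreate(PETSC_COMM_SELF, &ksp));
PetscCall(KSPSetOperators(ksp, A, A));
PetscCall(KSPSetType(ksp, KSPPREONLY));           /* no Krylov iteration, direct solve only */
PetscCall(KSPGetPC(ksp, &pc));
PetscCall(PCSetType(pc, PCCHOLESKY));
PetscCall(PCFactorSetMatSolverType(pc, "cupm"));  /* same as -pc_factor_mat_solver_type cupm */
PetscCall(KSPSetFromOptions(ksp));

PetscCall(KSPSetUp(ksp));                         /* Cholesky factorization happens here */
PetscCall(KSPSolve(ksp, b, x));                   /* later solves reuse the stored factor */]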
URL: From hongzhang at anl.gov Wed Jul 24 15:28:18 2024 From: hongzhang at anl.gov (Zhang, Hong) Date: Wed, 24 Jul 2024 20:28:18 +0000 Subject: [petsc-users] Questions on EIMEX: In-Reply-To: <8DF55AE6-9DC2-406E-861E-159155AAEC13@stonybrook.edu> References: <8DF55AE6-9DC2-406E-861E-159155AAEC13@stonybrook.edu> Message-ID: Hi Derek, Sorry for the late reply. 1. The W-IMEX is used as the base method in TSEIMEX. So TSEIMEX implements 2.4b. 2. Thank you for catching the mismatch. An MR has been submitted to fix this. Hong (Mr.) From: petsc-users on behalf of Derek Teaney via petsc-users Date: Wednesday, June 12, 2024 at 10:22 AM To: petsc-users at mcs.anl.gov Subject: [petsc-users] Questions on EIMEX: Dear All, I have a question and a comment on the TSEIMEX scheme in the TS routines. 1/ Looking at the cited reference, I see three schemes there 2.?4b, 2.?4c, 2.?4d . It is not clear which of these is being implemented.?2/ The documentation for ZjQcmQRYFpfptBannerStart This Message Is From an External Sender This message came from outside your organization. ZjQcmQRYFpfptBannerEnd Dear All, I have a question and a comment on the TSEIMEX scheme in the TS routines. 1/ Looking at the cited reference, I see three schemes there 2.4b, 2.4c, 2.4d . It is not clear which of these is being implemented. 2/ The documentation for EIMEX mixes up F(u, udot ) and G(u) relative to the users manual. This may have been done on purpose, to conform with the Constantinescu ref., but a perhaps a comment is in order. Thanks, Derek ------------------------------------------------------------------------ Derek Teaney Professor Dept. of Physics & Astronomy Stony Brook University Stony Brook, NY 11794-3800 Tel: (631) 632-4489 e-mail: Derek.Teaney at stonybrook.edu ------------------------------------------------------------------------ -------------- next part -------------- An HTML attachment was scrubbed... URL: From srvenkat at utexas.edu Wed Jul 24 16:33:49 2024 From: srvenkat at utexas.edu (Sreeram R Venkat) Date: Wed, 24 Jul 2024 14:33:49 -0700 Subject: [petsc-users] Dense Matrix Factorization/Solve In-Reply-To: <94067889-0D86-4FCD-A661-4AEE45B1E2DA@petsc.dev> References: <94067889-0D86-4FCD-A661-4AEE45B1E2DA@petsc.dev> Message-ID: Thanks for the suggestions; I will try them out. Dense factorization is used as the benchmark for Top500 right? That's why I thought there would be some state-of-the-art multi GPU dense linear solvers out there. I saw this library called cuSOLVERMp https://urldefense.us/v3/__https://docs.nvidia.com/cuda/cusolvermp/__;!!G_uCfscf7eWS!ebpPt6OKSu0Ua8y56LhYJM0ol0OAD-aZ4XGMbFPoIIzc0oNqKZryYg0uIRdhObPv7MOrgO1jJFieu5U2hVjtUcOPaA$ from NVIDIA. It looks somewhat difficult to integrate with other code, though. I also found this https://urldefense.us/v3/__https://github.com/nv-legate/cunumeric__;!!G_uCfscf7eWS!ebpPt6OKSu0Ua8y56LhYJM0ol0OAD-aZ4XGMbFPoIIzc0oNqKZryYg0uIRdhObPv7MOrgO1jJFieu5U2hVg0mdmrig$ from NVIDIA which shows some good results for multi GPU Cholesky, but I'm having some trouble getting it set up correctly. 
On Wed, Jul 24, 2024, 12:08?PM Barry Smith wrote: > > For one MPI rank, it looks like you can use -pc_type cholesky > -pc_factor_mat_solver_type cupm though it is not documented in > https://urldefense.us/v3/__https://petsc.org/release/overview/linear_solve_table/*direct-solvers__;Iw!!G_uCfscf7eWS!ebpPt6OKSu0Ua8y56LhYJM0ol0OAD-aZ4XGMbFPoIIzc0oNqKZryYg0uIRdhObPv7MOrgO1jJFieu5U2hVhs7ez3Mw$ > > Of if you also ./configure --download-kokkos --download-kokkos-kernels > you can use -pc_factor_mat_solver_type kokkos if you also this may also > work for multiple GPUs but that is not documented in the table either > (Junchao) Nor are sparse Kokkos or CUDA stuff documented (if they exist) in > the table. > > > Barry > > > > On Jul 24, 2024, at 2:44?PM, Sreeram R Venkat wrote: > > This Message Is From an External Sender > This message came from outside your organization. > I have an SPD dense matrix of size NxN, where N can range from 10^4-10^5. > Are there any Cholesky factorization/solve routines for it in PETSc (or in > any of the external libraries)? If possible, I want to use GPU acceleration > with 1 or more GPUs. The matrix type can be MATSEQDENSE/MATMPIDENSE or > MATSEQDENSECUDA/MATMPIDENSECUDA accordingly. If it is possible to do the > factorization beforehand and store it to do the triangular solves later, > that would be great. > > Thanks, > Sreeram > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From junchao.zhang at gmail.com Wed Jul 24 17:03:58 2024 From: junchao.zhang at gmail.com (Junchao Zhang) Date: Wed, 24 Jul 2024 17:03:58 -0500 Subject: [petsc-users] Dense Matrix Factorization/Solve In-Reply-To: <94067889-0D86-4FCD-A661-4AEE45B1E2DA@petsc.dev> References: <94067889-0D86-4FCD-A661-4AEE45B1E2DA@petsc.dev> Message-ID: Currently we don't support Kokkos dense matrix and its solvers. You can use MATSEQDENSECUDA/HIP --Junchao Zhang On Wed, Jul 24, 2024 at 2:08?PM Barry Smith wrote: > > For one MPI rank, it looks like you can use -pc_type cholesky > -pc_factor_mat_solver_type cupm though it is not documented in > https://urldefense.us/v3/__https://petsc.org/release/overview/linear_solve_table/*direct-solvers__;Iw!!G_uCfscf7eWS!fvASLTU48U_NIIf2O2CcYRSSki2GbUmrm4zw6CBOWx1rIY8-CnqmhboVIA5-aey5_QOPCQhaI2nbv4FJhCDdQMiJrHPc$ > > Of if you also ./configure --download-kokkos --download-kokkos-kernels > you can use -pc_factor_mat_solver_type kokkos if you also this may also > work for multiple GPUs but that is not documented in the table either > (Junchao) Nor are sparse Kokkos or CUDA stuff documented (if they exist) in > the table. > > > Barry > > > > On Jul 24, 2024, at 2:44?PM, Sreeram R Venkat wrote: > > This Message Is From an External Sender > This message came from outside your organization. > I have an SPD dense matrix of size NxN, where N can range from 10^4-10^5. > Are there any Cholesky factorization/solve routines for it in PETSc (or in > any of the external libraries)? If possible, I want to use GPU acceleration > with 1 or more GPUs. The matrix type can be MATSEQDENSE/MATMPIDENSE or > MATSEQDENSECUDA/MATMPIDENSECUDA accordingly. If it is possible to do the > factorization beforehand and store it to do the triangular solves later, > that would be great. > > Thanks, > Sreeram > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From bsmith at petsc.dev Wed Jul 24 19:07:12 2024 From: bsmith at petsc.dev (Barry Smith) Date: Wed, 24 Jul 2024 20:07:12 -0400 Subject: [petsc-users] Dense Matrix Factorization/Solve In-Reply-To: References: <94067889-0D86-4FCD-A661-4AEE45B1E2DA@petsc.dev> Message-ID: <24155488-D04C-4DCD-976C-556269246A04@petsc.dev> > On Jul 24, 2024, at 5:33?PM, Sreeram R Venkat wrote: > > Thanks for the suggestions; I will try them out. > > Dense factorization is used as the benchmark for Top500 right? That's why I thought there would be some state-of-the-art multi GPU dense linear solvers out there. > > I saw this library called cuSOLVERMp https://urldefense.us/v3/__https://docs.nvidia.com/cuda/cusolvermp/__;!!G_uCfscf7eWS!fcn2UKnRziZm0rP7CEBwWeaeUaiRcgQDOKZWgikZt6UgU6FW640vVQ3rGtF-f3-0f1PZMImxSVNZzTtK5aVt4mw$ from NVIDIA. It looks somewhat difficult to integrate with other code, though. The PETSc Scalapack interface could possibly be jiggered to get something to work with cusolvermp since their API's are similar. > > I also found this https://urldefense.us/v3/__https://github.com/nv-legate/cunumeric__;!!G_uCfscf7eWS!fcn2UKnRziZm0rP7CEBwWeaeUaiRcgQDOKZWgikZt6UgU6FW640vVQ3rGtF-f3-0f1PZMImxSVNZzTtKep1PYdI$ from NVIDIA which shows some good results for multi GPU Cholesky, but I'm having some trouble getting it set up correctly. > > On Wed, Jul 24, 2024, 12:08?PM Barry Smith > wrote: >> >> For one MPI rank, it looks like you can use -pc_type cholesky -pc_factor_mat_solver_type cupm though it is not documented in https://urldefense.us/v3/__https://petsc.org/release/overview/linear_solve_table/*direct-solvers__;Iw!!G_uCfscf7eWS!fcn2UKnRziZm0rP7CEBwWeaeUaiRcgQDOKZWgikZt6UgU6FW640vVQ3rGtF-f3-0f1PZMImxSVNZzTtKXNY00vs$ >> >> Of if you also ./configure --download-kokkos --download-kokkos-kernels you can use -pc_factor_mat_solver_type kokkos if you also this may also work for multiple GPUs but that is not documented in the table either (Junchao) Nor are sparse Kokkos or CUDA stuff documented (if they exist) in the table. >> >> >> Barry >> >> >> >>> On Jul 24, 2024, at 2:44?PM, Sreeram R Venkat > wrote: >>> >>> This Message Is From an External Sender >>> This message came from outside your organization. >>> I have an SPD dense matrix of size NxN, where N can range from 10^4-10^5. Are there any Cholesky factorization/solve routines for it in PETSc (or in any of the external libraries)? If possible, I want to use GPU acceleration with 1 or more GPUs. The matrix type can be MATSEQDENSE/MATMPIDENSE or MATSEQDENSECUDA/MATMPIDENSECUDA accordingly. If it is possible to do the factorization beforehand and store it to do the triangular solves later, that would be great. >>> >>> Thanks, >>> Sreeram >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthew.thomas1 at anu.edu.au Wed Jul 24 19:37:16 2024 From: matthew.thomas1 at anu.edu.au (Matthew Thomas) Date: Thu, 25 Jul 2024 00:37:16 +0000 Subject: [petsc-users] Memory usage scaling with number of processors In-Reply-To: References: <757672CD-521F-4CA1-A5E0-913D3BB2A0E3@anu.edu.au> <0105FD9A-8430-4C77-A5A3-B3E8B94F5385@petsc.dev> Message-ID: <0ADFC5DC-6ED7-4B2E-9079-4AACB1DD8F04@anu.edu.au> Hello Matt, Thanks for the help. I believe the problem is coming from an incorrect linking with MPI and PETSc. I tried running with petscmpiexec from $PETSC_DIR/lib/petsc/bin/petscmpiexec. This gave me the error Error build location not found! Please set PETSC_DIR and PETSC_ARCH correctly for this build. 
Naturally I have set these two values and echo $PETSC_DIR gives the path I expect, so it seems like I am running my programs with a different version of MPI than petsc expects which could explain the memory usage. Do you have any ideas how to fix this? Thanks, Matt On 24 Jul 2024, at 8:41?PM, Matthew Knepley wrote: You don't often get email from knepley at gmail.com. Learn why this is important On Tue, Jul 23, 2024 at 8:02?PM Matthew Thomas > wrote: Hello Matt, I have attached the output with mat_view for 8 and 40 processors. I am unsure what is meant by the matrix communicator and the partitioning. I am using the default behaviour in every case. How can I find this information? This shows that the matrix is taking the same amount of memory for 8 and 40 procs, so that is not your problem. Also, it is a very small amount of memory: 100K rows x 3 nz/row x 8 bytes/nz = 2.4 MB and 50% overhead for indexing, so something under 4MB. I am not sure what is taking up the rest of the memory, but I do not think it is PETSc from the log you included. Thanks, Matt I have attached the log view as well if that helps. Thanks, Matt On 23 Jul 2024, at 9:24?PM, Matthew Knepley > wrote: You don't often get email from knepley at gmail.com. Learn why this is important Also, you could run with -mat_view ::ascii_info_detail and send the output for both cases. The storage of matrix values is not redundant, so something else is going on. First, what communicator do you use for the matrix, and what partitioning? Thanks, Matt On Mon, Jul 22, 2024 at 10:27?PM Barry Smith > wrote: This Message Is From an External Sender This message came from outside your organization. Send the code. On Jul 22, 2024, at 9:18?PM, Matthew Thomas via petsc-users > wrote: This Message Is From an External Sender This message came from outside your organization. Hello, I am using petsc and slepc to solve an eigenvalue problem for sparse matrices. When I run my code with double the number of processors, the memory usage also doubles. I am able to reproduce this behaviour with ex1 of slepc?s hands on exercises. The issue is occurring with petsc not with slepc as this still occurs when I remove the solve step and just create and assemble the petsc matrix. With n=100000, this uses ~1Gb with 8 processors, but ~5Gb with 40 processors. This was done with petsc 3.21.3, on linux compiled with Intel using Intel-MPI Is this the expected behaviour? If not, how can I bug fix this? Thanks, Matt -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!epHRik2gEGTbxUIxHTZA8CUffEfzhhzq27rzxfOcoDlVKKH1y-WhrMjjMMlcrPTnLioR4wetiSo-dP3RtNApVyMi1iL0XAS7Mps$ -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!epHRik2gEGTbxUIxHTZA8CUffEfzhhzq27rzxfOcoDlVKKH1y-WhrMjjMMlcrPTnLioR4wetiSo-dP3RtNApVyMi1iL0XAS7Mps$ -------------- next part -------------- An HTML attachment was scrubbed... 
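[In case it helps others hitting the same message, a sketch of what petscmpiexec expects; the path and arch name here are placeholders, not the values of this particular build. Both variables must be exported in the shell that invokes the script, and PETSC_ARCH must name an arch subdirectory that actually exists under PETSC_DIR.

export PETSC_DIR=/path/to/petsc
export PETSC_ARCH=arch-linux-c-opt        # the directory $PETSC_DIR/$PETSC_ARCH must exist
$PETSC_DIR/lib/petsc/bin/petscmpiexec -n 8 ./ex1 -n 100000]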
URL: From knepley at gmail.com Wed Jul 24 20:26:26 2024 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 24 Jul 2024 21:26:26 -0400 Subject: [petsc-users] Memory usage scaling with number of processors In-Reply-To: <0ADFC5DC-6ED7-4B2E-9079-4AACB1DD8F04@anu.edu.au> References: <757672CD-521F-4CA1-A5E0-913D3BB2A0E3@anu.edu.au> <0105FD9A-8430-4C77-A5A3-B3E8B94F5385@petsc.dev> <0ADFC5DC-6ED7-4B2E-9079-4AACB1DD8F04@anu.edu.au> Message-ID: On Wed, Jul 24, 2024 at 8:37?PM Matthew Thomas wrote: > Hello Matt, > > Thanks for the help. I believe the problem is coming from an incorrect > linking with MPI and PETSc. > > I tried running with petscmpiexec from > $PETSC_DIR/lib/petsc/bin/petscmpiexec. This gave me the error > > Error build location not found! Please set PETSC_DIR and PETSC_ARCH > correctly for this build. > > > Naturally I have set these two values and echo $PETSC_DIR gives the path I > expect, so it seems like I am running my programs with a different version > of MPI than petsc expects which could explain the memory usage. > > Do you have any ideas how to fix this? > Yes. First we determine what MPI you configured with. Send configure.log, which has this information. Thanks, Matt > Thanks, > Matt > > On 24 Jul 2024, at 8:41?PM, Matthew Knepley wrote: > > You don't often get email from knepley at gmail.com. Learn why this is > important > On Tue, Jul 23, 2024 at 8:02?PM Matthew Thomas > wrote: > >> Hello Matt, >> >> I have attached the output with mat_view for 8 and 40 processors. >> >> I am unsure what is meant by the matrix communicator and the >> partitioning. I am using the default behaviour in every case. How can I >> find this information? >> > > This shows that the matrix is taking the same amount of memory for 8 and > 40 procs, so that is not your problem. Also, > it is a very small amount of memory: > > 100K rows x 3 nz/row x 8 bytes/nz = 2.4 MB > > and 50% overhead for indexing, so something under 4MB. I am not sure what > is taking up the rest of the memory, but I do not > think it is PETSc from the log you included. > > Thanks, > > Matt > > >> I have attached the log view as well if that helps. >> >> Thanks, >> Matt >> >> >> >> >> On 23 Jul 2024, at 9:24?PM, Matthew Knepley wrote: >> >> You don't often get email from knepley at gmail.com. Learn why this is >> important >> Also, you could run with >> >> -mat_view ::ascii_info_detail >> >> and send the output for both cases. The storage of matrix values is not >> redundant, so something else is >> going on. First, what communicator do you use for the matrix, and what >> partitioning? >> >> Thanks, >> >> Matt >> >> On Mon, Jul 22, 2024 at 10:27?PM Barry Smith wrote: >> >> This Message Is From an External Sender >> This message came from outside your organization. >> >> >> Send the code. >> >> On Jul 22, 2024, at 9:18?PM, Matthew Thomas via petsc-users < >> petsc-users at mcs.anl.gov> wrote: >> >> This Message Is From an External Sender >> This message came from outside your organization. >> >> Hello, >> >> I am using petsc and slepc to solve an eigenvalue problem for sparse matrices. When I run my code with double the number of processors, the memory usage also doubles. >> >> I am able to reproduce this behaviour with ex1 of slepc?s hands on exercises. >> >> The issue is occurring with petsc not with slepc as this still occurs when I remove the solve step and just create and assemble the petsc matrix. >> >> With n=100000, this uses ~1Gb with 8 processors, but ~5Gb with 40 processors. 
>> >> This was done with petsc 3.21.3, on linux compiled with Intel using Intel-MPI >> >> Is this the expected behaviour? If not, how can I bug fix this? >> >> >> Thanks, >> Matt >> >> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> >> https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!YsrpkGTifXON5FvrYzu4-3O6KuJWEiDvzSfIcxkjEH8QL1hE8VRmza2im3Pq0WtxMtakhzbXzzqBKNCMX0GP$ >> >> >> >> > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!YsrpkGTifXON5FvrYzu4-3O6KuJWEiDvzSfIcxkjEH8QL1hE8VRmza2im3Pq0WtxMtakhzbXzzqBKNCMX0GP$ > > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!YsrpkGTifXON5FvrYzu4-3O6KuJWEiDvzSfIcxkjEH8QL1hE8VRmza2im3Pq0WtxMtakhzbXzzqBKNCMX0GP$ -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthew.thomas1 at anu.edu.au Wed Jul 24 20:32:42 2024 From: matthew.thomas1 at anu.edu.au (Matthew Thomas) Date: Thu, 25 Jul 2024 01:32:42 +0000 Subject: [petsc-users] Memory usage scaling with number of processors In-Reply-To: References: <757672CD-521F-4CA1-A5E0-913D3BB2A0E3@anu.edu.au> <0105FD9A-8430-4C77-A5A3-B3E8B94F5385@petsc.dev> <0ADFC5DC-6ED7-4B2E-9079-4AACB1DD8F04@anu.edu.au> Message-ID: <7EFC7D85-DC71-48CB-B7DF-D00080E07DE3@anu.edu.au> Hi Matt, I have attached the configuration file below. Thanks, Matt On 25 Jul 2024, at 11:26?AM, Matthew Knepley wrote: On Wed, Jul 24, 2024 at 8:37?PM Matthew Thomas > wrote: Hello Matt, Thanks for the help. I believe the problem is coming from an incorrect linking with MPI and PETSc. I tried running with petscmpiexec from $PETSC_DIR/lib/petsc/bin/petscmpiexec. This gave me the error Error build location not found! Please set PETSC_DIR and PETSC_ARCH correctly for this build. Naturally I have set these two values and echo $PETSC_DIR gives the path I expect, so it seems like I am running my programs with a different version of MPI than petsc expects which could explain the memory usage. Do you have any ideas how to fix this? Yes. First we determine what MPI you configured with. Send configure.log, which has this information. Thanks, Matt Thanks, Matt On 24 Jul 2024, at 8:41?PM, Matthew Knepley > wrote: You don't often get email from knepley at gmail.com. Learn why this is important On Tue, Jul 23, 2024 at 8:02?PM Matthew Thomas > wrote: Hello Matt, I have attached the output with mat_view for 8 and 40 processors. I am unsure what is meant by the matrix communicator and the partitioning. I am using the default behaviour in every case. How can I find this information? This shows that the matrix is taking the same amount of memory for 8 and 40 procs, so that is not your problem. Also, it is a very small amount of memory: 100K rows x 3 nz/row x 8 bytes/nz = 2.4 MB and 50% overhead for indexing, so something under 4MB. I am not sure what is taking up the rest of the memory, but I do not think it is PETSc from the log you included. Thanks, Matt I have attached the log view as well if that helps. 
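[As a side note, the launcher a PETSc build was configured with can also be checked directly; a quick sketch, assuming the standard file locations of a non-prefix build:

# mpiexec recorded at configure time
grep MPIEXEC $PETSC_DIR/$PETSC_ARCH/lib/petsc/conf/petscvariables
# build and run the PETSc test examples with that same launcher
cd $PETSC_DIR && make check]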
Thanks, Matt On 23 Jul 2024, at 9:24?PM, Matthew Knepley > wrote: You don't often get email from knepley at gmail.com. Learn why this is important Also, you could run with -mat_view ::ascii_info_detail and send the output for both cases. The storage of matrix values is not redundant, so something else is going on. First, what communicator do you use for the matrix, and what partitioning? Thanks, Matt On Mon, Jul 22, 2024 at 10:27?PM Barry Smith > wrote: This Message Is From an External Sender This message came from outside your organization. Send the code. On Jul 22, 2024, at 9:18?PM, Matthew Thomas via petsc-users > wrote: This Message Is From an External Sender This message came from outside your organization. Hello, I am using petsc and slepc to solve an eigenvalue problem for sparse matrices. When I run my code with double the number of processors, the memory usage also doubles. I am able to reproduce this behaviour with ex1 of slepc?s hands on exercises. The issue is occurring with petsc not with slepc as this still occurs when I remove the solve step and just create and assemble the petsc matrix. With n=100000, this uses ~1Gb with 8 processors, but ~5Gb with 40 processors. This was done with petsc 3.21.3, on linux compiled with Intel using Intel-MPI Is this the expected behaviour? If not, how can I bug fix this? Thanks, Matt -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!dRBUqYBDOj9QHmnycbLUuktH6lG_1402EZUstkt_mXuc_Qjt6o_CVNgqncZyn7znHJWBTI0ASPvo3VLgqx-G5FI3ovuhAhmUSCQ$ -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!dRBUqYBDOj9QHmnycbLUuktH6lG_1402EZUstkt_mXuc_Qjt6o_CVNgqncZyn7znHJWBTI0ASPvo3VLgqx-G5FI3ovuhAhmUSCQ$ -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!dRBUqYBDOj9QHmnycbLUuktH6lG_1402EZUstkt_mXuc_Qjt6o_CVNgqncZyn7znHJWBTI0ASPvo3VLgqx-G5FI3ovuhAhmUSCQ$ -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: configure.log Type: application/octet-stream Size: 1136496 bytes Desc: configure.log URL: From knepley at gmail.com Wed Jul 24 20:44:04 2024 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 24 Jul 2024 21:44:04 -0400 Subject: [petsc-users] Memory usage scaling with number of processors In-Reply-To: <7EFC7D85-DC71-48CB-B7DF-D00080E07DE3@anu.edu.au> References: <757672CD-521F-4CA1-A5E0-913D3BB2A0E3@anu.edu.au> <0105FD9A-8430-4C77-A5A3-B3E8B94F5385@petsc.dev> <0ADFC5DC-6ED7-4B2E-9079-4AACB1DD8F04@anu.edu.au> <7EFC7D85-DC71-48CB-B7DF-D00080E07DE3@anu.edu.au> Message-ID: On Wed, Jul 24, 2024 at 9:32?PM Matthew Thomas wrote: > Hi Matt, > > I have attached the configuration file below. > >From the log: MPI: Version: 3 mpiexec: /apps/intel-tools/intel-mpi/2021.11.0/bin/mpiexec Implementation: mpich3 I_MPI_NUMVERSION: 20211100300 MPICH_NUMVERSION: 30400002 so you want to use that mpiexec. 
This is what will be used if you try make check in $PETSC_DIR. Thanks, Matt > Thanks, > Matt > > > > > On 25 Jul 2024, at 11:26?AM, Matthew Knepley wrote: > > On Wed, Jul 24, 2024 at 8:37?PM Matthew Thomas > wrote: > > Hello Matt, > > Thanks for the help. I believe the problem is coming from an incorrect > linking with MPI and PETSc. > > I tried running with petscmpiexec from > $PETSC_DIR/lib/petsc/bin/petscmpiexec. This gave me the error > > Error build location not found! Please set PETSC_DIR and PETSC_ARCH > correctly for this build. > > > Naturally I have set these two values and echo $PETSC_DIR gives the path I > expect, so it seems like I am running my programs with a different version > of MPI than petsc expects which could explain the memory usage. > > Do you have any ideas how to fix this? > > > Yes. First we determine what MPI you configured with. Send configure.log, > which has this information. > > Thanks, > > Matt > > > Thanks, > Matt > > On 24 Jul 2024, at 8:41?PM, Matthew Knepley wrote: > > You don't often get email from knepley at gmail.com. Learn why this is > important > On Tue, Jul 23, 2024 at 8:02?PM Matthew Thomas > wrote: > > Hello Matt, > > I have attached the output with mat_view for 8 and 40 processors. > > I am unsure what is meant by the matrix communicator and the partitioning. > I am using the default behaviour in every case. How can I find this > information? > > > This shows that the matrix is taking the same amount of memory for 8 and > 40 procs, so that is not your problem. Also, > it is a very small amount of memory: > > 100K rows x 3 nz/row x 8 bytes/nz = 2.4 MB > > and 50% overhead for indexing, so something under 4MB. I am not sure what > is taking up the rest of the memory, but I do not > think it is PETSc from the log you included. > > Thanks, > > Matt > > > I have attached the log view as well if that helps. > > Thanks, > Matt > > > > > On 23 Jul 2024, at 9:24?PM, Matthew Knepley wrote: > > You don't often get email from knepley at gmail.com. Learn why this is > important > Also, you could run with > > -mat_view ::ascii_info_detail > > and send the output for both cases. The storage of matrix values is not > redundant, so something else is > going on. First, what communicator do you use for the matrix, and what > partitioning? > > Thanks, > > Matt > > On Mon, Jul 22, 2024 at 10:27?PM Barry Smith wrote: > > This Message Is From an External Sender > This message came from outside your organization. > > > Send the code. > > On Jul 22, 2024, at 9:18?PM, Matthew Thomas via petsc-users < > petsc-users at mcs.anl.gov> wrote: > > This Message Is From an External Sender > This message came from outside your organization. > > Hello, > > I am using petsc and slepc to solve an eigenvalue problem for sparse matrices. When I run my code with double the number of processors, the memory usage also doubles. > > I am able to reproduce this behaviour with ex1 of slepc?s hands on exercises. > > The issue is occurring with petsc not with slepc as this still occurs when I remove the solve step and just create and assemble the petsc matrix. > > With n=100000, this uses ~1Gb with 8 processors, but ~5Gb with 40 processors. > > This was done with petsc 3.21.3, on linux compiled with Intel using Intel-MPI > > Is this the expected behaviour? If not, how can I bug fix this? > > > Thanks, > Matt > > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. 
> -- Norbert Wiener > > https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!Z2rv8eb7Cg2HQOiE2c11H9inWPs58oj2IuqkyMjKueg_gDd5G7Ya_P8dkPggDNh_IrYKprQIK0WvlSHTmW6H$ > > > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!Z2rv8eb7Cg2HQOiE2c11H9inWPs58oj2IuqkyMjKueg_gDd5G7Ya_P8dkPggDNh_IrYKprQIK0WvlSHTmW6H$ > > > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!Z2rv8eb7Cg2HQOiE2c11H9inWPs58oj2IuqkyMjKueg_gDd5G7Ya_P8dkPggDNh_IrYKprQIK0WvlSHTmW6H$ > > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!Z2rv8eb7Cg2HQOiE2c11H9inWPs58oj2IuqkyMjKueg_gDd5G7Ya_P8dkPggDNh_IrYKprQIK0WvlSHTmW6H$ -------------- next part -------------- An HTML attachment was scrubbed... URL: From dave.mayhem23 at gmail.com Thu Jul 25 09:59:15 2024 From: dave.mayhem23 at gmail.com (Dave May) Date: Thu, 25 Jul 2024 07:59:15 -0700 Subject: [petsc-users] (no subject) In-Reply-To: <441311721646454@mail.yandex.ru> References: <441311721646454@mail.yandex.ru> Message-ID: I think to achieve what you want you will need to use a KSP which supports right preconditioning, that is they monitor || Ax-b || rather than || P^{-1} ( Ax - b ) ||, where P^{-1} is the application of the preconditioner. Try running with -ksp_type fgmres or -ksp_type gcr. These Krylov methods support right preconditioning by default. Thanks, Dave On Mon 22. Jul 2024 at 04:37, ?????????? ????????? < konstantin.murusidze at math.msu.ru> wrote: > Good afternoon. I am a student at the Faculty of Mathematics and for my > course work I need to solve SLAE with a relative accuracy of 1e-8 or more. > To do this, I created the function PetscCall(KSPSetTolerances(ksp, 1. e-8, > PETSC_DEFAULT, PETSC_DEFAULT, > ZjQcmQRYFpfptBannerStart > This Message Is From an External Sender > This message came from outside your organization. > > ZjQcmQRYFpfptBannerEnd > Good afternoon. I am a student at the Faculty of Mathematics and for my > course work I need to solve SLAE with a relative accuracy of 1e-8 or more. > To do this, I created the function PetscCall(KSPSetTolerances(ksp, 1.e-8, > PETSC_DEFAULT, PETSC_DEFAULT, 100000));. But in the end, only 38 iterations > were made and the relative norm ||Ax-b||/||b|| it turns out 4.54011. If you > reply to my email, I can give you more information about the solver > settings. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From yongzhong.li at mail.utoronto.ca Thu Jul 25 10:21:52 2024 From: yongzhong.li at mail.utoronto.ca (Yongzhong Li) Date: Thu, 25 Jul 2024 15:21:52 +0000 Subject: [petsc-users] MKL installation can't be used to configure PETSc In-Reply-To: References: Message-ID: Thank you Satish! I think I find the issue, the MKLROOT should be one of the subdirectory for the previous address. 
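(For instance, with the usual oneMKL directory layout the path configure wants is the versioned subdirectory that actually contains lib/intel64/libmkl_core.so, i.e. something like --with-blaslapack-dir=/tool/pandora64/.package/oneMKL-2021.3/mkl/2021.3.0, where the exact version suffix is only a guess and should be read off the installation.)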
Another question, do you know if PETSc can be configured with AMD AOCL https://urldefense.us/v3/__https://www.amd.com/en/developer/aocl.html__;!!G_uCfscf7eWS!dPM5PLRb-ePnG4HHZzs-Z7FUTkod4fvyBeYnsMy2Kg11pRg4gTAZ2D7J7w0Cx5UbezSeYC5tDcB2GqbdDP2VwiDJ-vwREa2E1Ns$ as the BLAS and LAPACK. Thanks, Yongzhong From: Satish Balay Date: Tuesday, July 23, 2024 at 10:43?PM To: Yongzhong Li Cc: petsc-users at mcs.anl.gov , petsc-maint at mcs.anl.gov Subject: Re: [petsc-users] MKL installation can't be used to configure PETSc [????????? balay.anl at fastmail.org ????????? https://urldefense.us/v3/__https://aka.ms/LearnAboutSenderIdentification?**K__;Pz8_Pz8_Pz8_Pz8_!!G_uCfscf7eWS!dPM5PLRb-ePnG4HHZzs-Z7FUTkod4fvyBeYnsMy2Kg11pRg4gTAZ2D7J7w0Cx5UbezSeYC5tDcB2GqbdDP2VwiDJ-vwRUsxH6jE$ ] Are you sure you are providing the correct path? >>>> --with-blaslapack-dir=/tool/pandora64/.package/oneMKL-2021.3/mkl MKL Path not found.. skipping: /tool/pandora64/.package/oneMKL-2021.3/mkl/lib/64 MKL Path not found.. skipping: /tool/pandora64/.package/oneMKL-2021.3/mkl/lib/ia64 MKL Path not found.. skipping: /tool/pandora64/.package/oneMKL-2021.3/mkl/lib/em64t MKL Path not found.. skipping: /tool/pandora64/.package/oneMKL-2021.3/mkl/lib/intel64 MKL Path not found.. skipping: /tool/pandora64/.package/oneMKL-2021.3/mkl/lib MKL Path not found.. skipping: /tool/pandora64/.package/oneMKL-2021.3/mkl/64 MKL Path not found.. skipping: /tool/pandora64/.package/oneMKL-2021.3/mkl/ia64 MKL Path not found.. skipping: /tool/pandora64/.package/oneMKL-2021.3/mkl/em64t MKL Path not found.. skipping: /tool/pandora64/.package/oneMKL-2021.3/mkl/intel64 MKL Path not found.. skipping: /tool/pandora64/.package/oneMKL-2021.3/mkl/lib/32 MKL Path not found.. skipping: /tool/pandora64/.package/oneMKL-2021.3/mkl/lib/ia32 MKL Path not found.. skipping: /tool/pandora64/.package/oneMKL-2021.3/mkl/32 MKL Path not found.. skipping: /tool/pandora64/.package/oneMKL-2021.3/mkl/ia32 <<< Normally you should see something like the following [wrt the dir you specify to --with-blaslapack-dir] >>> balay at cg:~$ env |grep MKL MKLROOT=/nfs/gce/software/custom/linux-ubuntu22.04-x86_64/spack/opt/spack/linux-ubuntu22.04-x86_64/gcc-11.2.0/intel-oneapi-mkl-2022.0.2-lzrncoj/mkl/2022.0.2 balay at cg:~$ find /nfs/gce/software/custom/linux-ubuntu22.04-x86_64/spack/opt/spack/linux-ubuntu22.04-x86_64/gcc-11.2.0/intel-oneapi-mkl-2022.0.2-lzrncoj/mkl/2022.0.2 -name libmkl_core.so /nfs/gce/software/custom/linux-ubuntu22.04-x86_64/spack/opt/spack/linux-ubuntu22.04-x86_64/gcc-11.2.0/intel-oneapi-mkl-2022.0.2-lzrncoj/mkl/2022.0.2/lib/ia32/libmkl_core.so /nfs/gce/software/custom/linux-ubuntu22.04-x86_64/spack/opt/spack/linux-ubuntu22.04-x86_64/gcc-11.2.0/intel-oneapi-mkl-2022.0.2-lzrncoj/mkl/2022.0.2/lib/intel64/libmkl_core.so Satish On Wed, 24 Jul 2024, Yongzhong Li wrote: > Dear PETSc developers, > > Recently, when I configured the PETSc with blas/lapack provided by Intel MKL, I got the following error message, > > > [ 1%] Performing configure step for 'external_petsc' > ============================================================================================= > Configuring PETSc to compile on your system > ============================================================================================= > ============================================================================================= > ***** WARNING ***** > Found environment variable: MAKEFLAGS=s. Ignoring it! 
Use "./configure > MAKEFLAGS=$MAKEFLAGS" if you really want to use this value > ============================================================================================= > TESTING: checkLib from config.packages.BlasLapack(config/BuildSystem/config/packages/BlasLapack.py:114) > ********************************************************************************************* > UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log for details): > --------------------------------------------------------------------------------------------- > You set a value for --with-blaslapack-dir=, but > /tool/pandora64/.package/oneMKL-2021.3/mkl cannot be used > ********************************************************************************************* > > make[2]: *** [CMakeFiles/external_petsc.dir/build.make:92: external/builds/petsc-3.21.0/src/external_petsc-stamp/external_petsc-configure] Error 1 > make[1]: *** [CMakeFiles/Makefile2:114: CMakeFiles/external_petsc.dir/all] Error 2 > make: *** [Makefile:91: all] Error 2 > > However, I think I was using the correct MKL root address to configure PETSc. I have attached the configure.log file, could you help me look at where might be wrong? > > Thanks! > Yongzhong > ----------------------------------------------------------- > Yongzhong Li > PhD student | Electromagnetics Group > Department of Electrical & Computer Engineering > University of Toronto > https://urldefense.us/v3/__https://can01.safelinks.protection.outlook.com/?url=https*3A*2F*2Furldefense.us*2Fv3*2F__http*3A*2F*2Fwww.modelics.org__*3B!!G_uCfscf7eWS!aa4LdUX0JV3i0ce3Jp20GLg521rd7L25vZSpFhCmlvTUN8EZFjfbk-IsfiTOD3IpCfXCB_zHA8nh1WMIkiHosbwZgv8PzmzPEec*24&data=05*7C02*7Cyongzhong.li*40mail.utoronto.ca*7Cac6df78a38534d3cced208dcab8a6ce0*7C78aac2262f034b4d9037b46d56c55210*7C0*7C0*7C638573858217585114*7CUnknown*7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0*3D*7C0*7C*7C*7C&sdata=c237HxX0WzOtO2poHtci4lSZaaG37UH81irVGlu*2FsUg*3D&reserved=0__;JSUlJSUlJSUlJSUlJSUlJSUlJSUlJSUlJSUl!!G_uCfscf7eWS!dPM5PLRb-ePnG4HHZzs-Z7FUTkod4fvyBeYnsMy2Kg11pRg4gTAZ2D7J7w0Cx5UbezSeYC5tDcB2GqbdDP2VwiDJ-vwR0oLyWl4$ > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at petsc.dev Thu Jul 25 10:56:24 2024 From: bsmith at petsc.dev (Barry Smith) Date: Thu, 25 Jul 2024 11:56:24 -0400 Subject: [petsc-users] [petsc-maint] MKL installation can't be used to configure PETSc In-Reply-To: References: Message-ID: <68F92309-1F03-469B-B1FF-5671D9E4FC9C@petsc.dev> Thank you Satish! > > I think I find the issue, the MKLROOT should be one of the subdirectory for the previous address. > > Another question, do you know if PETSc can be configured with AMD AOCL https://urldefense.us/v3/__https://www.amd.com/en/developer/aocl.html__;!!G_uCfscf7eWS!brsW3VFdIkOx2YYJn_E9B4AxRRg1Ccxzi3m0X4IhDVuWBH2Q7qwq6gDOaiNM8DClyEWiy_Me3_vHAIUpiOKKQik$ as the BLAS and LAPACK. You should be able to use the configure option --with-blaslapack-dir=directory_of_aocl Please let us know of any difficulties. Barry > > Thanks, > Yongzhong > > From: Satish Balay > > Date: Tuesday, July 23, 2024 at 10:43?PM > To: Yongzhong Li > > Cc: petsc-users at mcs.anl.gov >, petsc-maint at mcs.anl.gov > > Subject: Re: [petsc-users] MKL installation can't be used to configure PETSc > > [????????? balay.anl at fastmail.org ????????? 
https://urldefense.us/v3/__https://aka.ms/LearnAboutSenderIdentification?**K__;Pz8_Pz8_Pz8_Pz8_!!G_uCfscf7eWS!brsW3VFdIkOx2YYJn_E9B4AxRRg1Ccxzi3m0X4IhDVuWBH2Q7qwq6gDOaiNM8DClyEWiy_Me3_vHAIUpD0n3Xpw$ ] > > Are you sure you are providing the correct path? > > >>>> > --with-blaslapack-dir=/tool/pandora64/.package/oneMKL-2021.3/mkl > > MKL Path not found.. skipping: /tool/pandora64/.package/oneMKL-2021.3/mkl/lib/64 > MKL Path not found.. skipping: /tool/pandora64/.package/oneMKL-2021.3/mkl/lib/ia64 > MKL Path not found.. skipping: /tool/pandora64/.package/oneMKL-2021.3/mkl/lib/em64t > MKL Path not found.. skipping: /tool/pandora64/.package/oneMKL-2021.3/mkl/lib/intel64 > MKL Path not found.. skipping: /tool/pandora64/.package/oneMKL-2021.3/mkl/lib > MKL Path not found.. skipping: /tool/pandora64/.package/oneMKL-2021.3/mkl/64 > MKL Path not found.. skipping: /tool/pandora64/.package/oneMKL-2021.3/mkl/ia64 > MKL Path not found.. skipping: /tool/pandora64/.package/oneMKL-2021.3/mkl/em64t > MKL Path not found.. skipping: /tool/pandora64/.package/oneMKL-2021.3/mkl/intel64 > MKL Path not found.. skipping: /tool/pandora64/.package/oneMKL-2021.3/mkl/lib/32 > MKL Path not found.. skipping: /tool/pandora64/.package/oneMKL-2021.3/mkl/lib/ia32 > MKL Path not found.. skipping: /tool/pandora64/.package/oneMKL-2021.3/mkl/32 > MKL Path not found.. skipping: /tool/pandora64/.package/oneMKL-2021.3/mkl/ia32 > > <<< > > Normally you should see something like the following [wrt the dir you specify to --with-blaslapack-dir] > > >>> > balay at cg:~$ env |grep MKL > MKLROOT=/nfs/gce/software/custom/linux-ubuntu22.04-x86_64/spack/opt/spack/linux-ubuntu22.04-x86_64/gcc-11.2.0/intel-oneapi-mkl-2022.0.2-lzrncoj/mkl/2022.0.2 > balay at cg:~$ find /nfs/gce/software/custom/linux-ubuntu22.04-x86_64/spack/opt/spack/linux-ubuntu22.04-x86_64/gcc-11.2.0/intel-oneapi-mkl-2022.0.2-lzrncoj/mkl/2022.0.2 -name libmkl_core.so > /nfs/gce/software/custom/linux-ubuntu22.04-x86_64/spack/opt/spack/linux-ubuntu22.04-x86_64/gcc-11.2.0/intel-oneapi-mkl-2022.0.2-lzrncoj/mkl/2022.0.2/lib/ia32/libmkl_core.so > /nfs/gce/software/custom/linux-ubuntu22.04-x86_64/spack/opt/spack/linux-ubuntu22.04-x86_64/gcc-11.2.0/intel-oneapi-mkl-2022.0.2-lzrncoj/mkl/2022.0.2/lib/intel64/libmkl_core.so > > Satish > > On Wed, 24 Jul 2024, Yongzhong Li wrote: > > > Dear PETSc developers, > > > > Recently, when I configured the PETSc with blas/lapack provided by Intel MKL, I got the following error message, > > > > > > [ 1%] Performing configure step for 'external_petsc' > > ============================================================================================= > > Configuring PETSc to compile on your system > > ============================================================================================= > > ============================================================================================= > > ***** WARNING ***** > > Found environment variable: MAKEFLAGS=s. Ignoring it! 
Use "./configure > > MAKEFLAGS=$MAKEFLAGS" if you really want to use this value > > ============================================================================================= > > TESTING: checkLib from config.packages.BlasLapack(config/BuildSystem/config/packages/BlasLapack.py:114) > > ********************************************************************************************* > > UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log for details): > > --------------------------------------------------------------------------------------------- > > You set a value for --with-blaslapack-dir=, but > > /tool/pandora64/.package/oneMKL-2021.3/mkl cannot be used > > ********************************************************************************************* > > > > make[2]: *** [CMakeFiles/external_petsc.dir/build.make:92: external/builds/petsc-3.21.0/src/external_petsc-stamp/external_petsc-configure] Error 1 > > make[1]: *** [CMakeFiles/Makefile2:114: CMakeFiles/external_petsc.dir/all] Error 2 > > make: *** [Makefile:91: all] Error 2 > > > > However, I think I was using the correct MKL root address to configure PETSc. I have attached the configure.log file, could you help me look at where might be wrong? > > > > Thanks! > > Yongzhong > > ----------------------------------------------------------- > > Yongzhong Li > > PhD student | Electromagnetics Group > > Department of Electrical & Computer Engineering > > University of Toronto > > https://urldefense.us/v3/__https://can01.safelinks.protection.outlook.com/?url=https*3A*2F*2Furldefense.us*2Fv3*2F__http*3A*2F*2Fwww.modelics.org__*3B!!G_uCfscf7eWS!aa4LdUX0JV3i0ce3Jp20GLg521rd7L25vZSpFhCmlvTUN8EZFjfbk-IsfiTOD3IpCfXCB_zHA8nh1WMIkiHosbwZgv8PzmzPEec*24&data=05*7C02*7Cyongzhong.li*40mail.utoronto.ca*7Cac6df78a38534d3cced208dcab8a6ce0*7C78aac2262f034b4d9037b46d56c55210*7C0*7C0*7C638573858217585114*7CUnknown*7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0*3D*7C0*7C*7C*7C&sdata=c237HxX0WzOtO2poHtci4lSZaaG37UH81irVGlu*2FsUg*3D&reserved=0__;JSUlJSUlJSUlJSUlJSUlJSUlJSUlJSUlJSUl!!G_uCfscf7eWS!brsW3VFdIkOx2YYJn_E9B4AxRRg1Ccxzi3m0X4IhDVuWBH2Q7qwq6gDOaiNM8DClyEWiy_Me3_vHAIUp4pSlR3Y$ > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From balay.anl at fastmail.org Thu Jul 25 11:15:26 2024 From: balay.anl at fastmail.org (Satish Balay) Date: Thu, 25 Jul 2024 11:15:26 -0500 (CDT) Subject: [petsc-users] [petsc-maint] MKL installation can't be used to configure PETSc In-Reply-To: <68F92309-1F03-469B-B1FF-5671D9E4FC9C@petsc.dev> References: <68F92309-1F03-469B-B1FF-5671D9E4FC9C@petsc.dev> Message-ID: An HTML attachment was scrubbed... URL: From bsmith at petsc.dev Thu Jul 25 11:28:22 2024 From: bsmith at petsc.dev (Barry Smith) Date: Thu, 25 Jul 2024 12:28:22 -0400 Subject: [petsc-users] [petsc-maint] MKL installation can't be used to configure PETSc In-Reply-To: References: <68F92309-1F03-469B-B1FF-5671D9E4FC9C@petsc.dev> Message-ID: An HTML attachment was scrubbed... URL: From ivanluthfi5 at gmail.com Fri Jul 26 22:24:06 2024 From: ivanluthfi5 at gmail.com (Ivan Luthfi) Date: Sat, 27 Jul 2024 11:24:06 +0800 Subject: [petsc-users] Error MPI_ABORT Message-ID: Hi friend, I am trying to try my second Petsc exercise in a lecture from Eijkhout. I run it using command: mpiexec -n 2 ./vec_view in my Macbook with 2 cores. but I got an error. 
Here is the error message: [1]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- [1]PETSC ERROR: Argument out of range [1]PETSC ERROR: Out of range index value 200 maximum 200 [1]PETSC ERROR: WARNING! There are unused option(s) set! Could be the program crashed before usage or a spelling mistake, etc! [1]PETSC ERROR: Option left: name:-vec_view (no value) source: command line [1]PETSC ERROR: See https://urldefense.us/v3/__https://petsc.org/release/faq/__;!!G_uCfscf7eWS!c9Sc5i6Zy6_vzM9RLEJfU92oYcv7s-RDyELXxNOCh10iT66qkMKsVNOdNN5Mylbznn2ag7w1daVGSrqSnN7HmPLABA$ for trouble shooting. [1]PETSC ERROR: Petsc Release Version 3.21.3, unknown [1]PETSC ERROR: ./vec on a arch-darwin-c-debug named ivanpro.local by jkh Sat Jul 27 11:21:26 2024 [1]PETSC ERROR: Configure options --with-cc=/Users/jkh/projects/openmpi/opt-5.0.4/bin/mpicc --with-cxx=/Users/jkh/projects/openmpi/opt-5.0.4/bin/mpicxx --with-fc=0 --download-make --download-cmake --download-bison --with-x=0 --download-f2cblaslapack --download-metis --download-parmetis --download-ptscotch --download-superlu_dist [1]PETSC ERROR: #1 VecSetValues_MPI() at /Users/jkh/projects/petsc/src/vec/vec/impls/mpi/pdvec.c:859 [1]PETSC ERROR: #2 VecSetValues() at /Users/jkh/projects/petsc/src/vec/vec/interface/rvector.c:917 [1]PETSC ERROR: #3 main() at vec.c:70 [1]PETSC ERROR: PETSc Option Table entries: [1]PETSC ERROR: -vec_view (source: command line) [1]PETSC ERROR: ----------------End of Error Message -------send entire error message to petsc-maint at mcs.anl.gov---------- -------------------------------------------------------------------------- MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_SELF Proc: [[953,1],1] Errorcode: 63 NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes. You may or may not see output from other processes, depending on exactly when Open MPI kills them. -------------------------------------------------------------------------- -------------------------------------------------------------------------- prterun has exited due to process rank 1 with PID 0 on node ivanpro calling "abort". This may have caused other processes in the application to be terminated by signals sent by prterun (as reported here). can you help me guys? -------------- next part -------------- An HTML attachment was scrubbed... URL: From junchao.zhang at gmail.com Sat Jul 27 08:55:49 2024 From: junchao.zhang at gmail.com (Junchao Zhang) Date: Sat, 27 Jul 2024 08:55:49 -0500 Subject: [petsc-users] Error MPI_ABORT In-Reply-To: References: Message-ID: The error message is a little confusing. It says the indices should be in [0, 200), but you used an out of range index 200 in VecSetValues. --Junchao Zhang On Fri, Jul 26, 2024 at 10:24?PM Ivan Luthfi wrote: > Hi friend, I am trying to try my second Petsc exercise in a lecture from > Eijkhout. I run it using command: mpiexec -n 2 ./vec_view in my Macbook > with 2 cores. but I got an error. Here is the error message: [1]PETSC > ERROR: --------------------- > ZjQcmQRYFpfptBannerStart > This Message Is From an External Sender > This message came from outside your organization. > > ZjQcmQRYFpfptBannerEnd > Hi friend, > > I am trying to try my second Petsc exercise in a lecture from Eijkhout. > I run it using command: mpiexec -n 2 ./vec_view in my Macbook with 2 > cores. > > but I got an error. 
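A minimal sketch of the loop pattern the error points at (this is not the exercise's actual vec.c, which is not shown here): the global indices handed to VecSetValues() must stay strictly below the global size, so the upper bound of the fill loop has to be exclusive.

    /* hedged sketch: fill a parallel vector of global size N = 200 by global index */
    Vec         x;
    PetscInt    N = 200, rstart, rend, i;
    PetscScalar v;

    PetscCall(VecCreate(PETSC_COMM_WORLD, &x));
    PetscCall(VecSetSizes(x, PETSC_DECIDE, N));
    PetscCall(VecSetFromOptions(x));
    PetscCall(VecGetOwnershipRange(x, &rstart, &rend)); /* rend is one past the last owned index */
    for (i = rstart; i < rend; i++) {                   /* writing index N itself (e.g. i <= rend on the last rank) gives "maximum 200" */
      v = (PetscScalar)i;
      PetscCall(VecSetValues(x, 1, &i, &v, INSERT_VALUES));
    }
    PetscCall(VecAssemblyBegin(x));
    PetscCall(VecAssemblyEnd(x));
    PetscCall(VecDestroy(&x));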
Here is the error message: > > [1]PETSC ERROR: --------------------- Error Message > -------------------------------------------------------------- > [1]PETSC ERROR: Argument out of range > [1]PETSC ERROR: Out of range index value 200 maximum 200 > [1]PETSC ERROR: WARNING! There are unused option(s) set! Could be the > program crashed before usage or a spelling mistake, etc! > [1]PETSC ERROR: Option left: name:-vec_view (no value) source: command > line > [1]PETSC ERROR: See https://urldefense.us/v3/__https://petsc.org/release/faq/__;!!G_uCfscf7eWS!d8ms3P5k6AI_J1wzoWIxg3JejQiHEmiJTNS2rI9Mdud8gZz4W5tW8iBtEQzOmmctlfUc3AbUF0W3UDTKSUMFK0Y16D4M$ > > for trouble shooting. > [1]PETSC ERROR: Petsc Release Version 3.21.3, unknown > [1]PETSC ERROR: ./vec on a arch-darwin-c-debug named ivanpro.local by jkh > Sat Jul 27 11:21:26 2024 > [1]PETSC ERROR: Configure options > --with-cc=/Users/jkh/projects/openmpi/opt-5.0.4/bin/mpicc > --with-cxx=/Users/jkh/projects/openmpi/opt-5.0.4/bin/mpicxx --with-fc=0 > --download-make --download-cmake --download-bison --with-x=0 > --download-f2cblaslapack --download-metis --download-parmetis > --download-ptscotch --download-superlu_dist > [1]PETSC ERROR: #1 VecSetValues_MPI() at > /Users/jkh/projects/petsc/src/vec/vec/impls/mpi/pdvec.c:859 > [1]PETSC ERROR: #2 VecSetValues() at > /Users/jkh/projects/petsc/src/vec/vec/interface/rvector.c:917 > [1]PETSC ERROR: #3 main() at vec.c:70 > [1]PETSC ERROR: PETSc Option Table entries: > [1]PETSC ERROR: -vec_view (source: command line) > [1]PETSC ERROR: ----------------End of Error Message -------send entire > error message to petsc-maint at mcs.anl.gov---------- > -------------------------------------------------------------------------- > MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_SELF > Proc: [[953,1],1] > Errorcode: 63 > > NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes. > You may or may not see output from other processes, depending on > exactly when Open MPI kills them. > -------------------------------------------------------------------------- > -------------------------------------------------------------------------- > prterun has exited due to process rank 1 with PID 0 on node ivanpro calling > "abort". This may have caused other processes in the application to be > terminated by signals sent by prterun (as reported here). > > can you help me guys? > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nick4f42 at proton.me Sat Jul 27 11:49:14 2024 From: nick4f42 at proton.me (Nick OBrien) Date: Sat, 27 Jul 2024 16:49:14 +0000 Subject: [petsc-users] PetscBinaryIO.py: readVec fails to read 0-length Vec Message-ID: <87jzh6eq6c.fsf@proton.me> An HTML attachment was scrubbed... URL: From bsmith at petsc.dev Mon Jul 29 08:51:03 2024 From: bsmith at petsc.dev (Barry Smith) Date: Mon, 29 Jul 2024 09:51:03 -0400 Subject: [petsc-users] PetscBinaryIO.py: readVec fails to read 0-length Vec In-Reply-To: <87jzh6eq6c.fsf@proton.me> References: <87jzh6eq6c.fsf@proton.me> Message-ID: https://urldefense.us/v3/__https://gitlab.com/petsc/petsc/-/merge_requests/7724__;!!G_uCfscf7eWS!c2pUfXzQgwy7S-LHON8g6ZH69NclYWSFtbeLLrMTZC_ubB8ziO9UykEAHS_ZMauViQm7pXW9lC06zLXAFT_v52Q$ > On Jul 27, 2024, at 12:49?PM, Nick OBrien via petsc-users wrote: > > This Message Is From an External Sender > This message came from outside your organization. > PetscBinaryIO errors when reading a Vec with zero elements. 
The > following python script demonstrates the issue: > > --8<---------------cut here---------------start------------->8--- > import PetscBinaryIO > import numpy as np > > io = PetscBinaryIO.PetscBinaryIO() > v = np.array([], np.float32).view(PetscBinaryIO.Vec) > io.writeBinaryFile("test.bin", (v,)) > io.readBinaryFile("test.bin") > --8<---------------cut here---------------end--------------->8--- > > --8<---------------cut here---------------start------------->8--- > [...] > File "/[...]/lib/petsc/bin/PetscBinaryIO.py", line 258, in readVec > raise IOError('Inconsistent or invalid Vec data in file') > OSError: Inconsistent or invalid Vec data in file > --8<---------------cut here---------------end--------------->8--- > > Tested on 3.20.5. readVec errors if `len(vals) == 0`, but maybe the > check should be `len(vals) != nz`? > > -- > Nick OBrien > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From liufield at gmail.com Mon Jul 29 14:55:30 2024 From: liufield at gmail.com (neil liu) Date: Mon, 29 Jul 2024 15:55:30 -0400 Subject: [petsc-users] Trying to run https://petsc.org/release/src/ksp/ksp/tutorials/ex72.c.html Message-ID: Dear Petsc developers,, I am trying to run https://urldefense.us/v3/__https://petsc.org/release/src/ksp/ksp/tutorials/ex72.c.html__;!!G_uCfscf7eWS!ZG4gvmS6hQD8ymbvCUDfAatzRUJHzmWO-hOgp9m0xXuAXgIB-fxe_xspYs3WEPi_Ed0UFLMHKanYuYWrTlQGrA$ with petsc-3.21.1/petsc/arch-linux-c-opt/bin/mpirun -n 2 ./ex72 -f /Documents/PetscData/poisson_DMPLEX_32x32_16.dat -pc_type bddc -ksp_type cg -ksp_norm_type natural -ksp_error_if_not_converged -mat_type is The file was downloaded and put in the directory PetscData. The error is shown as follows, 0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- [0]PETSC ERROR: Read from file failed [0]PETSC ERROR: Read past end of file [0]PETSC ERROR: WARNING! There are unused option(s) set! Could be the program crashed before usage or a spelling mistake, etc! [0]PETSC ERROR: Option left: name:-ksp_error_if_not_converged (no value) source: command line [0]PETSC ERROR: Option left: name:-ksp_norm_type value: natural source: command line [0]PETSC ERROR: Option left: name:-ksp_type value: cg source: command line [0]PETSC ERROR: Option left: name:-pc_type value: bddc source: command line [0]PETSC ERROR: See https://urldefense.us/v3/__https://petsc.org/release/faq/__;!!G_uCfscf7eWS!ZG4gvmS6hQD8ymbvCUDfAatzRUJHzmWO-hOgp9m0xXuAXgIB-fxe_xspYs3WEPi_Ed0UFLMHKanYuYVR0x14Xg$ for trouble shooting. 
[0]PETSC ERROR: Petsc Release Version 3.21.1, unknown [0]PETSC ERROR: ./ex72 on a arch-linux-c-opt named Mon Jul 29 15:50:04 2024 [0]PETSC ERROR: Configure options --with-cc=gcc --with-fc=gfortran --with-cxx=g++ --download-fblaslapack --download-mpich --with-scalar-type=complex --download-triangle --with-debugging=no [0]PETSC ERROR: #1 PetscBinaryRead() at /home/xxxxxx/Documents/petsc-3.21.1/petsc/src/sys/fileio/sysio.c:327 [0]PETSC ERROR: #2 PetscViewerBinaryWriteReadAll() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/sys/classes/viewer/impls/binary/binv.c:1077 [0]PETSC ERROR: #3 PetscViewerBinaryReadAll() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/sys/classes/viewer/impls/binary/binv.c:1119 [0]PETSC ERROR: #4 MatLoad_MPIAIJ_Binary() at Documents/petsc-3.21.1/petsc/src/mat/impls/aij/mpi/mpiaij.c:3093 [0]PETSC ERROR: #5 MatLoad_MPIAIJ() at /Documents/petsc-3.21.1/petsc/src/mat/impls/aij/mpi/mpiaij.c:3035 [0]PETSC ERROR: #6 MatLoad() at /Documents/petsc-3.21.1/petsc/src/mat/interface/matrix.c:1344 [0]PETSC ERROR: #7 MatLoad_IS() at /Documents/petsc-3.21.1/petsc/src/mat/impls/is/matis.c:2575 [0]PETSC ERROR: #8 MatLoad() at /home/Documents/petsc-3.21.1/petsc/src/mat/interface/matrix.c:1344 [0]PETSC ERROR: #9 main() at ex72.c:105 [0]PETSC ERROR: PETSc Option Table entries: [0]PETSC ERROR: -f /Documents/PetscData/poisson_DMPLEX_32x32_16.dat (source: command line) [0]PETSC ERROR: -ksp_error_if_not_converged (source: command line) [0]PETSC ERROR: -ksp_norm_type natural (source: command line) [0]PETSC ERROR: -ksp_type cg (source: command line) [0]PETSC ERROR: -mat_type is (source: command line) [0]PETSC ERROR: -pc_type bddc (source: command line) [0]PETSC ERROR: ----------------End of Error Message -------send entire error message to petsc-maint at mcs.anl.gov---------- application called MPI_Abort(MPI_COMM_SELF, 66) - process 0 -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at petsc.dev Mon Jul 29 15:24:56 2024 From: bsmith at petsc.dev (Barry Smith) Date: Mon, 29 Jul 2024 16:24:56 -0400 Subject: [petsc-users] Trying to run https://petsc.org/release/src/ksp/ksp/tutorials/ex72.c.html In-Reply-To: References: Message-ID: This can happen if the data was stored in single precision and PETSc was built for double. > On Jul 29, 2024, at 3:55?PM, neil liu wrote: > > This Message Is From an External Sender > This message came from outside your organization. > Dear Petsc developers,, > > I am trying to run > https://urldefense.us/v3/__https://petsc.org/release/src/ksp/ksp/tutorials/ex72.c.html__;!!G_uCfscf7eWS!cxcXQ-gJ2AqH7pUs5_UkS1JsktrP9Txs8erD1qj1O_mKwMjlMkbN22tMb1YFrYSbiNc-ykszIflh5IhdXM1a_YQ$ > with > > petsc-3.21.1/petsc/arch-linux-c-opt/bin/mpirun -n 2 ./ex72 -f /Documents/PetscData/poisson_DMPLEX_32x32_16.dat -pc_type bddc -ksp_type cg -ksp_norm_type natural -ksp_error_if_not_converged -mat_type is > > The file was downloaded and put in the directory PetscData. > > The error is shown as follows, > > 0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- > [0]PETSC ERROR: Read from file failed > [0]PETSC ERROR: Read past end of file > [0]PETSC ERROR: WARNING! There are unused option(s) set! Could be the program crashed before usage or a spelling mistake, etc! 
> [0]PETSC ERROR: Option left: name:-ksp_error_if_not_converged (no value) source: command line > [0]PETSC ERROR: Option left: name:-ksp_norm_type value: natural source: command line > [0]PETSC ERROR: Option left: name:-ksp_type value: cg source: command line > [0]PETSC ERROR: Option left: name:-pc_type value: bddc source: command line > [0]PETSC ERROR: See https://urldefense.us/v3/__https://petsc.org/release/faq/__;!!G_uCfscf7eWS!cxcXQ-gJ2AqH7pUs5_UkS1JsktrP9Txs8erD1qj1O_mKwMjlMkbN22tMb1YFrYSbiNc-ykszIflh5IhdqaBPI7o$ for trouble shooting. > [0]PETSC ERROR: Petsc Release Version 3.21.1, unknown > [0]PETSC ERROR: ./ex72 on a arch-linux-c-opt named > Mon Jul 29 15:50:04 2024 > [0]PETSC ERROR: Configure options --with-cc=gcc --with-fc=gfortran --with-cxx=g++ --download-fblaslapack --download-mpich --with-scalar-type=complex --download-triangle --with-debugging=no > [0]PETSC ERROR: #1 PetscBinaryRead() at /home/xxxxxx/Documents/petsc-3.21.1/petsc/src/sys/fileio/sysio.c:327 > [0]PETSC ERROR: #2 PetscViewerBinaryWriteReadAll() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/sys/classes/viewer/impls/binary/binv.c:1077 > [0]PETSC ERROR: #3 PetscViewerBinaryReadAll() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/sys/classes/viewer/impls/binary/binv.c:1119 > [0]PETSC ERROR: #4 MatLoad_MPIAIJ_Binary() at Documents/petsc-3.21.1/petsc/src/mat/impls/aij/mpi/mpiaij.c:3093 > [0]PETSC ERROR: #5 MatLoad_MPIAIJ() at /Documents/petsc-3.21.1/petsc/src/mat/impls/aij/mpi/mpiaij.c:3035 > [0]PETSC ERROR: #6 MatLoad() at /Documents/petsc-3.21.1/petsc/src/mat/interface/matrix.c:1344 > [0]PETSC ERROR: #7 MatLoad_IS() at /Documents/petsc-3.21.1/petsc/src/mat/impls/is/matis.c:2575 > [0]PETSC ERROR: #8 MatLoad() at /home/Documents/petsc-3.21.1/petsc/src/mat/interface/matrix.c:1344 > [0]PETSC ERROR: #9 main() at ex72.c:105 > [0]PETSC ERROR: PETSc Option Table entries: > [0]PETSC ERROR: -f > /Documents/PetscData/poisson_DMPLEX_32x32_16.dat (source: command line) > [0]PETSC ERROR: -ksp_error_if_not_converged (source: command line) > [0]PETSC ERROR: -ksp_norm_type natural (source: command line) > [0]PETSC ERROR: -ksp_type cg (source: command line) > [0]PETSC ERROR: -mat_type is (source: command line) > [0]PETSC ERROR: -pc_type bddc (source: command line) > [0]PETSC ERROR: ----------------End of Error Message -------send entire error message to petsc-maint at mcs.anl.gov ---------- > application called MPI_Abort(MPI_COMM_SELF, 66) - process 0 -------------- next part -------------- An HTML attachment was scrubbed... URL: From liufield at gmail.com Mon Jul 29 16:14:21 2024 From: liufield at gmail.com (neil liu) Date: Mon, 29 Jul 2024 17:14:21 -0400 Subject: [petsc-users] Trying to run https://petsc.org/release/src/ksp/ksp/tutorials/ex72.c.html In-Reply-To: References: Message-ID: I compiled Petsc with single precision. However, it is not converged with the data. Please see the attached file. On Mon, Jul 29, 2024 at 4:25?PM Barry Smith wrote: > > This can happen if the data was stored in single precision and PETSc > was built for double. > > > On Jul 29, 2024, at 3:55?PM, neil liu wrote: > > This Message Is From an External Sender > This message came from outside your organization. 
> Dear Petsc developers,, > > I am trying to run > https://urldefense.us/v3/__https://petsc.org/release/src/ksp/ksp/tutorials/ex72.c.html__;!!G_uCfscf7eWS!ddQAzGhXI9wgpzpsAee6djDx2KTUfGC3FtZ0R51gJeg777b3Wv8Q1A73PbZHMN2QREl0wUNh8KcquslAlSuF7Q$ > > with > > petsc-3.21.1/petsc/arch-linux-c-opt/bin/mpirun -n 2 ./ex72 -f > /Documents/PetscData/poisson_DMPLEX_32x32_16.dat -pc_type bddc -ksp_type cg > -ksp_norm_type natural -ksp_error_if_not_converged -mat_type is > > The file was downloaded and put in the directory PetscData. > > The error is shown as follows, > > 0]PETSC ERROR: --------------------- Error Message > -------------------------------------------------------------- > [0]PETSC ERROR: Read from file failed > [0]PETSC ERROR: Read past end of file > [0]PETSC ERROR: WARNING! There are unused option(s) set! Could be the > program crashed before usage or a spelling mistake, etc! > [0]PETSC ERROR: Option left: name:-ksp_error_if_not_converged (no value) > source: command line > [0]PETSC ERROR: Option left: name:-ksp_norm_type value: natural source: > command line > [0]PETSC ERROR: Option left: name:-ksp_type value: cg source: command > line > [0]PETSC ERROR: Option left: name:-pc_type value: bddc source: command > line > [0]PETSC ERROR: See https://urldefense.us/v3/__https://petsc.org/release/faq/__;!!G_uCfscf7eWS!ddQAzGhXI9wgpzpsAee6djDx2KTUfGC3FtZ0R51gJeg777b3Wv8Q1A73PbZHMN2QREl0wUNh8KcqusmvozIhKg$ > > for trouble shooting. > [0]PETSC ERROR: Petsc Release Version 3.21.1, unknown > [0]PETSC ERROR: ./ex72 on a arch-linux-c-opt named > Mon Jul 29 15:50:04 2024 > [0]PETSC ERROR: Configure options --with-cc=gcc --with-fc=gfortran > --with-cxx=g++ --download-fblaslapack --download-mpich > --with-scalar-type=complex --download-triangle --with-debugging=no > [0]PETSC ERROR: #1 PetscBinaryRead() at > /home/xxxxxx/Documents/petsc-3.21.1/petsc/src/sys/fileio/sysio.c:327 > [0]PETSC ERROR: #2 PetscViewerBinaryWriteReadAll() at > /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/sys/classes/viewer/impls/binary/binv.c:1077 > [0]PETSC ERROR: #3 PetscViewerBinaryReadAll() at > /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/sys/classes/viewer/impls/binary/binv.c:1119 > [0]PETSC ERROR: #4 MatLoad_MPIAIJ_Binary() at > Documents/petsc-3.21.1/petsc/src/mat/impls/aij/mpi/mpiaij.c:3093 > [0]PETSC ERROR: #5 MatLoad_MPIAIJ() at > /Documents/petsc-3.21.1/petsc/src/mat/impls/aij/mpi/mpiaij.c:3035 > [0]PETSC ERROR: #6 MatLoad() at > /Documents/petsc-3.21.1/petsc/src/mat/interface/matrix.c:1344 > [0]PETSC ERROR: #7 MatLoad_IS() at > /Documents/petsc-3.21.1/petsc/src/mat/impls/is/matis.c:2575 > [0]PETSC ERROR: #8 MatLoad() at > /home/Documents/petsc-3.21.1/petsc/src/mat/interface/matrix.c:1344 > [0]PETSC ERROR: #9 main() at ex72.c:105 > [0]PETSC ERROR: PETSc Option Table entries: > [0]PETSC ERROR: -f > /Documents/PetscData/poisson_DMPLEX_32x32_16.dat (source: command line) > [0]PETSC ERROR: -ksp_error_if_not_converged (source: command line) > [0]PETSC ERROR: -ksp_norm_type natural (source: command line) > [0]PETSC ERROR: -ksp_type cg (source: command line) > [0]PETSC ERROR: -mat_type is (source: command line) > [0]PETSC ERROR: -pc_type bddc (source: command line) > [0]PETSC ERROR: ----------------End of Error Message -------send entire > error message to petsc-maint at mcs.anl.gov---------- > application called MPI_Abort(MPI_COMM_SELF, 66) - process 0 > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: error Type: application/octet-stream Size: 1890 bytes Desc: not available URL: From stefano.zampini at gmail.com Mon Jul 29 16:35:47 2024 From: stefano.zampini at gmail.com (Stefano Zampini) Date: Mon, 29 Jul 2024 23:35:47 +0200 Subject: [petsc-users] Trying to run https://petsc.org/release/src/ksp/ksp/tutorials/ex72.c.html In-Reply-To: References: Message-ID: Your PETSc installation is for complex, data is for real On Mon, Jul 29, 2024, 23:14 neil liu wrote: > I compiled Petsc with single precision. However, it is not converged with > the data. Please see the attached file. On Mon, Jul 29, 2024 at 4: 25 PM > Barry Smith wrote: This can happen if the data was > stored in single precision > ZjQcmQRYFpfptBannerStart > This Message Is From an External Sender > This message came from outside your organization. > > ZjQcmQRYFpfptBannerEnd > I compiled Petsc with single precision. However, it is not converged with > the data. > > Please see the attached file. > > On Mon, Jul 29, 2024 at 4:25?PM Barry Smith wrote: > >> >> This can happen if the data was stored in single precision and PETSc >> was built for double. >> >> >> On Jul 29, 2024, at 3:55?PM, neil liu wrote: >> >> This Message Is From an External Sender >> This message came from outside your organization. >> Dear Petsc developers,, >> >> I am trying to run >> https://urldefense.us/v3/__https://petsc.org/release/src/ksp/ksp/tutorials/ex72.c.html__;!!G_uCfscf7eWS!dwTKXCltGM9-bkaDPXs4VJqHhf4VpcCQ1EbltIKpsrXyXtSUl5mppPpD5dckFB6TV2fuRs-VxwKdeuGLe7mV4ToyX_LjFK0$ >> >> with >> >> petsc-3.21.1/petsc/arch-linux-c-opt/bin/mpirun -n 2 ./ex72 -f >> /Documents/PetscData/poisson_DMPLEX_32x32_16.dat -pc_type bddc -ksp_type cg >> -ksp_norm_type natural -ksp_error_if_not_converged -mat_type is >> >> The file was downloaded and put in the directory PetscData. >> >> The error is shown as follows, >> >> 0]PETSC ERROR: --------------------- Error Message >> -------------------------------------------------------------- >> [0]PETSC ERROR: Read from file failed >> [0]PETSC ERROR: Read past end of file >> [0]PETSC ERROR: WARNING! There are unused option(s) set! Could be the >> program crashed before usage or a spelling mistake, etc! >> [0]PETSC ERROR: Option left: name:-ksp_error_if_not_converged (no >> value) source: command line >> [0]PETSC ERROR: Option left: name:-ksp_norm_type value: natural source: >> command line >> [0]PETSC ERROR: Option left: name:-ksp_type value: cg source: command >> line >> [0]PETSC ERROR: Option left: name:-pc_type value: bddc source: command >> line >> [0]PETSC ERROR: See https://urldefense.us/v3/__https://petsc.org/release/faq/__;!!G_uCfscf7eWS!dwTKXCltGM9-bkaDPXs4VJqHhf4VpcCQ1EbltIKpsrXyXtSUl5mppPpD5dckFB6TV2fuRs-VxwKdeuGLe7mV4Toy5wnaJks$ >> >> for trouble shooting. 
>> [0]PETSC ERROR: Petsc Release Version 3.21.1, unknown >> [0]PETSC ERROR: ./ex72 on a arch-linux-c-opt named >> Mon Jul 29 15:50:04 2024 >> [0]PETSC ERROR: Configure options --with-cc=gcc --with-fc=gfortran >> --with-cxx=g++ --download-fblaslapack --download-mpich >> --with-scalar-type=complex --download-triangle --with-debugging=no >> [0]PETSC ERROR: #1 PetscBinaryRead() at >> /home/xxxxxx/Documents/petsc-3.21.1/petsc/src/sys/fileio/sysio.c:327 >> [0]PETSC ERROR: #2 PetscViewerBinaryWriteReadAll() at >> /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/sys/classes/viewer/impls/binary/binv.c:1077 >> [0]PETSC ERROR: #3 PetscViewerBinaryReadAll() at >> /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/sys/classes/viewer/impls/binary/binv.c:1119 >> [0]PETSC ERROR: #4 MatLoad_MPIAIJ_Binary() at >> Documents/petsc-3.21.1/petsc/src/mat/impls/aij/mpi/mpiaij.c:3093 >> [0]PETSC ERROR: #5 MatLoad_MPIAIJ() at >> /Documents/petsc-3.21.1/petsc/src/mat/impls/aij/mpi/mpiaij.c:3035 >> [0]PETSC ERROR: #6 MatLoad() at >> /Documents/petsc-3.21.1/petsc/src/mat/interface/matrix.c:1344 >> [0]PETSC ERROR: #7 MatLoad_IS() at >> /Documents/petsc-3.21.1/petsc/src/mat/impls/is/matis.c:2575 >> [0]PETSC ERROR: #8 MatLoad() at >> /home/Documents/petsc-3.21.1/petsc/src/mat/interface/matrix.c:1344 >> [0]PETSC ERROR: #9 main() at ex72.c:105 >> [0]PETSC ERROR: PETSc Option Table entries: >> [0]PETSC ERROR: -f >> /Documents/PetscData/poisson_DMPLEX_32x32_16.dat (source: command line) >> [0]PETSC ERROR: -ksp_error_if_not_converged (source: command line) >> [0]PETSC ERROR: -ksp_norm_type natural (source: command line) >> [0]PETSC ERROR: -ksp_type cg (source: command line) >> [0]PETSC ERROR: -mat_type is (source: command line) >> [0]PETSC ERROR: -pc_type bddc (source: command line) >> [0]PETSC ERROR: ----------------End of Error Message -------send entire >> error message to petsc-maint at mcs.anl.gov---------- >> application called MPI_Abort(MPI_COMM_SELF, 66) - process 0 >> >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From liufield at gmail.com Mon Jul 29 17:19:01 2024 From: liufield at gmail.com (neil liu) Date: Mon, 29 Jul 2024 18:19:01 -0400 Subject: [petsc-users] Trying to run https://petsc.org/release/src/ksp/ksp/tutorials/ex72.c.html In-Reply-To: References: Message-ID: When I compile with real data, it shows the attached error. The data file is in binary format, right? On Mon, Jul 29, 2024 at 5:36?PM Stefano Zampini wrote: > Your PETSc installation is for complex, data is for real > > On Mon, Jul 29, 2024, 23:14 neil liu wrote: > >> I compiled Petsc with single precision. However, it is not converged with >> the data. Please see the attached file. On Mon, Jul 29, 2024 at 4: 25 PM >> Barry Smith wrote: This can happen if the data was >> stored in single precision >> ZjQcmQRYFpfptBannerStart >> This Message Is From an External Sender >> This message came from outside your organization. >> >> ZjQcmQRYFpfptBannerEnd >> I compiled Petsc with single precision. However, it is not converged with >> the data. >> >> Please see the attached file. >> >> On Mon, Jul 29, 2024 at 4:25?PM Barry Smith wrote: >> >>> >>> This can happen if the data was stored in single precision and PETSc >>> was built for double. >>> >>> >>> On Jul 29, 2024, at 3:55?PM, neil liu wrote: >>> >>> This Message Is From an External Sender >>> This message came from outside your organization. 
>>> Dear Petsc developers,, >>> >>> I am trying to run >>> https://urldefense.us/v3/__https://petsc.org/release/src/ksp/ksp/tutorials/ex72.c.html__;!!G_uCfscf7eWS!bQ7LQoKmWInxGh234P3yc1H6VB9xutX12MkG2-7ZV5GSAmVt70Z2vX6aPGzGCL4t9ypRczUxcH684DnT-BO0kA$ >>> >>> with >>> >>> petsc-3.21.1/petsc/arch-linux-c-opt/bin/mpirun -n 2 ./ex72 -f >>> /Documents/PetscData/poisson_DMPLEX_32x32_16.dat -pc_type bddc -ksp_type cg >>> -ksp_norm_type natural -ksp_error_if_not_converged -mat_type is >>> >>> The file was downloaded and put in the directory PetscData. >>> >>> The error is shown as follows, >>> >>> 0]PETSC ERROR: --------------------- Error Message >>> -------------------------------------------------------------- >>> [0]PETSC ERROR: Read from file failed >>> [0]PETSC ERROR: Read past end of file >>> [0]PETSC ERROR: WARNING! There are unused option(s) set! Could be the >>> program crashed before usage or a spelling mistake, etc! >>> [0]PETSC ERROR: Option left: name:-ksp_error_if_not_converged (no >>> value) source: command line >>> [0]PETSC ERROR: Option left: name:-ksp_norm_type value: natural >>> source: command line >>> [0]PETSC ERROR: Option left: name:-ksp_type value: cg source: command >>> line >>> [0]PETSC ERROR: Option left: name:-pc_type value: bddc source: command >>> line >>> [0]PETSC ERROR: See https://urldefense.us/v3/__https://petsc.org/release/faq/__;!!G_uCfscf7eWS!bQ7LQoKmWInxGh234P3yc1H6VB9xutX12MkG2-7ZV5GSAmVt70Z2vX6aPGzGCL4t9ypRczUxcH684DmesNpXFA$ >>> >>> for trouble shooting. >>> [0]PETSC ERROR: Petsc Release Version 3.21.1, unknown >>> [0]PETSC ERROR: ./ex72 on a arch-linux-c-opt named >>> Mon Jul 29 15:50:04 2024 >>> [0]PETSC ERROR: Configure options --with-cc=gcc --with-fc=gfortran >>> --with-cxx=g++ --download-fblaslapack --download-mpich >>> --with-scalar-type=complex --download-triangle --with-debugging=no >>> [0]PETSC ERROR: #1 PetscBinaryRead() at >>> /home/xxxxxx/Documents/petsc-3.21.1/petsc/src/sys/fileio/sysio.c:327 >>> [0]PETSC ERROR: #2 PetscViewerBinaryWriteReadAll() at >>> /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/sys/classes/viewer/impls/binary/binv.c:1077 >>> [0]PETSC ERROR: #3 PetscViewerBinaryReadAll() at >>> /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/sys/classes/viewer/impls/binary/binv.c:1119 >>> [0]PETSC ERROR: #4 MatLoad_MPIAIJ_Binary() at >>> Documents/petsc-3.21.1/petsc/src/mat/impls/aij/mpi/mpiaij.c:3093 >>> [0]PETSC ERROR: #5 MatLoad_MPIAIJ() at >>> /Documents/petsc-3.21.1/petsc/src/mat/impls/aij/mpi/mpiaij.c:3035 >>> [0]PETSC ERROR: #6 MatLoad() at >>> /Documents/petsc-3.21.1/petsc/src/mat/interface/matrix.c:1344 >>> [0]PETSC ERROR: #7 MatLoad_IS() at >>> /Documents/petsc-3.21.1/petsc/src/mat/impls/is/matis.c:2575 >>> [0]PETSC ERROR: #8 MatLoad() at >>> /home/Documents/petsc-3.21.1/petsc/src/mat/interface/matrix.c:1344 >>> [0]PETSC ERROR: #9 main() at ex72.c:105 >>> [0]PETSC ERROR: PETSc Option Table entries: >>> [0]PETSC ERROR: -f >>> /Documents/PetscData/poisson_DMPLEX_32x32_16.dat (source: command line) >>> [0]PETSC ERROR: -ksp_error_if_not_converged (source: command line) >>> [0]PETSC ERROR: -ksp_norm_type natural (source: command line) >>> [0]PETSC ERROR: -ksp_type cg (source: command line) >>> [0]PETSC ERROR: -mat_type is (source: command line) >>> [0]PETSC ERROR: -pc_type bddc (source: command line) >>> [0]PETSC ERROR: ----------------End of Error Message -------send entire >>> error message to petsc-maint at mcs.anl.gov---------- >>> application called MPI_Abort(MPI_COMM_SELF, 66) - process 0 >>> >>> >>> -------------- next 
part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: error Type: application/octet-stream Size: 2330 bytes Desc: not available URL: From marco at kit.ac.jp Mon Jul 29 23:24:05 2024 From: marco at kit.ac.jp (Marco Seiz) Date: Tue, 30 Jul 2024 13:24:05 +0900 Subject: [petsc-users] Right DM for a particle network Message-ID: <54a9b9e5-691c-4535-bc49-5c00bc19a0df@kit.ac.jp> An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: ex_graphlapl.c Type: text/x-csrc Size: 6149 bytes Desc: not available URL: From knepley at gmail.com Tue Jul 30 06:58:23 2024 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 30 Jul 2024 07:58:23 -0400 Subject: [petsc-users] Right DM for a particle network In-Reply-To: <54a9b9e5-691c-4535-bc49-5c00bc19a0df@kit.ac.jp> References: <54a9b9e5-691c-4535-bc49-5c00bc19a0df@kit.ac.jp> Message-ID: On Tue, Jul 30, 2024 at 12:24?AM Marco Seiz wrote: > Hello, I'd like to solve transient heat transport at a particle scale > using TS, with the per-particle equation being something like dT_i / dt = > (S(T_i) + sum(F(T_j, T_i), j connecting to i)) with a nonlinear source term > S and a conduction term > ZjQcmQRYFpfptBannerStart > This Message Is From an External Sender > This message came from outside your organization. > > ZjQcmQRYFpfptBannerEnd > > Hello, > > I'd like to solve transient heat transport at a particle scale using TS, with the per-particle equation being something like > > dT_i / dt = (S(T_i) + sum(F(T_j, T_i), j connecting to i)) > > with a nonlinear source term S and a conduction term F. The particles can move, deform and grow/shrink/vanish on a voxel grid, but for the temperature a particle-scale resolution should be sufficient. The particles' connectivity will change during the simulation, but is assumed constant during a single timestep. I have a data structure tracking the particles' connectivity, so I can say which particles should conduct heat to each other. I exploit symmetry and so only save the "forward" edges, so e.g. for touching particles 1->2->3, I only store [[2], [3], []], from which the full list [[2], [1, 3], [2]] could be reconstructed but which I'd like to avoid. In parallel each worker would own some of the particle data, so e.g. for the 1->2->3 example and 2 workers, worker 0 could own [[2]] and worker 1 [[3],[]]. > > Looking over the DM variants, either DMNetwork or some manual mesh build with DMPlex seem suited for this. I'd especially like it if the adjacency information is handled by the DM automagically based on the edges so I don't have to deal with ghost particle communication myself. I already tried something basic with DMNetwork, though for some reason the offsets I get from DMNetworkGetGlobalVecOffset() are larger than the actual network. I've attached what I have so far but comparing to e.g. src/snes/tutorials/network/ex1.c I don't see what I'm doing wrong if I don't need data at the edges. I might not be seeing the trees for the forest though. The output with -dmnetwork_view looks reasonable to me. Any help in fixing this approach, or if it would seem suitable pointers to using DMPlex for this problem, would be appreciated. > > To me, this sounds like you should built it with DMSwarm. Why? 1) We only have vertices and edges, so a mesh does not buy us anything. 2) You are managing the parallel particle connectivity, so DMPlex topology is not buying us anything. 
Unless I am misunderstanding. 3) DMNetwork has a lot of support for vertices with different characteristics. Your particles all have the same attributes, so this is unnecessary. How would you set this up? 1) Declare all particle attributes. There are many Swarm examples, but say ex6 which simulates particles moving under a central force. 2) That example decides when to move particles using a parallel background mesh. However, you know which particles you want to move, so you just change the _rank_ field to the new rank and call DMSwarmMigrate() with migration type _basic_. It should be straightforward to setup a tiny example moving around a few particles to see if it does everything you want. Thanks, Matt > Best regards, > Marco > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!bLVHnoUGooYpdfGD8zNQrHTY2ln70W082hEc6pG7vdjA2fCvs77tcI9d7QOA0i_FjGK1of3nNOKXCEGdiWx7$ -------------- next part -------------- An HTML attachment was scrubbed... URL: From mfadams at lbl.gov Tue Jul 30 08:56:06 2024 From: mfadams at lbl.gov (Mark Adams) Date: Tue, 30 Jul 2024 09:56:06 -0400 Subject: [petsc-users] Right DM for a particle network In-Reply-To: References: <54a9b9e5-691c-4535-bc49-5c00bc19a0df@kit.ac.jp> Message-ID: * they do have a vocal mesh, so perhaps They want DM Plex. * they want ghost particle communication, that also might want a mesh * DM swarm does not have a notion of ghost particle, as far as I know, but it could use one On Tue, Jul 30, 2024 at 7:58?AM Matthew Knepley wrote: > On Tue, Jul 30, 2024 at 12: 24 AM Marco Seiz wrote: > Hello, I'd like to solve transient heat transport at a particle scale using > TS, with the per-particle equation being something like dT_i / dt = (S(T_i) > + sum(F(T_j, > ZjQcmQRYFpfptBannerStart > This Message Is From an External Sender > This message came from outside your organization. > > ZjQcmQRYFpfptBannerEnd > On Tue, Jul 30, 2024 at 12:24?AM Marco Seiz wrote: > >> Hello, I'd like to solve transient heat transport at a particle scale >> using TS, with the per-particle equation being something like dT_i / dt = >> (S(T_i) + sum(F(T_j, T_i), j connecting to i)) with a nonlinear source term >> S and a conduction term >> ZjQcmQRYFpfptBannerStart >> This Message Is From an External Sender >> This message came from outside your organization. >> >> ZjQcmQRYFpfptBannerEnd >> >> Hello, >> >> I'd like to solve transient heat transport at a particle scale using TS, with the per-particle equation being something like >> >> dT_i / dt = (S(T_i) + sum(F(T_j, T_i), j connecting to i)) >> >> with a nonlinear source term S and a conduction term F. The particles can move, deform and grow/shrink/vanish on a voxel grid, but for the temperature a particle-scale resolution should be sufficient. The particles' connectivity will change during the simulation, but is assumed constant during a single timestep. I have a data structure tracking the particles' connectivity, so I can say which particles should conduct heat to each other. I exploit symmetry and so only save the "forward" edges, so e.g. for touching particles 1->2->3, I only store [[2], [3], []], from which the full list [[2], [1, 3], [2]] could be reconstructed but which I'd like to avoid. In parallel each worker would own some of the particle data, so e.g. 
for the 1->2->3 example and 2 workers, worker 0 could own [[2]] and worker 1 [[3],[]]. >> >> Looking over the DM variants, either DMNetwork or some manual mesh build with DMPlex seem suited for this. I'd especially like it if the adjacency information is handled by the DM automagically based on the edges so I don't have to deal with ghost particle communication myself. I already tried something basic with DMNetwork, though for some reason the offsets I get from DMNetworkGetGlobalVecOffset() are larger than the actual network. I've attached what I have so far but comparing to e.g. src/snes/tutorials/network/ex1.c I don't see what I'm doing wrong if I don't need data at the edges. I might not be seeing the trees for the forest though. The output with -dmnetwork_view looks reasonable to me. Any help in fixing this approach, or if it would seem suitable pointers to using DMPlex for this problem, would be appreciated. >> >> To me, this sounds like you should built it with DMSwarm. Why? > > 1) We only have vertices and edges, so a mesh does not buy us anything. > > 2) You are managing the parallel particle connectivity, so DMPlex topology > is not buying us anything. Unless I am misunderstanding. > > 3) DMNetwork has a lot of support for vertices with different > characteristics. Your particles all have the same attributes, so this is > unnecessary. > > How would you set this up? > > 1) Declare all particle attributes. There are many Swarm examples, but say > ex6 which simulates particles moving under a central force. > > 2) That example decides when to move particles using a parallel background > mesh. However, you know which particles you want to move, > so you just change the _rank_ field to the new rank and call > DMSwarmMigrate() with migration type _basic_. > > It should be straightforward to setup a tiny example moving around a few > particles to see if it does everything you want. > > Thanks, > > Matt > > >> Best regards, >> Marco >> >> -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!ZZf2CYk1CsN2Q43ocye-8RrM6OO4A88bnH3Z4v_lyS33IuTwZEAglC2goNp7wyWJr6cONppAYIh0DQ0qFGzNNYg$ > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From liufield at gmail.com Tue Jul 30 10:50:18 2024 From: liufield at gmail.com (neil liu) Date: Tue, 30 Jul 2024 11:50:18 -0400 Subject: [petsc-users] Trying to run https://petsc.org/release/src/ksp/ksp/tutorials/ex72.c.html In-Reply-To: References: Message-ID: Hi, I am trying to use PCBDDC for the vector based FEM. (Complex system, double precision ) My code can work well with *asm*, petsc-3.21.1/petsc/arch-linux-c-opt/bin/mpirun -n 8 ./app -pc_type asm -pc_asm_overlap 6 -ksp_converged_reason -ksp_view -ksp_gmres_modifiedgramschmidt -ksp_gmres_restart 1500 -ksp_rtol 1e-8 -ksp_monitor -ksp_max_it 100000 When I tried BDDC, it was stuck for solving the linear system (it can not print anything for ksp_monitor). I did the conversion for matrix, * Mat J;* * MatConvert(A, MATIS, MAT_INITIAL_MATRIX, &J);* * KSPSetOperators(ksp, A, J);* * MatDestroy(&J);* * KSPSetInitialGuessNonzero(ksp, PETSC_TRUE);* * KSPSetFromOptions(ksp);* petsc-3.21.1/petsc/arch-linux-c-debug/bin/mpirun -n 2 ./app -ksp_type cg -pc_type bddc -ksp_monitor -mat_type is Do you have any suggestions? 
Thanks,
Xiaodong

On Mon, Jul 29, 2024 at 6:19 PM neil liu wrote:
> [...]
From stefano.zampini at gmail.com Tue Jul 30 10:56:06 2024
From: stefano.zampini at gmail.com (Stefano Zampini)
Date: Tue, 30 Jul 2024 17:56:06 +0200
Subject: [petsc-users] Trying to run https://petsc.org/release/src/ksp/ksp/tutorials/ex72.c.html
In-Reply-To: 
References: 
Message-ID: 

BDDC needs the matrix in MATIS format. Using MatConvert will give you back the right format, but the subdomain matrices are wrong. You need to assemble directly in MATIS format, something like

MatCreate(comm, &A)
MatSetType(A, MATIS)
MatSetLocalToGlobalMapping(A, l2gmap, l2gmap)
for e in local_elements:
    E = compute_element_matrix(e)
    MatSetValues(A, local_element_dofs, local_element_dofs, ....)

l2gmap is an ISLocalToGlobalMapping that stores the global dof number of the dofs that are local to the mesh.

See e.g. https://gitlab.com/petsc/petsc/-/blob/main/src/ksp/ksp/tutorials/ex59.c or https://gitlab.com/petsc/petsc/-/blob/main/src/ksp/ksp/tutorials/ex71.c
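In C, a slightly fuller version of the same sketch could look like this (untested; Nel, ndof_el, edofs[], Ke[], nowned, nghost and ghost_to_global[] are placeholders for your own mesh and element data):

Mat                    A;
ISLocalToGlobalMapping l2g;
PetscInt               e;

/* one global dof number for every dof touched by the local elements (owned + ghost) */
PetscCall(ISLocalToGlobalMappingCreate(PETSC_COMM_WORLD, 1, nghost, ghost_to_global, PETSC_COPY_VALUES, &l2g));

PetscCall(MatCreate(PETSC_COMM_WORLD, &A));
PetscCall(MatSetSizes(A, nowned, nowned, PETSC_DETERMINE, PETSC_DETERMINE));
PetscCall(MatSetType(A, MATIS));
PetscCall(MatSetLocalToGlobalMapping(A, l2g, l2g));

for (e = 0; e < Nel; e++) {
  /* Ke: ndof_el x ndof_el element matrix; edofs: local (ghosted) dof indices of element e */
  PetscCall(MatSetValuesLocal(A, ndof_el, &edofs[e * ndof_el], ndof_el, &edofs[e * ndof_el], &Ke[e * ndof_el * ndof_el], ADD_VALUES));
}
PetscCall(MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY));
PetscCall(MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY));
PetscCall(ISLocalToGlobalMappingDestroy(&l2g));

Here MatSetValuesLocal() is used with local (ghosted) indices, so there is no ambiguity about which dofs are present on the subdomain.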
Il giorno mar 30 lug 2024 alle ore 17:50 neil liu ha scritto:
> [...]

-- 
Stefano
From liufield at gmail.com Tue Jul 30 12:47:04 2024
From: liufield at gmail.com (neil liu)
Date: Tue, 30 Jul 2024 13:47:04 -0400
Subject: [petsc-users] Trying to run https://petsc.org/release/src/ksp/ksp/tutorials/ex72.c.html
In-Reply-To: 
References: 
Message-ID: 

Thanks, Stefano,

I am trying to modify the code as follows:

MatCreate(PETSC_COMM_WORLD, &A);
MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, numberDof_global, numberDof_global);
MatSetType(A, MATIS);
MatSetUp(A);
MatZeroEntries(A);
VecCreate(PETSC_COMM_WORLD, &b);
VecSetSizes(b, PETSC_DECIDE, numberDof_global);
VecSetUp(b);
VecSet(b, 0.0);
VecDuplicate(b, &x);

const PetscInt *g_idx;
ISLocalToGlobalMapping ltogm;
DMGetLocalToGlobalMapping(dm, &ltogm);
ISLocalToGlobalMappingGetIndices(ltogm, &g_idx);

// Build idxm_Global and set LHS
idxm_Global[idxDofLocal] = g_idx[numdofPerFace * idxm[idxDofLocal]];
MatSetValues(A, numberDof_local, idxm_Global.data(), numberDof_local, idxm_Global.data(), MatrixLocal.data(), ADD_VALUES);

// Set RHS
PetscScalar valueDiag = 1.0;
MatZeroRows(A, objGeometryInfo.numberDof_Dirichlet, (objGeometryInfo.arrayDofSeqGlobal_Dirichlet).data(), valueDiag, 0, 0);

VecSetValues(b, objGeometryInfo.numberDof_Dirichlet, (objGeometryInfo.arrayDofSeqGlobal_Dirichlet).data(), (objGeometryInfo.dof_Dirichlet).data(), INSERT_VALUES);
VecSetValues(x, objGeometryInfo.numberDof_Dirichlet, (objGeometryInfo.arrayDofSeqGlobal_Dirichlet).data(), (objGeometryInfo.dof_Dirichlet).data(), INSERT_VALUES);
ISLocalToGlobalMappingRestoreIndices(ltogm, &g_idx);
VecAssemblyBegin(b);
VecAssemblyEnd(b);
VecAssemblyBegin(x);
VecAssemblyEnd(x);

It shows the attached error when I run the code. It seems something is wrong with setting the RHS.
Could you please help me double-check my code above for setting up the RHS?

Thanks,

On Tue, Jul 30, 2024 at 11:56 AM Stefano Zampini wrote:
> [...]
-------------- next part --------------
A non-text attachment was scrubbed...
Name: error
Type: application/octet-stream
Size: 1321 bytes
Desc: not available

From liufield at gmail.com Tue Jul 30 13:51:11 2024
From: liufield at gmail.com (neil liu)
Date: Tue, 30 Jul 2024 14:51:11 -0400
Subject: [petsc-users] Trying to run https://petsc.org/release/src/ksp/ksp/tutorials/ex72.c.html
In-Reply-To: 
References: 
Message-ID: 

Hi, Stefano,

I am trying to understand the examples you mentioned. I have a question: the examples always use DMDA there. Does BDDC also work for DMPLEX?
Thanks,

On Tue, Jul 30, 2024 at 1:47 PM neil liu wrote:
> [...]

From marco at kit.ac.jp Tue Jul 30 22:32:18 2024
From: marco at kit.ac.jp (Marco Seiz)
Date: Wed, 31 Jul 2024 12:32:18 +0900
Subject: [petsc-users] Right DM for a particle network
In-Reply-To: 
References: <54a9b9e5-691c-4535-bc49-5c00bc19a0df@kit.ac.jp>
Message-ID: 

An HTML attachment was scrubbed...
-------------- next part --------------
A non-text attachment was scrubbed...
Name: graph_pos.jpg
Type: image/jpeg
Size: 114432 bytes
Desc: not available

From knepley at gmail.com Wed Jul 31 05:24:56 2024
From: knepley at gmail.com (Matthew Knepley)
Date: Wed, 31 Jul 2024 06:24:56 -0400
Subject: [petsc-users] Right DM for a particle network
In-Reply-To: 
References: <54a9b9e5-691c-4535-bc49-5c00bc19a0df@kit.ac.jp>
Message-ID: 

On Tue, Jul 30, 2024 at 11:32 PM Marco Seiz wrote:

> Hello,
>
> maybe to clarify a bit further: I'd essentially like to solve heat transport between particles only, without solving the transport on my voxel mesh, since there's a large scale difference between the voxel size and the particle size and heat transport should be fast enough that voxel resolution is unnecessary. Basically a discrete element method just for heat transport. The whole motion/size change part is handled separately on the voxel mesh.
> Based on the connectivity, I can make a graph (attached an example from a 3D case, for description see [1]) and on each vertex (particle) of the graph I want to account for source terms and conduction along the edges. What I'd like to avoid is managing the exchange for non-locally owned vertices during the solve (e.g. for RHS evaluation) myself but rather have the DM do it with DMGlobalToLocal() and friends. Thinking a bit further, I'd probably also want to associate some data with the edges since that will enter the conduction term but stays constant during a solve (think contact area between particles).
>
> Looking over the DMSwarm examples, the coupling between particles is via the background mesh, so e.g. I can't say "loop over all local particles and for each particle and its neighbours do X". I could use the projection part to dump the source terms from the particles to a coarser background mesh, but for the conduction term I don't see how I could get a good approximation of the contact area on the background mesh without having a mesh at a similar resolution as I already have, kinda destroying the purpose of the whole thing.

The point I was trying to make in my previous message is that DMSwarm does not require a background mesh. The examples use one because that is what we use to evaluate particle grouping. However, you have an independent way to do this, so you do not need it.

Second, the issue of replicated particles. DMSwarmMigrate allows you to replicate particles, using the input flag. Of course, you would have to manage removing particles you no longer want.
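For instance (untested; sw and rankval as in the earlier sketch, p and newrank are whatever your connectivity picks), passing PETSC_FALSE keeps the local copy of anything you send, so the particle ends up on both ranks:

PetscCall(DMSwarmGetField(sw, DMSwarmField_rank, NULL, NULL, (void **)&rankval));
rankval[p] = newrank;                        /* send a copy of particle p to newrank */
PetscCall(DMSwarmRestoreField(sw, DMSwarmField_rank, NULL, NULL, (void **)&rankval));
PetscCall(DMSwarmMigrate(sw, PETSC_FALSE));  /* PETSC_FALSE: sent points are not removed locally */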
  Thanks,

     Matt

> [1] Points represent particles, black lines are edges, with the color indicating which worker "owns" the particle, with 3 workers being used and only a fraction of edges/vertices being displayed to keep it somewhat tidy. The position of the points corresponds to the particles' x-y position, with the z position being ignored. Particle ownership isn't done via looking where the particle is on the voxel grid, but rather by dividing the array of particle indices into subarrays, so e.g. particles [0-n/3) go to the first worker and so on. Since my particles can span multiple workers on the voxel grid this makes it much easier to update edge information with one-sided communication. As you can see the "mesh" is quite irregular with no nice boundary existing for connected particles owned by different workers.
>
> Best regards,
> Marco
>
> On 30.07.24 22:56, Mark Adams wrote:
> > [...]
-- 
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/

From mfadams at lbl.gov Wed Jul 31 09:08:35 2024
From: mfadams at lbl.gov (Mark Adams)
Date: Wed, 31 Jul 2024 10:08:35 -0400
Subject: [petsc-users] Right DM for a particle network
In-Reply-To: 
References: <54a9b9e5-691c-4535-bc49-5c00bc19a0df@kit.ac.jp>
Message-ID: 

Just a thought, but perhaps he may want to use just sparse matrices, AIJ. He manages the connectivity and we manage ghost values.
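Roughly (untested sketch; nloc, N, maxnb and the conductance values stand in for his own data): the conduction graph becomes an MPIAIJ matrix K, and MatMult() then does the off-process ("ghost") communication:

Mat K;
Vec T, F;

/* K(i,j) = conductance between particles i and j; row i lives on the rank that owns particle i */
PetscCall(MatCreateAIJ(PETSC_COMM_WORLD, nloc, nloc, N, N, maxnb, NULL, maxnb, NULL, &K));
/* ... fill K with MatSetValues() from the connectivity data ... */
PetscCall(MatAssemblyBegin(K, MAT_FINAL_ASSEMBLY));
PetscCall(MatAssemblyEnd(K, MAT_FINAL_ASSEMBLY));
PetscCall(MatCreateVecs(K, &T, &F));

/* conduction contribution to dT/dt: F = K * T, no hand-written ghost exchange */
PetscCall(MatMult(K, T, F));

The matrix would have to be rebuilt whenever the connectivity changes.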
> > Second, the issue of replicated particles. DMSwarmMigrate allows you to > replicate particles, using the input flag. Of course, you would have to > manage removing particles you no longer want. > > Thanks, > > Matt > > >> [1] Points represent particles, black lines are edges, with the color >> indicating which worker "owns" the particle, with 3 workers being used and >> only a fraction of edges/vertices being displayed to keep it somewhat tidy. >> The position of the points corresponds to the particles' x-y position, with >> the z position being ignored. Particle ownership isn't done via looking >> where the particle is on the voxel grid, but rather by dividing the array >> of particle indices into subarrays, so e.g. particles [0-n/3) go to the >> first worker and so on. Since my particles can span multiple workers on the >> voxel grid this makes it much easier to update edge information with >> one-sided communication. As you can see the "mesh" is quite irregular with >> no nice boundary existing for connected particles owned by different >> workers. >> >> Best regards, >> Marco >> >> On 30.07.24 22:56, Mark Adams wrote: >> > * they do have a vocal mesh, so perhaps They want DM Plex. >> > >> > * they want ghost particle communication, that also might want a mesh >> > >> > * DM swarm does not have a notion of ghost particle, as far as I know, >> but it could use one >> > >> > On Tue, Jul 30, 2024 at 7:58?AM Matthew Knepley > > wrote: >> > >> > On Tue, Jul 30, 2024 at 12: 24 AM Marco Seiz >> wrote: Hello, I'd like to solve transient heat transport at a particle >> scale using TS, with the per-particle equation being something like dT_i / >> dt = (S(T_i) + sum(F(T_j, >> > ZjQcmQRYFpfptBannerStart >> > __ >> > This Message Is From an External Sender >> > This message came from outside your organization. >> > >> > __ >> > ZjQcmQRYFpfptBannerEnd >> > On Tue, Jul 30, 2024 at 12:24?AM Marco Seiz > > wrote: >> > >> > __ >> > Hello, I'd like to solve transient heat transport at a particle >> scale using TS, with the per-particle equation being something like dT_i / >> dt = (S(T_i) + sum(F(T_j, T_i), j connecting to i)) with a nonlinear source >> term S and a conduction term >> > ZjQcmQRYFpfptBannerStart >> > __ >> > This Message Is From an External Sender >> > This message came from outside your organization. >> > >> > __ >> > ZjQcmQRYFpfptBannerEnd >> > >> > Hello, >> > >> > I'd like to solve transient heat transport at a particle scale >> using TS, with the per-particle equation being something like >> > >> > dT_i / dt = (S(T_i) + sum(F(T_j, T_i), j connecting to i)) >> > >> > with a nonlinear source term S and a conduction term F. The >> particles can move, deform and grow/shrink/vanish on a voxel grid, but for >> the temperature a particle-scale resolution should be sufficient. The >> particles' connectivity will change during the simulation, but is assumed >> constant during a single timestep. I have a data structure tracking the >> particles' connectivity, so I can say which particles should conduct heat >> to each other. I exploit symmetry and so only save the "forward" edges, so >> e.g. for touching particles 1->2->3, I only store [[2], [3], []], from >> which the full list [[2], [1, 3], [2]] could be reconstructed but which I'd >> like to avoid. In parallel each worker would own some of the particle data, >> so e.g. for the 1->2->3 example and 2 workers, worker 0 could own [[2]] and >> worker 1 [[3],[]]. 
>> > >> > Looking over the DM variants, either DMNetwork or some manual >> mesh build with DMPlex seem suited for this. I'd especially like it if the >> adjacency information is handled by the DM automagically based on the edges >> so I don't have to deal with ghost particle communication myself. I already >> tried something basic with DMNetwork, though for some reason the offsets I >> get from DMNetworkGetGlobalVecOffset() are larger than the actual network. >> I've attached what I have so far but comparing to e.g. >> src/snes/tutorials/network/ex1.c I don't see what I'm doing wrong if I >> don't need data at the edges. I might not be seeing the trees for the >> forest though. The output with -dmnetwork_view looks reasonable to me. Any >> help in fixing this approach, or if it would seem suitable pointers to >> using DMPlex for this problem, would be appreciated. >> > >> > To me, this sounds like you should built it with DMSwarm. Why? >> > >> > 1) We only have vertices and edges, so a mesh does not buy us >> anything. >> > >> > 2) You are managing the parallel particle connectivity, so DMPlex >> topology is not buying us anything. Unless I am misunderstanding. >> > >> > 3) DMNetwork has a lot of support for vertices with different >> characteristics. Your particles all have the same attributes, so this is >> unnecessary. >> > >> > How would you set this up? >> > >> > 1) Declare all particle attributes. There are many Swarm examples, >> but say ex6 which simulates particles moving under a central force. >> > >> > 2) That example decides when to move particles using a parallel >> background mesh. However, you know which particles you want to move, >> > so you just change the _rank_ field to the new rank and call >> DMSwarmMigrate() with migration type _basic_. >> > >> > It should be straightforward to setup a tiny example moving around >> a few particles to see if it does everything you want. >> > >> > Thanks, >> > >> > Matt >> > >> > >> > Best regards, >> > Marco >> > >> > -- >> > What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> > -- Norbert Wiener >> > >> > https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!ZJr2_J0yDqE986DrPL02S_TaBzfsRouKOJpjlNaYGxRPDZjvsFmDXuM54NAtwLM9H4zC3A2fiCmrGuv4SoW9qjg$ < >> https://urldefense.us/v3/__http://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!bLVHnoUGooYpdfGD8zNQrHTY2ln70W082hEc6pG7vdjA2fCvs77tcI9d7QOA0i_FjGK1of3nNOKXCE-4Rwb0$ >> > >> > > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!ZJr2_J0yDqE986DrPL02S_TaBzfsRouKOJpjlNaYGxRPDZjvsFmDXuM54NAtwLM9H4zC3A2fiCmrGuv4SoW9qjg$ > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Wed Jul 31 09:28:14 2024 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 31 Jul 2024 10:28:14 -0400 Subject: [petsc-users] Right DM for a particle network In-Reply-To: References: <54a9b9e5-691c-4535-bc49-5c00bc19a0df@kit.ac.jp> Message-ID: On Wed, Jul 31, 2024 at 10:08?AM Mark Adams wrote: > Just a thought, but perhaps he may want to use just sparse matrices, AIJ. > He Manages the connectivity And we manage ghost values. 
He is reconfiguring the neighborhood (row) each time, so you would essentially create a new matrix at each step with different sparsity. It would definitely function, but I wonder if he would have enough local information to construct the rows?

  Thanks,

     Matt

> On Wed, Jul 31, 2024 at 6:25 AM Matthew Knepley wrote:
> > [...]
>>> >>> Best regards, >>> Marco >>> >>> On 30.07.24 22:56, Mark Adams wrote: >>> > * they do have a vocal mesh, so perhaps They want DM Plex. >>> > >>> > * they want ghost particle communication, that also might want a mesh >>> > >>> > * DM swarm does not have a notion of ghost particle, as far as I know, >>> but it could use one >>> > >>> > On Tue, Jul 30, 2024 at 7:58?AM Matthew Knepley >> > wrote: >>> > >>> > On Tue, Jul 30, 2024 at 12: 24 AM Marco Seiz >>> wrote: Hello, I'd like to solve transient heat transport at a particle >>> scale using TS, with the per-particle equation being something like dT_i / >>> dt = (S(T_i) + sum(F(T_j, >>> > ZjQcmQRYFpfptBannerStart >>> > __ >>> > This Message Is From an External Sender >>> > This message came from outside your organization. >>> > >>> > __ >>> > ZjQcmQRYFpfptBannerEnd >>> > On Tue, Jul 30, 2024 at 12:24?AM Marco Seiz >> > wrote: >>> > >>> > __ >>> > Hello, I'd like to solve transient heat transport at a >>> particle scale using TS, with the per-particle equation being something >>> like dT_i / dt = (S(T_i) + sum(F(T_j, T_i), j connecting to i)) with a >>> nonlinear source term S and a conduction term >>> > ZjQcmQRYFpfptBannerStart >>> > __ >>> > This Message Is From an External Sender >>> > This message came from outside your organization. >>> > >>> > __ >>> > ZjQcmQRYFpfptBannerEnd >>> > >>> > Hello, >>> > >>> > I'd like to solve transient heat transport at a particle scale >>> using TS, with the per-particle equation being something like >>> > >>> > dT_i / dt = (S(T_i) + sum(F(T_j, T_i), j connecting to i)) >>> > >>> > with a nonlinear source term S and a conduction term F. The >>> particles can move, deform and grow/shrink/vanish on a voxel grid, but for >>> the temperature a particle-scale resolution should be sufficient. The >>> particles' connectivity will change during the simulation, but is assumed >>> constant during a single timestep. I have a data structure tracking the >>> particles' connectivity, so I can say which particles should conduct heat >>> to each other. I exploit symmetry and so only save the "forward" edges, so >>> e.g. for touching particles 1->2->3, I only store [[2], [3], []], from >>> which the full list [[2], [1, 3], [2]] could be reconstructed but which I'd >>> like to avoid. In parallel each worker would own some of the particle data, >>> so e.g. for the 1->2->3 example and 2 workers, worker 0 could own [[2]] and >>> worker 1 [[3],[]]. >>> > >>> > Looking over the DM variants, either DMNetwork or some manual >>> mesh build with DMPlex seem suited for this. I'd especially like it if the >>> adjacency information is handled by the DM automagically based on the edges >>> so I don't have to deal with ghost particle communication myself. I already >>> tried something basic with DMNetwork, though for some reason the offsets I >>> get from DMNetworkGetGlobalVecOffset() are larger than the actual network. >>> I've attached what I have so far but comparing to e.g. >>> src/snes/tutorials/network/ex1.c I don't see what I'm doing wrong if I >>> don't need data at the edges. I might not be seeing the trees for the >>> forest though. The output with -dmnetwork_view looks reasonable to me. Any >>> help in fixing this approach, or if it would seem suitable pointers to >>> using DMPlex for this problem, would be appreciated. >>> > >>> > To me, this sounds like you should built it with DMSwarm. Why? >>> > >>> > 1) We only have vertices and edges, so a mesh does not buy us >>> anything. 
>>> > >>> > 2) You are managing the parallel particle connectivity, so DMPlex >>> topology is not buying us anything. Unless I am misunderstanding. >>> > >>> > 3) DMNetwork has a lot of support for vertices with different >>> characteristics. Your particles all have the same attributes, so this is >>> unnecessary. >>> > >>> > How would you set this up? >>> > >>> > 1) Declare all particle attributes. There are many Swarm examples, >>> but say ex6 which simulates particles moving under a central force. >>> > >>> > 2) That example decides when to move particles using a parallel >>> background mesh. However, you know which particles you want to move, >>> > so you just change the _rank_ field to the new rank and call >>> DMSwarmMigrate() with migration type _basic_. >>> > >>> > It should be straightforward to setup a tiny example moving around >>> a few particles to see if it does everything you want. >>> > >>> > Thanks, >>> > >>> > Matt >>> > >>> > >>> > Best regards, >>> > Marco >>> > >>> > -- >>> > What most experimenters take for granted before they begin their >>> experiments is infinitely more interesting than any results to which their >>> experiments lead. >>> > -- Norbert Wiener >>> > >>> > https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!eru_lbEwlvhK2Nbu6OsMVCn27QSZS67cnVfZgM-_OSyUCm_OPRqO4HSgfcDBzj2a9C4AC8cb_OVLajN1Jsba$ < >>> https://urldefense.us/v3/__http://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!bLVHnoUGooYpdfGD8zNQrHTY2ln70W082hEc6pG7vdjA2fCvs77tcI9d7QOA0i_FjGK1of3nNOKXCE-4Rwb0$ >>> > >>> > >> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> >> https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!eru_lbEwlvhK2Nbu6OsMVCn27QSZS67cnVfZgM-_OSyUCm_OPRqO4HSgfcDBzj2a9C4AC8cb_OVLajN1Jsba$ >> >> > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!eru_lbEwlvhK2Nbu6OsMVCn27QSZS67cnVfZgM-_OSyUCm_OPRqO4HSgfcDBzj2a9C4AC8cb_OVLajN1Jsba$ -------------- next part -------------- An HTML attachment was scrubbed... URL: From Eric.Chamberland at giref.ulaval.ca Wed Jul 31 15:16:47 2024 From: Eric.Chamberland at giref.ulaval.ca (Eric Chamberland) Date: Wed, 31 Jul 2024 16:16:47 -0400 Subject: [petsc-users] How to combine different element types into a single DMPlex? In-Reply-To: References: <6e78845e-2054-92b1-d6db-2c0820c05b64@giref.ulaval.ca> Message-ID: <9021c53e-18af-428a-978a-54a3c7371378@giref.ulaval.ca> Hi Vaclav, Okay, I am coming back with this question after some time... ;) I am just wondering if it is now possible to call DMPlexBuildFromCellListParallel or something else, to build a mesh that combine different element types into a single DMPlex (in parallel of course) ? Thanks, Eric On 2021-09-23 11:30, Hapla Vaclav wrote: > Note there will soon be a generalization of > DMPlexBuildFromCellListParallel() around, as a side product of our > current collaborative efforts with Firedrake guys. It will take a > PetscSection instead of relying on the blocksize [which is indeed > always constant for the given dataset]. Stay tuned. 
> > https://urldefense.us/v3/__https://gitlab.com/petsc/petsc/-/merge_requests/4350__;!!G_uCfscf7eWS!fgngRI84CI_6fHdWdRiVbv0bvraQkhXyV396Uo5y2mORbR4DjlMYrHTqX0p22Ysp8t9yJPhCL1E3fkbWxmIZpQp6dYhcdaKz9_yUIGas$ > > Thanks, > > Vaclav > >> On 23 Sep 2021, at 16:53, Eric Chamberland >> wrote: >> >> Hi, >> >> oh, that's a great news! >> >> In our case we have our home-made file-format, invariant to the >> number of processes (thanks to MPI_File_set_view), that uses >> collective, asynchronous MPI I/O native calls for unstructured hybrid >> meshes and fields . >> >> So our needs are not for reading meshes but only to fill an hybrid >> DMPlex with DMPlexBuildFromCellListParallel (or something else to >> come?)... to exploit petsc partitioners and parallel overlap >> computation... >> >> Thanks for the follow-up! :) >> >> Eric >> >> >> On 2021-09-22 7:20 a.m., Matthew Knepley wrote: >>> On Wed, Sep 22, 2021 at 3:04 AM Karin&NiKo wrote: >>> >>> Dear Matthew, >>> >>> This is great news! >>> For my part, I would be mostly interested?in the parallel input >>> interface. Sorry for that... >>> Indeed, in our application, we already have a parallel mesh data >>> structure that supports hybrid meshes with parallel I/O and >>> distribution (based on the MED format). We would like to use a >>> DMPlex to make parallel mesh adaptation. >>> ?As a matter of fact, all our meshes are in the MED format. We >>> could also?contribute to extend the interface of DMPlex with MED >>> (if you consider it could be usefull). >>> >>> >>> An MED interface does exist. I stopped using it for two reasons: >>> >>> ? 1) The code was not portable and the build was failing on >>> different architectures. I had to manually fix it. >>> >>> ? 2) The boundary markers did not provide global information, so >>> that parallel reading was much harder. >>> >>> Feel free to update my MED reader to a better design. >>> >>> ? Thanks, >>> >>> ? ? ?Matt >>> >>> Best regards, >>> Nicolas >>> >>> >>> Le?mar. 21 sept. 2021 ??21:56, Matthew Knepley >>> a ?crit?: >>> >>> On Tue, Sep 21, 2021 at 10:31 AM Karin&NiKo >>> wrote: >>> >>> Dear Eric, dear Matthew, >>> >>> I share Eric's desire to be able to manipulate meshes >>> composed of different types of elements in a PETSc's >>> DMPlex. >>> Since this discussion, is there anything new on this >>> feature for the DMPlex?object or am I missing something? >>> >>> >>> Thanks for finding this! >>> >>> Okay, I did a rewrite of the Plex internals this summer. It >>> should now be possible to interpolate a mesh with any >>> number of cell types, partition it, redistribute it, and >>> many other manipulations. >>> >>> You can read in some formats that support hybrid?meshes. If >>> you let me know how you plan to read it in, we can make it work. >>> Right now, I don't want to make input interfaces that no one >>> will ever use. We have a project, joint with Firedrake, to >>> finalize >>> parallel I/O. This will make parallel reading and writing >>> for checkpointing possible, supporting topology, geometry, >>> fields and >>> layouts, for many meshes?in one HDF5 file. I think we will >>> finish in November. >>> >>> ? Thanks, >>> >>> ? ? ?Matt >>> >>> Thanks, >>> Nicolas >>> >>> Le?mer. 21 juil. 
2021 ??04:25, Eric Chamberland >>> a ?crit?: >>> >>> Hi, >>> >>> On 2021-07-14 3:14 p.m., Matthew Knepley wrote: >>>> On Wed, Jul 14, 2021 at 1:25 PM Eric Chamberland >>>> wrote: >>>> >>>> Hi, >>>> >>>> while playing with >>>> DMPlexBuildFromCellListParallel, I noticed we >>>> have to >>>> specify "numCorners" which is a fixed value, >>>> then gives a fixed number >>>> of nodes for a series of elements. >>>> >>>> How can I then add, for example, triangles and >>>> quadrangles into a DMPlex? >>>> >>>> >>>> You can't with that function. It would be much mich >>>> more complicated if you could, and I am not sure >>>> it is worth it for that function. The reason is >>>> that you would need index information to >>>> offset?into the >>>> connectivity list, and that would need to be >>>> replicated to some extent so that all processes >>>> know what >>>> the others are doing. Possible, but complicated. >>>> >>>> Maybe I can help suggest something for what you are >>>> trying?to do? >>> >>> Yes: we are trying to partition our parallel mesh >>> with PETSc functions. The mesh has been read in >>> parallel so each process owns a part of it, but we >>> have to manage mixed elements types. >>> >>> When we directly use ParMETIS_V3_PartMeshKway, we >>> give two arrays to describe the elements which >>> allows mixed elements. >>> >>> So, how would I read my mixed mesh in parallel and >>> give it to PETSc DMPlex so I can use a >>> PetscPartitioner with DMPlexDistribute ? >>> >>> A second goal we have is to use PETSc to compute the >>> overlap, which is something I can't find in PARMetis >>> (and any other partitionning library?) >>> >>> Thanks, >>> >>> Eric >>> >>> >>>> >>>> ? Thanks, >>>> >>>> ? ? ? Matt >>>> >>>> Thanks, >>>> >>>> Eric >>>> >>>> -- >>>> Eric Chamberland, ing., M. Ing >>>> Professionnel de recherche >>>> GIREF/Universit? Laval >>>> (418) 656-2131 poste 41 22 42 >>>> >>>> >>>> >>>> -- >>>> What most experimenters take for granted before >>>> they begin their experiments is infinitely more >>>> interesting than any results to which their >>>> experiments lead. >>>> -- Norbert Wiener >>>> >>>> https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!fgngRI84CI_6fHdWdRiVbv0bvraQkhXyV396Uo5y2mORbR4DjlMYrHTqX0p22Ysp8t9yJPhCL1E3fkbWxmIZpQp6dYhcdaKz9-bqwEGn$ >>>> >>> >>> -- >>> Eric Chamberland, ing., M. Ing >>> Professionnel de recherche >>> GIREF/Universit? Laval >>> (418) 656-2131 poste 41 22 42 >>> >>> >>> >>> -- >>> What most experimenters take for granted before they begin >>> their experiments is infinitely more interesting than any >>> results to which their experiments lead. >>> -- Norbert Wiener >>> >>> https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!fgngRI84CI_6fHdWdRiVbv0bvraQkhXyV396Uo5y2mORbR4DjlMYrHTqX0p22Ysp8t9yJPhCL1E3fkbWxmIZpQp6dYhcdaKz9-bqwEGn$ >>> >>> >>> >>> >>> -- >>> What most experimenters take for granted before they begin their >>> experiments is infinitely more interesting than any results to which >>> their experiments lead. >>> -- Norbert Wiener >>> >>> https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!fgngRI84CI_6fHdWdRiVbv0bvraQkhXyV396Uo5y2mORbR4DjlMYrHTqX0p22Ysp8t9yJPhCL1E3fkbWxmIZpQp6dYhcdaKz9-bqwEGn$ >>> >> -- >> Eric Chamberland, ing., M. Ing >> Professionnel de recherche >> GIREF/Universit? Laval >> (418) 656-2131 poste 41 22 42 > -- Eric Chamberland, ing., M. Ing Professionnel de recherche GIREF/Universit? 
Laval (418) 656-2131 poste 41 22 42 -------------- next part -------------- An HTML attachment was scrubbed... URL: From liufield at gmail.com Wed Jul 31 17:01:03 2024 From: liufield at gmail.com (neil liu) Date: Wed, 31 Jul 2024 18:01:03 -0400 Subject: [petsc-users] Trying to run https://petsc.org/release/src/ksp/ksp/tutorials/ex72.c.html In-Reply-To: References: Message-ID: Hi, all, Following Stefano's advice, my code is reorganized as follows, *MatCreate(PETSC_COMM_WORLD, &A);* * MatSetType(A, MATIS); MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, numberDof_global, numberDof_global); MatSetUp(A); ISLocalToGlobalMapping ltogm; DMGetLocalToGlobalMapping(dm, <ogm); MatSetLocalToGlobalMapping(A, ltogm, ltogm);* Then I just ran the above code snippet, which gave me some errors as following. (Local size 67 not compatible with block size 2). It doesn't seems it is actually calling my routine, but I could be wrong about this. Can anyone give me some ideas to debug this issue? I am just coding vector FEM, assigning 2 dofs each edge and 2 dofs each face. The ltogm seems normal. ISLocalToGlobalMapping Object: 2 MPI processes type not yet set [0] 0:2 0:2 [0] 2:4 2:4 [0] 4:6 54:56 [0] 6:8 4:6 [0] 8:10 6:8 [0] 10:12 8:10 [0] 12:14 10:12 [0] 14:16 56:58 [0] 16:18 12:14 [0] 18:20 14:16 [0] 20:22 16:18 [0] 22:24 58:60 [0] 24:26 18:20 [0] 26:28 20:22 [0] 28:30 22:24 [0] 30:32 24:26 [0] 32:34 60:62 [0] 34:36 26:28 [0] 36:38 28:30 [0] 38:40 30:32 [0] 40:42 32:34 [0] 42:44 92:94 [0] 44:46 94:96 [0] 46:48 34:36 [0] 48:50 96:98 [0] 50:52 36:38 [0] 52:54 98:100 [0] 54:56 38:40 [0] 56:58 40:42 [0] 58:60 100:102 [0] 60:62 42:44 [0] 62:64 102:104 [0] 64:66 44:46 [0] 66:68 104:106 [0] 68:70 46:48 [0] 70:72 106:108 [0] 72:74 48:50 [0] 74:76 50:52 [0] 76:78 108:110 [0] 78:80 52:54 [1] 0:2 54:56 [1] 2:4 56:58 [1] 4:6 58:60 [1] 6:8 60:62 [1] 8:10 62:64 [1] 10:12 64:66 [1] 12:14 66:68 [1] 14:16 68:70 [1] 16:18 70:72 [1] 18:20 72:74 [1] 20:22 74:76 [1] 22:24 76:78 [1] 24:26 78:80 [1] 26:28 80:82 [1] 28:30 82:84 [1] 30:32 84:86 [1] 32:34 86:88 [1] 34:36 88:90 [1] 36:38 90:92 [1] 38:40 92:94 [1] 40:42 94:96 [1] 42:44 96:98 [1] 44:46 98:100 [1] 46:48 100:102 [1] 48:50 102:104 [1] 50:52 104:106 [1] 52:54 106:108 [1] 54:56 108:110 [1] 56:58 110:112 [1] 58:60 112:114 [1] 60:62 114:116 [1] 62:64 116:118 [1] 64:66 118:120 [1] 66:68 120:122 [1] 68:70 122:124 [1] 70:72 124:126 [1] 72:74 126:128 [1] 74:76 128:130 [1] 76:78 130:132 [1] 78:80 132:134 [0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- [0]PETSC ERROR: Arguments are incompatible [0]PETSC ERROR: [1]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- Local size 67 not compatible with block size 2 [0]PETSC ERROR: See https://urldefense.us/v3/__https://petsc.org/release/faq/__;!!G_uCfscf7eWS!e-uvzwHpfgg9sVc4ib2hKBtNW-zwESfVKM06g2WKVg_pt-OLo4KoCcW693V4rObDsmlBOSQczWEJUUxyeldBwA$ for trouble shooting. 
[0]PETSC ERROR: Petsc Release Version 3.21.1, unknown [0]PETSC ERROR: [1]PETSC ERROR: Arguments are incompatible [1]PETSC ERROR: Local size 67 not compatible with block size 2 [1]PETSC ERROR: ./app on a arch-linux-c-debug by xiaodong.liu Wed Jul 31 17:43:28 2024 [0]PETSC ERROR: Configure options --with-cc=gcc --with-fc=gfortran --with-cxx=g++ --download-fblaslapack --download-mpich --with-scalar-type=complex --download-triangle [0]PETSC ERROR: See https://urldefense.us/v3/__https://petsc.org/release/faq/__;!!G_uCfscf7eWS!e-uvzwHpfgg9sVc4ib2hKBtNW-zwESfVKM06g2WKVg_pt-OLo4KoCcW693V4rObDsmlBOSQczWEJUUxyeldBwA$ for trouble shooting. [1]PETSC ERROR: Petsc Release Version 3.21.1, unknown [1]PETSC ERROR: ./app on a arch-linux-c-debug by xiaodong.liu Wed Jul 31 17:43:28 2024 [1]PETSC ERROR: #1 PetscLayoutSetBlockSize() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/vec/is/utils/pmap.c:473 [0]PETSC ERROR: #2 MatSetLocalToGlobalMapping_IS() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/mat/impls/is/matis.c:2831 [0]PETSC ERROR: #3 MatSetLocalToGlobalMapping() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/mat/interface/matrix.c:2252 Configure options --with-cc=gcc --with-fc=gfortran --with-cxx=g++ --download-fblaslapack --download-mpich --with-scalar-type=complex --download-triangle [1]PETSC ERROR: #1 PetscLayoutSetBlockSize() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/vec/is/utils/pmap.c:473 [1]PETSC ERROR: After Mat set local to global mapping! #2 MatSetLocalToGlobalMapping_IS() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/mat/impls/is/matis.c:2831 [1]PETSC ERROR: #3 MatSetLocalToGlobalMapping() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/mat/interface/matrix.c:2252 After Mat set local to global mapping! Thanks, On Tue, Jul 30, 2024 at 2:51?PM neil liu wrote: > Hi, Stefano, > > I am trying to understand the example there you mentioned. I have a > question, > the example always use DMDA there. Does BDDC also work for DMPLEX? > > Thanks , > > On Tue, Jul 30, 2024 at 1:47?PM neil liu wrote: > >> Thanks, Stefano, >> >> I am trying to modify the code as follows, >> MatCreate(PETSC_COMM_WORLD, &A); >> MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, numberDof_global, >> numberDof_global); >> MatSetType(A, MATIS); >> MatSetUp(A); >> MatZeroEntries(A); >> VecCreate(PETSC_COMM_WORLD, &b); >> VecSetSizes(b, PETSC_DECIDE, numberDof_global); >> VecSetUp(b); >> VecSet(b,0.0); >> VecDuplicate(b, &x); >> >> const PetscInt *g_idx; >> ISLocalToGlobalMapping ltogm; >> DMGetLocalToGlobalMapping(dm, <ogm); >> ISLocalToGlobalMappingGetIndices(ltogm, &g_idx); >> >> //Build idxm_global and Set LHS >> idxm_Global[ idxDofLocal ] = g_idx[ numdofPerFace*idxm[idxDofLocal]]; >> MatSetValues(A, numberDof_local, idxm_Global.data(), numberDof_local, >> idxm_Global.data(), MatrixLocal.data(), ADD_VALUES); >> >> //Set RHS >> PetscScalar valueDiag = 1.0 ; >> MatZeroRows(A, objGeometryInfo.numberDof_Dirichlet, (objGeometryInfo. >> arrayDofSeqGlobal_Dirichlet).data(), valueDiag, 0, 0); >> >> VecSetValues(b, objGeometryInfo.numberDof_Dirichlet, (objGeometryInfo. >> arrayDofSeqGlobal_Dirichlet).data(), (objGeometryInfo.dof_Dirichlet).data(), >> INSERT_VALUES); >> VecSetValues(x, objGeometryInfo.numberDof_Dirichlet, (objGeometryInfo. 
>> arrayDofSeqGlobal_Dirichlet).data(), (objGeometryInfo.dof_Dirichlet).data(), >> INSERT_VALUES); >> ISLocalToGlobalMappingRestoreIndices(ltogm, &g_idx); >> VecAssemblyBegin(b); >> VecAssemblyEnd(b); >> VecAssemblyBegin(x); >> VecAssemblyEnd(x); >> It shows the attached error when I run the code. It seems something wrong >> is with setting RHS. >> Could you please help me double check my above code to setup the RHS? >> Thanks, >> >> On Tue, Jul 30, 2024 at 11:56?AM Stefano Zampini < >> stefano.zampini at gmail.com> wrote: >> >>> BDDC needs the matrix in MATIS format. Using MatConvert will give you >>> back the right format, but the subdomain matrices are wrong. You need to >>> assemble directly in MATIS format, something like >>> >>> MatCreate(comm,&A) >>> MatSetType(A,MATIS) >>> MatSetLocalToGlobalMapping(A,l2gmap, l2gmap) >>> for e in local_elements: >>> E = compute_element_matrix(e) >>> MatSetValues(A,local_element_dofs,local_element_dofs,....) >>> >>> l2gmap is an ISLocalToGlobalMapping that stores the global dof number of >>> the dofs that are local to the mesh >>> >>> See e.g. >>> https://urldefense.us/v3/__https://gitlab.com/petsc/petsc/-/blob/main/src/ksp/ksp/tutorials/ex59.c?ref_type=heads__;!!G_uCfscf7eWS!e-uvzwHpfgg9sVc4ib2hKBtNW-zwESfVKM06g2WKVg_pt-OLo4KoCcW693V4rObDsmlBOSQczWEJUUyBIcTMGA$ >>> or >>> https://urldefense.us/v3/__https://gitlab.com/petsc/petsc/-/blob/main/src/ksp/ksp/tutorials/ex71.c?ref_type=heads__;!!G_uCfscf7eWS!e-uvzwHpfgg9sVc4ib2hKBtNW-zwESfVKM06g2WKVg_pt-OLo4KoCcW693V4rObDsmlBOSQczWEJUUzNHwlGXQ$ >>> >>> Il giorno mar 30 lug 2024 alle ore 17:50 neil liu >>> ha scritto: >>> >>>> Hi, >>>> I am trying to use PCBDDC for the vector based FEM. (Complex system, >>>> double precision ) >>>> My code can work well with *asm*, >>>> petsc-3.21.1/petsc/arch-linux-c-opt/bin/mpirun -n 8 ./app -pc_type asm >>>> -pc_asm_overlap 6 -ksp_converged_reason -ksp_view >>>> -ksp_gmres_modifiedgramschmidt -ksp_gmres_restart 1500 -ksp_rtol 1e-8 >>>> -ksp_monitor -ksp_max_it 100000 >>>> >>>> When I tried BDDC, it was stuck for solving the linear system (it can >>>> not print anything for ksp_monitor). I did the conversion for matrix, >>>> >>>> * Mat J;* >>>> * MatConvert(A, MATIS, MAT_INITIAL_MATRIX, &J);* >>>> * KSPSetOperators(ksp, A, J);* >>>> * MatDestroy(&J);* >>>> * KSPSetInitialGuessNonzero(ksp, PETSC_TRUE);* >>>> * KSPSetFromOptions(ksp);* >>>> >>>> petsc-3.21.1/petsc/arch-linux-c-debug/bin/mpirun -n 2 ./app -ksp_type >>>> cg -pc_type bddc -ksp_monitor -mat_type is >>>> >>>> Do you have any suggestions? >>>> >>>> Thanks , >>>> Xiaodong >>>> >>>> >>>> On Mon, Jul 29, 2024 at 6:19?PM neil liu wrote: >>>> >>>>> When I compile with real data, >>>>> it shows the attached error. >>>>> >>>>> The data file is in binary format, right? >>>>> >>>>> >>>>> >>>>> On Mon, Jul 29, 2024 at 5:36?PM Stefano Zampini < >>>>> stefano.zampini at gmail.com> wrote: >>>>> >>>>>> Your PETSc installation is for complex, data is for real >>>>>> >>>>>> On Mon, Jul 29, 2024, 23:14 neil liu wrote: >>>>>> >>>>>>> I compiled Petsc with single precision. However, it is not converged >>>>>>> with the data. Please see the attached file. On Mon, Jul 29, 2024 at 4: 25 >>>>>>> PM Barry Smith wrote: This can happen if the >>>>>>> data was stored in single precision >>>>>>> ZjQcmQRYFpfptBannerStart >>>>>>> This Message Is From an External Sender >>>>>>> This message came from outside your organization. >>>>>>> >>>>>>> ZjQcmQRYFpfptBannerEnd >>>>>>> I compiled Petsc with single precision. 
However, it is not converged >>>>>>> with the data. >>>>>>> >>>>>>> Please see the attached file. >>>>>>> >>>>>>> On Mon, Jul 29, 2024 at 4:25?PM Barry Smith >>>>>>> wrote: >>>>>>> >>>>>>>> >>>>>>>> This can happen if the data was stored in single precision and >>>>>>>> PETSc was built for double. >>>>>>>> >>>>>>>> >>>>>>>> On Jul 29, 2024, at 3:55?PM, neil liu wrote: >>>>>>>> >>>>>>>> This Message Is From an External Sender >>>>>>>> This message came from outside your organization. >>>>>>>> Dear Petsc developers,, >>>>>>>> >>>>>>>> I am trying to run >>>>>>>> https://urldefense.us/v3/__https://petsc.org/release/src/ksp/ksp/tutorials/ex72.c.html__;!!G_uCfscf7eWS!e-uvzwHpfgg9sVc4ib2hKBtNW-zwESfVKM06g2WKVg_pt-OLo4KoCcW693V4rObDsmlBOSQczWEJUUwKgH2q3Q$ >>>>>>>> >>>>>>>> with >>>>>>>> >>>>>>>> petsc-3.21.1/petsc/arch-linux-c-opt/bin/mpirun -n 2 ./ex72 -f >>>>>>>> /Documents/PetscData/poisson_DMPLEX_32x32_16.dat -pc_type bddc -ksp_type cg >>>>>>>> -ksp_norm_type natural -ksp_error_if_not_converged -mat_type is >>>>>>>> >>>>>>>> The file was downloaded and put in the directory PetscData. >>>>>>>> >>>>>>>> The error is shown as follows, >>>>>>>> >>>>>>>> 0]PETSC ERROR: --------------------- Error Message >>>>>>>> -------------------------------------------------------------- >>>>>>>> [0]PETSC ERROR: Read from file failed >>>>>>>> [0]PETSC ERROR: Read past end of file >>>>>>>> [0]PETSC ERROR: WARNING! There are unused option(s) set! Could be >>>>>>>> the program crashed before usage or a spelling mistake, etc! >>>>>>>> [0]PETSC ERROR: Option left: name:-ksp_error_if_not_converged (no >>>>>>>> value) source: command line >>>>>>>> [0]PETSC ERROR: Option left: name:-ksp_norm_type value: natural >>>>>>>> source: command line >>>>>>>> [0]PETSC ERROR: Option left: name:-ksp_type value: cg source: >>>>>>>> command line >>>>>>>> [0]PETSC ERROR: Option left: name:-pc_type value: bddc source: >>>>>>>> command line >>>>>>>> [0]PETSC ERROR: See https://urldefense.us/v3/__https://petsc.org/release/faq/__;!!G_uCfscf7eWS!e-uvzwHpfgg9sVc4ib2hKBtNW-zwESfVKM06g2WKVg_pt-OLo4KoCcW693V4rObDsmlBOSQczWEJUUxyeldBwA$ >>>>>>>> >>>>>>>> for trouble shooting. 
>>>>>>>> [0]PETSC ERROR: Petsc Release Version 3.21.1, unknown >>>>>>>> [0]PETSC ERROR: ./ex72 on a arch-linux-c-opt named >>>>>>>> Mon Jul 29 15:50:04 2024 >>>>>>>> [0]PETSC ERROR: Configure options --with-cc=gcc --with-fc=gfortran >>>>>>>> --with-cxx=g++ --download-fblaslapack --download-mpich >>>>>>>> --with-scalar-type=complex --download-triangle --with-debugging=no >>>>>>>> [0]PETSC ERROR: #1 PetscBinaryRead() at >>>>>>>> /home/xxxxxx/Documents/petsc-3.21.1/petsc/src/sys/fileio/sysio.c:327 >>>>>>>> [0]PETSC ERROR: #2 PetscViewerBinaryWriteReadAll() at >>>>>>>> /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/sys/classes/viewer/impls/binary/binv.c:1077 >>>>>>>> [0]PETSC ERROR: #3 PetscViewerBinaryReadAll() at >>>>>>>> /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/sys/classes/viewer/impls/binary/binv.c:1119 >>>>>>>> [0]PETSC ERROR: #4 MatLoad_MPIAIJ_Binary() at >>>>>>>> Documents/petsc-3.21.1/petsc/src/mat/impls/aij/mpi/mpiaij.c:3093 >>>>>>>> [0]PETSC ERROR: #5 MatLoad_MPIAIJ() at >>>>>>>> /Documents/petsc-3.21.1/petsc/src/mat/impls/aij/mpi/mpiaij.c:3035 >>>>>>>> [0]PETSC ERROR: #6 MatLoad() at >>>>>>>> /Documents/petsc-3.21.1/petsc/src/mat/interface/matrix.c:1344 >>>>>>>> [0]PETSC ERROR: #7 MatLoad_IS() at >>>>>>>> /Documents/petsc-3.21.1/petsc/src/mat/impls/is/matis.c:2575 >>>>>>>> [0]PETSC ERROR: #8 MatLoad() at >>>>>>>> /home/Documents/petsc-3.21.1/petsc/src/mat/interface/matrix.c:1344 >>>>>>>> [0]PETSC ERROR: #9 main() at ex72.c:105 >>>>>>>> [0]PETSC ERROR: PETSc Option Table entries: >>>>>>>> [0]PETSC ERROR: -f >>>>>>>> /Documents/PetscData/poisson_DMPLEX_32x32_16.dat (source: command >>>>>>>> line) >>>>>>>> [0]PETSC ERROR: -ksp_error_if_not_converged (source: command line) >>>>>>>> [0]PETSC ERROR: -ksp_norm_type natural (source: command line) >>>>>>>> [0]PETSC ERROR: -ksp_type cg (source: command line) >>>>>>>> [0]PETSC ERROR: -mat_type is (source: command line) >>>>>>>> [0]PETSC ERROR: -pc_type bddc (source: command line) >>>>>>>> [0]PETSC ERROR: ----------------End of Error Message -------send >>>>>>>> entire error message to petsc-maint at mcs.anl.gov---------- >>>>>>>> application called MPI_Abort(MPI_COMM_SELF, 66) - process 0 >>>>>>>> >>>>>>>> >>>>>>>> >>> >>> -- >>> Stefano >>> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at petsc.dev Wed Jul 31 17:21:21 2024 From: bsmith at petsc.dev (Barry Smith) Date: Wed, 31 Jul 2024 18:21:21 -0400 Subject: [petsc-users] Trying to run https://petsc.org/release/src/ksp/ksp/tutorials/ex72.c.html In-Reply-To: References: Message-ID: <3F4ECD77-5F02-4C79-BFD0-1558F7E84D00@petsc.dev> Take a look at src/ksp/ksp/tutorials/ex71.c To have your code below not crash at this point call MatSetBlockSize(A,2) before MatSetUp() > On Jul 31, 2024, at 6:01?PM, neil liu wrote: > > Hi, all, > Following Stefano's advice, my code is reorganized as follows, > > MatCreate(PETSC_COMM_WORLD, &A); > MatSetType(A, MATIS); > MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, numberDof_global, numberDof_global); > MatSetUp(A); > ISLocalToGlobalMapping ltogm; > DMGetLocalToGlobalMapping(dm, <ogm); > MatSetLocalToGlobalMapping(A, ltogm, ltogm); > > Then I just ran the above code snippet, which gave me some errors as following. (Local size 67 not compatible with block size 2). > It doesn't seems it is actually calling my routine, but I could be wrong about this. > Can anyone give me some ideas to debug this issue? > I am just coding vector FEM, assigning 2 dofs each edge and 2 dofs each face. 
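A rough sketch of the creation sequence with the MatSetBlockSize() suggestion folded in; numberDof_global, the element loop, edofs[] and the element matrix Ke are placeholders, and the point of setting the block size before MatSetUp() appears to be that the PETSC_DECIDE local sizes then come out divisible by 2, matching the block size carried by the DM's local-to-global mapping:

  Mat                    A;
  ISLocalToGlobalMapping ltogm;
  PetscInt               e;

  PetscCall(MatCreate(PETSC_COMM_WORLD, &A));
  PetscCall(MatSetType(A, MATIS));
  PetscCall(MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, numberDof_global, numberDof_global));
  PetscCall(MatSetBlockSize(A, 2));                        /* declare the 2-dof blocking before MatSetUp() */
  PetscCall(MatSetUp(A));
  PetscCall(DMGetLocalToGlobalMapping(dm, &ltogm));
  PetscCall(MatSetLocalToGlobalMapping(A, ltogm, ltogm));
  for (e = 0; e < nLocalElements; e++) {
    /* compute the element matrix Ke and its local dof indices edofs[nedofs] */
    PetscCall(MatSetValuesLocal(A, nedofs, edofs, nedofs, edofs, Ke, ADD_VALUES));
  }
  PetscCall(MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY));
  PetscCall(MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY));

MatSetValues() with global indices, as used earlier in this thread, should also work once the mapping is attached; MatSetValuesLocal() simply skips the global-to-local translation.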
> > The ltogm seems normal. > ISLocalToGlobalMapping Object: 2 MPI processes > type not yet set > [0] 0:2 0:2 > [0] 2:4 2:4 > [0] 4:6 54:56 > [0] 6:8 4:6 > [0] 8:10 6:8 > [0] 10:12 8:10 > [0] 12:14 10:12 > [0] 14:16 56:58 > [0] 16:18 12:14 > [0] 18:20 14:16 > [0] 20:22 16:18 > [0] 22:24 58:60 > [0] 24:26 18:20 > [0] 26:28 20:22 > [0] 28:30 22:24 > [0] 30:32 24:26 > [0] 32:34 60:62 > [0] 34:36 26:28 > [0] 36:38 28:30 > [0] 38:40 30:32 > [0] 40:42 32:34 > [0] 42:44 92:94 > [0] 44:46 94:96 > [0] 46:48 34:36 > [0] 48:50 96:98 > [0] 50:52 36:38 > [0] 52:54 98:100 > [0] 54:56 38:40 > [0] 56:58 40:42 > [0] 58:60 100:102 > [0] 60:62 42:44 > [0] 62:64 102:104 > [0] 64:66 44:46 > [0] 66:68 104:106 > [0] 68:70 46:48 > [0] 70:72 106:108 > [0] 72:74 48:50 > [0] 74:76 50:52 > [0] 76:78 108:110 > [0] 78:80 52:54 > [1] 0:2 54:56 > [1] 2:4 56:58 > [1] 4:6 58:60 > [1] 6:8 60:62 > [1] 8:10 62:64 > [1] 10:12 64:66 > [1] 12:14 66:68 > [1] 14:16 68:70 > [1] 16:18 70:72 > [1] 18:20 72:74 > [1] 20:22 74:76 > [1] 22:24 76:78 > [1] 24:26 78:80 > [1] 26:28 80:82 > [1] 28:30 82:84 > [1] 30:32 84:86 > [1] 32:34 86:88 > [1] 34:36 88:90 > [1] 36:38 90:92 > [1] 38:40 92:94 > [1] 40:42 94:96 > [1] 42:44 96:98 > [1] 44:46 98:100 > [1] 46:48 100:102 > [1] 48:50 102:104 > [1] 50:52 104:106 > [1] 52:54 106:108 > [1] 54:56 108:110 > [1] 56:58 110:112 > [1] 58:60 112:114 > [1] 60:62 114:116 > [1] 62:64 116:118 > [1] 64:66 118:120 > [1] 66:68 120:122 > [1] 68:70 122:124 > [1] 70:72 124:126 > [1] 72:74 126:128 > [1] 74:76 128:130 > [1] 76:78 130:132 > [1] 78:80 132:134 > > [0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- > [0]PETSC ERROR: Arguments are incompatible > [0]PETSC ERROR: [1]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- > Local size 67 not compatible with block size 2 > [0]PETSC ERROR: See https://urldefense.us/v3/__https://petsc.org/release/faq/__;!!G_uCfscf7eWS!fJ1ibiYg4k47CYmvheN6QZ-hfBzb_wjpc_EnmueTZgQBm5eNVzfMFkUgw7EVOhyXJnw44CPWay_QB-74ioBjjfQ$ for trouble shooting. > [0]PETSC ERROR: Petsc Release Version 3.21.1, unknown > [0]PETSC ERROR: [1]PETSC ERROR: Arguments are incompatible > [1]PETSC ERROR: Local size 67 not compatible with block size 2 > [1]PETSC ERROR: ./app on a arch-linux-c-debug by xiaodong.liu Wed Jul 31 17:43:28 2024 > [0]PETSC ERROR: Configure options --with-cc=gcc --with-fc=gfortran --with-cxx=g++ --download-fblaslapack --download-mpich --with-scalar-type=complex --download-triangle > [0]PETSC ERROR: See https://urldefense.us/v3/__https://petsc.org/release/faq/__;!!G_uCfscf7eWS!fJ1ibiYg4k47CYmvheN6QZ-hfBzb_wjpc_EnmueTZgQBm5eNVzfMFkUgw7EVOhyXJnw44CPWay_QB-74ioBjjfQ$ for trouble shooting. 
> [1]PETSC ERROR: Petsc Release Version 3.21.1, unknown > [1]PETSC ERROR: ./app on a arch-linux-c-debug by xiaodong.liu Wed Jul 31 17:43:28 2024 > [1]PETSC ERROR: #1 PetscLayoutSetBlockSize() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/vec/is/utils/pmap.c:473 > [0]PETSC ERROR: #2 MatSetLocalToGlobalMapping_IS() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/mat/impls/is/matis.c:2831 > [0]PETSC ERROR: #3 MatSetLocalToGlobalMapping() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/mat/interface/matrix.c:2252 > Configure options --with-cc=gcc --with-fc=gfortran --with-cxx=g++ --download-fblaslapack --download-mpich --with-scalar-type=complex --download-triangle > [1]PETSC ERROR: #1 PetscLayoutSetBlockSize() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/vec/is/utils/pmap.c:473 > [1]PETSC ERROR: After Mat set local to global mapping! > #2 MatSetLocalToGlobalMapping_IS() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/mat/impls/is/matis.c:2831 > [1]PETSC ERROR: #3 MatSetLocalToGlobalMapping() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/mat/interface/matrix.c:2252 > After Mat set local to global mapping! > > Thanks, > > On Tue, Jul 30, 2024 at 2:51?PM neil liu > wrote: >> Hi, Stefano, >> >> I am trying to understand the example there you mentioned. I have a question, >> the example always use DMDA there. Does BDDC also work for DMPLEX? >> >> Thanks , >> >> On Tue, Jul 30, 2024 at 1:47?PM neil liu > wrote: >>> Thanks, Stefano, >>> >>> I am trying to modify the code as follows, >>> MatCreate(PETSC_COMM_WORLD, &A); >>> MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, numberDof_global, numberDof_global); >>> MatSetType(A, MATIS); >>> MatSetUp(A); >>> MatZeroEntries(A); >>> VecCreate(PETSC_COMM_WORLD, &b); >>> VecSetSizes(b, PETSC_DECIDE, numberDof_global); >>> VecSetUp(b); >>> VecSet(b,0.0); >>> VecDuplicate(b, &x); >>> >>> const PetscInt *g_idx; >>> ISLocalToGlobalMapping ltogm; >>> DMGetLocalToGlobalMapping(dm, <ogm); >>> ISLocalToGlobalMappingGetIndices(ltogm, &g_idx); >>> >>> //Build idxm_global and Set LHS >>> idxm_Global[ idxDofLocal ] = g_idx[ numdofPerFace*idxm[idxDofLocal]]; >>> MatSetValues(A, numberDof_local, idxm_Global.data(), numberDof_local, idxm_Global.data(), MatrixLocal.data(), ADD_VALUES); >>> >>> //Set RHS >>> PetscScalar valueDiag = 1.0 ; >>> MatZeroRows(A, objGeometryInfo.numberDof_Dirichlet, (objGeometryInfo.arrayDofSeqGlobal_Dirichlet).data(), valueDiag, 0, 0); >>> >>> VecSetValues(b, objGeometryInfo.numberDof_Dirichlet, (objGeometryInfo.arrayDofSeqGlobal_Dirichlet).data(), (objGeometryInfo.dof_Dirichlet).data(), INSERT_VALUES); >>> VecSetValues(x, objGeometryInfo.numberDof_Dirichlet, (objGeometryInfo.arrayDofSeqGlobal_Dirichlet).data(), (objGeometryInfo.dof_Dirichlet).data(), INSERT_VALUES); >>> ISLocalToGlobalMappingRestoreIndices(ltogm, &g_idx); >>> VecAssemblyBegin(b); >>> VecAssemblyEnd(b); >>> VecAssemblyBegin(x); >>> VecAssemblyEnd(x); >>> It shows the attached error when I run the code. It seems something wrong is with setting RHS. >>> Could you please help me double check my above code to setup the RHS? >>> Thanks, >>> >>> On Tue, Jul 30, 2024 at 11:56?AM Stefano Zampini > wrote: >>>> BDDC needs the matrix in MATIS format. Using MatConvert will give you back the right format, but the subdomain matrices are wrong. 
You need to assemble directly in MATIS format, something like >>>> >>>> MatCreate(comm,&A) >>>> MatSetType(A,MATIS) >>>> MatSetLocalToGlobalMapping(A,l2gmap, l2gmap) >>>> for e in local_elements: >>>> E = compute_element_matrix(e) >>>> MatSetValues(A,local_element_dofs,local_element_dofs,....) >>>> >>>> l2gmap is an ISLocalToGlobalMapping that stores the global dof number of the dofs that are local to the mesh >>>> >>>> See e.g. https://urldefense.us/v3/__https://gitlab.com/petsc/petsc/-/blob/main/src/ksp/ksp/tutorials/ex59.c?ref_type=heads__;!!G_uCfscf7eWS!fJ1ibiYg4k47CYmvheN6QZ-hfBzb_wjpc_EnmueTZgQBm5eNVzfMFkUgw7EVOhyXJnw44CPWay_QB-74jmQ3o2U$ or https://urldefense.us/v3/__https://gitlab.com/petsc/petsc/-/blob/main/src/ksp/ksp/tutorials/ex71.c?ref_type=heads__;!!G_uCfscf7eWS!fJ1ibiYg4k47CYmvheN6QZ-hfBzb_wjpc_EnmueTZgQBm5eNVzfMFkUgw7EVOhyXJnw44CPWay_QB-74rdvFRNE$ >>>> >>>> Il giorno mar 30 lug 2024 alle ore 17:50 neil liu > ha scritto: >>>>> Hi, >>>>> I am trying to use PCBDDC for the vector based FEM. (Complex system, double precision ) >>>>> My code can work well with asm, >>>>> petsc-3.21.1/petsc/arch-linux-c-opt/bin/mpirun -n 8 ./app -pc_type asm -pc_asm_overlap 6 -ksp_converged_reason -ksp_view -ksp_gmres_modifiedgramschmidt -ksp_gmres_restart 1500 -ksp_rtol 1e-8 -ksp_monitor -ksp_max_it 100000 >>>>> >>>>> When I tried BDDC, it was stuck for solving the linear system (it can not print anything for ksp_monitor). I did the conversion for matrix, >>>>> >>>>> Mat J; >>>>> MatConvert(A, MATIS, MAT_INITIAL_MATRIX, &J); >>>>> KSPSetOperators(ksp, A, J); >>>>> MatDestroy(&J); >>>>> KSPSetInitialGuessNonzero(ksp, PETSC_TRUE); >>>>> KSPSetFromOptions(ksp); >>>>> >>>>> petsc-3.21.1/petsc/arch-linux-c-debug/bin/mpirun -n 2 ./app -ksp_type cg -pc_type bddc -ksp_monitor -mat_type is >>>>> >>>>> Do you have any suggestions? >>>>> >>>>> Thanks , >>>>> Xiaodong >>>>> >>>>> >>>>> On Mon, Jul 29, 2024 at 6:19?PM neil liu > wrote: >>>>>> When I compile with real data, >>>>>> it shows the attached error. >>>>>> >>>>>> The data file is in binary format, right? >>>>>> >>>>>> >>>>>> >>>>>> On Mon, Jul 29, 2024 at 5:36?PM Stefano Zampini > wrote: >>>>>>> Your PETSc installation is for complex, data is for real >>>>>>> >>>>>>> On Mon, Jul 29, 2024, 23:14 neil liu > wrote: >>>>>>>> This Message Is From an External Sender >>>>>>>> This message came from outside your organization. >>>>>>>> >>>>>>>> I compiled Petsc with single precision. However, it is not converged with the data. >>>>>>>> >>>>>>>> Please see the attached file. >>>>>>>> >>>>>>>> On Mon, Jul 29, 2024 at 4:25?PM Barry Smith > wrote: >>>>>>>>> >>>>>>>>> This can happen if the data was stored in single precision and PETSc was built for double. >>>>>>>>> >>>>>>>>> >>>>>>>>>> On Jul 29, 2024, at 3:55?PM, neil liu > wrote: >>>>>>>>>> >>>>>>>>>> This Message Is From an External Sender >>>>>>>>>> This message came from outside your organization. >>>>>>>>>> Dear Petsc developers,, >>>>>>>>>> >>>>>>>>>> I am trying to run >>>>>>>>>> https://urldefense.us/v3/__https://petsc.org/release/src/ksp/ksp/tutorials/ex72.c.html__;!!G_uCfscf7eWS!fJ1ibiYg4k47CYmvheN6QZ-hfBzb_wjpc_EnmueTZgQBm5eNVzfMFkUgw7EVOhyXJnw44CPWay_QB-74MofxzyQ$ >>>>>>>>>> with >>>>>>>>>> >>>>>>>>>> petsc-3.21.1/petsc/arch-linux-c-opt/bin/mpirun -n 2 ./ex72 -f /Documents/PetscData/poisson_DMPLEX_32x32_16.dat -pc_type bddc -ksp_type cg -ksp_norm_type natural -ksp_error_if_not_converged -mat_type is >>>>>>>>>> >>>>>>>>>> The file was downloaded and put in the directory PetscData. 
>>>>>>>>>> >>>>>>>>>> The error is shown as follows, >>>>>>>>>> >>>>>>>>>> 0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- >>>>>>>>>> [0]PETSC ERROR: Read from file failed >>>>>>>>>> [0]PETSC ERROR: Read past end of file >>>>>>>>>> [0]PETSC ERROR: WARNING! There are unused option(s) set! Could be the program crashed before usage or a spelling mistake, etc! >>>>>>>>>> [0]PETSC ERROR: Option left: name:-ksp_error_if_not_converged (no value) source: command line >>>>>>>>>> [0]PETSC ERROR: Option left: name:-ksp_norm_type value: natural source: command line >>>>>>>>>> [0]PETSC ERROR: Option left: name:-ksp_type value: cg source: command line >>>>>>>>>> [0]PETSC ERROR: Option left: name:-pc_type value: bddc source: command line >>>>>>>>>> [0]PETSC ERROR: See https://urldefense.us/v3/__https://petsc.org/release/faq/__;!!G_uCfscf7eWS!fJ1ibiYg4k47CYmvheN6QZ-hfBzb_wjpc_EnmueTZgQBm5eNVzfMFkUgw7EVOhyXJnw44CPWay_QB-74ioBjjfQ$ for trouble shooting. >>>>>>>>>> [0]PETSC ERROR: Petsc Release Version 3.21.1, unknown >>>>>>>>>> [0]PETSC ERROR: ./ex72 on a arch-linux-c-opt named >>>>>>>>>> Mon Jul 29 15:50:04 2024 >>>>>>>>>> [0]PETSC ERROR: Configure options --with-cc=gcc --with-fc=gfortran --with-cxx=g++ --download-fblaslapack --download-mpich --with-scalar-type=complex --download-triangle --with-debugging=no >>>>>>>>>> [0]PETSC ERROR: #1 PetscBinaryRead() at /home/xxxxxx/Documents/petsc-3.21.1/petsc/src/sys/fileio/sysio.c:327 >>>>>>>>>> [0]PETSC ERROR: #2 PetscViewerBinaryWriteReadAll() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/sys/classes/viewer/impls/binary/binv.c:1077 >>>>>>>>>> [0]PETSC ERROR: #3 PetscViewerBinaryReadAll() at /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/sys/classes/viewer/impls/binary/binv.c:1119 >>>>>>>>>> [0]PETSC ERROR: #4 MatLoad_MPIAIJ_Binary() at Documents/petsc-3.21.1/petsc/src/mat/impls/aij/mpi/mpiaij.c:3093 >>>>>>>>>> [0]PETSC ERROR: #5 MatLoad_MPIAIJ() at /Documents/petsc-3.21.1/petsc/src/mat/impls/aij/mpi/mpiaij.c:3035 >>>>>>>>>> [0]PETSC ERROR: #6 MatLoad() at /Documents/petsc-3.21.1/petsc/src/mat/interface/matrix.c:1344 >>>>>>>>>> [0]PETSC ERROR: #7 MatLoad_IS() at /Documents/petsc-3.21.1/petsc/src/mat/impls/is/matis.c:2575 >>>>>>>>>> [0]PETSC ERROR: #8 MatLoad() at /home/Documents/petsc-3.21.1/petsc/src/mat/interface/matrix.c:1344 >>>>>>>>>> [0]PETSC ERROR: #9 main() at ex72.c:105 >>>>>>>>>> [0]PETSC ERROR: PETSc Option Table entries: >>>>>>>>>> [0]PETSC ERROR: -f >>>>>>>>>> /Documents/PetscData/poisson_DMPLEX_32x32_16.dat (source: command line) >>>>>>>>>> [0]PETSC ERROR: -ksp_error_if_not_converged (source: command line) >>>>>>>>>> [0]PETSC ERROR: -ksp_norm_type natural (source: command line) >>>>>>>>>> [0]PETSC ERROR: -ksp_type cg (source: command line) >>>>>>>>>> [0]PETSC ERROR: -mat_type is (source: command line) >>>>>>>>>> [0]PETSC ERROR: -pc_type bddc (source: command line) >>>>>>>>>> [0]PETSC ERROR: ----------------End of Error Message -------send entire error message to petsc-maint at mcs.anl.gov ---------- >>>>>>>>>> application called MPI_Abort(MPI_COMM_SELF, 66) - process 0 >>>>>>>>> >>>> >>>> >>>> -- >>>> Stefano -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From liufield at gmail.com Wed Jul 31 17:31:30 2024 From: liufield at gmail.com (neil liu) Date: Wed, 31 Jul 2024 18:31:30 -0400 Subject: [petsc-users] Trying to run https://petsc.org/release/src/ksp/ksp/tutorials/ex72.c.html In-Reply-To: <3F4ECD77-5F02-4C79-BFD0-1558F7E84D00@petsc.dev> References: <3F4ECD77-5F02-4C79-BFD0-1558F7E84D00@petsc.dev> Message-ID: Thanks a lot, Berry. It works now. Very interesting. I will explore what MatSetBlockSize is doing. Have a good night. On Wed, Jul 31, 2024 at 6:21?PM Barry Smith wrote: > > Take a look at src/ksp/ksp/tutorials/ex71.c > > To have your code below not crash at this point call > MatSetBlockSize(A,2) before MatSetUp() > > > On Jul 31, 2024, at 6:01?PM, neil liu wrote: > > Hi, all, > Following Stefano's advice, my code is reorganized as follows, > > *MatCreate(PETSC_COMM_WORLD, &A);* > > > > > > * MatSetType(A, MATIS); MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, > numberDof_global, numberDof_global); MatSetUp(A); ISLocalToGlobalMapping > ltogm; DMGetLocalToGlobalMapping(dm, <ogm); > MatSetLocalToGlobalMapping(A, ltogm, ltogm);* > > Then I just ran the above code snippet, which gave me some errors as > following. (Local size 67 not compatible with block size 2). > It doesn't seems it is actually calling my routine, but I could be wrong > about this. > Can anyone give me some ideas to debug this issue? > I am just coding vector FEM, assigning 2 dofs each edge and 2 dofs each > face. > > The ltogm seems normal. > ISLocalToGlobalMapping Object: 2 MPI processes > type not yet set > [0] 0:2 0:2 > [0] 2:4 2:4 > [0] 4:6 54:56 > [0] 6:8 4:6 > [0] 8:10 6:8 > [0] 10:12 8:10 > [0] 12:14 10:12 > [0] 14:16 56:58 > [0] 16:18 12:14 > [0] 18:20 14:16 > [0] 20:22 16:18 > [0] 22:24 58:60 > [0] 24:26 18:20 > [0] 26:28 20:22 > [0] 28:30 22:24 > [0] 30:32 24:26 > [0] 32:34 60:62 > [0] 34:36 26:28 > [0] 36:38 28:30 > [0] 38:40 30:32 > [0] 40:42 32:34 > [0] 42:44 92:94 > [0] 44:46 94:96 > [0] 46:48 34:36 > [0] 48:50 96:98 > [0] 50:52 36:38 > [0] 52:54 98:100 > [0] 54:56 38:40 > [0] 56:58 40:42 > [0] 58:60 100:102 > [0] 60:62 42:44 > [0] 62:64 102:104 > [0] 64:66 44:46 > [0] 66:68 104:106 > [0] 68:70 46:48 > [0] 70:72 106:108 > [0] 72:74 48:50 > [0] 74:76 50:52 > [0] 76:78 108:110 > [0] 78:80 52:54 > [1] 0:2 54:56 > [1] 2:4 56:58 > [1] 4:6 58:60 > [1] 6:8 60:62 > [1] 8:10 62:64 > [1] 10:12 64:66 > [1] 12:14 66:68 > [1] 14:16 68:70 > [1] 16:18 70:72 > [1] 18:20 72:74 > [1] 20:22 74:76 > [1] 22:24 76:78 > [1] 24:26 78:80 > [1] 26:28 80:82 > [1] 28:30 82:84 > [1] 30:32 84:86 > [1] 32:34 86:88 > [1] 34:36 88:90 > [1] 36:38 90:92 > [1] 38:40 92:94 > [1] 40:42 94:96 > [1] 42:44 96:98 > [1] 44:46 98:100 > [1] 46:48 100:102 > [1] 48:50 102:104 > [1] 50:52 104:106 > [1] 52:54 106:108 > [1] 54:56 108:110 > [1] 56:58 110:112 > [1] 58:60 112:114 > [1] 60:62 114:116 > [1] 62:64 116:118 > [1] 64:66 118:120 > [1] 66:68 120:122 > [1] 68:70 122:124 > [1] 70:72 124:126 > [1] 72:74 126:128 > [1] 74:76 128:130 > [1] 76:78 130:132 > [1] 78:80 132:134 > > [0]PETSC ERROR: --------------------- Error Message > -------------------------------------------------------------- > [0]PETSC ERROR: Arguments are incompatible > [0]PETSC ERROR: [1]PETSC ERROR: --------------------- Error Message > -------------------------------------------------------------- > Local size 67 not compatible with block size 2 > [0]PETSC ERROR: See https://urldefense.us/v3/__https://petsc.org/release/faq/__;!!G_uCfscf7eWS!ezBBwLu_ohCVxEbEjF9KAyHOajlCzLcFYsXr_HXTqn0SGLkTERqqi0jm5qKxXga1bLCySC2Jmm_e5lFYstYzVw$ for 
trouble shooting. > [0]PETSC ERROR: Petsc Release Version 3.21.1, unknown > [0]PETSC ERROR: [1]PETSC ERROR: Arguments are incompatible > [1]PETSC ERROR: Local size 67 not compatible with block size 2 > [1]PETSC ERROR: ./app on a arch-linux-c-debug by xiaodong.liu Wed Jul 31 > 17:43:28 2024 > [0]PETSC ERROR: Configure options --with-cc=gcc --with-fc=gfortran > --with-cxx=g++ --download-fblaslapack --download-mpich > --with-scalar-type=complex --download-triangle > [0]PETSC ERROR: See https://urldefense.us/v3/__https://petsc.org/release/faq/__;!!G_uCfscf7eWS!ezBBwLu_ohCVxEbEjF9KAyHOajlCzLcFYsXr_HXTqn0SGLkTERqqi0jm5qKxXga1bLCySC2Jmm_e5lFYstYzVw$ for trouble shooting. > [1]PETSC ERROR: Petsc Release Version 3.21.1, unknown > [1]PETSC ERROR: ./app on a arch-linux-c-debug by xiaodong.liu Wed Jul 31 > 17:43:28 2024 > [1]PETSC ERROR: #1 PetscLayoutSetBlockSize() at > /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/vec/is/utils/pmap.c:473 > [0]PETSC ERROR: #2 MatSetLocalToGlobalMapping_IS() at > /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/mat/impls/is/matis.c:2831 > [0]PETSC ERROR: #3 MatSetLocalToGlobalMapping() at > /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/mat/interface/matrix.c:2252 > Configure options --with-cc=gcc --with-fc=gfortran --with-cxx=g++ > --download-fblaslapack --download-mpich --with-scalar-type=complex > --download-triangle > [1]PETSC ERROR: #1 PetscLayoutSetBlockSize() at > /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/vec/is/utils/pmap.c:473 > [1]PETSC ERROR: After Mat set local to global mapping! > #2 MatSetLocalToGlobalMapping_IS() at > /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/mat/impls/is/matis.c:2831 > [1]PETSC ERROR: #3 MatSetLocalToGlobalMapping() at > /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/mat/interface/matrix.c:2252 > After Mat set local to global mapping! > > Thanks, > > On Tue, Jul 30, 2024 at 2:51?PM neil liu wrote: > >> Hi, Stefano, >> >> I am trying to understand the example there you mentioned. I have a >> question, >> the example always use DMDA there. Does BDDC also work for DMPLEX? >> >> Thanks , >> >> On Tue, Jul 30, 2024 at 1:47?PM neil liu wrote: >> >>> Thanks, Stefano, >>> >>> I am trying to modify the code as follows, >>> MatCreate(PETSC_COMM_WORLD, &A); >>> MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, numberDof_global, >>> numberDof_global); >>> MatSetType(A, MATIS); >>> MatSetUp(A); >>> MatZeroEntries(A); >>> VecCreate(PETSC_COMM_WORLD, &b); >>> VecSetSizes(b, PETSC_DECIDE, numberDof_global); >>> VecSetUp(b); >>> VecSet(b,0.0); >>> VecDuplicate(b, &x); >>> >>> const PetscInt *g_idx; >>> ISLocalToGlobalMapping ltogm; >>> DMGetLocalToGlobalMapping(dm, <ogm); >>> ISLocalToGlobalMappingGetIndices(ltogm, &g_idx); >>> >>> //Build idxm_global and Set LHS >>> idxm_Global[ idxDofLocal ] = g_idx[ numdofPerFace*idxm[idxDofLocal]]; >>> MatSetValues(A, numberDof_local, idxm_Global.data(), numberDof_local, >>> idxm_Global.data(), MatrixLocal.data(), ADD_VALUES); >>> >>> //Set RHS >>> PetscScalar valueDiag = 1.0 ; >>> MatZeroRows(A, objGeometryInfo.numberDof_Dirichlet, (objGeometryInfo. >>> arrayDofSeqGlobal_Dirichlet).data(), valueDiag, 0, 0); >>> >>> VecSetValues(b, objGeometryInfo.numberDof_Dirichlet, (objGeometryInfo. >>> arrayDofSeqGlobal_Dirichlet).data(), (objGeometryInfo.dof_Dirichlet). >>> data(), INSERT_VALUES); >>> VecSetValues(x, objGeometryInfo.numberDof_Dirichlet, (objGeometryInfo. >>> arrayDofSeqGlobal_Dirichlet).data(), (objGeometryInfo.dof_Dirichlet). 
>>> data(), INSERT_VALUES); >>> ISLocalToGlobalMappingRestoreIndices(ltogm, &g_idx); >>> VecAssemblyBegin(b); >>> VecAssemblyEnd(b); >>> VecAssemblyBegin(x); >>> VecAssemblyEnd(x); >>> It shows the attached error when I run the code. It seems something >>> wrong is with setting RHS. >>> Could you please help me double check my above code to setup the RHS? >>> Thanks, >>> >>> On Tue, Jul 30, 2024 at 11:56?AM Stefano Zampini < >>> stefano.zampini at gmail.com> wrote: >>> >>>> BDDC needs the matrix in MATIS format. Using MatConvert will give you >>>> back the right format, but the subdomain matrices are wrong. You need to >>>> assemble directly in MATIS format, something like >>>> >>>> MatCreate(comm,&A) >>>> MatSetType(A,MATIS) >>>> MatSetLocalToGlobalMapping(A,l2gmap, l2gmap) >>>> for e in local_elements: >>>> E = compute_element_matrix(e) >>>> MatSetValues(A,local_element_dofs,local_element_dofs,....) >>>> >>>> l2gmap is an ISLocalToGlobalMapping that stores the global dof number >>>> of the dofs that are local to the mesh >>>> >>>> See e.g. >>>> https://urldefense.us/v3/__https://gitlab.com/petsc/petsc/-/blob/main/src/ksp/ksp/tutorials/ex59.c?ref_type=heads__;!!G_uCfscf7eWS!ezBBwLu_ohCVxEbEjF9KAyHOajlCzLcFYsXr_HXTqn0SGLkTERqqi0jm5qKxXga1bLCySC2Jmm_e5lFLn1UiNw$ >>>> or >>>> https://urldefense.us/v3/__https://gitlab.com/petsc/petsc/-/blob/main/src/ksp/ksp/tutorials/ex71.c?ref_type=heads__;!!G_uCfscf7eWS!ezBBwLu_ohCVxEbEjF9KAyHOajlCzLcFYsXr_HXTqn0SGLkTERqqi0jm5qKxXga1bLCySC2Jmm_e5lFhsK-8Zg$ >>>> >>>> Il giorno mar 30 lug 2024 alle ore 17:50 neil liu >>>> ha scritto: >>>> >>>>> Hi, >>>>> I am trying to use PCBDDC for the vector based FEM. (Complex system, >>>>> double precision ) >>>>> My code can work well with *asm*, >>>>> petsc-3.21.1/petsc/arch-linux-c-opt/bin/mpirun -n 8 ./app -pc_type asm >>>>> -pc_asm_overlap 6 -ksp_converged_reason -ksp_view >>>>> -ksp_gmres_modifiedgramschmidt -ksp_gmres_restart 1500 -ksp_rtol 1e-8 >>>>> -ksp_monitor -ksp_max_it 100000 >>>>> >>>>> When I tried BDDC, it was stuck for solving the linear system (it can >>>>> not print anything for ksp_monitor). I did the conversion for matrix, >>>>> >>>>> * Mat J;* >>>>> * MatConvert(A, MATIS, MAT_INITIAL_MATRIX, &J);* >>>>> * KSPSetOperators(ksp, A, J);* >>>>> * MatDestroy(&J);* >>>>> * KSPSetInitialGuessNonzero(ksp, PETSC_TRUE);* >>>>> * KSPSetFromOptions(ksp);* >>>>> >>>>> petsc-3.21.1/petsc/arch-linux-c-debug/bin/mpirun -n 2 ./app -ksp_type >>>>> cg -pc_type bddc -ksp_monitor -mat_type is >>>>> >>>>> Do you have any suggestions? >>>>> >>>>> Thanks , >>>>> Xiaodong >>>>> >>>>> >>>>> On Mon, Jul 29, 2024 at 6:19?PM neil liu wrote: >>>>> >>>>>> When I compile with real data, >>>>>> it shows the attached error. >>>>>> >>>>>> The data file is in binary format, right? >>>>>> >>>>>> >>>>>> >>>>>> On Mon, Jul 29, 2024 at 5:36?PM Stefano Zampini < >>>>>> stefano.zampini at gmail.com> wrote: >>>>>> >>>>>>> Your PETSc installation is for complex, data is for real >>>>>>> >>>>>>> On Mon, Jul 29, 2024, 23:14 neil liu wrote: >>>>>>> >>>>>>>> I compiled Petsc with single precision. However, it is not >>>>>>>> converged with the data. Please see the attached file. On Mon, Jul 29, 2024 >>>>>>>> at 4: 25 PM Barry Smith wrote: This can >>>>>>>> happen if the data was stored in single precision >>>>>>>> ZjQcmQRYFpfptBannerStart >>>>>>>> This Message Is From an External Sender >>>>>>>> This message came from outside your organization. >>>>>>>> >>>>>>>> ZjQcmQRYFpfptBannerEnd >>>>>>>> I compiled Petsc with single precision. 
However, it is not >>>>>>>> converged with the data. >>>>>>>> >>>>>>>> Please see the attached file. >>>>>>>> >>>>>>>> On Mon, Jul 29, 2024 at 4:25?PM Barry Smith >>>>>>>> wrote: >>>>>>>> >>>>>>>>> >>>>>>>>> This can happen if the data was stored in single precision and >>>>>>>>> PETSc was built for double. >>>>>>>>> >>>>>>>>> >>>>>>>>> On Jul 29, 2024, at 3:55?PM, neil liu wrote: >>>>>>>>> >>>>>>>>> This Message Is From an External Sender >>>>>>>>> This message came from outside your organization. >>>>>>>>> Dear Petsc developers,, >>>>>>>>> >>>>>>>>> I am trying to run >>>>>>>>> https://urldefense.us/v3/__https://petsc.org/release/src/ksp/ksp/tutorials/ex72.c.html__;!!G_uCfscf7eWS!ezBBwLu_ohCVxEbEjF9KAyHOajlCzLcFYsXr_HXTqn0SGLkTERqqi0jm5qKxXga1bLCySC2Jmm_e5lHlwHtSAw$ >>>>>>>>> >>>>>>>>> with >>>>>>>>> >>>>>>>>> petsc-3.21.1/petsc/arch-linux-c-opt/bin/mpirun -n 2 ./ex72 -f >>>>>>>>> /Documents/PetscData/poisson_DMPLEX_32x32_16.dat -pc_type bddc -ksp_type cg >>>>>>>>> -ksp_norm_type natural -ksp_error_if_not_converged -mat_type is >>>>>>>>> >>>>>>>>> The file was downloaded and put in the directory PetscData. >>>>>>>>> >>>>>>>>> The error is shown as follows, >>>>>>>>> >>>>>>>>> 0]PETSC ERROR: --------------------- Error Message >>>>>>>>> -------------------------------------------------------------- >>>>>>>>> [0]PETSC ERROR: Read from file failed >>>>>>>>> [0]PETSC ERROR: Read past end of file >>>>>>>>> [0]PETSC ERROR: WARNING! There are unused option(s) set! Could be >>>>>>>>> the program crashed before usage or a spelling mistake, etc! >>>>>>>>> [0]PETSC ERROR: Option left: name:-ksp_error_if_not_converged >>>>>>>>> (no value) source: command line >>>>>>>>> [0]PETSC ERROR: Option left: name:-ksp_norm_type value: natural >>>>>>>>> source: command line >>>>>>>>> [0]PETSC ERROR: Option left: name:-ksp_type value: cg source: >>>>>>>>> command line >>>>>>>>> [0]PETSC ERROR: Option left: name:-pc_type value: bddc source: >>>>>>>>> command line >>>>>>>>> [0]PETSC ERROR: See https://urldefense.us/v3/__https://petsc.org/release/faq/__;!!G_uCfscf7eWS!ezBBwLu_ohCVxEbEjF9KAyHOajlCzLcFYsXr_HXTqn0SGLkTERqqi0jm5qKxXga1bLCySC2Jmm_e5lFYstYzVw$ >>>>>>>>> >>>>>>>>> for trouble shooting. 
>>>>>>>>> [0]PETSC ERROR: Petsc Release Version 3.21.1, unknown >>>>>>>>> [0]PETSC ERROR: ./ex72 on a arch-linux-c-opt named >>>>>>>>> Mon Jul 29 15:50:04 2024 >>>>>>>>> [0]PETSC ERROR: Configure options --with-cc=gcc --with-fc=gfortran >>>>>>>>> --with-cxx=g++ --download-fblaslapack --download-mpich >>>>>>>>> --with-scalar-type=complex --download-triangle --with-debugging=no >>>>>>>>> [0]PETSC ERROR: #1 PetscBinaryRead() at >>>>>>>>> /home/xxxxxx/Documents/petsc-3.21.1/petsc/src/sys/fileio/sysio.c:327 >>>>>>>>> [0]PETSC ERROR: #2 PetscViewerBinaryWriteReadAll() at >>>>>>>>> /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/sys/classes/viewer/impls/binary/binv.c:1077 >>>>>>>>> [0]PETSC ERROR: #3 PetscViewerBinaryReadAll() at >>>>>>>>> /home/xiaodong.liu/Documents/petsc-3.21.1/petsc/src/sys/classes/viewer/impls/binary/binv.c:1119 >>>>>>>>> [0]PETSC ERROR: #4 MatLoad_MPIAIJ_Binary() at >>>>>>>>> Documents/petsc-3.21.1/petsc/src/mat/impls/aij/mpi/mpiaij.c:3093 >>>>>>>>> [0]PETSC ERROR: #5 MatLoad_MPIAIJ() at >>>>>>>>> /Documents/petsc-3.21.1/petsc/src/mat/impls/aij/mpi/mpiaij.c:3035 >>>>>>>>> [0]PETSC ERROR: #6 MatLoad() at >>>>>>>>> /Documents/petsc-3.21.1/petsc/src/mat/interface/matrix.c:1344 >>>>>>>>> [0]PETSC ERROR: #7 MatLoad_IS() at >>>>>>>>> /Documents/petsc-3.21.1/petsc/src/mat/impls/is/matis.c:2575 >>>>>>>>> [0]PETSC ERROR: #8 MatLoad() at >>>>>>>>> /home/Documents/petsc-3.21.1/petsc/src/mat/interface/matrix.c:1344 >>>>>>>>> [0]PETSC ERROR: #9 main() at ex72.c:105 >>>>>>>>> [0]PETSC ERROR: PETSc Option Table entries: >>>>>>>>> [0]PETSC ERROR: -f >>>>>>>>> /Documents/PetscData/poisson_DMPLEX_32x32_16.dat (source: command >>>>>>>>> line) >>>>>>>>> [0]PETSC ERROR: -ksp_error_if_not_converged (source: command line) >>>>>>>>> [0]PETSC ERROR: -ksp_norm_type natural (source: command line) >>>>>>>>> [0]PETSC ERROR: -ksp_type cg (source: command line) >>>>>>>>> [0]PETSC ERROR: -mat_type is (source: command line) >>>>>>>>> [0]PETSC ERROR: -pc_type bddc (source: command line) >>>>>>>>> [0]PETSC ERROR: ----------------End of Error Message -------send >>>>>>>>> entire error message to petsc-maint at mcs.anl.gov---------- >>>>>>>>> application called MPI_Abort(MPI_COMM_SELF, 66) - process 0 >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>> >>>> -- >>>> Stefano >>>> >>> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From marco at kit.ac.jp Wed Jul 31 18:34:45 2024 From: marco at kit.ac.jp (Marco Seiz) Date: Thu, 1 Aug 2024 08:34:45 +0900 Subject: [petsc-users] Right DM for a particle network In-Reply-To: References: <54a9b9e5-691c-4535-bc49-5c00bc19a0df@kit.ac.jp> Message-ID: An HTML attachment was scrubbed... URL: From knepley at gmail.com Wed Jul 31 21:00:25 2024 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 31 Jul 2024 22:00:25 -0400 Subject: [petsc-users] Right DM for a particle network In-Reply-To: References: <54a9b9e5-691c-4535-bc49-5c00bc19a0df@kit.ac.jp> Message-ID: On Wed, Jul 31, 2024 at 7:34?PM Marco Seiz wrote: > Since PETSc allows for setting of non-local matrix entries I should > probably be able to set the "missing" entries. Would something like > > 1) Construct matrix A for conduction term > > 2) Calculate RHS as rhs = source(local information) + A * global vector > > 3) hand off to TS > > work then? Basically skipping over the DM in the first place and letting > the matrix handle the connectivity. > It sounds like it could work, but now I feel I do not understand your code enough to be certain. 
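For concreteness, a minimal sketch of what steps 2) and 3) of the proposal could look like with a plain AIJ matrix, where MatMult() performs the ghost exchange for the conduction term and the source stays purely local. The Source() routine, the AppCtx contents and the assembly of A are placeholders, and A would be reassembled whenever the connectivity changes (for example in a TSSetPreStep() callback, or by solving one step at a time):

  typedef struct {
    Mat A;                                     /* conduction matrix, rebuilt when connectivity changes */
  } AppCtx;

  static PetscErrorCode RHSFunction(TS ts, PetscReal t, Vec T, Vec F, void *ctx)
  {
    AppCtx            *user = (AppCtx *)ctx;
    const PetscScalar *u;
    PetscScalar       *f;
    PetscInt           i, nlocal;

    PetscFunctionBeginUser;
    PetscCall(MatMult(user->A, T, F));         /* conduction term: F = A * T, off-rank temperatures handled by the Mat */
    PetscCall(VecGetLocalSize(T, &nlocal));
    PetscCall(VecGetArrayRead(T, &u));
    PetscCall(VecGetArray(F, &f));
    for (i = 0; i < nlocal; i++) f[i] += Source(u[i]);  /* local source S(T_i); Source() is a placeholder */
    PetscCall(VecRestoreArrayRead(T, &u));
    PetscCall(VecRestoreArray(F, &f));
    PetscFunctionReturn(PETSC_SUCCESS);
  }

  TS     ts;
  AppCtx user;
  Vec    T;   /* global temperature vector, one dof per owned particle */

  /* assemble user.A with MatSetValues(..., ADD_VALUES) from the locally stored "forward" edges;
     off-process rows and columns are allowed and get summed during MatAssemblyBegin/End */
  PetscCall(TSCreate(PETSC_COMM_WORLD, &ts));
  PetscCall(TSSetRHSFunction(ts, NULL, RHSFunction, &user));
  PetscCall(TSSetFromOptions(ts));
  PetscCall(TSSolve(ts, T));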
Particle neighborhoods will determine row sparsity, but how will that be determined in parallel? You could move rows to different processes with MatGetSubMatrix and then query them, but this does not seem superior to sending particles. I think I will not understand until I see a small prototype. Thanks, Matt > Best regards, > > Marco > > > On 31.07.24 23:28, Matthew Knepley wrote: > > On Wed, Jul 31, 2024 at 10:08?AM Mark Adams mfadams at lbl.gov>> wrote: > > > > Just a thought, but perhaps he may want to use just sparse matrices, > AIJ. He Manages the connectivity And we manage ghost values. > > > > > > He is reconfiguring the neighborhood (row) each time, so you would > essentially create a new matrix at each step with different sparsity. It > would definitely function, but I wonder if he would have enough local > information to construct the rows? > > > > Thanks, > > > > Matt > > > > > > On Wed, Jul 31, 2024 at 6:25?AM Matthew Knepley > wrote: > > > > On Tue, Jul 30, 2024 at 11:32?PM Marco Seiz > wrote: > > > > Hello, > > > > maybe to clarify a bit further: I'd essentially like to > solve heat transport between particles only, without solving the transport > on my voxel mesh since there's a large scale difference between the voxel > size and the particle size and heat transport should be fast enough that > voxel resolution is unnecessary. Basically a discrete element method just > for heat transport. The whole motion/size change part is handled separately > on the voxel mesh. > > Based on the connectivity, I can make a graph (attached an > example from a 3D case, for description see [1]) and on each vertex > (particle) of the graph I want to account for source terms and conduction > along the edges. What I'd like to avoid is managing the exchange for > non-locally owned vertices during the solve (e.g. for RHS evaluation) > myself but rather have the DM do it with DMGlobalToLocal() and friends. > Thinking a bit further, I'd probably also want to associate some data with > the edges since that will enter the conduction term but stays constant > during a solve (think contact area between particles). > > > > Looking over the DMSwarm examples, the coupling between > particles is via the background mesh, so e.g. I can't say "loop over all > local particles and for each particle and its neighbours do X". I could use > the projection part to dump the the source terms from the particles to a > coarser background mesh but for the conduction term I don't see how I could > get a good approximation of the contact area on the background mesh without > having a mesh at a similar resolution as I already have, kinda destroying > the purpose of the whole thing. > > > > > > The point I was trying to make in my previous message is that > DMSwarm does not require a background mesh. The examples use one because > that is what we use to evaluate particle grouping. However, you have an > independent way to do this, so you do not need it. > > > > Second, the issue of replicated particles. DMSwarmMigrate allows > you to replicate particles, using the input flag. Of course, you would have > to manage removing particles you no longer want. > > > > Thanks, > > > > Matt > > > > > > [1] Points represent particles, black lines are edges, with > the color indicating which worker "owns" the particle, with 3 workers being > used and only a fraction of edges/vertices being displayed to keep it > somewhat tidy. The position of the points corresponds to the particles' x-y > position, with the z position being ignored. 
Particle ownership isn't done > via looking where the particle is on the voxel grid, but rather by dividing > the array of particle indices into subarrays, so e.g. particles [0-n/3) go > to the first worker and so on. Since my particles can span multiple workers > on the voxel grid this makes it much easier to update edge information with > one-sided communication. As you can see the "mesh" is quite irregular with > no nice boundary existing for connected particles owned by different > workers. > > > > Best regards, > > Marco > > > > On 30.07.24 22:56, Mark Adams wrote: > > > * they do have a vocal mesh, so perhaps They want DM Plex. > > > > > > * they want ghost particle communication, that also might > want a mesh > > > > > > * DM swarm does not have a notion of ghost particle, as > far as I know, but it could use one > > > > > > On Tue, Jul 30, 2024 at 7:58?AM Matthew Knepley < > knepley at gmail.com >> wrote: > > > > > > On Tue, Jul 30, 2024 at 12: 24 AM Marco Seiz kit. ac. jp> wrote: Hello, I'd like to solve transient heat transport at a > particle scale using TS, with the per-particle equation being something > like dT_i / dt = (S(T_i) + sum(F(T_j, > > > ZjQcmQRYFpfptBannerStart > > > __ > > > This Message Is From an External Sender > > > This message came from outside your organization. > > > > > > __ > > > ZjQcmQRYFpfptBannerEnd > > > On Tue, Jul 30, 2024 at 12:24?AM Marco Seiz < > marco at kit.ac.jp marco at kit.ac.jp>>> wrote: > > > > > > __ > > > Hello, I'd like to solve transient heat transport > at a particle scale using TS, with the per-particle equation being > something like dT_i / dt = (S(T_i) + sum(F(T_j, T_i), j connecting to i)) > with a nonlinear source term S and a conduction term > > > ZjQcmQRYFpfptBannerStart > > > __ > > > This Message Is From an External Sender > > > This message came from outside your organization. > > > > > > __ > > > ZjQcmQRYFpfptBannerEnd > > > > > > Hello, > > > > > > I'd like to solve transient heat transport at a > particle scale using TS, with the per-particle equation being something like > > > > > > dT_i / dt = (S(T_i) + sum(F(T_j, T_i), j > connecting to i)) > > > > > > with a nonlinear source term S and a conduction > term F. The particles can move, deform and grow/shrink/vanish on a voxel > grid, but for the temperature a particle-scale resolution should be > sufficient. The particles' connectivity will change during the simulation, > but is assumed constant during a single timestep. I have a data structure > tracking the particles' connectivity, so I can say which particles should > conduct heat to each other. I exploit symmetry and so only save the > "forward" edges, so e.g. for touching particles 1->2->3, I only store [[2], > [3], []], from which the full list [[2], [1, 3], [2]] could be > reconstructed but which I'd like to avoid. In parallel each worker would > own some of the particle data, so e.g. for the 1->2->3 example and 2 > workers, worker 0 could own [[2]] and worker 1 [[3],[]]. > > > > > > Looking over the DM variants, either DMNetwork or > some manual mesh build with DMPlex seem suited for this. I'd especially > like it if the adjacency information is handled by the DM automagically > based on the edges so I don't have to deal with ghost particle > communication myself. I already tried something basic with DMNetwork, > though for some reason the offsets I get from DMNetworkGetGlobalVecOffset() > are larger than the actual network. I've attached what I have so far but > comparing to e.g. 
> > >
> > > To me, this sounds like you should build it with DMSwarm. Why?
> > >
> > > 1) We only have vertices and edges, so a mesh does not buy us anything.
> > >
> > > 2) You are managing the parallel particle connectivity, so DMPlex topology is not buying us anything. Unless I am misunderstanding.
> > >
> > > 3) DMNetwork has a lot of support for vertices with different characteristics. Your particles all have the same attributes, so this is unnecessary.
> > >
> > > How would you set this up?
> > >
> > > 1) Declare all particle attributes. There are many Swarm examples, but say ex6, which simulates particles moving under a central force.
> > >
> > > 2) That example decides when to move particles using a parallel background mesh. However, you know which particles you want to move, so you just change the _rank_ field to the new rank and call DMSwarmMigrate() with migration type _basic_.
> > >
> > > It should be straightforward to set up a tiny example moving around a few particles to see if it does everything you want.
> > >
> > >   Thanks,
> > >
> > >      Matt
> > >
> > > Best regards,
> > > Marco
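A minimal sketch of the swarm setup described here (register the per-particle fields up front, reassign ownership by writing the built-in rank field, then migrate) might look like the following. It assumes the default rank field is named "DMSwarm_rank" and stored as PetscInt, and that rank-driven ("basic") migration is in effect for a DMSWARM_BASIC swarm; the "temperature" field and the next-rank reassignment rule are placeholders, not anything from this thread:

/* Sketch only: a tiny DMSwarm program that registers one per-particle
   field, reassigns ownership by writing the built-in rank field, and
   migrates.  "temperature" and the next-rank rule are invented here. */
#include <petscdmswarm.h>

int main(int argc, char **argv)
{
  DM          sw;
  PetscInt    n, *owner;
  PetscMPIInt rank, size;

  PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));
  PetscCallMPI(MPI_Comm_rank(PETSC_COMM_WORLD, &rank));
  PetscCallMPI(MPI_Comm_size(PETSC_COMM_WORLD, &size));

  PetscCall(DMCreate(PETSC_COMM_WORLD, &sw));
  PetscCall(DMSetType(sw, DMSWARM));
  PetscCall(DMSwarmSetType(sw, DMSWARM_BASIC));
  PetscCall(DMSwarmInitializeFieldRegister(sw));
  PetscCall(DMSwarmRegisterPetscDatatypeField(sw, "temperature", 1, PETSC_REAL));
  PetscCall(DMSwarmFinalizeFieldRegister(sw));
  PetscCall(DMSwarmSetLocalSizes(sw, 8, 4)); /* 8 particles per rank, buffer for 4 more */

  /* Decide new owners; here every particle is simply handed to the next rank. */
  PetscCall(DMSwarmGetLocalSize(sw, &n));
  PetscCall(DMSwarmGetField(sw, "DMSwarm_rank", NULL, NULL, (void **)&owner));
  for (PetscInt p = 0; p < n; ++p) owner[p] = (rank + 1) % size;
  PetscCall(DMSwarmRestoreField(sw, "DMSwarm_rank", NULL, NULL, (void **)&owner));

  /* Ship particles to their new owners; PETSC_TRUE removes the sent copies. */
  PetscCall(DMSwarmMigrate(sw, PETSC_TRUE));

  PetscCall(DMDestroy(&sw));
  PetscCall(PetscFinalize());
  return 0;
}

The remove_sent_points flag is also the knob for replication (keeping a ghost copy on the sender) versus plain migration, as mentioned earlier in the thread.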
--
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener

https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!YL_1KA8S6hXo_7KadaomeVGRJ3rcpPCWSKldVHeN9BX4QcQQVawkdT5T-vOLAz--lVQxj5WxRhOwA_i_lNEf$
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From knepley at gmail.com  Wed Jul 31 21:09:52 2024
From: knepley at gmail.com (Matthew Knepley)
Date: Wed, 31 Jul 2024 22:09:52 -0400
Subject: [petsc-users] How to combine different element types into a single DMPlex?
In-Reply-To: <9021c53e-18af-428a-978a-54a3c7371378@giref.ulaval.ca>
References: <6e78845e-2054-92b1-d6db-2c0820c05b64@giref.ulaval.ca> <9021c53e-18af-428a-978a-54a3c7371378@giref.ulaval.ca>
Message-ID:

On Wed, Jul 31, 2024 at 4:16 PM Eric Chamberland <Eric.Chamberland at giref.ulaval.ca> wrote:

> Hi Vaclav,
>
> Okay, I am coming back with this question after some time... ;)
>
> I am just wondering if it is now possible to call DMPlexBuildFromCellListParallel or something else, to build a mesh that combines different element types into a single DMPlex (in parallel of course)?

1) Meshes with different cell types are fully functional, and some applications have been using them for a while now.

2) The Firedrake I/O methods support these hybrid meshes.

3) You can, for example, read in a GMsh or ExodusII file with different cell types. However, there is no direct interface like DMPlexBuildFromCellListParallel(). If you plan on creating meshes by hand, I can build that for you. No one so far has wanted that. Rather they want to read in a mesh in some format, or alter a base mesh by inserting other cell types.

So, what is the motivating use case?

  Thanks,

     Matt

> Thanks,
> Eric
>
> On 2021-09-23 11:30, Hapla Vaclav wrote:
> Note there will soon be a generalization of DMPlexBuildFromCellListParallel() around, as a side product of our current collaborative efforts with the Firedrake guys. It will take a PetscSection instead of relying on the blocksize [which is indeed always constant for the given dataset]. Stay tuned.
>
> https://urldefense.us/v3/__https://gitlab.com/petsc/petsc/-/merge_requests/4350__;!!G_uCfscf7eWS!a7Z4JG-PH0CquDikXpywg-JEECEGlEIcXI5LzffVcIr4qqITdSAQJibbguyeQOCvW6DkzTDDbP58oBuRbcJg$
>
> Thanks,
> Vaclav
>
> On 23 Sep 2021, at 16:53, Eric Chamberland <Eric.Chamberland at giref.ulaval.ca> wrote:
>
> Hi,
>
> oh, that's great news!
>
> In our case we have our home-made file format, invariant to the number of processes (thanks to MPI_File_set_view), that uses collective, asynchronous MPI I/O native calls for unstructured hybrid meshes and fields.
>
> So our needs are not for reading meshes but only to fill a hybrid DMPlex with DMPlexBuildFromCellListParallel (or something else to come?)... to exploit PETSc partitioners and parallel overlap computation...
>
> Thanks for the follow-up! :)
>
> Eric
>
> On 2021-09-22 7:20 a.m., Matthew Knepley wrote:
> On Wed, Sep 22, 2021 at 3:04 AM Karin&NiKo wrote:
>> Dear Matthew,
>>
>> This is great news!
>> For my part, I would be mostly interested in the parallel input interface. Sorry for that...
>> Indeed, in our application, we already have a parallel mesh data structure that supports hybrid meshes with parallel I/O and distribution (based on the MED format). We would like to use a DMPlex to make parallel mesh adaptation.
>> As a matter of fact, all our meshes are in the MED format. We could also contribute to extend the interface of DMPlex with MED (if you consider it could be useful).
>
> An MED interface does exist. I stopped using it for two reasons:
>
> 1) The code was not portable and the build was failing on different architectures. I had to manually fix it.
>
> 2) The boundary markers did not provide global information, so that parallel reading was much harder.
>
> Feel free to update my MED reader to a better design.
>
>   Thanks,
>
>      Matt
>
>> Best regards,
>> Nicolas
>>
>> On Tue, Sep 21, 2021 at 9:56 PM, Matthew Knepley wrote:
>>
>>> On Tue, Sep 21, 2021 at 10:31 AM Karin&NiKo wrote:
>>>
>>>> Dear Eric, dear Matthew,
>>>>
>>>> I share Eric's desire to be able to manipulate meshes composed of different types of elements in a PETSc DMPlex.
>>>> Since this discussion, is there anything new on this feature for the DMPlex object or am I missing something?
>>>
>>> Thanks for finding this!
>>>
>>> Okay, I did a rewrite of the Plex internals this summer. It should now be possible to interpolate a mesh with any number of cell types, partition it, redistribute it, and many other manipulations.
>>>
>>> You can read in some formats that support hybrid meshes. If you let me know how you plan to read it in, we can make it work. Right now, I don't want to make input interfaces that no one will ever use. We have a project, joint with Firedrake, to finalize parallel I/O. This will make parallel reading and writing for checkpointing possible, supporting topology, geometry, fields and layouts, for many meshes in one HDF5 file. I think we will finish in November.
>>>
>>>   Thanks,
>>>
>>>      Matt
>>>
>>>> Thanks,
>>>> Nicolas
>>>>
>>>> On Wed, Jul 21, 2021 at 4:25 AM, Eric Chamberland <Eric.Chamberland at giref.ulaval.ca> wrote:
>>>>
>>>>> Hi,
>>>>> On 2021-07-14 3:14 p.m., Matthew Knepley wrote:
>>>>> On Wed, Jul 14, 2021 at 1:25 PM Eric Chamberland <Eric.Chamberland at giref.ulaval.ca> wrote:
>>>>>
>>>>>> Hi,
>>>>>>
>>>>>> while playing with DMPlexBuildFromCellListParallel, I noticed we have to specify "numCorners", which is a fixed value and then gives a fixed number of nodes for a series of elements.
>>>>>>
>>>>>> How can I then add, for example, triangles and quadrangles into a DMPlex?
>>>>>
>>>>> You can't with that function. It would be much more complicated if you could, and I am not sure it is worth it for that function. The reason is that you would need index information to offset into the connectivity list, and that would need to be replicated to some extent so that all processes know what the others are doing. Possible, but complicated.
>>>>>
>>>>> Maybe I can help suggest something for what you are trying to do?
>>>>>
>>>>> Yes: we are trying to partition our parallel mesh with PETSc functions. The mesh has been read in parallel so each process owns a part of it, but we have to manage mixed element types.
>>>>> When we directly use ParMETIS_V3_PartMeshKway, we give two arrays to describe the elements, which allows mixed elements.
>>>>>
>>>>> So, how would I read my mixed mesh in parallel and give it to PETSc DMPlex so I can use a PetscPartitioner with DMPlexDistribute?
>>>>>
>>>>> A second goal we have is to use PETSc to compute the overlap, which is something I can't find in ParMETIS (and any other partitioning library?)
>>>>>
>>>>> Thanks,
>>>>> Eric
>>>>>
>>>>>   Thanks,
>>>>>
>>>>>      Matt
>>>>>
>>>>>> Thanks,
>>>>>> Eric
>
> --
> Eric Chamberland, ing., M. Ing
> Professionnel de recherche
> GIREF/Université Laval
> (418) 656-2131 poste 41 22 42

--
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener

https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!a7Z4JG-PH0CquDikXpywg-JEECEGlEIcXI5LzffVcIr4qqITdSAQJibbguyeQOCvW6DkzTDDbP58oP6W0x8e$
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
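For the workflow discussed in this thread (read a possibly hybrid mesh from a format PETSc already understands, distribute it with a PetscPartitioner, then compute an overlap) a rough sketch might be the following. "mesh.msh" and the one-cell overlap are placeholders, and the five-argument DMPlexCreateFromFile shown here is the signature of recent PETSc releases:

/* Sketch: read a (possibly mixed-cell-type) mesh, distribute it, add overlap. */
#include <petscdmplex.h>

int main(int argc, char **argv)
{
  DM               dm, dmDist = NULL, dmOverlap = NULL;
  PetscPartitioner part;

  PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));
  PetscCall(DMPlexCreateFromFile(PETSC_COMM_WORLD, "mesh.msh", "mesh", PETSC_TRUE, &dm));

  PetscCall(DMPlexGetPartitioner(dm, &part));
  PetscCall(PetscPartitionerSetFromOptions(part)); /* e.g. -petscpartitioner_type parmetis */

  PetscCall(DMPlexDistribute(dm, 0, NULL, &dmDist)); /* 0 = distribute without overlap */
  if (dmDist) {
    PetscCall(DMDestroy(&dm));
    dm = dmDist;
  }
  PetscCall(DMPlexDistributeOverlap(dm, 1, NULL, &dmOverlap)); /* add one layer of overlap */
  if (dmOverlap) {
    PetscCall(DMDestroy(&dm));
    dm = dmOverlap;
  }

  PetscCall(DMViewFromOptions(dm, NULL, "-dm_view"));
  PetscCall(DMDestroy(&dm));
  PetscCall(PetscFinalize());
  return 0;
}

Passing a nonzero overlap directly to DMPlexDistribute() is the one-call alternative when the overlap is wanted from the start.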