From Pierre.LEDAC at cea.fr Mon Sep 1 04:16:32 2025
From: Pierre.LEDAC at cea.fr (LEDAC Pierre)
Date: Mon, 1 Sep 2025 09:16:32 +0000
Subject: [petsc-users] [MPI][GPU]
In-Reply-To: <84040E99-73D3-4F3F-BDF3-C942CD51FD92@petsc.dev>
References: <84040E99-73D3-4F3F-BDF3-C942CD51FD92@petsc.dev>
Message-ID: <1d651da6ee084a42958c1b4180c7bbe2@cea.fr>

Sure,

For instance, in the graphic below there is the following sequence:

[inline profiler screenshot scrubbed]

Blue: PCApply/spmv_fixup_kernel_v2
Red: D2H copy (8000 bytes)
Grey: MPI_Irecv
Grey: MPI_Isend
Grey: MPI_Waitall
Green: H2D copy (8000 bytes)
Blue: PCApply/csmv_v2_partition_kernel

And this is happening in hypre code. I replaced boomeramg by gamg, and can confirm that there is now only a D2D copy during the MPI calls. So the issue is related to hypre (using 2.33).

I double checked that the PETSc configure correctly enables --enable-gpu-aware-mpi in hypre during the build, so I think I should contact the hypre team now, and switch to gamg for the moment.

Thanks for your patience,

Pierre LEDAC
Commissariat à l'énergie atomique et aux énergies alternatives
Centre de SACLAY
DES/ISAS/DM2S/SGLS/LCAN
Bâtiment 451, point courrier n°41
F-91191 Gif-sur-Yvette
+33 1 69 08 04 03
+33 6 83 42 05 79

________________________________
From: Barry Smith
Sent: Monday, September 1, 2025 00:32:23
To: LEDAC Pierre
Cc: petsc-users at mcs.anl.gov
Subject: Re: [petsc-users] [MPI][GPU]

Can you pinpoint the MPI calls (and the routines in PETSc or hypre that they are in) that are not using CUDA-aware message passing? That is, inside the MPI call, they are copying to host memory and doing the needed inter-process communication from there? I do not understand the graphics you have sent.

Barry

Or is it possible the buffers passed to MPI are not on the GPU and so naturally do the MPI from host memory? If so, where?

On Aug 31, 2025, at 1:30 PM, LEDAC Pierre wrote:

Ok, I just tried --enable-gpu-aware-mpi passed to hypre; HYPRE_config.h now defines HYPRE_USING_GPU_AWARE_MPI 1.

But there is still no D2D copy near the MPI calls in the ex46.c example. Probably an obvious thing I forgot during PETSc configure, but I don't see it...

Pierre LEDAC
CEA, Centre de SACLAY

________________________________
From: LEDAC Pierre
Sent: Sunday, August 31, 2025 19:13:36
To: Barry Smith
Cc: petsc-users at mcs.anl.gov
Subject: RE: [petsc-users] [MPI][GPU]

Barry,

It solved the unrecognized option, but MPI messages are still exchanged through the host.

I switched to a simpler test case that does not read a matrix (src/ksp/ksp/tutorials/ex46.c) but get the same behaviour.

In the Nsys profile for ex46, the MPI synchronizations occur during PCApply, so now I am wondering if the issue is related to the fact that hypre is not configured/enabled with GPU-aware MPI in the PETSc build. I will give --enable-gpu-aware-mpi passed to hypre a try.

Do you know an example in PETSc which specifically benchmarks with/without CUDA-aware MPI?

Pierre LEDAC
CEA, Centre de SACLAY
________________________________
From: Barry Smith
Sent: Sunday, August 31, 2025 16:33:38
To: LEDAC Pierre
Cc: petsc-users at mcs.anl.gov
Subject: Re: [petsc-users] [MPI][GPU]

Ahh, that ex10.c is missing a VecSetFromOptions() call before the VecLoad() and friends. In contrast, the matrix has a MatSetFromOptions(). Can you try adding it to ex10.c and see if that resolves the problem with ex10.c (and may be a path forward for your code)?

Barry

On Aug 31, 2025, at 4:32 AM, LEDAC Pierre wrote:

Yes, but I was surprised it was not used, so I removed it (same for -vec_type mpicuda):

mpirun -np 2 ./ex10 2 -f Matrix_3133717_rows_1_cpus.petsc -ksp_view -log_view -ksp_monitor -ksp_type cg -pc_type hypre -pc_hypre_type boomeramg -pc_hypre_boomeramg_strong_threshold 0.7 -mat_type aijcusparse -vec_type cuda
...
WARNING! There are options you set that were not used!
WARNING! could be spelling mistake, etc!
There is one unused database option. It is:
Option left: name:-vec_type value: cuda source: command line

Pierre LEDAC
CEA, Centre de SACLAY

________________________________
From: Barry Smith
Sent: Saturday, August 30, 2025 21:47:07
To: LEDAC Pierre
Cc: petsc-users at mcs.anl.gov
Subject: Re: [petsc-users] [MPI][GPU]

Did you try the additional option -vec_type cuda with ex10.c?

On Aug 30, 2025, at 1:16 PM, LEDAC Pierre wrote:

Hello,

My code is built with PETSc 3.23 + OpenMPI 4.1.6 (CUDA support enabled), and profiling indicates that MPI communications are done between GPUs everywhere in the code except in the PETSc part, where D2H transfers occur.

I reproduced the PETSc issue with the example under src/ksp/ksp/tutorials/ex10 on 2 MPI ranks. See the output in ex10.log. Also below is the Nsys profiling of ex10, with D2H and H2D copies before/after the MPI calls.

Thanks for your help,

Pierre LEDAC
CEA, Centre de SACLAY

-------------- next part --------------
A non-text attachment was scrubbed: pastedImage.png (image/png, 289873 bytes)
-------------- next part --------------
A non-text attachment was scrubbed: pastedImage.png (image/png, 93240 bytes)
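For reference, Barry's suggested fix amounts to something like the following minimal sketch (assumed code, not the actual ex10.c source; the viewer setup is abbreviated):

    Vec         b;
    PetscViewer fd;
    /* assumed: binary viewer opened on the input file, as ex10.c already does for the matrix */
    PetscCall(PetscViewerBinaryOpen(PETSC_COMM_WORLD, "Matrix_3133717_rows_1_cpus.petsc", FILE_MODE_READ, &fd));
    PetscCall(VecCreate(PETSC_COMM_WORLD, &b));
    PetscCall(VecSetFromOptions(b)); /* without this, -vec_type cuda is never consulted for the vector */
    PetscCall(VecLoad(b, fd));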
From eirik.hoydalsvik at sintef.no Thu Sep 11 18:25:20 2025
From: eirik.hoydalsvik at sintef.no (Eirik Jaccheri Høydalsvik)
Date: Thu, 11 Sep 2025 23:25:20 +0000
Subject: [petsc-users] Custom matrix coloring for finite difference Jacobian

Hi,

I have made a two-phase flow code which computes the motion of two phases in one dimension, where the phases are allowed to intermix. This code relies on a finite difference Jacobian computed using the standard coloring I get from the DMDA object:

    da = PETSc.DMDA().create(
        dim=(N_vertical,),
        dof=3,
        stencil_width=1,
    )

I now want to add a variable for the interphase height L_z, in addition to a velocity u_v giving the velocity of the vapor flowing into the interface. The interface will move throughout the grid, meaning that these two variables will not be coupled to a fixed set of grid cells, but will be coupled to different sets of three grid cells throughout the simulation.

Questions:

1. Is it possible to create a custom coloring to efficiently compute the finite difference Jacobian including the interphase height and vapor velocity?

2. How do I revert to computing the full finite difference Jacobian, for the purpose of testing whether the interphase model works?

Best regards,
Eirik Jaccheri Høydalsvik
SINTEF ER and NTNU EPT

From bramkamp at nsc.liu.se Fri Sep 12 11:08:04 2025
From: bramkamp at nsc.liu.se (Frank Bramkamp)
Date: Fri, 12 Sep 2025 18:08:04 +0200
Subject: [petsc-users] CHANGE OF ILU LEVEL DURING COMPUTATION
Message-ID: <84F3B437-8740-4778-A8B6-22CFD6179ED7@nsc.liu.se>

Dear PETSc Team,

I have the following question.

During the runtime I would like to change the level of ILU fill-in, e.g. from 0 to 1, and sometimes back to 0. Is it sufficient simply to set, in Fortran,

    call PCFactorSetLevels(IMP_CTX%PC_METHOD, ILU_LEVELS, IERROR)

and PETSc will see whether it needs to change things and set up its data structures again if the ILU fill-in changes, or do I have to destroy the previous preconditioner context and set up a completely new one?

I typically set other parameters for ILU as well:

    CALL PCFactorSetPivotInBlocks(IMP_CTX%PC_METHOD, PETSC_TRUE, IERROR)
    CALL PCFactorSetAllowDiagonalFill(IMP_CTX%PC_METHOD, PETSC_TRUE, IERROR)

If I change the ILU fill-in, would I have to set those again as well?

The problem is that it is hard to know in advance how many iterations a problem needs and whether it is worth using e.g. ILU(1) over ILU(0). If the number of iterations is low, then ILU(1) is a waste of time. Therefore I want to have a window of, e.g., 10-15 nonlinear iterations in which I store the recent GMRES iteration counts, e.g. using ILU(0). Then I can compute an average and use it as an indicator: if the average iteration count is below, say, 25 or 30, I keep ILU(0); if it is higher, ILU(1) could be used.

At least that is the idea: to control the number of levels a bit more dynamically, based on the number of iterations it takes. Maybe later one can also use timings of PCSetUp and PCApply to see how much time each section takes, to refine the approach a bit.

Greetings, Frank Bramkamp

From mfadams at lbl.gov Mon Sep 15 06:18:17 2025
From: mfadams at lbl.gov (Mark Adams)
Date: Mon, 15 Sep 2025 07:18:17 -0400
Subject: [petsc-users] CHANGE OF ILU LEVEL DURING COMPUTATION
In-Reply-To: <84F3B437-8740-4778-A8B6-22CFD6179ED7@nsc.liu.se>
References: <84F3B437-8740-4778-A8B6-22CFD6179ED7@nsc.liu.se>

I think you need to call PCReset and redo your constructor code. There is not much to salvage if you change the fill.

Mark

On Fri, Sep 12, 2025 at 12:08 PM Frank Bramkamp wrote:

> During the runtime I would like to change the level of ILU fill-in, e.g. from 0 to 1, and sometimes back to 0. [...]
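In Fortran, Mark's suggestion would look roughly like the following sketch (using the names from Frank's message; whether the factor options survive PCReset is not spelled out in the thread, so they are conservatively re-applied):

    call PCReset(IMP_CTX%PC_METHOD, IERROR)
    call PCFactorSetLevels(IMP_CTX%PC_METHOD, ILU_LEVELS, IERROR)
    ! re-apply the other factor options after the reset, to be safe
    call PCFactorSetPivotInBlocks(IMP_CTX%PC_METHOD, PETSC_TRUE, IERROR)
    call PCFactorSetAllowDiagonalFill(IMP_CTX%PC_METHOD, PETSC_TRUE, IERROR)
    call PCSetUp(IMP_CTX%PC_METHOD, IERROR)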
From nicolas.tardieu at edf.fr Mon Sep 15 08:52:32 2025
From: nicolas.tardieu at edf.fr (TARDIEU Nicolas)
Date: Mon, 15 Sep 2025 13:52:32 +0000
Subject: [petsc-users] Fieldsplit and Fortran

Dear PETSc Team,

I am having difficulty upgrading the interface of our code to the new Fortran API (I am currently using v3.23.6). For instance, I cannot find a Fortran example demonstrating how to use PCFieldSplitGetSubKSP. I have therefore modified ksp/ksp/tests/ex9f.F90 to demonstrate the issues I am facing.

I have provided some explanations in the test, but I would like to highlight one key point: the splits are generated automatically in the code based on the user's preferences. Therefore, I need to dynamically create sub-KSPs that are stored in a dedicated data structure (this explains the manipulations I am trying to set up in the test).

Regards,
Nicolas
--
Nicolas Tardieu
Ing PhD Computational Mechanics
EDF - R&D Dpt ERMES
PARIS-SACLAY, FRANCE
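For what it is worth, under the new bindings the call Nicolas is after should look roughly like the sketch below. This is an assumption rather than a verified example: the exact pointer/array semantics of PCFieldSplitGetSubKSP in the current Fortran API are precisely what should be checked against the main branch, as suggested later in this thread.

    KSP, pointer :: subksp(:)
    PetscInt       nsplit
    PetscErrorCode ierr

    ! the sub-KSPs only exist after the preconditioner is set up
    call PCSetUp(pc, ierr)
    call PCFieldSplitGetSubKSP(pc, nsplit, subksp, ierr)
    ! subksp(1:nsplit) can now be stored in a dedicated data structure
    ! and configured individually, e.g.:
    call KSPSetType(subksp(1), KSPGMRES, ierr)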
-------------- next part --------------
A non-text attachment was scrubbed: ex9f.F90 (application/octet-stream, 6834 bytes)

From jroman at dsic.upv.es Mon Sep 15 09:27:24 2025
From: jroman at dsic.upv.es (Jose E. Roman)
Date: Mon, 15 Sep 2025 14:27:24 +0000
Subject: [petsc-users] Fieldsplit and Fortran
Message-ID: <1B9E265A-956F-4D0A-BDDB-E43D27E2C9DF@dsic.upv.es>

I would recommend trying the development version (main branch), since it contains many fixes/changes to the Fortran bindings. In particular, I think the issue you are reporting does not appear in main.

Jose

> On 15 Sep 2025, at 15:52, TARDIEU Nicolas via petsc-users wrote:
> [...]
From lucia.barandiaran at upc.edu Mon Sep 15 09:45:52 2025
From: lucia.barandiaran at upc.edu (Lucia Barandiaran)
Date: Mon, 15 Sep 2025 16:45:52 +0200
Subject: [petsc-users] Nested FieldSplit Implementation for FE using Fortran
Message-ID: <8e6ce1b8-feca-46a4-9e85-81793fb27688@upc.edu>

Dear PETSc community,

Our research group is employing the PETSc libraries within a Fortran-based finite element (FE) code to solve large-scale, coupled multi-physics problems. The systems of interest typically involve more than two primary fields, specifically displacements (U), water pressure (W), gas pressure (G), and temperature (T).

We have recently integrated the FieldSplit preconditioner into our solver pipeline and have achieved successful results on a 3D gas injection reservoir simulation model comprising approximately 130,000 nodes. The indices for each field variable were defined using ISCreateGeneral, and the splits were established via PCFieldSplitSetIS (one split per variable). The following command-line options were utilized effectively for this case:

-ksp_type gmres -ksp_gmres_modifiedgramschmidt \
-pc_type fieldsplit -pc_fieldsplit_type schur \
-pc_fieldsplit_schur_fact_type lower \
-pc_fieldsplit_schur_precondition selfp \
-fieldsplit_U_ksp_type preonly \
-fieldsplit_U_pc_type hypre \
-fieldsplit_U_pc_hypre_type boomeramg \
-fieldsplit_U_pc_hypre_boomeramg_coarsen_type PMIS \
-fieldsplit_U_pc_hypre_boomeramg_strong_threshold 0.6 \
-fieldsplit_U_pc_hypre_boomeramg_max_levels 25 \
-fieldsplit_G_ksp_type preonly \
-fieldsplit_G_pc_type hypre \
-fieldsplit_G_pc_hypre_type boomeramg \
-fieldsplit_G_pc_hypre_boomeramg_coarsen_type PMIS \
-fieldsplit_G_pc_hypre_boomeramg_strong_threshold 0.6 \
-fieldsplit_G_pc_hypre_boomeramg_max_levels 25

Subsequently, we have also configured a case with three primary fields by combining W and G into a single monolithic P (pressure) variable, which was solved using a similar approach.

We are now interested in advancing our preconditioning strategy by implementing a nested (or recursive) Schur complement decomposition. Our objective is to define a hierarchical structure where, for instance, an outer Schur complement is first constructed, and then the primary split itself is solved using an inner Schur complement preconditioner.

We have encountered an example of this nested configuration in presentation slides for the PETSc example ex31 (e.g., from [1]), which outlines a command-line structure similar to:

-ksp_type fgmres -pc_type fieldsplit -pc_fieldsplit_type schur
-pc_fieldsplit_0_fields 0,1 -pc_fieldsplit_1_fields 2
-pc_fieldsplit_schur_factorization_type upper
-fieldsplit_0_ksp_type fgmres -fieldsplit_0_pc_type fieldsplit
-fieldsplit_0_pc_fieldsplit_type schur
-fieldsplit_0_pc_fieldsplit_schur_factorization_type full
-fieldsplit_0_fieldsplit_velocity_ksp_type preonly
-fieldsplit_0_fieldsplit_velocity_pc_type lu
-fieldsplit_0_fieldsplit_pressure_ksp_rtol 1e-10
-fieldsplit_0_fieldsplit_pressure_pc_type jacobi
-fieldsplit_temperature_ksp_type gmres
-fieldsplit_temperature_pc_type lsc

We are currently seeking guidance on how to implement this nested FieldSplit functionality within our Fortran code. We were wondering if the source code for this specific ex31 example is publicly available, or if you could direct us to any other Fortran examples that demonstrate the implementation of a recursive Schur complement preconditioner.

Any advice, references, or code examples you could provide would be immensely valuable to our research.
Thank you very much for your time and assistance.

Sincerely,

Lucía Barandiarán
Scientific software developer - Dracsys
Collaborator at MECMAT group - Universitat Politècnica de Catalunya (UPC)

Reference:
[1] Knepley, M. G. (2016). Advanced PETSc Preconditioning. Presentation. Retrieved from https://cse.buffalo.edu/~knepley/presentations/PresMIT2016.pdf

From nicolas.tardieu at edf.fr Tue Sep 16 11:02:30 2025
From: nicolas.tardieu at edf.fr (TARDIEU Nicolas)
Date: Tue, 16 Sep 2025 16:02:30 +0000
Subject: [petsc-users] Fieldsplit and Fortran
In-Reply-To: <1B9E265A-956F-4D0A-BDDB-E43D27E2C9DF@dsic.upv.es>
References: <1B9E265A-956F-4D0A-BDDB-E43D27E2C9DF@dsic.upv.es>

Dear Jose,

Thank you very much for your advice. The main branch indeed solved the problem I pointed out. I can go on to the next one ;-)

Regards,
Nicolas
--
Nicolas Tardieu
Ing PhD Computational Mechanics
EDF - R&D Dpt ERMES
PARIS-SACLAY, FRANCE

________________________________
From: jroman at dsic.upv.es
Sent: Monday, September 15, 2025 16:27
To: TARDIEU Nicolas
Cc: petsc-users at mcs.anl.gov
Subject: Re: [petsc-users] Fieldsplit and Fortran

I would recommend trying the development version (main branch), since it contains many fixes/changes to the Fortran bindings. In particular, I think the issue you are reporting does not appear in main.

Jose
> [...]
From bsmith at petsc.dev Wed Sep 17 14:10:21 2025
From: bsmith at petsc.dev (Barry Smith)
Date: Wed, 17 Sep 2025 15:10:21 -0400
Subject: [petsc-users] Custom matrix coloring for finite difference Jacobian

Sorry, it looks like no one has gotten back to you yet.

We don't have a mechanism for providing your own CreateColoring() for a DM at this time. The DMShellSetXXX() functionality appears to be the best way to provide it. This would require adding a C DMShellSetCreateColoring() function and then mirroring the functionality over to petsc4py, as is done for the other DMShellSetXXX() functions.

As a side note, the various DMShellSetXXX() functions currently have an error checker that restricts their use to a DMSHELL. I am not sure that is needed or desirable in the PETSc source, and I suggest we remove it, since it is sometimes legitimate to overwrite the default DM operation, as in your situation.

If this is something you would benefit from, consider making an MR that adds this functionality, or we can try to provide it.

Barry
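Eirik's second question is not addressed above. For what it is worth, PETSc's -snes_fd option computes the full (dense) finite-difference Jacobian without any coloring, which is the usual way to test whether a model is correct before worrying about efficiency. A minimal petsc4py sketch (the SNES setup itself is assumed):

    # run as: python driver.py -snes_fd
    import sys
    import petsc4py
    petsc4py.init(sys.argv)   # forwards -snes_fd to PETSc
    from petsc4py import PETSc

    snes = PETSc.SNES().create()
    # ... attach the residual function, DMDA, and vectors as in the existing code ...
    snes.setFromOptions()     # with -snes_fd, the Jacobian is built by brute-force differencing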
> On Sep 11, 2025, at 7:25 PM, Eirik Jaccheri Høydalsvik via petsc-users wrote:
> [...]

From RIOUSSEJ at erau.edu Thu Sep 18 14:44:46 2025
From: RIOUSSEJ at erau.edu (Riousset, Jeremy)
Date: Thu, 18 Sep 2025 19:44:46 +0000
Subject: [petsc-users] Reading -options_file
Message-ID: <30ECC67D-C4B1-4E5E-B0A8-C1EB7169307F@erau.edu>

Hi,

I'm updating a code from PETSc 3.4 to run with the latest PETSc. Here is my status:

* I read the parameters via: mpiexec -n 1 bin/./M4 --options_file input/main.in
* I'm not opening main.in correctly
* None of the options are retrieved by, e.g., PetscOptionsGetReal

Do you have a suggestion to test what file the code is actually trying to access?

Thanks

Jeremy A. Riousset, Ph.D. (he, him, his)
Associate Professor of Engineering Physics
Physical Science
1 Aerospace Boulevard
Daytona Beach, FL 32114
+1 (386) 226-6407
jeremy.riousset at erau.edu
https://orcid.org/0000-0003-1516-5337
Embry-Riddle Aeronautical University
Florida | Arizona | Worldwide

From balay.anl at fastmail.org Thu Sep 18 14:59:32 2025
From: balay.anl at fastmail.org (Satish Balay)
Date: Thu, 18 Sep 2025 14:59:32 -0500 (CDT)
Subject: [petsc-users] Reading -options_file
In-Reply-To: <30ECC67D-C4B1-4E5E-B0A8-C1EB7169307F@erau.edu>
References: <30ECC67D-C4B1-4E5E-B0A8-C1EB7169307F@erau.edu>

Perhaps you can try reproducing the issue with a petsc example and then check for differences with your code [or build process?]. Or you can check in a debugger
(with a breakpoint in PetscOptionsInsertString_Private; this is where "-options_file" is checked in the library).

Satish

---

balay at p1 /home/balay/petsc/src/ksp/ksp/tutorials (release =)
$ make ex1
mpicc -fPIC -Wall -Wwrite-strings -Wno-unknown-pragmas -Wno-lto-type-mismatch -Wno-stringop-overflow -fstack-protector -fvisibility=hidden -g3 -O0 -I/home/balay/petsc/include -I/home/balay/petsc/arch-linux-c-debug/include -Wl,-export-dynamic ex1.c -Wl,-rpath,/home/balay/petsc/arch-linux-c-debug/lib -L/home/balay/petsc/arch-linux-c-debug/lib -Wl,-rpath,/software/mpich-4.3.0/lib -L/software/mpich-4.3.0/lib -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/15 -L/usr/lib/gcc/x86_64-redhat-linux/15 -lpetsc -llapack -lblas -lm -lX11 -lmpifort -lmpi -lgfortran -lm -lgfortran -lm -lgcc_s -lquadmath -lstdc++ -lquadmath -o ex1
balay at p1 /home/balay/petsc/src/ksp/ksp/tutorials (release =)
$ ./ex1
Norm of error 2.41202e-15, Iterations 5
balay at p1 /home/balay/petsc/src/ksp/ksp/tutorials (release =)
$ cat options.lst
-ksp_monitor
balay at p1 /home/balay/petsc/src/ksp/ksp/tutorials (release =)
$ ./ex1 -options_file options.lst
  0 KSP Residual norm 7.071067811865e-01
  1 KSP Residual norm 3.162277660168e-01
  2 KSP Residual norm 1.889822365046e-01
  3 KSP Residual norm 1.290994448736e-01
  4 KSP Residual norm 9.534625892456e-02
  5 KSP Residual norm 8.082545620881e-16
Norm of error 2.41202e-15, Iterations 5
  0 KSP Residual norm 3.535533905933e-01
  1 KSP Residual norm 8.574929257125e-02
  2 KSP Residual norm 2.272727272727e-02
  3 KSP Residual norm 6.083103193616e-03
  4 KSP Residual norm 1.629797545433e-03
  5 KSP Residual norm 6.414906535963e-17
balay at p1 /home/balay/petsc/src/ksp/ksp/tutorials (release =)
$ ./ex1 --options_file options.lst
Norm of error 2.41202e-15, Iterations 5
WARNING! There are options you set that were not used!
WARNING! could be spelling mistake, etc!
There is one unused database option. It is:
Option left: name:--options_file value: options.lst source: command line

On Thu, 18 Sep 2025, Riousset, Jeremy via petsc-users wrote:
> [...]

From bsmith at petsc.dev Thu Sep 18 14:59:50 2025
From: bsmith at petsc.dev (Barry Smith)
Date: Thu, 18 Sep 2025 15:59:50 -0400
Subject: [petsc-users] Reading -options_file
In-Reply-To: <30ECC67D-C4B1-4E5E-B0A8-C1EB7169307F@erau.edu>
References: <30ECC67D-C4B1-4E5E-B0A8-C1EB7169307F@erau.edu>
Message-ID: <0527B74A-141C-4DAB-AC96-6892A5516950@petsc.dev>

The PETSc options all begin with a single dash, so it is

    -options_file input/main.in

> On Sep 18, 2025, at 3:44 PM, Riousset, Jeremy via petsc-users wrote:
> [...]
From RIOUSSEJ at erau.edu Thu Sep 18 15:42:14 2025
From: RIOUSSEJ at erau.edu (Riousset, Jeremy)
Date: Thu, 18 Sep 2025 20:42:14 +0000
Subject: [petsc-users] [EXTERNAL] Reading -options_file
In-Reply-To: <0527B74A-141C-4DAB-AC96-6892A5516950@petsc.dev>
References: <30ECC67D-C4B1-4E5E-B0A8-C1EB7169307F@erau.edu> <0527B74A-141C-4DAB-AC96-6892A5516950@petsc.dev>

That helped. I now read my input file correctly. I am getting an error with the line below:

    DM da;
    ierr = DMDACreate3d(PETSC_COMM_WORLD,DM_BOUNDARY_GHOSTED,DM_BOUNDARY_GHOSTED,DM_BOUNDARY_GHOSTED,DMDA_STENCIL_STAR,21,21,21,PETSC_DECIDE,PETSC_DECIDE,PETSC_DECIDE,19,1,PETSC_NULL,PETSC_NULL,PETSC_NULL,&da);CHKERRQ(ierr);

The error is:

[0]PETSC ERROR: ------------------------------------------------------------------------
[0]PETSC ERROR: Caught signal number 10 BUS: Bus Error, possibly illegal memory access
[0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger
[0]PETSC ERROR: or see https://petsc.org/release/faq/#valgrind and https://petsc.org/release/faq/
[0]PETSC ERROR: configure using --with-debugging=yes, recompile, link, and run
[0]PETSC ERROR: to get more information on the crash.
[0]PETSC ERROR: Run with -malloc_debug to check if memory corruption is causing the crash.
--------------------------------------------------------------------------
MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
Proc: [[44836,1],0]
Errorcode: 59

Since the code was running with an old version of PETSc, I am wondering what I did to break it.

Jeremy A. Riousset, Ph.D.
Embry-Riddle Aeronautical University

> On Sep 18, 2025, at 3:59 PM, Barry Smith wrote:
> [...]
From bsmith at petsc.dev Thu Sep 18 15:55:44 2025
From: bsmith at petsc.dev (Barry Smith)
Date: Thu, 18 Sep 2025 16:55:44 -0400
Subject: [petsc-users] [EXTERNAL] Reading -options_file
Message-ID: <5378311A-62B3-43EE-9016-99EC24EB95ED@petsc.dev>

Build a PETSC_ARCH with --with-debugging=yes as a configure option and run it, preferably in a debugger (for example with the -start_in_debugger option), to find out where the bus error is taking place.

> On Sep 18, 2025, at 4:42 PM, Riousset, Jeremy via petsc-users wrote:
> [...]
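One migration detail worth checking here (an assumption, not something confirmed in this thread): since PETSc 3.8, DMDACreate3d() no longer sets up the DM, so it must be followed by DMSetFromOptions() and DMSetUp() before the DMDA is used, and in C the PETSC_NULL arguments are now plain NULL. Skipping DMSetUp() typically crashes with exactly this kind of illegal memory access. A sketch of the modern calling sequence for the line above:

    DM da;
    PetscCall(DMDACreate3d(PETSC_COMM_WORLD, DM_BOUNDARY_GHOSTED, DM_BOUNDARY_GHOSTED, DM_BOUNDARY_GHOSTED,
                           DMDA_STENCIL_STAR, 21, 21, 21, PETSC_DECIDE, PETSC_DECIDE, PETSC_DECIDE,
                           19, 1, NULL, NULL, NULL, &da));
    PetscCall(DMSetFromOptions(da)); /* required since PETSc 3.8 */
    PetscCall(DMSetUp(da));          /* required since PETSc 3.8; the DMDA is unusable before this */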
From sawsan.shatanawi at wsu.edu Thu Sep 18 20:24:48 2025
From: sawsan.shatanawi at wsu.edu (Shatanawi, Sawsan Muhammad)
Date: Fri, 19 Sep 2025 01:24:48 +0000
Subject: [petsc-users] Seeking help in integration Petsc with NoahMP

Hello everyone,

I developed a groundwater module to be integrated within Noah-MP using the HRLDAS driver. For the numerical solver, I rely on PETSc. However, I am encountering an error that I do not fully understand.

To debug, I added print statements in the global NoahmpIOVarType.F90 module (this is the first module that needs to be read), but these statements did not appear in the output. This makes me wonder if I may have declared the PETSc variables incorrectly.

Could you please review my NoahmpIOVarType.F90 module and Makefile, and let me know if the PETSc variables are declared in the correct place? I would greatly appreciate any guidance on this issue.

Thank you in advance.

Best regards,
Sawsan Shatanawi

-------------- next part --------------
A non-text attachment was scrubbed: Makefile (application/octet-stream, 3964 bytes)
-------------- next part --------------
A non-text attachment was scrubbed: NoahmpIOVarType.F90 (application/octet-stream, 99515 bytes)

From knepley at gmail.com Fri Sep 19 05:55:48 2025
From: knepley at gmail.com (Matthew Knepley)
Date: Fri, 19 Sep 2025 06:55:48 -0400
Subject: [petsc-users] Seeking help in integration Petsc with NoahMP

On Thu, Sep 18, 2025 at 9:25 PM Shatanawi, Sawsan Muhammad via petsc-users wrote:
> To debug, I added print statements in the global NoahmpIOVarType.F90 module (this is the first module that needs to be read), but these statements did not appear in the output.

Are you calling the function you added?

Thanks,

Matt
I would > greatly appreciate any guidance on this issue. > Thank you in advance > Best regards, > Sawsan Shatanawi: > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!ZqrI3yhHcgsmY0-w0j_m1IWXQb6BPVXBEYspZHgD-EGre3tYQNnPaQbxjnMIEbdftA9k4Z4kFSBVGtZ3ajwH$ -------------- next part -------------- An HTML attachment was scrubbed... URL: From sawsan.shatanawi at wsu.edu Sat Sep 20 18:31:48 2025 From: sawsan.shatanawi at wsu.edu (Shatanawi, Sawsan Muhammad) Date: Sat, 20 Sep 2025 23:31:48 +0000 Subject: [petsc-users] Seeking help in integration Petsc with NoahMP In-Reply-To: References: Message-ID: Hello Matthew, Thank you for getting back to me. Yes, I called all the functions. The thing is that when I write a simple print statement, the run crashes. Here is the error message that I usually get: Breakpoint 1, module_noahmp_hrldas_driver::land_driver_ini (ntime_out=0, wrfits=, wrfite=, wrfjts=, wrfjte=) at module_NoahMP_hrldas_driver.f90:75 75 real(kind=kind_noahmp), allocatable :: BC_type_real(:,:,:), source_type_real(:,:,:) Missing separate debuginfos, use: dnf debuginfo-install cyrus-sasl-lib-2.1.27-21.el9.x86_64 glibc-2.34-168.el9_6.20.x86_64 keyutils-libs-1.6.3-1.el9.x86_64 krb5-libs-1.21.1-8.el9_6.x86_64 libX11-1.7.0-9.el9.x86_64 libXau-1.0.9-8.el9.x86_64 libbrotli-1.0.9-6.el9.x86_64 libcom_err-1.46.5-5.el9.x86_64 libcurl-7.76.1-29.el9_4.1.x86_64 libevent-2.1.12-8.el9_4.x86_64 libidn2-2.3.0-7.el9.x86_64 libjpeg-turbo-2.0.90-7.el9.x86_64 libnghttp2-1.43.0-5.el9_4.3.x86_64 libpsl-0.21.1-5.el9.x86_64 libselinux-3.6-1.el9.x86_64 libunistring-0.9.10-15.el9.x86_64 libxcb-1.13.1-9.el9.x86_64 libxcrypt-4.4.18-3.el9.x86_64 libxml2-2.9.13-12.el9_6.x86_64 openssl-libs-3.2.2-6.el9_5.1.x86_64 systemd-libs-252-32.el9_4.7.x86_64 xz-libs-5.2.5-8.el9_0.x86_64 zlib-1.2.11-40.el9.x86_64 (gdb) n 77 write(6,*) "sawsan was here 1" (gdb) forrtl: severe (257): formatted I/O to unit open for unformatted transfers, unit 6, file /dev/pts/0 Image PC Routine Line Source libpetsc.so.3.20. 0000155551029BA6 for__io_return Unknown Unknown hrldas.exe 00000000006906E0 for_write_seq_lis Unknown Unknown hrldas.exe 0000000000627135 Unknown Unknown Unknown hrldas.exe 000000000040926D Unknown Unknown Unknown hrldas.exe 000000000040921D Unknown Unknown Unknown libc.so.6 000015554A8295D0 Unknown Unknown Unknown libc.so.6 000015554A829680 __libc_start_main Unknown Unknown hrldas.exe 0000000000409135 Unknown Unknown Unknown [Inferior 1 (process 3244707) exited with code 01] If I use print, the error will be forrtl: severe (257): formatted I/O to unit open for unformatted transfers, unit -1, file /dev/pts/0 I also attached the NoahMP driver module, where I initialize and finalize PETSc. I only added PETSc functions in the land part (and its different subroutines) of the NoahMP model. Thank you for your help Sawsan ________________________________ From: Matthew Knepley Sent: Friday, September 19, 2025 3:55 AM To: Shatanawi, Sawsan Muhammad Cc: petsc-users at mcs.anl.gov ; petsc-maint at mcs.anl.gov Subject: Re: [petsc-users] Seeking help in integration Petsc with NoahMP [EXTERNAL EMAIL] On Thu, Sep 18, 2025 at 9:25?PM Shatanawi, Sawsan Muhammad via petsc-users > wrote: Hello everyone, I developed a groundwater module to be integrated within Noah-MP using the HRLDAS driver. For the numerical solver, I rely on PETSc. 
-------------- next part --------------
A non-text attachment was scrubbed: module_NoahMP_hrldas_driver.F (application/octet-stream, 128346 bytes)

From bsmith at petsc.dev Sat Sep 20 20:49:47 2025
From: bsmith at petsc.dev (Barry Smith)
Date: Sat, 20 Sep 2025 21:49:47 -0400
Subject: [petsc-users] Seeking help in integration Petsc with NoahMP

Instead of writing to unit 6, try opening a file and writing to that. It seems like somewhere earlier in the code something is changing the I/O units. Or better, just run in the debugger and put a breakpoint at that line.

> On Sep 19, 2025, at 6:55 AM, Matthew Knepley wrote:
> [...]
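Concretely, Barry's suggestion amounts to something like this sketch (the unit number and file name are arbitrary choices, not anything prescribed by NoahMP or PETSc):

    integer, parameter :: dbg = 99   ! any unit not claimed elsewhere in the code
    open(unit=dbg, file='petsc_debug.log', status='replace', action='write')
    write(dbg,*) 'sawsan was here 1'
    flush(dbg)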
From bsmith at petsc.dev Sun Sep 21 15:28:35 2025
From: bsmith at petsc.dev (Barry Smith)
Date: Sun, 21 Sep 2025 16:28:35 -0400
Subject: [petsc-users] Nested FieldSplit Implementation for FE using Fortran
In-Reply-To: <8e6ce1b8-feca-46a4-9e85-81793fb27688@upc.edu>
References: <8e6ce1b8-feca-46a4-9e85-81793fb27688@upc.edu>

The example you indicate, which allows nesting field splits on the command line,

-pc_fieldsplit_0_fields 0,1 -pc_fieldsplit_1_fields 2

works because the "fields" can be indicated by an offset that defines each field in the vector, where the fields are stored interlaced in the array. For example, doing

-fieldsplit_0_pc_type fieldsplit
-fieldsplit_0_pc_fieldsplit_block_size 2
-fieldsplit_0_pc_fieldsplit_0_fields 0
-fieldsplit_0_pc_fieldsplit_1_fields 1

will split the original first split into its own fieldsplit.

See src/ksp/ksp/tutorials/ex42-mgschur_nestedfs.opts for such an example. You can ignore the -stokes_mg_levels_ prefix; it appears because we are doing the nested field split on each level of multigrid.

If the fields have a more complicated structure in the array used by the vector, for example if some variables are cell-centered and some are vertex-centered, you cannot indicate a field by a single offset. Thus indicating the preconditioner purely from the options database is more difficult (or not possible?).

But if the inner fieldsplit involves fields that are stored in the simple interlaced pattern (even if the outer fields are not), you can do the same trick as above to define the inner split from the command line.

For the most general case, where the outer splits are defined by two general IS and the inner splits of one (or both) of the outer splits are also defined by two general IS, you actually have to write some code :-). Basically you tell the PC the outer split IS and then pull out of the PC the sub-PC for that split and provide it with its IS. See src/dm/impls/stag/tests/ex43.c, which does exactly that.

Feel free to contact us with questions etc.

Barry

> On Sep 15, 2025, at 10:45 AM, Lucia Barandiaran via petsc-users wrote:
> [...]
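In Fortran, the general-IS case Barry describes would look roughly like the sketch below (modeled on the pattern in ex43.c, with names adapted to Lucia's fields; the Fortran signature of PCFieldSplitGetSubKSP should be checked against the current bindings, and note that the inner index sets must be expressed in the numbering of the outer sub-problem, not the global numbering):

    ! outer split: mechanics+pressure block versus temperature
    call PCFieldSplitSetIS(pc, 'UP', is_up, ierr)
    call PCFieldSplitSetIS(pc, 'T',  is_t,  ierr)
    call PCSetUp(pc, ierr)                        ! the sub-KSPs exist only after setup
    call PCFieldSplitGetSubKSP(pc, nsplit, subksp, ierr)
    call KSPGetPC(subksp(1), pc_up, ierr)
    call PCSetType(pc_up, PCFIELDSPLIT, ierr)
    ! inner split of the UP block; is_u_inner/is_p_inner are in the sub-problem's numbering
    call PCFieldSplitSetIS(pc_up, 'U', is_u_inner, ierr)
    call PCFieldSplitSetIS(pc_up, 'P', is_p_inner, ierr)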
URL: From knepley at gmail.com Mon Sep 22 05:56:23 2025 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 22 Sep 2025 06:56:23 -0400 Subject: [petsc-users] Nested FieldSplit Implementation for FE using Fortran In-Reply-To: References: <8e6ce1b8-feca-46a4-9e85-81793fb27688@upc.edu> Message-ID: On Sun, Sep 21, 2025 at 4:29?PM Barry Smith wrote: > > The example you indicate that allows nesting field splits on the > command line > > -pc_fieldsplit_0_fields 0,1 -pc_fieldsplit_1_fields 2 > > works because the "fields" can be indicated by an offset that defines each > field in the vector, where the fields are stored interlaced in the array. > For example doing > > -fieldsplit_0_pc_type fieldsplit > -fieldsplit_0_pc_fieldsplit_block_size 2 > -fieldsplit_0_pc_fieldsplit_0_fields 0 > -fieldsplit_0_pc_fieldsplit_1_fields 1 > > will split the original first split into its own fieldsplit > > See src/ksp/ksp/tutorials/ex42-mgschur_nestedfs.opts for such an example. > You can ignore the -stokes_mg_levels_prefix, it appears because we are > doing the nesting field split on each level of multigrid. > > If the fields have a more complicated structure in the array used by the > vector, for example, if some variables are cell-centered and some are > vertex-centered, you cannot indicate a field by a single offset. Thus > indicating the preconditioner purely from the options database is more > difficult (or not possible?) > It is possible, but you need to indicate it using a DM. You could use one of our builtin classes, or you could use DMShell. It must respond to the DMGetLocalSection() (which indicates the field split) and DMCreateSubDM(), which would make another DMShell for the subset of fields. Thanks, Matt > But if the inner fieldsplit involves fields that are stored in the simple > interlaced pattern (even if the outer fields are not) you can do the same > trick as above to define the inner split from the command line. > > For the most general case where the outer splits are defined by two > general IS and the inner splits of one (or both) of the outer splits are > also defined by two general IS you actually have to write some code :-). > Basically you tell the PC the outer split IS and then pull out of the PC > the sub-PC for that split and provide it with its IS. > See src/dm/impls/stag/tests/ex43.c which does exactly that. > > Feel free to contact us with questions etc. > > Barry > > > > > > > On Sep 15, 2025, at 10:45?AM, Lucia Barandiaran via petsc-users < > petsc-users at mcs.anl.gov> wrote: > > Dear PETSc community, > > Our research group is employing the PETSc libraries within a Fortran-based > finite element (FE) code to solve large-scale, coupled multi-physics > problems. The systems of interest typically involve more than two primary > fields, specifically displacements (*U*), water pressure (*W*), gas > pressure (*G*), and temperature (*T*). > > We have recently integrated the FieldSplit preconditioner into our solver > pipeline and have achieved successful results on a 3D gas injection > reservoir simulation model comprising approximately 130,000 nodes. The > indices for each field variable were defined using ISCreateGeneral, and > the splits were established via PCFieldSplitSetIS (one split per > variable). 
The following command-line options were utilized effectively for > this case: > > -ksp_type gmres -ksp_gmres_modifiedgramschmidt \ > -pc_type fieldsplit -pc_fieldsplit_type schur \ > -pc_fieldsplit_schur_fact_type lower \ > -pc_fieldsplit_schur_precondition selfp \ > -fieldsplit_U_ksp_type preonly \ > -fieldsplit_U_pc_type hypre \ > -fieldsplit_U_pc_hypre_type boomeramg \ > -fieldsplit_U_pc_hypre_boomeramg_coarsen_type PMIS \ > -fieldsplit_U_pc_hypre_boomeramg_strong_threshold 0.6 \ > -fieldsplit_U_pc_hypre_boomeramg_max_levels 25 > -fieldsplit_G_ksp_type preonly \ > -fieldsplit_G_pc_type hypre \ > -fieldsplit_G_pc_hypre_type boomeramg \ > -fieldsplit_G_pc_hypre_boomeramg_coarsen_type PMIS \ > -fieldsplit_G_pc_hypre_boomeramg_strong_threshold 0.6 \ > -fieldsplit_G_pc_hypre_boomeramg_max_levels 25 > > Subsequently, we have also configured a case with three primary fields by > combining *W* and *G* into a single monolithic *P* (pressure) variable, > which was solved using a similar approach. > > We are now interested in advancing our preconditioning strategy by > implementing a *nested* (or recursive) Schur complement decomposition. > Our objective is to define a hierarchical structure where, for instance, an > outer Schur complement is first constructed, and then the primary split > itself is solved using an inner Schur complement preconditioner. > > We have encountered an example of this nested configuration in > presentation slides for the PETSc example ex31 (e.g., from [1]), which > outlines a command-line structure similar to: > > -ksp_type fgmres -pc_type fieldsplit -pc_fieldsplit_type schur > -pc_fieldsplit_0_fields 0,1 -pc_fieldsplit_1_fields 2 > -pc_fieldsplit_schur_factorization_type upper > -fieldsplit_0_ksp_type fgmres -fieldsplit_0_pc_type fieldsplit > -fieldsplit_0_pc_fieldsplit_type schur > -fieldsplit_0_pc_fieldsplit_schur_factorization_type full > -fieldsplit_0_fieldsplit_velocity_ksp_type preonly > -fieldsplit_0_fieldsplit_velocity_pc_type lu > -fieldsplit_0_fieldsplit_pressure_ksp_rtol 1e-10 > -fieldsplit_0_fieldsplit_pressure_pc_type jacobi > -fieldsplit_temperature_ksp_type gmres > -fieldsplit_temperature_pc_type lsc > > We are currently seeking guidance on how to implement this nested > FieldSplit functionality within our Fortran code. We were wondering if > the source code for this specific ex31 example is publicly available, or > if you could direct us to any other Fortran examples that demonstrate the > implementation of a recursive Schur complement preconditioner. > > Any advice, references, or code examples you could provide would be > immensely valuable to our research. > > Thank you very much for your time and assistance. > > Sincerely, > > Luc?a Barandiar?n > > Scientific software developer - Dracsys > > Collaborator at MECMAT group - Universitat Polit?cnica de Catalunya (UPC) > > Reference: > [1] Knepley, M. G. (2016). *Advanced PETSc Preconditioning*. > Presentation. Retrieved from > https://urldefense.us/v3/__https://cse.buffalo.edu/*knepley/presentations/PresMIT2016.pdf__;fg!!G_uCfscf7eWS!YW_Nc3UPRl4kkQe8lx6GShdWvfyeqFrPsWGu01zABiDWPT4ms6_KLv1b3MGZ_EHnNQ2qyvPGd8Ejsb2XkeQr$ > > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. 
-- Norbert Wiener
https://www.cse.buffalo.edu/~knepley/
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From bsmith at petsc.dev Thu Sep 25 18:28:31 2025
From: bsmith at petsc.dev (Barry Smith)
Date: Thu, 25 Sep 2025 19:28:31 -0400
Subject: [petsc-users] Custom matrix coloring for finite difference Jacobian
In-Reply-To: 
References: 
Message-ID: <270DA296-A604-42D9-8D28-B1E15CC75110@petsc.dev>

   Is this something you are still interested in having?

> On Sep 11, 2025, at 7:25 PM, Eirik Jaccheri Høydalsvik via petsc-users wrote:
> 
> Hi,
> 
> I have made a two-phase flow code which computes motion of two phases in one dimension, where the phases are allowed to intermix. This code relies on a finite difference Jacobian computed using the standard coloring I get from the DMDA object:
> 
> da = PETSc.DMDA().create(
>     dim=(N_vertical,),
>     dof=3,
>     stencil_width=1,
> )
> 
> I now want to add a variable for the interphase height L_z in addition to a velocity u_v, giving the velocity of the vapor flowing into the interface. The interface will move throughout the grid, meaning that these two variables will not be coupled to a fixed set of grid cells, but will be coupled to different sets of three grid cells throughout the simulation.
> 
> Questions:
> 
> 1. Is it possible to create a custom coloring to efficiently compute the finite difference Jacobian including the interphase height and vapor velocity?
> 
> 2. How do I revert to computing the full finite difference Jacobian with the purpose of testing if the interphase model works?
> 
> Best regards,
> Eirik Jaccheri Høydalsvik
> Sintef ER and NTNU EPT
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From 12332508 at mail.sustech.edu.cn Fri Sep 26 01:19:54 2025
From: 12332508 at mail.sustech.edu.cn (岳新海)
Date: Fri, 26 Sep 2025 14:19:54 +0800
Subject: [petsc-users] Question on using VECCUDA with VECGHOST
Message-ID: 

Dear PETSc Team,

I am currently working on a cluster where I would like to use PETSc with CUDA support. In particular, I am interested in whether it is possible to combine VECCUDA with VECGHOST.

I have searched through the documentation but have not found explicit examples. May I ask:

- Has VECCUDA been used successfully together with VECGHOST?
- If so, are there recommended approaches, examples, or best practices to follow?

Any guidance or references would be greatly appreciated.

Best regards,
Xinhai

岳新海
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From knepley at gmail.com Fri Sep 26 07:51:42 2025
From: knepley at gmail.com (Matthew Knepley)
Date: Fri, 26 Sep 2025 08:51:42 -0400
Subject: [petsc-users] Question on using VECCUDA with VECGHOST
In-Reply-To: 
References: 
Message-ID: 

On Fri, Sep 26, 2025 at 2:54 AM 岳新海 <12332508 at mail.sustech.edu.cn> wrote:

> Dear PETSc Team,
> 
> I am currently working on a cluster where I would like to use PETSc with CUDA support. In particular, I am interested in whether it is possible to combine VECCUDA with VECGHOST.
> 
> I have searched through the documentation but have not found explicit examples. May I ask:
> 
> - Has VECCUDA been used successfully together with VECGHOST?

VECGHOST just uses two regular vectors (a local form and a ghosted form), and VecScatter to map between them. Those vectors can be of any type.
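For example, a rough petsc4py sketch of the standard VECGHOST workflow (toy sizes and ghost indices, best run on two or more ranks; whether a CUDA vector type carries through all of these calls is exactly the question being asked here):

from petsc4py import PETSc

comm = PETSc.COMM_WORLD
rank, size = comm.getRank(), comm.getSize()

nloc = 4                               # owned entries per rank (made up)
N = nloc * size
ghosts = [nloc * ((rank + 1) % size)]  # ghost the first entry of the next rank

v = PETSc.Vec().createGhost(ghosts, size=(nloc, N), comm=comm)
v.set(float(rank))                     # fill the owned part

# scatter owned values into the ghost slots on the neighbours
v.ghostUpdate(addv=PETSc.InsertMode.INSERT, mode=PETSc.ScatterMode.FORWARD)

with v.localForm() as lf:              # local form = owned + ghost entries
    print(rank, lf.getArray())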
> - If so, are there recommended approaches, examples, or best practices to follow?

I don't think you have to do anything special. Have you had a problem?

  Thanks,

     Matt

> Any guidance or references would be greatly appreciated.
> 
> Best regards,
> Xinhai
> 
> 岳新海

-- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener
https://www.cse.buffalo.edu/~knepley/
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From pierre at joliv.et Fri Sep 26 08:13:54 2025
From: pierre at joliv.et (Pierre Jolivet)
Date: Fri, 26 Sep 2025 15:13:54 +0200
Subject: [petsc-users] Question on using VECCUDA with VECGHOST
In-Reply-To: 
References: 
Message-ID: <3BAB7ED5-2A9C-4D17-8FD9-B109DAF22844@joliv.et>

An HTML attachment was scrubbed...
URL: 

From Elena.Moral.Sanchez at ipp.mpg.de Fri Sep 26 11:49:06 2025
From: Elena.Moral.Sanchez at ipp.mpg.de (Moral Sanchez, Elena)
Date: Fri, 26 Sep 2025 16:49:06 +0000
Subject: [petsc-users] setting correct tolerances for MG smoother CG at the finest level
Message-ID: 

Hi,
I am using multigrid (multiplicative) as a preconditioner with a V-cycle of two levels. At each level, I am setting CG as the smoother with certain tolerance.

What I observe is that in the finest level the CG continues iterating after the residual norm reaches the tolerance (atol) and it only stops when reaching the maximum number of iterations at that level. At the coarsest level this does not occur and the CG stops when the tolerance is reached.

I double-checked that the smoother at the finest level has the right tolerance. And I am using a Monitor function to track the residual.

Do you know how to make the smoother at the finest level stop when reaching the tolerance?

Cheers,
Elena.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From bsmith at petsc.dev Fri Sep 26 12:05:02 2025
From: bsmith at petsc.dev (Barry Smith)
Date: Fri, 26 Sep 2025 13:05:02 -0400
Subject: Re: [petsc-users] setting correct tolerances for MG smoother CG at the finest level
In-Reply-To: 
References: 
Message-ID: 

   Send the output using -ksp_view

   Normally one uses a fixed number of iterations of smoothing on level with multigrid rather than a tolerance, but yes PETSc should respect such a tolerance.

   Barry
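If it helps to rule out a configuration problem, the level smoothers can also be grabbed and set up directly; a rough petsc4py sketch (assuming petsc4py's PCMG wrappers and a two-level PCMG that is already set up; names are illustrative, not a tested recipe):

pc = ksp.getPC()                        # PC of type PC.Type.MG
fine = pc.getMGSmoother(1)              # level 1 = the finest of the two levels
fine.setType(PETSc.KSP.Type.CG)
fine.setTolerances(rtol=0.1, atol=0.1, max_it=15)
fine.setNormType(PETSc.KSP.NormType.UNPRECONDITIONED)  # a norm must be computed for the test to fire

coarse = pc.getMGCoarseSolve()          # level 0
coarse.setTolerances(rtol=0.1, atol=0.1, max_it=15)

# after a solve, ask each level why it stopped; for reference:
# 3 = KSP_CONVERGED_ATOL, 4 = KSP_CONVERGED_ITS, -3 = KSP_DIVERGED_ITS
print(fine.getConvergedReason(), coarse.getConvergedReason())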
> On Sep 26, 2025, at 12:49 PM, Moral Sanchez, Elena wrote: > > Hi, > I am using multigrid (multiplicative) as a preconditioner with a V-cycle of two levels. At each level, I am setting CG as the smoother with certain tolerance. > > What I observe is that in the finest level the CG continues iterating after the residual norm reaches the tolerance (atol) and it only stops when reaching the maximum number of iterations at that level. At the coarsest level this does not occur and the CG stops when the tolerance is reached. > > I double-checked that the smoother at the finest level has the right tolerance. And I am using a Monitor function to track the residual. > > Do you know how to make the smoother at the finest level stop when reaching the tolerance? > > Cheers, > Elena.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From bsmith at petsc.dev Fri Sep 26 14:20:36 2025
From: bsmith at petsc.dev (Barry Smith)
Date: Fri, 26 Sep 2025 15:20:36 -0400
Subject: [petsc-users] setting correct tolerances for MG smoother CG at the finest level
In-Reply-To: 
References: 
Message-ID: 

   Looks reasonable. Send the output running with

-ksp_monitor -mg_levels_ksp_monitor -ksp_converged_reason -mg_levels_ksp_converged_reason

> On Sep 26, 2025, at 1:19 PM, Moral Sanchez, Elena wrote: > Dear Barry, > This is -ksp_view for the smoother at the finest level: > KSP Object: (mg_levels_1_) 1 MPI process > type: cg > variant HERMITIAN > maximum iterations=10, nonzero initial guess > tolerances: relative=0.1, absolute=0.1, divergence=1e+30 > left preconditioning > using UNPRECONDITIONED norm type for convergence test > PC Object: (mg_levels_1_) 1 MPI process > type: none > linear system matrix = precond matrix: > Mat Object: 1 MPI process > type: python > rows=524, cols=524 > Python: Solver_petsc.LeastSquaresOperator > And at the coarsest level: > KSP Object: (mg_coarse_) 1 MPI process > type: cg > variant HERMITIAN > maximum iterations=100, initial guess is zero > tolerances: relative=0.1, absolute=0.1, divergence=1e+30 > left preconditioning > using UNPRECONDITIONED norm type for convergence test > PC Object: (mg_coarse_) 1 MPI process > type: none > linear system matrix = precond matrix: > Mat Object: 1 MPI process > type: python > rows=344, cols=344 > Python: Solver_petsc.LeastSquaresOperator > And for the whole solver: > KSP Object: 1 MPI process > type: cg > variant HERMITIAN > maximum iterations=100, nonzero initial guess > tolerances: relative=1e-08, absolute=1e-09, divergence=10000. > left preconditioning > using UNPRECONDITIONED norm type for convergence test > PC Object: 1 MPI process > type: mg > type is MULTIPLICATIVE, levels=2 cycles=v > Cycles per PCApply=1 > Not using Galerkin computed coarse grid matrices > Coarse grid solver -- level 0 ------------------------------- > KSP Object: (mg_coarse_) 1 MPI process > type: cg > variant HERMITIAN > maximum iterations=100, initial guess is zero > tolerances: relative=0.1, absolute=0.1, divergence=1e+30 > left preconditioning > using UNPRECONDITIONED norm type for convergence test > PC Object: (mg_coarse_) 1 MPI process > type: none > linear system matrix = precond matrix: > Mat Object: 1 MPI process > type: python > rows=344, cols=344 > Python: Solver_petsc.LeastSquaresOperator > Down solver (pre-smoother) on level 1 ------------------------------- > KSP Object: (mg_levels_1_) 1 MPI process > type: cg > variant HERMITIAN > maximum iterations=10, nonzero initial guess > tolerances: relative=0.1, absolute=0.1, divergence=1e+30 > left preconditioning > using UNPRECONDITIONED norm type for convergence test > PC Object: (mg_levels_1_) 1 MPI process > type: none > linear system matrix = precond matrix: > Mat Object: 1 MPI process > type: python > rows=524, cols=524 > Python: Solver_petsc.LeastSquaresOperator > Up solver (post-smoother) same as down solver (pre-smoother) > linear system matrix = precond matrix: > Mat Object: 1 MPI process > type: python > rows=524, cols=524 > Python: Solver_petsc.LeastSquaresOperator > Best, > Elena > > From: Barry Smith > Sent: 26 September 2025 19:05:02 > To: Moral Sanchez, Elena > Cc: petsc-users at mcs.anl.gov > Subject: Re: [petsc-users] setting correct tolerances for MG smoother CG at the finest level > > Send the output using -ksp_view > > Normally one uses a fixed number of iterations of smoothing on level with multigrid rather than a tolerance, but yes PETSc should respect such a tolerance.
> > Barry
> 
> >> On Sep 26, 2025, at 12:49 PM, Moral Sanchez, Elena wrote: >> >> Hi, >> I am using multigrid (multiplicative) as a preconditioner with a V-cycle of two levels. At each level, I am setting CG as the smoother with certain tolerance. >> >> What I observe is that in the finest level the CG continues iterating after the residual norm reaches the tolerance (atol) and it only stops when reaching the maximum number of iterations at that level. At the coarsest level this does not occur and the CG stops when the tolerance is reached. >> >> I double-checked that the smoother at the finest level has the right tolerance. And I am using a Monitor function to track the residual. >> >> Do you know how to make the smoother at the finest level stop when reaching the tolerance? >> >> Cheers, >> Elena.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From mfadams at lbl.gov Sun Sep 28 13:13:54 2025
From: mfadams at lbl.gov (Mark Adams)
Date: Sun, 28 Sep 2025 14:13:54 -0400
Subject: [petsc-users] setting correct tolerances for MG smoother CG at the finest level
In-Reply-To: 
References: 
Message-ID: 

Not sure why your "whole" solver does not have a coarse grid but this is wrong:

KSP Object: (mg_coarse_) 1 MPI process type: cg variant HERMITIAN maximum iterations=100, initial guess is zero tolerances: relative=0.1, absolute=0.1, divergence=1e+30

The coarse grid has to be accurate. The defaults are a good place to start: max_it=10000, rtol=1e-5, atol=1e-30 (ish)
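In petsc4py terms that is roughly the following (assuming petsc4py's PCMG wrappers and an assembled coarse operator; with a python shell matrix like yours a direct factorization is not available, so you would need an assembled coarse matrix or a much tighter Krylov solve):

coarse = pc.getMGCoarseSolve()            # PCMG's level-0 KSP
coarse.setType(PETSc.KSP.Type.PREONLY)    # exactly one application of the PC
coarse.getPC().setType(PETSc.PC.Type.LU)  # direct solve on the coarse grid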
On Fri, Sep 26, 2025 at 3:21 PM Barry Smith wrote:

> Looks reasonable. Send the output running with > > -ksp_monitor -mg_levels_ksp_monitor -ksp_converged_reason > -mg_levels_ksp_converged_reason > > On Sep 26, 2025, at 1:19 PM, Moral Sanchez, Elena < > Elena.Moral.Sanchez at ipp.mpg.de> wrote: > > Dear Barry, > > This is -ksp_view for the smoother at the finest level: > > KSP Object: (mg_levels_1_) 1 MPI process > type: cg > variant HERMITIAN > maximum iterations=10, nonzero initial guess > tolerances: relative=0.1, absolute=0.1, divergence=1e+30 > left preconditioning > using UNPRECONDITIONED norm type for convergence test > PC Object: (mg_levels_1_) 1 MPI process > type: none > linear system matrix = precond matrix: > Mat Object: 1 MPI process > type: python > rows=524, cols=524 > Python: Solver_petsc.LeastSquaresOperator > > And at the coarsest level: > > KSP Object: (mg_coarse_) 1 MPI process > type: cg > variant HERMITIAN > maximum iterations=100, initial guess is zero > tolerances: relative=0.1, absolute=0.1, divergence=1e+30 > left preconditioning > using UNPRECONDITIONED norm type for convergence test > PC Object: (mg_coarse_) 1 MPI process > type: none > linear system matrix = precond matrix: > Mat Object: 1 MPI process > type: python > rows=344, cols=344 > Python: Solver_petsc.LeastSquaresOperator > > And for the whole solver: > > KSP Object: 1 MPI process > type: cg > variant HERMITIAN > maximum iterations=100, nonzero initial guess > tolerances: relative=1e-08, absolute=1e-09, divergence=10000. > left preconditioning > using UNPRECONDITIONED norm type for convergence test > PC Object: 1 MPI process > type: mg > type is MULTIPLICATIVE, levels=2 cycles=v > Cycles per PCApply=1 > Not using Galerkin computed coarse grid matrices > Coarse grid solver -- level 0 ------------------------------- > KSP Object: (mg_coarse_) 1 MPI process > type: cg > variant HERMITIAN > maximum iterations=100, initial guess is zero > tolerances: relative=0.1, absolute=0.1, divergence=1e+30 > left preconditioning > using UNPRECONDITIONED norm type for convergence test > PC Object: (mg_coarse_) 1 MPI process > type: none > linear system matrix = precond matrix: > Mat Object: 1 MPI process > type: python > rows=344, cols=344 > Python: Solver_petsc.LeastSquaresOperator > Down solver (pre-smoother) on level 1 ------------------------------- > KSP Object: (mg_levels_1_) 1 MPI process > type: cg > variant HERMITIAN > maximum iterations=10, nonzero initial guess > tolerances: relative=0.1, absolute=0.1, divergence=1e+30 > left preconditioning > using UNPRECONDITIONED norm type for convergence test > PC Object: (mg_levels_1_) 1 MPI process > type: none > linear system matrix = precond matrix: > Mat Object: 1 MPI process > type: python > rows=524, cols=524 > Python: Solver_petsc.LeastSquaresOperator > Up solver (post-smoother) same as down solver (pre-smoother) > linear system matrix = precond matrix: > Mat Object: 1 MPI process > type: python > rows=524, cols=524 > Python: Solver_petsc.LeastSquaresOperator > > Best, > Elena > > ------------------------------ > *From:* Barry Smith > *Sent:* 26 September 2025 19:05:02 > *To:* Moral Sanchez, Elena > *Cc:* petsc-users at mcs.anl.gov > *Subject:* Re: [petsc-users] setting correct tolerances for MG smoother > CG at the finest level > > > Send the output using -ksp_view > > Normally one uses a fixed number of iterations of smoothing on level with > multigrid rather than a tolerance, but yes PETSc should respect such a > tolerance.
> > Barry
> 
> > On Sep 26, 2025, at 12:49 PM, Moral Sanchez, Elena < > Elena.Moral.Sanchez at ipp.mpg.de> wrote: > > Hi, > I am using multigrid (multiplicative) as a preconditioner with a V-cycle > of two levels. At each level, I am setting CG as the smoother with certain > tolerance. > > What I observe is that in the finest level the CG continues iterating > after the residual norm reaches the tolerance (atol) and it only stops when > reaching the maximum number of iterations at that level. At the coarsest > level this does not occur and the CG stops when the tolerance is reached. > > I double-checked that the smoother at the finest level has the right > tolerance. And I am using a Monitor function to track the residual. > > Do you know how to make the smoother at the finest level stop when > reaching the tolerance? > > Cheers, > Elena.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From Elena.Moral.Sanchez at ipp.mpg.de Mon Sep 29 07:07:54 2025
From: Elena.Moral.Sanchez at ipp.mpg.de (Moral Sanchez, Elena)
Date: Mon, 29 Sep 2025 12:07:54 +0000
Subject: [petsc-users] setting correct tolerances for MG smoother CG at the finest level
In-Reply-To: 
References: 
Message-ID: <421fd9ac0ed0437f88e921d063a6f45f@ipp.mpg.de>

Hi, I doubled the system size and changed the tolerances just to show a better example of the problem. This is the output of the callbacks in the first iteration:

CG Iter 0/1 | res = 2.25e+00/1.00e-09 | 0.1 s
MG lvl 0 (s=884): CG Iter 0/15 | res = 2.25e+00/1.00e-01 | 0.3 s
MG lvl 0 (s=884): CG Iter 1/15 | res = 1.43e+00/1.00e-01 | 0.2 s
MG lvl 0 (s=884): CG Iter 2/15 | res = 1.17e+00/1.00e-01 | 0.2 s
MG lvl 0 (s=884): CG Iter 3/15 | res = 1.32e+00/1.00e-01 | 0.1 s
MG lvl 0 (s=884): CG Iter 4/15 | res = 5.01e-01/1.00e-01 | 0.2 s
MG lvl 0 (s=884): CG Iter 5/15 | res = 3.57e-01/1.00e-01 | 0.2 s
MG lvl 0 (s=884): CG Iter 6/15 | res = 2.49e-01/1.00e-01 | 0.2 s
MG lvl 0 (s=884): CG Iter 7/15 | res = 3.04e-01/1.00e-01 | 0.2 s
MG lvl 0 (s=884): CG Iter 8/15 | res = 2.78e-01/1.00e-01 | 0.2 s
MG lvl 0 (s=884): CG Iter 9/15 | res = 1.68e-01/1.00e-01 | 0.2 s
MG lvl 0 (s=884): CG Iter 10/15 | res = 1.21e-01/1.00e-01 | 0.1 s
MG lvl 0 (s=884): CG Iter 11/15 | res = 9.45e-02/1.00e-01 | 0.2 s
MG lvl 0 (s=884): CG Iter 12/15 | res = 8.31e-02/1.00e-01 | 0.2 s
MG lvl 0 (s=884): CG Iter 13/15 | res = 5.47e-02/1.00e-01 | 0.2 s
MG lvl 0 (s=884): CG Iter 14/15 | res = 4.36e-02/1.00e-01 | 0.2 s
MG lvl 0 (s=884): CG Iter 15/15 | res = 5.08e-02/1.00e-01 | 0.1 s
ConvergedReason MG lvl 0: 4
MG lvl -1 (s=524): CG Iter 0/15 | res = 8.15e-02/1.00e-01 | 3.0 s
ConvergedReason MG lvl -1: 3
MG lvl 0 (s=884): CG Iter 0/15 | res = 5.08e-02/1.00e-01 | 0.3 s
MG lvl 0 (s=884): CG Iter 1/15 | res = 2.93e-02/1.00e-01 | 0.2 s
MG lvl 0 (s=884): CG Iter 2/15 | res = 3.26e-02/1.00e-01 | 0.2 s
MG lvl 0 (s=884): CG Iter 3/15 | res = 4.14e-02/1.00e-01 | 0.2 s
MG lvl 0 (s=884): CG Iter 4/15 | res = 4.82e-02/1.00e-01 | 0.2 s
MG lvl 0 (s=884): CG Iter 5/15 | res = 3.20e-02/1.00e-01 | 0.2 s
MG lvl 0 (s=884): CG Iter 6/15 | res = 3.46e-02/1.00e-01 | 0.3 s
MG lvl 0 (s=884): CG Iter 7/15 | res = 3.41e-02/1.00e-01 | 0.2 s
MG lvl 0 (s=884): CG Iter 8/15 | res = 4.69e-02/1.00e-01 | 0.2 s
MG lvl 0 (s=884): CG Iter 9/15 | res = 3.37e-02/1.00e-01 | 0.2 s
MG lvl 0 (s=884): CG Iter 10/15 | res = 4.07e-02/1.00e-01 | 0.2 s
MG lvl 0 (s=884): CG Iter 11/15 | res = 2.66e-02/1.00e-01 | 0.1 s
MG lvl 0 (s=884): CG Iter 12/15 | res = 2.83e-02/1.00e-01 | 0.1 s
MG lvl 0 (s=884): CG Iter 13/15 | res = 2.98e-02/1.00e-01 | 0.2 s
MG lvl 0 (s=884): CG Iter 14/15 | res = 3.53e-02/1.00e-01 | 0.1 s
MG lvl 0 (s=884): CG Iter 15/15 | res = 2.33e-02/1.00e-01 | 0.2 s
ConvergedReason MG lvl 0: 4
CG Iter 1/1 | res = 2.42e-02/1.00e-09 | 5.6 s
MG lvl 0 (s=884): CG Iter 0/15 | res = 2.42e-02/1.00e-01 | 0.2 s
MG lvl 0 (s=884): CG Iter 1/15 | res = 1.76e-02/1.00e-01 | 0.2 s
MG lvl 0 (s=884): CG Iter 2/15 | res = 1.40e-02/1.00e-01 | 0.2 s
MG lvl 0 (s=884): CG Iter 3/15 | res = 1.42e-02/1.00e-01 | 0.1 s
MG lvl 0 (s=884): CG Iter 4/15 | res = 1.62e-02/1.00e-01 | 0.2 s
MG lvl 0 (s=884): CG Iter 5/15 | res = 1.35e-02/1.00e-01 | 0.1 s
MG lvl 0 (s=884): CG Iter 6/15 | res = 1.39e-02/1.00e-01 | 0.1 s
MG lvl 0 (s=884): CG Iter 7/15 | res = 1.51e-02/1.00e-01 | 0.2 s
MG lvl 0 (s=884): CG Iter 8/15 | res = 1.28e-02/1.00e-01 | 0.2 s
MG lvl 0 (s=884): CG Iter 9/15 | res = 1.24e-02/1.00e-01 | 0.2 s
MG lvl 0 (s=884): CG Iter 10/15 | res = 9.59e-03/1.00e-01 | 0.1 s
MG lvl 0 (s=884): CG Iter 11/15 | res = 9.02e-03/1.00e-01 | 0.1 s
MG lvl 0 (s=884): CG Iter 12/15 | res = 1.19e-02/1.00e-01 | 0.1 s
MG lvl 0 (s=884): CG Iter 13/15 | res = 1.08e-02/1.00e-01 | 0.1 s
MG lvl 0 (s=884): CG Iter 14/15 | res = 8.19e-03/1.00e-01 | 0.1 s
MG lvl 0 (s=884): CG Iter 15/15 | res = 7.61e-03/1.00e-01 | 0.1 s
ConvergedReason MG lvl 0: 4
MG lvl -1 (s=524): CG Iter 0/15 | res = 1.38e-02/1.00e-01 | 5.2 s
ConvergedReason MG lvl -1: 3
MG lvl 0 (s=884): CG Iter 0/15 | res = 7.61e-03/1.00e-01 | 0.2 s
MG lvl 0 (s=884): CG Iter 1/15 | res = 5.62e-03/1.00e-01 | 0.1 s
MG lvl 0 (s=884): CG Iter 2/15 | res = 6.64e-03/1.00e-01 | 0.1 s
MG lvl 0 (s=884): CG Iter 3/15 | res = 8.71e-03/1.00e-01 | 0.1 s
MG lvl 0 (s=884): CG Iter 4/15 | res = 6.40e-03/1.00e-01 | 0.1 s
MG lvl 0 (s=884): CG Iter 5/15 | res = 7.23e-03/1.00e-01 | 0.1 s
MG lvl 0 (s=884): CG Iter 6/15 | res = 6.20e-03/1.00e-01 | 0.1 s
MG lvl 0 (s=884): CG Iter 7/15 | res = 7.04e-03/1.00e-01 | 0.1 s
MG lvl 0 (s=884): CG Iter 8/15 | res = 7.19e-03/1.00e-01 | 0.1 s
MG lvl 0 (s=884): CG Iter 9/15 | res = 6.35e-03/1.00e-01 | 0.1 s
MG lvl 0 (s=884): CG Iter 10/15 | res = 7.31e-03/1.00e-01 | 0.1 s
MG lvl 0 (s=884): CG Iter 11/15 | res = 6.64e-03/1.00e-01 | 0.1 s
MG lvl 0 (s=884): CG Iter 12/15 | res = 7.24e-03/1.00e-01 | 0.1 s
MG lvl 0 (s=884): CG Iter 13/15 | res = 4.97e-03/1.00e-01 | 0.1 s
MG lvl 0 (s=884): CG Iter 14/15 | res = 5.36e-03/1.00e-01 | 0.1 s
MG lvl 0 (s=884): CG Iter 15/15 | res = 5.84e-03/1.00e-01 | 0.1 s
ConvergedReason MG lvl 0: 4
CG ConvergedReason: -3

For completeness, I add here the -ksp_view of the whole solver:

KSP Object: 1 MPI process type: cg variant HERMITIAN maximum iterations=1, nonzero initial guess tolerances: relative=1e-08, absolute=1e-09, divergence=10000.
left preconditioning using UNPRECONDITIONED norm type for convergence test PC Object: 1 MPI process type: mg type is MULTIPLICATIVE, levels=2 cycles=v Cycles per PCApply=1 Not using Galerkin computed coarse grid matrices Coarse grid solver -- level 0 ------------------------------- KSP Object: (mg_coarse_) 1 MPI process type: cg variant HERMITIAN maximum iterations=15, nonzero initial guess tolerances: relative=0.1, absolute=0.1, divergence=1e+30 left preconditioning using UNPRECONDITIONED norm type for convergence test PC Object: (mg_coarse_) 1 MPI process type: none linear system matrix = precond matrix: Mat Object: 1 MPI process type: python rows=524, cols=524 Python: Solver_petsc.LeastSquaresOperator Down solver (pre-smoother) on level 1 ------------------------------- KSP Object: (mg_levels_1_) 1 MPI process type: cg variant HERMITIAN maximum iterations=15, nonzero initial guess tolerances: relative=0.1, absolute=0.1, divergence=1e+30 left preconditioning using UNPRECONDITIONED norm type for convergence test PC Object: (mg_levels_1_) 1 MPI process type: none linear system matrix = precond matrix: Mat Object: 1 MPI process type: python rows=884, cols=884 Python: Solver_petsc.LeastSquaresOperator Up solver (post-smoother) same as down solver (pre-smoother) linear system matrix = precond matrix: Mat Object: 1 MPI process type: python rows=884, cols=884 Python: Solver_petsc.LeastSquaresOperator Regarding Mark's Email: What do you mean with "the whole solver doesn't have a coarse grid"? I am using my own Restriction and Interpolation operators. Thanks for the help, Elena ________________________________ From: Mark Adams Sent: 28 September 2025 20:13:54 To: Barry Smith Cc: Moral Sanchez, Elena; petsc-users Subject: Re: [petsc-users] setting correct tolerances for MG smoother CG at the finest level Not sure why your "whole"solver does not have a coarse grid but this is wrong: KSP Object: (mg_coarse_) 1 MPI process type: cg variant HERMITIAN maximum iterations=100, initial guess is zero tolerances: relative=0.1, absolute=0.1, divergence=1e+30 The coarse grid has to be accurate. The defaults are a good place to start: max_it=10.000, rtol=1e-5, atol=1e-30 (ish) On Fri, Sep 26, 2025 at 3:21?PM Barry Smith > wrote: Looks reasonable. 
Send the output running with -ksp_monitor -mg_levels_ksp_monitor -ksp_converged_reason -mg_levels_ksp_converged_reason On Sep 26, 2025, at 1:19?PM, Moral Sanchez, Elena > wrote: Dear Barry, This is -ksp_view for the smoother at the finest level: KSP Object: (mg_levels_1_) 1 MPI process type: cg variant HERMITIAN maximum iterations=10, nonzero initial guess tolerances: relative=0.1, absolute=0.1, divergence=1e+30 left preconditioning using UNPRECONDITIONED norm type for convergence test PC Object: (mg_levels_1_) 1 MPI process type: none linear system matrix = precond matrix: Mat Object: 1 MPI process type: python rows=524, cols=524 Python: Solver_petsc.LeastSquaresOperator And at the coarsest level: KSP Object: (mg_coarse_) 1 MPI process type: cg variant HERMITIAN maximum iterations=100, initial guess is zero tolerances: relative=0.1, absolute=0.1, divergence=1e+30 left preconditioning using UNPRECONDITIONED norm type for convergence test PC Object: (mg_coarse_) 1 MPI process type: none linear system matrix = precond matrix: Mat Object: 1 MPI process type: python rows=344, cols=344 Python: Solver_petsc.LeastSquaresOperator And for the whole solver: KSP Object: 1 MPI process type: cg variant HERMITIAN maximum iterations=100, nonzero initial guess tolerances: relative=1e-08, absolute=1e-09, divergence=10000. left preconditioning using UNPRECONDITIONED norm type for convergence test PC Object: 1 MPI process type: mg type is MULTIPLICATIVE, levels=2 cycles=v Cycles per PCApply=1 Not using Galerkin computed coarse grid matrices Coarse grid solver -- level 0 ------------------------------- KSP Object: (mg_coarse_) 1 MPI process type: cg variant HERMITIAN maximum iterations=100, initial guess is zero tolerances: relative=0.1, absolute=0.1, divergence=1e+30 left preconditioning using UNPRECONDITIONED norm type for convergence test PC Object: (mg_coarse_) 1 MPI process type: none linear system matrix = precond matrix: Mat Object: 1 MPI process type: python rows=344, cols=344 Python: Solver_petsc.LeastSquaresOperator Down solver (pre-smoother) on level 1 ------------------------------- KSP Object: (mg_levels_1_) 1 MPI process type: cg variant HERMITIAN maximum iterations=10, nonzero initial guess tolerances: relative=0.1, absolute=0.1, divergence=1e+30 left preconditioning using UNPRECONDITIONED norm type for convergence test PC Object: (mg_levels_1_) 1 MPI process type: none linear system matrix = precond matrix: Mat Object: 1 MPI process type: python rows=524, cols=524 Python: Solver_petsc.LeastSquaresOperator Up solver (post-smoother) same as down solver (pre-smoother) linear system matrix = precond matrix: Mat Object: 1 MPI process type: python rows=524, cols=524 Python: Solver_petsc.LeastSquaresOperator Best, Elena ________________________________ From: Barry Smith > Sent: 26 September 2025 19:05:02 To: Moral Sanchez, Elena Cc: petsc-users at mcs.anl.gov Subject: Re: [petsc-users] setting correct tolerances for MG smoother CG at the finest level Send the output using -ksp_view Normally one uses a fixed number of iterations of smoothing on level with multigrid rather than a tolerance, but yes PETSc should respect such a tolerance. Barry On Sep 26, 2025, at 12:49?PM, Moral Sanchez, Elena > wrote: Hi, I am using multigrid (multiplicative) as a preconditioner with a V-cycle of two levels. At each level, I am setting CG as the smoother with certain tolerance. 
What I observe is that in the finest level the CG continues iterating after the residual norm reaches the tolerance (atol) and it only stops when reaching the maximum number of iterations at that level. At the coarsest level this does not occur and the CG stops when the tolerance is reached. I double-checked that the smoother at the finest level has the right tolerance. And I am using a Monitor function to track the residual. Do you know how to make the smoother at the finest level stop when reaching the tolerance? Cheers, Elena.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From mfadams at lbl.gov Mon Sep 29 07:20:56 2025
From: mfadams at lbl.gov (Mark Adams)
Date: Mon, 29 Sep 2025 08:20:56 -0400
Subject: [petsc-users] setting correct tolerances for MG smoother CG at the finest level
In-Reply-To: <421fd9ac0ed0437f88e921d063a6f45f@ipp.mpg.de>
References: <421fd9ac0ed0437f88e921d063a6f45f@ipp.mpg.de>
Message-ID: 

Oh I see the coarse grid solver in your full solver output now. You still want an accurate coarse grid solve. Usually (the default in GAMG) you use a direct solver on one process, and coarsen until the coarse grid is small enough to make that cheap.

On Mon, Sep 29, 2025 at 8:07 AM Moral Sanchez, Elena < Elena.Moral.Sanchez at ipp.mpg.de> wrote: > Hi, I doubled the system size and changed the tolerances just to show a > better example of the problem. This is the output of the callbacks in the > first iteration: > CG Iter 0/1 | res = 2.25e+00/1.00e-09 | 0.1 s > MG lvl 0 (s=884): CG Iter 0/15 | res = 2.25e+00/1.00e-01 | 0.3 s > MG lvl 0 (s=884): CG Iter 1/15 | res = 1.43e+00/1.00e-01 | 0.2 s > MG lvl 0 (s=884): CG Iter 2/15 | res = 1.17e+00/1.00e-01 | 0.2 s > MG lvl 0 (s=884): CG Iter 3/15 | res = 1.32e+00/1.00e-01 | 0.1 s > MG lvl 0 (s=884): CG Iter 4/15 | res = 5.01e-01/1.00e-01 | 0.2 s > MG lvl 0 (s=884): CG Iter 5/15 | res = 3.57e-01/1.00e-01 | 0.2 s > MG lvl 0 (s=884): CG Iter 6/15 | res = 2.49e-01/1.00e-01 | 0.2 s > MG lvl 0 (s=884): CG Iter 7/15 | res = 3.04e-01/1.00e-01 | 0.2 s > MG lvl 0 (s=884): CG Iter 8/15 | res = 2.78e-01/1.00e-01 | 0.2 s > MG lvl 0 (s=884): CG Iter 9/15 | res = 1.68e-01/1.00e-01 | 0.2 s > MG lvl 0 (s=884): CG Iter 10/15 | res = 1.21e-01/1.00e-01 | 0.1 s > MG lvl 0 (s=884): CG Iter 11/15 | res = 9.45e-02/1.00e-01 | 0.2 s > MG lvl 0 (s=884): CG Iter 12/15 | res = 8.31e-02/1.00e-01 | 0.2 s > MG lvl 0 (s=884): CG Iter 13/15 | res = 5.47e-02/1.00e-01 | 0.2 s > MG lvl 0 (s=884): CG Iter 14/15 | res = 4.36e-02/1.00e-01 | 0.2 s > MG lvl 0 (s=884): CG Iter 15/15 | res = 5.08e-02/1.00e-01 | 0.1 s > ConvergedReason MG lvl 0: 4 > MG lvl -1 (s=524): CG Iter 0/15 | res = 8.15e-02/1.00e-01 | 3.0 s > ConvergedReason MG lvl -1: 3 > MG lvl 0 (s=884): CG Iter 0/15 | res = 5.08e-02/1.00e-01 | 0.3 s > MG lvl 0 (s=884): CG Iter 1/15 | res = 2.93e-02/1.00e-01 | 0.2 s > MG lvl 0 (s=884): CG Iter 2/15 | res = 3.26e-02/1.00e-01 | 0.2 s > MG lvl 0 (s=884): CG Iter 3/15 | res = 4.14e-02/1.00e-01 | 0.2 s > MG lvl 0 (s=884): CG Iter 4/15 | res = 4.82e-02/1.00e-01 | 0.2 s > MG lvl 0 (s=884): CG Iter 5/15 | res = 3.20e-02/1.00e-01 | 0.2 s > MG lvl 0 (s=884): CG Iter 6/15 | res = 3.46e-02/1.00e-01 | 0.3 s > MG lvl 0 (s=884): CG Iter 7/15 | res = 3.41e-02/1.00e-01 | 0.2 s > MG lvl 0 (s=884): CG Iter 8/15 | res = 4.69e-02/1.00e-01 | 0.2 s > MG lvl 0 (s=884): CG Iter 9/15 | res = 3.37e-02/1.00e-01 | 0.2 s > MG lvl 0 (s=884): CG Iter 10/15 | res = 4.07e-02/1.00e-01 | 0.2 s > MG lvl 0 (s=884): CG Iter 11/15 | res = 2.66e-02/1.00e-01 | 0.1 s > MG lvl 0 (s=884):
CG Iter 12/15 | res = 2.83e-02/1.00e-01 | 0.1 s > MG lvl 0 (s=884): CG Iter 13/15 | res = 2.98e-02/1.00e-01 | 0.2 s > MG lvl 0 (s=884): CG Iter 14/15 | res = 3.53e-02/1.00e-01 | 0.1 s > MG lvl 0 (s=884): CG Iter 15/15 | res = 2.33e-02/1.00e-01 | 0.2 s > ConvergedReason MG lvl 0: 4 > CG Iter 1/1 | res = 2.42e-02/1.00e-09 | 5.6 s > MG lvl 0 (s=884): CG Iter 0/15 | res = 2.42e-02/1.00e-01 | 0.2 s > MG lvl 0 (s=884): CG Iter 1/15 | res = 1.76e-02/1.00e-01 | 0.2 s > MG lvl 0 (s=884): CG Iter 2/15 | res = 1.40e-02/1.00e-01 | 0.2 s > MG lvl 0 (s=884): CG Iter 3/15 | res = 1.42e-02/1.00e-01 | 0.1 s > MG lvl 0 (s=884): CG Iter 4/15 | res = 1.62e-02/1.00e-01 | 0.2 s > MG lvl 0 (s=884): CG Iter 5/15 | res = 1.35e-02/1.00e-01 | 0.1 s > MG lvl 0 (s=884): CG Iter 6/15 | res = 1.39e-02/1.00e-01 | 0.1 s > MG lvl 0 (s=884): CG Iter 7/15 | res = 1.51e-02/1.00e-01 | 0.2 s > MG lvl 0 (s=884): CG Iter 8/15 | res = 1.28e-02/1.00e-01 | 0.2 s > MG lvl 0 (s=884): CG Iter 9/15 | res = 1.24e-02/1.00e-01 | 0.2 s > MG lvl 0 (s=884): CG Iter 10/15 | res = 9.59e-03/1.00e-01 | 0.1 s > MG lvl 0 (s=884): CG Iter 11/15 | res = 9.02e-03/1.00e-01 | 0.1 s > MG lvl 0 (s=884): CG Iter 12/15 | res = 1.19e-02/1.00e-01 | 0.1 s > MG lvl 0 (s=884): CG Iter 13/15 | res = 1.08e-02/1.00e-01 | 0.1 s > MG lvl 0 (s=884): CG Iter 14/15 | res = 8.19e-03/1.00e-01 | 0.1 s > MG lvl 0 (s=884): CG Iter 15/15 | res = 7.61e-03/1.00e-01 | 0.1 s > ConvergedReason MG lvl 0: 4 > MG lvl -1 (s=524): CG Iter 0/15 | res = 1.38e-02/1.00e-01 | 5.2 s > ConvergedReason MG lvl -1: 3 > MG lvl 0 (s=884): CG Iter 0/15 | res = 7.61e-03/1.00e-01 | 0.2 s > MG lvl 0 (s=884): CG Iter 1/15 | res = 5.62e-03/1.00e-01 | 0.1 s > MG lvl 0 (s=884): CG Iter 2/15 | res = 6.64e-03/1.00e-01 | 0.1 s > MG lvl 0 (s=884): CG Iter 3/15 | res = 8.71e-03/1.00e-01 | 0.1 s > MG lvl 0 (s=884): CG Iter 4/15 | res = 6.40e-03/1.00e-01 | 0.1 s > MG lvl 0 (s=884): CG Iter 5/15 | res = 7.23e-03/1.00e-01 | 0.1 s > MG lvl 0 (s=884): CG Iter 6/15 | res = 6.20e-03/1.00e-01 | 0.1 s > MG lvl 0 (s=884): CG Iter 7/15 | res = 7.04e-03/1.00e-01 | 0.1 s > MG lvl 0 (s=884): CG Iter 8/15 | res = 7.19e-03/1.00e-01 | 0.1 s > MG lvl 0 (s=884): CG Iter 9/15 | res = 6.35e-03/1.00e-01 | 0.1 s > MG lvl 0 (s=884): CG Iter 10/15 | res = 7.31e-03/1.00e-01 | 0.1 s > MG lvl 0 (s=884): CG Iter 11/15 | res = 6.64e-03/1.00e-01 | 0.1 s > MG lvl 0 (s=884): CG Iter 12/15 | res = 7.24e-03/1.00e-01 | 0.1 s > MG lvl 0 (s=884): CG Iter 13/15 | res = 4.97e-03/1.00e-01 | 0.1 s > MG lvl 0 (s=884): CG Iter 14/15 | res = 5.36e-03/1.00e-01 | 0.1 s > MG lvl 0 (s=884): CG Iter 15/15 | res = 5.84e-03/1.00e-01 | 0.1 s > ConvergedReason MG lvl 0: 4 > CG ConvergedReason: -3 > > For completeness, I add here the -ksp_view of the whole solver: > KSP Object: 1 MPI process > type: cg > variant HERMITIAN > maximum iterations=1, nonzero initial guess > tolerances: relative=1e-08, absolute=1e-09, divergence=10000. 
> left preconditioning > using UNPRECONDITIONED norm type for convergence test > PC Object: 1 MPI process > type: mg > type is MULTIPLICATIVE, levels=2 cycles=v > Cycles per PCApply=1 > Not using Galerkin computed coarse grid matrices > Coarse grid solver -- level 0 ------------------------------- > KSP Object: (mg_coarse_) 1 MPI process > type: cg > variant HERMITIAN > maximum iterations=15, nonzero initial guess > tolerances: relative=0.1, absolute=0.1, divergence=1e+30 > left preconditioning > using UNPRECONDITIONED norm type for convergence test > PC Object: (mg_coarse_) 1 MPI process > type: none > linear system matrix = precond matrix: > Mat Object: 1 MPI process > type: python > rows=524, cols=524 > Python: Solver_petsc.LeastSquaresOperator > Down solver (pre-smoother) on level 1 ------------------------------- > KSP Object: (mg_levels_1_) 1 MPI process > type: cg > variant HERMITIAN > maximum iterations=15, nonzero initial guess > tolerances: relative=0.1, absolute=0.1, divergence=1e+30 > left preconditioning > using UNPRECONDITIONED norm type for convergence test > PC Object: (mg_levels_1_) 1 MPI process > type: none > linear system matrix = precond matrix: > Mat Object: 1 MPI process > type: python > rows=884, cols=884 > Python: Solver_petsc.LeastSquaresOperator > Up solver (post-smoother) same as down solver (pre-smoother) > linear system matrix = precond matrix: > Mat Object: 1 MPI process > type: python > rows=884, cols=884 > Python: Solver_petsc.LeastSquaresOperator > > Regarding Mark's Email: What do you mean with "the whole solver doesn't > have a coarse grid"? I am using my own Restriction and Interpolation > operators. > Thanks for the help, > Elena > > ------------------------------ > *From:* Mark Adams > *Sent:* 28 September 2025 20:13:54 > *To:* Barry Smith > *Cc:* Moral Sanchez, Elena; petsc-users > *Subject:* Re: [petsc-users] setting correct tolerances for MG smoother > CG at the finest level > > Not sure why your "whole"solver does not have a coarse grid but this is > wrong: > > KSP Object: (mg_coarse_) 1 MPI process > type: cg > variant HERMITIAN > maximum iterations=100, initial guess is zero > tolerances: relative=0.1, absolute=0.1, divergence=1e+30 > > The coarse grid has to be accurate. The defaults are a good place to > start: max_it=10.000, rtol=1e-5, atol=1e-30 (ish) > > > On Fri, Sep 26, 2025 at 3:21?PM Barry Smith wrote: > >> Looks reasonable. 
Send the output running with >> >> -ksp_monitor -mg_levels_ksp_monitor -ksp_converged_reason >> -mg_levels_ksp_converged_reason >> >> On Sep 26, 2025, at 1:19?PM, Moral Sanchez, Elena < >> Elena.Moral.Sanchez at ipp.mpg.de> wrote: >> >> Dear Barry, >> >> This is -ksp_view for the smoother at the finest level: >> >> KSP Object: (mg_levels_1_) 1 MPI process >> type: cg >> variant HERMITIAN >> maximum iterations=10, nonzero initial guess >> tolerances: relative=0.1, absolute=0.1, divergence=1e+30 >> left preconditioning >> using UNPRECONDITIONED norm type for convergence test >> PC Object: (mg_levels_1_) 1 MPI process >> type: none >> linear system matrix = precond matrix: >> Mat Object: 1 MPI process >> type: python >> rows=524, cols=524 >> Python: Solver_petsc.LeastSquaresOperator >> >> And at the coarsest level: >> >> KSP Object: (mg_coarse_) 1 MPI process >> type: cg >> variant HERMITIAN >> maximum iterations=100, initial guess is zero >> tolerances: relative=0.1, absolute=0.1, divergence=1e+30 >> left preconditioning >> using UNPRECONDITIONED norm type for convergence test >> PC Object: (mg_coarse_) 1 MPI process >> type: none >> linear system matrix = precond matrix: >> Mat Object: 1 MPI process >> type: python >> rows=344, cols=344 >> Python: Solver_petsc.LeastSquaresOperator >> >> And for the whole solver: >> >> KSP Object: 1 MPI process >> type: cg >> variant HERMITIAN >> maximum iterations=100, nonzero initial guess >> tolerances: relative=1e-08, absolute=1e-09, divergence=10000. >> left preconditioning >> using UNPRECONDITIONED norm type for convergence test >> PC Object: 1 MPI process >> type: mg >> type is MULTIPLICATIVE, levels=2 cycles=v >> Cycles per PCApply=1 >> Not using Galerkin computed coarse grid matrices >> Coarse grid solver -- level 0 ------------------------------- >> KSP Object: (mg_coarse_) 1 MPI process >> type: cg >> variant HERMITIAN >> maximum iterations=100, initial guess is zero >> tolerances: relative=0.1, absolute=0.1, divergence=1e+30 >> left preconditioning >> using UNPRECONDITIONED norm type for convergence test >> PC Object: (mg_coarse_) 1 MPI process >> type: none >> linear system matrix = precond matrix: >> Mat Object: 1 MPI process >> type: python >> rows=344, cols=344 >> Python: Solver_petsc.LeastSquaresOperator >> Down solver (pre-smoother) on level 1 ------------------------------- >> KSP Object: (mg_levels_1_) 1 MPI process >> type: cg >> variant HERMITIAN >> maximum iterations=10, nonzero initial guess >> tolerances: relative=0.1, absolute=0.1, divergence=1e+30 >> left preconditioning >> using UNPRECONDITIONED norm type for convergence test >> PC Object: (mg_levels_1_) 1 MPI process >> type: none >> linear system matrix = precond matrix: >> Mat Object: 1 MPI process >> type: python >> rows=524, cols=524 >> Python: Solver_petsc.LeastSquaresOperator >> Up solver (post-smoother) same as down solver (pre-smoother) >> linear system matrix = precond matrix: >> Mat Object: 1 MPI process >> type: python >> rows=524, cols=524 >> Python: Solver_petsc.LeastSquaresOperator >> >> Best, >> Elena >> >> ------------------------------ >> *From:* Barry Smith >> *Sent:* 26 September 2025 19:05:02 >> *To:* Moral Sanchez, Elena >> *Cc:* petsc-users at mcs.anl.gov >> *Subject:* Re: [petsc-users] setting correct tolerances for MG smoother >> CG at the finest level >> >> >> Send the output using -ksp_view >> >> Normally one uses a fixed number of iterations of smoothing on level >> with multigrid rather than a tolerance, but yes PETSc should respect such a >> 
tolerance. >> >> Barry >> >> >> On Sep 26, 2025, at 12:49 PM, Moral Sanchez, Elena < >> Elena.Moral.Sanchez at ipp.mpg.de> wrote: >> >> Hi, >> I am using multigrid (multiplicative) as a preconditioner with a V-cycle >> of two levels. At each level, I am setting CG as the smoother with certain >> tolerance. >> >> What I observe is that in the finest level the CG continues iterating >> after the residual norm reaches the tolerance (atol) and it only stops when >> reaching the maximum number of iterations at that level. At the coarsest >> level this does not occur and the CG stops when the tolerance is reached. >> >> I double-checked that the smoother at the finest level has the right >> tolerance. And I am using a Monitor function to track the residual. >> >> Do you know how to make the smoother at the finest level stop when >> reaching the tolerance? >> >> Cheers, >> Elena.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From Elena.Moral.Sanchez at ipp.mpg.de Mon Sep 29 07:39:51 2025
From: Elena.Moral.Sanchez at ipp.mpg.de (Moral Sanchez, Elena)
Date: Mon, 29 Sep 2025 12:39:51 +0000
Subject: [petsc-users] setting correct tolerances for MG smoother CG at the finest level
In-Reply-To: 
References: <421fd9ac0ed0437f88e921d063a6f45f@ipp.mpg.de>
Message-ID: <2622f5910bef400f983345df49977fa8@ipp.mpg.de>

Thanks for the hint. I agree that the coarse solve should be much more "accurate". However, for the moment I am just trying to understand what the MG is doing exactly. I am puzzled to see that the fine grid smoother ("lvl 0") does not stop when the residual becomes less than 1e-1. It should converge due to the atol.

________________________________
From: Mark Adams
Sent: 29 September 2025 14:20:56
To: Moral Sanchez, Elena
Cc: Barry Smith; petsc-users
Subject: Re: [petsc-users] setting correct tolerances for MG smoother CG at the finest level

Oh I see the coarse grid solver in your full solver output now. You still want an accurate coarse grid solve. Usually (the default in GAMG) you use a direct solver on one process, and coarsen until the coarse grid is small enough to make that cheap.

On Mon, Sep 29, 2025 at 8:07 AM Moral Sanchez, Elena > wrote: Hi, I doubled the system size and changed the tolerances just to show a better example of the problem.
This is the output of the callbacks in the first iteration:

CG Iter 0/1 | res = 2.25e+00/1.00e-09 | 0.1 s
MG lvl 0 (s=884): CG Iter 0/15 | res = 2.25e+00/1.00e-01 | 0.3 s
MG lvl 0 (s=884): CG Iter 1/15 | res = 1.43e+00/1.00e-01 | 0.2 s
MG lvl 0 (s=884): CG Iter 2/15 | res = 1.17e+00/1.00e-01 | 0.2 s
MG lvl 0 (s=884): CG Iter 3/15 | res = 1.32e+00/1.00e-01 | 0.1 s
MG lvl 0 (s=884): CG Iter 4/15 | res = 5.01e-01/1.00e-01 | 0.2 s
MG lvl 0 (s=884): CG Iter 5/15 | res = 3.57e-01/1.00e-01 | 0.2 s
MG lvl 0 (s=884): CG Iter 6/15 | res = 2.49e-01/1.00e-01 | 0.2 s
MG lvl 0 (s=884): CG Iter 7/15 | res = 3.04e-01/1.00e-01 | 0.2 s
MG lvl 0 (s=884): CG Iter 8/15 | res = 2.78e-01/1.00e-01 | 0.2 s
MG lvl 0 (s=884): CG Iter 9/15 | res = 1.68e-01/1.00e-01 | 0.2 s
MG lvl 0 (s=884): CG Iter 10/15 | res = 1.21e-01/1.00e-01 | 0.1 s
MG lvl 0 (s=884): CG Iter 11/15 | res = 9.45e-02/1.00e-01 | 0.2 s
MG lvl 0 (s=884): CG Iter 12/15 | res = 8.31e-02/1.00e-01 | 0.2 s
MG lvl 0 (s=884): CG Iter 13/15 | res = 5.47e-02/1.00e-01 | 0.2 s
MG lvl 0 (s=884): CG Iter 14/15 | res = 4.36e-02/1.00e-01 | 0.2 s
MG lvl 0 (s=884): CG Iter 15/15 | res = 5.08e-02/1.00e-01 | 0.1 s
ConvergedReason MG lvl 0: 4
MG lvl -1 (s=524): CG Iter 0/15 | res = 8.15e-02/1.00e-01 | 3.0 s
ConvergedReason MG lvl -1: 3
MG lvl 0 (s=884): CG Iter 0/15 | res = 5.08e-02/1.00e-01 | 0.3 s
MG lvl 0 (s=884): CG Iter 1/15 | res = 2.93e-02/1.00e-01 | 0.2 s
MG lvl 0 (s=884): CG Iter 2/15 | res = 3.26e-02/1.00e-01 | 0.2 s
MG lvl 0 (s=884): CG Iter 3/15 | res = 4.14e-02/1.00e-01 | 0.2 s
MG lvl 0 (s=884): CG Iter 4/15 | res = 4.82e-02/1.00e-01 | 0.2 s
MG lvl 0 (s=884): CG Iter 5/15 | res = 3.20e-02/1.00e-01 | 0.2 s
MG lvl 0 (s=884): CG Iter 6/15 | res = 3.46e-02/1.00e-01 | 0.3 s
MG lvl 0 (s=884): CG Iter 7/15 | res = 3.41e-02/1.00e-01 | 0.2 s
MG lvl 0 (s=884): CG Iter 8/15 | res = 4.69e-02/1.00e-01 | 0.2 s
MG lvl 0 (s=884): CG Iter 9/15 | res = 3.37e-02/1.00e-01 | 0.2 s
MG lvl 0 (s=884): CG Iter 10/15 | res = 4.07e-02/1.00e-01 | 0.2 s
MG lvl 0 (s=884): CG Iter 11/15 | res = 2.66e-02/1.00e-01 | 0.1 s
MG lvl 0 (s=884): CG Iter 12/15 | res = 2.83e-02/1.00e-01 | 0.1 s
MG lvl 0 (s=884): CG Iter 13/15 | res = 2.98e-02/1.00e-01 | 0.2 s
MG lvl 0 (s=884): CG Iter 14/15 | res = 3.53e-02/1.00e-01 | 0.1 s
MG lvl 0 (s=884): CG Iter 15/15 | res = 2.33e-02/1.00e-01 | 0.2 s
ConvergedReason MG lvl 0: 4
CG Iter 1/1 | res = 2.42e-02/1.00e-09 | 5.6 s
MG lvl 0 (s=884): CG Iter 0/15 | res = 2.42e-02/1.00e-01 | 0.2 s
MG lvl 0 (s=884): CG Iter 1/15 | res = 1.76e-02/1.00e-01 | 0.2 s
MG lvl 0 (s=884): CG Iter 2/15 | res = 1.40e-02/1.00e-01 | 0.2 s
MG lvl 0 (s=884): CG Iter 3/15 | res = 1.42e-02/1.00e-01 | 0.1 s
MG lvl 0 (s=884): CG Iter 4/15 | res = 1.62e-02/1.00e-01 | 0.2 s
MG lvl 0 (s=884): CG Iter 5/15 | res = 1.35e-02/1.00e-01 | 0.1 s
MG lvl 0 (s=884): CG Iter 6/15 | res = 1.39e-02/1.00e-01 | 0.1 s
MG lvl 0 (s=884): CG Iter 7/15 | res = 1.51e-02/1.00e-01 | 0.2 s
MG lvl 0 (s=884): CG Iter 8/15 | res = 1.28e-02/1.00e-01 | 0.2 s
MG lvl 0 (s=884): CG Iter 9/15 | res = 1.24e-02/1.00e-01 | 0.2 s
MG lvl 0 (s=884): CG Iter 10/15 | res = 9.59e-03/1.00e-01 | 0.1 s
MG lvl 0 (s=884): CG Iter 11/15 | res = 9.02e-03/1.00e-01 | 0.1 s
MG lvl 0 (s=884): CG Iter 12/15 | res = 1.19e-02/1.00e-01 | 0.1 s
MG lvl 0 (s=884): CG Iter 13/15 | res = 1.08e-02/1.00e-01 | 0.1 s
MG lvl 0 (s=884): CG Iter 14/15 | res = 8.19e-03/1.00e-01 | 0.1 s
MG lvl 0 (s=884): CG Iter 15/15 | res = 7.61e-03/1.00e-01 | 0.1 s
ConvergedReason MG lvl 0: 4
MG lvl -1 (s=524): CG Iter 0/15 | res = 1.38e-02/1.00e-01 | 5.2 s
ConvergedReason MG lvl -1: 3
MG lvl 0 (s=884): CG Iter 0/15 | res = 7.61e-03/1.00e-01 | 0.2 s
MG lvl 0 (s=884): CG Iter 1/15 | res = 5.62e-03/1.00e-01 | 0.1 s
MG lvl 0 (s=884): CG Iter 2/15 | res = 6.64e-03/1.00e-01 | 0.1 s
MG lvl 0 (s=884): CG Iter 3/15 | res = 8.71e-03/1.00e-01 | 0.1 s
MG lvl 0 (s=884): CG Iter 4/15 | res = 6.40e-03/1.00e-01 | 0.1 s
MG lvl 0 (s=884): CG Iter 5/15 | res = 7.23e-03/1.00e-01 | 0.1 s
MG lvl 0 (s=884): CG Iter 6/15 | res = 6.20e-03/1.00e-01 | 0.1 s
MG lvl 0 (s=884): CG Iter 7/15 | res = 7.04e-03/1.00e-01 | 0.1 s
MG lvl 0 (s=884): CG Iter 8/15 | res = 7.19e-03/1.00e-01 | 0.1 s
MG lvl 0 (s=884): CG Iter 9/15 | res = 6.35e-03/1.00e-01 | 0.1 s
MG lvl 0 (s=884): CG Iter 10/15 | res = 7.31e-03/1.00e-01 | 0.1 s
MG lvl 0 (s=884): CG Iter 11/15 | res = 6.64e-03/1.00e-01 | 0.1 s
MG lvl 0 (s=884): CG Iter 12/15 | res = 7.24e-03/1.00e-01 | 0.1 s
MG lvl 0 (s=884): CG Iter 13/15 | res = 4.97e-03/1.00e-01 | 0.1 s
MG lvl 0 (s=884): CG Iter 14/15 | res = 5.36e-03/1.00e-01 | 0.1 s
MG lvl 0 (s=884): CG Iter 15/15 | res = 5.84e-03/1.00e-01 | 0.1 s
ConvergedReason MG lvl 0: 4
CG ConvergedReason: -3

For completeness, I add here the -ksp_view of the whole solver:

KSP Object: 1 MPI process
  type: cg
    variant HERMITIAN
  maximum iterations=1, nonzero initial guess
  tolerances: relative=1e-08, absolute=1e-09, divergence=10000.
  left preconditioning
  using UNPRECONDITIONED norm type for convergence test
PC Object: 1 MPI process
  type: mg
    type is MULTIPLICATIVE, levels=2 cycles=v
      Cycles per PCApply=1
      Not using Galerkin computed coarse grid matrices
  Coarse grid solver -- level 0 -------------------------------
    KSP Object: (mg_coarse_) 1 MPI process
      type: cg
        variant HERMITIAN
      maximum iterations=15, nonzero initial guess
      tolerances: relative=0.1, absolute=0.1, divergence=1e+30
      left preconditioning
      using UNPRECONDITIONED norm type for convergence test
    PC Object: (mg_coarse_) 1 MPI process
      type: none
      linear system matrix = precond matrix:
      Mat Object: 1 MPI process
        type: python
        rows=524, cols=524
        Python: Solver_petsc.LeastSquaresOperator
  Down solver (pre-smoother) on level 1 -------------------------------
    KSP Object: (mg_levels_1_) 1 MPI process
      type: cg
        variant HERMITIAN
      maximum iterations=15, nonzero initial guess
      tolerances: relative=0.1, absolute=0.1, divergence=1e+30
      left preconditioning
      using UNPRECONDITIONED norm type for convergence test
    PC Object: (mg_levels_1_) 1 MPI process
      type: none
      linear system matrix = precond matrix:
      Mat Object: 1 MPI process
        type: python
        rows=884, cols=884
        Python: Solver_petsc.LeastSquaresOperator
  Up solver (post-smoother) same as down solver (pre-smoother)
  linear system matrix = precond matrix:
  Mat Object: 1 MPI process
    type: python
    rows=884, cols=884
    Python: Solver_petsc.LeastSquaresOperator

Regarding Mark's email: What do you mean with "the whole solver doesn't have a coarse grid"? I am using my own Restriction and Interpolation operators.

Thanks for the help,
Elena

________________________________
From: Mark Adams
Sent: 28 September 2025 20:13:54
To: Barry Smith
Cc: Moral Sanchez, Elena; petsc-users
Subject: Re: [petsc-users] setting correct tolerances for MG smoother CG at the finest level

Not sure why your "whole" solver does not have a coarse grid but this is wrong:

KSP Object: (mg_coarse_) 1 MPI process
  type: cg
    variant HERMITIAN
  maximum iterations=100, initial guess is zero
  tolerances: relative=0.1, absolute=0.1, divergence=1e+30

The coarse grid has to be accurate. The defaults are a good place to start: max_it=10000, rtol=1e-5, atol=1e-30 (ish)

On Fri, Sep 26, 2025 at 3:21 PM Barry Smith wrote:

Looks reasonable. Send the output running with

    -ksp_monitor -mg_levels_ksp_monitor -ksp_converged_reason -mg_levels_ksp_converged_reason

On Sep 26, 2025, at 1:19 PM, Moral Sanchez, Elena wrote:

Dear Barry,

This is -ksp_view for the smoother at the finest level:

KSP Object: (mg_levels_1_) 1 MPI process
  type: cg
    variant HERMITIAN
  maximum iterations=10, nonzero initial guess
  tolerances: relative=0.1, absolute=0.1, divergence=1e+30
  left preconditioning
  using UNPRECONDITIONED norm type for convergence test
PC Object: (mg_levels_1_) 1 MPI process
  type: none
  linear system matrix = precond matrix:
  Mat Object: 1 MPI process
    type: python
    rows=524, cols=524
    Python: Solver_petsc.LeastSquaresOperator

And at the coarsest level:

KSP Object: (mg_coarse_) 1 MPI process
  type: cg
    variant HERMITIAN
  maximum iterations=100, initial guess is zero
  tolerances: relative=0.1, absolute=0.1, divergence=1e+30
  left preconditioning
  using UNPRECONDITIONED norm type for convergence test
PC Object: (mg_coarse_) 1 MPI process
  type: none
  linear system matrix = precond matrix:
  Mat Object: 1 MPI process
    type: python
    rows=344, cols=344
    Python: Solver_petsc.LeastSquaresOperator

And for the whole solver:

KSP Object: 1 MPI process
  type: cg
    variant HERMITIAN
  maximum iterations=100, nonzero initial guess
  tolerances: relative=1e-08, absolute=1e-09, divergence=10000.
  left preconditioning
  using UNPRECONDITIONED norm type for convergence test
PC Object: 1 MPI process
  type: mg
    type is MULTIPLICATIVE, levels=2 cycles=v
      Cycles per PCApply=1
      Not using Galerkin computed coarse grid matrices
  Coarse grid solver -- level 0 -------------------------------
    KSP Object: (mg_coarse_) 1 MPI process
      type: cg
        variant HERMITIAN
      maximum iterations=100, initial guess is zero
      tolerances: relative=0.1, absolute=0.1, divergence=1e+30
      left preconditioning
      using UNPRECONDITIONED norm type for convergence test
    PC Object: (mg_coarse_) 1 MPI process
      type: none
      linear system matrix = precond matrix:
      Mat Object: 1 MPI process
        type: python
        rows=344, cols=344
        Python: Solver_petsc.LeastSquaresOperator
  Down solver (pre-smoother) on level 1 -------------------------------
    KSP Object: (mg_levels_1_) 1 MPI process
      type: cg
        variant HERMITIAN
      maximum iterations=10, nonzero initial guess
      tolerances: relative=0.1, absolute=0.1, divergence=1e+30
      left preconditioning
      using UNPRECONDITIONED norm type for convergence test
    PC Object: (mg_levels_1_) 1 MPI process
      type: none
      linear system matrix = precond matrix:
      Mat Object: 1 MPI process
        type: python
        rows=524, cols=524
        Python: Solver_petsc.LeastSquaresOperator
  Up solver (post-smoother) same as down solver (pre-smoother)
  linear system matrix = precond matrix:
  Mat Object: 1 MPI process
    type: python
    rows=524, cols=524
    Python: Solver_petsc.LeastSquaresOperator

Best,
Elena

________________________________
From: Barry Smith
Sent: 26 September 2025 19:05:02
To: Moral Sanchez, Elena
Cc: petsc-users at mcs.anl.gov
Subject: Re: [petsc-users] setting correct tolerances for MG smoother CG at the finest level

Send the output using -ksp_view

Normally one uses a fixed number of iterations of smoothing on each level with multigrid rather than a tolerance, but yes PETSc should respect such a tolerance.

Barry

On Sep 26, 2025, at 12:49 PM, Moral Sanchez, Elena wrote:

Hi,
I am using multigrid (multiplicative) as a preconditioner with a V-cycle of two levels. At each level, I am setting CG as the smoother with a certain tolerance.

What I observe is that at the finest level the CG continues iterating after the residual norm reaches the tolerance (atol) and it only stops when reaching the maximum number of iterations at that level. At the coarsest level this does not occur and the CG stops when the tolerance is reached.

I double-checked that the smoother at the finest level has the right tolerance. And I am using a Monitor function to track the residual.

Do you know how to make the smoother at the finest level stop when reaching the tolerance?

Cheers,
Elena.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From bsmith at petsc.dev Mon Sep 29 08:56:33 2025
From: bsmith at petsc.dev (Barry Smith)
Date: Mon, 29 Sep 2025 09:56:33 -0400
Subject: [petsc-users] setting correct tolerances for MG smoother CG at the finest level
In-Reply-To: <2622f5910bef400f983345df49977fa8@ipp.mpg.de>
References: <421fd9ac0ed0437f88e921d063a6f45f@ipp.mpg.de> <2622f5910bef400f983345df49977fa8@ipp.mpg.de>
Message-ID:

I asked you to run with

    -ksp_monitor -mg_levels_ksp_monitor -ksp_converged_reason -mg_levels_ksp_converged_reason

you chose not to, delaying the process of understanding what is happening.

Please run with those options and send the output. My guess is that you are computing the "residual norms" in your own monitor code, and it is doing so differently than what PETSc does, thus resulting in the appearance of a sufficiently small residual norm, whereas PETSc may not have calculated something that small.

Barry

> On Sep 29, 2025, at 8:39 AM, Moral Sanchez, Elena wrote:
>
> Thanks for the hint. I agree that the coarse solve should be much more "accurate". However, for the moment I am just trying to understand what the MG is doing exactly.
>
> I am puzzled to see that the fine grid smoother ("lvl 0") does not stop when the residual becomes less than 1e-1. It should converge due to the atol.
>
> From: Mark Adams
> Sent: 29 September 2025 14:20:56
> To: Moral Sanchez, Elena
> Cc: Barry Smith; petsc-users
> Subject: Re: [petsc-users] setting correct tolerances for MG smoother CG at the finest level
>
> Oh I see the coarse grid solver in your full solver output now.
> You still want an accurate coarse grid solve. Usually (the default in GAMG) you use a direct solver on one process, and coarsen until the coarse grid is small enough to make that cheap.
>
> On Mon, Sep 29, 2025 at 8:07 AM Moral Sanchez, Elena wrote:
>> Hi, I doubled the system size and changed the tolerances just to show a better example of the problem. [...]
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
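A side note for readers reproducing this setup: the options Barry lists can also be set from petsc4py, and a monitor attached through PETSc receives the residual norm the solver itself tracks, so it cannot drift from what the built-in convergence test (KSPConvergedDefault) compares against atol/rtol. A minimal sketch, assuming a KSP object named `ksp` configured as in the -ksp_view output above (the variable name and printed message are illustrative, not from the thread):

    from petsc4py import PETSc

    opts = PETSc.Options()
    opts['ksp_monitor'] = ''                      # the options Barry asked for
    opts['mg_levels_ksp_monitor'] = ''
    opts['ksp_converged_reason'] = ''
    opts['mg_levels_ksp_converged_reason'] = ''

    def monitor(ksp, its, rnorm):
        # rnorm is the norm PETSc uses internally for this KSP, so printing
        # it here agrees by construction with the convergence test
        PETSc.Sys.Print(f"iter {its}: rnorm = {rnorm:.6e}")

    ksp.setMonitor(monitor)
    ksp.setFromOptions()   # picks up the database options set above

A hand-rolled callback that computes its own norm (for example from a separately assembled residual) can easily report a smaller number than the one the solver is testing, which is the mismatch Barry suspects.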
From Elena.Moral.Sanchez at ipp.mpg.de Mon Sep 29 10:12:23 2025
From: Elena.Moral.Sanchez at ipp.mpg.de (Moral Sanchez, Elena)
Date: Mon, 29 Sep 2025 15:12:23 +0000
Subject: [petsc-users] setting correct tolerances for MG smoother CG at the finest level
In-Reply-To:
References: <421fd9ac0ed0437f88e921d063a6f45f@ipp.mpg.de> <2622f5910bef400f983345df49977fa8@ipp.mpg.de>
Message-ID: <67889c32cacf4cf3ac7e7b643297863b@ipp.mpg.de>

This is the output:

Residual norms for mg_levels_1_ solve.
0 KSP Residual norm 2.249726733143e+00
1 KSP Residual norm 1.433120400946e+00
2 KSP Residual norm 1.169262560123e+00
3 KSP Residual norm 1.323528716607e+00
4 KSP Residual norm 5.006323254234e-01
5 KSP Residual norm 3.569836784785e-01
6 KSP Residual norm 2.493182937513e-01
7 KSP Residual norm 3.038202502298e-01
8 KSP Residual norm 2.780214194402e-01
9 KSP Residual norm 1.676826341491e-01
10 KSP Residual norm 1.209985378713e-01
11 KSP Residual norm 9.445076689969e-02
12 KSP Residual norm 8.308555284580e-02
13 KSP Residual norm 5.472865592585e-02
14 KSP Residual norm 4.357870564398e-02
15 KSP Residual norm 5.079681292439e-02
Linear mg_levels_1_ solve converged due to CONVERGED_ITS iterations 15
Residual norms for mg_levels_1_ solve.
0 KSP Residual norm 5.079681292439e-02
1 KSP Residual norm 2.934938644003e-02
2 KSP Residual norm 3.257065831294e-02
3 KSP Residual norm 4.143063876867e-02
4 KSP Residual norm 4.822471409489e-02
5 KSP Residual norm 3.197538246153e-02
6 KSP Residual norm 3.461217019835e-02
7 KSP Residual norm 3.410193775327e-02
8 KSP Residual norm 4.690424294464e-02
9 KSP Residual norm 3.366148892800e-02
10 KSP Residual norm 4.068015727689e-02
11 KSP Residual norm 2.658836123104e-02
12 KSP Residual norm 2.826244186003e-02
13 KSP Residual norm 2.981793619508e-02
14 KSP Residual norm 3.525455091450e-02
15 KSP Residual norm 2.331539121838e-02
Linear mg_levels_1_ solve converged due to CONVERGED_ITS iterations 15
Residual norms for mg_levels_1_ solve.
0 KSP Residual norm 2.421498365806e-02
1 KSP Residual norm 1.761072112362e-02
2 KSP Residual norm 1.400842489042e-02
3 KSP Residual norm 1.419665483348e-02
4 KSP Residual norm 1.617590701667e-02
5 KSP Residual norm 1.354824081005e-02
6 KSP Residual norm 1.387252917475e-02
7 KSP Residual norm 1.514043102087e-02
8 KSP Residual norm 1.275811124745e-02
9 KSP Residual norm 1.241039155981e-02
10 KSP Residual norm 9.585207801652e-03
11 KSP Residual norm 9.022641230732e-03
12 KSP Residual norm 1.187709152046e-02
13 KSP Residual norm 1.084880112494e-02
14 KSP Residual norm 8.194750346781e-03
15 KSP Residual norm 7.614246199165e-03
Linear mg_levels_1_ solve converged due to CONVERGED_ITS iterations 15
Residual norms for mg_levels_1_ solve.
0 KSP Residual norm 7.614246199165e-03
1 KSP Residual norm 5.620014684145e-03
2 KSP Residual norm 6.643368363907e-03
3 KSP Residual norm 8.708642393659e-03
4 KSP Residual norm 6.401852907459e-03
5 KSP Residual norm 7.230576215262e-03
6 KSP Residual norm 6.204081601285e-03
7 KSP Residual norm 7.038656665944e-03
8 KSP Residual norm 7.194079694050e-03
9 KSP Residual norm 6.353576889135e-03
10 KSP Residual norm 7.313589502731e-03
11 KSP Residual norm 6.643320423193e-03
12 KSP Residual norm 7.235443182108e-03
13 KSP Residual norm 4.971292307201e-03
14 KSP Residual norm 5.357933842147e-03
15 KSP Residual norm 5.841682994497e-03
Linear mg_levels_1_ solve converged due to CONVERGED_ITS iterations 15
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
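For readers decoding the reason codes in the earlier callback output: 4 is CONVERGED_ITS (the solve was ended by the iteration-count test, matching the "converged due to CONVERGED_ITS iterations 15" lines above) and -3 is DIVERGED_MAX_IT. A small petsc4py sketch of checking this on the fine-level smoother, assuming `pc` names the PCMG object from the thread (getMGSmoother is the petsc4py accessor for a level's KSP):

    from petsc4py import PETSc

    smoother = pc.getMGSmoother(1)        # the (mg_levels_1_) KSP; level 1 is finest here
    reason = smoother.getConvergedReason()
    if reason == PETSc.KSP.ConvergedReason.CONVERGED_ITS:
        # the iteration-count test ended the solve, so atol/rtol were not
        # the stopping criterion, consistent with the logs above
        print("smoother stopped on its iteration count")
    elif reason == PETSc.KSP.ConvergedReason.DIVERGED_MAX_IT:
        print("smoother hit max_it without satisfying rtol/atol")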
From bsmith at petsc.dev Mon Sep 29 13:31:26 2025
From: bsmith at petsc.dev (Barry Smith)
Date: Mon, 29 Sep 2025 14:31:26 -0400
Subject: [petsc-users] setting correct tolerances for MG smoother CG at the finest level
In-Reply-To: <67889c32cacf4cf3ac7e7b643297863b@ipp.mpg.de>
References: <421fd9ac0ed0437f88e921d063a6f45f@ipp.mpg.de> <2622f5910bef400f983345df49977fa8@ipp.mpg.de> <67889c32cacf4cf3ac7e7b643297863b@ipp.mpg.de>
Message-ID:

Thanks. I missed something earlier in the KSPView

    using UNPRECONDITIONED norm type for convergence test

Please add the options

    -ksp_monitor_true_residual -mg_levels_ksp_monitor_true_residual

It is using the unpreconditioned residual norms for convergence testing but we are printing the preconditioned norms.

Barry

> On Sep 29, 2025, at 11:12 AM, Moral Sanchez, Elena wrote:
>
> This is the output: [...]
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
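An aside on Barry's observation: the true-residual monitor prints the unpreconditioned norm next to the default one, and since the smoothers here use pc_type none, the two should coincide, as the next message's output suggests. A hedged petsc4py sketch of requesting the same output programmatically (again assuming `smoother` names the mg_levels_1_ KSP; the option names are the standard PETSc ones Barry lists):

    from petsc4py import PETSc

    opts = PETSc.Options()
    opts['ksp_monitor_true_residual'] = ''
    opts['mg_levels_ksp_monitor_true_residual'] = ''

    # make explicit which norm the smoother's convergence test sees
    smoother.setNormType(PETSc.KSP.NormType.UNPRECONDITIONED)
    smoother.setFromOptions()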
From Elena.Moral.Sanchez at ipp.mpg.de Tue Sep 30 04:05:52 2025
From: Elena.Moral.Sanchez at ipp.mpg.de (Moral Sanchez, Elena)
Date: Tue, 30 Sep 2025 09:05:52 +0000
Subject: [petsc-users] setting correct tolerances for MG smoother CG at the finest level
In-Reply-To:
References: <421fd9ac0ed0437f88e921d063a6f45f@ipp.mpg.de> <2622f5910bef400f983345df49977fa8@ipp.mpg.de> <67889c32cacf4cf3ac7e7b643297863b@ipp.mpg.de>
Message-ID:

This is what I get:

Residual norms for mg_levels_1_ solve.
0 KSP Residual norm 2.249726733143e+00
Residual norms for mg_levels_1_ solve.
0 KSP unpreconditioned resid norm 2.249726733143e+00 true resid norm 2.249726733143e+00 ||r(i)||/||b|| 1.000000000000e+00
1 KSP Residual norm 1.433120400946e+00
1 KSP unpreconditioned resid norm 1.433120400946e+00 true resid norm 1.433120400946e+00 ||r(i)||/||b|| 6.370197677051e-01
2 KSP Residual norm 1.169262560123e+00
2 KSP unpreconditioned resid norm 1.169262560123e+00 true resid norm 1.169262560123e+00 ||r(i)||/||b|| 5.197353718108e-01
3 KSP Residual norm 1.323528716607e+00
3 KSP unpreconditioned resid norm 1.323528716607e+00 true resid norm 1.323528716607e+00 ||r(i)||/||b|| 5.883064361148e-01
4 KSP Residual norm 5.006323254234e-01
4 KSP unpreconditioned resid norm 5.006323254234e-01 true resid norm 5.006323254234e-01 ||r(i)||/||b|| 2.225302824775e-01
5 KSP Residual norm 3.569836784785e-01
5 KSP unpreconditioned resid norm 3.569836784785e-01 true resid norm 3.569836784785e-01 ||r(i)||/||b|| 1.586786844906e-01
6 KSP Residual norm 2.493182937513e-01
6 KSP unpreconditioned resid norm 2.493182937513e-01 true resid norm 2.493182937513e-01 ||r(i)||/||b|| 1.108215900529e-01
7 KSP Residual norm 3.038202502298e-01
7 KSP unpreconditioned resid norm 3.038202502298e-01 true resid norm 3.038202502298e-01 ||r(i)||/||b|| 1.350476241198e-01
8 KSP Residual norm 2.780214194402e-01
8 KSP unpreconditioned resid norm 2.780214194402e-01 true resid norm 2.780214194402e-01 ||r(i)||/||b|| 1.235800843473e-01
9 KSP Residual norm 1.676826341491e-01
9 KSP unpreconditioned resid norm 1.676826341491e-01 true resid norm 1.676826341491e-01 ||r(i)||/||b|| 7.453466755710e-02
10 KSP Residual norm 1.209985378713e-01
10 KSP unpreconditioned resid norm 1.209985378713e-01 true resid norm 1.209985378713e-01 ||r(i)||/||b|| 5.378366007245e-02
11 KSP Residual norm 9.445076689969e-02
11 KSP unpreconditioned resid norm 9.445076689969e-02 true resid norm 9.445076689969e-02 ||r(i)||/||b|| 4.198321756516e-02
12 KSP Residual norm 8.308555284580e-02
12 KSP unpreconditioned resid norm 8.308555284580e-02 true resid norm 8.308555284580e-02 ||r(i)||/||b|| 3.693139776569e-02
13 KSP Residual norm 5.472865592585e-02
13 KSP unpreconditioned resid norm 5.472865592585e-02 true resid norm 5.472865592585e-02 ||r(i)||/||b|| 2.432680161532e-02
14 KSP Residual norm 4.357870564398e-02
14 KSP unpreconditioned resid norm 4.357870564398e-02 true resid norm 4.357870564398e-02 ||r(i)||/||b|| 1.937066622447e-02
15 KSP Residual norm 5.079681292439e-02
15 KSP unpreconditioned resid norm 5.079681292439e-02 true resid norm 5.079681292439e-02 ||r(i)||/||b|| 2.257910357558e-02
Linear mg_levels_1_ solve converged due to CONVERGED_ITS iterations 15
Residual norms for mg_levels_1_ solve.
0 KSP Residual norm 5.079681292439e-02
Residual norms for mg_levels_1_ solve.
0 KSP unpreconditioned resid norm 5.079681292439e-02 true resid norm 5.079681292439e-02 ||r(i)||/||b|| 2.257910357559e-02
1 KSP Residual norm 2.934938644003e-02
1 KSP unpreconditioned resid norm 2.934938644003e-02 true resid norm 2.934938644003e-02 ||r(i)||/||b|| 1.304575618348e-02
2 KSP Residual norm 3.257065831294e-02
2 KSP unpreconditioned resid norm 3.257065831294e-02 true resid norm 3.257065831294e-02 ||r(i)||/||b|| 1.447760647243e-02
3 KSP Residual norm 4.143063876867e-02
3 KSP unpreconditioned resid norm 4.143063876867e-02 true resid norm 4.143063876867e-02 ||r(i)||/||b|| 1.841585387164e-02
4 KSP Residual norm 4.822471409489e-02
4 KSP unpreconditioned resid norm 4.822471409489e-02 true resid norm 4.822471409489e-02 ||r(i)||/||b|| 2.143580968499e-02
5 KSP Residual norm 3.197538246153e-02
5 KSP unpreconditioned resid norm 3.197538246153e-02 true resid norm 3.197538246153e-02 ||r(i)||/||b|| 1.421300729127e-02
6 KSP Residual norm 3.461217019835e-02
6 KSP unpreconditioned resid norm 3.461217019835e-02 true resid norm 3.461217019835e-02 ||r(i)||/||b|| 1.538505529958e-02
7 KSP Residual norm 3.410193775327e-02
7 KSP unpreconditioned resid norm 3.410193775327e-02 true resid norm 3.410193775327e-02 ||r(i)||/||b|| 1.515825777899e-02
8 KSP Residual norm 4.690424294464e-02
8 KSP unpreconditioned resid norm 4.690424294464e-02 true resid norm 4.690424294464e-02 ||r(i)||/||b|| 2.084886233233e-02
9 KSP Residual norm 3.366148892800e-02
9 KSP unpreconditioned resid norm 3.366148892800e-02 true resid norm 3.366148892800e-02 ||r(i)||/||b|| 1.496247896783e-02
10 KSP Residual norm 4.068015727689e-02
10 KSP unpreconditioned resid norm 4.068015727689e-02 true resid norm 4.068015727689e-02 ||r(i)||/||b|| 1.808226602707e-02
11 KSP Residual norm 2.658836123104e-02
11 KSP unpreconditioned resid norm 2.658836123104e-02 true resid norm 2.658836123104e-02 ||r(i)||/||b|| 1.181848481389e-02
12 KSP Residual norm 2.826244186003e-02
12 KSP unpreconditioned resid norm 2.826244186003e-02 true resid norm 2.826244186003e-02 ||r(i)||/||b|| 1.256261102456e-02
13 KSP Residual norm 2.981793619508e-02
13 KSP unpreconditioned resid norm 2.981793619508e-02 true resid norm 2.981793619508e-02 ||r(i)||/||b|| 1.325402581380e-02
14 KSP Residual norm 3.525455091450e-02
14 KSP unpreconditioned resid norm 3.525455091450e-02 true resid norm 3.525455091450e-02 ||r(i)||/||b|| 1.567059251914e-02
15 KSP Residual norm 2.331539121838e-02
15 KSP unpreconditioned resid norm 2.331539121838e-02 true resid norm 2.331539121838e-02 ||r(i)||/||b|| 1.036365478300e-02
Linear mg_levels_1_ solve converged due to CONVERGED_ITS iterations 15
Residual norms for mg_levels_1_ solve.
0 KSP Residual norm 2.421498365806e-02
Residual norms for mg_levels_1_ solve.
0 KSP unpreconditioned resid norm 2.421498365806e-02 true resid norm 2.421498365806e-02 ||r(i)||/||b|| 1.000000000000e+00
1 KSP Residual norm 1.761072112362e-02
1 KSP unpreconditioned resid norm 1.761072112362e-02 true resid norm 1.761072112362e-02 ||r(i)||/||b|| 7.272654556492e-01
2 KSP Residual norm 1.400842489042e-02
2 KSP unpreconditioned resid norm 1.400842489042e-02 true resid norm 1.400842489042e-02 ||r(i)||/||b|| 5.785023474818e-01
3 KSP Residual norm 1.419665483348e-02
3 KSP unpreconditioned resid norm 1.419665483348e-02 true resid norm 1.419665483348e-02 ||r(i)||/||b|| 5.862756314004e-01
4 KSP Residual norm 1.617590701667e-02
4 KSP unpreconditioned resid norm 1.617590701667e-02 true resid norm 1.617590701667e-02 ||r(i)||/||b|| 6.680123036665e-01
5 KSP Residual norm 1.354824081005e-02
5 KSP unpreconditioned resid norm 1.354824081005e-02 true resid norm 1.354824081005e-02 ||r(i)||/||b|| 5.594982429624e-01
6 KSP Residual norm 1.387252917475e-02
6 KSP unpreconditioned resid norm 1.387252917475e-02 true resid norm 1.387252917475e-02 ||r(i)||/||b|| 5.728902967950e-01
7 KSP Residual norm 1.514043102087e-02
7 KSP unpreconditioned resid norm 1.514043102087e-02 true resid norm 1.514043102087e-02 ||r(i)||/||b|| 6.252505157414e-01
8 KSP Residual norm 1.275811124745e-02
8 KSP unpreconditioned resid norm 1.275811124745e-02 true resid norm 1.275811124745e-02 ||r(i)||/||b|| 5.268684640721e-01
9 KSP Residual norm 1.241039155981e-02
9 KSP unpreconditioned resid norm 1.241039155981e-02 true resid norm 1.241039155981e-02 ||r(i)||/||b|| 5.125087728764e-01
10 KSP Residual norm 9.585207801652e-03
10 KSP unpreconditioned resid norm 9.585207801652e-03 true resid norm 9.585207801652e-03 ||r(i)||/||b|| 3.958378802565e-01
11 KSP Residual norm 9.022641230732e-03
11 KSP unpreconditioned resid norm 9.022641230732e-03 true resid norm 9.022641230732e-03 ||r(i)||/||b|| 3.726057121550e-01
12 KSP Residual norm 1.187709152046e-02
12 KSP unpreconditioned resid norm 1.187709152046e-02 true resid norm 1.187709152046e-02 ||r(i)||/||b|| 4.904852172597e-01
13 KSP Residual norm 1.084880112494e-02
13 KSP unpreconditioned resid norm 1.084880112494e-02 true resid norm 1.084880112494e-02 ||r(i)||/||b|| 4.480201712351e-01
14 KSP Residual norm 8.194750346781e-03
14 KSP unpreconditioned resid norm 8.194750346781e-03 true resid norm 8.194750346781e-03 ||r(i)||/||b|| 3.384165136140e-01
15 KSP Residual norm 7.614246199165e-03
15 KSP unpreconditioned resid norm 7.614246199165e-03 true resid norm 7.614246199165e-03 ||r(i)||/||b|| 3.144435819857e-01
Linear mg_levels_1_ solve converged due to CONVERGED_ITS iterations 15
Residual norms for mg_levels_1_ solve.
0 KSP Residual norm 7.614246199165e-03
Residual norms for mg_levels_1_ solve.
0 KSP unpreconditioned resid norm 7.614246199165e-03 true resid norm 7.614246199165e-03 ||r(i)||/||b|| 3.144435819857e-01 1 KSP Residual norm 5.620014684145e-03 1 KSP unpreconditioned resid norm 5.620014684145e-03 true resid norm 5.620014684145e-03 ||r(i)||/||b|| 2.320883120759e-01 2 KSP Residual norm 6.643368363907e-03 2 KSP unpreconditioned resid norm 6.643368363907e-03 true resid norm 6.643368363907e-03 ||r(i)||/||b|| 2.743494878096e-01 3 KSP Residual norm 8.708642393659e-03 3 KSP unpreconditioned resid norm 8.708642393659e-03 true resid norm 8.708642393659e-03 ||r(i)||/||b|| 3.596385823189e-01 4 KSP Residual norm 6.401852907459e-03 4 KSP unpreconditioned resid norm 6.401852907459e-03 true resid norm 6.401852907459e-03 ||r(i)||/||b|| 2.643756856440e-01 5 KSP Residual norm 7.230576215262e-03 5 KSP unpreconditioned resid norm 7.230576215262e-03 true resid norm 7.230576215262e-03 ||r(i)||/||b|| 2.985992605803e-01 6 KSP Residual norm 6.204081601285e-03 6 KSP unpreconditioned resid norm 6.204081601285e-03 true resid norm 6.204081601285e-03 ||r(i)||/||b|| 2.562083744880e-01 7 KSP Residual norm 7.038656665944e-03 7 KSP unpreconditioned resid norm 7.038656665944e-03 true resid norm 7.038656665944e-03 ||r(i)||/||b|| 2.906736079337e-01 8 KSP Residual norm 7.194079694050e-03 8 KSP unpreconditioned resid norm 7.194079694050e-03 true resid norm 7.194079694050e-03 ||r(i)||/||b|| 2.970920730585e-01 9 KSP Residual norm 6.353576889135e-03 9 KSP unpreconditioned resid norm 6.353576889135e-03 true resid norm 6.353576889135e-03 ||r(i)||/||b|| 2.623820432363e-01 10 KSP Residual norm 7.313589502731e-03 10 KSP unpreconditioned resid norm 7.313589502731e-03 true resid norm 7.313589502731e-03 ||r(i)||/||b|| 3.020274391264e-01 11 KSP Residual norm 6.643320423193e-03 11 KSP unpreconditioned resid norm 6.643320423193e-03 true resid norm 6.643320423193e-03 ||r(i)||/||b|| 2.743475080142e-01 12 KSP Residual norm 7.235443182108e-03 12 KSP unpreconditioned resid norm 7.235443182108e-03 true resid norm 7.235443182108e-03 ||r(i)||/||b|| 2.988002504681e-01 13 KSP Residual norm 4.971292307201e-03 13 KSP unpreconditioned resid norm 4.971292307201e-03 true resid norm 4.971292307201e-03 ||r(i)||/||b|| 2.052981896416e-01 14 KSP Residual norm 5.357933842147e-03 14 KSP unpreconditioned resid norm 5.357933842147e-03 true resid norm 5.357933842147e-03 ||r(i)||/||b|| 2.212652264320e-01 15 KSP Residual norm 5.841682994497e-03 15 KSP unpreconditioned resid norm 5.841682994497e-03 true resid norm 5.841682994497e-03 ||r(i)||/||b|| 2.412424917146e-01 Linear mg_levels_1_ solve converged due to CONVERGED_ITS iterations 15 Cheers, Elena ________________________________ From: Barry Smith Sent: 29 September 2025 20:31:26 To: Moral Sanchez, Elena Cc: Mark Adams; petsc-users Subject: Re: [petsc-users] setting correct tolerances for MG smoother CG at the finest level Thanks. I missed something earlier in the KSPView using UNPRECONDITIONED norm type for convergence test Please add the options -ksp_monitor_true_residual -mg_levels_ksp_monitor_true_residual It is using the unpreconditioned residual norms for convergence testing but we are printing the preconditioned norms. Barry On Sep 29, 2025, at 11:12?AM, Moral Sanchez, Elena wrote: This is the output: Residual norms for mg_levels_1_ solve. 
0 KSP Residual norm 2.249726733143e+00 1 KSP Residual norm 1.433120400946e+00 2 KSP Residual norm 1.169262560123e+00 3 KSP Residual norm 1.323528716607e+00 4 KSP Residual norm 5.006323254234e-01 5 KSP Residual norm 3.569836784785e-01 6 KSP Residual norm 2.493182937513e-01 7 KSP Residual norm 3.038202502298e-01 8 KSP Residual norm 2.780214194402e-01 9 KSP Residual norm 1.676826341491e-01 10 KSP Residual norm 1.209985378713e-01 11 KSP Residual norm 9.445076689969e-02 12 KSP Residual norm 8.308555284580e-02 13 KSP Residual norm 5.472865592585e-02 14 KSP Residual norm 4.357870564398e-02 15 KSP Residual norm 5.079681292439e-02 Linear mg_levels_1_ solve converged due to CONVERGED_ITS iterations 15 Residual norms for mg_levels_1_ solve. 0 KSP Residual norm 5.079681292439e-02 1 KSP Residual norm 2.934938644003e-02 2 KSP Residual norm 3.257065831294e-02 3 KSP Residual norm 4.143063876867e-02 4 KSP Residual norm 4.822471409489e-02 5 KSP Residual norm 3.197538246153e-02 6 KSP Residual norm 3.461217019835e-02 7 KSP Residual norm 3.410193775327e-02 8 KSP Residual norm 4.690424294464e-02 9 KSP Residual norm 3.366148892800e-02 10 KSP Residual norm 4.068015727689e-02 11 KSP Residual norm 2.658836123104e-02 12 KSP Residual norm 2.826244186003e-02 13 KSP Residual norm 2.981793619508e-02 14 KSP Residual norm 3.525455091450e-02 15 KSP Residual norm 2.331539121838e-02 Linear mg_levels_1_ solve converged due to CONVERGED_ITS iterations 15 Residual norms for mg_levels_1_ solve. 0 KSP Residual norm 2.421498365806e-02 1 KSP Residual norm 1.761072112362e-02 2 KSP Residual norm 1.400842489042e-02 3 KSP Residual norm 1.419665483348e-02 4 KSP Residual norm 1.617590701667e-02 5 KSP Residual norm 1.354824081005e-02 6 KSP Residual norm 1.387252917475e-02 7 KSP Residual norm 1.514043102087e-02 8 KSP Residual norm 1.275811124745e-02 9 KSP Residual norm 1.241039155981e-02 10 KSP Residual norm 9.585207801652e-03 11 KSP Residual norm 9.022641230732e-03 12 KSP Residual norm 1.187709152046e-02 13 KSP Residual norm 1.084880112494e-02 14 KSP Residual norm 8.194750346781e-03 15 KSP Residual norm 7.614246199165e-03 Linear mg_levels_1_ solve converged due to CONVERGED_ITS iterations 15 Residual norms for mg_levels_1_ solve. 0 KSP Residual norm 7.614246199165e-03 1 KSP Residual norm 5.620014684145e-03 2 KSP Residual norm 6.643368363907e-03 3 KSP Residual norm 8.708642393659e-03 4 KSP Residual norm 6.401852907459e-03 5 KSP Residual norm 7.230576215262e-03 6 KSP Residual norm 6.204081601285e-03 7 KSP Residual norm 7.038656665944e-03 8 KSP Residual norm 7.194079694050e-03 9 KSP Residual norm 6.353576889135e-03 10 KSP Residual norm 7.313589502731e-03 11 KSP Residual norm 6.643320423193e-03 12 KSP Residual norm 7.235443182108e-03 13 KSP Residual norm 4.971292307201e-03 14 KSP Residual norm 5.357933842147e-03 15 KSP Residual norm 5.841682994497e-03 Linear mg_levels_1_ solve converged due to CONVERGED_ITS iterations 15 ________________________________ From: Barry Smith > Sent: 29 September 2025 15:56:33 To: Moral Sanchez, Elena Cc: Mark Adams; petsc-users Subject: Re: [petsc-users] setting correct tolerances for MG smoother CG at the finest level I asked you to run with -ksp_monitor -mg_levels_ksp_monitor -ksp_converged_reason -mg_levels_ksp_converged_reason you chose not to, delaying the process of understanding what is happening. Please run with those options and send the output. 
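As a reference point: a petsc4py monitor callback is handed the residual norm that PETSc itself uses in the convergence test, so printing from one is a reliable cross-check against hand-computed norms. A minimal sketch (the pc object holding the PCMG preconditioner is assumed to exist already):

from petsc4py import PETSc

def monitor(ksp, its, rnorm):
    # rnorm is the norm the convergence test sees (unpreconditioned here,
    # per the KSPView output), not a norm recomputed by hand
    PETSc.Sys.Print(f"  iter {its:3d}  rnorm {rnorm:.12e}")

pc.getMGSmoother(1).setMonitor(monitor)   # attach to the (mg_levels_1_) smoother

Comparing that printout against a hand-computed norm usually shows quickly where the two diverge.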
My guess is that you are computing the "residual norms" in your own monitor code, and it is doing so differently than what PETSc does, thus resulting in the appearance of a sufficiently small residual norm, whereas PETSc may not have calculated something that small. Barry On Sep 29, 2025, at 8:39 AM, Moral Sanchez, Elena wrote: Thanks for the hint. I agree that the coarse solve should be much more "accurate". However, for the moment I am just trying to understand what the MG is doing exactly. I am puzzled to see that the fine grid smoother ("lvl 0") does not stop when the residual becomes less than 1e-1. It should converge due to the atol. ________________________________ From: Mark Adams Sent: 29 September 2025 14:20:56 To: Moral Sanchez, Elena Cc: Barry Smith; petsc-users Subject: Re: [petsc-users] setting correct tolerances for MG smoother CG at the finest level Oh I see the coarse grid solver in your full solver output now. You still want an accurate coarse grid solve. Usually (the default in GAMG) you use a direct solver on one process, and coarsen until the coarse grid is small enough to make that cheap. On Mon, Sep 29, 2025 at 8:07 AM Moral Sanchez, Elena wrote: Hi, I doubled the system size and changed the tolerances just to show a better example of the problem. This is the output of the callbacks in the first iteration: CG Iter 0/1 | res = 2.25e+00/1.00e-09 | 0.1 s MG lvl 0 (s=884): CG Iter 0/15 | res = 2.25e+00/1.00e-01 | 0.3 s MG lvl 0 (s=884): CG Iter 1/15 | res = 1.43e+00/1.00e-01 | 0.2 s MG lvl 0 (s=884): CG Iter 2/15 | res = 1.17e+00/1.00e-01 | 0.2 s MG lvl 0 (s=884): CG Iter 3/15 | res = 1.32e+00/1.00e-01 | 0.1 s MG lvl 0 (s=884): CG Iter 4/15 | res = 5.01e-01/1.00e-01 | 0.2 s MG lvl 0 (s=884): CG Iter 5/15 | res = 3.57e-01/1.00e-01 | 0.2 s MG lvl 0 (s=884): CG Iter 6/15 | res = 2.49e-01/1.00e-01 | 0.2 s MG lvl 0 (s=884): CG Iter 7/15 | res = 3.04e-01/1.00e-01 | 0.2 s MG lvl 0 (s=884): CG Iter 8/15 | res = 2.78e-01/1.00e-01 | 0.2 s MG lvl 0 (s=884): CG Iter 9/15 | res = 1.68e-01/1.00e-01 | 0.2 s MG lvl 0 (s=884): CG Iter 10/15 | res = 1.21e-01/1.00e-01 | 0.1 s MG lvl 0 (s=884): CG Iter 11/15 | res = 9.45e-02/1.00e-01 | 0.2 s MG lvl 0 (s=884): CG Iter 12/15 | res = 8.31e-02/1.00e-01 | 0.2 s MG lvl 0 (s=884): CG Iter 13/15 | res = 5.47e-02/1.00e-01 | 0.2 s MG lvl 0 (s=884): CG Iter 14/15 | res = 4.36e-02/1.00e-01 | 0.2 s MG lvl 0 (s=884): CG Iter 15/15 | res = 5.08e-02/1.00e-01 | 0.1 s ConvergedReason MG lvl 0: 4 MG lvl -1 (s=524): CG Iter 0/15 | res = 8.15e-02/1.00e-01 | 3.0 s ConvergedReason MG lvl -1: 3 MG lvl 0 (s=884): CG Iter 0/15 | res = 5.08e-02/1.00e-01 | 0.3 s MG lvl 0 (s=884): CG Iter 1/15 | res = 2.93e-02/1.00e-01 | 0.2 s MG lvl 0 (s=884): CG Iter 2/15 | res = 3.26e-02/1.00e-01 | 0.2 s MG lvl 0 (s=884): CG Iter 3/15 | res = 4.14e-02/1.00e-01 | 0.2 s MG lvl 0 (s=884): CG Iter 4/15 | res = 4.82e-02/1.00e-01 | 0.2 s MG lvl 0 (s=884): CG Iter 5/15 | res = 3.20e-02/1.00e-01 | 0.2 s MG lvl 0 (s=884): CG Iter 6/15 | res = 3.46e-02/1.00e-01 | 0.3 s MG lvl 0 (s=884): CG Iter 7/15 | res = 3.41e-02/1.00e-01 | 0.2 s MG lvl 0 (s=884): CG Iter 8/15 | res = 4.69e-02/1.00e-01 | 0.2 s MG lvl 0 (s=884): CG Iter 9/15 | res = 3.37e-02/1.00e-01 | 0.2 s MG lvl 0 (s=884): CG Iter 10/15 | res = 4.07e-02/1.00e-01 | 0.2 s MG lvl 0 (s=884): CG Iter 11/15 | res = 2.66e-02/1.00e-01 | 0.1 s MG lvl 0 (s=884): CG Iter 12/15 | res = 2.83e-02/1.00e-01 | 0.1 s MG lvl 0 (s=884): CG Iter 13/15 | res = 2.98e-02/1.00e-01 | 0.2 s MG lvl 0 (s=884): CG Iter 14/15 | res = 3.53e-02/1.00e-01 | 0.1 s MG
lvl 0 (s=884): CG Iter 15/15 | res = 2.33e-02/1.00e-01 | 0.2 s ConvergedReason MG lvl 0: 4 CG Iter 1/1 | res = 2.42e-02/1.00e-09 | 5.6 s MG lvl 0 (s=884): CG Iter 0/15 | res = 2.42e-02/1.00e-01 | 0.2 s MG lvl 0 (s=884): CG Iter 1/15 | res = 1.76e-02/1.00e-01 | 0.2 s MG lvl 0 (s=884): CG Iter 2/15 | res = 1.40e-02/1.00e-01 | 0.2 s MG lvl 0 (s=884): CG Iter 3/15 | res = 1.42e-02/1.00e-01 | 0.1 s MG lvl 0 (s=884): CG Iter 4/15 | res = 1.62e-02/1.00e-01 | 0.2 s MG lvl 0 (s=884): CG Iter 5/15 | res = 1.35e-02/1.00e-01 | 0.1 s MG lvl 0 (s=884): CG Iter 6/15 | res = 1.39e-02/1.00e-01 | 0.1 s MG lvl 0 (s=884): CG Iter 7/15 | res = 1.51e-02/1.00e-01 | 0.2 s MG lvl 0 (s=884): CG Iter 8/15 | res = 1.28e-02/1.00e-01 | 0.2 s MG lvl 0 (s=884): CG Iter 9/15 | res = 1.24e-02/1.00e-01 | 0.2 s MG lvl 0 (s=884): CG Iter 10/15 | res = 9.59e-03/1.00e-01 | 0.1 s MG lvl 0 (s=884): CG Iter 11/15 | res = 9.02e-03/1.00e-01 | 0.1 s MG lvl 0 (s=884): CG Iter 12/15 | res = 1.19e-02/1.00e-01 | 0.1 s MG lvl 0 (s=884): CG Iter 13/15 | res = 1.08e-02/1.00e-01 | 0.1 s MG lvl 0 (s=884): CG Iter 14/15 | res = 8.19e-03/1.00e-01 | 0.1 s MG lvl 0 (s=884): CG Iter 15/15 | res = 7.61e-03/1.00e-01 | 0.1 s ConvergedReason MG lvl 0: 4 MG lvl -1 (s=524): CG Iter 0/15 | res = 1.38e-02/1.00e-01 | 5.2 s ConvergedReason MG lvl -1: 3 MG lvl 0 (s=884): CG Iter 0/15 | res = 7.61e-03/1.00e-01 | 0.2 s MG lvl 0 (s=884): CG Iter 1/15 | res = 5.62e-03/1.00e-01 | 0.1 s MG lvl 0 (s=884): CG Iter 2/15 | res = 6.64e-03/1.00e-01 | 0.1 s MG lvl 0 (s=884): CG Iter 3/15 | res = 8.71e-03/1.00e-01 | 0.1 s MG lvl 0 (s=884): CG Iter 4/15 | res = 6.40e-03/1.00e-01 | 0.1 s MG lvl 0 (s=884): CG Iter 5/15 | res = 7.23e-03/1.00e-01 | 0.1 s MG lvl 0 (s=884): CG Iter 6/15 | res = 6.20e-03/1.00e-01 | 0.1 s MG lvl 0 (s=884): CG Iter 7/15 | res = 7.04e-03/1.00e-01 | 0.1 s MG lvl 0 (s=884): CG Iter 8/15 | res = 7.19e-03/1.00e-01 | 0.1 s MG lvl 0 (s=884): CG Iter 9/15 | res = 6.35e-03/1.00e-01 | 0.1 s MG lvl 0 (s=884): CG Iter 10/15 | res = 7.31e-03/1.00e-01 | 0.1 s MG lvl 0 (s=884): CG Iter 11/15 | res = 6.64e-03/1.00e-01 | 0.1 s MG lvl 0 (s=884): CG Iter 12/15 | res = 7.24e-03/1.00e-01 | 0.1 s MG lvl 0 (s=884): CG Iter 13/15 | res = 4.97e-03/1.00e-01 | 0.1 s MG lvl 0 (s=884): CG Iter 14/15 | res = 5.36e-03/1.00e-01 | 0.1 s MG lvl 0 (s=884): CG Iter 15/15 | res = 5.84e-03/1.00e-01 | 0.1 s ConvergedReason MG lvl 0: 4 CG ConvergedReason: -3 For completeness, I add here the -ksp_view of the whole solver: KSP Object: 1 MPI process type: cg variant HERMITIAN maximum iterations=1, nonzero initial guess tolerances: relative=1e-08, absolute=1e-09, divergence=10000. 
left preconditioning using UNPRECONDITIONED norm type for convergence test PC Object: 1 MPI process type: mg type is MULTIPLICATIVE, levels=2 cycles=v Cycles per PCApply=1 Not using Galerkin computed coarse grid matrices Coarse grid solver -- level 0 ------------------------------- KSP Object: (mg_coarse_) 1 MPI process type: cg variant HERMITIAN maximum iterations=15, nonzero initial guess tolerances: relative=0.1, absolute=0.1, divergence=1e+30 left preconditioning using UNPRECONDITIONED norm type for convergence test PC Object: (mg_coarse_) 1 MPI process type: none linear system matrix = precond matrix: Mat Object: 1 MPI process type: python rows=524, cols=524 Python: Solver_petsc.LeastSquaresOperator Down solver (pre-smoother) on level 1 ------------------------------- KSP Object: (mg_levels_1_) 1 MPI process type: cg variant HERMITIAN maximum iterations=15, nonzero initial guess tolerances: relative=0.1, absolute=0.1, divergence=1e+30 left preconditioning using UNPRECONDITIONED norm type for convergence test PC Object: (mg_levels_1_) 1 MPI process type: none linear system matrix = precond matrix: Mat Object: 1 MPI process type: python rows=884, cols=884 Python: Solver_petsc.LeastSquaresOperator Up solver (post-smoother) same as down solver (pre-smoother) linear system matrix = precond matrix: Mat Object: 1 MPI process type: python rows=884, cols=884 Python: Solver_petsc.LeastSquaresOperator Regarding Mark's Email: What do you mean by "the whole solver doesn't have a coarse grid"? I am using my own Restriction and Interpolation operators. Thanks for the help, Elena ________________________________ From: Mark Adams Sent: 28 September 2025 20:13:54 To: Barry Smith Cc: Moral Sanchez, Elena; petsc-users Subject: Re: [petsc-users] setting correct tolerances for MG smoother CG at the finest level Not sure why your "whole" solver does not have a coarse grid but this is wrong: KSP Object: (mg_coarse_) 1 MPI process type: cg variant HERMITIAN maximum iterations=100, initial guess is zero tolerances: relative=0.1, absolute=0.1, divergence=1e+30 The coarse grid has to be accurate. The defaults are a good place to start: max_it=10000, rtol=1e-5, atol=1e-30 (ish)
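In petsc4py terms, that advice about the coarse solver could look roughly like the sketch below. Here ksp is assumed to be the outer KSP that already carries the PCMG; note that a direct solver such as LU only applies once the coarse matrix is assembled, which a matrix-free python operator like Solver_petsc.LeastSquaresOperator is not, so the matrix-free fallback is simply a much tighter Krylov tolerance:

from petsc4py import PETSc

pc = ksp.getPC()
coarse = pc.getMGCoarseSolve()              # the (mg_coarse_) KSP from -ksp_view
# With an assembled coarse matrix, make the coarse solve exact:
# coarse.setType(PETSc.KSP.Type.PREONLY)
# coarse.getPC().setType(PETSc.PC.Type.LU)
# With a matrix-free coarse operator, tighten the iterative solve instead:
coarse.setType(PETSc.KSP.Type.CG)
coarse.setTolerances(rtol=1e-5, atol=1e-30, max_it=10000)   # roughly the defaults quoted above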
On Fri, Sep 26, 2025 at 3:21 PM Barry Smith wrote: Looks reasonable. Send the output running with -ksp_monitor -mg_levels_ksp_monitor -ksp_converged_reason -mg_levels_ksp_converged_reason On Sep 26, 2025, at 1:19 PM, Moral Sanchez, Elena wrote: Dear Barry, This is -ksp_view for the smoother at the finest level: KSP Object: (mg_levels_1_) 1 MPI process type: cg variant HERMITIAN maximum iterations=10, nonzero initial guess tolerances: relative=0.1, absolute=0.1, divergence=1e+30 left preconditioning using UNPRECONDITIONED norm type for convergence test PC Object: (mg_levels_1_) 1 MPI process type: none linear system matrix = precond matrix: Mat Object: 1 MPI process type: python rows=524, cols=524 Python: Solver_petsc.LeastSquaresOperator And at the coarsest level: KSP Object: (mg_coarse_) 1 MPI process type: cg variant HERMITIAN maximum iterations=100, initial guess is zero tolerances: relative=0.1, absolute=0.1, divergence=1e+30 left preconditioning using UNPRECONDITIONED norm type for convergence test PC Object: (mg_coarse_) 1 MPI process type: none linear system matrix = precond matrix: Mat Object: 1 MPI process type: python rows=344, cols=344 Python: Solver_petsc.LeastSquaresOperator And for the whole solver: KSP Object: 1 MPI process type: cg variant HERMITIAN maximum iterations=100, nonzero initial guess tolerances: relative=1e-08, absolute=1e-09, divergence=10000. left preconditioning using UNPRECONDITIONED norm type for convergence test PC Object: 1 MPI process type: mg type is MULTIPLICATIVE, levels=2 cycles=v Cycles per PCApply=1 Not using Galerkin computed coarse grid matrices Coarse grid solver -- level 0 ------------------------------- KSP Object: (mg_coarse_) 1 MPI process type: cg variant HERMITIAN maximum iterations=100, initial guess is zero tolerances: relative=0.1, absolute=0.1, divergence=1e+30 left preconditioning using UNPRECONDITIONED norm type for convergence test PC Object: (mg_coarse_) 1 MPI process type: none linear system matrix = precond matrix: Mat Object: 1 MPI process type: python rows=344, cols=344 Python: Solver_petsc.LeastSquaresOperator Down solver (pre-smoother) on level 1 ------------------------------- KSP Object: (mg_levels_1_) 1 MPI process type: cg variant HERMITIAN maximum iterations=10, nonzero initial guess tolerances: relative=0.1, absolute=0.1, divergence=1e+30 left preconditioning using UNPRECONDITIONED norm type for convergence test PC Object: (mg_levels_1_) 1 MPI process type: none linear system matrix = precond matrix: Mat Object: 1 MPI process type: python rows=524, cols=524 Python: Solver_petsc.LeastSquaresOperator Up solver (post-smoother) same as down solver (pre-smoother) linear system matrix = precond matrix: Mat Object: 1 MPI process type: python rows=524, cols=524 Python: Solver_petsc.LeastSquaresOperator Best, Elena ________________________________ From: Barry Smith Sent: 26 September 2025 19:05:02 To: Moral Sanchez, Elena Cc: petsc-users at mcs.anl.gov Subject: Re: [petsc-users] setting correct tolerances for MG smoother CG at the finest level Send the output using -ksp_view Normally one uses a fixed number of iterations of smoothing on each level with multigrid rather than a tolerance, but yes PETSc should respect such a tolerance. Barry
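The fixed-iteration-count smoothing described above translates to petsc4py roughly as follows (a minimal sketch; ksp is assumed to be the outer KSP already configured with the two-level PCMG, and 3 iterations is just a common starting point, not a value from this thread):

from petsc4py import PETSc

smoother = ksp.getPC().getMGSmoother(1)          # the (mg_levels_1_) KSP
smoother.setType(PETSc.KSP.Type.CG)
smoother.setTolerances(max_it=3)                 # a fixed number of smoothing steps
smoother.setNormType(PETSc.KSP.NormType.NONE)    # skip the norm and the convergence test

The command-line equivalent is -mg_levels_ksp_type cg -mg_levels_ksp_max_it 3 -mg_levels_ksp_norm_type none; with the norm type set to none the smoother always reports CONVERGED_ITS after exactly max_it iterations.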
On Sep 26, 2025, at 12:49 PM, Moral Sanchez, Elena wrote: Hi, I am using multigrid (multiplicative) as a preconditioner with a V-cycle of two levels. At each level, I am setting CG as the smoother with a certain tolerance. What I observe is that at the finest level the CG continues iterating after the residual norm reaches the tolerance (atol) and it only stops when reaching the maximum number of iterations at that level. At the coarsest level this does not occur and the CG stops when the tolerance is reached. I double-checked that the smoother at the finest level has the right tolerance. And I am using a Monitor function to track the residual. Do you know how to make the smoother at the finest level stop when reaching the tolerance? Cheers, Elena. -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at petsc.dev Tue Sep 30 08:27:04 2025 From: bsmith at petsc.dev (Barry Smith) Date: Tue, 30 Sep 2025 09:27:04 -0400 Subject: [petsc-users] setting correct tolerances for MG smoother CG at the finest level In-Reply-To: References: <421fd9ac0ed0437f88e921d063a6f45f@ipp.mpg.de> <2622f5910bef400f983345df49977fa8@ipp.mpg.de> <67889c32cacf4cf3ac7e7b643297863b@ipp.mpg.de> Message-ID: <608352C7-1016-4E35-A099-33D81BC24739@petsc.dev> Would you be able to share your code? I'm at a loss as to why we are seeing this behavior and can much more quickly figure it out by running the code in a debugger. Barry You can send the code to petsc-maint at mcs.anl.gov if you don't want to share it with everyone.
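For anyone who wants to poke at the same configuration without the original code, a stripped-down petsc4py reproduction of the solver shown in the -ksp_view output is sketched below. Everything problem-specific is a stand-in: a 1D Laplacian replaces the python-type Solver_petsc.LeastSquaresOperator, and the interpolation is a hypothetical injection operator, since neither is public:

from petsc4py import PETSc

n_fine, n_coarse = 884, 524

def laplace(n):
    # simple assembled SPD stand-in for the matrix-free operators in the thread
    A = PETSc.Mat().createAIJ([n, n], nnz=3)
    for i in range(n):
        A[i, i] = 2.0
        if i > 0:
            A[i, i - 1] = -1.0
        if i < n - 1:
            A[i, i + 1] = -1.0
    A.assemble()
    return A

A_fine, A_coarse = laplace(n_fine), laplace(n_coarse)

P = PETSc.Mat().createAIJ([n_fine, n_coarse], nnz=1)   # hypothetical injection interpolation
for j in range(n_coarse):
    P[j, j] = 1.0
P.assemble()

ksp = PETSc.KSP().create()
ksp.setOperators(A_fine)
ksp.setType(PETSc.KSP.Type.CG)
ksp.setNormType(PETSc.KSP.NormType.UNPRECONDITIONED)
ksp.setTolerances(rtol=1e-8, atol=1e-9, max_it=1)
ksp.setInitialGuessNonzero(True)

pc = ksp.getPC()
pc.setType(PETSc.PC.Type.MG)
pc.setMGLevels(2)                                      # level 0 = coarse, level 1 = fine
pc.setMGType(PETSc.PC.MGType.MULTIPLICATIVE)
pc.setMGInterpolation(1, P)                            # restriction defaults to P^T

for level, A in ((0, A_coarse), (1, A_fine)):          # not using Galerkin coarse matrices
    s = pc.getMGSmoother(level)
    s.setOperators(A)
    s.setType(PETSc.KSP.Type.CG)
    s.setNormType(PETSc.KSP.NormType.UNPRECONDITIONED)
    s.setTolerances(rtol=0.1, atol=0.1, max_it=15)
    s.getPC().setType(PETSc.PC.Type.NONE)

ksp.setFromOptions()                                   # picks up -mg_levels_ksp_monitor etc.
b = A_fine.createVecLeft(); b.set(1.0)
x = A_fine.createVecRight(); x.set(0.0)
ksp.solve(b, x)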
> On Sep 30, 2025, at 5:05 AM, Moral Sanchez, Elena wrote:
>
> [quoted text trimmed: it repeats Elena's Sep 30 message and the earlier exchange, verbatim, as shown in full above]
-------------- next part -------------- An HTML attachment was scrubbed... 
URL: From bsmith at petsc.dev Tue Sep 30 18:22:02 2025 From: bsmith at petsc.dev (Barry Smith) Date: Tue, 30 Sep 2025 19:22:02 -0400 Subject: [petsc-users] [GPU] Jacobi preconditioner In-Reply-To: References: <386853b1efae4269919b977b88c7e679@cea.fr> <49396000-D752-4C95-AF1B-524EC68BC5BC@petsc.dev> Message-ID: <99869713-EEB9-48FE-9FA1-F3927FB2488D@petsc.dev> Being AMD is far more embarrassing, given that they copied CUDA and the CUDA libraries but chickened out, making small changes that make writing CUDA/HIP portable code very difficult. > On Jul 31, 2025, at 11:05 AM, Junchao Zhang wrote: > > What would embarrass me more is to copy the same code to MatGetDiagonal_SeqAIJHIPSPARSE. > > --Junchao Zhang > > On Wed, Jul 30, 2025 at 1:34 PM Barry Smith wrote: >> >> We absolutely should have a MatGetDiagonal_SeqAIJCUSPARSE(). It's somewhat embarrassing that we don't provide this. >> >> I have found some potential code at https://urldefense.us/v3/__https://stackoverflow.com/questions/60311408/how-to-get-the-diagonal-of-a-sparse-matrix-in-cusparse__;!!G_uCfscf7eWS!Zeos8WEMBA9SXqNMlaLtoQlkk9ZioafXWfp5BrfIjZfJlBHviZSTlVxLMBM6aShBRzjv-lsatFXllFMQQIhsXvQ$ >> >> Barry >> >> >> >> >>> On Jul 28, 2025, at 11:43 AM, Junchao Zhang wrote: >>> >>> Yes, MatGetDiagonal_SeqAIJCUSPARSE hasn't been implemented. The petsc/cuda and petsc/kokkos backends are separate code. >>> If petsc/kokkos meets your needs, then just use it. For PETSc users, we hope it will just be a difference of an extra --download-kokkos --download-kokkos-kernels in the configuration. >>> >>> --Junchao Zhang >>> >>> >>> On Mon, Jul 28, 2025 at 2:51 AM LEDAC Pierre wrote: >>>> Hello all, >>>> >>>> >>>> >>>> We are solving with PETSc a linear system that is updated every time step (constant stencil but changing coefficients). >>>> >>>> >>>> >>>> The matrix is preallocated once with MatSetPreallocationCOO() and then filled each time step with MatSetValuesCOO(), and we use device pointers for coo_i, coo_j, and the coefficient values. >>>> >>>> >>>> >>>> It is working fine with a GMRES KSP solver and PC Jacobi, but we are surprised to see that every time step, during PCSetUp, MatGetDiagonal_SeqAIJ is called even though the matrix is on the device. Looking at the API, it seems there is no MatGetDiagonal_SeqAIJCUSPARSE() but there is a MatGetDiagonal_SeqAIJKOKKOS(). >>>> >>>> >>>> >>>> Does it mean we should use the Kokkos backend in PETSc to have the Jacobi preconditioner built directly on the device? Or am I doing something wrong? >>>> >>>> NB: GMRES is running well on the device. >>>> >>>> >>>> >>>> I could use -ksp_reuse_preconditioner to avoid the Jacobi preconditioner being rebuilt on the host at each solve, but it significantly increases the number of iterations. >>>> >>>> >>>> >>>> Thanks, >>>> >>>> >>>> >>>> >>>> >>>> >>>> >>>> >>>> >>>> Pierre LEDAC >>>> Commissariat à l'énergie atomique et aux énergies alternatives >>>> Centre de SACLAY >>>> DES/ISAS/DM2S/SGLS/LCAN >>>> Bâtiment 451 - point courrier n°41 >>>> F-91191 Gif-sur-Yvette >>>> +33 1 69 08 04 03 >>>> +33 6 83 42 05 79 >> -------------- next part -------------- An HTML attachment was scrubbed... URL:
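The COO assembly pattern Pierre describes maps to petsc4py roughly as in the sketch below. This is a host-side toy (NumPy arrays and a plain aij matrix); the real use case passes device pointers with an aijcusparse or aijkokkos matrix type, which this minimal example does not exercise:

import numpy as np
from petsc4py import PETSc

n = 4
# COO layout of a 1D Laplacian: the (coo_i, coo_j) pattern is fixed once ...
coo_i = np.concatenate([np.arange(n), np.arange(1, n), np.arange(n - 1)]).astype(PETSc.IntType)
coo_j = np.concatenate([np.arange(n), np.arange(n - 1), np.arange(1, n)]).astype(PETSc.IntType)

A = PETSc.Mat().createAIJ([n, n])
A.setPreallocationCOO(coo_i, coo_j)           # preallocate once

ksp = PETSc.KSP().create()
ksp.setType(PETSc.KSP.Type.GMRES)
ksp.getPC().setType(PETSc.PC.Type.JACOBI)     # the diagonal is fetched during PCSetUp
ksp.setOperators(A)

b = A.createVecLeft()
x = A.createVecRight()

for step in range(3):
    # ... and only the values change each time step (diagonal, sub-, superdiagonal)
    vals = np.concatenate([np.full(n, 2.0 + step), np.full(n - 1, -1.0), np.full(n - 1, -1.0)])
    A.setValuesCOO(vals)                      # no separate assemble needed
    b.set(1.0)
    ksp.solve(b, x)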