From dargaville.steven at gmail.com Wed Apr 2 19:11:06 2025
From: dargaville.steven at gmail.com (Steven Dargaville)
Date: Thu, 3 Apr 2025 00:11:06 +0000
Subject: [petsc-users] kokkos gpu/cpu copy
Message-ID:

Hi

I have some code that does a solve with a PCMAT preconditioner. The mat
used is a shell and inside the shell MatMult it calls VecPointwiseDivide
with a vector "diag" that is the diagonal of a matrix assigned outside the
shell.

If I use mat/vec type of cuda, this occurs without any gpu/cpu copies as I
would expect. If however I use mat/vec type kokkos, at every iteration of
the solve there is a gpu/cpu copy that occurs. It seems this is triggered
by the offloadmask in the vector "diag", as it stays as 1 and hence a copy
occurs in VecPointwiseDivide.

I would have expected the offload mask to be 256 (kokkos) after the first
iteration, as the offload mask of "diag" changes to 3 when using cuda after
the first iteration.

Is this the expected behaviour with Kokkos, or is there something I need
to do to trigger that "diag" has its values on the gpu to prevent copies? I
have example c++ code that demonstrates this below. You can see the
difference when run with petsc 3.23.0 and either "-log_view -mat_type
aijcusparse -vec_type cuda" or "-log_view -mat_type aijkokkos -vec_type
kokkos".

Thanks for your help
Steven

Example c++ code:

// ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

static char help[] = "Tests Kokkos for SHELL matrices\n\n";

// the original header names were lost in the archive's HTML-to-text
// conversion; the example needs at least these
#include <petscksp.h>
#include <petscmat.h>
#include <iostream>

typedef struct _n_User *User;
struct _n_User {
  Mat A;
  Vec diag;
};

static PetscErrorCode MatMult_User(Mat A, Vec X, Vec Y)
{
  User user;

  PetscFunctionBegin;
  PetscCall(MatShellGetContext(A, &user));

  // Print the offload mask inside the matmult
  PetscOffloadMask offloadmask;
  PetscCall(VecGetOffloadMask(X, &offloadmask));
  std::cout << "offload inside X " << offloadmask << std::endl;
  PetscCall(VecGetOffloadMask(Y, &offloadmask));
  std::cout << "offload inside Y " << offloadmask << std::endl;
  PetscCall(VecGetOffloadMask(user->diag, &offloadmask));
  std::cout << "offload inside diag " << offloadmask << std::endl;

  PetscCall(VecPointwiseDivide(Y, X, user->diag));
  PetscFunctionReturn(PETSC_SUCCESS);
}

int main(int argc, char **args)
{
  const PetscScalar xvals[] = {11, 13}, yvals[] = {17, 19};
  const PetscInt    inds[]  = {0, 1};
  PetscScalar       avals[] = {2, 3, 5, 7};
  Mat               S1, A;
  Vec               X, Y, diag;
  KSP               ksp;
  PC                pc;
  User              user;
  PetscLogStage     stage1, gpu_copy;

  PetscFunctionBeginUser;
  PetscCall(PetscInitialize(&argc, &args, NULL, help));

  // Build a matrix and vectors
  PetscCall(MatCreateFromOptions(PETSC_COMM_WORLD, NULL, 1, 2, 2, 2, 2, &A));
  PetscCall(MatSetUp(A));
  PetscCall(MatSetValues(A, 2, inds, 2, inds, avals, INSERT_VALUES));
  PetscCall(MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY));
  PetscCall(MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY));
  PetscCall(MatCreateVecs(A, NULL, &X));
  PetscCall(VecCreateFromOptions(PETSC_COMM_WORLD, NULL, 1, 2, 2, &X));
  PetscCall(VecSetValues(X, 2, inds, xvals, INSERT_VALUES));
  PetscCall(VecDuplicate(X, &Y));
  PetscCall(VecDuplicate(X, &diag));
  PetscCall(VecSetValues(Y, 2, inds, yvals, INSERT_VALUES));
  PetscCall(VecAssemblyBegin(Y));
  PetscCall(VecAssemblyEnd(Y));

  // Create a shell matrix
  PetscCall(MatGetDiagonal(A, diag));
  PetscCall(PetscNew(&user));
  user->A    = A;
  user->diag = diag;
  PetscCall(MatCreateShell(PETSC_COMM_WORLD, 2, 2, 2, 2, user, &S1));
  PetscCall(MatSetUp(S1));
  PetscCall(MatShellSetOperation(S1, MATOP_MULT, (void (*)(void))MatMult_User));
  PetscCall(MatAssemblyBegin(S1, MAT_FINAL_ASSEMBLY));
  PetscCall(MatAssemblyEnd(S1, MAT_FINAL_ASSEMBLY));

  // Do a solve
  PetscCall(KSPCreate(PETSC_COMM_WORLD, &ksp));
  // Give the ksp a pcmat as the preconditioner and the mat is the shell
  PetscCall(KSPSetOperators(ksp, A, S1));
  PetscCall(KSPSetType(ksp, KSPRICHARDSON));
  PetscCall(KSPSetFromOptions(ksp));
  PetscCall(KSPGetPC(ksp, &pc));
  PetscCall(PCSetType(pc, PCMAT));
  PetscCall(KSPSetUp(ksp));

  // Print the offload mask before our solve
  PetscOffloadMask offloadmask;
  PetscCall(VecGetOffloadMask(X, &offloadmask));
  std::cout << "offload X " << offloadmask << std::endl;
  PetscCall(VecGetOffloadMask(Y, &offloadmask));
  std::cout << "offload Y " << offloadmask << std::endl;
  PetscCall(VecGetOffloadMask(user->diag, &offloadmask));
  std::cout << "offload diag " << offloadmask << std::endl;

  // Trigger any gpu copies in the first solve
  PetscCall(PetscLogStageRegister("gpu_copy", &gpu_copy));
  PetscCall(PetscLogStagePush(gpu_copy));
  PetscCall(KSPSolve(ksp, X, Y));
  PetscCall(PetscLogStagePop());

  // There should be no copies in this solve
  PetscCall(PetscLogStageRegister("no copy", &stage1));
  PetscCall(PetscLogStagePush(stage1));
  PetscCall(KSPSolve(ksp, X, Y));
  PetscCall(PetscLogStagePop());

  PetscCall(MatDestroy(&S1));
  PetscCall(VecDestroy(&X));
  PetscCall(VecDestroy(&Y));
  PetscCall(PetscFinalize());
  return 0;
}
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From junchao.zhang at gmail.com Wed Apr 2 22:51:52 2025
From: junchao.zhang at gmail.com (Junchao Zhang)
Date: Wed, 2 Apr 2025 22:51:52 -0500
Subject: [petsc-users] kokkos gpu/cpu copy
In-Reply-To:
References:
Message-ID:

Hi, Steven,
  Thanks for the test, which helped me easily find the petsc bug. I have
a fix at
https://urldefense.us/v3/__https://gitlab.com/petsc/petsc/-/merge_requests/8272__;!!G_uCfscf7eWS!dEECoiAgY3gt4OtiEoIEoGuacFHbL1whaXexphGauRP-CI_sNm-iHlr3Aqh1kQRUkVGWcgNgtRJsaiMGB0Mt_63EjYXr$ .
VecKokkos does not use offloadmask for its gpu/cpu sync state, while
VecCUDA/VecHIP do. My expectation is users should not use
VecGetOffloadMask(), because it is too low level. We have bad API design
here.

Thank you!
--Junchao Zhang


On Wed, Apr 2, 2025 at 7:11 PM Steven Dargaville <
dargaville.steven at gmail.com> wrote:

> Hi
>
> I have some code that does a solve with a PCMAT preconditioner. The mat
> used is a shell and inside the shell MatMult it calls VecPointwiseDivide
> with a vector "diag" that is the diagonal of a matrix assigned outside the
> shell.
>
> If I use mat/vec type of cuda, this occurs without any gpu/cpu copies as I
> would expect. If however I use mat/vec type kokkos, at every iteration of
> the solve there is a gpu/cpu copy that occurs. It seems this is triggered
> by the offloadmask in the vector "diag", as it stays as 1 and hence a copy
> occurs in VecPointwiseDivide.
>
> I would have expected the offload mask to be 256 (kokkos) after the first
> iteration, as the offload mask of "diag" changes to 3 when using cuda after
> the first iteration.
>
> Is this the expected behaviour with Kokkos, or is there something I need
> to do to trigger that "diag" has its values on the gpu to prevent copies? I
> have example c++ code that demonstrates this below. You can see the
> difference when run with petsc 3.23.0 and either "-log_view -mat_type
> aijcusparse -vec_type cuda" or "-log_view -mat_type aijkokkos -vec_type
> kokkos".
> > Thanks for your help > Steven > > Example c++ code: > > // ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ > > static char help[] = "Tests Kokkos for SHELL matrices\n\n"; > > #include > #include > #include > > typedef struct _n_User *User; > struct _n_User { > Mat A; > Vec diag; > }; > > static PetscErrorCode MatMult_User(Mat A, Vec X, Vec Y) > { > User user; > > PetscFunctionBegin; > PetscCall(MatShellGetContext(A, &user)); > > // Print the offload mask inside the matmult > PetscOffloadMask offloadmask; > PetscCall(VecGetOffloadMask(X, &offloadmask)); > std::cout << "offload inside X " << offloadmask << std::endl; > PetscCall(VecGetOffloadMask(Y, &offloadmask)); > std::cout << "offload inside Y " << offloadmask << std::endl; > PetscCall(VecGetOffloadMask(user->diag, &offloadmask)); > std::cout << "offload inside diag " << offloadmask << std::endl; > > PetscCall(VecPointwiseDivide(Y, X, user->diag)); > PetscFunctionReturn(PETSC_SUCCESS); > } > > int main(int argc, char **args) > { > const PetscScalar xvals[] = {11, 13}, yvals[] = {17, 19}; > const PetscInt inds[] = {0, 1}; > PetscScalar avals[] = {2, 3, 5, 7}; > Mat S1, A; > Vec X, Y, diag; > KSP ksp; > PC pc; > User user; > PetscLogStage stage1, gpu_copy; > > PetscFunctionBeginUser; > PetscCall(PetscInitialize(&argc, &args, NULL, help)); > > // Build a matrix and vectors > PetscCall(MatCreateFromOptions(PETSC_COMM_WORLD, NULL, 1, 2, 2, 2, 2, > &A)); > PetscCall(MatSetUp(A)); > PetscCall(MatSetValues(A, 2, inds, 2, inds, avals, INSERT_VALUES)); > PetscCall(MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY)); > PetscCall(MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY)); > PetscCall(MatCreateVecs(A, NULL, &X)); > PetscCall(VecCreateFromOptions(PETSC_COMM_WORLD, NULL, 1, 2, 2, &X)); > PetscCall(VecSetValues(X, 2, inds, xvals, INSERT_VALUES)); > PetscCall(VecDuplicate(X, &Y)); > PetscCall(VecDuplicate(X, &diag)); > PetscCall(VecSetValues(Y, 2, inds, yvals, INSERT_VALUES)); > PetscCall(VecAssemblyBegin(Y)); > PetscCall(VecAssemblyEnd(Y)); > > // Create a shell matrix > PetscCall(MatGetDiagonal(A, diag)); > PetscCall(PetscNew(&user)); > user->A = A; > user->diag = diag; > PetscCall(MatCreateShell(PETSC_COMM_WORLD, 2, 2, 2, 2, user, &S1)); > PetscCall(MatSetUp(S1)); > PetscCall(MatShellSetOperation(S1, MATOP_MULT, (void > (*)(void))MatMult_User)); > PetscCall(MatAssemblyBegin(S1, MAT_FINAL_ASSEMBLY)); > PetscCall(MatAssemblyEnd(S1, MAT_FINAL_ASSEMBLY)); > > // Do a solve > PetscCall(KSPCreate(PETSC_COMM_WORLD,&ksp)); > // Give the ksp a pcmat as the preconditioner and the mat is the shell > PetscCall(KSPSetOperators(ksp,A, S1)); > PetscCall(KSPSetType(ksp, KSPRICHARDSON)); > PetscCall(KSPSetFromOptions(ksp)); > PetscCall(KSPGetPC(ksp, &pc)); > PetscCall(PCSetType(pc, PCMAT)); > PetscCall(KSPSetUp(ksp)); > > // Print the offload mask before our solve > PetscOffloadMask offloadmask; > PetscCall(VecGetOffloadMask(X, &offloadmask)); > std::cout << "offload X " << offloadmask << std::endl; > PetscCall(VecGetOffloadMask(Y, &offloadmask)); > std::cout << "offload Y " << offloadmask << std::endl; > PetscCall(VecGetOffloadMask(user->diag, &offloadmask)); > std::cout << "offload diag " << offloadmask << std::endl; > > // Trigger any gpu copies in the first solve > PetscCall(PetscLogStageRegister("gpu_copy",&gpu_copy)); > PetscCall(PetscLogStagePush(gpu_copy)); > PetscCall(KSPSolve(ksp, X, Y)); > PetscCall(PetscLogStagePop()); > > // There should be no copies in this solve > PetscCall(PetscLogStageRegister("no copy",&stage1)); > PetscCall(PetscLogStagePush(stage1)); > PetscCall(KSPSolve(ksp, 
X, Y)); > PetscCall(PetscLogStagePop()); > > PetscCall(MatDestroy(&S1)); > PetscCall(VecDestroy(&X)); > PetscCall(VecDestroy(&Y)); > PetscCall(PetscFinalize()); > return 0; > } > -------------- next part -------------- An HTML attachment was scrubbed... URL: From s_g at berkeley.edu Wed Apr 2 23:10:44 2025 From: s_g at berkeley.edu (Sanjay Govindjee) Date: Wed, 2 Apr 2025 21:10:44 -0700 Subject: [petsc-users] fieldsplit question Message-ID: We would like to solve an FEA problem (unstructured grid) where the nodes on the elements have different dofs. For example the corner nodes have only dof 0 and then mid-side nodes have dofs 0,1,2 (think 8 node serendipity element). This is a multi-physics problem so we are looking to use the fieldsplit features to pre-condition and solve. Is there a simple example of this type of usage in the src that we can try to mimic? I presume this will take programming as opposed to just setting command line options. -sanjay -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Thu Apr 3 07:56:07 2025 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 3 Apr 2025 08:56:07 -0400 Subject: [petsc-users] fieldsplit question In-Reply-To: References: Message-ID: On Thu, Apr 3, 2025 at 12:11?AM Sanjay Govindjee via petsc-users < petsc-users at mcs.anl.gov> wrote: > We would like to solve an FEA problem (unstructured grid) where the nodes > on the elements have different dofs. For example the corner nodes have > only dof 0 and then mid-side nodes have dofs 0,1,2 (think 8 node > serendipity element). This is a multi-physics problem so we are looking to > use the fieldsplit features to pre-condition and solve. Is there a simple > example of this type of usage in the src that we can try to mimic? > > I presume this will take programming as opposed to just setting command > line options. > It will take a little programming, but not much. Here is the idea. FieldSplit needs to know what dofs belong to what field. There are a couple of ways to do this, at different levels of abstraction. 1. Low level You can explicitly makes lists of the dofs in each field, as an IS, and call https://urldefense.us/v3/__https://petsc.org/main/manualpages/PC/PCFieldSplitSetIS/__;!!G_uCfscf7eWS!aMROjbrPD3RYXMpO8mIii7q8eXZX1uN-6F6-g_jcNLLXGCgYPt2JEDkIIQCHs_vNBhhxsiwQJaz57ydxKyWe$ for each field. This is not very flexible, but the easiest to understand. 2. Intermediate level You can make a DMShell, and then make a PetscSection, that gives the number of dofs on each vertex and edge. Then call KSPSetDM() or SNESSetDM(), and you can do nested fieldsplits from the command line. This also retains a connection between the topology and the data layout, but you have to deal with that pesky DM object. 3. High level You can use a DMPlex to represent your grid and a PetscFE to represent the discretization, and then layout is done automatically, and nested fieldsplits can be done from the command line. I am not 100% sure PetscFE can represent what you want, but you can always call DMPlexCreateSection() by hand to make the PetscSection. Thanks, Matt > -sanjay > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. 
-- Norbert Wiener https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!aMROjbrPD3RYXMpO8mIii7q8eXZX1uN-6F6-g_jcNLLXGCgYPt2JEDkIIQCHs_vNBhhxsiwQJaz579za_RM4$ -------------- next part -------------- An HTML attachment was scrubbed... URL: From dargaville.steven at gmail.com Thu Apr 3 08:30:29 2025 From: dargaville.steven at gmail.com (Steven Dargaville) Date: Thu, 3 Apr 2025 13:30:29 +0000 Subject: [petsc-users] kokkos gpu/cpu copy In-Reply-To: References: Message-ID: Perfect, that seems to have fixed the issue. Thanks for your help! Steven On Thu, 3 Apr 2025 at 04:52, Junchao Zhang wrote: > Hi, Steven, > Thanks for the test, which helped me easily find the petsc bug. I have > a fix at https://urldefense.us/v3/__https://gitlab.com/petsc/petsc/-/merge_requests/8272__;!!G_uCfscf7eWS!ZNkvUNeiuBo6kRAR5k_yCtp_mOCmBVppGB_yvYsMwSiohil2UIhWXsli1sEeYZvqM0PkQe6HjVYqR8YxD-S1fT_OHcxzNRK2$ . > VecKokkos does not use offloadmask for its gpu/cpu sync state, while > VecCUDA/VecHIP do. My expectation is users should not use > VecGetOffloadMask(), because it is too low level. We have bad API design > here. > > Thank you! > --Junchao Zhang > > > On Wed, Apr 2, 2025 at 7:11?PM Steven Dargaville < > dargaville.steven at gmail.com> wrote: > >> Hi >> >> I have some code that does a solve with a PCMAT preconditioner. The mat >> used is a shell and inside the shell MatMult it calls VecPointwiseDivide >> with a vector "diag" that is the diagonal of a matrix assigned outside the >> shell. >> >> If I use mat/vec type of cuda, this occurs without any gpu/cpu copies as >> I would expect. If however I use mat/vec type kokkos, at every iteration of >> the solve there is a gpu/cpu copy that occurs. It seems this is triggered >> by the offloadmask in the vector "diag", as it stays as 1 and hence a copy >> occurs in VecPointwiseDivide. >> >> I would have expected the offload mask to be 256 (kokkos) after the first >> iteration, as the offload mask of "diag" changes to 3 when using cuda after >> the first iteration. >> >> Is this the expected behaviour with Kokkos, or is there something I need >> to do to trigger that "diag" has its values on the gpu to prevent copies? I >> have example c++ code that demonstrates this below. You can see the >> difference when run with petsc 3.23.0 and either "-log_view -mat_type >> aijcusparse -vec_type cuda" or "-log_view -mat_type aijkokkos -vec_type >> kokkos". 
>> >> Thanks for your help >> Steven >> >> Example c++ code: >> >> // ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ >> >> static char help[] = "Tests Kokkos for SHELL matrices\n\n"; >> >> #include >> #include >> #include >> >> typedef struct _n_User *User; >> struct _n_User { >> Mat A; >> Vec diag; >> }; >> >> static PetscErrorCode MatMult_User(Mat A, Vec X, Vec Y) >> { >> User user; >> >> PetscFunctionBegin; >> PetscCall(MatShellGetContext(A, &user)); >> >> // Print the offload mask inside the matmult >> PetscOffloadMask offloadmask; >> PetscCall(VecGetOffloadMask(X, &offloadmask)); >> std::cout << "offload inside X " << offloadmask << std::endl; >> PetscCall(VecGetOffloadMask(Y, &offloadmask)); >> std::cout << "offload inside Y " << offloadmask << std::endl; >> PetscCall(VecGetOffloadMask(user->diag, &offloadmask)); >> std::cout << "offload inside diag " << offloadmask << std::endl; >> >> PetscCall(VecPointwiseDivide(Y, X, user->diag)); >> PetscFunctionReturn(PETSC_SUCCESS); >> } >> >> int main(int argc, char **args) >> { >> const PetscScalar xvals[] = {11, 13}, yvals[] = {17, 19}; >> const PetscInt inds[] = {0, 1}; >> PetscScalar avals[] = {2, 3, 5, 7}; >> Mat S1, A; >> Vec X, Y, diag; >> KSP ksp; >> PC pc; >> User user; >> PetscLogStage stage1, gpu_copy; >> >> PetscFunctionBeginUser; >> PetscCall(PetscInitialize(&argc, &args, NULL, help)); >> >> // Build a matrix and vectors >> PetscCall(MatCreateFromOptions(PETSC_COMM_WORLD, NULL, 1, 2, 2, 2, 2, >> &A)); >> PetscCall(MatSetUp(A)); >> PetscCall(MatSetValues(A, 2, inds, 2, inds, avals, INSERT_VALUES)); >> PetscCall(MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY)); >> PetscCall(MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY)); >> PetscCall(MatCreateVecs(A, NULL, &X)); >> PetscCall(VecCreateFromOptions(PETSC_COMM_WORLD, NULL, 1, 2, 2, &X)); >> PetscCall(VecSetValues(X, 2, inds, xvals, INSERT_VALUES)); >> PetscCall(VecDuplicate(X, &Y)); >> PetscCall(VecDuplicate(X, &diag)); >> PetscCall(VecSetValues(Y, 2, inds, yvals, INSERT_VALUES)); >> PetscCall(VecAssemblyBegin(Y)); >> PetscCall(VecAssemblyEnd(Y)); >> >> // Create a shell matrix >> PetscCall(MatGetDiagonal(A, diag)); >> PetscCall(PetscNew(&user)); >> user->A = A; >> user->diag = diag; >> PetscCall(MatCreateShell(PETSC_COMM_WORLD, 2, 2, 2, 2, user, &S1)); >> PetscCall(MatSetUp(S1)); >> PetscCall(MatShellSetOperation(S1, MATOP_MULT, (void >> (*)(void))MatMult_User)); >> PetscCall(MatAssemblyBegin(S1, MAT_FINAL_ASSEMBLY)); >> PetscCall(MatAssemblyEnd(S1, MAT_FINAL_ASSEMBLY)); >> >> // Do a solve >> PetscCall(KSPCreate(PETSC_COMM_WORLD,&ksp)); >> // Give the ksp a pcmat as the preconditioner and the mat is the shell >> PetscCall(KSPSetOperators(ksp,A, S1)); >> PetscCall(KSPSetType(ksp, KSPRICHARDSON)); >> PetscCall(KSPSetFromOptions(ksp)); >> PetscCall(KSPGetPC(ksp, &pc)); >> PetscCall(PCSetType(pc, PCMAT)); >> PetscCall(KSPSetUp(ksp)); >> >> // Print the offload mask before our solve >> PetscOffloadMask offloadmask; >> PetscCall(VecGetOffloadMask(X, &offloadmask)); >> std::cout << "offload X " << offloadmask << std::endl; >> PetscCall(VecGetOffloadMask(Y, &offloadmask)); >> std::cout << "offload Y " << offloadmask << std::endl; >> PetscCall(VecGetOffloadMask(user->diag, &offloadmask)); >> std::cout << "offload diag " << offloadmask << std::endl; >> >> // Trigger any gpu copies in the first solve >> PetscCall(PetscLogStageRegister("gpu_copy",&gpu_copy)); >> PetscCall(PetscLogStagePush(gpu_copy)); >> PetscCall(KSPSolve(ksp, X, Y)); >> PetscCall(PetscLogStagePop()); >> >> // There should be no copies in this solve >> 
PetscCall(PetscLogStageRegister("no copy",&stage1)); >> PetscCall(PetscLogStagePush(stage1)); >> PetscCall(KSPSolve(ksp, X, Y)); >> PetscCall(PetscLogStagePop()); >> >> PetscCall(MatDestroy(&S1)); >> PetscCall(VecDestroy(&X)); >> PetscCall(VecDestroy(&Y)); >> PetscCall(PetscFinalize()); >> return 0; >> } >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mfadams at lbl.gov Fri Apr 4 04:17:04 2025 From: mfadams at lbl.gov (Mark Adams) Date: Fri, 4 Apr 2025 05:17:04 -0400 Subject: [petsc-users] fieldsplit question In-Reply-To: References: Message-ID: Matt, Matt, Matt, Sanjay, (1) is the only sane option. Here is an example taken from a code that I work with. This simply has 3 fields with 12 ("stride", not a great name, should be "bs") dofs on each vertex in each field (2D Bell + 1D cubic Hermite). This uses ISCreateBlock, which you might want to use for dof (1,2) in your case. It lets, with stride = 2, input IS = [1,3] create an IS with [2,3,6,7] semantically. Thanks, Mark int matrix_solve:: setFieldSplitType() { // the global parameters PetscInt ierr, dofPerEnt,stride,k; int startDof, endDofPlusOne; int num_own_ent=m3dc1_mesh::instance()->num_own_ent[0], num_own_dof; m3dc1_field_getnumowndof(&fieldOrdering, &num_own_dof); if (num_own_ent) dofPerEnt = num_own_dof/num_own_ent; stride=dofPerEnt/3; //U 0->11, Omega 12->23, Chi 24->35 m3dc1_field_getowndofid (&fieldOrdering, &startDof, &endDofPlusOne); startDof=startDof/stride; // the 3 fields for PCFIELDSPLIT IS field0, field1, field2; PetscInt *idx0, *idx1, *idx2; ierr=PetscMalloc1(num_own_ent, &idx0); ierr=PetscMalloc1(num_own_ent, &idx1); ierr=PetscMalloc1(num_own_ent, &idx2); for (k=0; k(ISDestroy (&field0));PetscCall (ISDestroy (&field1));PetscCall (ISDestroy (&field2)); ierr=PetscFree(idx1); ierr=PetscFree(idx2); fsSet=1; return 0; } On Thu, Apr 3, 2025 at 8:57?AM Matthew Knepley wrote: > On Thu, Apr 3, 2025 at 12:11?AM Sanjay Govindjee via petsc-users < > petsc-users at mcs.anl.gov> wrote: > >> We would like to solve an FEA problem (unstructured grid) where the nodes >> on the elements have different dofs. For example the corner nodes have >> only dof 0 and then mid-side nodes have dofs 0,1,2 (think 8 node >> serendipity element). This is a multi-physics problem so we are looking to >> use the fieldsplit features to pre-condition and solve. Is there a simple >> example of this type of usage in the src that we can try to mimic? >> >> I presume this will take programming as opposed to just setting command >> line options. >> > > It will take a little programming, but not much. Here is the idea. > FieldSplit needs to know what dofs belong to what field. There are a couple > of ways to do this, at different levels of abstraction. > > 1. Low level > > You can explicitly makes lists of the dofs in each field, as an IS, and > call https://urldefense.us/v3/__https://petsc.org/main/manualpages/PC/PCFieldSplitSetIS/__;!!G_uCfscf7eWS!c_nKrPbKpTddk11qYmEtY65UqhTbG4LXm7rfRLNAprFTA0x4y_NfBtBIEhOMant3sxPS91A_TgyDkqZ9-jY5Sfg$ > > for each field. This is not very flexible, but the easiest to understand. > > 2. Intermediate level > > You can make a DMShell, and then make a PetscSection, that gives the > number of dofs on each vertex and edge. Then call KSPSetDM() or > SNESSetDM(), and you can do nested fieldsplits from the command line. This > also retains a connection between the topology and the data layout, but you > have to deal with that pesky DM object. > > 3. 
High level > > You can use a DMPlex to represent your grid and a PetscFE to represent the > discretization, and then layout is done automatically, and nested > fieldsplits can be done from the command line. I am not 100% sure PetscFE > can represent what you want, but you can always call DMPlexCreateSection() > by hand to make the PetscSection. > > Thanks, > > Matt > > >> -sanjay >> >> > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!c_nKrPbKpTddk11qYmEtY65UqhTbG4LXm7rfRLNAprFTA0x4y_NfBtBIEhOMant3sxPS91A_TgyDkqZ9zTSheMM$ > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From s_g at berkeley.edu Fri Apr 4 14:12:25 2025 From: s_g at berkeley.edu (Sanjay Govindjee) Date: Fri, 4 Apr 2025 12:12:25 -0700 Subject: [petsc-users] fieldsplit question In-Reply-To: References: Message-ID: <2b4aa73b-f1c1-41ad-84f6-ba4969e2abb0@berkeley.edu> Thanks Mark.? I think I know what to do now.? Time to start coding. -sanjay On 4/4/25 2:17 AM, Mark Adams wrote: > Matt, Matt, Matt, > > Sanjay, (1) is the only sane option. Here is an example taken from a > code that I work with. > > This simply has 3 fields with 12 ("stride", not a great name, should > be "bs") dofs on each vertex in each field (2D Bell + 1D cubic Hermite). > This uses ISCreateBlock, which you might want to use for dof (1,2) in > your case. It lets,?with stride = 2, input IS = [1,3] create an IS > with [2,3,6,7] semantically. > > Thanks, > Mark > > int?matrix_solve:: setFieldSplitType() > > { > > // the global parameters > > ??PetscInt ierr, dofPerEnt,stride,k; > > int?startDof, endDofPlusOne; > > int?num_own_ent=m3dc1_mesh::instance()->num_own_ent[0], num_own_dof; > > ??m3dc1_field_getnumowndof(&fieldOrdering, &num_own_dof); > > if?(num_own_ent) dofPerEnt = num_own_dof/num_own_ent; > > ??stride=dofPerEnt/3; //U 0->11, Omega 12->23, Chi 24->35 > > ??m3dc1_field_getowndofid (&fieldOrdering, &startDof, &endDofPlusOne); > > ??startDof=startDof/stride; > > > // the 3?fields for PCFIELDSPLIT > > IS field0, field1, field2; > > ??PetscInt *idx0, *idx1, *idx2; > > ??ierr=PetscMalloc1(num_own_ent, &idx0); > > ??ierr=PetscMalloc1(num_own_ent, &idx1); > > ??ierr=PetscMalloc1(num_own_ent, &idx2); > > > for?(k=0; k > ??ierr=ISCreateBlock(PETSC_COMM_WORLD, stride, num_own_ent, idx0, > PETSC_COPY_VALUES, &field0); > > > for?(k=0; k > ??ierr=ISCreateBlock(PETSC_COMM_WORLD, stride, num_own_ent, idx1, > PETSC_COPY_VALUES, &field1); > > > for?(k=0; k > ??ierr=ISCreateBlock(PETSC_COMM_WORLD, stride, num_own_ent, idx2, > PETSC_COPY_VALUES, &field2); > > > ??PC pc; > > ??ierr= KSPAppendOptionsPrefix(*ksp,"fs_"); // ksp is a global here > > ??ierr=KSPGetPC(*ksp, &pc); > > ??ierr=PCSetType(pc, PCFIELDSPLIT); > > ??ierr=PCFieldSplitSetIS(pc, NULL, field0); > > ??ierr=PCFieldSplitSetIS(pc, NULL, field1); > > ??ierr=PCFieldSplitSetIS(pc, NULL, field2); > > > ??ierr=PetscFree(idx0);PetscCall > (ISDestroy > (&field0));PetscCall > (ISDestroy > (&field1));PetscCall > (ISDestroy > (&field2)); > > ??ierr=PetscFree(idx1); > > ??ierr=PetscFree(idx2); > > ??fsSet=1; > > return0; > > } > > > > On Thu, Apr 3, 2025 at 8:57?AM Matthew Knepley wrote: > > On Thu, Apr 3, 2025 at 12:11?AM Sanjay Govindjee via petsc-users > wrote: > > We would like to solve an FEA problem (unstructured grid) > where the nodes on the elements have 
different dofs.? For > example the corner nodes have only dof 0 and then mid-side > nodes have dofs 0,1,2? (think 8 node serendipity element).? > This is a multi-physics problem so we are looking to use the > fieldsplit features to pre-condition and solve.? Is there a > simple example of this type of usage in the src that we can > try to mimic? > > I presume this will take programming as opposed to just > setting command line options. > > > It will take a little programming, but not much. Here is the idea. > FieldSplit needs to know what dofs belong to what field. There are > a couple of ways to do this, at different levels of abstraction. > > 1. Low level > > You can explicitly?makes lists of the dofs in each field, as an > IS, and call > https://urldefense.us/v3/__https://petsc.org/main/manualpages/PC/PCFieldSplitSetIS/__;!!G_uCfscf7eWS!YYxJCjLuXQe2lb7pLGBk_jFm3tcpXjp7fMV9Z2SjL4wXwdnCNa3qKfT2WvEkGU5XtL60cuWHGdn1_3C_POMvmw$ > > for each field. This is not very flexible, but the easiest to > understand. > > 2. Intermediate?level > > You can make a DMShell, and then make a PetscSection, that gives > the number of dofs on each vertex and edge. Then call KSPSetDM() > or SNESSetDM(), and you can do nested fieldsplits from the command > line. This also retains a connection between the topology and the > data layout, but you have to deal with that pesky DM object. > > 3. High level > > You can use a DMPlex to represent your grid and a PetscFE to > represent the discretization, and then layout is done > automatically, and nested fieldsplits can be done from the command > line. I am not 100% sure PetscFE can represent what you want, but > you can always call DMPlexCreateSection() by hand to make the > PetscSection. > > ? Thanks, > > ? ? ? Matt > > -sanjay > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to > which their experiments lead. > -- Norbert Wiener > > https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!YYxJCjLuXQe2lb7pLGBk_jFm3tcpXjp7fMV9Z2SjL4wXwdnCNa3qKfT2WvEkGU5XtL60cuWHGdn1_3ALPfLgEw$ > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From joal at sdu.dk Mon Apr 7 02:15:05 2025 From: joal at sdu.dk (Joe Alexandersen) Date: Mon, 7 Apr 2025 07:15:05 +0000 Subject: [petsc-users] Semi-coarsening for GMG using DMDA? In-Reply-To: References: <3376A9C0-8E4A-4DE0-A510-EF1645F22A26@petsc.dev> Message-ID: Dear all, Our initial testings indicates that indeed this works as it should using DMDAs, DMDASetRefinementFactor and DMDARefine. Thanks for your insights! 
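
For reference, a minimal sketch of the semi-refinement setup described above, assuming a 3D DMDA; the grid sizes and the per-direction refinement factors (2, 2, 1) are placeholder values, not ones taken from this thread:

// Sketch only: refine a DMDA by different factors in each direction and
// build the corresponding interpolation for use in a GMG hierarchy.
DM  da_coarse, da_fine;
Mat P;

// coarse DMDA; the sizes here are placeholders
PetscCall(DMDACreate3d(PETSC_COMM_WORLD, DM_BOUNDARY_NONE, DM_BOUNDARY_NONE, DM_BOUNDARY_NONE,
                       DMDA_STENCIL_STAR, 17, 17, 9, PETSC_DECIDE, PETSC_DECIDE, PETSC_DECIDE,
                       1, 1, NULL, NULL, NULL, &da_coarse));
PetscCall(DMSetUp(da_coarse));

// semi-refinement: refine by 2 in x and y, leave z unchanged
PetscCall(DMDASetRefinementFactor(da_coarse, 2, 2, 1));
PetscCall(DMRefine(da_coarse, PETSC_COMM_WORLD, &da_fine));

// interpolation between the two levels, usable with PCMGSetInterpolation
PetscCall(DMCreateInterpolation(da_coarse, da_fine, &P, NULL));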
Sincerely, Joe Alexandersen Associate Professor DFF Sapere Aude Research Leader The Faculty of Engineering Institute of Mechanical and Electrical Engineering SDU Mechanical Engineering T +45 65 50 74 65 M +45 93 50 72 44 joal at sdu.dk https://urldefense.us/v3/__http://www.sdu.dk/ansat/joal__;!!G_uCfscf7eWS!dCmFZnVLqfD57VRiFZfer9P6Zv6XgPOnTO9eUH3h9e6d2X1XtNmoK8SyFrMhoCM0rkiFDpCq8KLJ8XaqNQ$ University of Southern Denmark Campusvej 55 DK-5230 Odense M https://urldefense.us/v3/__http://www.sdu.dk__;!!G_uCfscf7eWS!dCmFZnVLqfD57VRiFZfer9P6Zv6XgPOnTO9eUH3h9e6d2X1XtNmoK8SyFrMhoCM0rkiFDpCq8KLZfAMdWA$ [https://urldefense.us/v3/__https://cdn.sdu.dk/img/sdulogos/SDU_BLACK_signatur.png__;!!G_uCfscf7eWS!dCmFZnVLqfD57VRiFZfer9P6Zv6XgPOnTO9eUH3h9e6d2X1XtNmoK8SyFrMhoCM0rkiFDpCq8KLP942N2Q$ ] From: Matthew Knepley Sent: 22 March 2025 15:31 To: Joe Alexandersen Cc: Mark Adams ; Barry Smith ; petsc-users at mcs.anl.gov Subject: Re: [petsc-users] Semi-coarsening for GMG using DMDA? You don't often get email from knepley at gmail.com. Learn why this is important On Thu, Mar 20, 2025 at 12:09?PM Joe Alexandersen via petsc-users > wrote: Great, thanks for the input so far. We will wait for Matt's response soonish. Looking at the code, as Barry says, it should work. Please let us know if it does not. You can also do this with Plex, as Mark says. The drawback here is that I do not have code to determine that this kind of refinement is nested. Therefore it will fall back to the slow code for constructing arbitrary interpolators. If you really wanted this, we could improve the interpolator code to be fast for this kind of nesting. Thanks, Matt Sincerely, Joe Alexandersen Associate Professor DFF Sapere Aude Research Leader The Faculty of Engineering Institute of Mechanical and Electrical Engineering SDU Mechanical Engineering T +45 65 50 74 65 M +45 93 50 72 44 joal at sdu.dk https://urldefense.us/v3/__http://www.sdu.dk/ansat/joal__;!!G_uCfscf7eWS!dCmFZnVLqfD57VRiFZfer9P6Zv6XgPOnTO9eUH3h9e6d2X1XtNmoK8SyFrMhoCM0rkiFDpCq8KLJ8XaqNQ$ University of Southern Denmark Campusvej 55 DK-5230 Odense M https://urldefense.us/v3/__http://www.sdu.dk__;!!G_uCfscf7eWS!dCmFZnVLqfD57VRiFZfer9P6Zv6XgPOnTO9eUH3h9e6d2X1XtNmoK8SyFrMhoCM0rkiFDpCq8KLZfAMdWA$ Sent from Outlook for Android ________________________________ From: Mark Adams > Sent: Thursday, March 20, 2025 5:00:31 pm To: Barry Smith > Cc: Joe Alexandersen >; petsc-users at mcs.anl.gov > Subject: Re: [petsc-users] Semi-coarsening for GMG using DMDA? You don't often get email from mfadams at lbl.gov. Learn why this is important We have worked on semi coarsening in DMPlex, but it is not finished and we are not working on it now. I'm not sure about how easy it would be in DMDA, but Barry is suggesting that it is doable. We need to wait for Matt and he is on travel so his response may be delayed. Mark On Thu, Mar 20, 2025 at 11:34?AM Barry Smith > wrote: In theory you can do as you propose. In the context below uniform refinement" only means that the coordinates of the DMDA are ignored so each refinement. The interpolation is fine woth different refinements in the different coordinate directions. Barry On Mar 20, 2025, at 5:56?AM, Joe Alexandersen via petsc-users > wrote: Dear PETSc developers, We are working with a code that uses regular meshes (DMDA) and geometric multigrid. We would like to go from uniform coarsening/refinement to semi-coarsening/refinement, due to anisotropy in our underlying equations. 
We have tried to figure out if we can do this using built-in functions of PETSc, but it is unclear to us whether we can get it done relatively easily. It seems that we can go from the coarsest grid and refine differently in each direction using DMDASetRefinementFactor and then use DMRefine to define the finer levels. However, from the doc page for DMCreateInterpolation, it states that it only works for "uniform refinement" which to me seems to indicate it will not work with different refinement in each direction. But on the other hand, it states that it should work if using DMRefine, which I assume used the information from DMDASetRefinementFactor upon creation? So our questions are: is there are feasible and relatively simple way to do semi-coarsening/refinement of DMDAs for geometric multigrid hierarchies? Would the above work? Thanks in advance! Sincerely, Joe Alexandersen University of Southern Denmark -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!dCmFZnVLqfD57VRiFZfer9P6Zv6XgPOnTO9eUH3h9e6d2X1XtNmoK8SyFrMhoCM0rkiFDpCq8KIFQyyAig$ -------------- next part -------------- An HTML attachment was scrubbed... URL: From dargaville.steven at gmail.com Mon Apr 7 16:57:03 2025 From: dargaville.steven at gmail.com (Steven Dargaville) Date: Mon, 7 Apr 2025 21:57:03 +0000 Subject: [petsc-users] kokkos, matcopy, matdiagonalscale on gpu Message-ID: Hi I have some code that computes a MatMatMult, then modifies one of the matrices and calls the MatMatMult again with MAT_REUSE_MATRIX. This gives the correct results with mat_types aij, aijkokkos and either run on a CPU or a GPU. In some cases one of the matrices is diagonal so I have been modifying my code to call MatDiagonalScale instead and have seen some peculiar behaviour. If the mat type is aij, everything works correctly on the CPU and GPU. If the mat type is aijkokkos, everything works correctly on the CPU but when run on a machine with a GPU the results differ. I have some example code below that shows this, it prints out four matrices; the first two should match and the last two should match. To see the failure run with "-mat_type aijkokkos" on a machine with a GPU (again this gives the correct results with aijkokkos run on a CPU). I get the output: Mat Object: 1 MPI process type: seqaijkokkos row 0: (0, 4.) (1, 6.) row 1: (0, 10.) (1, 14.) Mat Object: 1 MPI process type: seqaijkokkos row 0: (0, 4.) (1, 6.) row 1: (0, 10.) (1, 14.) Mat Object: 1 MPI process type: seqaijkokkos row 0: (0, 6.) (1, 9.) row 1: (0, 15.) (1, 21.) Mat Object: 1 MPI process type: seqaijkokkos row 0: (0, 12.) (1, 18.) row 1: (0, 30.) (1, 42.) I have narrowed down this failure to a MatCopy call, where the values of result_diag should be overwritten with A before calling MatDiagonalScale. The results with aijkokos on a GPU suggest the values of result_diag are not being changed. If instead of using the MatCopy I destroy result_diag and call MatDuplicate, the results are correct. You can trigger the correct behaviour with "-mat_type aijkokkos -correct". To me this looks like the values are out of sync between the device/host, but I would have thought a call to MatCopy would be safe. Any thoughts on what I might be doing wrong? Thanks for your help! 
Steven

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

static char help[] = "Test matmat products with matdiagonal on gpus \n\n";

// the original header names were lost in the archive's HTML-to-text
// conversion; the example needs at least these
#include <petscmat.h>
#include <petscvec.h>


int main(int argc, char **args)
{
  const PetscInt inds[]  = {0, 1};
  PetscScalar    avals[] = {2, 3, 5, 7};
  Mat            A, B_diag, B_aij_diag, result, result_diag;
  Vec            diag;

  PetscCall(PetscInitialize(&argc, &args, NULL, help));

  // Create matrix to start
  PetscCall(MatCreateFromOptions(PETSC_COMM_WORLD, NULL, 1, 2, 2, 2, 2, &A));
  PetscCall(MatSetUp(A));
  PetscCall(MatSetValues(A, 2, inds, 2, inds, avals, INSERT_VALUES));
  PetscCall(MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY));
  PetscCall(MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY));

  // Create a matdiagonal matrix
  // Will be the matching vec type as A
  PetscCall(MatCreateVecs(A, &diag, NULL));
  PetscCall(VecSet(diag, 2.0));
  PetscCall(MatCreateDiagonal(diag, &B_diag));

  // Create the same matrix as the matdiagonal but in aij format
  PetscCall(MatCreateFromOptions(PETSC_COMM_WORLD, NULL, 1, 2, 2, 2, 2, &B_aij_diag));
  PetscCall(MatSetUp(B_aij_diag));
  PetscCall(MatDiagonalSet(B_aij_diag, diag, INSERT_VALUES));
  PetscCall(MatAssemblyBegin(B_aij_diag, MAT_FINAL_ASSEMBLY));
  PetscCall(MatAssemblyEnd(B_aij_diag, MAT_FINAL_ASSEMBLY));
  PetscCall(VecDestroy(&diag));

  // ~~~~~~~~~~~~~
  // Do an initial matmatmult
  //   A * B_aij_diag
  // and then
  //   A * B_diag but just using MatDiagonalScale
  // ~~~~~~~~~~~~~

  // aij * aij
  PetscCall(MatMatMult(A, B_aij_diag, MAT_INITIAL_MATRIX, 1.5, &result));
  PetscCall(MatView(result, PETSC_VIEWER_STDOUT_WORLD));

  // aij * diagonal
  PetscCall(MatDuplicate(A, MAT_COPY_VALUES, &result_diag));
  PetscCall(MatDiagonalGetDiagonal(B_diag, &diag));
  PetscCall(MatDiagonalScale(result_diag, NULL, diag));
  PetscCall(MatDiagonalRestoreDiagonal(B_diag, &diag));
  PetscCall(MatView(result_diag, PETSC_VIEWER_STDOUT_WORLD));

  // ~~~~~~~~~~~~~
  // Now let's modify the diagonal and do it again with "reuse"
  // ~~~~~~~~~~~~~
  PetscCall(MatDiagonalGetDiagonal(B_diag, &diag));
  PetscCall(VecSet(diag, 3.0));
  PetscCall(MatDiagonalSet(B_aij_diag, diag, INSERT_VALUES));
  PetscCall(MatDiagonalRestoreDiagonal(B_diag, &diag));

  // aij * aij
  PetscCall(MatMatMult(A, B_aij_diag, MAT_REUSE_MATRIX, 1.5, &result));
  PetscCall(MatView(result, PETSC_VIEWER_STDOUT_WORLD));

  PetscBool correct = PETSC_FALSE;
  PetscCall(PetscOptionsGetBool(NULL, NULL, "-correct", &correct, NULL));

  if (!correct)
  {
    // This gives the wrong results below when run on gpu
    // Results suggest this isn't copied
    PetscCall(MatCopy(A, result_diag, SAME_NONZERO_PATTERN));
  }
  else
  {
    // This gives the correct results below
    PetscCall(MatDestroy(&result_diag));
    PetscCall(MatDuplicate(A, MAT_COPY_VALUES, &result_diag));
  }

  // aij * diagonal
  PetscCall(MatDiagonalGetDiagonal(B_diag, &diag));
  PetscCall(MatDiagonalScale(result_diag, NULL, diag));
  PetscCall(MatDiagonalRestoreDiagonal(B_diag, &diag));
  PetscCall(MatView(result_diag, PETSC_VIEWER_STDOUT_WORLD));

  PetscCall(PetscFinalize());
  return 0;
}
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From junchao.zhang at gmail.com Tue Apr 8 00:04:30 2025
From: junchao.zhang at gmail.com (Junchao Zhang)
Date: Tue, 8 Apr 2025 00:04:30 -0500
Subject: [petsc-users] kokkos, matcopy, matdiagonalscale on gpu
In-Reply-To:
References:
Message-ID:

Hi, Steven
  Thank you for the bug report and test example. You were right. The
MatCopy(A, B,..) implementation was wrong when B was on the device.
I have a fix at https://urldefense.us/v3/__https://gitlab.com/petsc/petsc/-/merge_requests/8288__;!!G_uCfscf7eWS!e15LlybplgvD-3BTmMELbBy9EiU1biKGxn_qlVmPVnTPUjMYT9nZ7kf3ZtjPPhbQL7zFDI8QGuKk_V-tW9IYCL3xFVri$ , and will add your test to the MR tomorrow. Thanks! --Junchao Zhang On Mon, Apr 7, 2025 at 4:57?PM Steven Dargaville < dargaville.steven at gmail.com> wrote: > Hi > > I have some code that computes a MatMatMult, then modifies one of the > matrices and calls the MatMatMult again with MAT_REUSE_MATRIX. This gives > the correct results with mat_types aij, aijkokkos and either run on a CPU > or a GPU. > > In some cases one of the matrices is diagonal so I have been modifying my > code to call MatDiagonalScale instead and have seen some peculiar > behaviour. If the mat type is aij, everything works correctly on the CPU > and GPU. If the mat type is aijkokkos, everything works correctly on the > CPU but when run on a machine with a GPU the results differ. > > I have some example code below that shows this, it prints out four > matrices; the first two should match and the last two should match. To see > the failure run with "-mat_type aijkokkos" on a machine with a GPU (again > this gives the correct results with aijkokkos run on a CPU). I get the > output: > > Mat Object: 1 MPI process > type: seqaijkokkos > row 0: (0, 4.) (1, 6.) > row 1: (0, 10.) (1, 14.) > Mat Object: 1 MPI process > type: seqaijkokkos > row 0: (0, 4.) (1, 6.) > row 1: (0, 10.) (1, 14.) > Mat Object: 1 MPI process > type: seqaijkokkos > row 0: (0, 6.) (1, 9.) > row 1: (0, 15.) (1, 21.) > Mat Object: 1 MPI process > type: seqaijkokkos > row 0: (0, 12.) (1, 18.) > row 1: (0, 30.) (1, 42.) > > I have narrowed down this failure to a MatCopy call, where the values of > result_diag should be overwritten with A before calling MatDiagonalScale. > The results with aijkokos on a GPU suggest the values of result_diag are > not being changed. If instead of using the MatCopy I destroy result_diag > and call MatDuplicate, the results are correct. You can trigger the correct > behaviour with "-mat_type aijkokkos -correct". > > To me this looks like the values are out of sync between the device/host, > but I would have thought a call to MatCopy would be safe. Any thoughts on > what I might be doing wrong? > > Thanks for your help! 
> Steven > > ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ > > static char help[] = "Test matmat products with matdiagonal on gpus \n\n"; > > #include > #include > > > int main(int argc, char **args) > { > const PetscInt inds[] = {0, 1}; > PetscScalar avals[] = {2, 3, 5, 7}; > Mat A, B_diag, B_aij_diag, result, result_diag; > Vec diag; > > PetscCall(PetscInitialize(&argc, &args, NULL, help)); > > // Create matrix to start > PetscCall(MatCreateFromOptions(PETSC_COMM_WORLD, NULL, 1, 2, 2, 2, 2, > &A)); > PetscCall(MatSetUp(A)); > PetscCall(MatSetValues(A, 2, inds, 2, inds, avals, INSERT_VALUES)); > PetscCall(MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY)); > PetscCall(MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY)); > > // Create a matdiagonal matrix > // Will be the matching vec type as A > PetscCall(MatCreateVecs(A, &diag, NULL)); > PetscCall(VecSet(diag, 2.0)); > PetscCall(MatCreateDiagonal(diag, &B_diag)); > > // Create the same matrix as the matdiagonal but in aij format > PetscCall(MatCreateFromOptions(PETSC_COMM_WORLD, NULL, 1, 2, 2, 2, 2, > &B_aij_diag)); > PetscCall(MatSetUp(B_aij_diag)); > PetscCall(MatDiagonalSet(B_aij_diag, diag, INSERT_VALUES)); > PetscCall(MatAssemblyBegin(B_aij_diag, MAT_FINAL_ASSEMBLY)); > PetscCall(MatAssemblyEnd(B_aij_diag, MAT_FINAL_ASSEMBLY)); > PetscCall(VecDestroy(&diag)); > > // ~~~~~~~~~~~~~ > // Do an initial matmatmult > // A * B_aij_diag > // and then > // A * B_diag but just using MatDiagonalScale > // ~~~~~~~~~~~~~ > > // aij * aij > PetscCall(MatMatMult(A, B_aij_diag, MAT_INITIAL_MATRIX, 1.5, &result)); > PetscCall(MatView(result, PETSC_VIEWER_STDOUT_WORLD)); > > // aij * diagonal > PetscCall(MatDuplicate(A, MAT_COPY_VALUES, &result_diag)); > PetscCall(MatDiagonalGetDiagonal(B_diag, &diag)); > PetscCall(MatDiagonalScale(result_diag, NULL, diag)); > PetscCall(MatDiagonalRestoreDiagonal(B_diag, &diag)); > PetscCall(MatView(result_diag, PETSC_VIEWER_STDOUT_WORLD)); > > // ~~~~~~~~~~~~~ > // Now let's modify the diagonal and do it again with "reuse" > // ~~~~~~~~~~~~~ > PetscCall(MatDiagonalGetDiagonal(B_diag, &diag)); > PetscCall(VecSet(diag, 3.0)); > PetscCall(MatDiagonalSet(B_aij_diag, diag, INSERT_VALUES)); > PetscCall(MatDiagonalRestoreDiagonal(B_diag, &diag)); > > // aij * aij > PetscCall(MatMatMult(A, B_aij_diag, MAT_REUSE_MATRIX, 1.5, &result)); > PetscCall(MatView(result, PETSC_VIEWER_STDOUT_WORLD)); > > PetscBool correct = PETSC_FALSE; > PetscCall(PetscOptionsGetBool(NULL, NULL, "-correct", &correct, NULL)); > > if (!correct) > { > // This gives the wrong results below when run on gpu > // Results suggest this isn't copied > PetscCall(MatCopy(A, result_diag, SAME_NONZERO_PATTERN)); > } > else > { > // This gives the correct results below > PetscCall(MatDestroy(&result_diag)); > PetscCall(MatDuplicate(A, MAT_COPY_VALUES, &result_diag)); > } > > // aij * diagonal > PetscCall(MatDiagonalGetDiagonal(B_diag, &diag)); > PetscCall(MatDiagonalScale(result_diag, NULL, diag)); > PetscCall(MatDiagonalRestoreDiagonal(B_diag, &diag)); > PetscCall(MatView(result_diag, PETSC_VIEWER_STDOUT_WORLD)); > > PetscCall(PetscFinalize()); > return 0; > } > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From bsmith at petsc.dev Tue Apr 8 12:05:33 2025 From: bsmith at petsc.dev (Barry Smith) Date: Tue, 8 Apr 2025 13:05:33 -0400 Subject: [petsc-users] PETSc 3.23 release Message-ID: We are pleased to announce the release of PETSc version 3.23.0 at https://urldefense.us/v3/__https://petsc.org/release/download/__;!!G_uCfscf7eWS!aoxKo0ANVxRvIiuI1hSnf2ei9MBJqVL-uBvLKRGSKybMgx0l0iFWcMJGOhXZIWvZcVq3GQRjNcbwXQpKONnMhyo$ A list of the major changes and updates can be found at https://urldefense.us/v3/__https://petsc.org/release/changes/323/__;!!G_uCfscf7eWS!aoxKo0ANVxRvIiuI1hSnf2ei9MBJqVL-uBvLKRGSKybMgx0l0iFWcMJGOhXZIWvZcVq3GQRjNcbwXQpKGfcj73E$ The final update to petsc-3.22 i.e petsc-3.21.5 is also available We recommend upgrading to PETSc 3.23.0 soon. As always, please report problems to petsc-maint at mcs.anl.gov and ask questions at petsc-users at mcs.anl.gov There are substantial improvements/changes to the PETSc Fortran bindings, please see Fortran at https://urldefense.us/v3/__https://petsc.org/release/changes/323/__;!!G_uCfscf7eWS!aoxKo0ANVxRvIiuI1hSnf2ei9MBJqVL-uBvLKRGSKybMgx0l0iFWcMJGOhXZIWvZcVq3GQRjNcbwXQpKGfcj73E$ and and https://urldefense.us/v3/__https://petsc.org/release/manual/fortran/__;!!G_uCfscf7eWS!aoxKo0ANVxRvIiuI1hSnf2ei9MBJqVL-uBvLKRGSKybMgx0l0iFWcMJGOhXZIWvZcVq3GQRjNcbwXQpK4gtCym4$ . This release includes contributions from Alex Lindsay Baha? Eddine Sidi Hida Barry Smith Brandon Connor Ward Daniel Otto de Mentock Darsh Nathawani Ed Bueler Eric Chamberland Florent Pruvost Francesco Ballarin Hansol Suh James Wright Jed Brown Jeff-Hadley Jonas Heinzmann Jose E. Roman Josh Hope-Collins Junchao Zhang Kenneth E. Jansen Lisandro Dalcin Mark Adams Martin Diehl Massimiliano Leoni Matthew Knepley Min RK Nuno Nobre Pierre Jolivet Raphael Zanella Ren? Chenard Richard Tran Mills Rylanor Satish Balay Scott MacLachlan sdargavi Stefano Zampini Toby Isaac Zach Atkins and bug reports/proposed improvements received from Alexander Gabriele Merlo Jacob Faibussowitsch Jose Roman langtian.liu at icloud.com Lemon Pierre Jolivet Sebastien Gilles "Unnikrishnan, Umesh" Venkata Narayana Sarma Dhavala As always, thanks for your support, Barry -------------- next part -------------- An HTML attachment was scrubbed... URL: From dargaville.steven at gmail.com Tue Apr 8 13:26:47 2025 From: dargaville.steven at gmail.com (Steven Dargaville) Date: Tue, 8 Apr 2025 18:26:47 +0000 Subject: [petsc-users] kokkos, matcopy, matdiagonalscale on gpu In-Reply-To: References: Message-ID: Fantastic, that seems to have fixed it. Thanks for your help! On Tue, 8 Apr 2025 at 06:04, Junchao Zhang wrote: > Hi, Steven > Thank you for the bug report and test example. You were right. The > MatCopy(A, B,..) implementation was wrong when B was on the device. > I have a fix at https://urldefense.us/v3/__https://gitlab.com/petsc/petsc/-/merge_requests/8288__;!!G_uCfscf7eWS!czsmOAhl7kNlur21sGgNTcYMe79EP113IoDsGXFSWeuR9b1LW7-vxQgIfp2YGSsKyiIR3s193a6KcdEqeQAv-XxqSVRCPUHa$ , > and will add your test to the MR tomorrow. > > Thanks! > --Junchao Zhang > > > On Mon, Apr 7, 2025 at 4:57?PM Steven Dargaville < > dargaville.steven at gmail.com> wrote: > >> Hi >> >> I have some code that computes a MatMatMult, then modifies one of the >> matrices and calls the MatMatMult again with MAT_REUSE_MATRIX. This gives >> the correct results with mat_types aij, aijkokkos and either run on a CPU >> or a GPU. 
>> >> In some cases one of the matrices is diagonal so I have been modifying my >> code to call MatDiagonalScale instead and have seen some peculiar >> behaviour. If the mat type is aij, everything works correctly on the CPU >> and GPU. If the mat type is aijkokkos, everything works correctly on the >> CPU but when run on a machine with a GPU the results differ. >> >> I have some example code below that shows this, it prints out four >> matrices; the first two should match and the last two should match. To see >> the failure run with "-mat_type aijkokkos" on a machine with a GPU (again >> this gives the correct results with aijkokkos run on a CPU). I get the >> output: >> >> Mat Object: 1 MPI process >> type: seqaijkokkos >> row 0: (0, 4.) (1, 6.) >> row 1: (0, 10.) (1, 14.) >> Mat Object: 1 MPI process >> type: seqaijkokkos >> row 0: (0, 4.) (1, 6.) >> row 1: (0, 10.) (1, 14.) >> Mat Object: 1 MPI process >> type: seqaijkokkos >> row 0: (0, 6.) (1, 9.) >> row 1: (0, 15.) (1, 21.) >> Mat Object: 1 MPI process >> type: seqaijkokkos >> row 0: (0, 12.) (1, 18.) >> row 1: (0, 30.) (1, 42.) >> >> I have narrowed down this failure to a MatCopy call, where the values of >> result_diag should be overwritten with A before calling MatDiagonalScale. >> The results with aijkokos on a GPU suggest the values of result_diag are >> not being changed. If instead of using the MatCopy I destroy result_diag >> and call MatDuplicate, the results are correct. You can trigger the correct >> behaviour with "-mat_type aijkokkos -correct". >> >> To me this looks like the values are out of sync between the device/host, >> but I would have thought a call to MatCopy would be safe. Any thoughts on >> what I might be doing wrong? >> >> Thanks for your help! >> Steven >> >> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ >> >> static char help[] = "Test matmat products with matdiagonal on gpus \n\n"; >> >> #include >> #include >> >> >> int main(int argc, char **args) >> { >> const PetscInt inds[] = {0, 1}; >> PetscScalar avals[] = {2, 3, 5, 7}; >> Mat A, B_diag, B_aij_diag, result, result_diag; >> Vec diag; >> >> PetscCall(PetscInitialize(&argc, &args, NULL, help)); >> >> // Create matrix to start >> PetscCall(MatCreateFromOptions(PETSC_COMM_WORLD, NULL, 1, 2, 2, 2, 2, >> &A)); >> PetscCall(MatSetUp(A)); >> PetscCall(MatSetValues(A, 2, inds, 2, inds, avals, INSERT_VALUES)); >> PetscCall(MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY)); >> PetscCall(MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY)); >> >> // Create a matdiagonal matrix >> // Will be the matching vec type as A >> PetscCall(MatCreateVecs(A, &diag, NULL)); >> PetscCall(VecSet(diag, 2.0)); >> PetscCall(MatCreateDiagonal(diag, &B_diag)); >> >> // Create the same matrix as the matdiagonal but in aij format >> PetscCall(MatCreateFromOptions(PETSC_COMM_WORLD, NULL, 1, 2, 2, 2, 2, >> &B_aij_diag)); >> PetscCall(MatSetUp(B_aij_diag)); >> PetscCall(MatDiagonalSet(B_aij_diag, diag, INSERT_VALUES)); >> PetscCall(MatAssemblyBegin(B_aij_diag, MAT_FINAL_ASSEMBLY)); >> PetscCall(MatAssemblyEnd(B_aij_diag, MAT_FINAL_ASSEMBLY)); >> PetscCall(VecDestroy(&diag)); >> >> // ~~~~~~~~~~~~~ >> // Do an initial matmatmult >> // A * B_aij_diag >> // and then >> // A * B_diag but just using MatDiagonalScale >> // ~~~~~~~~~~~~~ >> >> // aij * aij >> PetscCall(MatMatMult(A, B_aij_diag, MAT_INITIAL_MATRIX, 1.5, &result)); >> PetscCall(MatView(result, PETSC_VIEWER_STDOUT_WORLD)); >> >> // aij * diagonal >> PetscCall(MatDuplicate(A, MAT_COPY_VALUES, &result_diag)); >> PetscCall(MatDiagonalGetDiagonal(B_diag, &diag)); >> 
PetscCall(MatDiagonalScale(result_diag, NULL, diag)); >> PetscCall(MatDiagonalRestoreDiagonal(B_diag, &diag)); >> PetscCall(MatView(result_diag, PETSC_VIEWER_STDOUT_WORLD)); >> >> // ~~~~~~~~~~~~~ >> // Now let's modify the diagonal and do it again with "reuse" >> // ~~~~~~~~~~~~~ >> PetscCall(MatDiagonalGetDiagonal(B_diag, &diag)); >> PetscCall(VecSet(diag, 3.0)); >> PetscCall(MatDiagonalSet(B_aij_diag, diag, INSERT_VALUES)); >> PetscCall(MatDiagonalRestoreDiagonal(B_diag, &diag)); >> >> // aij * aij >> PetscCall(MatMatMult(A, B_aij_diag, MAT_REUSE_MATRIX, 1.5, &result)); >> PetscCall(MatView(result, PETSC_VIEWER_STDOUT_WORLD)); >> >> PetscBool correct = PETSC_FALSE; >> PetscCall(PetscOptionsGetBool(NULL, NULL, "-correct", &correct, NULL)); >> >> if (!correct) >> { >> // This gives the wrong results below when run on gpu >> // Results suggest this isn't copied >> PetscCall(MatCopy(A, result_diag, SAME_NONZERO_PATTERN)); >> } >> else >> { >> // This gives the correct results below >> PetscCall(MatDestroy(&result_diag)); >> PetscCall(MatDuplicate(A, MAT_COPY_VALUES, &result_diag)); >> } >> >> // aij * diagonal >> PetscCall(MatDiagonalGetDiagonal(B_diag, &diag)); >> PetscCall(MatDiagonalScale(result_diag, NULL, diag)); >> PetscCall(MatDiagonalRestoreDiagonal(B_diag, &diag)); >> PetscCall(MatView(result_diag, PETSC_VIEWER_STDOUT_WORLD)); >> >> PetscCall(PetscFinalize()); >> return 0; >> } >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pjool at dtu.dk Thu Apr 10 03:24:58 2025 From: pjool at dtu.dk (=?iso-8859-1?Q?Peder_J=F8rgensgaard_Olesen?=) Date: Thu, 10 Apr 2025 08:24:58 +0000 Subject: [petsc-users] ASCII viewer formats Message-ID: I would like to output the contents of a Vec to an ASCII file in which the entries are all on one line, as "x[0] x[1] x[2] ... x[N-1]". This can be done in a slightly roundabout way by putting the values in a 1xN dense Mat, assembling the matrix, and then use MatView with a suitable format, but one would think that skipping the matrix step and use a VecView directly would be more efficient (the procedure is to be repeated many times). However, none of the viewer formats seems to support the desired output formatting for Vec. Is there any way to customize viewer formats for a specific layout - or is there perhaps a more clever way to do the thing I want? Also, somewhat relatedly, is there a more detailed specification of available viewer formats? The documentation for PetscViewerFormat (https://urldefense.us/v3/__https://petsc.org/release/manualpages/Viewer/PetscViewerFormat/*petscviewerformat__;Iw!!G_uCfscf7eWS!fmgo_gSUThwXeUVpPBhwFVgTXRhk3tS8gDcRbkXu6TQu9lj3Emm5wwGKPUktzKcWMN-isoQKklivtNv-eU4$ ) briefly describes a number of them, and notes that "A variety of specialized formats also exist", although this isn't elaborated. Thanks! Best, Peder [https://urldefense.us/v3/__http://www.dtu.dk/-/media/DTU_Generelt/Andet/mail-signature-logo.png__;!!G_uCfscf7eWS!fmgo_gSUThwXeUVpPBhwFVgTXRhk3tS8gDcRbkXu6TQu9lj3Emm5wwGKPUktzKcWMN-isoQKklivOskRIMM$ ] Peder J?rgensgaard Olesen Postdoc DTU Construct Institut for Byggeri og Mekanisk Teknologi pjool at dtu.dk Koppels All? Building 403 2800 Kgs. Lyngby https://urldefense.us/v3/__http://www.dtu.dk/english__;!!G_uCfscf7eWS!fmgo_gSUThwXeUVpPBhwFVgTXRhk3tS8gDcRbkXu6TQu9lj3Emm5wwGKPUktzKcWMN-isoQKklivVLvHjUo$ -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mfadams at lbl.gov Thu Apr 10 03:49:46 2025 From: mfadams at lbl.gov (Mark Adams) Date: Thu, 10 Apr 2025 04:49:46 -0400 Subject: [petsc-users] ASCII viewer formats In-Reply-To: References: Message-ID: Hi Peder, You can run with '-help -vec_view' (or whatever your view option is) and that prints all options that are queried for. This is very noisy so grep on 'vec_view' and you will see the option information: what it was, what it is after the query, and what options it accepts. I don't know of one that does what you want. You could write it to a file and use sed or awk, etc. Mark On Thu, Apr 10, 2025 at 4:25?AM Peder J?rgensgaard Olesen via petsc-users < petsc-users at mcs.anl.gov> wrote: > I would like to output the contents of a Vec to an ASCII file in which the > entries are all on one line, as "x[0] x[1] x[2] ... x[N-1]". This can > be done in a slightly roundabout way by putting the values in a 1xN dense > Mat, assembling the matrix, and then use MatView with a suitable format, > but one would think that skipping the matrix step and use a VecView > directly would be more efficient (the procedure is to be repeated many > times). However, none of the viewer formats seems to support the desired > output formatting for Vec. > > Is there any way to customize viewer formats for a specific layout - or is > there perhaps a more clever way to do the thing I want? > > Also, somewhat relatedly, is there a more detailed specification of > available viewer formats? The documentation for PetscViewerFormat ( > https://urldefense.us/v3/__https://petsc.org/release/manualpages/Viewer/PetscViewerFormat/*petscviewerformat__;Iw!!G_uCfscf7eWS!bBFDjNvT7RMZgpeDh0FUlbrbeUxPtb93k6PYYqfx5ls9UeUdQ8aiIMM2e7qCFxUOUxQW2MqPjMj9BccmOMloaJI$ > ) > briefly describes a number of them, and notes that "A variety of > specialized formats also exist", although this isn't elaborated. > > Thanks! > > Best, > Peder > > *Peder J?rgensgaard Olesen* > Postdoc > DTU Construct > Institut for Byggeri og Mekanisk Teknologi > > pjool at dtu.dk > Koppels All? > Building 403 > 2800 Kgs. Lyngby > https://urldefense.us/v3/__http://www.dtu.dk/english__;!!G_uCfscf7eWS!bBFDjNvT7RMZgpeDh0FUlbrbeUxPtb93k6PYYqfx5ls9UeUdQ8aiIMM2e7qCFxUOUxQW2MqPjMj9BccmR8pz5l4$ > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefano.zampini at gmail.com Thu Apr 10 04:55:01 2025 From: stefano.zampini at gmail.com (Stefano Zampini) Date: Thu, 10 Apr 2025 12:55:01 +0300 Subject: [petsc-users] ASCII viewer formats In-Reply-To: References: Message-ID: Il giorno gio 10 apr 2025 alle ore 11:25 Peder J?rgensgaard Olesen via petsc-users ha scritto: > I would like to output the contents of a Vec to an ASCII file in which the > entries are all on one line, as "x[0] x[1] x[2] ... x[N-1]". This can > be done in a slightly roundabout way by putting the values in a 1xN dense > Mat, assembling the matrix, and then use MatView with a suitable format, > but one would think that skipping the matrix step and use a VecView > directly would be more efficient (the procedure is to be repeated many > times). > You can use VecGetArrayRead to get the vector data and pass the array to MatCreateDense. Isn't that efficient enough? > However, none of the viewer formats seems to support the desired output > formatting for Vec. > > Is there any way to customize viewer formats for a specific layout - or is > there perhaps a more clever way to do the thing I want? 
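
A minimal sketch of the kind of direct, one-line output asked about above, skipping the intermediate Mat entirely. It assumes a sequential Vec x (or one already gathered onto a single rank); the file name and the "%g" format are placeholder choices, and PetscRealPart means only the real part is written in complex builds:

// Sketch: write all entries of a (sequential) Vec on a single line of an ASCII file
PetscViewer        viewer;
const PetscScalar *a;
PetscInt           i, n;

PetscCall(PetscViewerASCIIOpen(PETSC_COMM_SELF, "vec.txt", &viewer));
PetscCall(VecGetLocalSize(x, &n));
PetscCall(VecGetArrayRead(x, &a));
// print every entry on one line, separated by spaces
for (i = 0; i < n; i++) PetscCall(PetscViewerASCIIPrintf(viewer, "%g ", (double)PetscRealPart(a[i])));
PetscCall(PetscViewerASCIIPrintf(viewer, "\n"));
PetscCall(VecRestoreArrayRead(x, &a));
PetscCall(PetscViewerDestroy(&viewer));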
> > Also, somewhat relatedly, is there a more detailed specification of > available viewer formats? The documentation for PetscViewerFormat ( > https://urldefense.us/v3/__https://petsc.org/release/manualpages/Viewer/PetscViewerFormat/*petscviewerformat__;Iw!!G_uCfscf7eWS!Y4XIeKrWEWWPvSSQleoTzKpv3jZILJZ_l47FbAbLvUNnNPA9sunnxxl-LgVNf7U6keAzX4lFkwy9TgeSafvVT5dCJmAUTEU$ > ) > briefly describes a number of them, and notes that "A variety of > specialized formats also exist", although this isn't elaborated. > > Thanks! > > Best, > Peder > > *Peder J?rgensgaard Olesen* > Postdoc > DTU Construct > Institut for Byggeri og Mekanisk Teknologi > > pjool at dtu.dk > Koppels All? > Building 403 > 2800 Kgs. Lyngby > https://urldefense.us/v3/__http://www.dtu.dk/english__;!!G_uCfscf7eWS!Y4XIeKrWEWWPvSSQleoTzKpv3jZILJZ_l47FbAbLvUNnNPA9sunnxxl-LgVNf7U6keAzX4lFkwy9TgeSafvVT5dCsW-2QDE$ > > > -- Stefano -------------- next part -------------- An HTML attachment was scrubbed... URL: From popov at uni-mainz.de Thu Apr 10 13:04:52 2025 From: popov at uni-mainz.de (Anton Popov) Date: Thu, 10 Apr 2025 20:04:52 +0200 Subject: [petsc-users] Hybrid multigrid Message-ID: Hi guys, I have a custom multigrid preconditioner that uses Galerkin coarsening for the staggered grid finite difference. Now I want to replace a few top level operators with the matrix-free shell matrices. Here is my plan: 1. Set PC_MG_GALERKIN_NONE 2. Call PCMGSetRestriction and PCMGSetInterpolation on all levels as I do it already 3. Use a combination of PCMGGetSmoother and KSPSetOperators to communicate my operators with PCMG ??? a. On the top levels just set the shell matrices minimalistically equipped with MATOP_GET_DIAGONAL and MATOP_MULT. ??? b. On the bottom levels generate the operators by explicitly calling MatMatMatMult (R is not the same as P in my case). ??? ??? Use MAT_INITIAL_MATRIX for the first time ??? ??? Use MAT_REUSE_MATRIX for the subsequent calls Is this a proper way to do it, or there is something wrong? If I want the restriction/interpolation matrices to be matrix-free on the top levels as well, I would need to set MATOP_MULT_ADD for them. Is this enough, or do I need something else? Thanks! Anton -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at petsc.dev Thu Apr 10 14:17:21 2025 From: bsmith at petsc.dev (Barry Smith) Date: Thu, 10 Apr 2025 15:17:21 -0400 Subject: [petsc-users] Hybrid multigrid In-Reply-To: References: Message-ID: <58F541B8-B574-4B41-A88E-5754A1B503F0@petsc.dev> Sounds correct. > On Apr 10, 2025, at 2:04?PM, Anton Popov wrote: > > > Hi guys, > I have a custom multigrid preconditioner that uses Galerkin coarsening for the staggered grid finite difference. Now I want to replace a few top level operators with the matrix-free shell matrices. > Here is my plan: > 1. Set PC_MG_GALERKIN_NONE > 2. Call PCMGSetRestriction and PCMGSetInterpolation on all levels as I do it already > 3. Use a combination of PCMGGetSmoother and KSPSetOperators to communicate my operators with PCMG > a. On the top levels just set the shell matrices minimalistically equipped with MATOP_GET_DIAGONAL and MATOP_MULT. > b. On the bottom levels generate the operators by explicitly calling MatMatMatMult (R is not the same as P in my case). > > Use MAT_INITIAL_MATRIX for the first time > Use MAT_REUSE_MATRIX for the subsequent calls > > Is this a proper way to do it, or there is something wrong? 
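For concreteness, a rough sketch of the setup outlined in the plan above, assuming level 0 is the coarsest PCMG level and level nlevels-1 the finest. The names nlevels, nshell, first, and the A/Ashell/R/P arrays are illustrative, not from the original message, and MatMatMatMult needs assembled (non-shell) inputs, so the level just below the shell levels is assumed to still have an assembled operator available.

PetscCall(PCSetType(pc, PCMG));
PetscCall(PCMGSetLevels(pc, nlevels, NULL));
PetscCall(PCMGSetGalerkin(pc, PC_MG_GALERKIN_NONE));              /* step 1 */
for (PetscInt l = 1; l < nlevels; l++) {                          /* step 2 */
  PetscCall(PCMGSetRestriction(pc, l, R[l]));                     /* level l -> l-1 */
  PetscCall(PCMGSetInterpolation(pc, l, P[l]));                   /* level l-1 -> l */
}
for (PetscInt l = nlevels - 1; l >= 0; l--) {                     /* step 3 */
  KSP smoother;
  PetscCall(PCMGGetSmoother(pc, l, &smoother));
  if (l >= nlevels - nshell) {
    /* 3a: matrix-free operator equipped with MATOP_MULT and MATOP_GET_DIAGONAL */
    PetscCall(KSPSetOperators(smoother, Ashell[l], Ashell[l]));
  } else {
    /* 3b: assembled triple product A[l] = R[l+1] * A[l+1] * P[l+1], with R != P */
    PetscCall(MatMatMatMult(R[l + 1], A[l + 1], P[l + 1],
                            first ? MAT_INITIAL_MATRIX : MAT_REUSE_MATRIX,
                            PETSC_DEFAULT, &A[l]));
    PetscCall(KSPSetOperators(smoother, A[l], A[l]));
  }
}

On the second question: PCMG applies the interpolation through MatInterpolateAdd(), which is what motivates MATOP_MULT_ADD on a shell P; since R is set explicitly here, the restriction itself should only need MATOP_MULT, but this is worth checking against the PCMG sources.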
> If I want the restriction/interpolation matrices to be matrix-free on the top levels as well, I would need to set MATOP_MULT_ADD for them. Is this enough, or do I need something else? > Thanks! > Anton > -------------- next part -------------- An HTML attachment was scrubbed... URL: From popov at uni-mainz.de Fri Apr 11 02:08:51 2025 From: popov at uni-mainz.de (Anton Popov) Date: Fri, 11 Apr 2025 09:08:51 +0200 Subject: [petsc-users] Hybrid multigrid In-Reply-To: <58F541B8-B574-4B41-A88E-5754A1B503F0@petsc.dev> References: <58F541B8-B574-4B41-A88E-5754A1B503F0@petsc.dev> Message-ID: <70467168-9a7e-4365-bbae-08f25d1a2f13@uni-mainz.de> Thanks Barry! Best, Anton On 10.04.25 21:17, Barry Smith wrote: > ? Sounds correct. > >> On Apr 10, 2025, at 2:04?PM, Anton Popov wrote: >> >> >> Hi guys, >> >> I have a custom multigrid preconditioner that uses Galerkin >> coarsening for the staggered grid finite difference. Now I want to >> replace a few top level operators with the matrix-free shell matrices. >> >> Here is my plan: >> >> 1. Set PC_MG_GALERKIN_NONE >> >> 2. Call PCMGSetRestriction and PCMGSetInterpolation on all levels as >> I do it already >> >> 3. Use a combination of PCMGGetSmoother and KSPSetOperators to >> communicate my operators with PCMG >> >> ??? a. On the top levels just set the shell matrices minimalistically >> equipped withMATOP_GET_DIAGONALand MATOP_MULT. >> >> ??? b. On the bottom levels generate the operators by explicitly >> calling MatMatMatMult (R is not the same as P in my case). >> >> >> Use MAT_INITIAL_MATRIX for the first time >> >> Use MAT_REUSE_MATRIX for the subsequent calls >> >> >> Is this a proper way to do it, or there is something wrong? >> >> If I want the restriction/interpolation matrices to be matrix-free on >> the top levels as well, I would need to setMATOP_MULT_ADDfor them. Is >> this enough, or do I need something else? >> >> Thanks! >> >> Anton >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From salvador.rodriguez at upm.es Fri Apr 11 02:02:03 2025 From: salvador.rodriguez at upm.es (salvador) Date: Fri, 11 Apr 2025 09:02:03 +0200 Subject: [petsc-users] Trouble configuring PETSc with serial MUMPS (no MPI) Message-ID: Dear PETSc team, I'm trying to compile PETSc with *MUMPS* in *serial mode* (without MPI), and I?ve encountered an issue during the |./configure| step. Here is the command I'm using: ./configure \ ? --with-cc=gcc \ ? --with-cxx=g++ \ ? --with-fc=gfortran \ ? --with-mpi=0 \ ? --with-scalar-type=complex \ ? --with-petsc-arch=arch-linus-c-opt-complex \ ? --with-blaslapack=1 \ ? --with-mumps=1 \ ? --with-scalapack=0 However, I get the following error: ********************************************************************************************* ?????????? UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log for details): --------------------------------------------------------------------------------------------- ? Package mumps requested but dependency scalapack not requested. ? Perhaps you want --download-scalapack or --with-scalapack-dir=directory or ? --with-scalapack-lib=libraries and --with-scalapack-include=directory ********************************************************************************************* My understanding is that *MUMPS can be used in serial* (according to its documentation), but PETSc seems to expect *ScaLAPACK* as a dependency even without MPI. * Is it possible to use MUMPS in PETSc *without MPI and without ScaLAPACK*? 
* If not, is there a way to enable serial MUMPS with PETSc? Any guidance would be appreciated. Best regards, /Salvador / -------------- next part -------------- An HTML attachment was scrubbed... URL: From balay.anl at fastmail.org Fri Apr 11 09:08:04 2025 From: balay.anl at fastmail.org (Satish Balay) Date: Fri, 11 Apr 2025 09:08:04 -0500 (CDT) Subject: [petsc-users] Trouble configuring PETSc with serial MUMPS (no MPI) In-Reply-To: References: Message-ID: <2f254b05-f4d2-9ee7-4e09-476fe363efec@fastmail.org> you need the additional option -with-mumps-serial=1 For example - check config/examples/arch-ci-linux-cuda-uni-pkgs.py Satish On Fri, 11 Apr 2025, salvador wrote: > Dear PETSc team, > > I'm trying to compile PETSc with *MUMPS* in *serial mode* (without MPI), and > I?ve encountered an issue during the |./configure| step. > > Here is the command I'm using: > > ./configure \ > ? --with-cc=gcc \ > ? --with-cxx=g++ \ > ? --with-fc=gfortran \ > ? --with-mpi=0 \ > ? --with-scalar-type=complex \ > ? --with-petsc-arch=arch-linus-c-opt-complex \ > ? --with-blaslapack=1 \ > ? --with-mumps=1 \ > ? --with-scalapack=0 > > However, I get the following error: > > ********************************************************************************************* > ?????????? UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log for > details): > --------------------------------------------------------------------------------------------- > ? Package mumps requested but dependency scalapack not requested. > ? Perhaps you want --download-scalapack or > --with-scalapack-dir=directory or > ? --with-scalapack-lib=libraries and --with-scalapack-include=directory > ********************************************************************************************* > > My understanding is that *MUMPS can be used in serial* (according to its > documentation), but PETSc seems to expect *ScaLAPACK* as a dependency even > without MPI. > > * > > Is it possible to use MUMPS in PETSc *without MPI and without > ScaLAPACK*? > > * > > If not, is there a way to enable serial MUMPS with PETSc? > > Any guidance would be appreciated. > > Best regards, > /Salvador > / > > From pierre at joliv.et Fri Apr 11 09:08:40 2025 From: pierre at joliv.et (Pierre Jolivet) Date: Fri, 11 Apr 2025 16:08:40 +0200 Subject: [petsc-users] Trouble configuring PETSc with serial MUMPS (no MPI) In-Reply-To: References: Message-ID: <86EC6C75-B3A9-4844-9635-2FD207E80BAA@joliv.et> > On 11 Apr 2025, at 9:02?AM, salvador wrote: > > Dear PETSc team, > > I'm trying to compile PETSc with MUMPS in serial mode (without MPI), and I?ve encountered an issue during the ./configure step. > > Here is the command I'm using: > > ./configure \ > --with-cc=gcc \ > --with-cxx=g++ \ > --with-fc=gfortran \ > --with-mpi=0 \ > --with-scalar-type=complex \ > --with-petsc-arch=arch-linus-c-opt-complex \ > --with-blaslapack=1 \ > --with-mumps=1 \ > --with-scalapack=0 > > > However, I get the following error: > > ********************************************************************************************* > UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log for details): > --------------------------------------------------------------------------------------------- > Package mumps requested but dependency scalapack not requested. 
> Perhaps you want --download-scalapack or --with-scalapack-dir=directory or > --with-scalapack-lib=libraries and --with-scalapack-include=directory > ********************************************************************************************* > > > My understanding is that MUMPS can be used in serial (according to its documentation), but PETSc seems to expect ScaLAPACK as a dependency even without MPI. > > Is it possible to use MUMPS in PETSc without MPI and without ScaLAPACK? > > If not, is there a way to enable serial MUMPS with PETSc? > Use --with-mumps-serial Thanks, Pierre > Any guidance would be appreciated. > > Best regards, > Salvador > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From salvador.rodriguez at upm.es Fri Apr 11 11:13:13 2025 From: salvador.rodriguez at upm.es (salvador) Date: Fri, 11 Apr 2025 18:13:13 +0200 Subject: [petsc-users] Trouble configuring PETSc with serial MUMPS (no MPI) In-Reply-To: <2f254b05-f4d2-9ee7-4e09-476fe363efec@fastmail.org> References: <2f254b05-f4d2-9ee7-4e09-476fe363efec@fastmail.org> Message-ID: <2831e07d-ae4a-4a11-becd-3d331b3c1b9d@upm.es> Thank you very much for your help. Much appreciated. Salvador On 11/4/25 16:08, Satish Balay wrote: > you need the additional option -with-mumps-serial=1 > > For example - check config/examples/arch-ci-linux-cuda-uni-pkgs.py > > Satish > > On Fri, 11 Apr 2025, salvador wrote: > >> Dear PETSc team, >> >> I'm trying to compile PETSc with *MUMPS* in *serial mode* (without MPI), and >> I?ve encountered an issue during the |./configure| step. >> >> Here is the command I'm using: >> >> ./configure \ >> ? --with-cc=gcc \ >> ? --with-cxx=g++ \ >> ? --with-fc=gfortran \ >> ? --with-mpi=0 \ >> ? --with-scalar-type=complex \ >> ? --with-petsc-arch=arch-linus-c-opt-complex \ >> ? --with-blaslapack=1 \ >> ? --with-mumps=1 \ >> ? --with-scalapack=0 >> >> However, I get the following error: >> >> ********************************************************************************************* >> ?????????? UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log for >> details): >> --------------------------------------------------------------------------------------------- >> ? Package mumps requested but dependency scalapack not requested. >> ? Perhaps you want --download-scalapack or >> --with-scalapack-dir=directory or >> ? --with-scalapack-lib=libraries and --with-scalapack-include=directory >> ********************************************************************************************* >> >> My understanding is that *MUMPS can be used in serial* (according to its >> documentation), but PETSc seems to expect *ScaLAPACK* as a dependency even >> without MPI. >> >> * >> >> Is it possible to use MUMPS in PETSc *without MPI and without >> ScaLAPACK*? >> >> * >> >> If not, is there a way to enable serial MUMPS with PETSc? >> >> Any guidance would be appreciated. >> >> Best regards, >> /Salvador >> / >> >> From benoit.nennig at isae-supmeca.fr Wed Apr 16 07:36:09 2025 From: benoit.nennig at isae-supmeca.fr (NENNIG Benoit) Date: Wed, 16 Apr 2025 12:36:09 +0000 Subject: [petsc-users] remove new zeros values with petsc4py Message-ID: Dear petsc users, I need to compute the difference of two matrices M(nu) and M(nu+epsilon) to estimate the derivative of the matrix with respect to the parameter nu. Since only very few entries are modified by epsilon, D = M(nu+epsilon) - M(nu) contains many new zeros and I would like to remove them. I am using PETSc from petsc4py. 
I have seen the method `chop` which call `Filter` but without the `compress` argument. At the end I need to store this matrix into a file. So my questions are: - Is is possible to remove the zeros entries with petsc4py with Mat methods? - Is is possible to do it when I save the file, ie with Viewer methods ? Thanks a lot, Benoit From pierre at joliv.et Wed Apr 16 08:58:00 2025 From: pierre at joliv.et (Pierre Jolivet) Date: Wed, 16 Apr 2025 15:58:00 +0200 Subject: [petsc-users] remove new zeros values with petsc4py In-Reply-To: References: Message-ID: Dear Benoit, > On 16 Apr 2025, at 2:36?PM, NENNIG Benoit wrote: > > Dear petsc users, > > I need to compute the difference of two matrices M(nu) and M(nu+epsilon) to estimate the derivative of the matrix with respect to the parameter nu. > Since only very few entries are modified by epsilon, D = M(nu+epsilon) - M(nu) contains many new zeros and I would like to remove them. > > I am using PETSc from petsc4py. > I have seen the method `chop` which call `Filter` but without the `compress` argument. At the C level, [Mat,Vec]Chop() has been replaced by [Mat,Vec]Filter(). This didn?t make it to the Python level (yet), but if you feel like it, you could modify the chop functions in Mat.pyx and Vec.pyx and rename them (to filter) plus add an optional parameter for the `compress` argument. If you submit a MR, I don?t see why it would not get approved, deprecation is difficult to handle in petsc4py (and in PETSc in general), and sometimes there can be small discrepancies between the Python and the C APIs. Thanks, Pierre > At the end I need to store this matrix into a file. So my questions are: > - Is is possible to remove the zeros entries with petsc4py with Mat methods? > - Is is possible to do it when I save the file, ie with Viewer methods ? > > Thanks a lot, > > Benoit From liufield at gmail.com Fri Apr 18 09:01:52 2025 From: liufield at gmail.com (neil liu) Date: Fri, 18 Apr 2025 10:01:52 -0400 Subject: [petsc-users] Mixing PETSc Parallelism with Serial MMG3D Workflow Message-ID: Dear PETSc developers and users, I am currently exploring the integration of MMG3D with PETSc. Since MMG3D supports only serial execution, I am planning to combine parallel and serial computing in my workflow. Specifically, after solving the linear systems in parallel using PETSc: 1. I intend to use DMPlexGetGatherDM to collect the entire mesh on the root process for input to MMG3D. 2. Additionally, I plan to gather the error field onto the root process using VecScatter. However, I am concerned that the nth value in the gathered error vector (step 2) may not correspond to the nth element in the gathered mesh (step 1). Is this a valid concern? Do you have any suggestions or recommended practices for ensuring correct correspondence between the solution fields and the mesh when switching from parallel to serial mode? Thanks, Xiaodong -------------- next part -------------- An HTML attachment was scrubbed... 
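A rough sketch of the gather step that Stefano suggests in the reply below, under a couple of assumptions: err is a local Vec laid out by the DM's local section (e.g. a cell-wise error indicator), and dm, gdm, gsec, gerr are illustrative names. The signatures should be double-checked against the DMPlexGetGatherDM and DMPlexDistributeField manual pages.

PetscSF      sf;
DM           gdm;
PetscSection sec, gsec;
Vec          gerr;

PetscCall(DMPlexGetGatherDM(dm, &sf, &gdm));             /* whole mesh on rank 0 */
PetscCall(DMGetLocalSection(dm, &sec));
PetscCall(PetscSectionCreate(PetscObjectComm((PetscObject)dm), &gsec));
PetscCall(VecCreate(PetscObjectComm((PetscObject)dm), &gerr));
/* Reuse the same SF that migrated the mesh to migrate the field, so each
   value stays attached to the same mesh point; this is what removes the
   ordering concern between steps 1 and 2. */
PetscCall(DMPlexDistributeField(dm, sf, sec, err, gsec, gerr));

After this, gerr holds all values on rank 0 in the point order of gdm, and gdm plus gerr can be handed to the serial adaptor there.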
URL: From stefano.zampini at gmail.com Fri Apr 18 09:09:36 2025 From: stefano.zampini at gmail.com (Stefano Zampini) Date: Fri, 18 Apr 2025 17:09:36 +0300 Subject: [petsc-users] Mixing PETSc Parallelism with Serial MMG3D Workflow In-Reply-To: References: Message-ID: If you have a vector distributed on the original mesh, then you can use the SF returned by DMPlexGetGatherDM and use that in a call to DMPlexDistributeField Il giorno ven 18 apr 2025 alle ore 17:02 neil liu ha scritto: > Dear PETSc developers and users, > > I am currently exploring the integration of MMG3D with PETSc. Since MMG3D > supports only serial execution, I am planning to combine parallel and > serial computing in my workflow. Specifically, after solving the linear > systems in parallel using PETSc: > > 1. > > I intend to use DMPlexGetGatherDM to collect the entire mesh on the > root process for input to MMG3D. > 2. > > Additionally, I plan to gather the error field onto the root process > using VecScatter. > > However, I am concerned that the nth value in the gathered error vector > (step 2) may not correspond to the nth element in the gathered mesh (step > 1). Is this a valid concern? > > Do you have any suggestions or recommended practices for ensuring correct > correspondence between the solution fields and the mesh when switching from > parallel to serial mode? > > Thanks, > > Xiaodong > -- Stefano -------------- next part -------------- An HTML attachment was scrubbed... URL: From augustin.perrier-michon at ensma.fr Wed Apr 23 09:20:07 2025 From: augustin.perrier-michon at ensma.fr (PERRIER-MICHON Augustin) Date: Wed, 23 Apr 2025 16:20:07 +0200 Subject: [petsc-users] Staggered solver phase field Message-ID: Dear Petsc users, I am currently dealing with finite element fracture analysis using phase field model. To perform such simulations, I have to develop a staggered solver : mechanical problem is solved at constant damage and damage problem is solved at constant displacement. I created 2 TS solver and 2 DMPLEX for each "physics". Each physics's system is built using TSSetIFunction and TSSetIJacobian with associated functions. The TS calls are performed with TSSTEP in order to respect staggered solver scheme in iterative loops. My question : Is the using of TSSTEP function adapted to a staggered solver ? How to use this function in my framework ? Have you got any other suggestions or advices ? Thanks a lot Best regards -- Augustin PERRIER-MICHON PhD student institut PPRIME Physics and Mechanics of materials department ISAE-ENSMA T?l?port 2 1 Avenue Cl?ment ADER 86361 Chasseneuil du Poitou- Futuroscope Tel : +33-(0)-5-49-49-80-97 From bourdin at mcmaster.ca Wed Apr 23 09:58:41 2025 From: bourdin at mcmaster.ca (Blaise Bourdin) Date: Wed, 23 Apr 2025 14:58:41 +0000 Subject: [petsc-users] Staggered solver phase field In-Reply-To: References: Message-ID: <588721F9-71E0-4A4B-A6D5-502589BD3099@mcmaster.ca> An HTML attachment was scrubbed... URL: From augustin.perrier-michon at ensma.fr Wed Apr 23 10:22:35 2025 From: augustin.perrier-michon at ensma.fr (PERRIER-MICHON Augustin) Date: Wed, 23 Apr 2025 17:22:35 +0200 Subject: [petsc-users] Staggered solver phase field In-Reply-To: <588721F9-71E0-4A4B-A6D5-502589BD3099@mcmaster.ca> References: <588721F9-71E0-4A4B-A6D5-502589BD3099@mcmaster.ca> Message-ID: Dear Mr Bourdin, thank you for your answer and the remarks. I will performed time dependent multi-physics analysis including crack propagation afterward. 
To anticipate this time dependency, I chose to use TS solver instead of SNES or TAO. Plus, I thought that TS solver can be used for quasi-static problems as well. In my previous simulations with a monolithic TS solver, I controlled the time step during all the calculation. In my opinion I could do the same in this framework and not let TS solvers adapt the step time. A synchronization of the two solvers is necessary. With these informations, is this framework and especially TSSTEP function compatible with my problem ? Thanks a lot Augustin Le 2025-04-23 16:58, Blaise Bourdin a ?crit?: > Augustin, > > Out of curiosity, why TS and not SNES? At the very least the damage > problem should be a constrained minimization problem so that you can > model criticality with respect to the phase-field variable. > Secondly, I would be very wary about letting TS adapt the time step by > itself. In quasi-static phase-field fracture, the time step affects > the crack path, not the order of the approximation in time. I doubt > that any of the mechanisms in TS are appropriate here. > > You are welcome to dig into my implementation for inspiration, or > reuse it for your problems https://urldefense.us/v3/__https://github.com/bourdin/mef90__;!!G_uCfscf7eWS!YNLdSyA8YkLHqmi8DisU8Hz_g4lAFvUm-N7RZHxO_u1CkWa6ZEMYoUBG2so6IcRb57XYSVCvOLtnD-Fx0ic5BOuyGj5kvwHxvw$ > > Blaise > >> On Apr 23, 2025, at 10:20?AM, PERRIER-MICHON Augustin >> wrote: >> >> [You don't often get email from augustin.perrier-michon at ensma.fr. >> Learn why this is important at >> https://urldefense.us/v3/__https://aka.ms/LearnAboutSenderIdentification__;!!G_uCfscf7eWS!YNLdSyA8YkLHqmi8DisU8Hz_g4lAFvUm-N7RZHxO_u1CkWa6ZEMYoUBG2so6IcRb57XYSVCvOLtnD-Fx0ic5BOuyGj4A2oxZ2g$ ] >> >> Caution: External email. >> >> Dear Petsc users, >> >> I am currently dealing with finite element fracture analysis using >> phase >> field model. To perform such simulations, I have to develop a >> staggered >> solver : mechanical problem is solved at constant damage and damage >> problem is solved at constant displacement. >> >> I created 2 TS solver and 2 DMPLEX for each "physics". >> Each physics's system is built using TSSetIFunction and >> TSSetIJacobian >> with associated functions. >> >> The TS calls are performed with TSSTEP in order to respect staggered >> solver scheme in iterative loops. >> >> My question : Is the using of TSSTEP function adapted to a staggered >> solver ? How to use this function in my framework ? Have you got any >> other suggestions or advices ? >> >> Thanks a lot >> Best regards >> >> -- >> Augustin PERRIER-MICHON >> PhD student institut PPRIME >> Physics and Mechanics of materials department >> ISAE-ENSMA >> T?l?port 2 >> 1 Avenue Cl?ment ADER >> 86361 Chasseneuil du Poitou- Futuroscope >> Tel : +33-(0)-5-49-49-80-97 > > ? > Canada Research Chair in Mathematical and Computational Aspects of > Solid Mechanics (Tier 1) > Professor, Department of Mathematics & Statistics > Hamilton Hall room 409A, McMaster University > 1280 Main Street West, Hamilton, Ontario L8S 4K1, Canada > https://urldefense.us/v3/__https://www.math.mcmaster.ca/bourdin__;!!G_uCfscf7eWS!YNLdSyA8YkLHqmi8DisU8Hz_g4lAFvUm-N7RZHxO_u1CkWa6ZEMYoUBG2so6IcRb57XYSVCvOLtnD-Fx0ic5BOuyGj7SJK5sRQ$ | +1 (905) 525 9140 ext. 27243 From liufield at gmail.com Wed Apr 23 10:31:56 2025 From: liufield at gmail.com (neil liu) Date: Wed, 23 Apr 2025 11:31:56 -0400 Subject: [petsc-users] Mixing PETSc Parallelism with Serial MMG3D Workflow In-Reply-To: References: Message-ID: Thanks a lot, Stefano. 
I tried DMPlexGetGatherDM and DMPlexDistributeField. It can give what we expected. The final gatherDM is listed as follows, rank 0 has all information (which is right) while rank 1 has nothing. Then I tried to feed this gatherDM into adaptMMG on rank 0 only (it seems MMG works better than ParMMG, that is why I want MMG to be tried first). But it was stuck at collective petsc functions in DMAdaptMetric_Mmg_Plex(). By the way, the present work can work well with 1 rank. Do you have any suggestions ? Build a real serial DM? Thanks a lot. Xiaodong DM Object: Parallel Mesh 2 MPI processes type: plex Parallel Mesh in 3 dimensions: Number of 0-cells per rank: 56 0 Number of 1-cells per rank: 289 0 Number of 2-cells per rank: 452 0 Number of 3-cells per rank: 216 0 Labels: depth: 4 strata with value/size (0 (56), 1 (289), 2 (452), 3 (216)) celltype: 4 strata with value/size (0 (56), 1 (289), 3 (452), 6 (216)) Cell Sets: 2 strata with value/size (29 (152), 30 (64)) Face Sets: 3 strata with value/size (27 (8), 28 (40), 101 (20)) Edge Sets: 1 strata with value/size (10 (10)) Vertex Sets: 5 strata with value/size (27 (2), 28 (6), 29 (2), 101 (4), 106 (4)) Field Field_0: adjacency FEM On Fri, Apr 18, 2025 at 10:09?AM Stefano Zampini wrote: > If you have a vector distributed on the original mesh, then you can use > the SF returned by DMPlexGetGatherDM and use that in a call to > DMPlexDistributeField > > Il giorno ven 18 apr 2025 alle ore 17:02 neil liu ha > scritto: > >> Dear PETSc developers and users, >> >> I am currently exploring the integration of MMG3D with PETSc. Since MMG3D >> supports only serial execution, I am planning to combine parallel and >> serial computing in my workflow. Specifically, after solving the linear >> systems in parallel using PETSc: >> >> 1. >> >> I intend to use DMPlexGetGatherDM to collect the entire mesh on the >> root process for input to MMG3D. >> 2. >> >> Additionally, I plan to gather the error field onto the root process >> using VecScatter. >> >> However, I am concerned that the nth value in the gathered error vector >> (step 2) may not correspond to the nth element in the gathered mesh (step >> 1). Is this a valid concern? >> >> Do you have any suggestions or recommended practices for ensuring correct >> correspondence between the solution fields and the mesh when switching from >> parallel to serial mode? >> >> Thanks, >> >> Xiaodong >> > > > -- > Stefano > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at petsc.dev Wed Apr 23 10:38:07 2025 From: bsmith at petsc.dev (Barry Smith) Date: Wed, 23 Apr 2025 11:38:07 -0400 Subject: [petsc-users] Staggered solver phase field In-Reply-To: References: Message-ID: I am unfamiliar with this simulation type, so I have some elementary questions. Is each "physics" taking a fractional time-step so that the total time integrated (in a time-step) is 1/2dt + 1/2dt or is one physics stepping the entire time-step and the other physics "fixing up some of the variables" (for example, in fluid flow where a pressure solve doesn't step in time it merely changes some variables values to enforce incompressibility at the new time). Another way to phrase the question is are the underlying equations a DAE, not an ODE? Barry > On Apr 23, 2025, at 10:20?AM, PERRIER-MICHON Augustin wrote: > > Dear Petsc users, > > I am currently dealing with finite element fracture analysis using phase field model. 
To perform such simulations, I have to develop a staggered solver : mechanical problem is solved at constant damage and damage problem is solved at constant displacement. > > I created 2 TS solver and 2 DMPLEX for each "physics". > Each physics's system is built using TSSetIFunction and TSSetIJacobian with associated functions. > > The TS calls are performed with TSSTEP in order to respect staggered solver scheme in iterative loops. > > My question : Is the using of TSSTEP function adapted to a staggered solver ? How to use this function in my framework ? Have you got any other suggestions or advices ? > > Thanks a lot > Best regards > > -- > Augustin PERRIER-MICHON > PhD student institut PPRIME > Physics and Mechanics of materials department > ISAE-ENSMA > T?l?port 2 > 1 Avenue Cl?ment ADER > 86361 Chasseneuil du Poitou- Futuroscope > Tel : +33-(0)-5-49-49-80-97 From pierre at joliv.et Wed Apr 23 10:39:10 2025 From: pierre at joliv.et (Pierre Jolivet) Date: Wed, 23 Apr 2025 17:39:10 +0200 Subject: [petsc-users] Mixing PETSc Parallelism with Serial MMG3D Workflow In-Reply-To: References: Message-ID: <42C5B676-DAE8-461C-8303-4A84FEDECFFC@joliv.et> > On 23 Apr 2025, at 5:31?PM, neil liu wrote: > > Thanks a lot, Stefano. > I tried DMPlexGetGatherDM and DMPlexDistributeField. It can give what we expected. > The final gatherDM is listed as follows, rank 0 has all information (which is right) while rank 1 has nothing. > Then I tried to feed this gatherDM into adaptMMG on rank 0 only (it seems MMG works better than ParMMG, that is why I want MMG to be tried first). But it was stuck at collective petsc functions in DMAdaptMetric_Mmg_Plex(). By the way, the present work can work well with 1 rank. > > Do you have any suggestions ? Build a real serial DM? Yes, you need to change the underlying MPI_Comm as well, but I?m not sure if there is any user-facing API for doing this with a one-liner. Thanks, Pierre > Thanks a lot. > Xiaodong > > DM Object: Parallel Mesh 2 MPI processes > type: plex > Parallel Mesh in 3 dimensions: > Number of 0-cells per rank: 56 0 > Number of 1-cells per rank: 289 0 > Number of 2-cells per rank: 452 0 > Number of 3-cells per rank: 216 0 > Labels: > depth: 4 strata with value/size (0 (56), 1 (289), 2 (452), 3 (216)) > celltype: 4 strata with value/size (0 (56), 1 (289), 3 (452), 6 (216)) > Cell Sets: 2 strata with value/size (29 (152), 30 (64)) > Face Sets: 3 strata with value/size (27 (8), 28 (40), 101 (20)) > Edge Sets: 1 strata with value/size (10 (10)) > Vertex Sets: 5 strata with value/size (27 (2), 28 (6), 29 (2), 101 (4), 106 (4)) > Field Field_0: > adjacency FEM > > > > On Fri, Apr 18, 2025 at 10:09?AM Stefano Zampini > wrote: >> If you have a vector distributed on the original mesh, then you can use the SF returned by DMPlexGetGatherDM and use that in a call to DMPlexDistributeField >> >> Il giorno ven 18 apr 2025 alle ore 17:02 neil liu > ha scritto: >>> Dear PETSc developers and users, >>> >>> I am currently exploring the integration of MMG3D with PETSc. Since MMG3D supports only serial execution, I am planning to combine parallel and serial computing in my workflow. Specifically, after solving the linear systems in parallel using PETSc: >>> >>> I intend to use DMPlexGetGatherDM to collect the entire mesh on the root process for input to MMG3D. >>> >>> Additionally, I plan to gather the error field onto the root process using VecScatter. 
>>> >>> However, I am concerned that the nth value in the gathered error vector (step 2) may not correspond to the nth element in the gathered mesh (step 1). Is this a valid concern? >>> >>> Do you have any suggestions or recommended practices for ensuring correct correspondence between the solution fields and the mesh when switching from parallel to serial mode? >>> >>> >>> Thanks, >>> >>> Xiaodong >> >> >> >> -- >> Stefano -------------- next part -------------- An HTML attachment was scrubbed... URL: From augustin.perrier-michon at ensma.fr Wed Apr 23 10:47:24 2025 From: augustin.perrier-michon at ensma.fr (PERRIER-MICHON Augustin) Date: Wed, 23 Apr 2025 17:47:24 +0200 Subject: [petsc-users] Staggered solver phase field In-Reply-To: References: Message-ID: <5b3155624fcbf3024c8f5c0eb2d5a274@ensma.fr> Dear Mr Smith, Thank you for your answer. Maybe I do not understand your interrogation because I am not familiar with fluid flow simulations. Each physics are solved on an entire time step at every time step. This is a quasi static problem with an incremental loading. Later I will add time dependent phenomena thus I chose TS solver. Thanks Augustin Le 2025-04-23 17:38, Barry Smith a ?crit?: > I am unfamiliar with this simulation type, so I have some elementary > questions. Is each "physics" taking a fractional time-step so that the > total time > integrated (in a time-step) is 1/2dt + 1/2dt or is one physics > stepping the entire time-step and the other physics "fixing up some of > the variables" (for example, in fluid flow where a pressure solve > doesn't step in time it merely changes some variables values to > enforce incompressibility at the new time). Another way to phrase the > question is are the underlying equations a DAE, not an ODE? > > Barry > > >> On Apr 23, 2025, at 10:20?AM, PERRIER-MICHON Augustin >> wrote: >> >> Dear Petsc users, >> >> I am currently dealing with finite element fracture analysis using >> phase field model. To perform such simulations, I have to develop a >> staggered solver : mechanical problem is solved at constant damage and >> damage problem is solved at constant displacement. >> >> I created 2 TS solver and 2 DMPLEX for each "physics". >> Each physics's system is built using TSSetIFunction and TSSetIJacobian >> with associated functions. >> >> The TS calls are performed with TSSTEP in order to respect staggered >> solver scheme in iterative loops. >> >> My question : Is the using of TSSTEP function adapted to a staggered >> solver ? How to use this function in my framework ? Have you got any >> other suggestions or advices ? >> >> Thanks a lot >> Best regards >> >> -- >> Augustin PERRIER-MICHON >> PhD student institut PPRIME >> Physics and Mechanics of materials department >> ISAE-ENSMA >> T?l?port 2 >> 1 Avenue Cl?ment ADER >> 86361 Chasseneuil du Poitou- Futuroscope >> Tel : +33-(0)-5-49-49-80-97 From bsmith at petsc.dev Wed Apr 23 11:21:08 2025 From: bsmith at petsc.dev (Barry Smith) Date: Wed, 23 Apr 2025 12:21:08 -0400 Subject: [petsc-users] Staggered solver phase field In-Reply-To: <5b3155624fcbf3024c8f5c0eb2d5a274@ensma.fr> References: <5b3155624fcbf3024c8f5c0eb2d5a274@ensma.fr> Message-ID: <095941DE-40AA-4B8A-8128-81515C91A1EE@petsc.dev> > On Apr 23, 2025, at 11:47?AM, PERRIER-MICHON Augustin wrote: > > Dear Mr Smith, > > Thank you for your answer. > > Maybe I do not understand your interrogation because I am not familiar with fluid flow simulations. Each physics are solved on an entire time step at every time step. 
So different variables (degrees of freedom) are integrated in each "sub-physics" time-step? > This is a quasi static problem with an incremental loading. Later I will add time dependent phenomena thus I chose TS solver. > > Thanks > Augustin > > Le 2025-04-23 17:38, Barry Smith a ?crit : >> I am unfamiliar with this simulation type, so I have some elementary >> questions. Is each "physics" taking a fractional time-step so that the >> total time >> integrated (in a time-step) is 1/2dt + 1/2dt or is one physics >> stepping the entire time-step and the other physics "fixing up some of >> the variables" (for example, in fluid flow where a pressure solve >> doesn't step in time it merely changes some variables values to >> enforce incompressibility at the new time). Another way to phrase the >> question is are the underlying equations a DAE, not an ODE? >> Barry >>> On Apr 23, 2025, at 10:20?AM, PERRIER-MICHON Augustin wrote: >>> Dear Petsc users, >>> I am currently dealing with finite element fracture analysis using phase field model. To perform such simulations, I have to develop a staggered solver : mechanical problem is solved at constant damage and damage problem is solved at constant displacement. >>> I created 2 TS solver and 2 DMPLEX for each "physics". >>> Each physics's system is built using TSSetIFunction and TSSetIJacobian with associated functions. >>> The TS calls are performed with TSSTEP in order to respect staggered solver scheme in iterative loops. >>> My question : Is the using of TSSTEP function adapted to a staggered solver ? How to use this function in my framework ? Have you got any other suggestions or advices ? >>> Thanks a lot >>> Best regards >>> -- >>> Augustin PERRIER-MICHON >>> PhD student institut PPRIME >>> Physics and Mechanics of materials department >>> ISAE-ENSMA >>> T?l?port 2 >>> 1 Avenue Cl?ment ADER >>> 86361 Chasseneuil du Poitou- Futuroscope >>> Tel : +33-(0)-5-49-49-80-97 From liufield at gmail.com Wed Apr 23 11:56:38 2025 From: liufield at gmail.com (neil liu) Date: Wed, 23 Apr 2025 12:56:38 -0400 Subject: [petsc-users] Mixing PETSc Parallelism with Serial MMG3D Workflow In-Reply-To: <42C5B676-DAE8-461C-8303-4A84FEDECFFC@joliv.et> References: <42C5B676-DAE8-461C-8303-4A84FEDECFFC@joliv.et> Message-ID: Thanks a lot. Pierre. Do you have any suggestions to build a real serial DM from this gatherDM? I tried several ways, which don't work. DMClone? Thanks, On Wed, Apr 23, 2025 at 11:39?AM Pierre Jolivet wrote: > > > On 23 Apr 2025, at 5:31?PM, neil liu wrote: > > Thanks a lot, Stefano. > I tried DMPlexGetGatherDM and DMPlexDistributeField. It can give what we > expected. > The final gatherDM is listed as follows, rank 0 has all information (which > is right) while rank 1 has nothing. > Then I tried to feed this gatherDM into adaptMMG on rank 0 only (it seems > MMG works better than ParMMG, that is why I want MMG to be tried first). > But it was stuck at collective petsc functions in DMAdaptMetric_Mmg_Plex(). > By the way, the present work can work well with 1 rank. > > Do you have any suggestions ? Build a real serial DM? > > > Yes, you need to change the underlying MPI_Comm as well, but I?m not sure > if there is any user-facing API for doing this with a one-liner. > > Thanks, > Pierre > > Thanks a lot. 
> Xiaodong > > DM Object: Parallel Mesh 2 MPI processes > type: plex > Parallel Mesh in 3 dimensions: > Number of 0-cells per rank: 56 0 > Number of 1-cells per rank: 289 0 > Number of 2-cells per rank: 452 0 > Number of 3-cells per rank: 216 0 > Labels: > depth: 4 strata with value/size (0 (56), 1 (289), 2 (452), 3 (216)) > celltype: 4 strata with value/size (0 (56), 1 (289), 3 (452), 6 (216)) > Cell Sets: 2 strata with value/size (29 (152), 30 (64)) > Face Sets: 3 strata with value/size (27 (8), 28 (40), 101 (20)) > Edge Sets: 1 strata with value/size (10 (10)) > Vertex Sets: 5 strata with value/size (27 (2), 28 (6), 29 (2), 101 (4), > 106 (4)) > Field Field_0: > adjacency FEM > > > > On Fri, Apr 18, 2025 at 10:09?AM Stefano Zampini < > stefano.zampini at gmail.com> wrote: > >> If you have a vector distributed on the original mesh, then you can use >> the SF returned by DMPlexGetGatherDM and use that in a call to >> DMPlexDistributeField >> >> Il giorno ven 18 apr 2025 alle ore 17:02 neil liu >> ha scritto: >> >>> Dear PETSc developers and users, >>> >>> I am currently exploring the integration of MMG3D with PETSc. Since >>> MMG3D supports only serial execution, I am planning to combine parallel and >>> serial computing in my workflow. Specifically, after solving the linear >>> systems in parallel using PETSc: >>> >>> 1. >>> >>> I intend to use DMPlexGetGatherDM to collect the entire mesh on the >>> root process for input to MMG3D. >>> 2. >>> >>> Additionally, I plan to gather the error field onto the root process >>> using VecScatter. >>> >>> However, I am concerned that the nth value in the gathered error vector >>> (step 2) may not correspond to the nth element in the gathered mesh (step >>> 1). Is this a valid concern? >>> >>> Do you have any suggestions or recommended practices for ensuring >>> correct correspondence between the solution fields and the mesh when >>> switching from parallel to serial mode? >>> >>> Thanks, >>> >>> Xiaodong >>> >> >> >> -- >> Stefano >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefano.zampini at gmail.com Wed Apr 23 12:10:46 2025 From: stefano.zampini at gmail.com (Stefano Zampini) Date: Wed, 23 Apr 2025 20:10:46 +0300 Subject: [petsc-users] Mixing PETSc Parallelism with Serial MMG3D Workflow In-Reply-To: References: <42C5B676-DAE8-461C-8303-4A84FEDECFFC@joliv.et> Message-ID: If mmg does not support parallel communicators, we should handle it internally in the code, always use commself, and raise an error if there are two or more processes in the comm that have cEnd - cStart > 0 Il giorno mer 23 apr 2025 alle ore 20:05 neil liu ha scritto: > Thanks a lot. Pierre. > Do you have any suggestions to build a real serial DM from this gatherDM? > I tried several ways, which don't work. > DMClone? > > Thanks, > > On Wed, Apr 23, 2025 at 11:39?AM Pierre Jolivet wrote: > >> >> >> On 23 Apr 2025, at 5:31?PM, neil liu wrote: >> >> Thanks a lot, Stefano. >> I tried DMPlexGetGatherDM and DMPlexDistributeField. It can give what we >> expected. >> The final gatherDM is listed as follows, rank 0 has all information >> (which is right) while rank 1 has nothing. >> Then I tried to feed this gatherDM into adaptMMG on rank 0 only (it >> seems MMG works better than ParMMG, that is why I want MMG to be tried >> first). But it was stuck at collective petsc functions >> in DMAdaptMetric_Mmg_Plex(). By the way, the present work can work well >> with 1 rank. >> >> Do you have any suggestions ? Build a real serial DM? 
>> >> >> Yes, you need to change the underlying MPI_Comm as well, but I?m not sure >> if there is any user-facing API for doing this with a one-liner. >> >> Thanks, >> Pierre >> >> Thanks a lot. >> Xiaodong >> >> DM Object: Parallel Mesh 2 MPI processes >> type: plex >> Parallel Mesh in 3 dimensions: >> Number of 0-cells per rank: 56 0 >> Number of 1-cells per rank: 289 0 >> Number of 2-cells per rank: 452 0 >> Number of 3-cells per rank: 216 0 >> Labels: >> depth: 4 strata with value/size (0 (56), 1 (289), 2 (452), 3 (216)) >> celltype: 4 strata with value/size (0 (56), 1 (289), 3 (452), 6 (216)) >> Cell Sets: 2 strata with value/size (29 (152), 30 (64)) >> Face Sets: 3 strata with value/size (27 (8), 28 (40), 101 (20)) >> Edge Sets: 1 strata with value/size (10 (10)) >> Vertex Sets: 5 strata with value/size (27 (2), 28 (6), 29 (2), 101 (4), >> 106 (4)) >> Field Field_0: >> adjacency FEM >> >> >> >> On Fri, Apr 18, 2025 at 10:09?AM Stefano Zampini < >> stefano.zampini at gmail.com> wrote: >> >>> If you have a vector distributed on the original mesh, then you can use >>> the SF returned by DMPlexGetGatherDM and use that in a call to >>> DMPlexDistributeField >>> >>> Il giorno ven 18 apr 2025 alle ore 17:02 neil liu >>> ha scritto: >>> >>>> Dear PETSc developers and users, >>>> >>>> I am currently exploring the integration of MMG3D with PETSc. Since >>>> MMG3D supports only serial execution, I am planning to combine parallel and >>>> serial computing in my workflow. Specifically, after solving the linear >>>> systems in parallel using PETSc: >>>> >>>> 1. >>>> >>>> I intend to use DMPlexGetGatherDM to collect the entire mesh on the >>>> root process for input to MMG3D. >>>> 2. >>>> >>>> Additionally, I plan to gather the error field onto the root >>>> process using VecScatter. >>>> >>>> However, I am concerned that the nth value in the gathered error vector >>>> (step 2) may not correspond to the nth element in the gathered mesh (step >>>> 1). Is this a valid concern? >>>> >>>> Do you have any suggestions or recommended practices for ensuring >>>> correct correspondence between the solution fields and the mesh when >>>> switching from parallel to serial mode? >>>> >>>> Thanks, >>>> >>>> Xiaodong >>>> >>> >>> >>> -- >>> Stefano >>> >> >> -- Stefano -------------- next part -------------- An HTML attachment was scrubbed... URL: From liufield at gmail.com Wed Apr 23 12:27:13 2025 From: liufield at gmail.com (neil liu) Date: Wed, 23 Apr 2025 13:27:13 -0400 Subject: [petsc-users] Mixing PETSc Parallelism with Serial MMG3D Workflow In-Reply-To: References: <42C5B676-DAE8-461C-8303-4A84FEDECFFC@joliv.et> Message-ID: *MMG only supports serial execution, whereas ParMMG supports parallel mode (although ParMMG is not as robust or mature as MMG).* Given this, could you please provide some guidance on how to handle this in the code? Here are my current thoughts; please let know whether it could work as a temporary solution. We may only need to make minor modifications in the DMAdaptMetric_Mmg_Plex() subroutine. Specifically: - Allow all *collective PETSc functions* to run across all ranks as usual. - Restrict the *MMG-specific logic* to run *only on rank 0*, since MMG is serial-only. - Add a check before MMG is called to ensure that *only rank 0 holds mesh cells*, i.e., validate that cEnd - cStart > 0 only on rank 0. If more than one rank holds cells, raise a clear warning or error. 
On Wed, Apr 23, 2025 at 1:11?PM Stefano Zampini wrote: > If mmg does not support parallel communicators, we should handle it > internally in the code, always use commself, and raise an error if there > are two or more processes in the comm that have cEnd - cStart > 0 > > Il giorno mer 23 apr 2025 alle ore 20:05 neil liu ha > scritto: > >> Thanks a lot. Pierre. >> Do you have any suggestions to build a real serial DM from this gatherDM? >> I tried several ways, which don't work. >> DMClone? >> >> Thanks, >> >> On Wed, Apr 23, 2025 at 11:39?AM Pierre Jolivet wrote: >> >>> >>> >>> On 23 Apr 2025, at 5:31?PM, neil liu wrote: >>> >>> Thanks a lot, Stefano. >>> I tried DMPlexGetGatherDM and DMPlexDistributeField. It can give what we >>> expected. >>> The final gatherDM is listed as follows, rank 0 has all information >>> (which is right) while rank 1 has nothing. >>> Then I tried to feed this gatherDM into adaptMMG on rank 0 only (it >>> seems MMG works better than ParMMG, that is why I want MMG to be tried >>> first). But it was stuck at collective petsc functions >>> in DMAdaptMetric_Mmg_Plex(). By the way, the present work can work well >>> with 1 rank. >>> >>> Do you have any suggestions ? Build a real serial DM? >>> >>> >>> Yes, you need to change the underlying MPI_Comm as well, but I?m not >>> sure if there is any user-facing API for doing this with a one-liner. >>> >>> Thanks, >>> Pierre >>> >>> Thanks a lot. >>> Xiaodong >>> >>> DM Object: Parallel Mesh 2 MPI processes >>> type: plex >>> Parallel Mesh in 3 dimensions: >>> Number of 0-cells per rank: 56 0 >>> Number of 1-cells per rank: 289 0 >>> Number of 2-cells per rank: 452 0 >>> Number of 3-cells per rank: 216 0 >>> Labels: >>> depth: 4 strata with value/size (0 (56), 1 (289), 2 (452), 3 (216)) >>> celltype: 4 strata with value/size (0 (56), 1 (289), 3 (452), 6 (216)) >>> Cell Sets: 2 strata with value/size (29 (152), 30 (64)) >>> Face Sets: 3 strata with value/size (27 (8), 28 (40), 101 (20)) >>> Edge Sets: 1 strata with value/size (10 (10)) >>> Vertex Sets: 5 strata with value/size (27 (2), 28 (6), 29 (2), 101 >>> (4), 106 (4)) >>> Field Field_0: >>> adjacency FEM >>> >>> >>> >>> On Fri, Apr 18, 2025 at 10:09?AM Stefano Zampini < >>> stefano.zampini at gmail.com> wrote: >>> >>>> If you have a vector distributed on the original mesh, then you can use >>>> the SF returned by DMPlexGetGatherDM and use that in a call to >>>> DMPlexDistributeField >>>> >>>> Il giorno ven 18 apr 2025 alle ore 17:02 neil liu >>>> ha scritto: >>>> >>>>> Dear PETSc developers and users, >>>>> >>>>> I am currently exploring the integration of MMG3D with PETSc. Since >>>>> MMG3D supports only serial execution, I am planning to combine parallel and >>>>> serial computing in my workflow. Specifically, after solving the linear >>>>> systems in parallel using PETSc: >>>>> >>>>> 1. >>>>> >>>>> I intend to use DMPlexGetGatherDM to collect the entire mesh on >>>>> the root process for input to MMG3D. >>>>> 2. >>>>> >>>>> Additionally, I plan to gather the error field onto the root >>>>> process using VecScatter. >>>>> >>>>> However, I am concerned that the nth value in the gathered error >>>>> vector (step 2) may not correspond to the nth element in the gathered mesh >>>>> (step 1). Is this a valid concern? >>>>> >>>>> Do you have any suggestions or recommended practices for ensuring >>>>> correct correspondence between the solution fields and the mesh when >>>>> switching from parallel to serial mode? 
>>>>> >>>>> Thanks, >>>>> >>>>> Xiaodong >>>>> >>>> >>>> >>>> -- >>>> Stefano >>>> >>> >>> > > -- > Stefano > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bourdin at mcmaster.ca Wed Apr 23 13:19:46 2025 From: bourdin at mcmaster.ca (Blaise Bourdin) Date: Wed, 23 Apr 2025 18:19:46 +0000 Subject: [petsc-users] Staggered solver phase field In-Reply-To: References: <588721F9-71E0-4A4B-A6D5-502589BD3099@mcmaster.ca> Message-ID: <9EBF30CD-F182-4F23-9E57-F201BF634CAA@mcmaster.ca> An HTML attachment was scrubbed... URL: From knepley at gmail.com Wed Apr 23 14:17:55 2025 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 23 Apr 2025 15:17:55 -0400 Subject: [petsc-users] Staggered solver phase field In-Reply-To: <9EBF30CD-F182-4F23-9E57-F201BF634CAA@mcmaster.ca> References: <588721F9-71E0-4A4B-A6D5-502589BD3099@mcmaster.ca> <9EBF30CD-F182-4F23-9E57-F201BF634CAA@mcmaster.ca> Message-ID: On Wed, Apr 23, 2025 at 2:20?PM Blaise Bourdin wrote: > Hi, > > Typically, phase-field models are formulated as rate independent > unilateral minimization problems of the form > > u_i,\alpha_i = \argmin_{u,\alpha \le \alpha_{i-1}} F(u,\alpha) > > Where i denotes the time step. These are technically neither DAE nor ODE > since there is the only time derivative in the limit model would be a > constraint in the form \dot{\alpha} = 0. > > The most common numerical scheme is for each time step, to alternate > minimization with respect to u and \alpha. The main reason is that while F > is not convex jointly in u and \alpha, it is separately convex and > quadratic with respect to each variable, and because in the simpler models. > Alternate minimization is technically block Gauss-Seidel, I think. It is > not particularly efficient but very robust and unconditionally stable. > Joint minimization in (u,\alpha) is typically fragile (most of the > interesting physics in fracture mechanics corresponds to situation where a > family of critical points looses stability, i.e. the pair (u,\alpha) has to > evolve through a region of non-convexity of F. > > In general, is there an advantage in implementing a steady-state problem > as a TS vs. Solving its optimality conditions as a SNES, or minimizing the > associated energy using TAO? > I think TAO would actually be the better route here, unless you are using time as a sort of continuation variable. Thanks, Matt > Regards, > Blaise > > > > > On Apr 23, 2025, at 11:22?AM, PERRIER-MICHON Augustin < > augustin.perrier-michon at ensma.fr> wrote: > > [You don't often get email from augustin.perrier-michon at ensma.fr. Learn > why this is important at https://urldefense.us/v3/__https://aka.ms/LearnAboutSenderIdentification__;!!G_uCfscf7eWS!eqUxeR4o8hBQ2Yh-wHiExzrleqVtQiAbHr7UY_g_SNWhz0wsLcwEL7-Atx1Oo17r8l4hOKWLQ_nnIhH_7OAZ$ > > ] > > Caution: External email. > > > Dear Mr Bourdin, > > thank you for your answer and the remarks. > > I will performed time dependent multi-physics analysis including crack > propagation afterward. To anticipate this time dependency, I chose to > use TS solver instead of SNES or TAO. Plus, I thought that TS solver can > be used for quasi-static problems as well. > > In my previous simulations with a monolithic TS solver, I controlled the > time step during all the calculation. In my opinion I could do the same > in this framework and not let TS solvers adapt the step time. A > synchronization of the two solvers is necessary. 
> > With these informations, is this framework and especially TSSTEP > function compatible with my problem ? > > Thanks a lot > Augustin > > Le 2025-04-23 16:58, Blaise Bourdin a ?crit : > > Augustin, > > Out of curiosity, why TS and not SNES? At the very least the damage > problem should be a constrained minimization problem so that you can > model criticality with respect to the phase-field variable. > Secondly, I would be very wary about letting TS adapt the time step by > itself. In quasi-static phase-field fracture, the time step affects > the crack path, not the order of the approximation in time. I doubt > that any of the mechanisms in TS are appropriate here. > > You are welcome to dig into my implementation for inspiration, or > reuse it for your problems https://urldefense.us/v3/__https://github.com/bourdin/mef90__;!!G_uCfscf7eWS!eqUxeR4o8hBQ2Yh-wHiExzrleqVtQiAbHr7UY_g_SNWhz0wsLcwEL7-Atx1Oo17r8l4hOKWLQ_nnIhipn-bc$ > > > Blaise > > On Apr 23, 2025, at 10:20?AM, PERRIER-MICHON Augustin > wrote: > > [You don't often get email from augustin.perrier-michon at ensma.fr. > Learn why this is important at > https://urldefense.us/v3/__https://aka.ms/LearnAboutSenderIdentification__;!!G_uCfscf7eWS!eqUxeR4o8hBQ2Yh-wHiExzrleqVtQiAbHr7UY_g_SNWhz0wsLcwEL7-Atx1Oo17r8l4hOKWLQ_nnIhH_7OAZ$ > > ] > > Caution: External email. > > Dear Petsc users, > > I am currently dealing with finite element fracture analysis using > phase > field model. To perform such simulations, I have to develop a > staggered > solver : mechanical problem is solved at constant damage and damage > problem is solved at constant displacement. > > I created 2 TS solver and 2 DMPLEX for each "physics". > Each physics's system is built using TSSetIFunction and > TSSetIJacobian > with associated functions. > > The TS calls are performed with TSSTEP in order to respect staggered > solver scheme in iterative loops. > > My question : Is the using of TSSTEP function adapted to a staggered > solver ? How to use this function in my framework ? Have you got any > other suggestions or advices ? > > Thanks a lot > Best regards > > -- > Augustin PERRIER-MICHON > PhD student institut PPRIME > Physics and Mechanics of materials department > ISAE-ENSMA > T?l?port 2 > 1 Avenue Cl?ment ADER > 86361 Chasseneuil du Poitou- Futuroscope > Tel : +33-(0)-5-49-49-80-97 > > > ? > Canada Research Chair in Mathematical and Computational Aspects of > Solid Mechanics (Tier 1) > Professor, Department of Mathematics & Statistics > Hamilton Hall room 409A, McMaster University > 1280 Main Street West, Hamilton, Ontario L8S 4K1, Canada > https://urldefense.us/v3/__https://www.math.mcmaster.ca/bourdin__;!!G_uCfscf7eWS!eqUxeR4o8hBQ2Yh-wHiExzrleqVtQiAbHr7UY_g_SNWhz0wsLcwEL7-Atx1Oo17r8l4hOKWLQ_nnIlIFn4WI$ > > | +1 (905) 525 9140 ext. 27243 > > > ? > Canada Research Chair in Mathematical and Computational Aspects of > Solid Mechanics (Tier 1) > Professor, Department of Mathematics & Statistics > Hamilton Hall room 409A, McMaster University > 1280 Main Street West, Hamilton, Ontario L8S 4K1, Canada > https://urldefense.us/v3/__https://www.math.mcmaster.ca/bourdin__;!!G_uCfscf7eWS!eqUxeR4o8hBQ2Yh-wHiExzrleqVtQiAbHr7UY_g_SNWhz0wsLcwEL7-Atx1Oo17r8l4hOKWLQ_nnIlIFn4WI$ > | +1 > (905) 525 9140 ext. 27243 > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. 
-- Norbert Wiener https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!eqUxeR4o8hBQ2Yh-wHiExzrleqVtQiAbHr7UY_g_SNWhz0wsLcwEL7-Atx1Oo17r8l4hOKWLQ_nnIktprDZp$ -------------- next part -------------- An HTML attachment was scrubbed... URL: From pierre at joliv.et Wed Apr 23 15:32:47 2025 From: pierre at joliv.et (Pierre Jolivet) Date: Wed, 23 Apr 2025 22:32:47 +0200 Subject: [petsc-users] Mixing PETSc Parallelism with Serial MMG3D Workflow In-Reply-To: References: Message-ID: <81768C87-7DA4-42BE-A763-98F427EA9282@joliv.et> An HTML attachment was scrubbed... URL: From augustin.perrier-michon at ensma.fr Wed Apr 23 23:43:34 2025 From: augustin.perrier-michon at ensma.fr (PERRIER-MICHON Augustin) Date: Thu, 24 Apr 2025 06:43:34 +0200 Subject: [petsc-users] Staggered solver phase field In-Reply-To: References: <588721F9-71E0-4A4B-A6D5-502589BD3099@mcmaster.ca> <9EBF30CD-F182-4F23-9E57-F201BF634CAA@mcmaster.ca> Message-ID: <18c28e0cfede773c3d2fcce2322c5a51@ensma.fr> Dear all, I agree that TAO or SNES should be better solutions for fracture analysis using phase field models alone. In my case, the use of TS is not real a choice. It is motivating by later adding new time-dependent physics (like thermal or species diffusion). To be fair, I chose to use gradient damage model built in the framework of generalized standard materials instead of phase field models developped as a minimization problem. I obtained a coupled system of strong equations in displacement and damage. I am trying is to solve this coupled problem with a staggered scheme. I identified TSSTEP as a potential function to apply staggered physics solving. Is this promising ? Thanks Augustin Le 2025-04-23 21:17, Matthew Knepley a ?crit?: > On Wed, Apr 23, 2025 at 2:20?PM Blaise Bourdin > wrote: > >> Hi, >> >> Typically, phase-field models are formulated as rate independent >> unilateral minimization problems of the form >> >> u_i,\alpha_i = \argmin_{u,\alpha \le \alpha_{i-1}} F(u,\alpha) >> >> Where i denotes the time step. These are technically neither DAE nor >> ODE since there is the only time derivative in the limit model would >> be a constraint in the form \dot{\alpha} = 0. >> >> The most common numerical scheme is for each time step, to alternate >> minimization with respect to u and \alpha. The main reason is that >> while F is not convex jointly in u and \alpha, it is separately >> convex and quadratic with respect to each variable, and because in >> the simpler models. >> Alternate minimization is technically block Gauss-Seidel, I think. >> It is not particularly efficient but very robust and unconditionally >> stable. Joint minimization in (u,\alpha) is typically fragile (most >> of the interesting physics in fracture mechanics corresponds to >> situation where a family of critical points looses stability, i.e. >> the pair (u,\alpha) has to evolve through a region of non-convexity >> of F. >> >> In general, is there an advantage in implementing a steady-state >> problem as a TS vs. Solving its optimality conditions as a SNES, or >> minimizing the associated energy using TAO? > > I think TAO would actually be the better route here, unless you are > using time as a sort of continuation variable. > > Thanks, > > Matt > >> Regards, >> Blaise >> >> On Apr 23, 2025, at 11:22?AM, PERRIER-MICHON Augustin >> wrote: >> >> [You don't often get email from augustin.perrier-michon at ensma.fr. 
>> Learn why this is important at >> https://urldefense.us/v3/__https://aka.ms/LearnAboutSenderIdentification__;!!G_uCfscf7eWS!avkeFItwEAey6K_gtfFmi47RGpcntWLEnHYooiJLUAsD7p4k7c6bRKuEFgeKamuzP-HAjZNz-BldqS-LQaVh_oBU4V46O6xtmw$ [1] ] >> >> Caution: External email. >> >> Dear Mr Bourdin, >> >> thank you for your answer and the remarks. >> >> I will performed time dependent multi-physics analysis including >> crack >> propagation afterward. To anticipate this time dependency, I chose >> to >> use TS solver instead of SNES or TAO. Plus, I thought that TS solver >> can >> be used for quasi-static problems as well. >> >> In my previous simulations with a monolithic TS solver, I controlled >> the >> time step during all the calculation. In my opinion I could do the >> same >> in this framework and not let TS solvers adapt the step time. A >> synchronization of the two solvers is necessary. >> >> With these informations, is this framework and especially TSSTEP >> function compatible with my problem ? >> >> Thanks a lot >> Augustin >> >> Le 2025-04-23 16:58, Blaise Bourdin a ?crit : >> Augustin, >> >> Out of curiosity, why TS and not SNES? At the very least the damage >> problem should be a constrained minimization problem so that you can >> model criticality with respect to the phase-field variable. >> Secondly, I would be very wary about letting TS adapt the time step >> by >> itself. In quasi-static phase-field fracture, the time step affects >> the crack path, not the order of the approximation in time. I doubt >> that any of the mechanisms in TS are appropriate here. >> >> You are welcome to dig into my implementation for inspiration, or >> reuse it for your problems https://urldefense.us/v3/__https://github.com/bourdin/mef90__;!!G_uCfscf7eWS!avkeFItwEAey6K_gtfFmi47RGpcntWLEnHYooiJLUAsD7p4k7c6bRKuEFgeKamuzP-HAjZNz-BldqS-LQaVh_oBU4V5nLW2neA$ [2] >> >> Blaise >> >> On Apr 23, 2025, at 10:20?AM, PERRIER-MICHON Augustin >> wrote: >> >> [You don't often get email from augustin.perrier-michon at ensma.fr. >> Learn why this is important at >> https://urldefense.us/v3/__https://aka.ms/LearnAboutSenderIdentification__;!!G_uCfscf7eWS!avkeFItwEAey6K_gtfFmi47RGpcntWLEnHYooiJLUAsD7p4k7c6bRKuEFgeKamuzP-HAjZNz-BldqS-LQaVh_oBU4V46O6xtmw$ [1] ] >> >> Caution: External email. >> >> Dear Petsc users, >> >> I am currently dealing with finite element fracture analysis using >> phase >> field model. To perform such simulations, I have to develop a >> staggered >> solver : mechanical problem is solved at constant damage and damage >> problem is solved at constant displacement. >> >> I created 2 TS solver and 2 DMPLEX for each "physics". >> Each physics's system is built using TSSetIFunction and >> TSSetIJacobian >> with associated functions. >> >> The TS calls are performed with TSSTEP in order to respect staggered >> solver scheme in iterative loops. >> >> My question : Is the using of TSSTEP function adapted to a staggered >> solver ? How to use this function in my framework ? Have you got any >> other suggestions or advices ? >> >> Thanks a lot >> Best regards >> >> -- >> Augustin PERRIER-MICHON >> PhD student institut PPRIME >> Physics and Mechanics of materials department >> ISAE-ENSMA >> T?l?port 2 >> 1 Avenue Cl?ment ADER >> 86361 Chasseneuil du Poitou- Futuroscope >> Tel : +33-(0)-5-49-49-80-97 >> >> ? 
>> Canada Research Chair in Mathematical and Computational Aspects of >> Solid Mechanics (Tier 1) >> Professor, Department of Mathematics & Statistics >> Hamilton Hall room 409A, McMaster University >> 1280 Main Street West, Hamilton, Ontario L8S 4K1, Canada >> https://urldefense.us/v3/__https://www.math.mcmaster.ca/bourdin__;!!G_uCfscf7eWS!avkeFItwEAey6K_gtfFmi47RGpcntWLEnHYooiJLUAsD7p4k7c6bRKuEFgeKamuzP-HAjZNz-BldqS-LQaVh_oBU4V6NTW13Gw$ [3] | +1 (905) 525 9140 ext. >> 27243 > > ? > Canada Research Chair in Mathematical and Computational Aspects of > Solid Mechanics (Tier 1) > Professor, Department of Mathematics & Statistics > Hamilton Hall room 409A, McMaster University > 1280 Main Street West, Hamilton, Ontario L8S 4K1, Canada > https://urldefense.us/v3/__https://www.math.mcmaster.ca/bourdin__;!!G_uCfscf7eWS!avkeFItwEAey6K_gtfFmi47RGpcntWLEnHYooiJLUAsD7p4k7c6bRKuEFgeKamuzP-HAjZNz-BldqS-LQaVh_oBU4V6NTW13Gw$ [3] | +1 (905) 525 9140 ext. > 27243 > > -- > > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which > their experiments lead. > -- Norbert Wiener > > https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!avkeFItwEAey6K_gtfFmi47RGpcntWLEnHYooiJLUAsD7p4k7c6bRKuEFgeKamuzP-HAjZNz-BldqS-LQaVh_oBU4V7pYKsbVw$ [4] > > > Links: > ------ > [1] > https://urldefense.us/v3/__https://aka.ms/LearnAboutSenderIdentification__;!!G_uCfscf7eWS!dgihWmlH-Av_CJxXFBFTi9fkSSD7ymojR59alAozp30nnqI3OdqNX6wqPpuZ0noKSRGJ81DMvhfcxqM0217B6-vz$ > [2] > https://urldefense.us/v3/__https://github.com/bourdin/mef90__;!!G_uCfscf7eWS!dgihWmlH-Av_CJxXFBFTi9fkSSD7ymojR59alAozp30nnqI3OdqNX6wqPpuZ0noKSRGJ81DMvhfcxqM025KC_T1P$ > [3] > https://urldefense.us/v3/__https://www.math.mcmaster.ca/bourdin__;!!G_uCfscf7eWS!dgihWmlH-Av_CJxXFBFTi9fkSSD7ymojR59alAozp30nnqI3OdqNX6wqPpuZ0noKSRGJ81DMvhfcxqM022gaWYZ_$ > [4] https://urldefense.us/v3/__http://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!avkeFItwEAey6K_gtfFmi47RGpcntWLEnHYooiJLUAsD7p4k7c6bRKuEFgeKamuzP-HAjZNz-BldqS-LQaVh_oBU4V6__7kPsQ$ From liufield at gmail.com Thu Apr 24 11:07:53 2025 From: liufield at gmail.com (neil liu) Date: Thu, 24 Apr 2025 12:07:53 -0400 Subject: [petsc-users] Mixing PETSc Parallelism with Serial MMG3D Workflow In-Reply-To: <81768C87-7DA4-42BE-A763-98F427EA9282@joliv.et> References: <81768C87-7DA4-42BE-A763-98F427EA9282@joliv.et> Message-ID: Thanks a lot, Pierre. It works now. Another question is, with the present strategy, after the adapting, we will get a pseudo DM object, which has all information on rank 0 and nothing on all other ranks. Then I tried to use DMPlexdistribute to partition it and the partitioned DMs seem correct. Is it safe to do things like this? Thanks, Xiaodong On Wed, Apr 23, 2025 at 4:33?PM Pierre Jolivet wrote: > > > On 23 Apr 2025, at 7:28?PM, neil liu wrote: > > ? > > *MMG only supports serial execution, whereas ParMMG supports parallel mode > (although ParMMG is not as robust or mature as MMG).* > Given this, could you please provide some guidance on how to handle this > in the code? > > Here are my current thoughts; please let know whether it could work as > a temporary solution. > > That could work, > Pierre > > We may only need to make minor modifications in the > DMAdaptMetric_Mmg_Plex() subroutine. Specifically: > > - > > Allow all *collective PETSc functions* to run across all ranks as > usual. > - > > Restrict the *MMG-specific logic* to run *only on rank 0*, since MMG > is serial-only. 
> - > > Add a check before MMG is called to ensure that *only rank 0 holds > mesh cells*, i.e., validate that cEnd - cStart > 0 only on rank 0. If > more than one rank holds cells, raise a clear warning or error. > > > On Wed, Apr 23, 2025 at 1:11?PM Stefano Zampini > wrote: > >> If mmg does not support parallel communicators, we should handle it >> internally in the code, always use commself, and raise an error if there >> are two or more processes in the comm that have cEnd - cStart > 0 >> >> Il giorno mer 23 apr 2025 alle ore 20:05 neil liu >> ha scritto: >> >>> Thanks a lot. Pierre. >>> Do you have any suggestions to build a real serial DM from this >>> gatherDM? >>> I tried several ways, which don't work. >>> DMClone? >>> >>> Thanks, >>> >>> On Wed, Apr 23, 2025 at 11:39?AM Pierre Jolivet wrote: >>> >>>> >>>> >>>> On 23 Apr 2025, at 5:31?PM, neil liu wrote: >>>> >>>> Thanks a lot, Stefano. >>>> I tried DMPlexGetGatherDM and DMPlexDistributeField. It can give what >>>> we expected. >>>> The final gatherDM is listed as follows, rank 0 has all information >>>> (which is right) while rank 1 has nothing. >>>> Then I tried to feed this gatherDM into adaptMMG on rank 0 only (it >>>> seems MMG works better than ParMMG, that is why I want MMG to be tried >>>> first). But it was stuck at collective petsc functions >>>> in DMAdaptMetric_Mmg_Plex(). By the way, the present work can work well >>>> with 1 rank. >>>> >>>> Do you have any suggestions ? Build a real serial DM? >>>> >>>> >>>> Yes, you need to change the underlying MPI_Comm as well, but I?m not >>>> sure if there is any user-facing API for doing this with a one-liner. >>>> >>>> Thanks, >>>> Pierre >>>> >>>> Thanks a lot. >>>> Xiaodong >>>> >>>> DM Object: Parallel Mesh 2 MPI processes >>>> type: plex >>>> Parallel Mesh in 3 dimensions: >>>> Number of 0-cells per rank: 56 0 >>>> Number of 1-cells per rank: 289 0 >>>> Number of 2-cells per rank: 452 0 >>>> Number of 3-cells per rank: 216 0 >>>> Labels: >>>> depth: 4 strata with value/size (0 (56), 1 (289), 2 (452), 3 (216)) >>>> celltype: 4 strata with value/size (0 (56), 1 (289), 3 (452), 6 (216)) >>>> Cell Sets: 2 strata with value/size (29 (152), 30 (64)) >>>> Face Sets: 3 strata with value/size (27 (8), 28 (40), 101 (20)) >>>> Edge Sets: 1 strata with value/size (10 (10)) >>>> Vertex Sets: 5 strata with value/size (27 (2), 28 (6), 29 (2), 101 >>>> (4), 106 (4)) >>>> Field Field_0: >>>> adjacency FEM >>>> >>>> >>>> >>>> On Fri, Apr 18, 2025 at 10:09?AM Stefano Zampini < >>>> stefano.zampini at gmail.com> wrote: >>>> >>>>> If you have a vector distributed on the original mesh, then you can >>>>> use the SF returned by DMPlexGetGatherDM and use that in a call to >>>>> DMPlexDistributeField >>>>> >>>>> Il giorno ven 18 apr 2025 alle ore 17:02 neil liu >>>>> ha scritto: >>>>> >>>>>> Dear PETSc developers and users, >>>>>> >>>>>> I am currently exploring the integration of MMG3D with PETSc. Since >>>>>> MMG3D supports only serial execution, I am planning to combine parallel and >>>>>> serial computing in my workflow. Specifically, after solving the linear >>>>>> systems in parallel using PETSc: >>>>>> >>>>>> 1. >>>>>> >>>>>> I intend to use DMPlexGetGatherDM to collect the entire mesh on >>>>>> the root process for input to MMG3D. >>>>>> 2. >>>>>> >>>>>> Additionally, I plan to gather the error field onto the root >>>>>> process using VecScatter. 
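A rough C sketch of the gather / serial-adapt / redistribute workflow described in the two numbered steps above, plus the DMPlexDistribute step confirmed later in the thread. None of it is taken from Xiaodong's code: metricSec and metric are placeholders for whatever PetscSection and local Vec hold the error/metric field, and the DMAdaptMetric() call is exactly the step that still needs the rank-0 / PETSC_COMM_SELF handling that Pierre and Stefano discuss.

// ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#include <petscdmplex.h>

static PetscErrorCode AdaptThroughRankZero(DM dm, PetscSection metricSec, Vec metric, DM *dmAdapted)
{
  PetscSF      gatherSF;
  DM           gatherDM, adaptedDM = NULL, redistDM = NULL;
  PetscSection gatherSec;
  Vec          gatherMetric;
  PetscInt     cStart, cEnd;
  PetscMPIInt  rank;

  PetscFunctionBeginUser;
  PetscCallMPI(MPI_Comm_rank(PetscObjectComm((PetscObject)dm), &rank));
  /* 1. Gather the whole mesh onto rank 0; gatherSF records where each point went */
  PetscCall(DMPlexGetGatherDM(dm, &gatherSF, &gatherDM));
  /* 2. Move the metric field with the same SF so values stay attached to the same points */
  PetscCall(PetscSectionCreate(PetscObjectComm((PetscObject)dm), &gatherSec));
  PetscCall(VecCreate(PetscObjectComm((PetscObject)dm), &gatherMetric));
  PetscCall(DMPlexDistributeField(dm, gatherSF, metricSec, metric, gatherSec, gatherMetric));
  /* 3. After the gather only rank 0 should own cells; check it before the serial adaptor runs */
  PetscCall(DMPlexGetHeightStratum(gatherDM, 0, &cStart, &cEnd));
  PetscCheck(rank == 0 || cEnd - cStart == 0, PETSC_COMM_SELF, PETSC_ERR_PLIB, "Expected all cells on rank 0 after gather");
  /* This is the call that still needs the serial-MMG handling discussed above */
  PetscCall(DMAdaptMetric(gatherDM, gatherMetric, NULL, NULL, &adaptedDM));
  /* 4. Repartition the adapted mesh over all ranks */
  PetscCall(DMPlexDistribute(adaptedDM, 0, NULL, &redistDM));
  *dmAdapted = redistDM ? redistDM : adaptedDM;
  if (redistDM) PetscCall(DMDestroy(&adaptedDM));
  PetscCall(VecDestroy(&gatherMetric));
  PetscCall(PetscSectionDestroy(&gatherSec));
  PetscCall(DMDestroy(&gatherDM));
  PetscCall(PetscSFDestroy(&gatherSF));
  PetscFunctionReturn(PETSC_SUCCESS);
}
// ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Carrying the field through DMPlexDistributeField with the gather SF is what keeps the nth metric value attached to the nth gathered point, which is the correspondence concern raised just below.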
>>>>>> >>>>>> However, I am concerned that the nth value in the gathered error >>>>>> vector (step 2) may not correspond to the nth element in the gathered mesh >>>>>> (step 1). Is this a valid concern? >>>>>> >>>>>> Do you have any suggestions or recommended practices for ensuring >>>>>> correct correspondence between the solution fields and the mesh when >>>>>> switching from parallel to serial mode? >>>>>> >>>>>> Thanks, >>>>>> >>>>>> Xiaodong >>>>>> >>>>> >>>>> >>>>> -- >>>>> Stefano >>>>> >>>> >>>> >> >> -- >> Stefano >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pierre at joliv.et Thu Apr 24 11:27:37 2025 From: pierre at joliv.et (Pierre Jolivet) Date: Thu, 24 Apr 2025 18:27:37 +0200 Subject: [petsc-users] Mixing PETSc Parallelism with Serial MMG3D Workflow In-Reply-To: References: Message-ID: <9E6E9B83-AD96-426A-8916-3944D07E4C1E@joliv.et> An HTML attachment was scrubbed... URL: From C.Klaij at marin.nl Fri Apr 25 03:30:58 2025 From: C.Klaij at marin.nl (Klaij, Christiaan) Date: Fri, 25 Apr 2025 08:30:58 +0000 Subject: [petsc-users] problem with nested logging Message-ID: We recently upgraded from 3.19.4 to 3.22.4 but face the problem below with the nested logging. Any ideas? Chris [1]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- [1]PETSC ERROR: General MPI error [1]PETSC ERROR: MPI error 1 MPI_ERR_BUFFER: invalid buffer pointer [1]PETSC ERROR: See https://urldefense.us/v3/__https://petsc.org/release/faq/__;!!G_uCfscf7eWS!biVlk6PKXoJvq5oVlmWdVJfW9tXv-JlwuWr3zg3jI5u1_jo8rvtZpEYnHO5RjdBqQEoqpqlJ3nusrFGM3UaqHzc$ for trouble shooting. [1]PETSC ERROR: Petsc Release Version 3.22.4, Mar 01, 2025 [1]PETSC ERROR: refresco with 2 MPI process(es) and PETSC_ARCH on marclus3login2 by jwindt Fri Apr 25 08:52:30 2025 [1]PETSC ERROR: Configure options: --prefix=/home/jwindt/cmake_builds/refresco/install-libs-gnu --with-mpi-dir=/cm/shared/apps/openmpi/gcc/4.0.2 --with-x=0 --with-mpe=0 --with-debugging=0 --download-superlu_dist=https://urldefense.us/v3/__https://updates.marin.nl/refresco/libs/superlu_dist-8.1.2.tar.gz__;!!G_uCfscf7eWS!biVlk6PKXoJvq5oVlmWdVJfW9tXv-JlwuWr3zg3jI5u1_jo8rvtZpEYnHO5RjdBqQEoqpqlJ3nusrFGM21-2D-o$ --with-blaslapack-dir=/cm/shared/apps/intel/oneapi/mkl/2021.4.0 --download-parmetis=https://urldefense.us/v3/__https://updates.marin.nl/refresco/libs/parmetis-4.0.3-p9.tar.gz__;!!G_uCfscf7eWS!biVlk6PKXoJvq5oVlmWdVJfW9tXv-JlwuWr3zg3jI5u1_jo8rvtZpEYnHO5RjdBqQEoqpqlJ3nusrFGMW0lYHko$ --download-metis=https://urldefense.us/v3/__https://updates.marin.nl/refresco/libs/metis-5.1.0-p11.tar.gz__;!!G_uCfscf7eWS!biVlk6PKXoJvq5oVlmWdVJfW9tXv-JlwuWr3zg3jI5u1_jo8rvtZpEYnHO5RjdBqQEoqpqlJ3nusrFGMbSrIiUg$ --with-packages-build-dir=/home/jwindt/cmake_builds/refresco/build-libs-gnu/superbuild --with-ssl=0 --with-shared-libraries=1 CFLAGS="-std=gnu11 -Wall -funroll-all-loops -O3 -DNDEBUG" CXXFLAGS="-std=gnu++14 -Wall -funroll-all-loops -O3 -DNDEBUG " COPTFLAGS="-std=gnu11 -Wall -funroll-all-loops -O3 -DNDEBUG" CXXOPTFLAGS="-std=gnu++14 -Wall -funroll-all-loops -O3 -DNDEBUG " FCFLAGS="-Wall -funroll-all-loops -ffree-line-length-0 -Wno-maybe-uninitialized -Wno-target-lifetime -Wno-unused-function -O3 -DNDEBUG" F90FLAGS="-Wall -funroll-all-loops -ffree-line-length-0 -Wno-maybe-uninitialized -Wno-target-lifetime -Wno-unused-function -O3 -DNDEBUG" FOPTFLAGS="-Wall -funroll-all-loops -ffree-line-length-0 -Wno-maybe-uninitialized -Wno-target-lifetime -Wno-unused-function -O3 -DNDEBUG" [1]PETSC ERROR: 
#1 PetscLogNestedTreePrint() at /home/jwindt/cmake_builds/refresco/build-libs-gnu/superbuild/petsc/src/src/sys/logging/handler/impls/nested/xmlviewer.c:330 [1]PETSC ERROR: #2 PetscLogNestedTreePrint() at /home/jwindt/cmake_builds/refresco/build-libs-gnu/superbuild/petsc/src/src/sys/logging/handler/impls/nested/xmlviewer.c:384 [1]PETSC ERROR: #3 PetscLogNestedTreePrint() at /home/jwindt/cmake_builds/refresco/build-libs-gnu/superbuild/petsc/src/src/sys/logging/handler/impls/nested/xmlviewer.c:384 [1]PETSC ERROR: #4 PetscLogNestedTreePrintTop() at /home/jwindt/cmake_builds/refresco/build-libs-gnu/superbuild/petsc/src/src/sys/logging/handler/impls/nested/xmlviewer.c:420 [1]PETSC ERROR: #5 PetscLogHandlerView_Nested_XML() at /home/jwindt/cmake_builds/refresco/build-libs-gnu/superbuild/petsc/src/src/sys/logging/handler/impls/nested/xmlviewer.c:443 [1]PETSC ERROR: #6 PetscLogHandlerView_Nested() at /home/jwindt/cmake_builds/refresco/build-libs-gnu/superbuild/petsc/src/src/sys/logging/handler/impls/nested/lognested.c:405 [1]PETSC ERROR: #7 PetscLogHandlerView() at /home/jwindt/cmake_builds/refresco/build-libs-gnu/superbuild/petsc/src/src/sys/logging/handler/interface/loghandler.c:342 [1]PETSC ERROR: #8 PetscLogView() at /home/jwindt/cmake_builds/refresco/build-libs-gnu/superbuild/petsc/src/src/sys/logging/plog.c:2040 -------------------------------------------------------------------------- MPI_ABORT was invoked on rank 1 in communicator MPI_COMM_WORLD with errorcode 98. NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes. You may or may not see output from other processes, depending on exactly when Open MPI kills them. -------------------------------------------------------------------------- dr. ir. Christiaan Klaij | senior researcher | Research & Development | CFD Development T +31 317 49 33 44 | C.Klaij at marin.nl | https://urldefense.us/v3/__http://www.marin.nl__;!!G_uCfscf7eWS!biVlk6PKXoJvq5oVlmWdVJfW9tXv-JlwuWr3zg3jI5u1_jo8rvtZpEYnHO5RjdBqQEoqpqlJ3nusrFGMejNgRbQ$ -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image898428.png Type: image/png Size: 5004 bytes Desc: image898428.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image714119.png Type: image/png Size: 487 bytes Desc: image714119.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image138727.png Type: image/png Size: 504 bytes Desc: image138727.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image471093.png Type: image/png Size: 482 bytes Desc: image471093.png URL: From derek.teaney at stonybrook.edu Fri Apr 25 12:23:13 2025 From: derek.teaney at stonybrook.edu (Derek Teaney) Date: Fri, 25 Apr 2025 13:23:13 -0400 Subject: [petsc-users] VecSet and VecAssemblyBegin Message-ID: Hi, I was under the (mistaken) impression that one does not need to due a VecAssemblyBegin etc following a VecSet, e.g. VecSet(dn_local, 0.); VecAssemblyBegin(dn_local) ; VecAssemblyEnd(dn_local) ; Seems to give different results without the Assembly. Thanks for clarifying, Derek -- ------------------------------------------------------------------------ Derek Teaney Professor and Graduate Program Director Dept. 
of Physics & Astronomy Stony Brook University Stony Brook, NY 11794-3800 Tel: (631) 632-4489 Fax: (631) 632-9718 e-mail: Derek.Teaney at stonybrook.edu ------------------------------------------------------------------------ -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at petsc.dev Fri Apr 25 12:26:40 2025 From: bsmith at petsc.dev (Barry Smith) Date: Fri, 25 Apr 2025 13:26:40 -0400 Subject: [petsc-users] VecSet and VecAssemblyBegin In-Reply-To: References: Message-ID: <26297B8C-82CE-409A-A5E2-74B4D3A272DD@petsc.dev> You absolutely should not need to do an assembly after a VecSet. Please post a full reproducer that demonstrates the problem. Barry > On Apr 25, 2025, at 1:23?PM, Derek Teaney via petsc-users wrote: > > Hi, > > I was under the (mistaken) impression that one does not need to due a VecAssemblyBegin etc following a VecSet, e.g. > > VecSet(dn_local, 0.); > VecAssemblyBegin(dn_local) ; > VecAssemblyEnd(dn_local) ; > > Seems to give different results without the Assembly. > > Thanks for clarifying, > > Derek > > -- > ------------------------------------------------------------------------ > Derek Teaney > Professor and Graduate Program Director > Dept. of Physics & Astronomy > Stony Brook University > Stony Brook, NY 11794-3800 > Tel: (631) 632-4489 > Fax: (631) 632-9718 > e-mail: Derek.Teaney at stonybrook.edu > ------------------------------------------------------------------------ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at petsc.dev Fri Apr 25 16:30:34 2025 From: bsmith at petsc.dev (Barry Smith) Date: Fri, 25 Apr 2025 17:30:34 -0400 Subject: [petsc-users] VecSet and VecAssemblyBegin In-Reply-To: References: <26297B8C-82CE-409A-A5E2-74B4D3A272DD@petsc.dev> Message-ID: <01B28818-27CC-4D29-9C21-C9F5087BD461@petsc.dev> Technically you should not be calling VecSet() with any outstanding arrays but it will probably be fine. Even though GetArray() does not copy the array values; both GetArray/RestoreArray and Set track the current "state" of the vector and that count might get confused if they are used improperly. > On Apr 25, 2025, at 4:42?PM, Derek Teaney wrote: > > Thanks, I am working on providing a standalone code. A related question is - if I did have a view of a local vector provided by: > > data_node ***dn_array; > DMDAVecGetArray(domain, dn_local, &dn_array); > > Can I assume through multiple calls to VecSet that the view dn_array is valid, or should this be restored, between calls. > > Thanks, > > Derek > > On Fri, Apr 25, 2025 at 1:26?PM Barry Smith > wrote: >> >> You absolutely should not need to do an assembly after a VecSet. Please post a full reproducer that demonstrates the problem. >> >> Barry >> >> >>> On Apr 25, 2025, at 1:23?PM, Derek Teaney via petsc-users > wrote: >>> >>> Hi, >>> >>> I was under the (mistaken) impression that one does not need to due a VecAssemblyBegin etc following a VecSet, e.g. >>> >>> VecSet(dn_local, 0.); >>> VecAssemblyBegin(dn_local) ; >>> VecAssemblyEnd(dn_local) ; >>> >>> Seems to give different results without the Assembly. >>> >>> Thanks for clarifying, >>> >>> Derek >>> >>> -- >>> ------------------------------------------------------------------------ >>> Derek Teaney >>> Professor and Graduate Program Director >>> Dept. 
of Physics & Astronomy >>> Stony Brook University >>> Stony Brook, NY 11794-3800 >>> Tel: (631) 632-4489 >>> Fax: (631) 632-9718 >>> e-mail: Derek.Teaney at stonybrook.edu >>> ------------------------------------------------------------------------ >>> >> > > > > -- > ------------------------------------------------------------------------ > Derek Teaney > Professor and Graduate Program Director > Dept. of Physics & Astronomy > Stony Brook University > Stony Brook, NY 11794-3800 > Tel: (631) 632-4489 > Fax: (631) 632-9718 > e-mail: Derek.Teaney at stonybrook.edu > ------------------------------------------------------------------------ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From derek.teaney at stonybrook.edu Sat Apr 26 08:26:50 2025 From: derek.teaney at stonybrook.edu (Derek Teaney) Date: Sat, 26 Apr 2025 09:26:50 -0400 Subject: [petsc-users] VecSet and VecAssemblyBegin In-Reply-To: <01B28818-27CC-4D29-9C21-C9F5087BD461@petsc.dev> References: <26297B8C-82CE-409A-A5E2-74B4D3A272DD@petsc.dev> <01B28818-27CC-4D29-9C21-C9F5087BD461@petsc.dev> Message-ID: Thanks Barry -- this solved the issue. "probably will be fine" was fine with 3.17 and maybe 3.19, but definitely not fine with 3.20. For others the faulty logic is: GetArray(dn_local, &dn) //WRONG Loop over cases VecSet(dn_local, 0.) Fill up dn LocalToGlobal RestoreArray Where as one should do: Loop over cases VecSet(dn_local, 0.) GetArray(dn_local, &dn) // RIGHT Fill up dn LocalToGlobal RestoreArray So, while nothing is copied, if I think of dn as a copy (and not a view) the logic will always be correct. Now I have a related question about "Technically you should not be calling VecSet() with any outstanding arrays but it will probably be fine." What about GlobalToLocal? should I always GetArray for the local array after the GlobalToLocal So, is this also bad logic: GetArray(n_local, &n) Loop over cases: GlobalToLocal(n_global, &n_local) do stuff with n LocalToGlobal(n_local, n_global) RestoreArray as opposed to Loop over cases: GlobalToLocal(n_global, &n_local) GetArray(n_local, &n) do stuff with n LocalToGlobal(n_local, n_global) RestoreArray Thanks again for all your help, Derek On Fri, Apr 25, 2025 at 5:30?PM Barry Smith wrote: > > Technically you should not be calling VecSet() with any outstanding > arrays but it will probably be fine. > > Even though GetArray() does not copy the array values; both > GetArray/RestoreArray and Set track the current "state" of the vector and > that count might get confused if they are used improperly. > > > > On Apr 25, 2025, at 4:42?PM, Derek Teaney > wrote: > > Thanks, I am working on providing a standalone code. A related question > is - if I did have a view of a local vector provided by: > > data_node ***dn_array; > DMDAVecGetArray(domain, dn_local, &dn_array); > > Can I assume through multiple calls to VecSet that the view dn_array is > valid, or should this be restored, between calls. > > Thanks, > > Derek > > On Fri, Apr 25, 2025 at 1:26?PM Barry Smith wrote: > >> >> You absolutely should not need to do an assembly after a VecSet. >> Please post a full reproducer that demonstrates the problem. >> >> Barry >> >> >> On Apr 25, 2025, at 1:23?PM, Derek Teaney via petsc-users < >> petsc-users at mcs.anl.gov> wrote: >> >> Hi, >> >> I was under the (mistaken) impression that one does not need to due a >> VecAssemblyBegin etc following a VecSet, e.g. 
>> >> VecSet(dn_local, 0.); >> VecAssemblyBegin(dn_local) ; >> VecAssemblyEnd(dn_local) ; >> >> Seems to give different results without the Assembly. >> >> Thanks for clarifying, >> >> Derek >> >> -- >> ------------------------------------------------------------------------ >> Derek Teaney >> Professor and Graduate Program Director >> Dept. of Physics & Astronomy >> Stony Brook University >> Stony Brook, NY 11794-3800 >> Tel: (631) 632-4489 >> Fax: (631) 632-9718 >> e-mail: Derek.Teaney at stonybrook.edu >> ------------------------------------------------------------------------ >> >> >> > > -- > ------------------------------------------------------------------------ > Derek Teaney > Professor and Graduate Program Director > Dept. of Physics & Astronomy > Stony Brook University > Stony Brook, NY 11794-3800 > Tel: (631) 632-4489 > Fax: (631) 632-9718 > e-mail: Derek.Teaney at stonybrook.edu > ------------------------------------------------------------------------ > > > -- ------------------------------------------------------------------------ Derek Teaney Professor and Graduate Program Director Dept. of Physics & Astronomy Stony Brook University Stony Brook, NY 11794-3800 Tel: (631) 632-4489 Fax: (631) 632-9718 e-mail: Derek.Teaney at stonybrook.edu ------------------------------------------------------------------------ -------------- next part -------------- An HTML attachment was scrubbed... URL: From junchao.zhang at gmail.com Sat Apr 26 08:49:57 2025 From: junchao.zhang at gmail.com (Junchao Zhang) Date: Sat, 26 Apr 2025 08:49:57 -0500 Subject: [petsc-users] VecSet and VecAssemblyBegin In-Reply-To: References: <26297B8C-82CE-409A-A5E2-74B4D3A272DD@petsc.dev> <01B28818-27CC-4D29-9C21-C9F5087BD461@petsc.dev> Message-ID: On Sat, Apr 26, 2025 at 8:27?AM Derek Teaney via petsc-users < petsc-users at mcs.anl.gov> wrote: > Thanks Barry -- this solved the issue. > > "probably will be fine" was fine with 3.17 and maybe 3.19, but definitely > not fine with 3.20. > > For others the faulty logic is: > > GetArray(dn_local, &dn) //WRONG > Loop over cases > VecSet(dn_local, 0.) > Fill up dn > LocalToGlobal > RestoreArray > > Where as one should do: > > Loop over cases > VecSet(dn_local, 0.) > GetArray(dn_local, &dn) // RIGHT > Fill up dn > LocalToGlobal > RestoreArray > The above two pieces of code are both wrong, in my view. > > So, while nothing is copied, if I think of dn as a copy (and not a view) > the logic will always be correct. > > Now I have a related question about "Technically you should not be > calling VecSet() with any outstanding arrays but it will probably be fine." > What about GlobalToLocal? should I always GetArray for the local array > after the GlobalToLocal > > So, is this also bad logic: > > GetArray(n_local, &n) > Loop over cases: > GlobalToLocal(n_global, &n_local) > do stuff with n > LocalToGlobal(n_local, n_global) > RestoreArray > > as opposed to > > Loop over cases: > GlobalToLocal(n_global, &n_local) > GetArray(n_local, &n) > do stuff with n > LocalToGlobal(n_local, n_global) > RestoreArray > This is also wrong. I think the rule here is: GetArray() puts the vector in an interim state. One shall not call any vector routines (ex. LocalToGlobal/GlobalToLocal) before RestoreArray(). You can only operate on the array instead. > Thanks again for all your help, > > Derek > > > > On Fri, Apr 25, 2025 at 5:30?PM Barry Smith wrote: > >> >> Technically you should not be calling VecSet() with any outstanding >> arrays but it will probably be fine. 
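Spelled out as compilable C, the "RIGHT" ordering above looks like the sketch below. It is only an illustration, assuming a 2d DMDA with a single scalar dof (Derek's data_node struct replaced by PetscScalar); FillAndScatter and the fill formula are invented. The point is purely the call order: VecSet() and DMLocalToGlobal() are issued only while no raw array view of the vector is outstanding.

// ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#include <petscdmda.h>

static PetscErrorCode FillAndScatter(DM da, Vec dn_local, Vec dn_global, PetscInt ncases)
{
  PetscFunctionBeginUser;
  for (PetscInt c = 0; c < ncases; c++) {
    PetscScalar **dn;
    PetscInt      xs, ys, xm, ym;

    PetscCall(VecSet(dn_local, 0.0));              /* no array checked out here */
    PetscCall(DMDAVecGetArray(da, dn_local, &dn)); /* take the view ... */
    PetscCall(DMDAGetCorners(da, &xs, &ys, NULL, &xm, &ym, NULL));
    for (PetscInt j = ys; j < ys + ym; j++)
      for (PetscInt i = xs; i < xs + xm; i++) dn[j][i] = (PetscScalar)(i + j + c); /* "fill up dn" */
    PetscCall(DMDAVecRestoreArray(da, dn_local, &dn)); /* ... and hand it back before any Vec-level call */
    PetscCall(DMLocalToGlobal(da, dn_local, INSERT_VALUES, dn_global));
  }
  PetscFunctionReturn(PETSC_SUCCESS);
}
// ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The same ordering applies to the GlobalToLocal question: scatter first, then take the array, then restore it before the next Vec-level call, as Junchao's rule below states.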
>> >> Even though GetArray() does not copy the array values; both >> GetArray/RestoreArray and Set track the current "state" of the vector and >> that count might get confused if they are used improperly. >> >> >> >> On Apr 25, 2025, at 4:42?PM, Derek Teaney >> wrote: >> >> Thanks, I am working on providing a standalone code. A related question >> is - if I did have a view of a local vector provided by: >> >> data_node ***dn_array; >> DMDAVecGetArray(domain, dn_local, &dn_array); >> >> Can I assume through multiple calls to VecSet that the view dn_array is >> valid, or should this be restored, between calls. >> >> Thanks, >> >> Derek >> >> On Fri, Apr 25, 2025 at 1:26?PM Barry Smith wrote: >> >>> >>> You absolutely should not need to do an assembly after a VecSet. >>> Please post a full reproducer that demonstrates the problem. >>> >>> Barry >>> >>> >>> On Apr 25, 2025, at 1:23?PM, Derek Teaney via petsc-users < >>> petsc-users at mcs.anl.gov> wrote: >>> >>> Hi, >>> >>> I was under the (mistaken) impression that one does not need to due a >>> VecAssemblyBegin etc following a VecSet, e.g. >>> >>> VecSet(dn_local, 0.); >>> VecAssemblyBegin(dn_local) ; >>> VecAssemblyEnd(dn_local) ; >>> >>> Seems to give different results without the Assembly. >>> >>> Thanks for clarifying, >>> >>> Derek >>> >>> -- >>> ------------------------------------------------------------------------ >>> Derek Teaney >>> Professor and Graduate Program Director >>> Dept. of Physics & Astronomy >>> Stony Brook University >>> Stony Brook, NY 11794-3800 >>> Tel: (631) 632-4489 >>> Fax: (631) 632-9718 >>> e-mail: Derek.Teaney at stonybrook.edu >>> ------------------------------------------------------------------------ >>> >>> >>> >> >> -- >> ------------------------------------------------------------------------ >> Derek Teaney >> Professor and Graduate Program Director >> Dept. of Physics & Astronomy >> Stony Brook University >> Stony Brook, NY 11794-3800 >> Tel: (631) 632-4489 >> Fax: (631) 632-9718 >> e-mail: Derek.Teaney at stonybrook.edu >> ------------------------------------------------------------------------ >> >> >> > > -- > ------------------------------------------------------------------------ > Derek Teaney > Professor and Graduate Program Director > Dept. of Physics & Astronomy > Stony Brook University > Stony Brook, NY 11794-3800 > Tel: (631) 632-4489 > Fax: (631) 632-9718 > e-mail: Derek.Teaney at stonybrook.edu > ------------------------------------------------------------------------ > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From junchao.zhang at gmail.com Sat Apr 26 08:51:57 2025 From: junchao.zhang at gmail.com (Junchao Zhang) Date: Sat, 26 Apr 2025 08:51:57 -0500 Subject: [petsc-users] problem with nested logging In-Reply-To: References: Message-ID: Toby (Cc'ed) might know it. Or could you provide an example? --Junchao Zhang On Fri, Apr 25, 2025 at 3:31?AM Klaij, Christiaan via petsc-users < petsc-users at mcs.anl.gov> wrote: > We recently upgraded from 3.19.4 to 3.22.4 but face the problem below with > the nested logging. Any ideas? 
> > Chris > > > [1]PETSC ERROR: --------------------- Error Message > -------------------------------------------------------------- > [1]PETSC ERROR: General MPI error > [1]PETSC ERROR: MPI error 1 MPI_ERR_BUFFER: invalid buffer pointer > [1]PETSC ERROR: See https://urldefense.us/v3/__https://petsc.org/release/faq/__;!!G_uCfscf7eWS!f64jR8FwcQdcTxYskijm4-CJ5UqGKAC-HRru1lRJXB558lncnm34NquLCRjFYnofEAp_ylqjbPxnKjJxuRyJXJ0UQMzn$ > > for trouble shooting. > [1]PETSC ERROR: Petsc Release Version 3.22.4, Mar 01, 2025 > [1]PETSC ERROR: refresco with 2 MPI process(es) and PETSC_ARCH on > marclus3login2 by jwindt Fri Apr 25 08:52:30 2025 > [1]PETSC ERROR: Configure options: > --prefix=/home/jwindt/cmake_builds/refresco/install-libs-gnu > --with-mpi-dir=/cm/shared/apps/openmpi/gcc/4.0.2 --with-x=0 --with-mpe=0 > --with-debugging=0 --download-superlu_dist= > https://urldefense.us/v3/__https://updates.marin.nl/refresco/libs/superlu_dist-8.1.2.tar.gz__;!!G_uCfscf7eWS!f64jR8FwcQdcTxYskijm4-CJ5UqGKAC-HRru1lRJXB558lncnm34NquLCRjFYnofEAp_ylqjbPxnKjJxuRyJXBC6iON-$ > > --with-blaslapack-dir=/cm/shared/apps/intel/oneapi/mkl/2021.4.0 > --download-parmetis= > https://urldefense.us/v3/__https://updates.marin.nl/refresco/libs/parmetis-4.0.3-p9.tar.gz__;!!G_uCfscf7eWS!f64jR8FwcQdcTxYskijm4-CJ5UqGKAC-HRru1lRJXB558lncnm34NquLCRjFYnofEAp_ylqjbPxnKjJxuRyJXNfbyQij$ > > --download-metis= > https://urldefense.us/v3/__https://updates.marin.nl/refresco/libs/metis-5.1.0-p11.tar.gz__;!!G_uCfscf7eWS!f64jR8FwcQdcTxYskijm4-CJ5UqGKAC-HRru1lRJXB558lncnm34NquLCRjFYnofEAp_ylqjbPxnKjJxuRyJXAjPuW1M$ > > --with-packages-build-dir=/home/jwindt/cmake_builds/refresco/build-libs-gnu/superbuild > --with-ssl=0 --with-shared-libraries=1 CFLAGS="-std=gnu11 -Wall > -funroll-all-loops -O3 -DNDEBUG" CXXFLAGS="-std=gnu++14 -Wall > -funroll-all-loops -O3 -DNDEBUG " COPTFLAGS="-std=gnu11 -Wall > -funroll-all-loops -O3 -DNDEBUG" CXXOPTFLAGS="-std=gnu++14 -Wall > -funroll-all-loops -O3 -DNDEBUG " FCFLAGS="-Wall -funroll-all-loops > -ffree-line-length-0 -Wno-maybe-uninitialized -Wno-target-lifetime > -Wno-unused-function -O3 -DNDEBUG" F90FLAGS="-Wall -funroll-all-loops > -ffree-line-length-0 -Wno-maybe-uninitialized -Wno-target-lifetime > -Wno-unused-function -O3 -DNDEBUG" FOPTFLAGS="-Wall -funroll-all-loops > -ffree-line-length-0 -Wno-maybe-uninitialized -Wno-target-lifetime > -Wno-unused-function -O3 -DNDEBUG" > [1]PETSC ERROR: #1 PetscLogNestedTreePrint() at > /home/jwindt/cmake_builds/refresco/build-libs-gnu/superbuild/petsc/src/src/sys/logging/handler/impls/nested/xmlviewer.c:330 > [1]PETSC ERROR: #2 PetscLogNestedTreePrint() at > /home/jwindt/cmake_builds/refresco/build-libs-gnu/superbuild/petsc/src/src/sys/logging/handler/impls/nested/xmlviewer.c:384 > [1]PETSC ERROR: #3 PetscLogNestedTreePrint() at > /home/jwindt/cmake_builds/refresco/build-libs-gnu/superbuild/petsc/src/src/sys/logging/handler/impls/nested/xmlviewer.c:384 > [1]PETSC ERROR: #4 PetscLogNestedTreePrintTop() at > /home/jwindt/cmake_builds/refresco/build-libs-gnu/superbuild/petsc/src/src/sys/logging/handler/impls/nested/xmlviewer.c:420 > [1]PETSC ERROR: #5 PetscLogHandlerView_Nested_XML() at > /home/jwindt/cmake_builds/refresco/build-libs-gnu/superbuild/petsc/src/src/sys/logging/handler/impls/nested/xmlviewer.c:443 > [1]PETSC ERROR: #6 PetscLogHandlerView_Nested() at > /home/jwindt/cmake_builds/refresco/build-libs-gnu/superbuild/petsc/src/src/sys/logging/handler/impls/nested/lognested.c:405 > [1]PETSC ERROR: #7 PetscLogHandlerView() at > 
/home/jwindt/cmake_builds/refresco/build-libs-gnu/superbuild/petsc/src/src/sys/logging/handler/interface/loghandler.c:342 > [1]PETSC ERROR: #8 PetscLogView() at > /home/jwindt/cmake_builds/refresco/build-libs-gnu/superbuild/petsc/src/src/sys/logging/plog.c:2040 > -------------------------------------------------------------------------- > MPI_ABORT was invoked on rank 1 in communicator MPI_COMM_WORLD > with errorcode 98. > > NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes. > You may or may not see output from other processes, depending on > exactly when Open MPI kills them. > -------------------------------------------------------------------------- > dr. ir.???? Christiaan Klaij > | senior researcher | Research & Development | CFD Development > T +31 317 49 33 44 <+31%20317%2049%2033%2044> | C.Klaij at marin.nl | > https://urldefense.us/v3/__http://www.marin.nl__;!!G_uCfscf7eWS!f64jR8FwcQdcTxYskijm4-CJ5UqGKAC-HRru1lRJXB558lncnm34NquLCRjFYnofEAp_ylqjbPxnKjJxuRyJXPv1evZa$ > > [image: Facebook] > > [image: LinkedIn] > > [image: YouTube] > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image898428.png Type: image/png Size: 5004 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image714119.png Type: image/png Size: 487 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image138727.png Type: image/png Size: 504 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image471093.png Type: image/png Size: 482 bytes Desc: not available URL: From junchao.zhang at gmail.com Sat Apr 26 10:41:25 2025 From: junchao.zhang at gmail.com (Junchao Zhang) Date: Sat, 26 Apr 2025 10:41:25 -0500 Subject: [petsc-users] VecSet and VecAssemblyBegin In-Reply-To: References: <26297B8C-82CE-409A-A5E2-74B4D3A272DD@petsc.dev> <01B28818-27CC-4D29-9C21-C9F5087BD461@petsc.dev> Message-ID: Yes, that is correct. --Junchao Zhang On Sat, Apr 26, 2025 at 10:35?AM Derek Teaney wrote: > Ok -- got it -- thanks so just do the RestoreArray before the final step, > e.g. > > Loop over cases > VecSet(dn_local, 0.) > GetArray(dn_local, &dn) // RIGHT > Fill up dn > Restore Array > LocalToGlobal > > > On Sat, Apr 26, 2025 at 9:50?AM Junchao Zhang > wrote: > >> On Sat, Apr 26, 2025 at 8:27?AM Derek Teaney via petsc-users < >> petsc-users at mcs.anl.gov> wrote: >> >>> Thanks Barry -- this solved the issue. >>> >>> "probably will be fine" was fine with 3.17 and maybe 3.19, but >>> definitely not fine with 3.20. >>> >>> For others the faulty logic is: >>> >>> GetArray(dn_local, &dn) //WRONG >>> Loop over cases >>> VecSet(dn_local, 0.) >>> Fill up dn >>> LocalToGlobal >>> RestoreArray >>> >>> Where as one should do: >>> >>> Loop over cases >>> VecSet(dn_local, 0.) >>> GetArray(dn_local, &dn) // RIGHT >>> Fill up dn >>> LocalToGlobal >>> RestoreArray >>> >> The above two pieces of code are both wrong, in my view. >> >>> >>> >> So, while nothing is copied, if I think of dn as a copy (and not a view) >>> the logic will always be correct. >>> >>> Now I have a related question about "Technically you should not be >>> calling VecSet() with any outstanding arrays but it will probably be fine." >>> What about GlobalToLocal? 
should I always GetArray for the local array >>> after the GlobalToLocal >>> >>> So, is this also bad logic: >>> >>> GetArray(n_local, &n) >>> Loop over cases: >>> GlobalToLocal(n_global, &n_local) >>> do stuff with n >>> LocalToGlobal(n_local, n_global) >>> RestoreArray >>> >>> as opposed to >>> >>> Loop over cases: >>> GlobalToLocal(n_global, &n_local) >>> GetArray(n_local, &n) >>> do stuff with n >>> LocalToGlobal(n_local, n_global) >>> RestoreArray >>> >> This is also wrong. >> I think the rule here is: GetArray() puts the vector in an interim state. >> One shall not call any vector routines (ex. LocalToGlobal/GlobalToLocal) >> before RestoreArray(). You can only operate on the array instead. >> >> >>> >> Thanks again for all your help, >>> >>> Derek >>> >>> >>> >>> On Fri, Apr 25, 2025 at 5:30?PM Barry Smith wrote: >>> >>>> >>>> Technically you should not be calling VecSet() with any outstanding >>>> arrays but it will probably be fine. >>>> >>>> Even though GetArray() does not copy the array values; both >>>> GetArray/RestoreArray and Set track the current "state" of the vector and >>>> that count might get confused if they are used improperly. >>>> >>>> >>>> >>>> On Apr 25, 2025, at 4:42?PM, Derek Teaney >>>> wrote: >>>> >>>> Thanks, I am working on providing a standalone code. A related >>>> question is - if I did have a view of a local vector provided by: >>>> >>>> data_node ***dn_array; >>>> DMDAVecGetArray(domain, dn_local, &dn_array); >>>> >>>> Can I assume through multiple calls to VecSet that the view dn_array >>>> is valid, or should this be restored, between calls. >>>> >>>> Thanks, >>>> >>>> Derek >>>> >>>> On Fri, Apr 25, 2025 at 1:26?PM Barry Smith wrote: >>>> >>>>> >>>>> You absolutely should not need to do an assembly after a VecSet. >>>>> Please post a full reproducer that demonstrates the problem. >>>>> >>>>> Barry >>>>> >>>>> >>>>> On Apr 25, 2025, at 1:23?PM, Derek Teaney via petsc-users < >>>>> petsc-users at mcs.anl.gov> wrote: >>>>> >>>>> Hi, >>>>> >>>>> I was under the (mistaken) impression that one does not need to due a >>>>> VecAssemblyBegin etc following a VecSet, e.g. >>>>> >>>>> VecSet(dn_local, 0.); >>>>> VecAssemblyBegin(dn_local) ; >>>>> VecAssemblyEnd(dn_local) ; >>>>> >>>>> Seems to give different results without the Assembly. >>>>> >>>>> Thanks for clarifying, >>>>> >>>>> Derek >>>>> >>>>> -- >>>>> >>>>> ------------------------------------------------------------------------ >>>>> Derek Teaney >>>>> Professor and Graduate Program Director >>>>> Dept. of Physics & Astronomy >>>>> Stony Brook University >>>>> Stony Brook, NY 11794-3800 >>>>> Tel: (631) 632-4489 >>>>> Fax: (631) 632-9718 >>>>> e-mail: Derek.Teaney at stonybrook.edu >>>>> >>>>> ------------------------------------------------------------------------ >>>>> >>>>> >>>>> >>>> >>>> -- >>>> ------------------------------------------------------------------------ >>>> Derek Teaney >>>> Professor and Graduate Program Director >>>> Dept. of Physics & Astronomy >>>> Stony Brook University >>>> Stony Brook, NY 11794-3800 >>>> Tel: (631) 632-4489 >>>> Fax: (631) 632-9718 >>>> e-mail: Derek.Teaney at stonybrook.edu >>>> ------------------------------------------------------------------------ >>>> >>>> >>>> >>> >>> -- >>> ------------------------------------------------------------------------ >>> Derek Teaney >>> Professor and Graduate Program Director >>> Dept. 
of Physics & Astronomy >>> Stony Brook University >>> Stony Brook, NY 11794-3800 >>> Tel: (631) 632-4489 >>> Fax: (631) 632-9718 >>> e-mail: Derek.Teaney at stonybrook.edu >>> ------------------------------------------------------------------------ >>> >>> > > -- > ------------------------------------------------------------------------ > Derek Teaney > Professor and Graduate Program Director > Dept. of Physics & Astronomy > Stony Brook University > Stony Brook, NY 11794-3800 > Tel: (631) 632-4489 > Fax: (631) 632-9718 > e-mail: Derek.Teaney at stonybrook.edu > ------------------------------------------------------------------------ > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Sat Apr 26 12:09:57 2025 From: knepley at gmail.com (Matthew Knepley) Date: Sat, 26 Apr 2025 13:09:57 -0400 Subject: [petsc-users] VecSet and VecAssemblyBegin In-Reply-To: References: <26297B8C-82CE-409A-A5E2-74B4D3A272DD@petsc.dev> <01B28818-27CC-4D29-9C21-C9F5087BD461@petsc.dev> Message-ID: Junchao, should we put guards in VecSet() and LocalToGlobal() for the array lock? Thanks, Matt On Sat, Apr 26, 2025 at 11:50?AM Junchao Zhang wrote: > Yes, that is correct. > > --Junchao Zhang > > > On Sat, Apr 26, 2025 at 10:35?AM Derek Teaney > wrote: > >> Ok -- got it -- thanks so just do the RestoreArray before the final step, >> e.g. >> >> Loop over cases >> VecSet(dn_local, 0.) >> GetArray(dn_local, &dn) // RIGHT >> Fill up dn >> Restore Array >> LocalToGlobal >> >> >> On Sat, Apr 26, 2025 at 9:50?AM Junchao Zhang >> wrote: >> >>> On Sat, Apr 26, 2025 at 8:27?AM Derek Teaney via petsc-users < >>> petsc-users at mcs.anl.gov> wrote: >>> >>>> Thanks Barry -- this solved the issue. >>>> >>>> "probably will be fine" was fine with 3.17 and maybe 3.19, but >>>> definitely not fine with 3.20. >>>> >>>> For others the faulty logic is: >>>> >>>> GetArray(dn_local, &dn) //WRONG >>>> Loop over cases >>>> VecSet(dn_local, 0.) >>>> Fill up dn >>>> LocalToGlobal >>>> RestoreArray >>>> >>>> Where as one should do: >>>> >>>> Loop over cases >>>> VecSet(dn_local, 0.) >>>> GetArray(dn_local, &dn) // RIGHT >>>> Fill up dn >>>> LocalToGlobal >>>> RestoreArray >>>> >>> The above two pieces of code are both wrong, in my view. >>> >>>> >>>> >>> So, while nothing is copied, if I think of dn as a copy (and not a view) >>>> the logic will always be correct. >>>> >>>> Now I have a related question about "Technically you should not be >>>> calling VecSet() with any outstanding arrays but it will probably be fine." >>>> What about GlobalToLocal? should I always GetArray for the local >>>> array after the GlobalToLocal >>>> >>>> So, is this also bad logic: >>>> >>>> GetArray(n_local, &n) >>>> Loop over cases: >>>> GlobalToLocal(n_global, &n_local) >>>> do stuff with n >>>> LocalToGlobal(n_local, n_global) >>>> RestoreArray >>>> >>>> as opposed to >>>> >>>> Loop over cases: >>>> GlobalToLocal(n_global, &n_local) >>>> GetArray(n_local, &n) >>>> do stuff with n >>>> LocalToGlobal(n_local, n_global) >>>> RestoreArray >>>> >>> This is also wrong. >>> I think the rule here is: GetArray() puts the vector in an interim >>> state. One shall not call any vector routines (ex. >>> LocalToGlobal/GlobalToLocal) before RestoreArray(). You can only operate >>> on the array instead. 
>>> >>> >>>> >>> Thanks again for all your help, >>>> >>>> Derek >>>> >>>> >>>> >>>> On Fri, Apr 25, 2025 at 5:30?PM Barry Smith wrote: >>>> >>>>> >>>>> Technically you should not be calling VecSet() with any outstanding >>>>> arrays but it will probably be fine. >>>>> >>>>> Even though GetArray() does not copy the array values; both >>>>> GetArray/RestoreArray and Set track the current "state" of the vector and >>>>> that count might get confused if they are used improperly. >>>>> >>>>> >>>>> >>>>> On Apr 25, 2025, at 4:42?PM, Derek Teaney >>>>> wrote: >>>>> >>>>> Thanks, I am working on providing a standalone code. A related >>>>> question is - if I did have a view of a local vector provided by: >>>>> >>>>> data_node ***dn_array; >>>>> DMDAVecGetArray(domain, dn_local, &dn_array); >>>>> >>>>> Can I assume through multiple calls to VecSet that the view dn_array >>>>> is valid, or should this be restored, between calls. >>>>> >>>>> Thanks, >>>>> >>>>> Derek >>>>> >>>>> On Fri, Apr 25, 2025 at 1:26?PM Barry Smith wrote: >>>>> >>>>>> >>>>>> You absolutely should not need to do an assembly after a VecSet. >>>>>> Please post a full reproducer that demonstrates the problem. >>>>>> >>>>>> Barry >>>>>> >>>>>> >>>>>> On Apr 25, 2025, at 1:23?PM, Derek Teaney via petsc-users < >>>>>> petsc-users at mcs.anl.gov> wrote: >>>>>> >>>>>> Hi, >>>>>> >>>>>> I was under the (mistaken) impression that one does not need to due a >>>>>> VecAssemblyBegin etc following a VecSet, e.g. >>>>>> >>>>>> VecSet(dn_local, 0.); >>>>>> VecAssemblyBegin(dn_local) ; >>>>>> VecAssemblyEnd(dn_local) ; >>>>>> >>>>>> Seems to give different results without the Assembly. >>>>>> >>>>>> Thanks for clarifying, >>>>>> >>>>>> Derek >>>>>> >>>>>> -- >>>>>> >>>>>> ------------------------------------------------------------------------ >>>>>> Derek Teaney >>>>>> Professor and Graduate Program Director >>>>>> Dept. of Physics & Astronomy >>>>>> Stony Brook University >>>>>> Stony Brook, NY 11794-3800 >>>>>> Tel: (631) 632-4489 >>>>>> Fax: (631) 632-9718 >>>>>> e-mail: Derek.Teaney at stonybrook.edu >>>>>> >>>>>> ------------------------------------------------------------------------ >>>>>> >>>>>> >>>>>> >>>>> >>>>> -- >>>>> >>>>> ------------------------------------------------------------------------ >>>>> Derek Teaney >>>>> Professor and Graduate Program Director >>>>> Dept. of Physics & Astronomy >>>>> Stony Brook University >>>>> Stony Brook, NY 11794-3800 >>>>> Tel: (631) 632-4489 >>>>> Fax: (631) 632-9718 >>>>> e-mail: Derek.Teaney at stonybrook.edu >>>>> >>>>> ------------------------------------------------------------------------ >>>>> >>>>> >>>>> >>>> >>>> -- >>>> ------------------------------------------------------------------------ >>>> Derek Teaney >>>> Professor and Graduate Program Director >>>> Dept. of Physics & Astronomy >>>> Stony Brook University >>>> Stony Brook, NY 11794-3800 >>>> Tel: (631) 632-4489 >>>> Fax: (631) 632-9718 >>>> e-mail: Derek.Teaney at stonybrook.edu >>>> ------------------------------------------------------------------------ >>>> >>>> >> >> -- >> ------------------------------------------------------------------------ >> Derek Teaney >> Professor and Graduate Program Director >> Dept. 
of Physics & Astronomy >> Stony Brook University >> Stony Brook, NY 11794-3800 >> Tel: (631) 632-4489 >> Fax: (631) 632-9718 >> e-mail: Derek.Teaney at stonybrook.edu >> ------------------------------------------------------------------------ >> >> -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!aIL-dQ6HGo2zkrc1_qEB3XbGFGhwsiPuJRK68V4emqBz31A_jzsgI-sJS8ki4Sx6hcqI_Ya3tzpzdlhh2o96$ -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at petsc.dev Sat Apr 26 12:46:36 2025 From: bsmith at petsc.dev (Barry Smith) Date: Sat, 26 Apr 2025 13:46:36 -0400 Subject: [petsc-users] VecSet and VecAssemblyBegin In-Reply-To: References: <26297B8C-82CE-409A-A5E2-74B4D3A272DD@petsc.dev> <01B28818-27CC-4D29-9C21-C9F5087BD461@petsc.dev> Message-ID: The guards are supposed to be universal, not one-offs that are put into specific locations. How come no errors with debug mode are detected in this situation? Barry > On Apr 26, 2025, at 1:09?PM, Matthew Knepley wrote: > > Junchao, should we put guards in VecSet() and LocalToGlobal() for the array lock? > > Thanks, > > Matt > > On Sat, Apr 26, 2025 at 11:50?AM Junchao Zhang > wrote: >> Yes, that is correct. >> >> --Junchao Zhang >> >> >> On Sat, Apr 26, 2025 at 10:35?AM Derek Teaney > wrote: >>> Ok -- got it -- thanks so just do the RestoreArray before the final step, e.g. >>> >>> Loop over cases >>> VecSet(dn_local, 0.) >>> GetArray(dn_local, &dn) // RIGHT >>> Fill up dn >>> Restore Array >>> LocalToGlobal >>> >>> >>> On Sat, Apr 26, 2025 at 9:50?AM Junchao Zhang > wrote: >>>> On Sat, Apr 26, 2025 at 8:27?AM Derek Teaney via petsc-users > wrote: >>>>> Thanks Barry -- this solved the issue. >>>>> >>>>> "probably will be fine" was fine with 3.17 and maybe 3.19, but definitely not fine with 3.20. >>>>> >>>>> For others the faulty logic is: >>>>> >>>>> GetArray(dn_local, &dn) //WRONG >>>>> Loop over cases >>>>> VecSet(dn_local, 0.) >>>>> Fill up dn >>>>> LocalToGlobal >>>>> RestoreArray >>>>> >>>>> Where as one should do: >>>>> >>>>> Loop over cases >>>>> VecSet(dn_local, 0.) >>>>> GetArray(dn_local, &dn) // RIGHT >>>>> Fill up dn >>>>> LocalToGlobal >>>>> RestoreArray >>>> The above two pieces of code are both wrong, in my view. >>>>> >>>>> So, while nothing is copied, if I think of dn as a copy (and not a view) the logic will always be correct. >>>>> >>>>> Now I have a related question about "Technically you should not be calling VecSet() with any outstanding arrays but it will probably be fine." >>>>> What about GlobalToLocal? should I always GetArray for the local array after the GlobalToLocal >>>>> >>>>> So, is this also bad logic: >>>>> >>>>> GetArray(n_local, &n) >>>>> Loop over cases: >>>>> GlobalToLocal(n_global, &n_local) >>>>> do stuff with n >>>>> LocalToGlobal(n_local, n_global) >>>>> RestoreArray >>>>> >>>>> as opposed to >>>>> >>>>> Loop over cases: >>>>> GlobalToLocal(n_global, &n_local) >>>>> GetArray(n_local, &n) >>>>> do stuff with n >>>>> LocalToGlobal(n_local, n_global) >>>>> RestoreArray >>>> This is also wrong. >>>> I think the rule here is: GetArray() puts the vector in an interim state. One shall not call any vector routines (ex. LocalToGlobal/GlobalToLocal) before RestoreArray(). You can only operate on the array instead. 
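Since Barry asked earlier in the thread for a standalone reproducer, and asks here why a debug build stays silent, below is the kind of minimal test one could run: it deliberately keeps the view from DMDAVecGetArray() outstanding while calling VecSet() and DMLocalToGlobal(). Everything in it is invented for illustration (an 8x8 DMDA, one dof), and it makes no claim about what PETSc will or will not report; it is just something to run under a debug build with every call wrapped in PetscCall().

// ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#include <petscdmda.h>

int main(int argc, char **argv)
{
  DM            da;
  Vec           loc, glob;
  PetscScalar **a;

  PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));
  PetscCall(DMDACreate2d(PETSC_COMM_WORLD, DM_BOUNDARY_NONE, DM_BOUNDARY_NONE, DMDA_STENCIL_BOX, 8, 8, PETSC_DECIDE, PETSC_DECIDE, 1, 1, NULL, NULL, &da));
  PetscCall(DMSetFromOptions(da));
  PetscCall(DMSetUp(da));
  PetscCall(DMCreateLocalVector(da, &loc));
  PetscCall(DMCreateGlobalVector(da, &glob));

  PetscCall(DMDAVecGetArray(da, loc, &a)); /* view taken ... */
  PetscCall(VecSet(loc, 0.0));             /* ... then Vec routines called anyway: the pattern under discussion */
  PetscCall(DMLocalToGlobal(da, loc, INSERT_VALUES, glob));
  PetscCall(DMDAVecRestoreArray(da, loc, &a));

  PetscCall(VecDestroy(&loc));
  PetscCall(VecDestroy(&glob));
  PetscCall(DMDestroy(&da));
  PetscCall(PetscFinalize());
  return 0;
}
// ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~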
>>>> >>>>> >>>>> Thanks again for all your help, >>>>> >>>>> Derek >>>>> >>>>> >>>>> >>>>> On Fri, Apr 25, 2025 at 5:30?PM Barry Smith > wrote: >>>>>> >>>>>> Technically you should not be calling VecSet() with any outstanding arrays but it will probably be fine. >>>>>> >>>>>> Even though GetArray() does not copy the array values; both GetArray/RestoreArray and Set track the current "state" of the vector and that count might get confused if they are used improperly. >>>>>> >>>>>> >>>>>> >>>>>>> On Apr 25, 2025, at 4:42?PM, Derek Teaney > wrote: >>>>>>> >>>>>>> Thanks, I am working on providing a standalone code. A related question is - if I did have a view of a local vector provided by: >>>>>>> >>>>>>> data_node ***dn_array; >>>>>>> DMDAVecGetArray(domain, dn_local, &dn_array); >>>>>>> >>>>>>> Can I assume through multiple calls to VecSet that the view dn_array is valid, or should this be restored, between calls. >>>>>>> >>>>>>> Thanks, >>>>>>> >>>>>>> Derek >>>>>>> >>>>>>> On Fri, Apr 25, 2025 at 1:26?PM Barry Smith > wrote: >>>>>>>> >>>>>>>> You absolutely should not need to do an assembly after a VecSet. Please post a full reproducer that demonstrates the problem. >>>>>>>> >>>>>>>> Barry >>>>>>>> >>>>>>>> >>>>>>>>> On Apr 25, 2025, at 1:23?PM, Derek Teaney via petsc-users > wrote: >>>>>>>>> >>>>>>>>> Hi, >>>>>>>>> >>>>>>>>> I was under the (mistaken) impression that one does not need to due a VecAssemblyBegin etc following a VecSet, e.g. >>>>>>>>> >>>>>>>>> VecSet(dn_local, 0.); >>>>>>>>> VecAssemblyBegin(dn_local) ; >>>>>>>>> VecAssemblyEnd(dn_local) ; >>>>>>>>> >>>>>>>>> Seems to give different results without the Assembly. >>>>>>>>> >>>>>>>>> Thanks for clarifying, >>>>>>>>> >>>>>>>>> Derek >>>>>>>>> >>>>>>>>> -- >>>>>>>>> ------------------------------------------------------------------------ >>>>>>>>> Derek Teaney >>>>>>>>> Professor and Graduate Program Director >>>>>>>>> Dept. of Physics & Astronomy >>>>>>>>> Stony Brook University >>>>>>>>> Stony Brook, NY 11794-3800 >>>>>>>>> Tel: (631) 632-4489 >>>>>>>>> Fax: (631) 632-9718 >>>>>>>>> e-mail: Derek.Teaney at stonybrook.edu >>>>>>>>> ------------------------------------------------------------------------ >>>>>>>>> >>>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> -- >>>>>>> ------------------------------------------------------------------------ >>>>>>> Derek Teaney >>>>>>> Professor and Graduate Program Director >>>>>>> Dept. of Physics & Astronomy >>>>>>> Stony Brook University >>>>>>> Stony Brook, NY 11794-3800 >>>>>>> Tel: (631) 632-4489 >>>>>>> Fax: (631) 632-9718 >>>>>>> e-mail: Derek.Teaney at stonybrook.edu >>>>>>> ------------------------------------------------------------------------ >>>>>>> >>>>>> >>>>> >>>>> >>>>> >>>>> -- >>>>> ------------------------------------------------------------------------ >>>>> Derek Teaney >>>>> Professor and Graduate Program Director >>>>> Dept. of Physics & Astronomy >>>>> Stony Brook University >>>>> Stony Brook, NY 11794-3800 >>>>> Tel: (631) 632-4489 >>>>> Fax: (631) 632-9718 >>>>> e-mail: Derek.Teaney at stonybrook.edu >>>>> ------------------------------------------------------------------------ >>>>> >>> >>> >>> >>> -- >>> ------------------------------------------------------------------------ >>> Derek Teaney >>> Professor and Graduate Program Director >>> Dept. 
of Physics & Astronomy >>> Stony Brook University >>> Stony Brook, NY 11794-3800 >>> Tel: (631) 632-4489 >>> Fax: (631) 632-9718 >>> e-mail: Derek.Teaney at stonybrook.edu >>> ------------------------------------------------------------------------ >>> > > > > -- > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. > -- Norbert Wiener > > https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!fE14MPtF_3o08eM3ty1elbmmVfVpJTt3JiSuxrBbKBXPwXIHLTC4l7wwrUT21_bE5_FN9mM0rKi3SAGYKstfsrg$ -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Sat Apr 26 12:49:38 2025 From: knepley at gmail.com (Matthew Knepley) Date: Sat, 26 Apr 2025 13:49:38 -0400 Subject: [petsc-users] VecSet and VecAssemblyBegin In-Reply-To: References: <26297B8C-82CE-409A-A5E2-74B4D3A272DD@petsc.dev> <01B28818-27CC-4D29-9C21-C9F5087BD461@petsc.dev> Message-ID: On Sat, Apr 26, 2025 at 1:46?PM Barry Smith wrote: > > The guards are supposed to be universal, not one-offs that are put into > specific locations. How come no errors with debug mode are detected in this > situation?] > Hmm, VecSet() has PetscCall(VecSetErrorIfLocked(x, 1)); which should error. Is Derek not checking error codes? Thanks, Matt > Barry > > > On Apr 26, 2025, at 1:09?PM, Matthew Knepley wrote: > > Junchao, should we put guards in VecSet() and LocalToGlobal() for the > array lock? > > Thanks, > > Matt > > On Sat, Apr 26, 2025 at 11:50?AM Junchao Zhang > wrote: > >> Yes, that is correct. >> >> --Junchao Zhang >> >> >> On Sat, Apr 26, 2025 at 10:35?AM Derek Teaney < >> derek.teaney at stonybrook.edu> wrote: >> >>> Ok -- got it -- thanks so just do the RestoreArray before the final >>> step, e.g. >>> >>> Loop over cases >>> VecSet(dn_local, 0.) >>> GetArray(dn_local, &dn) // RIGHT >>> Fill up dn >>> Restore Array >>> LocalToGlobal >>> >>> >>> On Sat, Apr 26, 2025 at 9:50?AM Junchao Zhang >>> wrote: >>> >>>> On Sat, Apr 26, 2025 at 8:27?AM Derek Teaney via petsc-users < >>>> petsc-users at mcs.anl.gov> wrote: >>>> >>>>> Thanks Barry -- this solved the issue. >>>>> >>>>> "probably will be fine" was fine with 3.17 and maybe 3.19, but >>>>> definitely not fine with 3.20. >>>>> >>>>> For others the faulty logic is: >>>>> >>>>> GetArray(dn_local, &dn) //WRONG >>>>> Loop over cases >>>>> VecSet(dn_local, 0.) >>>>> Fill up dn >>>>> LocalToGlobal >>>>> RestoreArray >>>>> >>>>> Where as one should do: >>>>> >>>>> Loop over cases >>>>> VecSet(dn_local, 0.) >>>>> GetArray(dn_local, &dn) // RIGHT >>>>> Fill up dn >>>>> LocalToGlobal >>>>> RestoreArray >>>>> >>>> The above two pieces of code are both wrong, in my view. >>>> >>>>> >>>>> >>>> So, while nothing is copied, if I think of dn as a copy (and not a >>>>> view) the logic will always be correct. >>>>> >>>>> Now I have a related question about "Technically you should not be >>>>> calling VecSet() with any outstanding arrays but it will probably be fine." >>>>> What about GlobalToLocal? 
should I always GetArray for the local >>>>> array after the GlobalToLocal >>>>> >>>>> So, is this also bad logic: >>>>> >>>>> GetArray(n_local, &n) >>>>> Loop over cases: >>>>> GlobalToLocal(n_global, &n_local) >>>>> do stuff with n >>>>> LocalToGlobal(n_local, n_global) >>>>> RestoreArray >>>>> >>>>> as opposed to >>>>> >>>>> Loop over cases: >>>>> GlobalToLocal(n_global, &n_local) >>>>> GetArray(n_local, &n) >>>>> do stuff with n >>>>> LocalToGlobal(n_local, n_global) >>>>> RestoreArray >>>>> >>>> This is also wrong. >>>> I think the rule here is: GetArray() puts the vector in an interim >>>> state. One shall not call any vector routines (ex. >>>> LocalToGlobal/GlobalToLocal) before RestoreArray(). You can only operate >>>> on the array instead. >>>> >>>> >>>>> >>>> Thanks again for all your help, >>>>> >>>>> Derek >>>>> >>>>> >>>>> >>>>> On Fri, Apr 25, 2025 at 5:30?PM Barry Smith wrote: >>>>> >>>>>> >>>>>> Technically you should not be calling VecSet() with any >>>>>> outstanding arrays but it will probably be fine. >>>>>> >>>>>> Even though GetArray() does not copy the array values; both >>>>>> GetArray/RestoreArray and Set track the current "state" of the vector and >>>>>> that count might get confused if they are used improperly. >>>>>> >>>>>> >>>>>> >>>>>> On Apr 25, 2025, at 4:42?PM, Derek Teaney < >>>>>> derek.teaney at stonybrook.edu> wrote: >>>>>> >>>>>> Thanks, I am working on providing a standalone code. A related >>>>>> question is - if I did have a view of a local vector provided by: >>>>>> >>>>>> data_node ***dn_array; >>>>>> DMDAVecGetArray(domain, dn_local, &dn_array); >>>>>> >>>>>> Can I assume through multiple calls to VecSet that the view dn_array >>>>>> is valid, or should this be restored, between calls. >>>>>> >>>>>> Thanks, >>>>>> >>>>>> Derek >>>>>> >>>>>> On Fri, Apr 25, 2025 at 1:26?PM Barry Smith wrote: >>>>>> >>>>>>> >>>>>>> You absolutely should not need to do an assembly after a VecSet. >>>>>>> Please post a full reproducer that demonstrates the problem. >>>>>>> >>>>>>> Barry >>>>>>> >>>>>>> >>>>>>> On Apr 25, 2025, at 1:23?PM, Derek Teaney via petsc-users < >>>>>>> petsc-users at mcs.anl.gov> wrote: >>>>>>> >>>>>>> Hi, >>>>>>> >>>>>>> I was under the (mistaken) impression that one does not need to due >>>>>>> a VecAssemblyBegin etc following a VecSet, e.g. >>>>>>> >>>>>>> VecSet(dn_local, 0.); >>>>>>> VecAssemblyBegin(dn_local) ; >>>>>>> VecAssemblyEnd(dn_local) ; >>>>>>> >>>>>>> Seems to give different results without the Assembly. >>>>>>> >>>>>>> Thanks for clarifying, >>>>>>> >>>>>>> Derek >>>>>>> >>>>>>> -- >>>>>>> >>>>>>> ------------------------------------------------------------------------ >>>>>>> Derek Teaney >>>>>>> Professor and Graduate Program Director >>>>>>> Dept. of Physics & Astronomy >>>>>>> Stony Brook University >>>>>>> Stony Brook, NY 11794-3800 >>>>>>> Tel: (631) 632-4489 >>>>>>> Fax: (631) 632-9718 >>>>>>> e-mail: Derek.Teaney at stonybrook.edu >>>>>>> >>>>>>> ------------------------------------------------------------------------ >>>>>>> >>>>>>> >>>>>>> >>>>>> >>>>>> -- >>>>>> >>>>>> ------------------------------------------------------------------------ >>>>>> Derek Teaney >>>>>> Professor and Graduate Program Director >>>>>> Dept. 
of Physics & Astronomy >>>>>> Stony Brook University >>>>>> Stony Brook, NY 11794-3800 >>>>>> Tel: (631) 632-4489 >>>>>> Fax: (631) 632-9718 >>>>>> e-mail: Derek.Teaney at stonybrook.edu >>>>>> >>>>>> ------------------------------------------------------------------------ >>>>>> >>>>>> >>>>>> >>>>> >>>>> -- >>>>> >>>>> ------------------------------------------------------------------------ >>>>> Derek Teaney >>>>> Professor and Graduate Program Director >>>>> Dept. of Physics & Astronomy >>>>> Stony Brook University >>>>> Stony Brook, NY 11794-3800 >>>>> Tel: (631) 632-4489 >>>>> Fax: (631) 632-9718 >>>>> e-mail: Derek.Teaney at stonybrook.edu >>>>> >>>>> ------------------------------------------------------------------------ >>>>> >>>>> >>> >>> -- >>> ------------------------------------------------------------------------ >>> Derek Teaney >>> Professor and Graduate Program Director >>> Dept. of Physics & Astronomy >>> Stony Brook University >>> Stony Brook, NY 11794-3800 >>> Tel: (631) 632-4489 >>> Fax: (631) 632-9718 >>> e-mail: Derek.Teaney at stonybrook.edu >>> ------------------------------------------------------------------------ >>> >>> > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!eatD9TtAKdE50XiRgd9HWccqIXKXbEXVFEYQ9n9arp3lVyayhSxYSBkK4l6sfp5_AbQiWZCILgZT0k32ZMuv$ > > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!eatD9TtAKdE50XiRgd9HWccqIXKXbEXVFEYQ9n9arp3lVyayhSxYSBkK4l6sfp5_AbQiWZCILgZT0k32ZMuv$ -------------- next part -------------- An HTML attachment was scrubbed... URL: From C.Klaij at marin.nl Mon Apr 28 07:44:26 2025 From: C.Klaij at marin.nl (Klaij, Christiaan) Date: Mon, 28 Apr 2025 12:44:26 +0000 Subject: [petsc-users] problem with nested logging In-Reply-To: References: Message-ID: I've tried adding a nested log viewer to src/snes/tutorials/ex70.c, but it does not replicate the problem and works fine. Perhaps it is related to fortran, since the manualpage of PetscLogNestedBegin says "No fortran support" (why? we've been using it in fortran ever since). Therefore I've tried adding it to src/snes/ex5f90.F90 and that also works fine. It seems I cannot replicate the problem in a small example, unfortunately. Chris ________________________________________ From: Junchao Zhang Sent: Saturday, April 26, 2025 3:51 PM To: Klaij, Christiaan Cc: petsc-users at mcs.anl.gov; Isaac, Toby Subject: Re: [petsc-users] problem with nested logging You don't often get email from junchao.zhang at gmail.com. Learn why this is important Toby (Cc'ed) might know it. Or could you provide an example? --Junchao Zhang On Fri, Apr 25, 2025 at 3:31?AM Klaij, Christiaan via petsc-users > wrote: We recently upgraded from 3.19.4 to 3.22.4 but face the problem below with the nested logging. Any ideas? 
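For context, a nested log viewer of the kind being tested above is typically enabled and written out along the following lines. This is a minimal C sketch of the documented PETSc pattern, not the reporter's actual Fortran code, and the output file name "performance.xml" is invented:

  // right after PetscInitialize(): start collecting the nested event tree
  PetscCall(PetscLogNestedBegin());

  // ... set up, assemble and solve as usual ...

  // before PetscFinalize(): write the nested profile as XML
  PetscViewer viewer;
  PetscCall(PetscViewerASCIIOpen(PETSC_COMM_WORLD, "performance.xml", &viewer));
  PetscCall(PetscViewerPushFormat(viewer, PETSC_VIEWER_ASCII_XML));
  PetscCall(PetscLogView(viewer));   // the call that fails in the trace below
  PetscCall(PetscViewerPopFormat(viewer));
  PetscCall(PetscViewerDestroy(&viewer));

The same output can normally be requested on the command line with -log_view :performance.xml:ascii_xml.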
Chris [1]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- [1]PETSC ERROR: General MPI error [1]PETSC ERROR: MPI error 1 MPI_ERR_BUFFER: invalid buffer pointer [1]PETSC ERROR: See https://urldefense.us/v3/__https://petsc.org/release/faq/__;!!G_uCfscf7eWS!cmLENZvO_Uydoa8ciUmsyX-F-QiJt9a2ZfQRUvQnRibGm7VE6PED7S_BDsUgjOzvPZIJyiIoJ8bLJk6gIT68pbk$ for trouble shooting. [1]PETSC ERROR: Petsc Release Version 3.22.4, Mar 01, 2025 [1]PETSC ERROR: refresco with 2 MPI process(es) and PETSC_ARCH on marclus3login2 by jwindt Fri Apr 25 08:52:30 2025 [1]PETSC ERROR: Configure options: --prefix=/home/jwindt/cmake_builds/refresco/install-libs-gnu --with-mpi-dir=/cm/shared/apps/openmpi/gcc/4.0.2 --with-x=0 --with-mpe=0 --with-debugging=0 --download-superlu_dist=https://urldefense.us/v3/__https://updates.marin.nl/refresco/libs/superlu_dist-8.1.2.tar.gz__;!!G_uCfscf7eWS!cmLENZvO_Uydoa8ciUmsyX-F-QiJt9a2ZfQRUvQnRibGm7VE6PED7S_BDsUgjOzvPZIJyiIoJ8bLJk6grH5BbeU$ --with-blaslapack-dir=/cm/shared/apps/intel/oneapi/mkl/2021.4.0 --download-parmetis=https://urldefense.us/v3/__https://updates.marin.nl/refresco/libs/parmetis-4.0.3-p9.tar.gz__;!!G_uCfscf7eWS!cmLENZvO_Uydoa8ciUmsyX-F-QiJt9a2ZfQRUvQnRibGm7VE6PED7S_BDsUgjOzvPZIJyiIoJ8bLJk6gw4-tEtY$ --download-metis=https://urldefense.us/v3/__https://updates.marin.nl/refresco/libs/metis-5.1.0-p11.tar.gz__;!!G_uCfscf7eWS!cmLENZvO_Uydoa8ciUmsyX-F-QiJt9a2ZfQRUvQnRibGm7VE6PED7S_BDsUgjOzvPZIJyiIoJ8bLJk6gHq4uYiY$ --with-packages-build-dir=/home/jwindt/cmake_builds/refresco/build-libs-gnu/superbuild --with-ssl=0 --with-shared-libraries=1 CFLAGS="-std=gnu11 -Wall -funroll-all-loops -O3 -DNDEBUG" CXXFLAGS="-std=gnu++14 -Wall -funroll-all-loops -O3 -DNDEBUG " COPTFLAGS="-std=gnu11 -Wall -funroll-all-loops -O3 -DNDEBUG" CXXOPTFLAGS="-std=gnu++14 -Wall -funroll-all-loops -O3 -DNDEBUG " FCFLAGS="-Wall -funroll-all-loops -ffree-line-length-0 -Wno-maybe-uninitialized -Wno-target-lifetime -Wno-unused-function -O3 -DNDEBUG" F90FLAGS="-Wall -funroll-all-loops -ffree-line-length-0 -Wno-maybe-uninitialized -Wno-target-lifetime -Wno-unused-function -O3 -DNDEBUG" FOPTFLAGS="-Wall -funroll-all-loops -ffree-line-length-0 -Wno-maybe-uninitialized -Wno-target-lifetime -Wno-unused-function -O3 -DNDEBUG" [1]PETSC ERROR: #1 PetscLogNestedTreePrint() at /home/jwindt/cmake_builds/refresco/build-libs-gnu/superbuild/petsc/src/src/sys/logging/handler/impls/nested/xmlviewer.c:330 [1]PETSC ERROR: #2 PetscLogNestedTreePrint() at /home/jwindt/cmake_builds/refresco/build-libs-gnu/superbuild/petsc/src/src/sys/logging/handler/impls/nested/xmlviewer.c:384 [1]PETSC ERROR: #3 PetscLogNestedTreePrint() at /home/jwindt/cmake_builds/refresco/build-libs-gnu/superbuild/petsc/src/src/sys/logging/handler/impls/nested/xmlviewer.c:384 [1]PETSC ERROR: #4 PetscLogNestedTreePrintTop() at /home/jwindt/cmake_builds/refresco/build-libs-gnu/superbuild/petsc/src/src/sys/logging/handler/impls/nested/xmlviewer.c:420 [1]PETSC ERROR: #5 PetscLogHandlerView_Nested_XML() at /home/jwindt/cmake_builds/refresco/build-libs-gnu/superbuild/petsc/src/src/sys/logging/handler/impls/nested/xmlviewer.c:443 [1]PETSC ERROR: #6 PetscLogHandlerView_Nested() at /home/jwindt/cmake_builds/refresco/build-libs-gnu/superbuild/petsc/src/src/sys/logging/handler/impls/nested/lognested.c:405 [1]PETSC ERROR: #7 PetscLogHandlerView() at /home/jwindt/cmake_builds/refresco/build-libs-gnu/superbuild/petsc/src/src/sys/logging/handler/interface/loghandler.c:342 [1]PETSC ERROR: #8 PetscLogView() at 
/home/jwindt/cmake_builds/refresco/build-libs-gnu/superbuild/petsc/src/src/sys/logging/plog.c:2040 -------------------------------------------------------------------------- MPI_ABORT was invoked on rank 1 in communicator MPI_COMM_WORLD with errorcode 98. NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes. You may or may not see output from other processes, depending on exactly when Open MPI kills them. -------------------------------------------------------------------------- [cid:ii_196725d1e2a809852191] dr. ir.???? Christiaan Klaij | senior researcher | Research & Development | CFD Development T +31 317 49 33 44 | C.Klaij at marin.nl | https://urldefense.us/v3/__http://www.marin.nl__;!!G_uCfscf7eWS!cmLENZvO_Uydoa8ciUmsyX-F-QiJt9a2ZfQRUvQnRibGm7VE6PED7S_BDsUgjOzvPZIJyiIoJ8bLJk6g8TwMMcw$ [Facebook] [LinkedIn] [YouTube] -------------- next part -------------- A non-text attachment was scrubbed... Name: image898428.png Type: image/png Size: 5004 bytes Desc: image898428.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image714119.png Type: image/png Size: 487 bytes Desc: image714119.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image138727.png Type: image/png Size: 504 bytes Desc: image138727.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image471093.png Type: image/png Size: 482 bytes Desc: image471093.png URL: From knepley at gmail.com Mon Apr 28 08:06:45 2025 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 28 Apr 2025 09:06:45 -0400 Subject: [petsc-users] problem with nested logging In-Reply-To: References: Message-ID: On Mon, Apr 28, 2025 at 8:45?AM Klaij, Christiaan via petsc-users < petsc-users at mcs.anl.gov> wrote: > I've tried adding a nested log viewer to src/snes/tutorials/ex70.c, > but it does not replicate the problem and works fine. > > Perhaps it is related to fortran, since the manualpage of > PetscLogNestedBegin says "No fortran support" (why? we've been > using it in fortran ever since). > > Therefore I've tried adding it to src/snes/ex5f90.F90 and that > also works fine. It seems I cannot replicate the problem in a > small example, unfortunately. > We cannot replicate it here. Is there a chance you could bisect to see what change is responsible? Thanks, Matt > Chris > > ________________________________________ > From: Junchao Zhang > Sent: Saturday, April 26, 2025 3:51 PM > To: Klaij, Christiaan > Cc: petsc-users at mcs.anl.gov; Isaac, Toby > Subject: Re: [petsc-users] problem with nested logging > > You don't often get email from junchao.zhang at gmail.com. Learn why this is > important< > https://urldefense.us/v3/__https://aka.ms/LearnAboutSenderIdentification__;!!G_uCfscf7eWS!cmLENZvO_Uydoa8ciUmsyX-F-QiJt9a2ZfQRUvQnRibGm7VE6PED7S_BDsUgjOzvPZIJyiIoJ8bLJk6gVgt1dmE$ > > > Toby (Cc'ed) might know it. Or could you provide an example? > > --Junchao Zhang > > > On Fri, Apr 25, 2025 at 3:31?AM Klaij, Christiaan via petsc-users < > petsc-users at mcs.anl.gov> wrote: > We recently upgraded from 3.19.4 to 3.22.4 but face the problem below with > the nested logging. Any ideas? 
> > Chris > > > [1]PETSC ERROR: --------------------- Error Message > -------------------------------------------------------------- > [1]PETSC ERROR: General MPI error > [1]PETSC ERROR: MPI error 1 MPI_ERR_BUFFER: invalid buffer pointer > [1]PETSC ERROR: See > https://urldefense.us/v3/__https://petsc.org/release/faq/__;!!G_uCfscf7eWS!cmLENZvO_Uydoa8ciUmsyX-F-QiJt9a2ZfQRUvQnRibGm7VE6PED7S_BDsUgjOzvPZIJyiIoJ8bLJk6gIT68pbk$ > < > https://urldefense.us/v3/__https://petsc.org/release/faq/__;!!G_uCfscf7eWS!biVlk6PKXoJvq5oVlmWdVJfW9tXv-JlwuWr3zg3jI5u1_jo8rvtZpEYnHO5RjdBqQEoqpqlJ3nusrFGM3UaqHzc$> > for trouble shooting. > [1]PETSC ERROR: Petsc Release Version 3.22.4, Mar 01, 2025 > [1]PETSC ERROR: refresco with 2 MPI process(es) and PETSC_ARCH on > marclus3login2 by jwindt Fri Apr 25 08:52:30 2025 > [1]PETSC ERROR: Configure options: > --prefix=/home/jwindt/cmake_builds/refresco/install-libs-gnu > --with-mpi-dir=/cm/shared/apps/openmpi/gcc/4.0.2 --with-x=0 --with-mpe=0 > --with-debugging=0 --download-superlu_dist= > https://urldefense.us/v3/__https://updates.marin.nl/refresco/libs/superlu_dist-8.1.2.tar.gz__;!!G_uCfscf7eWS!cmLENZvO_Uydoa8ciUmsyX-F-QiJt9a2ZfQRUvQnRibGm7VE6PED7S_BDsUgjOzvPZIJyiIoJ8bLJk6grH5BbeU$ > < > https://urldefense.us/v3/__https://updates.marin.nl/refresco/libs/superlu_dist-8.1.2.tar.gz__;!!G_uCfscf7eWS!biVlk6PKXoJvq5oVlmWdVJfW9tXv-JlwuWr3zg3jI5u1_jo8rvtZpEYnHO5RjdBqQEoqpqlJ3nusrFGM21-2D-o$> > --with-blaslapack-dir=/cm/shared/apps/intel/oneapi/mkl/2021.4.0 > --download-parmetis= > https://urldefense.us/v3/__https://updates.marin.nl/refresco/libs/parmetis-4.0.3-p9.tar.gz__;!!G_uCfscf7eWS!cmLENZvO_Uydoa8ciUmsyX-F-QiJt9a2ZfQRUvQnRibGm7VE6PED7S_BDsUgjOzvPZIJyiIoJ8bLJk6gw4-tEtY$ > < > https://urldefense.us/v3/__https://updates.marin.nl/refresco/libs/parmetis-4.0.3-p9.tar.gz__;!!G_uCfscf7eWS!biVlk6PKXoJvq5oVlmWdVJfW9tXv-JlwuWr3zg3jI5u1_jo8rvtZpEYnHO5RjdBqQEoqpqlJ3nusrFGMW0lYHko$> > --download-metis= > https://urldefense.us/v3/__https://updates.marin.nl/refresco/libs/metis-5.1.0-p11.tar.gz__;!!G_uCfscf7eWS!cmLENZvO_Uydoa8ciUmsyX-F-QiJt9a2ZfQRUvQnRibGm7VE6PED7S_BDsUgjOzvPZIJyiIoJ8bLJk6gHq4uYiY$ > < > https://urldefense.us/v3/__https://updates.marin.nl/refresco/libs/metis-5.1.0-p11.tar.gz__;!!G_uCfscf7eWS!biVlk6PKXoJvq5oVlmWdVJfW9tXv-JlwuWr3zg3jI5u1_jo8rvtZpEYnHO5RjdBqQEoqpqlJ3nusrFGMbSrIiUg$> > --with-packages-build-dir=/home/jwindt/cmake_builds/refresco/build-libs-gnu/superbuild > --with-ssl=0 --with-shared-libraries=1 CFLAGS="-std=gnu11 -Wall > -funroll-all-loops -O3 -DNDEBUG" CXXFLAGS="-std=gnu++14 -Wall > -funroll-all-loops -O3 -DNDEBUG " COPTFLAGS="-std=gnu11 -Wall > -funroll-all-loops -O3 -DNDEBUG" CXXOPTFLAGS="-std=gnu++14 -Wall > -funroll-all-loops -O3 -DNDEBUG " FCFLAGS="-Wall -funroll-all-loops > -ffree-line-length-0 -Wno-maybe-uninitialized -Wno-target-lifetime > -Wno-unused-function -O3 -DNDEBUG" F90FLAGS="-Wall -funroll-all-loops > -ffree-line-length-0 -Wno-maybe-uninitialized -Wno-target-lifetime > -Wno-unused-function -O3 -DNDEBUG" FOPTFLAGS="-Wall -funroll-all-loops > -ffree-line-length-0 -Wno-maybe-uninitialized -Wno-target-lifetime > -Wno-unused-function -O3 -DNDEBUG" > [1]PETSC ERROR: #1 PetscLogNestedTreePrint() at > /home/jwindt/cmake_builds/refresco/build-libs-gnu/superbuild/petsc/src/src/sys/logging/handler/impls/nested/xmlviewer.c:330 > [1]PETSC ERROR: #2 PetscLogNestedTreePrint() at > /home/jwindt/cmake_builds/refresco/build-libs-gnu/superbuild/petsc/src/src/sys/logging/handler/impls/nested/xmlviewer.c:384 > [1]PETSC ERROR: #3 PetscLogNestedTreePrint() at > 
/home/jwindt/cmake_builds/refresco/build-libs-gnu/superbuild/petsc/src/src/sys/logging/handler/impls/nested/xmlviewer.c:384 > [1]PETSC ERROR: #4 PetscLogNestedTreePrintTop() at > /home/jwindt/cmake_builds/refresco/build-libs-gnu/superbuild/petsc/src/src/sys/logging/handler/impls/nested/xmlviewer.c:420 > [1]PETSC ERROR: #5 PetscLogHandlerView_Nested_XML() at > /home/jwindt/cmake_builds/refresco/build-libs-gnu/superbuild/petsc/src/src/sys/logging/handler/impls/nested/xmlviewer.c:443 > [1]PETSC ERROR: #6 PetscLogHandlerView_Nested() at > /home/jwindt/cmake_builds/refresco/build-libs-gnu/superbuild/petsc/src/src/sys/logging/handler/impls/nested/lognested.c:405 > [1]PETSC ERROR: #7 PetscLogHandlerView() at > /home/jwindt/cmake_builds/refresco/build-libs-gnu/superbuild/petsc/src/src/sys/logging/handler/interface/loghandler.c:342 > [1]PETSC ERROR: #8 PetscLogView() at > /home/jwindt/cmake_builds/refresco/build-libs-gnu/superbuild/petsc/src/src/sys/logging/plog.c:2040 > -------------------------------------------------------------------------- > MPI_ABORT was invoked on rank 1 in communicator MPI_COMM_WORLD > with errorcode 98. > > NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes. > You may or may not see output from other processes, depending on > exactly when Open MPI kills them. > -------------------------------------------------------------------------- > [cid:ii_196725d1e2a809852191] > dr. ir. Christiaan Klaij > | senior researcher | Research & Development | > CFD Development > T +31 317 49 33 44 | > C.Klaij at marin.nl | > https://urldefense.us/v3/__http://www.marin.nl__;!!G_uCfscf7eWS!cmLENZvO_Uydoa8ciUmsyX-F-QiJt9a2ZfQRUvQnRibGm7VE6PED7S_BDsUgjOzvPZIJyiIoJ8bLJk6g8TwMMcw$ > < > https://urldefense.us/v3/__https://www.marin.nl/__;!!G_uCfscf7eWS!biVlk6PKXoJvq5oVlmWdVJfW9tXv-JlwuWr3zg3jI5u1_jo8rvtZpEYnHO5RjdBqQEoqpqlJ3nusrFGM8tSNH1g$ > > > [Facebook]< > https://urldefense.us/v3/__https://www.facebook.com/marin.wageningen__;!!G_uCfscf7eWS!biVlk6PKXoJvq5oVlmWdVJfW9tXv-JlwuWr3zg3jI5u1_jo8rvtZpEYnHO5RjdBqQEoqpqlJ3nusrFGMcVCZ9hk$ > > > [LinkedIn]< > https://urldefense.us/v3/__https://www.linkedin.com/company/marin__;!!G_uCfscf7eWS!biVlk6PKXoJvq5oVlmWdVJfW9tXv-JlwuWr3zg3jI5u1_jo8rvtZpEYnHO5RjdBqQEoqpqlJ3nusrFGMIDBZW7k$ > > > [YouTube]< > https://urldefense.us/v3/__https://www.youtube.com/marinmultimedia__;!!G_uCfscf7eWS!biVlk6PKXoJvq5oVlmWdVJfW9tXv-JlwuWr3zg3jI5u1_jo8rvtZpEYnHO5RjdBqQEoqpqlJ3nusrFGMVKWos24$ > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!YXuwrfla5e73Ur9RHC4t8ujKBBo7A-SFdQMbC0jA7G_kyXiKdTb4NQvlHx9EEvenDC_SBIM8U6Kl27j_imdd$ -------------- next part -------------- An HTML attachment was scrubbed... URL: From C.Klaij at marin.nl Mon Apr 28 08:53:03 2025 From: C.Klaij at marin.nl (Klaij, Christiaan) Date: Mon, 28 Apr 2025 13:53:03 +0000 Subject: [petsc-users] problem with nested logging In-Reply-To: References: Message-ID: Bisecting would be quite hard, it's not just the petsc version that changed, also other libs, compilers, even os components. Chris _____ dr. ir. 
Christiaan Klaij | senior researcher | Research & Development | CFD Development T +31 317 49 33 44 | C.Klaij at marin.nl | https://urldefense.us/v3/__http://www.marin.nl__;!!G_uCfscf7eWS!ccB05YCuTjBD4s8PO77q6b1rOHXkATjqWr4Muuiu9DhyO9p8kHihED9CkcBmIVF9GvJiFLEG-_mFNqpz_Icm3xI$ ___________________________________ From: Matthew Knepley Sent: Monday, April 28, 2025 3:06 PM To: Klaij, Christiaan Cc: Junchao Zhang; petsc-users at mcs.anl.gov; Isaac, Toby Subject: Re: [petsc-users] problem with nested logging You don't often get email from knepley at gmail.com. Learn why this is important On Mon, Apr 28, 2025 at 8:45?AM Klaij, Christiaan via petsc-users > wrote: I've tried adding a nested log viewer to src/snes/tutorials/ex70.c, but it does not replicate the problem and works fine. Perhaps it is related to fortran, since the manualpage of PetscLogNestedBegin says "No fortran support" (why? we've been using it in fortran ever since). Therefore I've tried adding it to src/snes/ex5f90.F90 and that also works fine. It seems I cannot replicate the problem in a small example, unfortunately. We cannot replicate it here. Is there a chance you could bisect to see what change is responsible? Thanks, Matt Chris ________________________________________ From: Junchao Zhang > Sent: Saturday, April 26, 2025 3:51 PM To: Klaij, Christiaan Cc: petsc-users at mcs.anl.gov; Isaac, Toby Subject: Re: [petsc-users] problem with nested logging You don't often get email from junchao.zhang at gmail.com. Learn why this is important Toby (Cc'ed) might know it. Or could you provide an example? --Junchao Zhang On Fri, Apr 25, 2025 at 3:31?AM Klaij, Christiaan via petsc-users >> wrote: We recently upgraded from 3.19.4 to 3.22.4 but face the problem below with the nested logging. Any ideas? Chris [1]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- [1]PETSC ERROR: General MPI error [1]PETSC ERROR: MPI error 1 MPI_ERR_BUFFER: invalid buffer pointer [1]PETSC ERROR: See https://urldefense.us/v3/__https://petsc.org/release/faq/__;!!G_uCfscf7eWS!cmLENZvO_Uydoa8ciUmsyX-F-QiJt9a2ZfQRUvQnRibGm7VE6PED7S_BDsUgjOzvPZIJyiIoJ8bLJk6gIT68pbk$ for trouble shooting. 
[1]PETSC ERROR: Petsc Release Version 3.22.4, Mar 01, 2025 [1]PETSC ERROR: refresco with 2 MPI process(es) and PETSC_ARCH on marclus3login2 by jwindt Fri Apr 25 08:52:30 2025 [1]PETSC ERROR: Configure options: --prefix=/home/jwindt/cmake_builds/refresco/install-libs-gnu --with-mpi-dir=/cm/shared/apps/openmpi/gcc/4.0.2 --with-x=0 --with-mpe=0 --with-debugging=0 --download-superlu_dist=https://urldefense.us/v3/__https://updates.marin.nl/refresco/libs/superlu_dist-8.1.2.tar.gz__;!!G_uCfscf7eWS!cmLENZvO_Uydoa8ciUmsyX-F-QiJt9a2ZfQRUvQnRibGm7VE6PED7S_BDsUgjOzvPZIJyiIoJ8bLJk6grH5BbeU$ --with-blaslapack-dir=/cm/shared/apps/intel/oneapi/mkl/2021.4.0 --download-parmetis=https://urldefense.us/v3/__https://updates.marin.nl/refresco/libs/parmetis-4.0.3-p9.tar.gz__;!!G_uCfscf7eWS!cmLENZvO_Uydoa8ciUmsyX-F-QiJt9a2ZfQRUvQnRibGm7VE6PED7S_BDsUgjOzvPZIJyiIoJ8bLJk6gw4-tEtY$ --download-metis=https://urldefense.us/v3/__https://updates.marin.nl/refresco/libs/metis-5.1.0-p11.tar.gz__;!!G_uCfscf7eWS!cmLENZvO_Uydoa8ciUmsyX-F-QiJt9a2ZfQRUvQnRibGm7VE6PED7S_BDsUgjOzvPZIJyiIoJ8bLJk6gHq4uYiY$ --with-packages-build-dir=/home/jwindt/cmake_builds/refresco/build-libs-gnu/superbuild --with-ssl=0 --with-shared-libraries=1 CFLAGS="-std=gnu11 -Wall -funroll-all-loops -O3 -DNDEBUG" CXXFLAGS="-std=gnu++14 -Wall -funroll-all-loops -O3 -DNDEBUG " COPTFLAGS="-std=gnu11 -Wall -funroll-all-loops -O3 -DNDEBUG" CXXOPTFLAGS="-std=gnu++14 -Wall -funroll-all-loops -O3 -DNDEBUG " FCFLAGS="-Wall -funroll-all-loops -ffree-line-length-0 -Wno-maybe-uninitialized -Wno-target-lifetime -Wno-unused-function -O3 -DNDEBUG" F90FLAGS="-Wall -funroll-all-loops -ffree-line-length-0 -Wno-maybe-uninitialized -Wno-target-lifetime -Wno-unused-function -O3 -DNDEBUG" FOPTFLAGS="-Wall -funroll-all-loops -ffree-line-length-0 -Wno-maybe-uninitialized -Wno-target-lifetime -Wno-unused-function -O3 -DNDEBUG" [1]PETSC ERROR: #1 PetscLogNestedTreePrint() at /home/jwindt/cmake_builds/refresco/build-libs-gnu/superbuild/petsc/src/src/sys/logging/handler/impls/nested/xmlviewer.c:330 [1]PETSC ERROR: #2 PetscLogNestedTreePrint() at /home/jwindt/cmake_builds/refresco/build-libs-gnu/superbuild/petsc/src/src/sys/logging/handler/impls/nested/xmlviewer.c:384 [1]PETSC ERROR: #3 PetscLogNestedTreePrint() at /home/jwindt/cmake_builds/refresco/build-libs-gnu/superbuild/petsc/src/src/sys/logging/handler/impls/nested/xmlviewer.c:384 [1]PETSC ERROR: #4 PetscLogNestedTreePrintTop() at /home/jwindt/cmake_builds/refresco/build-libs-gnu/superbuild/petsc/src/src/sys/logging/handler/impls/nested/xmlviewer.c:420 [1]PETSC ERROR: #5 PetscLogHandlerView_Nested_XML() at /home/jwindt/cmake_builds/refresco/build-libs-gnu/superbuild/petsc/src/src/sys/logging/handler/impls/nested/xmlviewer.c:443 [1]PETSC ERROR: #6 PetscLogHandlerView_Nested() at /home/jwindt/cmake_builds/refresco/build-libs-gnu/superbuild/petsc/src/src/sys/logging/handler/impls/nested/lognested.c:405 [1]PETSC ERROR: #7 PetscLogHandlerView() at /home/jwindt/cmake_builds/refresco/build-libs-gnu/superbuild/petsc/src/src/sys/logging/handler/interface/loghandler.c:342 [1]PETSC ERROR: #8 PetscLogView() at /home/jwindt/cmake_builds/refresco/build-libs-gnu/superbuild/petsc/src/src/sys/logging/plog.c:2040 -------------------------------------------------------------------------- MPI_ABORT was invoked on rank 1 in communicator MPI_COMM_WORLD with errorcode 98. NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes. You may or may not see output from other processes, depending on exactly when Open MPI kills them. 
-------------------------------------------------------------------------- [cid:ii_196725d1e2a809852191] dr. ir. Christiaan Klaij | senior researcher | Research & Development | CFD Development T +31 317 49 33 44 | C.Klaij at marin.nl> | https://urldefense.us/v3/__http://www.marin.nl__;!!G_uCfscf7eWS!cmLENZvO_Uydoa8ciUmsyX-F-QiJt9a2ZfQRUvQnRibGm7VE6PED7S_BDsUgjOzvPZIJyiIoJ8bLJk6g8TwMMcw$ [Facebook] [LinkedIn] [YouTube] -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!ccB05YCuTjBD4s8PO77q6b1rOHXkATjqWr4Muuiu9DhyO9p8kHihED9CkcBmIVF9GvJiFLEG-_mFNqpzWifsa_s$ -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image408714.png Type: image/png Size: 5004 bytes Desc: image408714.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image582281.png Type: image/png Size: 487 bytes Desc: image582281.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image620066.png Type: image/png Size: 504 bytes Desc: image620066.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image109228.png Type: image/png Size: 482 bytes Desc: image109228.png URL: From C.Klaij at marin.nl Tue Apr 29 05:50:24 2025 From: C.Klaij at marin.nl (Klaij, Christiaan) Date: Tue, 29 Apr 2025 10:50:24 +0000 Subject: [petsc-users] problem with nested logging In-Reply-To: References: Message-ID: Here's a slightly better error message, obtained --with-debugging=1 [0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- [0]PETSC ERROR: Petsc has generated inconsistent data [0]PETSC ERROR: MPIU_Allreduce() called in different locations (code lines) on different processors [0]PETSC ERROR: See https://urldefense.us/v3/__https://petsc.org/release/faq/__;!!G_uCfscf7eWS!ZTevBCzVfxb2gys8ixhqH037tPe-V6RgQxNCzSAaDthsOn8eijZAz8SseXkEQlVZR4cFxfHrYrQoOXsufdJ3qNI$ for trouble shooting. 
[0]PETSC ERROR: Petsc Release Version 3.22.4, Mar 01, 2025 [0]PETSC ERROR: ./refresco with 2 MPI process(es) and PETSC_ARCH on marclus3login2 by cklaij Tue Apr 29 12:43:54 2025 [0]PETSC ERROR: Configure options: --prefix=/home/cklaij/ReFRESCO/trunk/install/extLibs --with-mpi-dir=/cm/shared/apps/openmpi/gcc/4.0.2 --with-x=0 --with-mpe=0 --with-debugging=1 --download-superlu_dist=https://urldefense.us/v3/__https://updates.marin.nl/refresco/libs/superlu_dist-8.1.2.tar.gz__;!!G_uCfscf7eWS!ZTevBCzVfxb2gys8ixhqH037tPe-V6RgQxNCzSAaDthsOn8eijZAz8SseXkEQlVZR4cFxfHrYrQoOXsuwwSTswQ$ --with-blaslapack-dir=/cm/shared/apps/intel/oneapi/mkl/2021.4.0 --download-parmetis=https://urldefense.us/v3/__https://updates.marin.nl/refresco/libs/parmetis-4.0.3-p9.tar.gz__;!!G_uCfscf7eWS!ZTevBCzVfxb2gys8ixhqH037tPe-V6RgQxNCzSAaDthsOn8eijZAz8SseXkEQlVZR4cFxfHrYrQoOXsusrnCWiE$ --download-metis=https://urldefense.us/v3/__https://updates.marin.nl/refresco/libs/metis-5.1.0-p11.tar.gz__;!!G_uCfscf7eWS!ZTevBCzVfxb2gys8ixhqH037tPe-V6RgQxNCzSAaDthsOn8eijZAz8SseXkEQlVZR4cFxfHrYrQoOXsuc9mzPLE$ --with-packages-build-dir=/home/cklaij/ReFRESCO/trunk/build-libs/superbuild --with-ssl=0 --with-shared-libraries=1 [0]PETSC ERROR: #1 PetscLogNestedTreePrintLine() at /home/cklaij/ReFRESCO/trunk/build-libs/superbuild/petsc/src/src/sys/logging/handler/impls/nested/xmlviewer.c:289 [0]PETSC ERROR: #2 PetscLogNestedTreePrint() at /home/cklaij/ReFRESCO/trunk/build-libs/superbuild/petsc/src/src/sys/logging/handler/impls/nested/xmlviewer.c:379 [0]PETSC ERROR: #3 PetscLogNestedTreePrint() at /home/cklaij/ReFRESCO/trunk/build-libs/superbuild/petsc/src/src/sys/logging/handler/impls/nested/xmlviewer.c:384 [0]PETSC ERROR: #4 PetscLogNestedTreePrint() at /home/cklaij/ReFRESCO/trunk/build-libs/superbuild/petsc/src/src/sys/logging/handler/impls/nested/xmlviewer.c:384 [0]PETSC ERROR: #5 PetscLogNestedTreePrintTop() at /home/cklaij/ReFRESCO/trunk/build-libs/superbuild/petsc/src/src/sys/logging/handler/impls/nested/xmlviewer.c:420 [0]PETSC ERROR: #6 PetscLogHandlerView_Nested_XML() at /home/cklaij/ReFRESCO/trunk/build-libs/superbuild/petsc/src/src/sys/logging/handler/impls/nested/xmlviewer.c:443 [0]PETSC ERROR: #7 PetscLogHandlerView_Nested() at /home/cklaij/ReFRESCO/trunk/build-libs/superbuild/petsc/src/src/sys/logging/handler/impls/nested/lognested.c:405 [0]PETSC ERROR: #8 PetscLogHandlerView() at /home/cklaij/ReFRESCO/trunk/build-libs/superbuild/petsc/src/src/sys/logging/handler/interface/loghandler.c:342 [0]PETSC ERROR: #9 PetscLogView() at /home/cklaij/ReFRESCO/trunk/build-libs/superbuild/petsc/src/src/sys/logging/plog.c:2040 [0]PETSC ERROR: #10 /home/cklaij/ReFRESCO/trunk/Code/src/petsc_include_impl.F90:130 _____ dr. ir. Christiaan Klaij | senior researcher | Research & Development | CFD Development T +31 317 49 33 44 | C.Klaij at marin.nl | https://urldefense.us/v3/__http://www.marin.nl__;!!G_uCfscf7eWS!ZTevBCzVfxb2gys8ixhqH037tPe-V6RgQxNCzSAaDthsOn8eijZAz8SseXkEQlVZR4cFxfHrYrQoOXsuwIMz7Z8$ ___________________________________ From: Klaij, Christiaan Sent: Monday, April 28, 2025 3:53 PM To: Matthew Knepley Cc: Junchao Zhang; petsc-users at mcs.anl.gov; Isaac, Toby Subject: Re: [petsc-users] problem with nested logging Bisecting would be quite hard, it's not just the petsc version that changed, also other libs, compilers, even os components. 
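As an aside on the debug-mode message at the top of this report, "MPIU_Allreduce() called in different locations (code lines) on different processors" generally indicates that the ranks did not walk identical nested event trees while the XML output was being reduced, which is how an unbalanced begin/end or a rank-dependent event would show up. A purely hypothetical C pattern that can trip the check (the event name MyEvent and the rank test are invented for illustration, not taken from the reporter's code):

  // Hypothetical: the event exists only in rank 0's nested tree, so during
  // PetscLogView() the ranks reach the tree printer's collective reductions
  // from different code lines and the debug check fires.
  if (rank == 0) PetscCall(PetscLogEventBegin(MyEvent, 0, 0, 0, 0));
  /* ... local work ... */
  if (rank == 0) PetscCall(PetscLogEventEnd(MyEvent, 0, 0, 0, 0));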
Chris ________________________________________ From: Matthew Knepley Sent: Monday, April 28, 2025 3:06 PM To: Klaij, Christiaan Cc: Junchao Zhang; petsc-users at mcs.anl.gov; Isaac, Toby Subject: Re: [petsc-users] problem with nested logging You don't often get email from knepley at gmail.com. Learn why this is important On Mon, Apr 28, 2025 at 8:45?AM Klaij, Christiaan via petsc-users > wrote: I've tried adding a nested log viewer to src/snes/tutorials/ex70.c, but it does not replicate the problem and works fine. Perhaps it is related to fortran, since the manualpage of PetscLogNestedBegin says "No fortran support" (why? we've been using it in fortran ever since). Therefore I've tried adding it to src/snes/ex5f90.F90 and that also works fine. It seems I cannot replicate the problem in a small example, unfortunately. We cannot replicate it here. Is there a chance you could bisect to see what change is responsible? Thanks, Matt Chris ________________________________________ From: Junchao Zhang > Sent: Saturday, April 26, 2025 3:51 PM To: Klaij, Christiaan Cc: petsc-users at mcs.anl.gov; Isaac, Toby Subject: Re: [petsc-users] problem with nested logging You don't often get email from junchao.zhang at gmail.com. Learn why this is important Toby (Cc'ed) might know it. Or could you provide an example? --Junchao Zhang On Fri, Apr 25, 2025 at 3:31?AM Klaij, Christiaan via petsc-users >> wrote: We recently upgraded from 3.19.4 to 3.22.4 but face the problem below with the nested logging. Any ideas? Chris [1]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- [1]PETSC ERROR: General MPI error [1]PETSC ERROR: MPI error 1 MPI_ERR_BUFFER: invalid buffer pointer [1]PETSC ERROR: See https://urldefense.us/v3/__https://petsc.org/release/faq/__;!!G_uCfscf7eWS!cmLENZvO_Uydoa8ciUmsyX-F-QiJt9a2ZfQRUvQnRibGm7VE6PED7S_BDsUgjOzvPZIJyiIoJ8bLJk6gIT68pbk$ for trouble shooting. 
[1]PETSC ERROR: Petsc Release Version 3.22.4, Mar 01, 2025 [1]PETSC ERROR: refresco with 2 MPI process(es) and PETSC_ARCH on marclus3login2 by jwindt Fri Apr 25 08:52:30 2025 [1]PETSC ERROR: Configure options: --prefix=/home/jwindt/cmake_builds/refresco/install-libs-gnu --with-mpi-dir=/cm/shared/apps/openmpi/gcc/4.0.2 --with-x=0 --with-mpe=0 --with-debugging=0 --download-superlu_dist=https://urldefense.us/v3/__https://updates.marin.nl/refresco/libs/superlu_dist-8.1.2.tar.gz__;!!G_uCfscf7eWS!cmLENZvO_Uydoa8ciUmsyX-F-QiJt9a2ZfQRUvQnRibGm7VE6PED7S_BDsUgjOzvPZIJyiIoJ8bLJk6grH5BbeU$ --with-blaslapack-dir=/cm/shared/apps/intel/oneapi/mkl/2021.4.0 --download-parmetis=https://urldefense.us/v3/__https://updates.marin.nl/refresco/libs/parmetis-4.0.3-p9.tar.gz__;!!G_uCfscf7eWS!cmLENZvO_Uydoa8ciUmsyX-F-QiJt9a2ZfQRUvQnRibGm7VE6PED7S_BDsUgjOzvPZIJyiIoJ8bLJk6gw4-tEtY$ --download-metis=https://urldefense.us/v3/__https://updates.marin.nl/refresco/libs/metis-5.1.0-p11.tar.gz__;!!G_uCfscf7eWS!cmLENZvO_Uydoa8ciUmsyX-F-QiJt9a2ZfQRUvQnRibGm7VE6PED7S_BDsUgjOzvPZIJyiIoJ8bLJk6gHq4uYiY$ --with-packages-build-dir=/home/jwindt/cmake_builds/refresco/build-libs-gnu/superbuild --with-ssl=0 --with-shared-libraries=1 CFLAGS="-std=gnu11 -Wall -funroll-all-loops -O3 -DNDEBUG" CXXFLAGS="-std=gnu++14 -Wall -funroll-all-loops -O3 -DNDEBUG " COPTFLAGS="-std=gnu11 -Wall -funroll-all-loops -O3 -DNDEBUG" CXXOPTFLAGS="-std=gnu++14 -Wall -funroll-all-loops -O3 -DNDEBUG " FCFLAGS="-Wall -funroll-all-loops -ffree-line-length-0 -Wno-maybe-uninitialized -Wno-target-lifetime -Wno-unused-function -O3 -DNDEBUG" F90FLAGS="-Wall -funroll-all-loops -ffree-line-length-0 -Wno-maybe-uninitialized -Wno-target-lifetime -Wno-unused-function -O3 -DNDEBUG" FOPTFLAGS="-Wall -funroll-all-loops -ffree-line-length-0 -Wno-maybe-uninitialized -Wno-target-lifetime -Wno-unused-function -O3 -DNDEBUG" [1]PETSC ERROR: #1 PetscLogNestedTreePrint() at /home/jwindt/cmake_builds/refresco/build-libs-gnu/superbuild/petsc/src/src/sys/logging/handler/impls/nested/xmlviewer.c:330 [1]PETSC ERROR: #2 PetscLogNestedTreePrint() at /home/jwindt/cmake_builds/refresco/build-libs-gnu/superbuild/petsc/src/src/sys/logging/handler/impls/nested/xmlviewer.c:384 [1]PETSC ERROR: #3 PetscLogNestedTreePrint() at /home/jwindt/cmake_builds/refresco/build-libs-gnu/superbuild/petsc/src/src/sys/logging/handler/impls/nested/xmlviewer.c:384 [1]PETSC ERROR: #4 PetscLogNestedTreePrintTop() at /home/jwindt/cmake_builds/refresco/build-libs-gnu/superbuild/petsc/src/src/sys/logging/handler/impls/nested/xmlviewer.c:420 [1]PETSC ERROR: #5 PetscLogHandlerView_Nested_XML() at /home/jwindt/cmake_builds/refresco/build-libs-gnu/superbuild/petsc/src/src/sys/logging/handler/impls/nested/xmlviewer.c:443 [1]PETSC ERROR: #6 PetscLogHandlerView_Nested() at /home/jwindt/cmake_builds/refresco/build-libs-gnu/superbuild/petsc/src/src/sys/logging/handler/impls/nested/lognested.c:405 [1]PETSC ERROR: #7 PetscLogHandlerView() at /home/jwindt/cmake_builds/refresco/build-libs-gnu/superbuild/petsc/src/src/sys/logging/handler/interface/loghandler.c:342 [1]PETSC ERROR: #8 PetscLogView() at /home/jwindt/cmake_builds/refresco/build-libs-gnu/superbuild/petsc/src/src/sys/logging/plog.c:2040 -------------------------------------------------------------------------- MPI_ABORT was invoked on rank 1 in communicator MPI_COMM_WORLD with errorcode 98. NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes. You may or may not see output from other processes, depending on exactly when Open MPI kills them. 
-------------------------------------------------------------------------- [cid:ii_196725d1e2a809852191] dr. ir. Christiaan Klaij | senior researcher | Research & Development | CFD Development T +31 317 49 33 44 | C.Klaij at marin.nl> | https://urldefense.us/v3/__http://www.marin.nl__;!!G_uCfscf7eWS!cmLENZvO_Uydoa8ciUmsyX-F-QiJt9a2ZfQRUvQnRibGm7VE6PED7S_BDsUgjOzvPZIJyiIoJ8bLJk6g8TwMMcw$ [Facebook] [LinkedIn] [YouTube] -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!ZTevBCzVfxb2gys8ixhqH037tPe-V6RgQxNCzSAaDthsOn8eijZAz8SseXkEQlVZR4cFxfHrYrQoOXsuCXaj8lU$ -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image341996.png Type: image/png Size: 5004 bytes Desc: image341996.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image278410.png Type: image/png Size: 487 bytes Desc: image278410.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image182311.png Type: image/png Size: 504 bytes Desc: image182311.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image231002.png Type: image/png Size: 482 bytes Desc: image231002.png URL: From knepley at gmail.com Tue Apr 29 06:50:14 2025 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 29 Apr 2025 07:50:14 -0400 Subject: [petsc-users] problem with nested logging In-Reply-To: References: Message-ID: On Tue, Apr 29, 2025 at 6:50?AM Klaij, Christiaan wrote: > Here's a slightly better error message, obtained --with-debugging=1 > Is it possible that you have a mismatched EventBegin()/EventEnd() in your code? That could be why we cannot reproduce it here. Thanks, Matt > [0]PETSC ERROR: --------------------- Error Message > -------------------------------------------------------------- > [0]PETSC ERROR: Petsc has generated inconsistent data > [0]PETSC ERROR: MPIU_Allreduce() called in different locations (code > lines) on different processors > [0]PETSC ERROR: See https://urldefense.us/v3/__https://petsc.org/release/faq/__;!!G_uCfscf7eWS!eq1kivi8yzuD_s9lNwIeqkMg9dU4SC9grCBJnOhoALEg4iILbgybeYt5c__oPKrFt56W5KKM4Kab4LwmzLRI$ for trouble shooting. 
> [0]PETSC ERROR: Petsc Release Version 3.22.4, Mar 01, 2025 > [0]PETSC ERROR: ./refresco with 2 MPI process(es) and PETSC_ARCH on > marclus3login2 by cklaij Tue Apr 29 12:43:54 2025 > [0]PETSC ERROR: Configure options: > --prefix=/home/cklaij/ReFRESCO/trunk/install/extLibs > --with-mpi-dir=/cm/shared/apps/openmpi/gcc/4.0.2 --with-x=0 --with-mpe=0 > --with-debugging=1 --download-superlu_dist= > https://urldefense.us/v3/__https://updates.marin.nl/refresco/libs/superlu_dist-8.1.2.tar.gz__;!!G_uCfscf7eWS!eq1kivi8yzuD_s9lNwIeqkMg9dU4SC9grCBJnOhoALEg4iILbgybeYt5c__oPKrFt56W5KKM4Kab4MX1qQCC$ > --with-blaslapack-dir=/cm/shared/apps/intel/oneapi/mkl/2021.4.0 > --download-parmetis= > https://urldefense.us/v3/__https://updates.marin.nl/refresco/libs/parmetis-4.0.3-p9.tar.gz__;!!G_uCfscf7eWS!eq1kivi8yzuD_s9lNwIeqkMg9dU4SC9grCBJnOhoALEg4iILbgybeYt5c__oPKrFt56W5KKM4Kab4NVa8uiZ$ > --download-metis= > https://urldefense.us/v3/__https://updates.marin.nl/refresco/libs/metis-5.1.0-p11.tar.gz__;!!G_uCfscf7eWS!eq1kivi8yzuD_s9lNwIeqkMg9dU4SC9grCBJnOhoALEg4iILbgybeYt5c__oPKrFt56W5KKM4Kab4CbREwQV$ > --with-packages-build-dir=/home/cklaij/ReFRESCO/trunk/build-libs/superbuild > --with-ssl=0 --with-shared-libraries=1 > [0]PETSC ERROR: #1 PetscLogNestedTreePrintLine() at > /home/cklaij/ReFRESCO/trunk/build-libs/superbuild/petsc/src/src/sys/logging/handler/impls/nested/xmlviewer.c:289 > [0]PETSC ERROR: #2 PetscLogNestedTreePrint() at > /home/cklaij/ReFRESCO/trunk/build-libs/superbuild/petsc/src/src/sys/logging/handler/impls/nested/xmlviewer.c:379 > [0]PETSC ERROR: #3 PetscLogNestedTreePrint() at > /home/cklaij/ReFRESCO/trunk/build-libs/superbuild/petsc/src/src/sys/logging/handler/impls/nested/xmlviewer.c:384 > [0]PETSC ERROR: #4 PetscLogNestedTreePrint() at > /home/cklaij/ReFRESCO/trunk/build-libs/superbuild/petsc/src/src/sys/logging/handler/impls/nested/xmlviewer.c:384 > [0]PETSC ERROR: #5 PetscLogNestedTreePrintTop() at > /home/cklaij/ReFRESCO/trunk/build-libs/superbuild/petsc/src/src/sys/logging/handler/impls/nested/xmlviewer.c:420 > [0]PETSC ERROR: #6 PetscLogHandlerView_Nested_XML() at > /home/cklaij/ReFRESCO/trunk/build-libs/superbuild/petsc/src/src/sys/logging/handler/impls/nested/xmlviewer.c:443 > [0]PETSC ERROR: #7 PetscLogHandlerView_Nested() at > /home/cklaij/ReFRESCO/trunk/build-libs/superbuild/petsc/src/src/sys/logging/handler/impls/nested/lognested.c:405 > [0]PETSC ERROR: #8 PetscLogHandlerView() at > /home/cklaij/ReFRESCO/trunk/build-libs/superbuild/petsc/src/src/sys/logging/handler/interface/loghandler.c:342 > [0]PETSC ERROR: #9 PetscLogView() at > /home/cklaij/ReFRESCO/trunk/build-libs/superbuild/petsc/src/src/sys/logging/plog.c:2040 > [0]PETSC ERROR: #10 > /home/cklaij/ReFRESCO/trunk/Code/src/petsc_include_impl.F90:130 > > ________________________________________ > dr. ir. Christiaan Klaij > | senior researcher | Research & Development | CFD Development > T +31 317 49 33 44 <+31%20317%2049%2033%2044> | C.Klaij at marin.nl | > https://urldefense.us/v3/__http://www.marin.nl__;!!G_uCfscf7eWS!eq1kivi8yzuD_s9lNwIeqkMg9dU4SC9grCBJnOhoALEg4iILbgybeYt5c__oPKrFt56W5KKM4Kab4JYE0ch-$ > [image: Facebook] > [image: LinkedIn] > [image: YouTube] > > From: Klaij, Christiaan > Sent: Monday, April 28, 2025 3:53 PM > To: Matthew Knepley > Cc: Junchao Zhang; petsc-users at mcs.anl.gov; Isaac, Toby > Subject: Re: [petsc-users] problem with nested logging > > Bisecting would be quite hard, it's not just the petsc version that > changed, also other libs, compilers, even os components. 
> > Chris > > ________________________________________ > From: Matthew Knepley > Sent: Monday, April 28, 2025 3:06 PM > To: Klaij, Christiaan > Cc: Junchao Zhang; petsc-users at mcs.anl.gov; Isaac, Toby > Subject: Re: [petsc-users] problem with nested logging > > You don't often get email from knepley at gmail.com. Learn why this is > important > On Mon, Apr 28, 2025 at 8:45?AM Klaij, Christiaan via petsc-users < > petsc-users at mcs.anl.gov> wrote: > I've tried adding a nested log viewer to src/snes/tutorials/ex70.c, > but it does not replicate the problem and works fine. > > Perhaps it is related to fortran, since the manualpage of > PetscLogNestedBegin says "No fortran support" (why? we've been > using it in fortran ever since). > > Therefore I've tried adding it to src/snes/ex5f90.F90 and that > also works fine. It seems I cannot replicate the problem in a > small example, unfortunately. > > We cannot replicate it here. Is there a chance you could bisect to see > what change is responsible? > > Thanks, > > Matt > > Chris > > ________________________________________ > From: Junchao Zhang junchao.zhang at gmail.com>> > Sent: Saturday, April 26, 2025 3:51 PM > To: Klaij, Christiaan > Cc: petsc-users at mcs.anl.gov; Isaac, Toby > Subject: Re: [petsc-users] problem with nested logging > > You don't often get email from junchao.zhang at gmail.com junchao.zhang at gmail.com>. Learn why this is important< > https://urldefense.us/v3/__https://aka.ms/LearnAboutSenderIdentification__;!!G_uCfscf7eWS!cmLENZvO_Uydoa8ciUmsyX-F-QiJt9a2ZfQRUvQnRibGm7VE6PED7S_BDsUgjOzvPZIJyiIoJ8bLJk6gVgt1dmE$ > > > Toby (Cc'ed) might know it. Or could you provide an example? > > --Junchao Zhang > > > On Fri, Apr 25, 2025 at 3:31?AM Klaij, Christiaan via petsc-users < > petsc-users at mcs.anl.gov petsc-users at mcs.anl.gov>> wrote: > We recently upgraded from 3.19.4 to 3.22.4 but face the problem below with > the nested logging. Any ideas? > > Chris > > > [1]PETSC ERROR: --------------------- Error Message > -------------------------------------------------------------- > [1]PETSC ERROR: General MPI error > [1]PETSC ERROR: MPI error 1 MPI_ERR_BUFFER: invalid buffer pointer > [1]PETSC ERROR: See > https://urldefense.us/v3/__https://petsc.org/release/faq/__;!!G_uCfscf7eWS!cmLENZvO_Uydoa8ciUmsyX-F-QiJt9a2ZfQRUvQnRibGm7VE6PED7S_BDsUgjOzvPZIJyiIoJ8bLJk6gIT68pbk$ > < > https://urldefense.us/v3/__https://petsc.org/release/faq/__;!!G_uCfscf7eWS!biVlk6PKXoJvq5oVlmWdVJfW9tXv-JlwuWr3zg3jI5u1_jo8rvtZpEYnHO5RjdBqQEoqpqlJ3nusrFGM3UaqHzc$> > for trouble shooting. 
> [1]PETSC ERROR: Petsc Release Version 3.22.4, Mar 01, 2025 > [1]PETSC ERROR: refresco with 2 MPI process(es) and PETSC_ARCH on > marclus3login2 by jwindt Fri Apr 25 08:52:30 2025 > [1]PETSC ERROR: Configure options: > --prefix=/home/jwindt/cmake_builds/refresco/install-libs-gnu > --with-mpi-dir=/cm/shared/apps/openmpi/gcc/4.0.2 --with-x=0 --with-mpe=0 > --with-debugging=0 --download-superlu_dist= > https://urldefense.us/v3/__https://updates.marin.nl/refresco/libs/superlu_dist-8.1.2.tar.gz__;!!G_uCfscf7eWS!cmLENZvO_Uydoa8ciUmsyX-F-QiJt9a2ZfQRUvQnRibGm7VE6PED7S_BDsUgjOzvPZIJyiIoJ8bLJk6grH5BbeU$ > < > https://urldefense.us/v3/__https://updates.marin.nl/refresco/libs/superlu_dist-8.1.2.tar.gz__;!!G_uCfscf7eWS!biVlk6PKXoJvq5oVlmWdVJfW9tXv-JlwuWr3zg3jI5u1_jo8rvtZpEYnHO5RjdBqQEoqpqlJ3nusrFGM21-2D-o$> > --with-blaslapack-dir=/cm/shared/apps/intel/oneapi/mkl/2021.4.0 > --download-parmetis= > https://urldefense.us/v3/__https://updates.marin.nl/refresco/libs/parmetis-4.0.3-p9.tar.gz__;!!G_uCfscf7eWS!cmLENZvO_Uydoa8ciUmsyX-F-QiJt9a2ZfQRUvQnRibGm7VE6PED7S_BDsUgjOzvPZIJyiIoJ8bLJk6gw4-tEtY$ > < > https://urldefense.us/v3/__https://updates.marin.nl/refresco/libs/parmetis-4.0.3-p9.tar.gz__;!!G_uCfscf7eWS!biVlk6PKXoJvq5oVlmWdVJfW9tXv-JlwuWr3zg3jI5u1_jo8rvtZpEYnHO5RjdBqQEoqpqlJ3nusrFGMW0lYHko$> > --download-metis= > https://urldefense.us/v3/__https://updates.marin.nl/refresco/libs/metis-5.1.0-p11.tar.gz__;!!G_uCfscf7eWS!cmLENZvO_Uydoa8ciUmsyX-F-QiJt9a2ZfQRUvQnRibGm7VE6PED7S_BDsUgjOzvPZIJyiIoJ8bLJk6gHq4uYiY$ > < > https://urldefense.us/v3/__https://updates.marin.nl/refresco/libs/metis-5.1.0-p11.tar.gz__;!!G_uCfscf7eWS!biVlk6PKXoJvq5oVlmWdVJfW9tXv-JlwuWr3zg3jI5u1_jo8rvtZpEYnHO5RjdBqQEoqpqlJ3nusrFGMbSrIiUg$> > --with-packages-build-dir=/home/jwindt/cmake_builds/refresco/build-libs-gnu/superbuild > --with-ssl=0 --with-shared-libraries=1 CFLAGS="-std=gnu11 -Wall > -funroll-all-loops -O3 -DNDEBUG" CXXFLAGS="-std=gnu++14 -Wall > -funroll-all-loops -O3 -DNDEBUG " COPTFLAGS="-std=gnu11 -Wall > -funroll-all-loops -O3 -DNDEBUG" CXXOPTFLAGS="-std=gnu++14 -Wall > -funroll-all-loops -O3 -DNDEBUG " FCFLAGS="-Wall -funroll-all-loops > -ffree-line-length-0 -Wno-maybe-uninitialized -Wno-target-lifetime > -Wno-unused-function -O3 -DNDEBUG" F90FLAGS="-Wall -funroll-all-loops > -ffree-line-length-0 -Wno-maybe-uninitialized -Wno-target-lifetime > -Wno-unused-function -O3 -DNDEBUG" FOPTFLAGS="-Wall -funroll-all-loops > -ffree-line-length-0 -Wno-maybe-uninitialized -Wno-target-lifetime > -Wno-unused-function -O3 -DNDEBUG" > [1]PETSC ERROR: #1 PetscLogNestedTreePrint() at > /home/jwindt/cmake_builds/refresco/build-libs-gnu/superbuild/petsc/src/src/sys/logging/handler/impls/nested/xmlviewer.c:330 > [1]PETSC ERROR: #2 PetscLogNestedTreePrint() at > /home/jwindt/cmake_builds/refresco/build-libs-gnu/superbuild/petsc/src/src/sys/logging/handler/impls/nested/xmlviewer.c:384 > [1]PETSC ERROR: #3 PetscLogNestedTreePrint() at > /home/jwindt/cmake_builds/refresco/build-libs-gnu/superbuild/petsc/src/src/sys/logging/handler/impls/nested/xmlviewer.c:384 > [1]PETSC ERROR: #4 PetscLogNestedTreePrintTop() at > /home/jwindt/cmake_builds/refresco/build-libs-gnu/superbuild/petsc/src/src/sys/logging/handler/impls/nested/xmlviewer.c:420 > [1]PETSC ERROR: #5 PetscLogHandlerView_Nested_XML() at > /home/jwindt/cmake_builds/refresco/build-libs-gnu/superbuild/petsc/src/src/sys/logging/handler/impls/nested/xmlviewer.c:443 > [1]PETSC ERROR: #6 PetscLogHandlerView_Nested() at > 
/home/jwindt/cmake_builds/refresco/build-libs-gnu/superbuild/petsc/src/src/sys/logging/handler/impls/nested/lognested.c:405 > [1]PETSC ERROR: #7 PetscLogHandlerView() at > /home/jwindt/cmake_builds/refresco/build-libs-gnu/superbuild/petsc/src/src/sys/logging/handler/interface/loghandler.c:342 > [1]PETSC ERROR: #8 PetscLogView() at > /home/jwindt/cmake_builds/refresco/build-libs-gnu/superbuild/petsc/src/src/sys/logging/plog.c:2040 > -------------------------------------------------------------------------- > MPI_ABORT was invoked on rank 1 in communicator MPI_COMM_WORLD > with errorcode 98. > > NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes. > You may or may not see output from other processes, depending on > exactly when Open MPI kills them. > -------------------------------------------------------------------------- > [cid:ii_196725d1e2a809852191] > dr. ir. Christiaan Klaij > | senior researcher | Research & Development | CFD Development > T +31 317 49 33 44 | C.Klaij at marin.nl > > > | > https://urldefense.us/v3/__http://www.marin.nl__;!!G_uCfscf7eWS!cmLENZvO_Uydoa8ciUmsyX-F-QiJt9a2ZfQRUvQnRibGm7VE6PED7S_BDsUgjOzvPZIJyiIoJ8bLJk6g8TwMMcw$ > < > https://urldefense.us/v3/__https://www.marin.nl/__;!!G_uCfscf7eWS!biVlk6PKXoJvq5oVlmWdVJfW9tXv-JlwuWr3zg3jI5u1_jo8rvtZpEYnHO5RjdBqQEoqpqlJ3nusrFGM8tSNH1g$ > > > [Facebook]< > https://urldefense.us/v3/__https://www.facebook.com/marin.wageningen__;!!G_uCfscf7eWS!biVlk6PKXoJvq5oVlmWdVJfW9tXv-JlwuWr3zg3jI5u1_jo8rvtZpEYnHO5RjdBqQEoqpqlJ3nusrFGMcVCZ9hk$ > > > [LinkedIn]< > https://urldefense.us/v3/__https://www.linkedin.com/company/marin__;!!G_uCfscf7eWS!biVlk6PKXoJvq5oVlmWdVJfW9tXv-JlwuWr3zg3jI5u1_jo8rvtZpEYnHO5RjdBqQEoqpqlJ3nusrFGMIDBZW7k$ > > > [YouTube]< > https://urldefense.us/v3/__https://www.youtube.com/marinmultimedia__;!!G_uCfscf7eWS!biVlk6PKXoJvq5oVlmWdVJfW9tXv-JlwuWr3zg3jI5u1_jo8rvtZpEYnHO5RjdBqQEoqpqlJ3nusrFGMVKWos24$ > > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!eq1kivi8yzuD_s9lNwIeqkMg9dU4SC9grCBJnOhoALEg4iILbgybeYt5c__oPKrFt56W5KKM4Kab4Go3vEyJ$ > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!eq1kivi8yzuD_s9lNwIeqkMg9dU4SC9grCBJnOhoALEg4iILbgybeYt5c__oPKrFt56W5KKM4Kab4Go3vEyJ$ -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image341996.png Type: image/png Size: 5004 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image278410.png Type: image/png Size: 487 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image182311.png Type: image/png Size: 504 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: image231002.png Type: image/png Size: 482 bytes Desc: not available URL: From C.Klaij at marin.nl Tue Apr 29 08:17:18 2025 From: C.Klaij at marin.nl (Klaij, Christiaan) Date: Tue, 29 Apr 2025 13:17:18 +0000 Subject: [petsc-users] problem with nested logging In-Reply-To: References: Message-ID: I don't think so, we have tracing in place to detect mismatches. But as soon as I switch the tracing on, the error disappears... Same if I add a counter or print statements before and after EventBegin/End. Looks like a memory corruption problem, maybe nothing to do with petsc despite the error message. Chris ________________________________________ From: Matthew Knepley Sent: Tuesday, April 29, 2025 1:50 PM To: Klaij, Christiaan Cc: Junchao Zhang; petsc-users at mcs.anl.gov; Isaac, Toby Subject: Re: [petsc-users] problem with nested logging On Tue, Apr 29, 2025 at 6:50?AM Klaij, Christiaan > wrote: Here's a slightly better error message, obtained --with-debugging=1 Is it possible that you have a mismatched EventBegin()/EventEnd() in your code? That could be why we cannot reproduce it here. Thanks, Matt [0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- [0]PETSC ERROR: Petsc has generated inconsistent data [0]PETSC ERROR: MPIU_Allreduce() called in different locations (code lines) on different processors [0]PETSC ERROR: See https://urldefense.us/v3/__https://petsc.org/release/faq/__;!!G_uCfscf7eWS!fFq1daAFFpKhBA_NWU3sd2QJe_S44rklqeRi0TB57XI0nQsh9jgy8iw3JNGpBbd21zqvO3QlGTLa7kjggvQAzPU$ for trouble shooting. [0]PETSC ERROR: Petsc Release Version 3.22.4, Mar 01, 2025 [0]PETSC ERROR: ./refresco with 2 MPI process(es) and PETSC_ARCH on marclus3login2 by cklaij Tue Apr 29 12:43:54 2025 [0]PETSC ERROR: Configure options: --prefix=/home/cklaij/ReFRESCO/trunk/install/extLibs --with-mpi-dir=/cm/shared/apps/openmpi/gcc/4.0.2 --with-x=0 --with-mpe=0 --with-debugging=1 --download-superlu_dist=https://urldefense.us/v3/__https://updates.marin.nl/refresco/libs/superlu_dist-8.1.2.tar.gz__;!!G_uCfscf7eWS!fFq1daAFFpKhBA_NWU3sd2QJe_S44rklqeRi0TB57XI0nQsh9jgy8iw3JNGpBbd21zqvO3QlGTLa7kjgVgVAJPM$ --with-blaslapack-dir=/cm/shared/apps/intel/oneapi/mkl/2021.4.0 --download-parmetis=https://urldefense.us/v3/__https://updates.marin.nl/refresco/libs/parmetis-4.0.3-p9.tar.gz__;!!G_uCfscf7eWS!fFq1daAFFpKhBA_NWU3sd2QJe_S44rklqeRi0TB57XI0nQsh9jgy8iw3JNGpBbd21zqvO3QlGTLa7kjgo2JWTO4$ --download-metis=https://urldefense.us/v3/__https://updates.marin.nl/refresco/libs/metis-5.1.0-p11.tar.gz__;!!G_uCfscf7eWS!fFq1daAFFpKhBA_NWU3sd2QJe_S44rklqeRi0TB57XI0nQsh9jgy8iw3JNGpBbd21zqvO3QlGTLa7kjgX9ZMYJA$ --with-packages-build-dir=/home/cklaij/ReFRESCO/trunk/build-libs/superbuild --with-ssl=0 --with-shared-libraries=1 [0]PETSC ERROR: #1 PetscLogNestedTreePrintLine() at /home/cklaij/ReFRESCO/trunk/build-libs/superbuild/petsc/src/src/sys/logging/handler/impls/nested/xmlviewer.c:289 [0]PETSC ERROR: #2 PetscLogNestedTreePrint() at /home/cklaij/ReFRESCO/trunk/build-libs/superbuild/petsc/src/src/sys/logging/handler/impls/nested/xmlviewer.c:379 [0]PETSC ERROR: #3 PetscLogNestedTreePrint() at /home/cklaij/ReFRESCO/trunk/build-libs/superbuild/petsc/src/src/sys/logging/handler/impls/nested/xmlviewer.c:384 [0]PETSC ERROR: #4 PetscLogNestedTreePrint() at /home/cklaij/ReFRESCO/trunk/build-libs/superbuild/petsc/src/src/sys/logging/handler/impls/nested/xmlviewer.c:384 [0]PETSC ERROR: #5 PetscLogNestedTreePrintTop() at 
/home/cklaij/ReFRESCO/trunk/build-libs/superbuild/petsc/src/src/sys/logging/handler/impls/nested/xmlviewer.c:420 [0]PETSC ERROR: #6 PetscLogHandlerView_Nested_XML() at /home/cklaij/ReFRESCO/trunk/build-libs/superbuild/petsc/src/src/sys/logging/handler/impls/nested/xmlviewer.c:443 [0]PETSC ERROR: #7 PetscLogHandlerView_Nested() at /home/cklaij/ReFRESCO/trunk/build-libs/superbuild/petsc/src/src/sys/logging/handler/impls/nested/lognested.c:405 [0]PETSC ERROR: #8 PetscLogHandlerView() at /home/cklaij/ReFRESCO/trunk/build-libs/superbuild/petsc/src/src/sys/logging/handler/interface/loghandler.c:342 [0]PETSC ERROR: #9 PetscLogView() at /home/cklaij/ReFRESCO/trunk/build-libs/superbuild/petsc/src/src/sys/logging/plog.c:2040 [0]PETSC ERROR: #10 /home/cklaij/ReFRESCO/trunk/Code/src/petsc_include_impl.F90:130 ________________________________________ [cid:ii_19681617e7812ff9cfc1] dr. ir. Christiaan Klaij | senior researcher | Research & Development | CFD Development T +31 317 49 33 44 | C.Klaij at marin.nl | https://urldefense.us/v3/__http://www.marin.nl__;!!G_uCfscf7eWS!fFq1daAFFpKhBA_NWU3sd2QJe_S44rklqeRi0TB57XI0nQsh9jgy8iw3JNGpBbd21zqvO3QlGTLa7kjgJooxJhg$ [Facebook] [LinkedIn] [YouTube] From: Klaij, Christiaan > Sent: Monday, April 28, 2025 3:53 PM To: Matthew Knepley Cc: Junchao Zhang; petsc-users at mcs.anl.gov; Isaac, Toby Subject: Re: [petsc-users] problem with nested logging Bisecting would be quite hard, it's not just the petsc version that changed, also other libs, compilers, even os components. Chris ________________________________________ From: Matthew Knepley > Sent: Monday, April 28, 2025 3:06 PM To: Klaij, Christiaan Cc: Junchao Zhang; petsc-users at mcs.anl.gov; Isaac, Toby Subject: Re: [petsc-users] problem with nested logging You don't often get email from knepley at gmail.com. Learn why this is important On Mon, Apr 28, 2025 at 8:45?AM Klaij, Christiaan via petsc-users >> wrote: I've tried adding a nested log viewer to src/snes/tutorials/ex70.c, but it does not replicate the problem and works fine. Perhaps it is related to fortran, since the manualpage of PetscLogNestedBegin says "No fortran support" (why? we've been using it in fortran ever since). Therefore I've tried adding it to src/snes/ex5f90.F90 and that also works fine. It seems I cannot replicate the problem in a small example, unfortunately. We cannot replicate it here. Is there a chance you could bisect to see what change is responsible? Thanks, Matt Chris ________________________________________ From: Junchao Zhang >> Sent: Saturday, April 26, 2025 3:51 PM To: Klaij, Christiaan Cc: petsc-users at mcs.anl.gov>; Isaac, Toby Subject: Re: [petsc-users] problem with nested logging You don't often get email from junchao.zhang at gmail.com>. Learn why this is important Toby (Cc'ed) might know it. Or could you provide an example? --Junchao Zhang On Fri, Apr 25, 2025 at 3:31?AM Klaij, Christiaan via petsc-users >>>> wrote: We recently upgraded from 3.19.4 to 3.22.4 but face the problem below with the nested logging. Any ideas? Chris [1]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- [1]PETSC ERROR: General MPI error [1]PETSC ERROR: MPI error 1 MPI_ERR_BUFFER: invalid buffer pointer [1]PETSC ERROR: See https://urldefense.us/v3/__https://petsc.org/release/faq/__;!!G_uCfscf7eWS!cmLENZvO_Uydoa8ciUmsyX-F-QiJt9a2ZfQRUvQnRibGm7VE6PED7S_BDsUgjOzvPZIJyiIoJ8bLJk6gIT68pbk$ for trouble shooting. 
[1]PETSC ERROR: Petsc Release Version 3.22.4, Mar 01, 2025 [1]PETSC ERROR: refresco with 2 MPI process(es) and PETSC_ARCH on marclus3login2 by jwindt Fri Apr 25 08:52:30 2025 [1]PETSC ERROR: Configure options: --prefix=/home/jwindt/cmake_builds/refresco/install-libs-gnu --with-mpi-dir=/cm/shared/apps/openmpi/gcc/4.0.2 --with-x=0 --with-mpe=0 --with-debugging=0 --download-superlu_dist=https://urldefense.us/v3/__https://updates.marin.nl/refresco/libs/superlu_dist-8.1.2.tar.gz__;!!G_uCfscf7eWS!cmLENZvO_Uydoa8ciUmsyX-F-QiJt9a2ZfQRUvQnRibGm7VE6PED7S_BDsUgjOzvPZIJyiIoJ8bLJk6grH5BbeU$ --with-blaslapack-dir=/cm/shared/apps/intel/oneapi/mkl/2021.4.0 --download-parmetis=https://urldefense.us/v3/__https://updates.marin.nl/refresco/libs/parmetis-4.0.3-p9.tar.gz__;!!G_uCfscf7eWS!cmLENZvO_Uydoa8ciUmsyX-F-QiJt9a2ZfQRUvQnRibGm7VE6PED7S_BDsUgjOzvPZIJyiIoJ8bLJk6gw4-tEtY$ --download-metis=https://urldefense.us/v3/__https://updates.marin.nl/refresco/libs/metis-5.1.0-p11.tar.gz__;!!G_uCfscf7eWS!cmLENZvO_Uydoa8ciUmsyX-F-QiJt9a2ZfQRUvQnRibGm7VE6PED7S_BDsUgjOzvPZIJyiIoJ8bLJk6gHq4uYiY$ --with-packages-build-dir=/home/jwindt/cmake_builds/refresco/build-libs-gnu/superbuild --with-ssl=0 --with-shared-libraries=1 CFLAGS="-std=gnu11 -Wall -funroll-all-loops -O3 -DNDEBUG" CXXFLAGS="-std=gnu++14 -Wall -funroll-all-loops -O3 -DNDEBUG " COPTFLAGS="-std=gnu11 -Wall -funroll-all-loops -O3 -DNDEBUG" CXXOPTFLAGS="-std=gnu++14 -Wall -funroll-all-loops -O3 -DNDEBUG " FCFLAGS="-Wall -funroll-all-loops -ffree-line-length-0 -Wno-maybe-uninitialized -Wno-target-lifetime -Wno-unused-function -O3 -DNDEBUG" F90FLAGS="-Wall -funroll-all-loops -ffree-line-length-0 -Wno-maybe-uninitialized -Wno-target-lifetime -Wno-unused-function -O3 -DNDEBUG" FOPTFLAGS="-Wall -funroll-all-loops -ffree-line-length-0 -Wno-maybe-uninitialized -Wno-target-lifetime -Wno-unused-function -O3 -DNDEBUG" [1]PETSC ERROR: #1 PetscLogNestedTreePrint() at /home/jwindt/cmake_builds/refresco/build-libs-gnu/superbuild/petsc/src/src/sys/logging/handler/impls/nested/xmlviewer.c:330 [1]PETSC ERROR: #2 PetscLogNestedTreePrint() at /home/jwindt/cmake_builds/refresco/build-libs-gnu/superbuild/petsc/src/src/sys/logging/handler/impls/nested/xmlviewer.c:384 [1]PETSC ERROR: #3 PetscLogNestedTreePrint() at /home/jwindt/cmake_builds/refresco/build-libs-gnu/superbuild/petsc/src/src/sys/logging/handler/impls/nested/xmlviewer.c:384 [1]PETSC ERROR: #4 PetscLogNestedTreePrintTop() at /home/jwindt/cmake_builds/refresco/build-libs-gnu/superbuild/petsc/src/src/sys/logging/handler/impls/nested/xmlviewer.c:420 [1]PETSC ERROR: #5 PetscLogHandlerView_Nested_XML() at /home/jwindt/cmake_builds/refresco/build-libs-gnu/superbuild/petsc/src/src/sys/logging/handler/impls/nested/xmlviewer.c:443 [1]PETSC ERROR: #6 PetscLogHandlerView_Nested() at /home/jwindt/cmake_builds/refresco/build-libs-gnu/superbuild/petsc/src/src/sys/logging/handler/impls/nested/lognested.c:405 [1]PETSC ERROR: #7 PetscLogHandlerView() at /home/jwindt/cmake_builds/refresco/build-libs-gnu/superbuild/petsc/src/src/sys/logging/handler/interface/loghandler.c:342 [1]PETSC ERROR: #8 PetscLogView() at /home/jwindt/cmake_builds/refresco/build-libs-gnu/superbuild/petsc/src/src/sys/logging/plog.c:2040 -------------------------------------------------------------------------- MPI_ABORT was invoked on rank 1 in communicator MPI_COMM_WORLD with errorcode 98. NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes. You may or may not see output from other processes, depending on exactly when Open MPI kills them. 
-------------------------------------------------------------------------- [cid:ii_196725d1e2a809852191] dr. ir. Christiaan Klaij | senior researcher | Research & Development | CFD Development T +31 317 49 33 44 | C.Klaij at marin.nl>>> | https://urldefense.us/v3/__http://www.marin.nl__;!!G_uCfscf7eWS!cmLENZvO_Uydoa8ciUmsyX-F-QiJt9a2ZfQRUvQnRibGm7VE6PED7S_BDsUgjOzvPZIJyiIoJ8bLJk6g8TwMMcw$ [Facebook] [LinkedIn] [YouTube] -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!fFq1daAFFpKhBA_NWU3sd2QJe_S44rklqeRi0TB57XI0nQsh9jgy8iw3JNGpBbd21zqvO3QlGTLa7kjg539kFLg$ -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!fFq1daAFFpKhBA_NWU3sd2QJe_S44rklqeRi0TB57XI0nQsh9jgy8iw3JNGpBbd21zqvO3QlGTLa7kjg539kFLg$ -------------- next part -------------- A non-text attachment was scrubbed... Name: image341996.png Type: image/png Size: 5004 bytes Desc: image341996.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image278410.png Type: image/png Size: 487 bytes Desc: image278410.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image182311.png Type: image/png Size: 504 bytes Desc: image182311.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image231002.png Type: image/png Size: 482 bytes Desc: image231002.png URL: From rlmackie862 at gmail.com Tue Apr 29 08:33:48 2025 From: rlmackie862 at gmail.com (Randall Mackie) Date: Tue, 29 Apr 2025 08:33:48 -0500 Subject: [petsc-users] problem with nested logging In-Reply-To: References: Message-ID: We had a similar issue last year that we eventually tracked down to a bug in Intel MPI AllReduce, which was around the same version you are using. Can you try a different MPI or the latest Intel One API and see if your error clears? Randy On Tue, Apr 29, 2025 at 8:17?AM Klaij, Christiaan via petsc-users < petsc-users at mcs.anl.gov> wrote: > I don't think so, we have tracing in place to detect mismatches. But as > soon as I switch the tracing on, the error disappears... Same if I add a > counter or print statements before and after EventBegin/End. Looks like a > memory corruption problem, maybe nothing to do with petsc despite the error > message. > > Chris > > ________________________________________ > From: Matthew Knepley > Sent: Tuesday, April 29, 2025 1:50 PM > To: Klaij, Christiaan > Cc: Junchao Zhang; petsc-users at mcs.anl.gov; Isaac, Toby > Subject: Re: [petsc-users] problem with nested logging > > On Tue, Apr 29, 2025 at 6:50?AM Klaij, Christiaan > wrote: > Here's a slightly better error message, obtained --with-debugging=1 > > Is it possible that you have a mismatched EventBegin()/EventEnd() in your > code? That could be why we cannot reproduce it here. 
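For reference, here is a minimal, illustrative sketch (not code from this thread) of the balanced, collective event logging that the nested handler expects; the event name "UserSolve" and class name "User" are made up for the example. A Begin/End pair that only some ranks reach, or that nests differently across ranks, gives each rank a different event tree, which is the kind of situation behind the "MPIU_Allreduce() called in different locations" failure above.

// ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
// Illustrative sketch only: every PetscLogEventBegin is matched by a
// PetscLogEventEnd, reached by all ranks in the same order.
#include <petscsys.h>

int main(int argc, char **argv)
{
  PetscClassId  classid;
  PetscLogEvent USER_EVENT;

  PetscFunctionBeginUser;
  PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));
  PetscCall(PetscLogNestedBegin()); // or run with -log_view :profile.xml:ascii_xml
  PetscCall(PetscClassIdRegister("User", &classid));
  PetscCall(PetscLogEventRegister("UserSolve", classid, &USER_EVENT));

  PetscCall(PetscLogEventBegin(USER_EVENT, 0, 0, 0, 0));
  // ... work performed collectively on PETSC_COMM_WORLD ...
  PetscCall(PetscLogEventEnd(USER_EVENT, 0, 0, 0, 0)); // must pair with the Begin on every rank

  PetscCall(PetscFinalize()); // the nested -log_view output is written here
  return 0;
}
// ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~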
> -- Norbert Wiener > > > https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!fFq1daAFFpKhBA_NWU3sd2QJe_S44rklqeRi0TB57XI0nQsh9jgy8iw3JNGpBbd21zqvO3QlGTLa7kjg539kFLg$ > < > https://urldefense.us/v3/__http://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!fFq1daAFFpKhBA_NWU3sd2QJe_S44rklqeRi0TB57XI0nQsh9jgy8iw3JNGpBbd21zqvO3QlGTLa7kjgMsu6hhA$ > > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > > https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!fFq1daAFFpKhBA_NWU3sd2QJe_S44rklqeRi0TB57XI0nQsh9jgy8iw3JNGpBbd21zqvO3QlGTLa7kjg539kFLg$ > < > https://urldefense.us/v3/__http://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!fFq1daAFFpKhBA_NWU3sd2QJe_S44rklqeRi0TB57XI0nQsh9jgy8iw3JNGpBbd21zqvO3QlGTLa7kjgMsu6hhA$ > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From C.Klaij at marin.nl Tue Apr 29 08:58:04 2025 From: C.Klaij at marin.nl (Klaij, Christiaan) Date: Tue, 29 Apr 2025 13:58:04 +0000 Subject: [petsc-users] problem with nested logging In-Reply-To: References: Message-ID: Well, the error below only shows-up thanks to openmpi and gnu compilers. With the intel mpi and compilers it just hangs (tried oneapi 2023.1.0). In which version was that bug fixed? Chris _____ dr. ir. Christiaan Klaij | senior researcher | Research & Development | CFD Development T +31 317 49 33 44 | C.Klaij at marin.nl | https://urldefense.us/v3/__http://www.marin.nl__;!!G_uCfscf7eWS!ahkfUsZWHswDZhSoTMwW9ai90SCe9vYv11x8vld6yKoKZiri6UvJszvi3BYMed3n8fhgSIdwvDufO6fNRjs6nOM$ ___________________________________ From: Randall Mackie Sent: Tuesday, April 29, 2025 3:33 PM To: Klaij, Christiaan Cc: Matthew Knepley; petsc-users at mcs.anl.gov; Isaac, Toby Subject: Re: [petsc-users] problem with nested logging You don't often get email from rlmackie862 at gmail.com. Learn why this is important We had a similar issue last year that we eventually tracked down to a bug in Intel MPI AllReduce, which was around the same version you are using. Can you try a different MPI or the latest Intel One API and see if your error clears? Randy On Tue, Apr 29, 2025 at 8:17?AM Klaij, Christiaan via petsc-users > wrote: I don't think so, we have tracing in place to detect mismatches. But as soon as I switch the tracing on, the error disappears... Same if I add a counter or print statements before and after EventBegin/End. Looks like a memory corruption problem, maybe nothing to do with petsc despite the error message. Chris ________________________________________ From: Matthew Knepley > Sent: Tuesday, April 29, 2025 1:50 PM To: Klaij, Christiaan Cc: Junchao Zhang; petsc-users at mcs.anl.gov; Isaac, Toby Subject: Re: [petsc-users] problem with nested logging On Tue, Apr 29, 2025 at 6:50?AM Klaij, Christiaan >> wrote: Here's a slightly better error message, obtained --with-debugging=1 Is it possible that you have a mismatched EventBegin()/EventEnd() in your code? That could be why we cannot reproduce it here. 
-------------------------------------------------------------------------- [cid:ii_196725d1e2a809852191] dr. ir. Christiaan Klaij | senior researcher | Research & Development | CFD Development T +31 317 49 33 44 | C.Klaij at marin.nl>>>>>>> | https://urldefense.us/v3/__http://www.marin.nl__;!!G_uCfscf7eWS!cmLENZvO_Uydoa8ciUmsyX-F-QiJt9a2ZfQRUvQnRibGm7VE6PED7S_BDsUgjOzvPZIJyiIoJ8bLJk6g8TwMMcw$ [Facebook] [LinkedIn] [YouTube] -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!fFq1daAFFpKhBA_NWU3sd2QJe_S44rklqeRi0TB57XI0nQsh9jgy8iw3JNGpBbd21zqvO3QlGTLa7kjg539kFLg$ -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!fFq1daAFFpKhBA_NWU3sd2QJe_S44rklqeRi0TB57XI0nQsh9jgy8iw3JNGpBbd21zqvO3QlGTLa7kjg539kFLg$ -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image374583.png Type: image/png Size: 5004 bytes Desc: image374583.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image322757.png Type: image/png Size: 487 bytes Desc: image322757.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image512282.png Type: image/png Size: 504 bytes Desc: image512282.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image090879.png Type: image/png Size: 482 bytes Desc: image090879.png URL: From rlmackie862 at gmail.com Tue Apr 29 10:21:09 2025 From: rlmackie862 at gmail.com (Randall Mackie) Date: Tue, 29 Apr 2025 10:21:09 -0500 Subject: [petsc-users] problem with nested logging In-Reply-To: References: Message-ID: ah okay, I missed that this was found using openmpi. then it?s probably not the same issue we had. I can?t remember in which version it was fixed (I?m away from my work computer)?.I do know in our case openmpi and the latest Intel One API work fine. Randy > On Apr 29, 2025, at 8:58?AM, Klaij, Christiaan wrote: > > Well, the error below only shows-up thanks to openmpi and gnu compilers. > With the intel mpi and compilers it just hangs (tried oneapi 2023.1.0). In which version was that bug fixed? > > Chris > > ________________________________________ > > dr. ir.???? Christiaan Klaij > | senior researcher | Research & Development | CFD Development > T +31?317?49?33?44 | C.Klaij at marin.nl | https://urldefense.us/v3/__http://www.marin.nl__;!!G_uCfscf7eWS!bW2X4VgowGOLrAASgbFOR_Mh6HW4HtWqrdtpsvpnpiFrIwki34JOGyih-h-1bvgb-Bh4EdWRUoVqQW7s6CzTuZcF2A$ > > > > From: Randall Mackie > Sent: Tuesday, April 29, 2025 3:33 PM > To: Klaij, Christiaan > Cc: Matthew Knepley; petsc-users at mcs.anl.gov; Isaac, Toby > Subject: Re: [petsc-users] problem with nested logging > > You don't often get email from rlmackie862 at gmail.com. Learn why this is important > We had a similar issue last year that we eventually tracked down to a bug in Intel MPI AllReduce, which was around the same version you are using. > > Can you try a different MPI or the latest Intel One API and see if your error clears? 
> You may or may not see output from other processes, depending on > exactly when Open MPI kills them. > -------------------------------------------------------------------------- > [cid:ii_196725d1e2a809852191] > dr. ir. Christiaan Klaij > | senior researcher | Research & Development | CFD Development > T +31 317 49 33 44 | C.Klaij at marin.nl>>>>>>> | https://urldefense.us/v3/__http://www.marin.nl__;!!G_uCfscf7eWS!cmLENZvO_Uydoa8ciUmsyX-F-QiJt9a2ZfQRUvQnRibGm7VE6PED7S_BDsUgjOzvPZIJyiIoJ8bLJk6g8TwMMcw$ > [Facebook] > [LinkedIn] > [YouTube] > > > -- > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. > -- Norbert Wiener > > https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!fFq1daAFFpKhBA_NWU3sd2QJe_S44rklqeRi0TB57XI0nQsh9jgy8iw3JNGpBbd21zqvO3QlGTLa7kjg539kFLg$ > > > -- > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. > -- Norbert Wiener > > https://urldefense.us/v3/__https://www.cse.buffalo.edu/*knepley/__;fg!!G_uCfscf7eWS!fFq1daAFFpKhBA_NWU3sd2QJe_S44rklqeRi0TB57XI0nQsh9jgy8iw3JNGpBbd21zqvO3QlGTLa7kjg539kFLg$ -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefano.zampini at gmail.com Tue Apr 29 11:12:44 2025 From: stefano.zampini at gmail.com (Stefano Zampini) Date: Tue, 29 Apr 2025 19:12:44 +0300 Subject: [petsc-users] problem with nested logging In-Reply-To: References: Message-ID: Can you try using -log_sync ? This should check every entry/exit points of logged Events and complain if something is not collectively called Stefano On Tue, Apr 29, 2025, 18:21 Randall Mackie wrote: > ah okay, I missed that this was found using openmpi. > > then it?s probably not the same issue we had. > > I can?t remember in which version it was fixed (I?m away from my work > computer)?.I do know in our case openmpi and the latest Intel One API work > fine. > > Randy > > On Apr 29, 2025, at 8:58?AM, Klaij, Christiaan wrote: > > Well, the error below only shows-up thanks to openmpi and gnu compilers. > With the intel mpi and compilers it just hangs (tried oneapi 2023.1.0). In > which version was that bug fixed? > > Chris > > ________________________________________ > > dr. ir.???? Christiaan Klaij > | senior researcher | Research & Development | CFD Development > T +31 317 49 33 44 <+31%20317%2049%2033%2044> | C.Klaij at marin.nl | > https://urldefense.us/v3/__http://www.marin.nl__;!!G_uCfscf7eWS!eB70n8KgdMhPikh9py0UMeBU6SlvRyH5a0-OAfV1OalJWgzJyfY8qbgnv2tSg30ERG1-ZhMPAczudbrMXfiDZBcFySbYcYI$ > > > > > > > > > From: Randall Mackie > Sent: Tuesday, April 29, 2025 3:33 PM > To: Klaij, Christiaan > Cc: Matthew Knepley; petsc-users at mcs.anl.gov; Isaac, Toby > Subject: Re: [petsc-users] problem with nested logging > > You don't often get email from rlmackie862 at gmail.com. Learn why this is > important > > > We had a similar issue last year that we eventually tracked down to a bug > in Intel MPI AllReduce, which was around the same version you are using. > > Can you try a different MPI or the latest Intel One API and see if your > error clears? > > Randy > > On Tue, Apr 29, 2025 at 8:17?AM Klaij, Christiaan via petsc-users < > petsc-users at mcs.anl.gov> wrote: > I don't think so, we have tracing in place to detect mismatches. But as > soon as I switch the tracing on, the error disappears... 
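(For completeness: combining the suggestions above on the command line would look something like the line below, using the option names from the PETSc manual pages; the executable name is simply the one from the tracebacks and the output file name is arbitrary.

mpiexec -n 2 ./refresco -log_sync -log_view :profile.xml:ascii_xml

Here -log_sync enables a synchronization barrier at the start of each logged event, and the ascii_xml viewer writes the nested profile to profile.xml.)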
Same if I add a > counter or print statements before and after EventBegin/End. Looks like a > memory corruption problem, maybe nothing to do with petsc despite the error > message. > > Chris > > ________________________________________ > From: Matthew Knepley > > Sent: Tuesday, April 29, 2025 1:50 PM > To: Klaij, Christiaan > Cc: Junchao Zhang; petsc-users at mcs.anl.gov; > Isaac, Toby > Subject: Re: [petsc-users] problem with nested logging > > On Tue, Apr 29, 2025 at 6:50?AM Klaij, Christiaan >> > wrote: > Here's a slightly better error message, obtained --with-debugging=1 > > Is it possible that you have a mismatched EventBegin()/EventEnd() in your > code? That could be why we cannot reproduce it here. > > Thanks, > > Matt > > [0]PETSC ERROR: --------------------- Error Message > -------------------------------------------------------------- > [0]PETSC ERROR: Petsc has generated inconsistent data > [0]PETSC ERROR: MPIU_Allreduce() called in different locations (code > lines) on different processors > [0]PETSC ERROR: See > https://urldefense.us/v3/__https://petsc.org/release/faq/__;!!G_uCfscf7eWS!fFq1daAFFpKhBA_NWU3sd2QJe_S44rklqeRi0TB57XI0nQsh9jgy8iw3JNGpBbd21zqvO3QlGTLa7kjggvQAzPU$ > for trouble shooting. > [0]PETSC ERROR: Petsc Release Version 3.22.4, Mar 01, 2025 > [0]PETSC ERROR: ./refresco with 2 MPI process(es) and PETSC_ARCH on > marclus3login2 by cklaij Tue Apr 29 12:43:54 2025 > [0]PETSC ERROR: Configure options: > --prefix=/home/cklaij/ReFRESCO/trunk/install/extLibs > --with-mpi-dir=/cm/shared/apps/openmpi/gcc/4.0.2 --with-x=0 --with-mpe=0 > --with-debugging=1 --download-superlu_dist= > https://urldefense.us/v3/__https://updates.marin.nl/refresco/libs/superlu_dist-8.1.2.tar.gz__;!!G_uCfscf7eWS!fFq1daAFFpKhBA_NWU3sd2QJe_S44rklqeRi0TB57XI0nQsh9jgy8iw3JNGpBbd21zqvO3QlGTLa7kjgVgVAJPM$ > --with-blaslapack-dir=/cm/shared/apps/intel/oneapi/mkl/2021.4.0 > --download-parmetis= > https://urldefense.us/v3/__https://updates.marin.nl/refresco/libs/parmetis-4.0.3-p9.tar.gz__;!!G_uCfscf7eWS!fFq1daAFFpKhBA_NWU3sd2QJe_S44rklqeRi0TB57XI0nQsh9jgy8iw3JNGpBbd21zqvO3QlGTLa7kjgo2JWTO4$ > --download-metis= > https://urldefense.us/v3/__https://updates.marin.nl/refresco/libs/metis-5.1.0-p11.tar.gz__;!!G_uCfscf7eWS!fFq1daAFFpKhBA_NWU3sd2QJe_S44rklqeRi0TB57XI0nQsh9jgy8iw3JNGpBbd21zqvO3QlGTLa7kjgX9ZMYJA$ > --with-packages-build-dir=/home/cklaij/ReFRESCO/trunk/build-libs/superbuild > --with-ssl=0 --with-shared-libraries=1 > [0]PETSC ERROR: #1 PetscLogNestedTreePrintLine() at > /home/cklaij/ReFRESCO/trunk/build-libs/superbuild/petsc/src/src/sys/logging/handler/impls/nested/xmlviewer.c:289 > [0]PETSC ERROR: #2 PetscLogNestedTreePrint() at > /home/cklaij/ReFRESCO/trunk/build-libs/superbuild/petsc/src/src/sys/logging/handler/impls/nested/xmlviewer.c:379 > [0]PETSC ERROR: #3 PetscLogNestedTreePrint() at > /home/cklaij/ReFRESCO/trunk/build-libs/superbuild/petsc/src/src/sys/logging/handler/impls/nested/xmlviewer.c:384 > [0]PETSC ERROR: #4 PetscLogNestedTreePrint() at > /home/cklaij/ReFRESCO/trunk/build-libs/superbuild/petsc/src/src/sys/logging/handler/impls/nested/xmlviewer.c:384 > [0]PETSC ERROR: #5 PetscLogNestedTreePrintTop() at > /home/cklaij/ReFRESCO/trunk/build-libs/superbuild/petsc/src/src/sys/logging/handler/impls/nested/xmlviewer.c:420 > [0]PETSC ERROR: #6 PetscLogHandlerView_Nested_XML() at > /home/cklaij/ReFRESCO/trunk/build-libs/superbuild/petsc/src/src/sys/logging/handler/impls/nested/xmlviewer.c:443 > [0]PETSC ERROR: #7 PetscLogHandlerView_Nested() at > 
/home/cklaij/ReFRESCO/trunk/build-libs/superbuild/petsc/src/src/sys/logging/handler/impls/nested/lognested.c:405 > [0]PETSC ERROR: #8 PetscLogHandlerView() at > /home/cklaij/ReFRESCO/trunk/build-libs/superbuild/petsc/src/src/sys/logging/handler/interface/loghandler.c:342 > [0]PETSC ERROR: #9 PetscLogView() at > /home/cklaij/ReFRESCO/trunk/build-libs/superbuild/petsc/src/src/sys/logging/plog.c:2040 > [0]PETSC ERROR: #10 > /home/cklaij/ReFRESCO/trunk/Code/src/petsc_include_impl.F90:130 > > ________________________________________ > [cid:ii_19681617e7812ff9cfc1] > dr. ir. Christiaan Klaij > | senior researcher | Research & Development | CFD Development > T +31 317 49 33 44 | C.Klaij at marin.nl > > > | > https://urldefense.us/v3/__http://www.marin.nl__;!!G_uCfscf7eWS!fFq1daAFFpKhBA_NWU3sd2QJe_S44rklqeRi0TB57XI0nQsh9jgy8iw3JNGpBbd21zqvO3QlGTLa7kjgJooxJhg$ > < > https://urldefense.us/v3/__https://www.marin.nl/__;!!G_uCfscf7eWS!fFq1daAFFpKhBA_NWU3sd2QJe_S44rklqeRi0TB57XI0nQsh9jgy8iw3JNGpBbd21zqvO3QlGTLa7kjgpOXjQSM$ > > > [Facebook]< > https://urldefense.us/v3/__https://www.facebook.com/marin.wageningen__;!!G_uCfscf7eWS!fFq1daAFFpKhBA_NWU3sd2QJe_S44rklqeRi0TB57XI0nQsh9jgy8iw3JNGpBbd21zqvO3QlGTLa7kjg_3Zt0Pw$ > > > [LinkedIn]< > https://urldefense.us/v3/__https://www.linkedin.com/company/marin__;!!G_uCfscf7eWS!fFq1daAFFpKhBA_NWU3sd2QJe_S44rklqeRi0TB57XI0nQsh9jgy8iw3JNGpBbd21zqvO3QlGTLa7kjgcpgQdSE$ > > > [YouTube]< > https://urldefense.us/v3/__https://www.youtube.com/marinmultimedia__;!!G_uCfscf7eWS!fFq1daAFFpKhBA_NWU3sd2QJe_S44rklqeRi0TB57XI0nQsh9jgy8iw3JNGpBbd21zqvO3QlGTLa7kjgn8qP6_I$ > > > > From: Klaij, Christiaan C.Klaij at marin.nl>> > Sent: Monday, April 28, 2025 3:53 PM > To: Matthew Knepley > Cc: Junchao Zhang; petsc-users at mcs.anl.gov >>; Isaac, > Toby > Subject: Re: [petsc-users] problem with nested logging > > Bisecting would be quite hard, it's not just the petsc version that > changed, also other libs, compilers, even os components. > > Chris > > ________________________________________ > From: Matthew Knepley knepley at gmail.com>> > Sent: Monday, April 28, 2025 3:06 PM > To: Klaij, Christiaan > Cc: Junchao Zhang; petsc-users at mcs.anl.gov >>; Isaac, > Toby > Subject: Re: [petsc-users] problem with nested logging > > You don't often get email from knepley at gmail.com >>. Learn why this is > important< > https://urldefense.us/v3/__https://aka.ms/LearnAboutSenderIdentification__;!!G_uCfscf7eWS!fFq1daAFFpKhBA_NWU3sd2QJe_S44rklqeRi0TB57XI0nQsh9jgy8iw3JNGpBbd21zqvO3QlGTLa7kjgcKEJXRA$ > > > On Mon, Apr 28, 2025 at 8:45?AM Klaij, Christiaan via petsc-users < > petsc-users at mcs.anl.gov petsc-users at mcs.anl.gov> petsc-users at mcs.anl.gov petsc-users at mcs.anl.gov>>> wrote: > I've tried adding a nested log viewer to src/snes/tutorials/ex70.c, > but it does not replicate the problem and works fine. > > Perhaps it is related to fortran, since the manualpage of > PetscLogNestedBegin says "No fortran support" (why? we've been > using it in fortran ever since). > > Therefore I've tried adding it to src/snes/ex5f90.F90 and that > also works fine. It seems I cannot replicate the problem in a > small example, unfortunately. > > We cannot replicate it here. Is there a chance you could bisect to see > what change is responsible? 
> > Thanks, > > Matt > > Chris > > ________________________________________ > From: Junchao Zhang junchao.zhang at gmail.com> junchao.zhang at gmail.com>> junchao.zhang at gmail.com> junchao.zhang at gmail.com>>>> > Sent: Saturday, April 26, 2025 3:51 PM > To: Klaij, Christiaan > Cc: petsc-users at mcs.anl.gov petsc-users at mcs.anl.gov> petsc-users at mcs.anl.gov petsc-users at mcs.anl.gov>>; Isaac, Toby > Subject: Re: [petsc-users] problem with nested logging > > You don't often get email from junchao.zhang at gmail.com junchao.zhang at gmail.com> junchao.zhang at gmail.com>> junchao.zhang at gmail.com> junchao.zhang at gmail.com>>>. Learn why this is important< > https://urldefense.us/v3/__https://aka.ms/LearnAboutSenderIdentification__;!!G_uCfscf7eWS!cmLENZvO_Uydoa8ciUmsyX-F-QiJt9a2ZfQRUvQnRibGm7VE6PED7S_BDsUgjOzvPZIJyiIoJ8bLJk6gVgt1dmE$ > > > Toby (Cc'ed) might know it. Or could you provide an example? > > --Junchao Zhang > > > On Fri, Apr 25, 2025 at 3:31?AM Klaij, Christiaan via petsc-users < > petsc-users at mcs.anl.gov petsc-users at mcs.anl.gov> petsc-users at mcs.anl.gov petsc-users at mcs.anl.gov>> petsc-users at mcs.anl.gov petsc-users at mcs.anl.gov> petsc-users at mcs.anl.gov petsc-users at mcs.anl.gov>>>> wrote: > We recently upgraded from 3.19.4 to 3.22.4 but face the problem below with > the nested logging. Any ideas? > > Chris > > > [1]PETSC ERROR: --------------------- Error Message > -------------------------------------------------------------- > [1]PETSC ERROR: General MPI error > [1]PETSC ERROR: MPI error 1 MPI_ERR_BUFFER: invalid buffer pointer > [1]PETSC ERROR: See > https://urldefense.us/v3/__https://petsc.org/release/faq/__;!!G_uCfscf7eWS!cmLENZvO_Uydoa8ciUmsyX-F-QiJt9a2ZfQRUvQnRibGm7VE6PED7S_BDsUgjOzvPZIJyiIoJ8bLJk6gIT68pbk$ > < > https://urldefense.us/v3/__https://petsc.org/release/faq/__;!!G_uCfscf7eWS!biVlk6PKXoJvq5oVlmWdVJfW9tXv-JlwuWr3zg3jI5u1_jo8rvtZpEYnHO5RjdBqQEoqpqlJ3nusrFGM3UaqHzc$> > for trouble shooting. 
> [1]PETSC ERROR: Petsc Release Version 3.22.4, Mar 01, 2025
> [1]PETSC ERROR: refresco with 2 MPI process(es) and PETSC_ARCH on marclus3login2 by jwindt Fri Apr 25 08:52:30 2025
> [1]PETSC ERROR: Configure options: --prefix=/home/jwindt/cmake_builds/refresco/install-libs-gnu --with-mpi-dir=/cm/shared/apps/openmpi/gcc/4.0.2 --with-x=0 --with-mpe=0 --with-debugging=0 --download-superlu_dist=https://updates.marin.nl/refresco/libs/superlu_dist-8.1.2.tar.gz --with-blaslapack-dir=/cm/shared/apps/intel/oneapi/mkl/2021.4.0 --download-parmetis=https://updates.marin.nl/refresco/libs/parmetis-4.0.3-p9.tar.gz --download-metis=https://updates.marin.nl/refresco/libs/metis-5.1.0-p11.tar.gz --with-packages-build-dir=/home/jwindt/cmake_builds/refresco/build-libs-gnu/superbuild --with-ssl=0 --with-shared-libraries=1 CFLAGS="-std=gnu11 -Wall -funroll-all-loops -O3 -DNDEBUG" CXXFLAGS="-std=gnu++14 -Wall -funroll-all-loops -O3 -DNDEBUG" COPTFLAGS="-std=gnu11 -Wall -funroll-all-loops -O3 -DNDEBUG" CXXOPTFLAGS="-std=gnu++14 -Wall -funroll-all-loops -O3 -DNDEBUG" FCFLAGS="-Wall -funroll-all-loops -ffree-line-length-0 -Wno-maybe-uninitialized -Wno-target-lifetime -Wno-unused-function -O3 -DNDEBUG" F90FLAGS="-Wall -funroll-all-loops -ffree-line-length-0 -Wno-maybe-uninitialized -Wno-target-lifetime -Wno-unused-function -O3 -DNDEBUG" FOPTFLAGS="-Wall -funroll-all-loops -ffree-line-length-0 -Wno-maybe-uninitialized -Wno-target-lifetime -Wno-unused-function -O3 -DNDEBUG"
> [1]PETSC ERROR: #1 PetscLogNestedTreePrint() at /home/jwindt/cmake_builds/refresco/build-libs-gnu/superbuild/petsc/src/src/sys/logging/handler/impls/nested/xmlviewer.c:330
> [1]PETSC ERROR: #2 PetscLogNestedTreePrint() at /home/jwindt/cmake_builds/refresco/build-libs-gnu/superbuild/petsc/src/src/sys/logging/handler/impls/nested/xmlviewer.c:384
> [1]PETSC ERROR: #3 PetscLogNestedTreePrint() at /home/jwindt/cmake_builds/refresco/build-libs-gnu/superbuild/petsc/src/src/sys/logging/handler/impls/nested/xmlviewer.c:384
> [1]PETSC ERROR: #4 PetscLogNestedTreePrintTop() at /home/jwindt/cmake_builds/refresco/build-libs-gnu/superbuild/petsc/src/src/sys/logging/handler/impls/nested/xmlviewer.c:420
> [1]PETSC ERROR: #5 PetscLogHandlerView_Nested_XML() at /home/jwindt/cmake_builds/refresco/build-libs-gnu/superbuild/petsc/src/src/sys/logging/handler/impls/nested/xmlviewer.c:443
> [1]PETSC ERROR: #6 PetscLogHandlerView_Nested() at
/home/jwindt/cmake_builds/refresco/build-libs-gnu/superbuild/petsc/src/src/sys/logging/handler/impls/nested/lognested.c:405
> [1]PETSC ERROR: #7 PetscLogHandlerView() at
> /home/jwindt/cmake_builds/refresco/build-libs-gnu/superbuild/petsc/src/src/sys/logging/handler/interface/loghandler.c:342
> [1]PETSC ERROR: #8 PetscLogView() at
> /home/jwindt/cmake_builds/refresco/build-libs-gnu/superbuild/petsc/src/src/sys/logging/plog.c:2040
> --------------------------------------------------------------------------
> MPI_ABORT was invoked on rank 1 in communicator MPI_COMM_WORLD
> with errorcode 98.
>
> NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
> You may or may not see output from other processes, depending on
> exactly when Open MPI kills them.
> --------------------------------------------------------------------------
> dr. ir. Christiaan Klaij | senior researcher | Research & Development | CFD Development
> T +31 317 49 33 44 | C.Klaij at marin.nl | http://www.marin.nl
>
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> -- Norbert Wiener
>
> https://www.cse.buffalo.edu/~knepley/

From hexiaofeng at buaa.edu.cn Tue Apr 29 21:57:48 2025
From: hexiaofeng at buaa.edu.cn (hexioafeng)
Date: Wed, 30 Apr 2025 10:57:48 +0800
Subject: [petsc-users] build petsc with 64 bit indices
Message-ID: 

Dear PETSc developers,

I use PETSc and SLEPc to solve generalized eigenvalue problems.
When solving an eigenvalue problem on an interval with matrix size about 5 million, I got the error message: "product of two integer xx xx overflow, you must ./configure PETSc with --with-64-bit-indices for the case you are running".

I use some prebuilt third-party packages when building PETSc, namely OpenBLAS, METIS, ParMETIS and SCALAPACK. I wonder whether I should also use 64-bit prebuilt packages when configuring PETSc with the --with-64-bit-indices flag? How about MUMPS and MPI? Do I have to also use the 64-bit version?

Looking forward to your reply, thanks.

Xiaofeng

From balay.anl at fastmail.org Tue Apr 29 23:27:36 2025
From: balay.anl at fastmail.org (Satish Balay)
Date: Tue, 29 Apr 2025 23:27:36 -0500 (CDT)
Subject: [petsc-users] build petsc with 64 bit indices
In-Reply-To: 
References: 
Message-ID: 

On Wed, 30 Apr 2025, hexioafeng via petsc-users wrote:

> Dear PETSc developers,
>
> I use PETSc and SLEPc to solve generalized eigenvalue problems. When solving an eigenvalue problem on an interval with matrix size about 5 million, I got the error message: "product of two integer xx xx overflow, you must ./configure PETSc with --with-64-bit-indices for the case you are running".
>
> I use some prebuilt third-party packages when building PETSc, namely OpenBLAS, METIS, ParMETIS and SCALAPACK. I wonder whether I should also use 64-bit prebuilt packages when configuring PETSc with the --with-64-bit-indices flag? How about MUMPS and MPI? Do I have to also use the 64-bit version?

Hm - metis/parmetis would need a rebuild [with -DMETIS_USE_LONGINDEX=1 option]. Others should be unaffected.

You could use petsc configure to build pkgs to ensure compatibility i.e. use --download-metis --download-parmetis etc.

Note - there is a difference between --with-64-bit-indices (PetscInt) and --with-64-bit-blas-indices (PetscBlasInt) [and ILP64 - aka fortran '-i8']

Satish

----

$ grep defaultIndexSize config/BuildSystem/config/packages/*.py
config/BuildSystem/config/packages/hypre.py:    if self.defaultIndexSize == 64:
config/BuildSystem/config/packages/metis.py:    if self.defaultIndexSize == 64:
config/BuildSystem/config/packages/mkl_cpardiso.py:    elif self.blasLapack.has64bitindices and not self.defaultIndexSize == 64:
config/BuildSystem/config/packages/mkl_cpardiso.py:    elif not self.blasLapack.has64bitindices and self.defaultIndexSize == 64:
config/BuildSystem/config/packages/mkl_pardiso.py:    elif self.blasLapack.has64bitindices and not self.defaultIndexSize == 64:
config/BuildSystem/config/packages/mkl_sparse_optimize.py:    if not self.blasLapack.mkl or (not self.blasLapack.has64bitindices and self.defaultIndexSize == 64):
config/BuildSystem/config/packages/mkl_sparse.py:    if not self.blasLapack.mkl or (not self.blasLapack.has64bitindices and self.defaultIndexSize == 64):
config/BuildSystem/config/packages/SuperLU_DIST.py:    if self.defaultIndexSize == 64:

>
> Looking forward to your reply, thanks.
>
> Xiaofeng

From pierre at joliv.et Wed Apr 30 00:08:14 2025
From: pierre at joliv.et (Pierre Jolivet)
Date: Wed, 30 Apr 2025 07:08:14 +0200
Subject: [petsc-users] build petsc with 64 bit indices
In-Reply-To: 
References: 
Message-ID: <38C648F9-B364-4574-8E8B-A6C26C4270CB@joliv.et>

Could you please provide the full back trace?
Depending on your set of options, it may be as simple as switching -bv_type to make your code run (if you are using svec, this would explain such an error but could be circumvented with something else, like mat).
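To make the -bv_type suggestion concrete: the BV (basis vectors) object can be switched either from the command line with -bv_type mat, picked up when the solver's options are processed, or programmatically. A rough sketch, assuming the usual SLEPc EPS/BV API and modern PetscCall-style error checking; the name eps stands for an already-created EPS object and is only illustrative:

  BV bv;
  PetscCall(EPSGetBV(eps, &bv));   /* fetch the basis-vectors object owned by the solver */
  PetscCall(BVSetType(bv, BVMAT)); /* store the basis as columns of a dense Mat instead of one contiguous Vec (BVSVEC) */

This has to happen before EPSSetUp()/EPSSolve(). With BVSVEC the whole basis lives in one contiguous Vec, and the product of the number of columns and the local length is what appears to trip the 32-bit PetscInt check in the backtrace below; as the rest of the thread shows, in 3.14.3 the BVMAT path runs into a similar check in MatSeqDenseSetPreallocation, which is why updating PETSc/SLEPc is the other suggestion.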
Thanks,
Pierre

> On 30 Apr 2025, at 6:27 AM, Satish Balay wrote:
>
> Hm - metis/parmetis would need a rebuild [with -DMETIS_USE_LONGINDEX=1 option]. Others should be unaffected.
>
> You could use petsc configure to build pkgs to ensure compatibility i.e. use --download-metis --download-parmetis etc.
>
> Note - there is a difference between --with-64-bit-indices (PetscInt) and --with-64-bit-blas-indices (PetscBlasInt) [and ILP64 - aka fortran '-i8']
>
> Satish

From hexiaofeng at buaa.edu.cn Wed Apr 30 00:57:37 2025
From: hexiaofeng at buaa.edu.cn (hexioafeng)
Date: Wed, 30 Apr 2025 13:57:37 +0800
Subject: [petsc-users] build petsc with 64 bit indices
In-Reply-To: <38C648F9-B364-4574-8E8B-A6C26C4270CB@joliv.et>
References: <38C648F9-B364-4574-8E8B-A6C26C4270CB@joliv.et>
Message-ID: <306967E1-F7B6-46C7-9631-96D303DD6D16@buaa.edu.cn>

Dear Sir,

Thank you for your kind reply. Below is the full backtrace:

[0]PETSC ERROR: --------------------- Error Message --------------------------------------------------------------
[0]PETSC ERROR: No support for this operation for this object type
[0]PETSC ERROR: Product of two integer 1925 4633044 overflow, you must ./configure PETSc with --with-64-bit-indices for the case you are running
[0]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting.
[0]PETSC ERROR: Petsc Release Version 3.14.3, Jan 09, 2021
[0]PETSC ERROR: Unknown Name on a named DESKTOP-74R6I4M by ibe Sun Apr 27 16:10:08 2025
[0]PETSC ERROR: Configure options --with-cc=mpicc --with-cxx=mpicxx --with-fc=mpifort --with-scalar-type=real --with-precision=double --prefix=/home/sd/petsc/petsc-3.14.3/..//rd CXXFLAGS=-fno-stack-protector CFLAGS=-fno-stack-protector FFLAGS=" -O2 -fallow-argument-mismatch -fallow-invalid-boz" --with-debugging=0 COPTFLAGS="-O3 -mtune=generic" CXXOPTFLAGS="-O3 -mtune=generic" FOPTFLAGS="-O3 -mtune=generic" --known-64-bit-blas-indices=0 --with-cxx-dialect=C++11 --with-ssl=0 --with-x=0 --with-fortran-bindings=0 --with-cudac=0 --with-shared-libraries=0 --with-mpi-lib=/c/Windows/System32/msmpi.dll --with-mpi-include=/home/sd/petsc/thirdparty/MPI/Include --with-mpiexec="/C/Program Files/Microsoft MPI/Bin/mpiexec" --with-blaslapack-lib="-L/home/sd/petsc/thirdparty/openblas/lib -llibopenblas -lopenblas" --with-metis-include=/home/sd/petsc/scalapack-mumps-dll/metis --with-metis-lib=/home/sd/petsc/scalapack-mumps-dll/metis/libmetis.dll --with-parmetis-include=/home/sd/petsc/scalapack-mumps-dll/metis --with-parmetis-lib=/home/sd/petsc/scalapack-mumps-dll/metis/libparmetis.dll --download-slepc --with-scalapack-lib=/home/sd/petsc/scalapack-mumps-dll/scalapack/libscalapack.dll --download-hypre --download-mumps --download-hypre-configure-arguments="--build=x86_64-linux-gnu --host=x86_64-linux-gnu" PETSC_ARCH=rd
[0]PETSC ERROR: #1 PetscIntMultError() line 2309 in C:/msys64/home/sd/petsc/rd/include/petscsys.h
[0]PETSC ERROR: #2 BVCreate_Svec() line 452 in C:/msys64/home/sd/petsc/petsc-3.14.3/rd/externalpackages/git.slepc/src/sys/classes/bv/impls/svec/svec.c
[0]PETSC ERROR: #3 BVSetSizesFromVec() line 186 in C:/msys64/home/sd/petsc/petsc-3.14.3/rd/externalpackages/git.slepc/src/sys/classes/bv/interface/bvbasic.c
[0]PETSC ERROR: #4 EPSAllocateSolution() line 687 in C:/msys64/home/sd/petsc/petsc-3.14.3/rd/externalpackages/git.slepc/src/eps/interface/epssetup.c
[0]PETSC ERROR: #5 EPSSetUp_KrylovSchur() line 159 in C:/msys64/home/sd/petsc/petsc-3.14.3/rd/externalpackages/git.slepc/src/eps/impls/krylov/krylovschur/krylovschur.c
[0]PETSC ERROR: #6 EPSSetUp() line 315 in C:/msys64/home/sd/petsc/petsc-3.14.3/rd/externalpackages/git.slepc/src/eps/interface/epssetup.c

Sincerely,

Xiaofeng

> On Apr 30, 2025, at 13:08, Pierre Jolivet wrote:
>
> Could you please provide the full back trace?
> Depending on your set of options, it may be as simple as switching -bv_type to make your code run (if you are using svec, this would explain such an error but could be circumvented with something else, like mat).
>
> Thanks,
> Pierre
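Two notes on the backtrace above. First, the arithmetic: 1925 * 4633044 = 8,918,609,700, which is larger than the 32-bit PetscInt maximum of 2,147,483,647, so PetscIntMultError() fires; the two factors are presumably the number of basis vectors and the local problem size. Second, if rebuilding with 64-bit indices turns out to be necessary, Satish's advice amounts to a configure invocation along these lines. This is only a sketch: the package list and the Linux-style command are assumptions, not the original MSYS2/Windows build line.

./configure --with-64-bit-indices \
    --download-metis --download-parmetis \
    --download-mumps --download-scalapack \
    --download-hypre --download-slepc

The prebuilt OpenBLAS can stay as it is, since --with-64-bit-indices changes PetscInt but not PetscBlasInt (that would be --with-64-bit-blas-indices).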
From pierre at joliv.et Wed Apr 30 01:01:24 2025
From: pierre at joliv.et (Pierre Jolivet)
Date: Wed, 30 Apr 2025 08:01:24 +0200
Subject: [petsc-users] build petsc with 64 bit indices
In-Reply-To: <306967E1-F7B6-46C7-9631-96D303DD6D16@buaa.edu.cn>
References: <306967E1-F7B6-46C7-9631-96D303DD6D16@buaa.edu.cn>
Message-ID: 

Just use -bv_type mat and the error will go away.
Note: you are highly advised to update to a new PETSc/SLEPc version.

Thanks,
Pierre

> On 30 Apr 2025, at 7:58 AM, hexioafeng wrote:
>
> Dear Sir,
>
> Thank you for your kind reply. Below is the full backtrace:
>
> [0]PETSC ERROR: Product of two integer 1925 4633044 overflow, you must ./configure PETSc with --with-64-bit-indices for the case you are running
> [0]PETSC ERROR: #2 BVCreate_Svec() line 452 in C:/msys64/home/sd/petsc/petsc-3.14.3/rd/externalpackages/git.slepc/src/sys/classes/bv/impls/svec/svec.c

From pierre at joliv.et Wed Apr 30 02:35:34 2025
From: pierre at joliv.et (Pierre Jolivet)
Date: Wed, 30 Apr 2025 09:35:34 +0200
Subject: [petsc-users] build petsc with 64 bit indices
In-Reply-To: <482AFA55-57B5-4A03-9679-F8FD54AA0362@buaa.edu.cn>
References: <482AFA55-57B5-4A03-9679-F8FD54AA0362@buaa.edu.cn>
Message-ID: <82055D87-B296-40A0-9B9F-029EEC7FF569@joliv.et>

> On 30 Apr 2025, at 9:31 AM, hexioafeng wrote:
>
> Dear sir,
>
> I ran the case with the bv type mat again, and got a similar error:
>
> [0]PETSC ERROR: --------------------- Error Message --------------------------------------------------------------
> [0]PETSC ERROR: No support for this operation for this object type
> [0]PETSC ERROR: Product of two integer 4633044 1925 overflow, you must ./configure PETSc with --with-64-bit-indices for the case you are running
> [0]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting.
> [0]PETSC ERROR: Petsc Release Version 3.14.3, Jan 09, 2021
> [0]PETSC ERROR: Unknown Name on a named DESKTOP-74R6I4M by ibe Wed Apr 30 15:08:43 2025
> [0]PETSC ERROR: Configure options --with-cc=mpicc --with-cxx=mpicxx --with-fc=mpifort --with-scalar-type=real --with-precision=double --prefix=/home/sd/petsc/petsc-3.14.3/..//rd CXXFLAGS=-fno-stack-protector CFLAGS=-fno-stack-protector FFLAGS=" -O2 -fallow-argument-mismatch -fallow-invalid-boz" --with-debugging=0 COPTFLAGS="-O3 -mtune=generic" CXXOPTFLAGS="-O3 -mtune=generic" FOPTFLAGS="-O3 -mtune=generic" --known-64-bit-blas-indices=0 --with-cxx-dialect=C++11 --with-ssl=0 --with-x=0 --with-fortran-bindings=0 --with-cudac=0 --with-shared-libraries=0 --with-mpi-lib=/c/Windows/System32/msmpi.dll --with-mpi-include=/home/sd/petsc/thirdparty/MPI/Include --with-mpiexec="/C/Program Files/Microsoft MPI/Bin/mpiexec" --with-blaslapack-lib="-L/home/sd/petsc/thirdparty/openblas/lib -llibopenblas -lopenblas" --with-metis-include=/home/sd/petsc/scalapack-mumps-dll/metis --with-metis-lib=/home/sd/petsc/scalapack-mumps-dll/metis/libmetis.dll --with-parmetis-include=/home/sd/petsc/scalapack-mumps-dll/metis --with-parmetis-lib=/home/sd/petsc/scalapack-mumps-dll/metis/libparmetis.dll --download-slepc --with-scalapack-lib=/home/sd/petsc/scalapack-mumps-dll/scalapack/libscalapack.dll --download-hypre --download-mumps --download-hypre-configure-arguments="--build=x86_64-linux-gnu --host=x86_64-linux-gnu" PETSC_ARCH=rd
> [0]PETSC ERROR: #1 PetscIntMultError() line 2309 in C:/msys64/home/sd/petsc/petsc-3.14.3/include/petscsys.h
> [0]PETSC ERROR: #2 MatSeqDenseSetPreallocation_SeqDense() line 2785 in C:/msys64/home/sd/petsc/petsc-3.14.3/src/mat/impls/dense/seq/dense.c
> [0]PETSC ERROR: #3 MatSeqDenseSetPreallocation() line 2767 in C:/msys64/home/sd/petsc/petsc-3.14.3/src/mat/impls/dense/seq/dense.c
> [0]PETSC ERROR: #4 MatCreateDense() line 2416 in C:/msys64/home/sd/petsc/petsc-3.14.3/src/mat/impls/dense/mpi/mpidense.c
> [0]PETSC ERROR: #5 BVCreate_Mat() line 455 in C:/msys64/home/sd/petsc/petsc-3.14.3/rd/externalpackages/git.slepc/src/sys/classes/bv/impls/mat/bvmat.c
> [0]PETSC ERROR: #6 BVSetSizesFromVec() line 186 in C:/msys64/home/sd/petsc/petsc-3.14.3/rd/externalpackages/git.slepc/src/sys/classes/bv/interface/bvbasic.c
> [0]PETSC ERROR: #7 EPSAllocateSolution() line 687 in C:/msys64/home/sd/petsc/petsc-3.14.3/rd/externalpackages/git.slepc/src/eps/interface/epssetup.c
> [0]PETSC ERROR: #8 EPSSetUp_KrylovSchur() line 159 in C:/msys64/home/sd/petsc/petsc-3.14.3/rd/externalpackages/git.slepc/src/eps/impls/krylov/krylovschur/krylovschur.c
> [0]PETSC ERROR: #9 EPSSetUp() line 315 in C:/msys64/home/sd/petsc/petsc-3.14.3/rd/externalpackages/git.slepc/src/eps/interface/epssetup.c
>
> Does it mean that I have to run with the 64-bit PETSc/SLEPc?

Please always keep the list in copy.
I'd rather update PETSc/SLEPc than just reconfigure with 64-bit PetscInt.
This (preallocation error) has been fixed in PETSc.

Thanks,
Pierre

> Thanks.
> Xiaofeng
>
>> On Apr 30, 2025, at 14:10, hexioafeng wrote:
>>
>> Thank you, sir. I will try it.
>>
>> Sincerely,
>> Xiaofeng

From hexiaofeng at buaa.edu.cn Wed Apr 30 03:06:05 2025
From: hexiaofeng at buaa.edu.cn (hexioafeng)
Date: Wed, 30 Apr 2025 16:06:05 +0800
Subject: [petsc-users] build petsc with 64 bit indices
In-Reply-To: <82055D87-B296-40A0-9B9F-029EEC7FF569@joliv.et>
References: <482AFA55-57B5-4A03-9679-F8FD54AA0362@buaa.edu.cn> <82055D87-B296-40A0-9B9F-029EEC7FF569@joliv.et>
Message-ID: <0FB280A4-5CB4-426A-BB8F-CDED98F55648@buaa.edu.cn>

Thanks, I'll try to update PETSc or compile the 64-bit version.

Best regards,
Xiaofeng

> On Apr 30, 2025, at 15:35, Pierre Jolivet wrote:
>
> I'd rather update PETSc/SLEPc than just reconfigure with 64-bit PetscInt.
> This (preallocation error) has been fixed in PETSc.

From hexiaofeng at buaa.edu.cn Wed Apr 30 03:11:38 2025
From: hexiaofeng at buaa.edu.cn (hexioafeng)
Date: Wed, 30 Apr 2025 16:11:38 +0800
Subject: [petsc-users] build petsc with 64 bit indices
In-Reply-To: <82055D87-B296-40A0-9B9F-029EEC7FF569@joliv.et>
References: <482AFA55-57B5-4A03-9679-F8FD54AA0362@buaa.edu.cn> <82055D87-B296-40A0-9B9F-029EEC7FF569@joliv.et>
Message-ID: 

I found that the preallocation check for dense matrix was removed in v3.16.3. I will try to update to this version first.

Thanks,
Xiaofeng

> On Apr 30, 2025, at 15:35, Pierre Jolivet wrote:
>
> This (preallocation error) has been fixed in PETSc.
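If the 64-bit rebuild route is taken, a quick way to confirm which index size a given PETSc build uses is a small check like the one below. This is only a sketch; PETSC_USE_64BIT_INDICES is the macro set by configuring with --with-64-bit-indices, and the rest of the program is placeholder scaffolding.

#include <petscsys.h>

int main(int argc, char **argv)
{
  PetscFunctionBeginUser;
  PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));
#if defined(PETSC_USE_64BIT_INDICES)
  PetscCall(PetscPrintf(PETSC_COMM_WORLD, "PetscInt is 64-bit (%d bytes)\n", (int)sizeof(PetscInt)));
#else
  PetscCall(PetscPrintf(PETSC_COMM_WORLD, "PetscInt is 32-bit (%d bytes)\n", (int)sizeof(PetscInt)));
#endif
  PetscCall(PetscFinalize());
  return 0;
}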