<div dir="ltr"><div dir="ltr">On Tue, Feb 14, 2023 at 8:36 AM Nicolas Barnafi <<a href="mailto:nbarnafi@cmm.uchile.cl">nbarnafi@cmm.uchile.cl</a>> wrote:<br></div><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<div>
Hello Matt, <br>
<br>
After some discussions elsewhere (thanks @LMitchell!), we found out
that the problem is that the fields are set up with PCFieldSplitSetIS,
without an attached DM, and that configuration does not support this
kind of field nesting.
<br>
<br>
I would like to add this feature: during the setup of the
preconditioner, when there is no DM, an additional routine should
read the -pc_fieldsplit_%d_fields options and set the corresponding
fields on the sub-PCs, in some order.<br>
<br>
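A PETSc-free sketch of the option parsing this routine would need, in plain C (the helper name `parse_field_list` and the fixed buffer size are made up for illustration), turning an option value like "0,2" into an integer field list:

```c
#include <stdlib.h>
#include <string.h>

/* Parse a comma-separated field list such as "0,2" (the value of a
 * hypothetical -pc_fieldsplit_0_fields option) into an integer array.
 * Returns the number of fields parsed, at most `cap`. */
static int parse_field_list(const char *value, int *fields, int cap)
{
    char buf[256];
    int  n = 0;
    strncpy(buf, value, sizeof(buf) - 1);
    buf[sizeof(buf) - 1] = '\0';
    for (char *tok = strtok(buf, ","); tok && n < cap; tok = strtok(NULL, ","))
        fields[n++] = atoi(tok);
    return n;
}
```

The real implementation would of course use PetscOptionsGetString() and PETSc error handling; this only illustrates the shape of the parsing step.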
My plan is to do this at the PCSetUp_FieldSplit level. The idea
is that whenever named IS fields such as 'a' and 'b' are set, it
should be possible to group them with '-pc_fieldsplit_0_fields
a,b', or at least to support this with numbered fields. <br>
<br>
How do you see it?<br></div></blockquote><div><br></div><div>Just to clarify, if you call SetIS() 3 times, and then give</div><div><br></div><div> -pc_fieldsplit_0_fields 0,2</div><div><br></div><div>then we should reduce the number of fields to two by calling ISConcatenate() on the first and last ISes?</div><div><br></div><div>I think this should not be hard. It will work exactly as it does in the DM case, except the ISes will come from</div><div>the PC, not the DM. One complication is that you will have to hold the new ISes until the end, and then set them.</div><div><br></div><div> Thanks,</div><div><br></div><div> Matt</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div>
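The reduction described above boils down to appending the index arrays of the selected ISes one after another, which is what ISConcatenate() does in PETSc. A PETSc-free sketch of that logic in plain C (the helper name `concat_indices` is made up for illustration):

```c
#include <stdlib.h>
#include <string.h>

/* Merge two "index sets", given as plain integer arrays, into one
 * newly allocated array holding a's indices followed by b's, the way
 * ISConcatenate() merges ISes. Caller frees the result. */
static int *concat_indices(const int *a, int na, const int *b, int nb, int *nout)
{
    int *out = malloc((size_t)(na + nb) * sizeof(*out));
    if (!out) { *nout = 0; return NULL; }
    memcpy(out, a, (size_t)na * sizeof(*out));
    memcpy(out + na, b, (size_t)nb * sizeof(*out));
    *nout = na + nb;
    return out;
}
```

So grouping splits 0 and 2 would concatenate their index arrays into one new IS, reducing the split count from three to two.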
Best, <br>
NB<br>
<br>
<div>On 06/02/23 17:57, Matthew Knepley
wrote:<br>
</div>
<blockquote type="cite">
<div dir="ltr">
<div dir="ltr">On Mon, Feb 6, 2023 at 11:45 AM Nicolas Barnafi
<<a href="mailto:nbarnafi@cmm.uchile.cl" target="_blank">nbarnafi@cmm.uchile.cl</a>>
wrote:<br>
</div>
<div class="gmail_quote">
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<div> Thank you Matt, <br>
<br>
Again, at the bottom of this message you will find the
-info output. I don't see any output related to the
fields.<br>
</div>
</blockquote>
<div><br>
</div>
<div>If the splits were done automatically, you would see an
info message from here:</div>
<div><br>
</div>
<div> <a href="https://gitlab.com/petsc/petsc/-/blob/main/src/ksp/pc/impls/fieldsplit/fieldsplit.c#L1595" target="_blank">https://gitlab.com/petsc/petsc/-/blob/main/src/ksp/pc/impls/fieldsplit/fieldsplit.c#L1595</a></div>
<div><br>
</div>
<div>Thus it must be set up here</div>
<div><br>
</div>
<div> <a href="https://gitlab.com/petsc/petsc/-/blob/main/src/ksp/pc/impls/fieldsplit/fieldsplit.c#L380" target="_blank">https://gitlab.com/petsc/petsc/-/blob/main/src/ksp/pc/impls/fieldsplit/fieldsplit.c#L380</a></div>
<div><br>
</div>
<div>There are info statements there, but you do not see them. I do
not see a way around using a small example</div>
<div>to understand how you are setting up the system, since it
is working as expected in the PETSc examples.</div>
<div><br>
</div>
<div> Thanks,</div>
<div><br>
</div>
<div> Matt</div>
<div> </div>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<div> Best<br>
<br>
<br>
------ -info<br>
<br>
[0] <sys> PetscDetermineInitialFPTrap(): Floating
point trapping is on by default 13<br>
[0] <sys>
PetscDeviceInitializeTypeFromOptions_Private():
PetscDeviceType host available, initializing<br>
[0] <sys>
PetscDeviceInitializeTypeFromOptions_Private():
PetscDevice host initialized, default device id 0, view
FALSE, init type lazy<br>
[0] <sys>
PetscDeviceInitializeTypeFromOptions_Private():
PetscDeviceType cuda not available<br>
[0] <sys>
PetscDeviceInitializeTypeFromOptions_Private():
PetscDeviceType hip not available<br>
[0] <sys>
PetscDeviceInitializeTypeFromOptions_Private():
PetscDeviceType sycl not available<br>
[0] <sys> PetscInitialize_Common(): PETSc
successfully started: number of processors = 1<br>
[0] <sys> PetscGetHostName(): Rejecting domainname,
likely is NIS nico-santech.(none)<br>
[0] <sys> PetscInitialize_Common(): Running on
machine: nico-santech<br>
[0] <sys> SlepcInitialize(): SLEPc successfully
started<br>
[0] <sys> PetscCommDuplicate(): Duplicating a
communicator 94770066936960 94770087780768 max tags =
2147483647<br>
[0] <sys> Petsc_OuterComm_Attr_Delete_Fn(): Removing
reference to PETSc communicator embedded in a user
MPI_Comm 94770087780768<br>
[0] <sys> Petsc_InnerComm_Attr_Delete_Fn(): User
MPI_Comm 94770066936960 is being unlinked from inner PETSc
comm 94770087780768<br>
[0] <sys> PetscCommDestroy(): Deleting PETSc
MPI_Comm 94770087780768<br>
[0] <sys> Petsc_Counter_Attr_Delete_Fn(): Deleting
counter data in an MPI_Comm 94770087780768<br>
[0] <sys> PetscCommDuplicate(): Duplicating a
communicator 94770066936960 94770087780768 max tags =
2147483647<br>
[0] <sys> Petsc_OuterComm_Attr_Delete_Fn(): Removing
reference to PETSc communicator embedded in a user
MPI_Comm 94770087780768<br>
[0] <sys> Petsc_InnerComm_Attr_Delete_Fn(): User
MPI_Comm 94770066936960 is being unlinked from inner PETSc
comm 94770087780768<br>
[0] <sys> PetscCommDestroy(): Deleting PETSc
MPI_Comm 94770087780768<br>
[0] <sys> Petsc_Counter_Attr_Delete_Fn(): Deleting
counter data in an MPI_Comm 94770087780768<br>
[0] <sys> PetscCommDuplicate(): Duplicating a
communicator 94770066936960 94770087780768 max tags =
2147483647<br>
[0] <sys> Petsc_OuterComm_Attr_Delete_Fn(): Removing
reference to PETSc communicator embedded in a user
MPI_Comm 94770087780768<br>
[0] <sys> Petsc_InnerComm_Attr_Delete_Fn(): User
MPI_Comm 94770066936960 is being unlinked from inner PETSc
comm 94770087780768<br>
[0] <sys> PetscCommDestroy(): Deleting PETSc
MPI_Comm 94770087780768<br>
[0] <sys> Petsc_Counter_Attr_Delete_Fn(): Deleting
counter data in an MPI_Comm 94770087780768<br>
[0] <sys> PetscCommDuplicate(): Duplicating a
communicator 94770066936960 94770087780768 max tags =
2147483647<br>
[0] <sys> Petsc_OuterComm_Attr_Delete_Fn(): Removing
reference to PETSc communicator embedded in a user
MPI_Comm 94770087780768<br>
[0] <sys> Petsc_InnerComm_Attr_Delete_Fn(): User
MPI_Comm 94770066936960 is being unlinked from inner PETSc
comm 94770087780768<br>
[0] <sys> PetscCommDestroy(): Deleting PETSc
MPI_Comm 94770087780768<br>
[0] <sys> Petsc_Counter_Attr_Delete_Fn(): Deleting
counter data in an MPI_Comm 94770087780768<br>
[0] <sys> PetscCommDuplicate(): Duplicating a
communicator 94770066936960 94770087780768 max tags =
2147483647<br>
[0] <sys> Petsc_OuterComm_Attr_Delete_Fn(): Removing
reference to PETSc communicator embedded in a user
MPI_Comm 94770087780768<br>
[0] <sys> Petsc_InnerComm_Attr_Delete_Fn(): User
MPI_Comm 94770066936960 is being unlinked from inner PETSc
comm 94770087780768<br>
[0] <sys> PetscCommDestroy(): Deleting PETSc
MPI_Comm 94770087780768<br>
[0] <sys> Petsc_Counter_Attr_Delete_Fn(): Deleting
counter data in an MPI_Comm 94770087780768<br>
[0] <sys> PetscCommDuplicate(): Duplicating a
communicator 94770066936960 94770087780768 max tags =
2147483647<br>
[0] <sys> PetscCommDuplicate(): Using internal PETSc
communicator 94770066936960 94770087780768<br>
[0] <mat> MatAssemblyEnd_SeqAIJ(): Matrix size: 1219
X 1219; storage space: 0 unneeded,26443 used<br>
[0] <mat> MatAssemblyEnd_SeqAIJ(): Number of mallocs
during MatSetValues() is 0<br>
[0] <mat> MatAssemblyEnd_SeqAIJ(): Maximum nonzeros
in any row is 150<br>
[0] <mat> MatCheckCompressedRow(): Found the ratio
(num_zerorows 0)/(num_localrows 1219) < 0.6. Do not use
CompressedRow routines.<br>
[0] <mat> MatSeqAIJCheckInode(): Found 1160 nodes
out of 1219 rows. Not using Inode routines<br>
[0] <sys> PetscCommDuplicate(): Using internal PETSc
communicator 94770066936960 94770087780768<br>
[0] <sys> PetscCommDuplicate(): Using internal PETSc
communicator 94770066936960 94770087780768<br>
[0] <sys> PetscCommDuplicate(): Using internal PETSc
communicator 94770066936960 94770087780768<br>
[0] <sys> PetscCommDuplicate(): Using internal PETSc
communicator 94770066936960 94770087780768<br>
[0] <sys> PetscCommDuplicate(): Using internal PETSc
communicator 94770066936960 94770087780768<br>
[0] <sys> PetscGetHostName(): Rejecting domainname,
likely is NIS nico-santech.(none)<br>
[0] <pc> PCSetUp(): Setting up PC for first time<br>
[0] <mat> MatAssemblyEnd_SeqAIJ(): Matrix size: 615
X 615; storage space: 0 unneeded,9213 used<br>
[0] <mat> MatAssemblyEnd_SeqAIJ(): Number of mallocs
during MatSetValues() is 0<br>
[0] <mat> MatAssemblyEnd_SeqAIJ(): Maximum nonzeros
in any row is 117<br>
[0] <mat> MatCheckCompressedRow(): Found the ratio
(num_zerorows 0)/(num_localrows 615) < 0.6. Do not use
CompressedRow routines.<br>
[0] <mat> MatSeqAIJCheckInode(): Found 561 nodes out
of 615 rows. Not using Inode routines<br>
[0] <sys> PetscCommDuplicate(): Duplicating a
communicator 94770066934048 94770110251424 max tags =
2147483647<br>
[0] <sys> Petsc_OuterComm_Attr_Delete_Fn(): Removing
reference to PETSc communicator embedded in a user
MPI_Comm 94770110251424<br>
[0] <sys> Petsc_InnerComm_Attr_Delete_Fn(): User
MPI_Comm 94770066934048 is being unlinked from inner PETSc
comm 94770110251424<br>
[0] <sys> PetscCommDestroy(): Deleting PETSc
MPI_Comm 94770110251424<br>
[0] <sys> Petsc_Counter_Attr_Delete_Fn(): Deleting
counter data in an MPI_Comm 94770110251424<br>
[0] <mat> MatAssemblyEnd_SeqAIJ(): Matrix size: 64 X
64; storage space: 0 unneeded,0 used<br>
[0] <mat> MatAssemblyEnd_SeqAIJ(): Number of mallocs
during MatSetValues() is 0<br>
[0] <mat> MatAssemblyEnd_SeqAIJ(): Maximum nonzeros
in any row is 0<br>
[0] <mat> MatCheckCompressedRow(): Found the ratio
(num_zerorows 64)/(num_localrows 64) > 0.6. Use
CompressedRow routines.<br>
[0] <mat> MatSeqAIJCheckInode(): Found 13 nodes of
64. Limit used: 5. Using Inode routines<br>
[0] <sys> PetscCommDuplicate(): Duplicating a
communicator 94770066934048 94770100861088 max tags =
2147483647<br>
[0] <sys> Petsc_OuterComm_Attr_Delete_Fn(): Removing
reference to PETSc communicator embedded in a user
MPI_Comm 94770100861088<br>
[0] <sys> Petsc_InnerComm_Attr_Delete_Fn(): User
MPI_Comm 94770066934048 is being unlinked from inner PETSc
comm 94770100861088<br>
[0] <sys> PetscCommDestroy(): Deleting PETSc
MPI_Comm 94770100861088<br>
[0] <sys> Petsc_Counter_Attr_Delete_Fn(): Deleting
counter data in an MPI_Comm 94770100861088<br>
[0] <mat> MatAssemblyEnd_SeqAIJ(): Matrix size: 240
X 240; storage space: 0 unneeded,2140 used<br>
[0] <mat> MatAssemblyEnd_SeqAIJ(): Number of mallocs
during MatSetValues() is 0<br>
[0] <mat> MatAssemblyEnd_SeqAIJ(): Maximum nonzeros
in any row is 11<br>
[0] <mat> MatCheckCompressedRow(): Found the ratio
(num_zerorows 0)/(num_localrows 240) < 0.6. Do not use
CompressedRow routines.<br>
[0] <mat> MatSeqAIJCheckInode(): Found 235 nodes out
of 240 rows. Not using Inode routines<br>
[0] <sys> PetscCommDuplicate(): Duplicating a
communicator 94770066934048 94770100861088 max tags =
2147483647<br>
[0] <sys> Petsc_OuterComm_Attr_Delete_Fn(): Removing
reference to PETSc communicator embedded in a user
MPI_Comm 94770100861088<br>
[0] <sys> Petsc_InnerComm_Attr_Delete_Fn(): User
MPI_Comm 94770066934048 is being unlinked from inner PETSc
comm 94770100861088<br>
[0] <sys> PetscCommDestroy(): Deleting PETSc
MPI_Comm 94770100861088<br>
[0] <sys> Petsc_Counter_Attr_Delete_Fn(): Deleting
counter data in an MPI_Comm 94770100861088<br>
[0] <mat> MatAssemblyEnd_SeqAIJ(): Matrix size: 300
X 300; storage space: 0 unneeded,2292 used<br>
[0] <mat> MatAssemblyEnd_SeqAIJ(): Number of mallocs
during MatSetValues() is 0<br>
[0] <mat> MatAssemblyEnd_SeqAIJ(): Maximum nonzeros
in any row is 33<br>
[0] <mat> MatCheckCompressedRow(): Found the ratio
(num_zerorows 0)/(num_localrows 300) < 0.6. Do not use
CompressedRow routines.<br>
[0] <mat> MatSeqAIJCheckInode(): Found 300 nodes out
of 300 rows. Not using Inode routines<br>
[0] <sys> PetscCommDuplicate(): Duplicating a
communicator 94770066934048 94770100861088 max tags =
2147483647<br>
[0] <sys> Petsc_OuterComm_Attr_Delete_Fn(): Removing
reference to PETSc communicator embedded in a user
MPI_Comm 94770100861088<br>
[0] <sys> Petsc_InnerComm_Attr_Delete_Fn(): User
MPI_Comm 94770066934048 is being unlinked from inner PETSc
comm 94770100861088<br>
[0] <sys> PetscCommDestroy(): Deleting PETSc
MPI_Comm 94770100861088<br>
[0] <sys> Petsc_Counter_Attr_Delete_Fn(): Deleting
counter data in an MPI_Comm 94770100861088<br>
[0] <mat> MatAssemblyEnd_SeqAIJ(): Matrix size: 615
X 1219; storage space: 0 unneeded,11202 used<br>
[0] <mat> MatAssemblyEnd_SeqAIJ(): Number of mallocs
during MatSetValues() is 0<br>
[0] <mat> MatAssemblyEnd_SeqAIJ(): Maximum nonzeros
in any row is 150<br>
[0] <mat> MatCheckCompressedRow(): Found the ratio
(num_zerorows 0)/(num_localrows 615) < 0.6. Do not use
CompressedRow routines.<br>
[0] <mat> MatSeqAIJCheckInode(): Found 561 nodes out
of 615 rows. Not using Inode routines<br>
[0] <mat> MatAssemblyEnd_SeqAIJ(): Matrix size: 64 X
1219; storage space: 0 unneeded,288 used<br>
[0] <mat> MatAssemblyEnd_SeqAIJ(): Number of mallocs
during MatSetValues() is 0<br>
[0] <mat> MatAssemblyEnd_SeqAIJ(): Maximum nonzeros
in any row is 6<br>
[0] <mat> MatCheckCompressedRow(): Found the ratio
(num_zerorows 0)/(num_localrows 64) < 0.6. Do not use
CompressedRow routines.<br>
[0] <mat> MatSeqAIJCheckInode(): Found 64 nodes out
of 64 rows. Not using Inode routines<br>
[0] <mat> MatAssemblyEnd_SeqAIJ(): Matrix size: 240
X 1219; storage space: 0 unneeded,8800 used<br>
[0] <mat> MatAssemblyEnd_SeqAIJ(): Number of mallocs
during MatSetValues() is 0<br>
[0] <mat> MatAssemblyEnd_SeqAIJ(): Maximum nonzeros
in any row is 78<br>
[0] <mat> MatCheckCompressedRow(): Found the ratio
(num_zerorows 0)/(num_localrows 240) < 0.6. Do not use
CompressedRow routines.<br>
[0] <mat> MatSeqAIJCheckInode(): Found 235 nodes out
of 240 rows. Not using Inode routines<br>
[0] <mat> MatAssemblyEnd_SeqAIJ(): Matrix size: 300
X 1219; storage space: 0 unneeded,6153 used<br>
[0] <mat> MatAssemblyEnd_SeqAIJ(): Number of mallocs
during MatSetValues() is 0<br>
[0] <mat> MatAssemblyEnd_SeqAIJ(): Maximum nonzeros
in any row is 89<br>
[0] <mat> MatCheckCompressedRow(): Found the ratio
(num_zerorows 0)/(num_localrows 300) < 0.6. Do not use
CompressedRow routines.<br>
[0] <mat> MatSeqAIJCheckInode(): Found 300 nodes out
of 300 rows. Not using Inode routines<br>
[0] <sys> PetscCommDuplicate(): Duplicating a
communicator 94770066934048 94770100861088 max tags =
2147483647<br>
[0] <sys> PetscCommDuplicate(): Using internal PETSc
communicator 94770066934048 94770100861088<br>
0 KSP Residual norm 2.541418258630e+01 <br>
[0] <ksp> KSPConvergedDefault(): user has provided
nonzero initial guess, computing 2-norm of RHS<br>
[0] <pc> PCSetUp(): Leaving PC with identical
preconditioner since operator is unchanged<br>
[0] <pc> PCSetUp(): Leaving PC with identical
preconditioner since operator is unchanged<br>
[0] <pc> PCSetUp(): Setting up PC for first time<br>
[0] <sys> PetscCommDuplicate(): Using internal PETSc
communicator 94770066934048 94770100861088<br>
[0] <sys> PetscCommDuplicate(): Using internal PETSc
communicator 94770066934048 94770100861088<br>
[0] <sys> PetscCommDuplicate(): Using internal PETSc
communicator 94770066934048 94770100861088<br>
[0] <pc> PCSetUp(): Leaving PC with identical
preconditioner since operator is unchanged<br>
[0] <pc> PCSetUp(): Setting up PC for first time<br>
[0] <sys> PetscCommDuplicate(): Using internal PETSc
communicator 94770066934048 94770100861088<br>
[0] <sys> PetscCommDuplicate(): Using internal PETSc
communicator 94770066934048 94770100861088<br>
[0] <sys> PetscCommDuplicate(): Using internal PETSc
communicator 94770066934048 94770100861088<br>
[0] &lt;sys&gt; PetscCommDuplicate(): Using internal PETSc
communicator 94770066934048 94770100861088<br>
<br>
<br>
<div>On 03/02/23 21:11, Matthew Knepley wrote:<br>
</div>
<blockquote type="cite">
<div dir="ltr">
<div dir="ltr">On Fri, Feb 3, 2023 at 3:03 PM Nicolas
Barnafi <<a href="mailto:nbarnafi@cmm.uchile.cl" target="_blank">nbarnafi@cmm.uchile.cl</a>>
wrote:<br>
</div>
<div class="gmail_quote">
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">> There are
a number of common errors:<br>
> <br>
> 1) Your PC has a prefix<br>
> <br>
> 2) You have not called KSPSetFromOptions()
here<br>
> <br>
> Can you send the -ksp_view output?<br>
<br>
The PC at least has no prefix. I had to set
ksp_rtol to 1 to get through <br>
the solution process; you will find both the
petsc_rc and the ksp_view <br>
at the bottom of this message.<br>
<br>
Options are indeed being set from the options
file, but there must be <br>
something missing at a certain level. Thanks for
looking into this.<br>
</blockquote>
<div><br>
</div>
<div>Okay, the next step is to pass</div>
<div><br>
</div>
<div> -info</div>
<div><br>
</div>
<div>and send the output. This will tell us how the
default splits were done. If that</div>
<div>is not conclusive, we will have to use the
debugger.</div>
<div><br>
</div>
<div> Thanks,</div>
<div><br>
</div>
<div> Matt</div>
<div> </div>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"> Best<br>
<br>
---- petsc_rc file<br>
<br>
-ksp_monitor<br>
-ksp_type gmres<br>
-ksp_view<br>
-mat_type aij<br>
-ksp_norm_type unpreconditioned<br>
-ksp_atol 1e-14<br>
-ksp_rtol 1<br>
-pc_type fieldsplit<br>
-pc_fieldsplit_type multiplicative<br>
<br>
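For reference, if the feature discussed at the top of this thread is added, the grouping would be requested from this same options file. The syntax below is an assumption, mirroring the existing DM-based numbered-fields form; it is not currently supported for IS-defined splits:

```
-pc_fieldsplit_0_fields 0,2
-pc_fieldsplit_1_fields 1,3
```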
---- ksp_view<br>
<br>
KSP Object: 1 MPI process<br>
type: gmres<br>
restart=500, using Classical (unmodified)
Gram-Schmidt <br>
Orthogonalization with no iterative refinement<br>
happy breakdown tolerance 1e-30<br>
maximum iterations=10000, nonzero initial guess<br>
tolerances: relative=1., absolute=1e-14,
divergence=10000.<br>
right preconditioning<br>
using UNPRECONDITIONED norm type for
convergence test<br>
PC Object: 1 MPI process<br>
type: fieldsplit<br>
FieldSplit with MULTIPLICATIVE composition:
total splits = 4<br>
Solver info for each split is in the
following KSP objects:<br>
Split number 0 Defined by IS<br>
KSP Object: (fieldsplit_0_) 1 MPI process<br>
type: preonly<br>
maximum iterations=10000, initial guess is
zero<br>
tolerances: relative=1e-05, absolute=1e-50,
divergence=10000.<br>
left preconditioning<br>
using DEFAULT norm type for convergence test<br>
PC Object: (fieldsplit_0_) 1 MPI process<br>
type: ilu<br>
PC has not been set up so information may be
incomplete<br>
out-of-place factorization<br>
0 levels of fill<br>
tolerance for zero pivot 2.22045e-14<br>
matrix ordering: natural<br>
matrix solver type: petsc<br>
matrix not yet factored; no additional
information available<br>
linear system matrix = precond matrix:<br>
Mat Object: (fieldsplit_0_) 1 MPI process<br>
type: seqaij<br>
rows=615, cols=615<br>
total: nonzeros=9213, allocated
nonzeros=9213<br>
total number of mallocs used during
MatSetValues calls=0<br>
not using I-node routines<br>
Split number 1 Defined by IS<br>
KSP Object: (fieldsplit_1_) 1 MPI process<br>
type: preonly<br>
maximum iterations=10000, initial guess is
zero<br>
tolerances: relative=1e-05, absolute=1e-50,
divergence=10000.<br>
left preconditioning<br>
using DEFAULT norm type for convergence test<br>
PC Object: (fieldsplit_1_) 1 MPI process<br>
type: ilu<br>
PC has not been set up so information may be
incomplete<br>
out-of-place factorization<br>
0 levels of fill<br>
tolerance for zero pivot 2.22045e-14<br>
matrix ordering: natural<br>
matrix solver type: petsc<br>
matrix not yet factored; no additional
information available<br>
linear system matrix = precond matrix:<br>
Mat Object: (fieldsplit_1_) 1 MPI process<br>
type: seqaij<br>
rows=64, cols=64<br>
total: nonzeros=0, allocated nonzeros=0<br>
total number of mallocs used during
MatSetValues calls=0<br>
using I-node routines: found 13 nodes,
limit used is 5<br>
Split number 2 Defined by IS<br>
KSP Object: (fieldsplit_2_) 1 MPI process<br>
type: preonly<br>
maximum iterations=10000, initial guess is
zero<br>
tolerances: relative=1e-05, absolute=1e-50,
divergence=10000.<br>
left preconditioning<br>
using DEFAULT norm type for convergence test<br>
PC Object: (fieldsplit_2_) 1 MPI process<br>
type: ilu<br>
PC has not been set up so information may be
incomplete<br>
out-of-place factorization<br>
0 levels of fill<br>
tolerance for zero pivot 2.22045e-14<br>
matrix ordering: natural<br>
matrix solver type: petsc<br>
matrix not yet factored; no additional
information available<br>
linear system matrix = precond matrix:<br>
Mat Object: (fieldsplit_2_) 1 MPI process<br>
type: seqaij<br>
rows=240, cols=240<br>
total: nonzeros=2140, allocated
nonzeros=2140<br>
total number of mallocs used during
MatSetValues calls=0<br>
not using I-node routines<br>
Split number 3 Defined by IS<br>
KSP Object: (fieldsplit_3_) 1 MPI process<br>
type: preonly<br>
maximum iterations=10000, initial guess is
zero<br>
tolerances: relative=1e-05, absolute=1e-50,
divergence=10000.<br>
left preconditioning<br>
using DEFAULT norm type for convergence test<br>
PC Object: (fieldsplit_3_) 1 MPI process<br>
type: ilu<br>
PC has not been set up so information may be
incomplete<br>
out-of-place factorization<br>
0 levels of fill<br>
tolerance for zero pivot 2.22045e-14<br>
matrix ordering: natural<br>
matrix solver type: petsc<br>
matrix not yet factored; no additional
information available<br>
linear system matrix = precond matrix:<br>
Mat Object: (fieldsplit_3_) 1 MPI process<br>
type: seqaij<br>
rows=300, cols=300<br>
total: nonzeros=2292, allocated
nonzeros=2292<br>
total number of mallocs used during
MatSetValues calls=0<br>
not using I-node routines<br>
linear system matrix = precond matrix:<br>
Mat Object: 1 MPI process<br>
type: seqaij<br>
rows=1219, cols=1219<br>
total: nonzeros=26443, allocated
nonzeros=26443<br>
total number of mallocs used during
MatSetValues calls=0<br>
not using I-node routines<br>
solving time: 0.00449609<br>
iterations: 0<br>
estimated error: 25.4142<br>
<br>
</blockquote>
</div>
<br clear="all">
<div><br>
</div>
</div>
</blockquote>
<br>
</div>
</blockquote>
</div>
<br clear="all">
<div><br>
</div>
</div>
</blockquote>
<br>
</div>
</blockquote></div><br clear="all"><div><br></div>-- <br><div dir="ltr" class="gmail_signature"><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div>What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.<br>-- Norbert Wiener</div><div><br></div><div><a href="https://www.cse.buffalo.edu/~knepley/" target="_blank">https://www.cse.buffalo.edu/~knepley/</a><br></div></div></div></div></div></div></div></div>