[petsc-users] Curiosity about MatSetOptionsPrefix on a_11 in PCSetUp_FieldSplit

Eric Chamberland Eric.Chamberland at giref.ulaval.ca
Thu Sep 11 10:34:17 CDT 2014


Hi,

I was just curious to know why the prefix of the a_{11} sub-matrix of a 
MatNest is forced to the KSP prefix by PCSetUp_FieldSplit?

In my case it changes from "gcrSchur_fieldsplit_a_11_" to 
"gcrSchur_fieldsplit_schur_" after PCSetUp_FieldSplit, by these lines of 
code (I think) in fieldsplit.c (PETSc 3.5.2):

const char *prefix;
ierr = MatGetSubMatrix(pc->pmat,ilink->is,ilink->is_col,MAT_INITIAL_MATRIX,&jac->pmat[i]);CHKERRQ(ierr);
ierr = KSPGetOptionsPrefix(ilink->ksp,&prefix);CHKERRQ(ierr);
ierr = MatSetOptionsPrefix(jac->pmat[i],prefix);CHKERRQ(ierr);
ierr = MatViewFromOptions(jac->pmat[i],NULL,"-mat_view");CHKERRQ(ierr);
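
As a side check from user code, one can query the prefix of the (1,1) 
block of the nest before and after KSPSetUp to see the rename (just a 
sketch; A and a11 are illustrative names, not from our code):

Mat        a11;
const char *pfx;
ierr = MatNestGetSubMat(A,1,1,&a11);CHKERRQ(ierr);   /* the (1,1) block of our MatNest */
ierr = MatGetOptionsPrefix(a11,&pfx);CHKERRQ(ierr);  /* after KSPSetUp this reports "gcrSchur_fieldsplit_schur_" */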

We wanted to pass options to the a_{11} matrix, but with 
PC_COMPOSITE_SCHUR we have to give the a_{11} matrix a unique prefix, 
different from the KSP prefix used to solve the Schur complement (which 
we named "schur"), and have MatSetFromOptions read options with this 
unique prefix. It all works, but we see this "curiosity" in the KSP view 
(see attached log): PCSetUp_FieldSplit renames our matrix, after all the 
precautions we took to name it differently!
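
For reference, the relevant part on our side looks roughly like this 
(just a sketch; the variable name A11 is illustrative):

Mat A11;
/* ... create and assemble the a_{11} (pressure-pressure) block ... */
ierr = MatSetOptionsPrefix(A11,"gcrSchur_fieldsplit_a_11_");CHKERRQ(ierr);
ierr = MatSetFromOptions(A11);CHKERRQ(ierr);  /* reads the -gcrSchur_fieldsplit_a_11_... options */
/* ... A11 then goes into the (1,1) slot of the MatNest handed to the outer KSP ... */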

The code is working, so maybe this is not an issue, but I can't tell 
from my knowledge whether it can be harmful?

thank you!

Eric


-------------- next part --------------
[0] PetscInitialize(): PETSc successfully started: number of processors = 1
[0] PetscGetHostName(): Rejecting domainname, likely is NIS melkor.(none)
[0] PetscInitialize(): Running on machine: melkor
[0] PetscCommDuplicate(): Duplicating a communicator 139814426697728 68462368 max tags = 2147483647
#PETSc Option Table entries:
-info
-on_error_attach_debugger ddd
#End of PETSc Option Table entries
assignation du prefixe (asgnPrefixeOptionsPETSc) pour Solveur_ProjectionL2_0x7fff820fcf20
  prefixe      : Options_ProjectionL2
  type matrice : aij
  type precond : hypre
  type solveur : cg (iteratif/precond)
  librairie    : petsc
assignation du prefixe (asgnPrefixeOptionsPETSc) pour Solveur_ProjectionL2_0x7fff820ffa28
  prefixe      : Options_ProjectionL2
  type matrice : aij
  type precond : hypre
  type solveur : cg (iteratif/precond)
  librairie    : petsc
ATTENTION: On a pas la BCS, on bascule vers MUMPS
assignation du prefixe (asgnPrefixeOptionsPETSc) pour mon_solvlin
  prefixe      : gcrSchur_
  type matrice : nest
  type precond : fieldsplit
  type solveur : gcr (iteratif/precond)
  librairie    : mumps
chrono::SolveurLinPETSc::detruitKSP::debut VmSize: 556768 VmRSS: 109304 VmPeak: 647920 VmData: 24060 VmHWM: 192648 <etiq_8>
::fin VmSize: 556768 VmRSS: 109304 VmPeak: 647920 VmData: 24060 VmHWM: 192648 WC: 0.000304 SelfUser: 0 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_8>
::fin VmSize: 556768 VmRSS: 109304 VmPeak: 647920 VmData: 24060 VmHWM: 192648 WC: 0.04414 SelfUser: 0.038 SelfSys: 0.006 ChildUser: 0 Childsys: 0 </etiq_7>
chrono::Geometrie::reconstructionModeTocher::debut VmSize: 557568 VmRSS: 111668 VmPeak: 647920 VmData: 24860 VmHWM: 192648 <etiq_9>
::fin VmSize: 557596 VmRSS: 113736 VmPeak: 647920 VmData: 24860 VmHWM: 192648 WC: 0.511831 SelfUser: 0.51 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_9>
::fin VmSize: 557596 VmRSS: 113736 VmPeak: 647920 VmData: 24860 VmHWM: 192648 WC: 0.991469 SelfUser: 0.913 SelfSys: 0.077 ChildUser: 0 Childsys: 0 </etiq_2>
::fin VmSize: 557596 VmRSS: 113736 VmPeak: 647920 VmData: 24860 VmHWM: 192648 WC: 0.999435 SelfUser: 0.92 SelfSys: 0.077 ChildUser: 0 Childsys: 0 </etiq_1>

 Gestion des booleens et des scalaires :
 -------------------------------------

Basculement du booleen "ActiveExportations" a true
chrono::ProblemeGD::asgnParametresEtInitialise::debut VmSize: 557596 VmRSS: 113736 VmPeak: 647920 VmData: 24860 VmHWM: 192648 <etiq_10>
Vous utilisez un Field split! WOW! cool! 2 u* pression*
Numerote GIS SSFeuilles:

-----------------------------------------------------------------------
SystemeSymbolique::affiche() [ this = 0x7fff820e9460]
Couplages (28):
	  1) 0x45f7240 : uX:uX sur domaine # 10 de type "8Maillage" type TF: 2
	  2) 0x45f7300 : uX:uY sur domaine # 10 de type "8Maillage" type TF: 2
	  3) 0x45f73c0 : uX:uZ sur domaine # 10 de type "8Maillage" type TF: 2
	  4) 0x45f7070 : uX:pression sur domaine # 10 de type "8Maillage" type TF: 2
	  5) 0x470ffa0 : uY:uX sur domaine # 10 de type "8Maillage" type TF: 2
	  6) 0x4726770 : uY:uY sur domaine # 10 de type "8Maillage" type TF: 2
	  7) 0x45f6e80 : uY:uZ sur domaine # 10 de type "8Maillage" type TF: 2
	  8) 0x45f71a0 : uY:pression sur domaine # 10 de type "8Maillage" type TF: 2
	  9) 0x45f6ed0 : uZ:uX sur domaine # 10 de type "8Maillage" type TF: 2
	 10) 0x4726980 : uZ:uY sur domaine # 10 de type "8Maillage" type TF: 2
	 11) 0x4726a00 : uZ:uZ sur domaine # 10 de type "8Maillage" type TF: 2
	 12) 0x431ce20 : uZ:pression sur domaine # 10 de type "8Maillage" type TF: 2
	 13) 0x4712e50 : pression:uX sur domaine # 10 de type "8Maillage" type TF: 2
	 14) 0x45f7120 : pression:uY sur domaine # 10 de type "8Maillage" type TF: 2
	 15) 0x431cda0 : pression:uZ sur domaine # 10 de type "8Maillage" type TF: 2
	 16) 0x431cea0 : pression:pression sur domaine # 10 de type "8Maillage" type TF: 2
	 17) 0x4726bd0 : uX:uX sur domaine # 50 de type "17EntiteGeometrique" type TF: 2
	 18) 0x4726c50 : uX:uY sur domaine # 50 de type "17EntiteGeometrique" type TF: 2
	 19) 0x4726cd0 : uX:uZ sur domaine # 50 de type "17EntiteGeometrique" type TF: 2
	 20) 0x4726d50 : uX:pression sur domaine # 50 de type "17EntiteGeometrique" type TF: 2
	 21) 0x45f6530 : uY:uX sur domaine # 50 de type "17EntiteGeometrique" type TF: 2
	 22) 0x45f65b0 : uY:uY sur domaine # 50 de type "17EntiteGeometrique" type TF: 2
	 23) 0x45f6630 : uY:uZ sur domaine # 50 de type "17EntiteGeometrique" type TF: 2
	 24) 0x45f66b0 : uY:pression sur domaine # 50 de type "17EntiteGeometrique" type TF: 2
	 25) 0x45f6730 : uZ:uX sur domaine # 50 de type "17EntiteGeometrique" type TF: 2
	 26) 0x45f67b0 : uZ:uY sur domaine # 50 de type "17EntiteGeometrique" type TF: 2
	 27) 0x45f6830 : uZ:uZ sur domaine # 50 de type "17EntiteGeometrique" type TF: 2
	 28) 0x45f68b0 : uZ:pression sur domaine # 50 de type "17EntiteGeometrique" type TF: 2
Champs hors couplages (0):
Liste champs connus (4):
	  1) 0x433bc30 : uX (racine)
	  2) 0x433ccc0 : uY (racine)
	  3) 0x433dd50 : uZ (racine)
	  4) 0x4340a80 : pression (racine)
Liste champs équations (4):
	  1) 0x433bc30 : uX (racine)
	  2) 0x433ccc0 : uY (racine)
	  3) 0x433dd50 : uZ (racine)
	  4) 0x4340a80 : pression (racine)
Liste champs inconnues (4):
	  1) 0x433bc30 : uX (racine)
	  2) 0x433ccc0 : uY (racine)
	  3) 0x433dd50 : uZ (racine)
	  4) 0x4340a80 : pression (racine)
-------------------------------------------------------------------------
Infos internes (4):
	  0) 0x433bc30 : uX [REI]
	  1) 0x433ccc0 : uY [REI]
	  2) 0x433dd50 : uZ [REI]
	  3) 0x4340a80 : pression [REI]
-------------------------------------------------------------------------
Numerote GIS SSS:

-----------------------------------------------------------------------
SystemeSymbolique::affiche() [ this = 0x7fff820e9650]
Couplages (16):
	  1) 0x45f7240 : uX:uX sur domaine # 10 de type "8Maillage" type TF: 2
	  2) 0x45f7300 : uX:uY sur domaine # 10 de type "8Maillage" type TF: 2
	  3) 0x45f73c0 : uX:uZ sur domaine # 10 de type "8Maillage" type TF: 2
	  4) 0x45f7070 : uX:pression sur domaine # 10 de type "8Maillage" type TF: 2
	  5) 0x470ffa0 : uY:uX sur domaine # 10 de type "8Maillage" type TF: 2
	  6) 0x4726770 : uY:uY sur domaine # 10 de type "8Maillage" type TF: 2
	  7) 0x45f6e80 : uY:uZ sur domaine # 10 de type "8Maillage" type TF: 2
	  8) 0x45f71a0 : uY:pression sur domaine # 10 de type "8Maillage" type TF: 2
	  9) 0x45f6ed0 : uZ:uX sur domaine # 10 de type "8Maillage" type TF: 2
	 10) 0x4726980 : uZ:uY sur domaine # 10 de type "8Maillage" type TF: 2
	 11) 0x4726a00 : uZ:uZ sur domaine # 10 de type "8Maillage" type TF: 2
	 12) 0x431ce20 : uZ:pression sur domaine # 10 de type "8Maillage" type TF: 2
	 13) 0x4712e50 : pression:uX sur domaine # 10 de type "8Maillage" type TF: 2
	 14) 0x45f7120 : pression:uY sur domaine # 10 de type "8Maillage" type TF: 2
	 15) 0x431cda0 : pression:uZ sur domaine # 10 de type "8Maillage" type TF: 2
	 16) 0x431cea0 : pression:pression sur domaine # 10 de type "8Maillage" type TF: 2
Champs hors couplages (0):
Liste champs connus (4):
	  1) 0x433bc30 : uX (racine)
	  2) 0x433ccc0 : uY (racine)
	  3) 0x433dd50 : uZ (racine)
	  4) 0x4340a80 : pression (racine)
Liste champs équations (4):
	  1) 0x433bc30 : uX (racine)
	  2) 0x433ccc0 : uY (racine)
	  3) 0x433dd50 : uZ (racine)
	  4) 0x4340a80 : pression (racine)
Liste champs inconnues (4):
	  1) 0x433bc30 : uX (racine)
	  2) 0x433ccc0 : uY (racine)
	  3) 0x433dd50 : uZ (racine)
	  4) 0x4340a80 : pression (racine)
-------------------------------------------------------------------------
Infos internes (4):
	  0) 0x433bc30 : uX [REI]
	  1) 0x433ccc0 : uY [REI]
	  2) 0x433dd50 : uZ [REI]
	  3) 0x4340a80 : pression [REI]
-------------------------------------------------------------------------
ajoute la sous-chaine: u*,pression*
Tous les groupes:[u*,pression*]
traite le field split (dégroupé): u*
FS: ajoute le champ: uX
FS: ajoute le champ: uY
FS: ajoute le champ: uZ
FS: n'ajoute pas le champ: pression
ligne: On a créé le sous-système symbolique suivant en (0,0) :

-----------------------------------------------------------------------
SystemeSymbolique::affiche() [ this = 0x45f90b0]
Couplages (9):
	  1) 0x45f52e0 : uX:uX sur domaine # 10 de type "8Maillage" type TF: 2
	  2) 0x45f9360 : uX:uY sur domaine # 10 de type "8Maillage" type TF: 2
	  3) 0x45f9440 : uX:uZ sur domaine # 10 de type "8Maillage" type TF: 2
	  4) 0x45f9520 : uY:uX sur domaine # 10 de type "8Maillage" type TF: 2
	  5) 0x45f95d0 : uY:uY sur domaine # 10 de type "8Maillage" type TF: 2
	  6) 0x45f9670 : uY:uZ sur domaine # 10 de type "8Maillage" type TF: 2
	  7) 0x45f9710 : uZ:uX sur domaine # 10 de type "8Maillage" type TF: 2
	  8) 0x45f97e0 : uZ:uY sur domaine # 10 de type "8Maillage" type TF: 2
	  9) 0x45f9880 : uZ:uZ sur domaine # 10 de type "8Maillage" type TF: 2
Champs hors couplages (0):
Liste champs connus (3):
	  1) 0x433bc30 : uX (racine)
	  2) 0x433ccc0 : uY (racine)
	  3) 0x433dd50 : uZ (racine)
Liste champs équations (3):
	  1) 0x433bc30 : uX (racine)
	  2) 0x433ccc0 : uY (racine)
	  3) 0x433dd50 : uZ (racine)
Liste champs inconnues (3):
	  1) 0x433bc30 : uX (racine)
	  2) 0x433ccc0 : uY (racine)
	  3) 0x433dd50 : uZ (racine)
-------------------------------------------------------------------------
Infos internes (3):
	  0) 0x433bc30 : uX [REI]
	  1) 0x433ccc0 : uY [REI]
	  2) 0x433dd50 : uZ [REI]
-------------------------------------------------------------------------
lignes: On a trouvé 2 GIS dont les enfants sont tous inclus dans les 9 IF+SG présents dans aGISNumerotationComplet 
Reconstitution du champ ligne u par ses GIS enfants
traite le field split (dégroupé): pression*
FS: n'ajoute pas le champ: uX
FS: n'ajoute pas le champ: uY
FS: n'ajoute pas le champ: uZ
FS: ajoute le champ: pression
ligne: On a créé le sous-système symbolique suivant en (1,1) :

-----------------------------------------------------------------------
SystemeSymbolique::affiche() [ this = 0x45fe730]
Couplages (1):
	  1) 0x45fe920 : pression:pression sur domaine # 10 de type "8Maillage" type TF: 2
Champs hors couplages (0):
Liste champs connus (1):
	  1) 0x4340a80 : pression (racine)
Liste champs équations (1):
	  1) 0x4340a80 : pression (racine)
Liste champs inconnues (1):
	  1) 0x4340a80 : pression (racine)
-------------------------------------------------------------------------
Infos internes (1):
	  0) 0x4340a80 : pression [REI]
-------------------------------------------------------------------------
lignes: On a trouvé 1 GIS dont les enfants sont tous inclus dans les 9 IF+SG présents dans aGISNumerotationComplet 
Reconstitution du champ ligne pression par ses GIS enfants
ligne: On a créé le sous-système symbolique suivant en (1,0) :

-----------------------------------------------------------------------
SystemeSymbolique::affiche() [ this = 0x4603080]
Couplages (3):
	  1) 0x4603290 : pression:uX sur domaine # 10 de type "8Maillage" type TF: 2
	  2) 0x4603380 : pression:uY sur domaine # 10 de type "8Maillage" type TF: 2
	  3) 0x4603430 : pression:uZ sur domaine # 10 de type "8Maillage" type TF: 2
Champs hors couplages (0):
Liste champs connus (4):
	  1) 0x433bc30 : uX (racine)
	  2) 0x433ccc0 : uY (racine)
	  3) 0x433dd50 : uZ (racine)
	  4) 0x4340a80 : pression (racine)
Liste champs équations (1):
	  1) 0x4340a80 : pression (racine)
Liste champs inconnues (3):
	  1) 0x433bc30 : uX (racine)
	  2) 0x433ccc0 : uY (racine)
	  3) 0x433dd50 : uZ (racine)
-------------------------------------------------------------------------
Infos internes (4):
	  0) 0x433bc30 : uX [RI]
	  1) 0x433ccc0 : uY [RI]
	  2) 0x433dd50 : uZ [RI]
	  3) 0x4340a80 : pression [RE]
-------------------------------------------------------------------------
lignes: On a trouvé 3 GIS dont les enfants sont tous inclus dans les 9 IF+SG présents dans aGISNumerotationComplet 
Reconstitution du champ ligne pression par ses GIS enfants
Reconstitution du champ ligne u par ses GIS enfants
col: On a créé le sous-système symbolique suivant en (0,1) :

-----------------------------------------------------------------------
SystemeSymbolique::affiche() [ this = 0x4608140]
Couplages (3):
	  1) 0x4608330 : uX:pression sur domaine # 10 de type "8Maillage" type TF: 2
	  2) 0x4608420 : uY:pression sur domaine # 10 de type "8Maillage" type TF: 2
	  3) 0x46084d0 : uZ:pression sur domaine # 10 de type "8Maillage" type TF: 2
Champs hors couplages (0):
Liste champs connus (4):
	  1) 0x433bc30 : uX (racine)
	  2) 0x433ccc0 : uY (racine)
	  3) 0x433dd50 : uZ (racine)
	  4) 0x4340a80 : pression (racine)
Liste champs équations (3):
	  1) 0x433bc30 : uX (racine)
	  2) 0x433ccc0 : uY (racine)
	  3) 0x433dd50 : uZ (racine)
Liste champs inconnues (1):
	  1) 0x4340a80 : pression (racine)
-------------------------------------------------------------------------
Infos internes (4):
	  0) 0x433bc30 : uX [RE]
	  1) 0x433ccc0 : uY [RE]
	  2) 0x433dd50 : uZ [RE]
	  3) 0x4340a80 : pression [RI]
-------------------------------------------------------------------------
colonnes: On a trouvé 3 GIS dont les enfants sont tous inclus dans les 9 IF+SG présents dans aGISNumerotationComplet 
Reconstitution du champ colonne u par ses GIS enfants
Reconstitution du champ colonne pression par ses GIS enfants
On garde les lignes avec DDLs dirichlet:
DDLsNum: CL Champ à imposer: uZ
DDLsNum: CL Champ à imposer: uY
DDLsNum: CL Champ à imposer: uX
DDLsNum: CL Champ à imposer: u
Trouvé Champ de la CL dans la numérotation!

 Exportation au format: GIREF

[0] PetscCommDuplicate(): Duplicating a communicator 68383344 73355168 max tags = 2147483647
[0] PetscCommDuplicate(): Using internal PETSc communicator 139814426697728 68462368
[0] PetscCommDuplicate(): Using internal PETSc communicator 68383344 73355168
[0] PetscCommDuplicate(): Using internal PETSc communicator 139814426697728 68462368
[0] VecScatterCreate(): Special case: sequential vector general to stride
[0] PetscCommDuplicate(): Using internal PETSc communicator 68383344 73355168
::fin VmSize: 559548 VmRSS: 119748 VmPeak: 647920 VmData: 26812 VmHWM: 192648 WC: 0.174405 SelfUser: 0.174 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_10>
chrono::ProblemeGD::resoudre::debut VmSize: 559548 VmRSS: 119748 VmPeak: 647920 VmData: 26812 VmHWM: 192648 <etiq_11>
chrono::SolveurLinPETSc::initialise:AssembleurGD::debut VmSize: 559548 VmRSS: 119972 VmPeak: 647920 VmData: 26812 VmHWM: 192648 <etiq_12>
chrono::SolveurLinPETSc::initialise:initialiseNumerotation:AssembleurGD::debut VmSize: 559548 VmRSS: 119972 VmPeak: 647920 VmData: 26812 VmHWM: 192648 <etiq_13>
::fin VmSize: 559548 VmRSS: 119972 VmPeak: 647920 VmData: 26812 VmHWM: 192648 WC: 0.000196 SelfUser: 0 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_13>
chrono::SolveurLinPETSc::initialise:initialiseObjetPETSc:AssembleurGD::debut VmSize: 559548 VmRSS: 119972 VmPeak: 647920 VmData: 26812 VmHWM: 192648 <etiq_14>
[0] PetscCommDuplicate(): Using internal PETSc communicator 68383344 73355168
::fin VmSize: 559548 VmRSS: 119972 VmPeak: 647920 VmData: 26812 VmHWM: 192648 WC: 0.000452 SelfUser: 0 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_14>
OptionsSolveurLinPETSc::configureMatrice de Matrice_AssembleurGD (gcrSchur_)
 Instanciation (configureMatrice) de la matrice de prefixe: gcrSchur_
 Identifiant de la matrice: M0_
 Type Matrice             : nest
 Librairie du Solveur     : mumps
 aMatriceSymetrique       : 0
 aSymetrieDetruite        : 0
 aIgnoreNonSymetrie       : 0
 aBasculeTypeCSR          : 0
 aTypeSymetrique          : sbaij
 aTypeNonSymetrique       : aij
 aTypeMatLu               : nest
 Nom du solveur           : mon_solvlin
Entre dans MatricePETScParBlocs::asgnDimension 
0x4431710 On appelle reqOptionsSousBloc avec a_00 dans gcrSchur_
On crée le préfixe: gcrSchur_fieldsplit_a_00_
  prefixe      : gcrSchur_fieldsplit_a_00_
  type matrice : aij
  type precond : lu
  type solveur : preonly (direct)
  librairie    : petsc
0x4431710 asgnOptionsSousBloc par le nom du sous-bloc: a_00 de préfixe: gcrSchur_fieldsplit_a_00_ dans gcrSchur_
MPB(0,0) = N/A solv: 0
 Instanciation (configureMatrice) de la matrice de prefixe: gcrSchur_fieldsplit_a_00_
 Identifiant de la matrice: M1_
 Type Matrice             : aij
 Librairie du Solveur     : librairie_auto
 aMatriceSymetrique       : 0
 aSymetrieDetruite        : 0
 aIgnoreNonSymetrie       : 0
 aBasculeTypeCSR          : 0
 aTypeSymetrique          : sbaij
 aTypeNonSymetrique       : aij
 aTypeMatLu               : aij
Bravo, vous avez le bon nb de sous-blocs vs sous-numérotations!
chrono::MatricePETSc::asgnDimension gcrSchur_fieldsplit_a_00_::debut VmSize: 559548 VmRSS: 120232 VmPeak: 647920 VmData: 26812 VmHWM: 192648 <etiq_15>
chrono::MatricePETScCSR::creeObjet(81x81)::debut VmSize: 559548 VmRSS: 120232 VmPeak: 647920 VmData: 26812 VmHWM: 192648 <etiq_16>
chrono::CompteurNonZeroCSRGIS::algoMetaCouplages()::debut VmSize: 559548 VmRSS: 120232 VmPeak: 647920 VmData: 26812 VmHWM: 192648 <etiq_17>
chrono::CompteurNonZeroCSRGIS::visiteMaillage::debut VmSize: 559548 VmRSS: 121288 VmPeak: 647920 VmData: 26812 VmHWM: 192648 <etiq_18>
::fin VmSize: 559548 VmRSS: 121552 VmPeak: 647920 VmData: 26812 VmHWM: 192648 WC: 0.001453 SelfUser: 0.002 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_18>
chrono::CompteurNonZeroCSRGIS::SetNonZeros::debut VmSize: 559548 VmRSS: 121552 VmPeak: 647920 VmData: 26812 VmHWM: 192648 <etiq_19>
::fin VmSize: 559548 VmRSS: 121552 VmPeak: 647920 VmData: 26812 VmHWM: 192648 WC: 0.004112 SelfUser: 0.004 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_19>
::fin VmSize: 559548 VmRSS: 121552 VmPeak: 647920 VmData: 26812 VmHWM: 192648 WC: 0.014277 SelfUser: 0.013 SelfSys: 0.001 ChildUser: 0 Childsys: 0 </etiq_17>
chrono::MatricePETScCSR::creeObjetPrive::debut VmSize: 559548 VmRSS: 121552 VmPeak: 647920 VmData: 26812 VmHWM: 192648 <etiq_20>
[0] PetscCommDuplicate(): Using internal PETSc communicator 68383344 73355168
[0] MatCreate_SeqAIJ_Inode(): Not using Inode routines due to -mat_no_inode
::fin VmSize: 559548 VmRSS: 122080 VmPeak: 647920 VmData: 26812 VmHWM: 192648 WC: 0.001289 SelfUser: 0.001 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_20>
chrono::MatricePETScCSR::definirStructureEtMettreAZeroAvecGIS::debut VmSize: 559548 VmRSS: 122080 VmPeak: 647920 VmData: 26812 VmHWM: 192648 <etiq_21>
chrono::MatricePETScCSR::definirStructureEtMettreAZeroAvecGIS::Couplages::debut VmSize: 559548 VmRSS: 122080 VmPeak: 647920 VmData: 26812 VmHWM: 192648 <etiq_22>
::fin VmSize: 559548 VmRSS: 122080 VmPeak: 647920 VmData: 26812 VmHWM: 192648 WC: 0.002659 SelfUser: 0.002 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_22>
chrono::MatricePETScCSR::definirStructureEtMettreAZeroGIS::FinAssemblage::debut VmSize: 559548 VmRSS: 122080 VmPeak: 647920 VmData: 26812 VmHWM: 192648 <etiq_23>
[0] MatAssemblyEnd_SeqAIJ(): Matrix size: 81 X 81; storage space: 0 unneeded,1845 used
[0] MatAssemblyEnd_SeqAIJ(): Number of mallocs during MatSetValues() is 0
[0] MatAssemblyEnd_SeqAIJ(): Maximum nonzeros in any row is 54
[0] MatCheckCompressedRow(): Found the ratio (num_zerorows 0)/(num_localrows 81) < 0.6. Do not use CompressedRow routines.
::fin VmSize: 559548 VmRSS: 122080 VmPeak: 647920 VmData: 26812 VmHWM: 192648 WC: 0.00084 SelfUser: 0.001 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_23>
::fin VmSize: 559548 VmRSS: 122080 VmPeak: 647920 VmData: 26812 VmHWM: 192648 WC: 0.004598 SelfUser: 0.004 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_21>
::fin VmSize: 559548 VmRSS: 122080 VmPeak: 647920 VmData: 26812 VmHWM: 192648 WC: 0.024048 SelfUser: 0.021 SelfSys: 0.003 ChildUser: 0 Childsys: 0 </etiq_16>
::fin VmSize: 559548 VmRSS: 122080 VmPeak: 647920 VmData: 26812 VmHWM: 192648 WC: 0.024529 SelfUser: 0.021 SelfSys: 0.003 ChildUser: 0 Childsys: 0 </etiq_15>
MPB(0,1) = N/A solv: 0
 Instanciation (configureMatrice) de la matrice de prefixe: gcrSchur_BlocsHDiag_
 Identifiant de la matrice: M2_
 Type Matrice             : aij
 Librairie du Solveur     : librairie_auto
 aMatriceSymetrique       : 0
 aSymetrieDetruite        : 1
 aIgnoreNonSymetrie       : 0
 aBasculeTypeCSR          : 0
 aTypeSymetrique          : sbaij
 aTypeNonSymetrique       : aij
 aTypeMatLu               : aij
Bravo, vous avez le bon nb de sous-blocs vs sous-numérotations!
chrono::MatricePETSc::asgnDimension gcrSchur_BlocsHDiag_::debut VmSize: 559548 VmRSS: 122080 VmPeak: 647920 VmData: 26812 VmHWM: 192648 <etiq_24>
chrono::MatricePETScCSR::creeObjet(81x8)::debut VmSize: 559548 VmRSS: 122080 VmPeak: 647920 VmData: 26812 VmHWM: 192648 <etiq_25>
chrono::CompteurNonZeroCSRGIS::algoMetaCouplages()::debut VmSize: 559548 VmRSS: 122080 VmPeak: 647920 VmData: 26812 VmHWM: 192648 <etiq_17>
chrono::CompteurNonZeroCSRGIS::visiteMaillage::debut VmSize: 559548 VmRSS: 122080 VmPeak: 647920 VmData: 26812 VmHWM: 192648 <etiq_18>
::fin VmSize: 559548 VmRSS: 122080 VmPeak: 647920 VmData: 26812 VmHWM: 192648 WC: 0.00083 SelfUser: 0.001 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_18>
chrono::CompteurNonZeroCSRGIS::SetNonZeros::debut VmSize: 559548 VmRSS: 122080 VmPeak: 647920 VmData: 26812 VmHWM: 192648 <etiq_19>
::fin VmSize: 559548 VmRSS: 122080 VmPeak: 647920 VmData: 26812 VmHWM: 192648 WC: 0.001924 SelfUser: 0.002 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_19>
::fin VmSize: 559548 VmRSS: 122080 VmPeak: 647920 VmData: 26812 VmHWM: 192648 WC: 0.00785 SelfUser: 0.007 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_17>
chrono::MatricePETScCSR::creeObjetPrive::debut VmSize: 559548 VmRSS: 122080 VmPeak: 647920 VmData: 26812 VmHWM: 192648 <etiq_20>
[0] PetscCommDuplicate(): Using internal PETSc communicator 68383344 73355168
[0] MatCreate_SeqAIJ_Inode(): Not using Inode routines due to -mat_no_inode
::fin VmSize: 559548 VmRSS: 122344 VmPeak: 647920 VmData: 26812 VmHWM: 192648 WC: 0.001006 SelfUser: 0.001 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_20>
chrono::MatricePETScCSR::definirStructureEtMettreAZeroAvecGIS::debut VmSize: 559548 VmRSS: 122344 VmPeak: 647920 VmData: 26812 VmHWM: 192648 <etiq_21>
chrono::MatricePETScCSR::definirStructureEtMettreAZeroAvecGIS::Couplages::debut VmSize: 559548 VmRSS: 122344 VmPeak: 647920 VmData: 26812 VmHWM: 192648 <etiq_22>
::fin VmSize: 559548 VmRSS: 122344 VmPeak: 647920 VmData: 26812 VmHWM: 192648 WC: 0.001945 SelfUser: 0.002 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_22>
chrono::MatricePETScCSR::definirStructureEtMettreAZeroGIS::FinAssemblage::debut VmSize: 559548 VmRSS: 122344 VmPeak: 647920 VmData: 26812 VmHWM: 192648 <etiq_23>
[0] MatAssemblyEnd_SeqAIJ(): Matrix size: 81 X 8; storage space: 0 unneeded,144 used
[0] MatAssemblyEnd_SeqAIJ(): Number of mallocs during MatSetValues() is 0
[0] MatAssemblyEnd_SeqAIJ(): Maximum nonzeros in any row is 8
[0] MatCheckCompressedRow(): Found the ratio (num_zerorows 27)/(num_localrows 81) < 0.6. Do not use CompressedRow routines.
::fin VmSize: 559548 VmRSS: 122344 VmPeak: 647920 VmData: 26812 VmHWM: 192648 WC: 0.000763 SelfUser: 0 SelfSys: 0.001 ChildUser: 0 Childsys: 0 </etiq_23>
::fin VmSize: 559548 VmRSS: 122344 VmPeak: 647920 VmData: 26812 VmHWM: 192648 WC: 0.003795 SelfUser: 0.002 SelfSys: 0.001 ChildUser: 0 Childsys: 0 </etiq_21>
::fin VmSize: 559548 VmRSS: 122344 VmPeak: 647920 VmData: 26812 VmHWM: 192648 WC: 0.019979 SelfUser: 0.018 SelfSys: 0.002 ChildUser: 0 Childsys: 0 </etiq_25>
::fin VmSize: 559548 VmRSS: 122344 VmPeak: 647920 VmData: 26812 VmHWM: 192648 WC: 0.020403 SelfUser: 0.018 SelfSys: 0.002 ChildUser: 0 Childsys: 0 </etiq_24>
MPB(1,0) = N/A solv: 0
 Instanciation (configureMatrice) de la matrice de prefixe: gcrSchur_BlocsHDiag_
 Identifiant de la matrice: M3_
 Type Matrice             : aij
 Librairie du Solveur     : librairie_auto
 aMatriceSymetrique       : 0
 aSymetrieDetruite        : 1
 aIgnoreNonSymetrie       : 0
 aBasculeTypeCSR          : 0
 aTypeSymetrique          : sbaij
 aTypeNonSymetrique       : aij
 aTypeMatLu               : aij
Bravo, vous avez le bon nb de sous-blocs vs sous-numérotations!
chrono::MatricePETSc::asgnDimension gcrSchur_BlocsHDiag_::debut VmSize: 559548 VmRSS: 122344 VmPeak: 647920 VmData: 26812 VmHWM: 192648 <etiq_24>
chrono::MatricePETScCSR::creeObjet(8x81)::debut VmSize: 559548 VmRSS: 122344 VmPeak: 647920 VmData: 26812 VmHWM: 192648 <etiq_26>
chrono::CompteurNonZeroCSRGIS::algoMetaCouplages()::debut VmSize: 559548 VmRSS: 122344 VmPeak: 647920 VmData: 26812 VmHWM: 192648 <etiq_17>
chrono::CompteurNonZeroCSRGIS::visiteMaillage::debut VmSize: 559548 VmRSS: 122344 VmPeak: 647920 VmData: 26812 VmHWM: 192648 <etiq_18>
::fin VmSize: 559548 VmRSS: 122344 VmPeak: 647920 VmData: 26812 VmHWM: 192648 WC: 0.001043 SelfUser: 0.001 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_18>
chrono::CompteurNonZeroCSRGIS::SetNonZeros::debut VmSize: 559548 VmRSS: 122344 VmPeak: 647920 VmData: 26812 VmHWM: 192648 <etiq_19>
::fin VmSize: 559548 VmRSS: 122344 VmPeak: 647920 VmData: 26812 VmHWM: 192648 WC: 0.002736 SelfUser: 0.002 SelfSys: 0.001 ChildUser: 0 Childsys: 0 </etiq_19>
::fin VmSize: 559548 VmRSS: 122344 VmPeak: 647920 VmData: 26812 VmHWM: 192648 WC: 0.018447 SelfUser: 0.018 SelfSys: 0.001 ChildUser: 0 Childsys: 0 </etiq_17>
chrono::MatricePETScCSR::creeObjetPrive::debut VmSize: 559548 VmRSS: 122344 VmPeak: 647920 VmData: 26812 VmHWM: 192648 <etiq_20>
[0] PetscCommDuplicate(): Using internal PETSc communicator 68383344 73355168
[0] MatCreate_SeqAIJ_Inode(): Not using Inode routines due to -mat_no_inode
::fin VmSize: 559548 VmRSS: 122608 VmPeak: 647920 VmData: 26812 VmHWM: 192648 WC: 0.001003 SelfUser: 0.001 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_20>
chrono::MatricePETScCSR::definirStructureEtMettreAZeroAvecGIS::debut VmSize: 559548 VmRSS: 122608 VmPeak: 647920 VmData: 26812 VmHWM: 192648 <etiq_21>
chrono::MatricePETScCSR::definirStructureEtMettreAZeroAvecGIS::Couplages::debut VmSize: 559548 VmRSS: 122608 VmPeak: 647920 VmData: 26812 VmHWM: 192648 <etiq_22>
::fin VmSize: 559680 VmRSS: 122608 VmPeak: 647920 VmData: 26944 VmHWM: 192648 WC: 0.004956 SelfUser: 0.005 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_22>
chrono::MatricePETScCSR::definirStructureEtMettreAZeroGIS::FinAssemblage::debut VmSize: 559680 VmRSS: 122608 VmPeak: 647920 VmData: 26944 VmHWM: 192648 <etiq_23>
[0] MatAssemblyEnd_SeqAIJ(): Matrix size: 8 X 81; storage space: 0 unneeded,144 used
[0] MatAssemblyEnd_SeqAIJ(): Number of mallocs during MatSetValues() is 0
[0] MatAssemblyEnd_SeqAIJ(): Maximum nonzeros in any row is 24
[0] MatCheckCompressedRow(): Found the ratio (num_zerorows 0)/(num_localrows 8) < 0.6. Do not use CompressedRow routines.
::fin VmSize: 559680 VmRSS: 122608 VmPeak: 647920 VmData: 26944 VmHWM: 192648 WC: 0.001032 SelfUser: 0.001 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_23>
::fin VmSize: 559680 VmRSS: 122608 VmPeak: 647920 VmData: 26944 VmHWM: 192648 WC: 0.007204 SelfUser: 0.007 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_21>
::fin VmSize: 559680 VmRSS: 122608 VmPeak: 647920 VmData: 26944 VmHWM: 192648 WC: 0.034172 SelfUser: 0.033 SelfSys: 0.001 ChildUser: 0 Childsys: 0 </etiq_26>
::fin VmSize: 559680 VmRSS: 122608 VmPeak: 647920 VmData: 26944 VmHWM: 192648 WC: 0.034704 SelfUser: 0.033 SelfSys: 0.001 ChildUser: 0 Childsys: 0 </etiq_24>
MPB(1,1) = N/A solv: 0
 Instanciation (configureMatrice) de la matrice de prefixe: gcrSchur_fieldsplit_a_11_
 Identifiant de la matrice: M4_
 Type Matrice             : sbaij
 Librairie du Solveur     : librairie_auto
 aMatriceSymetrique       : 0
 aSymetrieDetruite        : 0
 aIgnoreNonSymetrie       : 0
 aBasculeTypeCSR          : 0
 aTypeSymetrique          : sbaij
 aTypeNonSymetrique       : aij
 aTypeMatLu               : sbaij
Bravo, vous avez le bon nb de sous-blocs vs sous-numérotations!
chrono::MatricePETSc::asgnDimension gcrSchur_fieldsplit_a_11_::debut VmSize: 559680 VmRSS: 122608 VmPeak: 647920 VmData: 26944 VmHWM: 192648 <etiq_27>
chrono::MatricePETScCSR::creeObjet(8x8)::debut VmSize: 559680 VmRSS: 122608 VmPeak: 647920 VmData: 26944 VmHWM: 192648 <etiq_28>
chrono::CompteurNonZeroCSRGIS::algoMetaCouplages()::debut VmSize: 559680 VmRSS: 122608 VmPeak: 647920 VmData: 26944 VmHWM: 192648 <etiq_17>
chrono::CompteurNonZeroCSRGIS::visiteMaillage::debut VmSize: 559680 VmRSS: 122608 VmPeak: 647920 VmData: 26944 VmHWM: 192648 <etiq_18>
::fin VmSize: 559680 VmRSS: 122608 VmPeak: 647920 VmData: 26944 VmHWM: 192648 WC: 0.000798 SelfUser: 0 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_18>
chrono::CompteurNonZeroCSRGIS::SetNonZeros::debut VmSize: 559680 VmRSS: 122608 VmPeak: 647920 VmData: 26944 VmHWM: 192648 <etiq_19>
::fin VmSize: 559680 VmRSS: 122608 VmPeak: 647920 VmData: 26944 VmHWM: 192648 WC: 0.002245 SelfUser: 0.002 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_19>
::fin VmSize: 559680 VmRSS: 122608 VmPeak: 647920 VmData: 26944 VmHWM: 192648 WC: 0.01242 SelfUser: 0.011 SelfSys: 0.001 ChildUser: 0 Childsys: 0 </etiq_17>
chrono::MatricePETScCSR::creeObjetPrive::debut VmSize: 559680 VmRSS: 122608 VmPeak: 647920 VmData: 26944 VmHWM: 192648 <etiq_20>
[0] PetscCommDuplicate(): Using internal PETSc communicator 68383344 73355168
[0] MatCreate_SeqSBAIJ(): Not using Inode routines due to -mat_no_inode
[0] MatSetOption_SeqSBAIJ(): Option UNUSED_NONZERO_LOCATION_ERR not relevent
::fin VmSize: 559680 VmRSS: 122608 VmPeak: 647920 VmData: 26944 VmHWM: 192648 WC: 0.001146 SelfUser: 0.001 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_20>
chrono::MatricePETScCSR::definirStructureEtMettreAZeroAvecGIS::debut VmSize: 559680 VmRSS: 122608 VmPeak: 647920 VmData: 26944 VmHWM: 192648 <etiq_21>
chrono::MatricePETScCSR::definirStructureEtMettreAZeroAvecGIS::Couplages::debut VmSize: 559680 VmRSS: 122608 VmPeak: 647920 VmData: 26944 VmHWM: 192648 <etiq_22>
::fin VmSize: 559680 VmRSS: 122608 VmPeak: 647920 VmData: 26944 VmHWM: 192648 WC: 0.004398 SelfUser: 0.005 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_22>
chrono::MatricePETScCSR::definirStructureEtMettreAZeroGIS::FinAssemblage::debut VmSize: 559680 VmRSS: 122608 VmPeak: 647920 VmData: 26944 VmHWM: 192648 <etiq_23>
[0] MatAssemblyEnd_SeqSBAIJ(): Matrix size: 8 X 8, block size 1; storage space: 0 unneeded, 8 used
[0] MatAssemblyEnd_SeqSBAIJ(): Number of mallocs during MatSetValues is 0
[0] MatAssemblyEnd_SeqSBAIJ(): Most nonzeros blocks in any row is 1
::fin VmSize: 559680 VmRSS: 122608 VmPeak: 647920 VmData: 26944 VmHWM: 192648 WC: 0.000767 SelfUser: 0 SelfSys: 0.001 ChildUser: 0 Childsys: 0 </etiq_23>
::fin VmSize: 559680 VmRSS: 122608 VmPeak: 647920 VmData: 26944 VmHWM: 192648 WC: 0.006251 SelfUser: 0.006 SelfSys: 0.001 ChildUser: 0 Childsys: 0 </etiq_21>
::fin VmSize: 559680 VmRSS: 122608 VmPeak: 647920 VmData: 26944 VmHWM: 192648 WC: 0.028055 SelfUser: 0.026 SelfSys: 0.002 ChildUser: 0 Childsys: 0 </etiq_28>
::fin VmSize: 559680 VmRSS: 122608 VmPeak: 647920 VmData: 26944 VmHWM: 192648 WC: 0.028525 SelfUser: 0.026 SelfSys: 0.002 ChildUser: 0 Childsys: 0 </etiq_27>
[0] PetscCommDuplicate(): Using internal PETSc communicator 68383344 73355168
[0] PetscCommDuplicate(): Using internal PETSc communicator 68383344 73355168
[0] PetscCommDuplicate(): Using internal PETSc communicator 68383344 73355168
 Construction (creeMatriceEtAssigneProfile) de la matrice de prefixe: gcrSchur_
 Type Matrice                : nest
 Nouvelle Matrice            : VRAI
 Nom du probleme             : AssembleurGD
 Librairie du Solveur        : mumps
Matrice Refaite dans SolveurLinPETSc::asgnProfileMatrice
chrono::SolveurLinPETSc::initialise:newVecResidu:AssembleurGD::debut VmSize: 559812 VmRSS: 122872 VmPeak: 647920 VmData: 27076 VmHWM: 192648 <etiq_29>
::fin VmSize: 559812 VmRSS: 122872 VmPeak: 647920 VmData: 27076 VmHWM: 192648 WC: 0.000223 SelfUser: 0.001 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_29>
chrono::SolveurLinPETSc::initialise:initialiseVecResidu:AssembleurGD::debut VmSize: 559812 VmRSS: 122872 VmPeak: 647920 VmData: 27076 VmHWM: 192648 <etiq_30>
[0] PetscCommDuplicate(): Using internal PETSc communicator 68383344 73355168
[0] PetscCommDuplicate(): Using internal PETSc communicator 139814426697728 68462368
[0] PetscCommDuplicate(): Using internal PETSc communicator 68383344 73355168
[0] PetscCommDuplicate(): Using internal PETSc communicator 139814426697728 68462368
[0] VecScatterCreate(): Special case: sequential vector general to stride
[0] PetscCommDuplicate(): Using internal PETSc communicator 68383344 73355168
::fin VmSize: 559812 VmRSS: 122872 VmPeak: 647920 VmData: 27076 VmHWM: 192648 WC: 0.000515 SelfUser: 0 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_30>
chrono::SolveurLinPETSc::initialise:newVecCorrection:AssembleurGD::debut VmSize: 559812 VmRSS: 122872 VmPeak: 647920 VmData: 27076 VmHWM: 192648 <etiq_31>
::fin VmSize: 559812 VmRSS: 122872 VmPeak: 647920 VmData: 27076 VmHWM: 192648 WC: 0.000205 SelfUser: 0 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_31>
chrono::SolveurLinPETSc::initialise:initialiseVecCorrection:AssembleurGD::debut VmSize: 559812 VmRSS: 122872 VmPeak: 647920 VmData: 27076 VmHWM: 192648 <etiq_32>
[0] PetscCommDuplicate(): Using internal PETSc communicator 68383344 73355168
[0] PetscCommDuplicate(): Using internal PETSc communicator 139814426697728 68462368
[0] PetscCommDuplicate(): Using internal PETSc communicator 68383344 73355168
[0] PetscCommDuplicate(): Using internal PETSc communicator 139814426697728 68462368
[0] VecScatterCreate(): Special case: sequential vector general to stride
[0] PetscCommDuplicate(): Using internal PETSc communicator 68383344 73355168
::fin VmSize: 559944 VmRSS: 122872 VmPeak: 647920 VmData: 27208 VmHWM: 192648 WC: 0.000466 SelfUser: 0 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_32>
::fin VmSize: 559944 VmRSS: 122872 VmPeak: 647920 VmData: 27208 VmHWM: 192648 WC: 0.116025 SelfUser: 0.107 SelfSys: 0.009 ChildUser: 0 Childsys: 0 </etiq_12>
chrono::SolveurStatNlinPETSc::PreTraitParPasDeTemps:AssembleurGD::debut VmSize: 559944 VmRSS: 122872 VmPeak: 647920 VmData: 27208 VmHWM: 192648 <etiq_33>
chrono::GestionPrePostTraitement::executePrePostTrait:GESTIONPREPOSTDEFAUT:DebutPasDeTemps::************ProblemeGD_aPPExecutePreTraitement************::debut VmSize: 559944 VmRSS: 122872 VmPeak: 647920 VmData: 27208 VmHWM: 192648 <etiq_34>
chrono::PP::************ProblemeGD_aPPExecutePreTraitement************::effectueCalcul::debut VmSize: 559944 VmRSS: 123136 VmPeak: 647920 VmData: 27208 VmHWM: 192648 <etiq_35>
 
========  Pas de Temps [1] = 1    ========

chrono::PP::aPPMAJDeplacementNewtonPrecedent::effectueCalcul::debut VmSize: 559944 VmRSS: 123136 VmPeak: 647920 VmData: 27208 VmHWM: 192648 <etiq_36>
::fin VmSize: 559944 VmRSS: 123136 VmPeak: 647920 VmData: 27208 VmHWM: 192648 WC: 0.000304 SelfUser: 0 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_36>
::fin VmSize: 559944 VmRSS: 123136 VmPeak: 647920 VmData: 27208 VmHWM: 192648 WC: 0.003025 SelfUser: 0.003 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_35>
::fin VmSize: 559944 VmRSS: 123136 VmPeak: 647920 VmData: 27208 VmHWM: 192648 WC: 0.003435 SelfUser: 0.003 SelfSys: 0.001 ChildUser: 0 Childsys: 0 </etiq_34>
::fin VmSize: 559944 VmRSS: 123136 VmPeak: 647920 VmData: 27208 VmHWM: 192648 WC: 0.004056 SelfUser: 0.003 SelfSys: 0.001 ChildUser: 0 Childsys: 0 </etiq_33>
chrono::SolveurStatNlinPETSc::EcritResuPreParPasDeTemps:AssembleurGD::debut VmSize: 559944 VmRSS: 123136 VmPeak: 647920 VmData: 27208 VmHWM: 192648 <etiq_37>
::fin VmSize: 559944 VmRSS: 123136 VmPeak: 647920 VmData: 27208 VmHWM: 192648 WC: 0.000211 SelfUser: 0 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_37>
chrono::SolveurStatNlinPETSc::PPIteration:AssembleurGD::debut VmSize: 559944 VmRSS: 123136 VmPeak: 647920 VmData: 27208 VmHWM: 192648 <etiq_38>
chrono::GestionPrePostTraitement::executePrePostTrait:GESTIONPREPOSTDEFAUT:DebutIterationNlin::************ProblemeGD_aPPExecutePreTraitementIteration************::debut VmSize: 559944 VmRSS: 123136 VmPeak: 647920 VmData: 27208 VmHWM: 192648 <etiq_39>
chrono::PP::************ProblemeGD_aPPExecutePreTraitementIteration************::effectueCalcul::debut VmSize: 559944 VmRSS: 123136 VmPeak: 647920 VmData: 27208 VmHWM: 192648 <etiq_40>
chrono::PP::aPPMAJNoIterationNewton::effectueCalcul::debut VmSize: 559944 VmRSS: 123136 VmPeak: 647920 VmData: 27208 VmHWM: 192648 <etiq_41>
::fin VmSize: 559944 VmRSS: 123136 VmPeak: 647920 VmData: 27208 VmHWM: 192648 WC: 0.000202 SelfUser: 0 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_41>
chrono::PP::aPPMAJNoIterationCumule::effectueCalcul::debut VmSize: 559944 VmRSS: 123136 VmPeak: 647920 VmData: 27208 VmHWM: 192648 <etiq_42>
chrono::PP::PostTrait:aPPMAJNoIterationCumule:aPPMAJNoIterationCumuleAuxVoisins::debut VmSize: 559944 VmRSS: 123136 VmPeak: 647920 VmData: 27208 VmHWM: 192648 <etiq_43>
chrono::PP::aPPMAJNoIterationCumuleAuxVoisins::effectueCalcul::debut VmSize: 559944 VmRSS: 123136 VmPeak: 647920 VmData: 27208 VmHWM: 192648 <etiq_44>
::fin VmSize: 559944 VmRSS: 123136 VmPeak: 647920 VmData: 27208 VmHWM: 192648 WC: 0.000231 SelfUser: 0 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_44>
::fin VmSize: 559944 VmRSS: 123136 VmPeak: 647920 VmData: 27208 VmHWM: 192648 WC: 0.000622 SelfUser: 0 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_43>
::fin VmSize: 559944 VmRSS: 123136 VmPeak: 647920 VmData: 27208 VmHWM: 192648 WC: 0.001034 SelfUser: 0.001 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_42>
::fin VmSize: 559944 VmRSS: 123136 VmPeak: 647920 VmData: 27208 VmHWM: 192648 WC: 0.00181 SelfUser: 0.002 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_40>
::fin VmSize: 559944 VmRSS: 123136 VmPeak: 647920 VmData: 27208 VmHWM: 192648 WC: 0.002237 SelfUser: 0.002 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_39>
::fin VmSize: 559944 VmRSS: 123136 VmPeak: 647920 VmData: 27208 VmHWM: 192648 WC: 0.002619 SelfUser: 0.002 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_38>
chrono::SolveurStatNlinPETSc::faisAssemblage:AssembleurGD::debut VmSize: 559944 VmRSS: 123136 VmPeak: 647920 VmData: 27208 VmHWM: 192648 <etiq_45>
chrono::SolveurLinPETSc::faisAssemblagePrive::appliqueChamp:AssembleurGD::debut VmSize: 559944 VmRSS: 123136 VmPeak: 647920 VmData: 27208 VmHWM: 192648 <etiq_46>
::fin VmSize: 559944 VmRSS: 123136 VmPeak: 647920 VmData: 27208 VmHWM: 192648 WC: 0.000452 SelfUser: 0.001 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_46>
chrono::SolveurLinPETSc::faisAssemblagePrive::debutAssemblage:AssembleurGD::debut VmSize: 559944 VmRSS: 123136 VmPeak: 647920 VmData: 27208 VmHWM: 192648 <etiq_47>
::fin VmSize: 559944 VmRSS: 123136 VmPeak: 647920 VmData: 27208 VmHWM: 192648 WC: 0.00037 SelfUser: 0.001 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_47>
chrono::SolveurLinPETSc::faisAssemblagePrive::assembleMatriceEtResidu:AssembleurGD::debut VmSize: 559944 VmRSS: 123136 VmPeak: 647920 VmData: 27208 VmHWM: 192648 <etiq_48>
chrono::ProblemeEF::assemblePriveDomaine:AssembleurGD::debut VmSize: 559944 VmRSS: 123136 VmPeak: 647920 VmData: 27208 VmHWM: 192648 <etiq_49>
chrono::ProblemeEF::assemblePrive:Mat:Vec:AssembleurGD::debut VmSize: 559944 VmRSS: 123400 VmPeak: 647920 VmData: 27208 VmHWM: 192648 <etiq_50>
::fin VmSize: 565144 VmRSS: 124180 VmPeak: 647920 VmData: 27360 VmHWM: 192648 WC: 0.153915 SelfUser: 0.047 SelfSys: 0.006 ChildUser: 0 Childsys: 0 </etiq_50>
::fin VmSize: 565144 VmRSS: 124184 VmPeak: 647920 VmData: 27360 VmHWM: 192648 WC: 0.155662 SelfUser: 0.049 SelfSys: 0.006 ChildUser: 0 Childsys: 0 </etiq_49>
chrono::ProblemeEF::assemblePrivePeau:AssembleurGD::debut VmSize: 565144 VmRSS: 124184 VmPeak: 647920 VmData: 27360 VmHWM: 192648 <etiq_51>
::fin VmSize: 565144 VmRSS: 124184 VmPeak: 647920 VmData: 27360 VmHWM: 192648 WC: 0.000246 SelfUser: 0 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_51>
::fin VmSize: 565144 VmRSS: 124184 VmPeak: 647920 VmData: 27360 VmHWM: 192648 WC: 0.159546 SelfUser: 0.051 SelfSys: 0.008 ChildUser: 0 Childsys: 0 </etiq_48>
chrono::SolveurLinPETSc::faisAssemblagePrive::appliqueMatrice:AssembleurGD::debut VmSize: 565144 VmRSS: 124184 VmPeak: 647920 VmData: 27360 VmHWM: 192648 <etiq_52>
chrono::MatricePETSc::mettreAZeroLignes::debut VmSize: 571572 VmRSS: 124664 VmPeak: 647920 VmData: 27632 VmHWM: 192648 <etiq_53>
[0] MatAssemblyEnd_SeqAIJ(): Matrix size: 81 X 81; storage space: 0 unneeded,1845 used
[0] MatAssemblyEnd_SeqAIJ(): Number of mallocs during MatSetValues() is 0
[0] MatAssemblyEnd_SeqAIJ(): Maximum nonzeros in any row is 54
[0] MatCheckCompressedRow(): Found the ratio (num_zerorows 0)/(num_localrows 81) < 0.6. Do not use CompressedRow routines.
[0] MatAssemblyEnd_SeqAIJ(): Matrix size: 81 X 81; storage space: 0 unneeded,1845 used
[0] MatAssemblyEnd_SeqAIJ(): Number of mallocs during MatSetValues() is 0
[0] MatAssemblyEnd_SeqAIJ(): Maximum nonzeros in any row is 54
[0] MatCheckCompressedRow(): Found the ratio (num_zerorows 0)/(num_localrows 81) < 0.6. Do not use CompressedRow routines.
::fin VmSize: 571572 VmRSS: 124664 VmPeak: 647920 VmData: 27632 VmHWM: 192648 WC: 0.0018 SelfUser: 0.001 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_53>
chrono::MatricePETSc::mettreAZeroLignes::debut VmSize: 571572 VmRSS: 124664 VmPeak: 647920 VmData: 27632 VmHWM: 192648 <etiq_53>
[0] MatAssemblyEnd_SeqAIJ(): Matrix size: 81 X 8; storage space: 0 unneeded,144 used
[0] MatAssemblyEnd_SeqAIJ(): Number of mallocs during MatSetValues() is 0
[0] MatAssemblyEnd_SeqAIJ(): Maximum nonzeros in any row is 8
[0] MatCheckCompressedRow(): Found the ratio (num_zerorows 27)/(num_localrows 81) < 0.6. Do not use CompressedRow routines.
[0] MatAssemblyEnd_SeqAIJ(): Matrix size: 81 X 8; storage space: 0 unneeded,144 used
[0] MatAssemblyEnd_SeqAIJ(): Number of mallocs during MatSetValues() is 0
[0] MatAssemblyEnd_SeqAIJ(): Maximum nonzeros in any row is 8
[0] MatCheckCompressedRow(): Found the ratio (num_zerorows 27)/(num_localrows 81) < 0.6. Do not use CompressedRow routines.
::fin VmSize: 571572 VmRSS: 124664 VmPeak: 647920 VmData: 27632 VmHWM: 192648 WC: 0.000881 SelfUser: 0.001 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_53>
[0] MatAssemblyEnd_SeqAIJ(): Matrix size: 8 X 81; storage space: 0 unneeded,144 used
[0] MatAssemblyEnd_SeqAIJ(): Number of mallocs during MatSetValues() is 0
[0] MatAssemblyEnd_SeqAIJ(): Maximum nonzeros in any row is 24
[0] MatCheckCompressedRow(): Found the ratio (num_zerorows 0)/(num_localrows 8) < 0.6. Do not use CompressedRow routines.
[0] MatAssemblyEnd_SeqSBAIJ(): Matrix size: 8 X 8, block size 1; storage space: 0 unneeded, 8 used
[0] MatAssemblyEnd_SeqSBAIJ(): Number of mallocs during MatSetValues is 0
[0] MatAssemblyEnd_SeqSBAIJ(): Most nonzeros blocks in any row is 1
[0] MatAssemblyEnd_SeqAIJ(): Matrix size: 81 X 81; storage space: 0 unneeded,1845 used
[0] MatAssemblyEnd_SeqAIJ(): Number of mallocs during MatSetValues() is 0
[0] MatAssemblyEnd_SeqAIJ(): Maximum nonzeros in any row is 54
[0] MatCheckCompressedRow(): Found the ratio (num_zerorows 0)/(num_localrows 81) < 0.6. Do not use CompressedRow routines.
[0] MatAssemblyEnd_SeqAIJ(): Matrix size: 81 X 8; storage space: 0 unneeded,144 used
[0] MatAssemblyEnd_SeqAIJ(): Number of mallocs during MatSetValues() is 0
[0] MatAssemblyEnd_SeqAIJ(): Maximum nonzeros in any row is 8
[0] MatCheckCompressedRow(): Found the ratio (num_zerorows 27)/(num_localrows 81) < 0.6. Do not use CompressedRow routines.
[0] MatAssemblyEnd_SeqAIJ(): Matrix size: 8 X 81; storage space: 0 unneeded,144 used
[0] MatAssemblyEnd_SeqAIJ(): Number of mallocs during MatSetValues() is 0
[0] MatAssemblyEnd_SeqAIJ(): Maximum nonzeros in any row is 24
[0] MatCheckCompressedRow(): Found the ratio (num_zerorows 0)/(num_localrows 8) < 0.6. Do not use CompressedRow routines.
[0] MatAssemblyEnd_SeqSBAIJ(): Matrix size: 8 X 8, block size 1; storage space: 0 unneeded, 8 used
[0] MatAssemblyEnd_SeqSBAIJ(): Number of mallocs during MatSetValues is 0
[0] MatAssemblyEnd_SeqSBAIJ(): Most nonzeros blocks in any row is 1
::fin VmSize: 571572 VmRSS: 124696 VmPeak: 647920 VmData: 27632 VmHWM: 192648 WC: 0.193168 SelfUser: 0.048 SelfSys: 0.004 ChildUser: 0.098 Childsys: 0.048 </etiq_52>
chrono::SolveurLinPETSc::faisAssemblagePrive::appliqueResidu:AssembleurGD::debut VmSize: 571572 VmRSS: 124696 VmPeak: 647920 VmData: 27632 VmHWM: 192648 <etiq_54>
[0] VecAssemblyBegin_MPI(): Stash has 0 entries, uses 0 mallocs.
[0] VecAssemblyBegin_MPI(): Block-Stash has 0 entries, uses 0 mallocs.
::fin VmSize: 577728 VmRSS: 124808 VmPeak: 647920 VmData: 27632 VmHWM: 192648 WC: 0.160504 SelfUser: 0.018 SelfSys: 0.004 ChildUser: 0.092 Childsys: 0.059 </etiq_54>
chrono::SolveurLinPETSc::faisAssemblagePrive::finAssemblageAssembleurGD::debut VmSize: 577728 VmRSS: 124808 VmPeak: 647920 VmData: 27632 VmHWM: 192648 <etiq_55>
[0] VecAssemblyBegin_MPI(): Stash has 0 entries, uses 0 mallocs.
[0] VecAssemblyBegin_MPI(): Block-Stash has 0 entries, uses 0 mallocs.
::fin VmSize: 577728 VmRSS: 124848 VmPeak: 647920 VmData: 27632 VmHWM: 192648 WC: 0.000381 SelfUser: 0 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_55>
::fin VmSize: 577728 VmRSS: 124848 VmPeak: 647920 VmData: 27632 VmHWM: 192648 WC: 0.515877 SelfUser: 0.119 SelfSys: 0.016 ChildUser: 0 Childsys: 0 </etiq_45>
chrono::SolveurStatNlinPETSc::resoudre_et_RechercheLineaire:AssembleurGD::debut VmSize: 577728 VmRSS: 124848 VmPeak: 647920 VmData: 27632 VmHWM: 192648 <etiq_56>
chrono::SolveurLinPETSc::resoudre_Factorisation_et_DR:AssembleurGD::debut VmSize: 577728 VmRSS: 124848 VmPeak: 647920 VmData: 27632 VmHWM: 192648 <etiq_57>
chrono::asgnOperateurKSP::debut VmSize: 577728 VmRSS: 124848 VmPeak: 647920 VmData: 27632 VmHWM: 192648 <etiq_58>
::fin VmSize: 577728 VmRSS: 124848 VmPeak: 647920 VmData: 27632 VmHWM: 192648 WC: 0.000683 SelfUser: 0.001 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_58>
chrono::SolveurLinPETSc::resoudre:KSPSetUp:AssembleurGD::debut VmSize: 577728 VmRSS: 124848 VmPeak: 647920 VmData: 27632 VmHWM: 192648 <etiq_59>
nomme champ 0 : a_00
nomme champ 1 : schur
[0] PCSetUp(): Setting up PC for first time
[0] VecScatterCreate(): Special case: sequential vector stride to stride
[0] VecScatterCreate(): Special case: sequential vector stride to stride
0x4431710 On appelle reqOptionsSousBloc avec a_00 dans gcrSchur_
[0] PCSetUp(): Setting up PC for first time
[0] PetscCommDuplicate(): Using internal PETSc communicator 139814426697728 68462368
[0] PetscCommDuplicate(): Using internal PETSc communicator 139814426697728 68462368
[0] PetscCommDuplicate(): Using internal PETSc communicator 139814426697728 68462368
[0] MatLUFactorSymbolic_SeqAIJ(): Reallocs 0 Fill ratio:given 5 needed 1.03902
[0] MatLUFactorSymbolic_SeqAIJ(): Run with -pc_factor_fill 1.03902 or use 
[0] MatLUFactorSymbolic_SeqAIJ(): PCFactorSetFill(pc,1.03902);
[0] MatLUFactorSymbolic_SeqAIJ(): for best performance.
[0] Mat_CheckInode_FactorLU(): Found 27 nodes of 81. Limit used: 5. Using Inode routines
0x4431710 On appelle reqOptionsSousBloc avec schur dans gcrSchur_
On crée le préfixe: gcrSchur_fieldsplit_schur_
  prefixe      : gcrSchur_fieldsplit_schur_
  type matrice : schurcomplement
  type precond : jacobi
  type solveur : gcr (iteratif/precond)
  librairie    : petsc
0x4431710 asgnOptionsSousBloc par le nom du sous-bloc: schur de préfixe: gcrSchur_fieldsplit_schur_ dans gcrSchur_
[0] PCSetUp(): Setting up PC for first time
::fin VmSize: 580112 VmRSS: 127972 VmPeak: 647920 VmData: 30016 VmHWM: 192648 WC: 0.010764 SelfUser: 0.009 SelfSys: 0.002 ChildUser: 0 Childsys: 0 </etiq_59>
chrono::SolveurLinPETSc::resoudre:KSPSolve:AssembleurGD::debut VmSize: 580112 VmRSS: 127972 VmPeak: 647920 VmData: 30016 VmHWM: 192648 <etiq_60>
[0] PetscCommDuplicate(): Duplicating a communicator 139814426695680 76804080 max tags = 2147483647
KSP Object:(gcrSchur_) 1 MPI processes
  type: gcr
    GCR: restart = 30 
    GCR: restarts performed = 0 
  maximum iterations=100, initial guess is zero
  tolerances:  relative=1e-11, absolute=1e-11, divergence=10000
  right preconditioning
  using UNPRECONDITIONED norm type for convergence test
PC Object:(gcrSchur_) 1 MPI processes
  type: fieldsplit
    FieldSplit with Schur preconditioner, factorization FULL
    Preconditioner for the Schur complement formed from A11
    Split info:
    Split number 0 Defined by IS
    Split number 1 Defined by IS
    KSP solver for A00 block
      KSP Object:      (gcrSchur_fieldsplit_a_00_)       1 MPI processes
        type: preonly
        maximum iterations=10000, initial guess is zero
        tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
        left preconditioning
        using NONE norm type for convergence test
      PC Object:      (gcrSchur_fieldsplit_a_00_)       1 MPI processes
        type: lu
          LU: out-of-place factorization
          tolerance for zero pivot 2.22045e-14
          matrix ordering: nd
          factor fill ratio given 5, needed 1.03902
            Factored matrix follows:
              Mat Object:               1 MPI processes
                type: seqaij
                rows=81, cols=81, bs=3
                package used to perform factorization: petsc
                total: nonzeros=1917, allocated nonzeros=1917
                total number of mallocs used during MatSetValues calls =0
                  using I-node routines: found 27 nodes, limit used is 5
        linear system matrix = precond matrix:
        Mat Object:        (gcrSchur_fieldsplit_a_00_)         1 MPI processes
          type: seqaij
          rows=81, cols=81, bs=3
          total: nonzeros=1845, allocated nonzeros=1845
          total number of mallocs used during MatSetValues calls =0
            not using I-node routines
    KSP solver for S = A11 - A10 inv(A00) A01 
      KSP Object:      (gcrSchur_fieldsplit_schur_)       1 MPI processes
        type: gcr
          GCR: restart = 30 
          GCR: restarts performed = 0 
        maximum iterations=10000, initial guess is zero
        tolerances:  relative=1e-12, absolute=1e-12, divergence=10000
        right preconditioning
        using UNPRECONDITIONED norm type for convergence test
      PC Object:      (gcrSchur_fieldsplit_schur_)       1 MPI processes
        type: jacobi
        linear system matrix followed by preconditioner matrix:
        Mat Object:        (gcrSchur_fieldsplit_schur_)         1 MPI processes
          type: schurcomplement
          rows=8, cols=8
            Schur complement A11 - A10 inv(A00) A01
            A11
              Mat Object:              (gcrSchur_fieldsplit_schur_)               1 MPI processes
                type: seqsbaij
                rows=8, cols=8
                total: nonzeros=8, allocated nonzeros=8
                total number of mallocs used during MatSetValues calls =0
                    block size is 1
            A10
              Mat Object:              (gcrSchur_BlocsHDiag_)               1 MPI processes
                type: seqaij
                rows=8, cols=81
                total: nonzeros=144, allocated nonzeros=144
                total number of mallocs used during MatSetValues calls =0
                  not using I-node routines
            KSP of A00
              KSP Object:              (gcrSchur_fieldsplit_a_00_)               1 MPI processes
                type: preonly
                maximum iterations=10000, initial guess is zero
                tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
                left preconditioning
                using NONE norm type for convergence test
              PC Object:              (gcrSchur_fieldsplit_a_00_)               1 MPI processes
                type: lu
                  LU: out-of-place factorization
                  tolerance for zero pivot 2.22045e-14
                  matrix ordering: nd
                  factor fill ratio given 5, needed 1.03902
                    Factored matrix follows:
                      Mat Object:                       1 MPI processes
                        type: seqaij
                        rows=81, cols=81, bs=3
                        package used to perform factorization: petsc
                        total: nonzeros=1917, allocated nonzeros=1917
                        total number of mallocs used during MatSetValues calls =0
                          using I-node routines: found 27 nodes, limit used is 5
                linear system matrix = precond matrix:
                Mat Object:                (gcrSchur_fieldsplit_a_00_)                 1 MPI processes
                  type: seqaij
                  rows=81, cols=81, bs=3
                  total: nonzeros=1845, allocated nonzeros=1845
                  total number of mallocs used during MatSetValues calls =0
                    not using I-node routines
            A01
              Mat Object:              (gcrSchur_BlocsHDiag_)               1 MPI processes
                type: seqaij
                rows=81, cols=8
                total: nonzeros=144, allocated nonzeros=144
                total number of mallocs used during MatSetValues calls =0
                  not using I-node routines
        Mat Object:        (gcrSchur_fieldsplit_schur_)         1 MPI processes
          type: seqsbaij
          rows=8, cols=8
          total: nonzeros=8, allocated nonzeros=8
          total number of mallocs used during MatSetValues calls =0
              block size is 1
  linear system matrix = precond matrix:
  Mat Object:   1 MPI processes
    type: nest
    rows=89, cols=89
      Matrix object: 
        type=nest, rows=2, cols=2 
        MatNest structure: 
        (0,0) : prefix="gcrSchur_fieldsplit_a_00_", type=seqaij, rows=81, cols=81 
        (0,1) : prefix="gcrSchur_BlocsHDiag_", type=seqaij, rows=81, cols=8 
        (1,0) : prefix="gcrSchur_BlocsHDiag_", type=seqaij, rows=8, cols=81 
        (1,1) : prefix="gcrSchur_fieldsplit_schur_", type=seqsbaij, rows=8, cols=8 

 Configuration (ecritInfoKSP) du KSP : 
 Nom du solveur        : mon_solvlin
 Librairie du Solveur  : petsc
 Type du solveur       : gcr (gcrSchur_)
 Type du précond.      : fieldsplit (gcrSchur_)
 Type Matrice          : nest
 MatNonzeroState       : 0
 Type Matrice          : nest
  Residual norms for gcrSchur_ solve.
  0 KSP Residual norm 2.108717462444e-01 
[0] PCSetUp(): Leaving PC with identical preconditioner since operator is unchanged
[0] PCSetUp(): Leaving PC with identical preconditioner since operator is unchanged
[0] PCSetUp(): Leaving PC with identical preconditioner since operator is unchanged
    Residual norms for gcrSchur_fieldsplit_schur_ solve.
    0 KSP Residual norm 1.651981957458e-03 
[0] PCSetUp(): Leaving PC with identical preconditioner since operator is unchanged
[0] PCSetUp(): Leaving PC with identical preconditioner since operator is unchanged
    1 KSP Residual norm 2.204951633351e-05 
[0] PCSetUp(): Leaving PC with identical preconditioner since operator is unchanged
[0] PCSetUp(): Leaving PC with identical preconditioner since operator is unchanged
    2 KSP Residual norm 1.118981498028e-19 
[0] KSPConvergedDefault(): Linear solver has converged. Residual norm 1.118981498028e-19 is less than absolute tolerance 1.000000000000e-12 at iteration 2
[0] PCSetUp(): Leaving PC with identical preconditioner since operator is unchanged
  1 KSP Residual norm 6.726023515396e-17 
[0] KSPConvergedDefault(): Linear solver has converged. Residual norm 6.726023515396e-17 is less than absolute tolerance 1.000000000000e-11 at iteration 1
KSP Object:(gcrSchur_) 1 MPI processes
  type: gcr
    GCR: restart = 30 
    GCR: restarts performed = 1 
  maximum iterations=100, initial guess is zero
  tolerances:  relative=1e-11, absolute=1e-11, divergence=10000
  right preconditioning
  using UNPRECONDITIONED norm type for convergence test
PC Object:(gcrSchur_) 1 MPI processes
  type: fieldsplit
    FieldSplit with Schur preconditioner, factorization FULL
    Preconditioner for the Schur complement formed from A11
    Split info:
    Split number 0 Defined by IS
    Split number 1 Defined by IS
    KSP solver for A00 block
      KSP Object:      (gcrSchur_fieldsplit_a_00_)       1 MPI processes
        type: preonly
        maximum iterations=10000, initial guess is zero
        tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
        left preconditioning
        using NONE norm type for convergence test
      PC Object:      (gcrSchur_fieldsplit_a_00_)       1 MPI processes
        type: lu
          LU: out-of-place factorization
          tolerance for zero pivot 2.22045e-14
          matrix ordering: nd
          factor fill ratio given 5, needed 1.03902
            Factored matrix follows:
              Mat Object:               1 MPI processes
                type: seqaij
                rows=81, cols=81, bs=3
                package used to perform factorization: petsc
                total: nonzeros=1917, allocated nonzeros=1917
                total number of mallocs used during MatSetValues calls =0
                  using I-node routines: found 27 nodes, limit used is 5
        linear system matrix = precond matrix:
        Mat Object:        (gcrSchur_fieldsplit_a_00_)         1 MPI processes
          type: seqaij
          rows=81, cols=81, bs=3
          total: nonzeros=1845, allocated nonzeros=1845
          total number of mallocs used during MatSetValues calls =0
            not using I-node routines
    KSP solver for S = A11 - A10 inv(A00) A01 
      KSP Object:      (gcrSchur_fieldsplit_schur_)       1 MPI processes
        type: gcr
          GCR: restart = 30 
          GCR: restarts performed = 1 
        maximum iterations=10000, initial guess is zero
        tolerances:  relative=1e-12, absolute=1e-12, divergence=10000
        right preconditioning
        using UNPRECONDITIONED norm type for convergence test
      PC Object:      (gcrSchur_fieldsplit_schur_)       1 MPI processes
        type: jacobi
        linear system matrix followed by preconditioner matrix:
        Mat Object:        (gcrSchur_fieldsplit_schur_)         1 MPI processes
          type: schurcomplement
          rows=8, cols=8
            Schur complement A11 - A10 inv(A00) A01
            A11
              Mat Object:              (gcrSchur_fieldsplit_schur_)               1 MPI processes
                type: seqsbaij
                rows=8, cols=8
                total: nonzeros=8, allocated nonzeros=8
                total number of mallocs used during MatSetValues calls =0
                    block size is 1
            A10
              Mat Object:              (gcrSchur_BlocsHDiag_)               1 MPI processes
                type: seqaij
                rows=8, cols=81
                total: nonzeros=144, allocated nonzeros=144
                total number of mallocs used during MatSetValues calls =0
                  not using I-node routines
            KSP of A00
              KSP Object:              (gcrSchur_fieldsplit_a_00_)               1 MPI processes
                type: preonly
                maximum iterations=10000, initial guess is zero
                tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
                left preconditioning
                using NONE norm type for convergence test
              PC Object:              (gcrSchur_fieldsplit_a_00_)               1 MPI processes
                type: lu
                  LU: out-of-place factorization
                  tolerance for zero pivot 2.22045e-14
                  matrix ordering: nd
                  factor fill ratio given 5, needed 1.03902
                    Factored matrix follows:
                      Mat Object:                       1 MPI processes
                        type: seqaij
                        rows=81, cols=81, bs=3
                        package used to perform factorization: petsc
                        total: nonzeros=1917, allocated nonzeros=1917
                        total number of mallocs used during MatSetValues calls =0
                          using I-node routines: found 27 nodes, limit used is 5
                linear system matrix = precond matrix:
                Mat Object:                (gcrSchur_fieldsplit_a_00_)                 1 MPI processes
                  type: seqaij
                  rows=81, cols=81, bs=3
                  total: nonzeros=1845, allocated nonzeros=1845
                  total number of mallocs used during MatSetValues calls =0
                    not using I-node routines
            A01
              Mat Object:              (gcrSchur_BlocsHDiag_)               1 MPI processes
                type: seqaij
                rows=81, cols=8
                total: nonzeros=144, allocated nonzeros=144
                total number of mallocs used during MatSetValues calls =0
                  not using I-node routines
        Mat Object:        (gcrSchur_fieldsplit_schur_)         1 MPI processes
          type: seqsbaij
          rows=8, cols=8
          total: nonzeros=8, allocated nonzeros=8
          total number of mallocs used during MatSetValues calls =0
              block size is 1
  linear system matrix = precond matrix:
  Mat Object:   1 MPI processes
    type: nest
    rows=89, cols=89
      Matrix object: 
        type=nest, rows=2, cols=2 
        MatNest structure: 
        (0,0) : prefix="gcrSchur_fieldsplit_a_00_", type=seqaij, rows=81, cols=81 
        (0,1) : prefix="gcrSchur_BlocsHDiag_", type=seqaij, rows=81, cols=8 
        (1,0) : prefix="gcrSchur_BlocsHDiag_", type=seqaij, rows=8, cols=81 
        (1,1) : prefix="gcrSchur_fieldsplit_schur_", type=seqsbaij, rows=8, cols=8 
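Since the renamed prefix on the (1,1) block is the point this log illustrates, here is one way to confirm after the solve which prefix ended up on that block and on the Schur split's KSP. Sketch only: ReportPrefixes, ksp and Anest are names we introduce here, and we assume PCFieldSplitGetSubKSP() returns the A00 and Schur KSPs for a Schur fieldsplit.

#include <petscksp.h>

PetscErrorCode ReportPrefixes(KSP ksp,Mat Anest)
{
  PetscErrorCode ierr;
  PC             pc;
  KSP           *subksp;
  PetscInt       nsplit;
  Mat            a11;
  const char    *matprefix,*kspprefix;

  PetscFunctionBegin;
  ierr = MatNestGetSubMat(Anest,1,1,&a11);CHKERRQ(ierr);
  ierr = MatGetOptionsPrefix(a11,&matprefix);CHKERRQ(ierr);
  ierr = KSPGetPC(ksp,&pc);CHKERRQ(ierr);
  ierr = PCFieldSplitGetSubKSP(pc,&nsplit,&subksp);CHKERRQ(ierr);
  ierr = KSPGetOptionsPrefix(subksp[nsplit-1],&kspprefix);CHKERRQ(ierr);
  ierr = PetscPrintf(PETSC_COMM_WORLD,"(1,1) block prefix: %s, Schur KSP prefix: %s\n",
                     matprefix,kspprefix);CHKERRQ(ierr);
  ierr = PetscFree(subksp);CHKERRQ(ierr);   /* array allocated by PCFieldSplitGetSubKSP */
  PetscFunctionReturn(0);
}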
::fin VmSize: 580244 VmRSS: 128424 VmPeak: 647920 VmData: 30148 VmHWM: 192648 WC: 0.007586 SelfUser: 0.005 SelfSys: 0.003 ChildUser: 0 Childsys: 0 </etiq_60>
 KSPSolve nonzeros identical : FALSE
 KSPSolve values identical   : FALSE
[0] PetscCommDuplicate(): Using internal PETSc communicator 139814426697728 68462368
 Relative solve error  : 4.4867e-16 / 3.56045e-16

::fin VmSize: 580244 VmRSS: 128660 VmPeak: 647920 VmData: 30148 VmHWM: 192648 WC: 0.020825 SelfUser: 0.016 SelfSys: 0.005 ChildUser: 0 Childsys: 0 </etiq_57>
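For what it is worth, the relative solve error reported just above can be cross-checked with an explicit residual computation (whether this matches the exact definition used by the application is an assumption). Sketch, with A, x and b the nest matrix, solution and right-hand side:

Vec       r;
PetscReal rnorm,bnorm;

ierr = VecDuplicate(b,&r);CHKERRQ(ierr);
ierr = MatMult(A,x,r);CHKERRQ(ierr);        /* r = A x     */
ierr = VecAYPX(r,-1.0,b);CHKERRQ(ierr);     /* r = b - A x */
ierr = VecNorm(r,NORM_2,&rnorm);CHKERRQ(ierr);
ierr = VecNorm(b,NORM_2,&bnorm);CHKERRQ(ierr);
ierr = PetscPrintf(PETSC_COMM_WORLD,"||b-Ax||/||b|| = %g\n",(double)(rnorm/bnorm));CHKERRQ(ierr);
ierr = VecDestroy(&r);CHKERRQ(ierr);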
chrono::SolveurLinPETSc::CL+VecDDLs:AssembleurGD::debut VmSize: 580244 VmRSS: 128660 VmPeak: 647920 VmData: 30148 VmHWM: 192648 <etiq_61>
::fin VmSize: 580244 VmRSS: 128660 VmPeak: 647920 VmData: 30148 VmHWM: 192648 WC: 0.00024 SelfUser: 0 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_61>
chrono::SolveurLinPETSc::appliqueCorrectionAuProbleme:AssembleurGD::debut VmSize: 580244 VmRSS: 128660 VmPeak: 647920 VmData: 30148 VmHWM: 192648 <etiq_62>
::fin VmSize: 580244 VmRSS: 128720 VmPeak: 647920 VmData: 30148 VmHWM: 192648 WC: 0.009112 SelfUser: 0.008 SelfSys: 0.001 ChildUser: 0 Childsys: 0 </etiq_62>
::fin VmSize: 580244 VmRSS: 128720 VmPeak: 647920 VmData: 30148 VmHWM: 192648 WC: 0.031035 SelfUser: 0.025 SelfSys: 0.006 ChildUser: 0 Childsys: 0 </etiq_56>
chrono::SolveurStatNlinPETSc::PPIteration:AssembleurGD::debut VmSize: 580244 VmRSS: 128720 VmPeak: 647920 VmData: 30148 VmHWM: 192648 <etiq_38>
chrono::GestionPrePostTraitement::executePrePostTrait:GESTIONPREPOSTDEFAUT:FinIterationNlin::************ProblemeGD_aPPExecutePostTraitementIteration************::debut VmSize: 580244 VmRSS: 128720 VmPeak: 647920 VmData: 30148 VmHWM: 192648 <etiq_63>
chrono::PP::************ProblemeGD_aPPExecutePostTraitementIteration************::effectueCalcul::debut VmSize: 580244 VmRSS: 128720 VmPeak: 647920 VmData: 30148 VmHWM: 192648 <etiq_64>
::fin VmSize: 580244 VmRSS: 128720 VmPeak: 647920 VmData: 30148 VmHWM: 192648 WC: 0.000213 SelfUser: 0 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_64>
::fin VmSize: 580244 VmRSS: 128720 VmPeak: 647920 VmData: 30148 VmHWM: 192648 WC: 0.0006 SelfUser: 0.001 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_63>
::fin VmSize: 580244 VmRSS: 128720 VmPeak: 647920 VmData: 30148 VmHWM: 192648 WC: 0.001 SelfUser: 0.001 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_38>
chrono::AnalyseProcessusIteratif::CC:miseAZero: SolveurStatNlinPETSc::BouclePointFixe:AssembleurGD::debut VmSize: 580244 VmRSS: 128788 VmPeak: 647920 VmData: 30148 VmHWM: 192648 <etiq_65>
::fin VmSize: 580244 VmRSS: 128788 VmPeak: 647920 VmData: 30148 VmHWM: 192648 WC: 0.000206 SelfUser: 0 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_65>
SolveurStatNlinPETSc::BouclePointFixe:AssembleurGD iteration #    1,
chrono::AnalyseProcessusIteratif::CC:reqConvAtteinte:SolveurStatNlinPETSc::BouclePointFixe:AssembleurGD::debut VmSize: 580244 VmRSS: 128788 VmPeak: 647920 VmData: 30148 VmHWM: 192648 <etiq_66>
chrono::CC::PreTrait:CCNL2Res:aPPMAJDeplacementNewtonPrecedent::debut VmSize: 580244 VmRSS: 128788 VmPeak: 647920 VmData: 30148 VmHWM: 192648 <etiq_67>
chrono::PP::aPPMAJDeplacementNewtonPrecedent::effectueCalcul::debut VmSize: 580244 VmRSS: 128788 VmPeak: 647920 VmData: 30148 VmHWM: 192648 <etiq_36>
::fin VmSize: 580244 VmRSS: 128788 VmPeak: 647920 VmData: 30148 VmHWM: 192648 WC: 0.000265 SelfUser: 0 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_36>
::fin VmSize: 580244 VmRSS: 128788 VmPeak: 647920 VmData: 30148 VmHWM: 192648 WC: 0.000676 SelfUser: 0.001 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_67>
chrono::CC::PostTrait:CCNInf:aPPMAJNormeInfCorrection::debut VmSize: 580244 VmRSS: 128788 VmPeak: 647920 VmData: 30148 VmHWM: 192648 <etiq_68>
chrono::PP::aPPMAJNormeInfCorrection::effectueCalcul::debut VmSize: 580244 VmRSS: 128788 VmPeak: 647920 VmData: 30148 VmHWM: 192648 <etiq_69>
::fin VmSize: 580244 VmRSS: 128788 VmPeak: 647920 VmData: 30148 VmHWM: 192648 WC: 0.000208 SelfUser: 0 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_69>
::fin VmSize: 580244 VmRSS: 128788 VmPeak: 647920 VmData: 30148 VmHWM: 192648 WC: 0.000598 SelfUser: 0 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_68>
 CCNInf(1)[81]= 0.113041
[0] PetscCommDuplicate(): Using internal PETSc communicator 139814426697728 68462368
chrono::CC::PostTrait:CCNL2CorRel:aPPMAJNormeL2CorrectionRelative::debut VmSize: 580244 VmRSS: 128788 VmPeak: 647920 VmData: 30148 VmHWM: 192648 <etiq_70>
chrono::PP::aPPMAJNormeL2CorrectionRelative::effectueCalcul::debut VmSize: 580244 VmRSS: 128788 VmPeak: 647920 VmData: 30148 VmHWM: 192648 <etiq_71>
::fin VmSize: 580244 VmRSS: 128788 VmPeak: 647920 VmData: 30148 VmHWM: 192648 WC: 0.000241 SelfUser: 0 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_71>
::fin VmSize: 580244 VmRSS: 128788 VmPeak: 647920 VmData: 30148 VmHWM: 192648 WC: 0.000624 SelfUser: 0 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_70>
 CCNL2CorRel(0)= 0.995307
chrono::CC::PostTrait:CCNInfRes:aPPMAJNormeInfResidu::debut VmSize: 580244 VmRSS: 128844 VmPeak: 647920 VmData: 30148 VmHWM: 192648 <etiq_72>
chrono::PP::aPPMAJNormeInfResidu::effectueCalcul::debut VmSize: 580244 VmRSS: 128844 VmPeak: 647920 VmData: 30148 VmHWM: 192648 <etiq_73>
::fin VmSize: 580244 VmRSS: 128844 VmPeak: 647920 VmData: 30148 VmHWM: 192648 WC: 0.000235 SelfUser: 0 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_73>
::fin VmSize: 580244 VmRSS: 128844 VmPeak: 647920 VmData: 30148 VmHWM: 192648 WC: 0.000616 SelfUser: 0 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_72>
 CCNInfRés(0)[50]= 0.123724
chrono::CC::PostTrait:CCNL2Res:aPPMAJNormeL2Residu::debut VmSize: 580244 VmRSS: 128844 VmPeak: 647920 VmData: 30148 VmHWM: 192648 <etiq_74>
chrono::PP::aPPMAJNormeL2Residu::effectueCalcul::debut VmSize: 580244 VmRSS: 128844 VmPeak: 647920 VmData: 30148 VmHWM: 192648 <etiq_75>
::fin VmSize: 580244 VmRSS: 128844 VmPeak: 647920 VmData: 30148 VmHWM: 192648 WC: 0.000195 SelfUser: 0 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_75>
::fin VmSize: 580244 VmRSS: 128844 VmPeak: 647920 VmData: 30148 VmHWM: 192648 WC: 0.000613 SelfUser: 0.001 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_74>
 CCNL2Rés(1)= 0.0223524
::fin VmSize: 580244 VmRSS: 128844 VmPeak: 647920 VmData: 30148 VmHWM: 192648 WC: 0.012743 SelfUser: 0.012 SelfSys: 0.001 ChildUser: 0 Childsys: 0 </etiq_66>

chrono::SolveurStatNlinPETSc::PPIteration:AssembleurGD::debut VmSize: 580244 VmRSS: 128844 VmPeak: 647920 VmData: 30148 VmHWM: 192648 <etiq_38>
chrono::GestionPrePostTraitement::executePrePostTrait:GESTIONPREPOSTDEFAUT:DebutIterationNlin::************ProblemeGD_aPPExecutePreTraitementIteration************::debut VmSize: 580244 VmRSS: 128844 VmPeak: 647920 VmData: 30148 VmHWM: 192648 <etiq_39>
chrono::PP::************ProblemeGD_aPPExecutePreTraitementIteration************::effectueCalcul::debut VmSize: 580244 VmRSS: 128844 VmPeak: 647920 VmData: 30148 VmHWM: 192648 <etiq_40>
chrono::PP::aPPMAJNoIterationNewton::effectueCalcul::debut VmSize: 580244 VmRSS: 128844 VmPeak: 647920 VmData: 30148 VmHWM: 192648 <etiq_41>
::fin VmSize: 580244 VmRSS: 128844 VmPeak: 647920 VmData: 30148 VmHWM: 192648 WC: 0.000191 SelfUser: 0 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_41>
chrono::PP::aPPMAJNoIterationCumule::effectueCalcul::debut VmSize: 580244 VmRSS: 128844 VmPeak: 647920 VmData: 30148 VmHWM: 192648 <etiq_42>
chrono::PP::PostTrait:aPPMAJNoIterationCumule:aPPMAJNoIterationCumuleAuxVoisins::debut VmSize: 580244 VmRSS: 128844 VmPeak: 647920 VmData: 30148 VmHWM: 192648 <etiq_43>
chrono::PP::aPPMAJNoIterationCumuleAuxVoisins::effectueCalcul::debut VmSize: 580244 VmRSS: 128844 VmPeak: 647920 VmData: 30148 VmHWM: 192648 <etiq_44>
::fin VmSize: 580244 VmRSS: 128844 VmPeak: 647920 VmData: 30148 VmHWM: 192648 WC: 0.000223 SelfUser: 0 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_44>
::fin VmSize: 580244 VmRSS: 128844 VmPeak: 647920 VmData: 30148 VmHWM: 192648 WC: 0.000609 SelfUser: 0.001 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_43>
::fin VmSize: 580244 VmRSS: 128844 VmPeak: 647920 VmData: 30148 VmHWM: 192648 WC: 0.00098 SelfUser: 0.001 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_42>
::fin VmSize: 580244 VmRSS: 128844 VmPeak: 647920 VmData: 30148 VmHWM: 192648 WC: 0.001753 SelfUser: 0.001 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_40>
::fin VmSize: 580244 VmRSS: 128844 VmPeak: 647920 VmData: 30148 VmHWM: 192648 WC: 0.002121 SelfUser: 0.001 SelfSys: 0.001 ChildUser: 0 Childsys: 0 </etiq_39>
::fin VmSize: 580244 VmRSS: 128844 VmPeak: 647920 VmData: 30148 VmHWM: 192648 WC: 0.002494 SelfUser: 0.001 SelfSys: 0.002 ChildUser: 0 Childsys: 0 </etiq_38>
chrono::SolveurStatNlinPETSc::faisAssemblage:AssembleurGD::debut VmSize: 580244 VmRSS: 128844 VmPeak: 647920 VmData: 30148 VmHWM: 192648 <etiq_45>
 Construction (creeMatriceEtAssigneProfile) of the matrix with prefix: gcrSchur_
 Matrix type                 : nest
 New matrix                  : FALSE
 Problem name                : AssembleurGD
 Solver library              : mumps
chrono::SolveurLinPETSc::faisAssemblagePrive::appliqueChamp:AssembleurGD::debut VmSize: 580244 VmRSS: 128844 VmPeak: 647920 VmData: 30148 VmHWM: 192648 <etiq_46>
::fin VmSize: 580244 VmRSS: 128844 VmPeak: 647920 VmData: 30148 VmHWM: 192648 WC: 0.000347 SelfUser: 0.001 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_46>
chrono::SolveurLinPETSc::faisAssemblagePrive::debutAssemblage:AssembleurGD::debut VmSize: 580244 VmRSS: 128844 VmPeak: 647920 VmData: 30148 VmHWM: 192648 <etiq_47>
::fin VmSize: 580244 VmRSS: 128844 VmPeak: 647920 VmData: 30148 VmHWM: 192648 WC: 0.000363 SelfUser: 0 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_47>
chrono::SolveurLinPETSc::faisAssemblagePrive::assembleMatriceEtResidu:AssembleurGD::debut VmSize: 580244 VmRSS: 128844 VmPeak: 647920 VmData: 30148 VmHWM: 192648 <etiq_48>
chrono::ProblemeEF::assemblePriveDomaine:AssembleurGD::debut VmSize: 580244 VmRSS: 128844 VmPeak: 647920 VmData: 30148 VmHWM: 192648 <etiq_49>
chrono::ProblemeEF::assemblePrive:Mat:Vec:AssembleurGD::debut VmSize: 580244 VmRSS: 128848 VmPeak: 647920 VmData: 30148 VmHWM: 192648 <etiq_50>
::fin VmSize: 580396 VmRSS: 128872 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 0.04273 SelfUser: 0.041 SelfSys: 0.001 ChildUser: 0 Childsys: 0 </etiq_50>
::fin VmSize: 580396 VmRSS: 128872 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 0.044193 SelfUser: 0.043 SelfSys: 0.001 ChildUser: 0 Childsys: 0 </etiq_49>
chrono::ProblemeEF::assemblePrivePeau:AssembleurGD::debut VmSize: 580396 VmRSS: 128872 VmPeak: 647920 VmData: 30300 VmHWM: 192648 <etiq_51>
::fin VmSize: 580396 VmRSS: 128872 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 0.000187 SelfUser: 0 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_51>
::fin VmSize: 580396 VmRSS: 128872 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 0.048016 SelfUser: 0.046 SelfSys: 0.002 ChildUser: 0 Childsys: 0 </etiq_48>
chrono::SolveurLinPETSc::faisAssemblagePrive::appliqueMatrice:AssembleurGD::debut VmSize: 580396 VmRSS: 128872 VmPeak: 647920 VmData: 30300 VmHWM: 192648 <etiq_52>
chrono::MatricePETSc::mettreAZeroLignes::debut VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 <etiq_53>
[0] MatAssemblyEnd_SeqAIJ(): Matrix size: 81 X 81; storage space: 0 unneeded,1845 used
[0] MatAssemblyEnd_SeqAIJ(): Number of mallocs during MatSetValues() is 0
[0] MatAssemblyEnd_SeqAIJ(): Maximum nonzeros in any row is 54
[0] MatCheckCompressedRow(): Found the ratio (num_zerorows 0)/(num_localrows 81) < 0.6. Do not use CompressedRow routines.
[0] MatAssemblyEnd_SeqAIJ(): Matrix size: 81 X 81; storage space: 0 unneeded,1845 used
[0] MatAssemblyEnd_SeqAIJ(): Number of mallocs during MatSetValues() is 0
[0] MatAssemblyEnd_SeqAIJ(): Maximum nonzeros in any row is 54
[0] MatCheckCompressedRow(): Found the ratio (num_zerorows 0)/(num_localrows 81) < 0.6. Do not use CompressedRow routines.
::fin VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 0.000976 SelfUser: 0.001 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_53>
chrono::MatricePETSc::mettreAZeroLignes::debut VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 <etiq_53>
[0] MatAssemblyEnd_SeqAIJ(): Matrix size: 81 X 8; storage space: 0 unneeded,144 used
[0] MatAssemblyEnd_SeqAIJ(): Number of mallocs during MatSetValues() is 0
[0] MatAssemblyEnd_SeqAIJ(): Maximum nonzeros in any row is 8
[0] MatCheckCompressedRow(): Found the ratio (num_zerorows 27)/(num_localrows 81) < 0.6. Do not use CompressedRow routines.
[0] MatAssemblyEnd_SeqAIJ(): Matrix size: 81 X 8; storage space: 0 unneeded,144 used
[0] MatAssemblyEnd_SeqAIJ(): Number of mallocs during MatSetValues() is 0
[0] MatAssemblyEnd_SeqAIJ(): Maximum nonzeros in any row is 8
[0] MatCheckCompressedRow(): Found the ratio (num_zerorows 27)/(num_localrows 81) < 0.6. Do not use CompressedRow routines.
::fin VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 0.000859 SelfUser: 0.001 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_53>
[0] MatAssemblyEnd_SeqAIJ(): Matrix size: 8 X 81; storage space: 0 unneeded,144 used
[0] MatAssemblyEnd_SeqAIJ(): Number of mallocs during MatSetValues() is 0
[0] MatAssemblyEnd_SeqAIJ(): Maximum nonzeros in any row is 24
[0] MatCheckCompressedRow(): Found the ratio (num_zerorows 0)/(num_localrows 8) < 0.6. Do not use CompressedRow routines.
[0] MatAssemblyEnd_SeqSBAIJ(): Matrix size: 8 X 8, block size 1; storage space: 0 unneeded, 8 used
[0] MatAssemblyEnd_SeqSBAIJ(): Number of mallocs during MatSetValues is 0
[0] MatAssemblyEnd_SeqSBAIJ(): Most nonzeros blocks in any row is 1
[0] MatAssemblyEnd_SeqAIJ(): Matrix size: 81 X 81; storage space: 0 unneeded,1845 used
[0] MatAssemblyEnd_SeqAIJ(): Number of mallocs during MatSetValues() is 0
[0] MatAssemblyEnd_SeqAIJ(): Maximum nonzeros in any row is 54
[0] MatCheckCompressedRow(): Found the ratio (num_zerorows 0)/(num_localrows 81) < 0.6. Do not use CompressedRow routines.
[0] MatAssemblyEnd_SeqAIJ(): Matrix size: 81 X 8; storage space: 0 unneeded,144 used
[0] MatAssemblyEnd_SeqAIJ(): Number of mallocs during MatSetValues() is 0
[0] MatAssemblyEnd_SeqAIJ(): Maximum nonzeros in any row is 8
[0] MatCheckCompressedRow(): Found the ratio (num_zerorows 27)/(num_localrows 81) < 0.6. Do not use CompressedRow routines.
[0] MatAssemblyEnd_SeqAIJ(): Matrix size: 8 X 81; storage space: 0 unneeded,144 used
[0] MatAssemblyEnd_SeqAIJ(): Number of mallocs during MatSetValues() is 0
[0] MatAssemblyEnd_SeqAIJ(): Maximum nonzeros in any row is 24
[0] MatCheckCompressedRow(): Found the ratio (num_zerorows 0)/(num_localrows 8) < 0.6. Do not use CompressedRow routines.
[0] MatAssemblyEnd_SeqSBAIJ(): Matrix size: 8 X 8, block size 1; storage space: 0 unneeded, 8 used
[0] MatAssemblyEnd_SeqSBAIJ(): Number of mallocs during MatSetValues is 0
[0] MatAssemblyEnd_SeqSBAIJ(): Most nonzeros blocks in any row is 1
::fin VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 0.027587 SelfUser: 0.027 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_52>
chrono::SolveurLinPETSc::faisAssemblagePrive::appliqueResidu:AssembleurGD::debut VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 <etiq_54>
[0] VecAssemblyBegin_MPI(): Stash has 0 entries, uses 0 mallocs.
[0] VecAssemblyBegin_MPI(): Block-Stash has 0 entries, uses 0 mallocs.
::fin VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 0.009115 SelfUser: 0.009 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_54>
chrono::SolveurLinPETSc::faisAssemblagePrive::finAssemblageAssembleurGD::debut VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 <etiq_55>
[0] VecAssemblyBegin_MPI(): Stash has 0 entries, uses 0 mallocs.
[0] VecAssemblyBegin_MPI(): Block-Stash has 0 entries, uses 0 mallocs.
::fin VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 0.000286 SelfUser: 0.001 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_55>
::fin VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 0.087135 SelfUser: 0.085 SelfSys: 0.002 ChildUser: 0 Childsys: 0 </etiq_45>
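The MatAssemblyEnd_SeqAIJ()/MatAssemblyEnd_SeqSBAIJ() lines in the assembly block above all report 0 unneeded entries and 0 mallocs during MatSetValues(), i.e. the blocks are exactly preallocated. As a generic, self-contained illustration (a throwaway 8x8 matrix, not the application's assembly code), this is the pattern that produces such -info messages:

Mat         B;
PetscInt    i,col[2];
PetscScalar v[2];

/* exact preallocation (2 nonzeros per row) -> MatAssemblyEnd reports
   "storage space: 0 unneeded" and "Number of mallocs ... is 0" under -info */
ierr = MatCreateSeqAIJ(PETSC_COMM_SELF,8,8,2,NULL,&B);CHKERRQ(ierr);
for (i = 0; i < 8; i++) {
  col[0] = i; col[1] = (i+1)%8; v[0] = 2.0; v[1] = -1.0;
  ierr = MatSetValues(B,1,&i,2,col,v,INSERT_VALUES);CHKERRQ(ierr);
}
ierr = MatAssemblyBegin(B,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
ierr = MatAssemblyEnd(B,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
ierr = MatDestroy(&B);CHKERRQ(ierr);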
chrono::SolveurStatNlinPETSc::resoudre_et_RechercheLineaire:AssembleurGD::debut VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 <etiq_56>
chrono::SolveurLinPETSc::resoudre_Factorisation_et_DR:AssembleurGD::debut VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 <etiq_57>
chrono::asgnOperateurKSP::debut VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 <etiq_58>
::fin VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 0.000226 SelfUser: 0 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_58>
chrono::SolveurLinPETSc::resoudre:KSPSolve:AssembleurGD::debut VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 <etiq_60>
[0] PetscCommDuplicate(): Using internal PETSc communicator 139814426695680 76804080
KSP Object:(gcrSchur_) 1 MPI processes
  type: gcr
    GCR: restart = 30 
    GCR: restarts performed = 1 
  maximum iterations=100, initial guess is zero
  tolerances:  relative=1e-11, absolute=1e-11, divergence=10000
  right preconditioning
  using UNPRECONDITIONED norm type for convergence test
PC Object:(gcrSchur_) 1 MPI processes
  type: fieldsplit
    FieldSplit with Schur preconditioner, factorization FULL
    Preconditioner for the Schur complement formed from A11
    Split info:
    Split number 0 Defined by IS
    Split number 1 Defined by IS
    KSP solver for A00 block
      KSP Object:      (gcrSchur_fieldsplit_a_00_)       1 MPI processes
        type: preonly
        maximum iterations=10000, initial guess is zero
        tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
        left preconditioning
        using NONE norm type for convergence test
      PC Object:      (gcrSchur_fieldsplit_a_00_)       1 MPI processes
        type: lu
          LU: out-of-place factorization
          tolerance for zero pivot 2.22045e-14
          matrix ordering: nd
          factor fill ratio given 5, needed 1.03902
            Factored matrix follows:
              Mat Object:               1 MPI processes
                type: seqaij
                rows=81, cols=81, bs=3
                package used to perform factorization: petsc
                total: nonzeros=1917, allocated nonzeros=1917
                total number of mallocs used during MatSetValues calls =0
                  using I-node routines: found 27 nodes, limit used is 5
        linear system matrix = precond matrix:
        Mat Object:        (gcrSchur_fieldsplit_a_00_)         1 MPI processes
          type: seqaij
          rows=81, cols=81, bs=3
          total: nonzeros=1845, allocated nonzeros=1845
          total number of mallocs used during MatSetValues calls =0
            not using I-node routines
    KSP solver for S = A11 - A10 inv(A00) A01 
      KSP Object:      (gcrSchur_fieldsplit_schur_)       1 MPI processes
        type: gcr
          GCR: restart = 30 
          GCR: restarts performed = 1 
        maximum iterations=10000, initial guess is zero
        tolerances:  relative=1e-12, absolute=1e-12, divergence=10000
        right preconditioning
        using UNPRECONDITIONED norm type for convergence test
      PC Object:      (gcrSchur_fieldsplit_schur_)       1 MPI processes
        type: jacobi
        linear system matrix followed by preconditioner matrix:
        Mat Object:        (gcrSchur_fieldsplit_schur_)         1 MPI processes
          type: schurcomplement
          rows=8, cols=8
            Schur complement A11 - A10 inv(A00) A01
            A11
              Mat Object:              (gcrSchur_fieldsplit_schur_)               1 MPI processes
                type: seqsbaij
                rows=8, cols=8
                total: nonzeros=8, allocated nonzeros=8
                total number of mallocs used during MatSetValues calls =0
                    block size is 1
            A10
              Mat Object:              (gcrSchur_BlocsHDiag_)               1 MPI processes
                type: seqaij
                rows=8, cols=81
                total: nonzeros=144, allocated nonzeros=144
                total number of mallocs used during MatSetValues calls =0
                  not using I-node routines
            KSP of A00
              KSP Object:              (gcrSchur_fieldsplit_a_00_)               1 MPI processes
                type: preonly
                maximum iterations=10000, initial guess is zero
                tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
                left preconditioning
                using NONE norm type for convergence test
              PC Object:              (gcrSchur_fieldsplit_a_00_)               1 MPI processes
                type: lu
                  LU: out-of-place factorization
                  tolerance for zero pivot 2.22045e-14
                  matrix ordering: nd
                  factor fill ratio given 5, needed 1.03902
                    Factored matrix follows:
                      Mat Object:                       1 MPI processes
                        type: seqaij
                        rows=81, cols=81, bs=3
                        package used to perform factorization: petsc
                        total: nonzeros=1917, allocated nonzeros=1917
                        total number of mallocs used during MatSetValues calls =0
                          using I-node routines: found 27 nodes, limit used is 5
                linear system matrix = precond matrix:
                Mat Object:                (gcrSchur_fieldsplit_a_00_)                 1 MPI processes
                  type: seqaij
                  rows=81, cols=81, bs=3
                  total: nonzeros=1845, allocated nonzeros=1845
                  total number of mallocs used during MatSetValues calls =0
                    not using I-node routines
            A01
              Mat Object:              (gcrSchur_BlocsHDiag_)               1 MPI processes
                type: seqaij
                rows=81, cols=8
                total: nonzeros=144, allocated nonzeros=144
                total number of mallocs used during MatSetValues calls =0
                  not using I-node routines
        Mat Object:        (gcrSchur_fieldsplit_schur_)         1 MPI processes
          type: seqsbaij
          rows=8, cols=8
          total: nonzeros=8, allocated nonzeros=8
          total number of mallocs used during MatSetValues calls =0
              block size is 1
  linear system matrix = precond matrix:
  Mat Object:   1 MPI processes
    type: nest
    rows=89, cols=89
      Matrix object: 
        type=nest, rows=2, cols=2 
        MatNest structure: 
        (0,0) : prefix="gcrSchur_fieldsplit_a_00_", type=seqaij, rows=81, cols=81 
        (0,1) : prefix="gcrSchur_BlocsHDiag_", type=seqaij, rows=81, cols=8 
        (1,0) : prefix="gcrSchur_BlocsHDiag_", type=seqaij, rows=8, cols=81 
        (1,1) : prefix="gcrSchur_fieldsplit_schur_", type=seqsbaij, rows=8, cols=8 

 KSP configuration (ecritInfoKSP) :
 Solver name           : mon_solvlin
 Solver library        : petsc
 Solver type           : gcr (gcrSchur_)
 Preconditioner type   : fieldsplit (gcrSchur_)
 Matrix type           : nest
 MatNonzeroState       : 0
 Matrix type           : nest
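The MatNonzeroState value in this summary presumably comes from MatGetNonzeroState(); as far as we understand PETSc 3.5, that state is also what PCSetUp() consults when it prints "Setting up PC with same nonzero pattern", as in the -info line just below. A hedged sketch of such a check (matNest and lastState are names we introduce):

PetscObjectState state;

ierr = MatGetNonzeroState(matNest,&state);CHKERRQ(ierr);
if (state == lastState) {
  /* nonzero pattern unchanged: the preconditioner setup can be reused */
}
lastState = state;
ierr = PetscPrintf(PETSC_COMM_WORLD,"MatNonzeroState: %D\n",(PetscInt)state);CHKERRQ(ierr);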
[0] PCSetUp(): Setting up PC with same nonzero pattern
  Residual norms for gcrSchur_ solve.
  0 KSP Residual norm 4.221202844964e-03 
[0] PCSetUp(): Leaving PC with identical preconditioner since operator is unchanged
[0] PCSetUp(): Setting up PC with same nonzero pattern
[0] PCSetUp(): Leaving PC with identical preconditioner since operator is unchanged
[0] PCSetUp(): Setting up PC with same nonzero pattern
[0] PCSetUp(): Leaving PC with identical preconditioner since operator is unchanged
    Residual norms for gcrSchur_fieldsplit_schur_ solve.
    0 KSP Residual norm 1.111806403591e-04 
[0] PCSetUp(): Leaving PC with identical preconditioner since operator is unchanged
[0] PCSetUp(): Leaving PC with identical preconditioner since operator is unchanged
    1 KSP Residual norm 8.504395824449e-06 
[0] PCSetUp(): Leaving PC with identical preconditioner since operator is unchanged
[0] PCSetUp(): Leaving PC with identical preconditioner since operator is unchanged
    2 KSP Residual norm 1.699170180987e-20 
[0] KSPConvergedDefault(): Linear solver has converged. Residual norm 1.699170180987e-20 is less than absolute tolerance 1.000000000000e-12 at iteration 2
[0] PCSetUp(): Leaving PC with identical preconditioner since operator is unchanged
  1 KSP Residual norm 1.003584996519e-18 
[0] KSPConvergedDefault(): Linear solver has converged. Residual norm 1.003584996519e-18 is less than absolute tolerance 1.000000000000e-11 at iteration 1
KSP Object:(gcrSchur_) 1 MPI processes
  type: gcr
    GCR: restart = 30 
    GCR: restarts performed = 2 
  maximum iterations=100, initial guess is zero
  tolerances:  relative=1e-11, absolute=1e-11, divergence=10000
  right preconditioning
  using UNPRECONDITIONED norm type for convergence test
PC Object:(gcrSchur_) 1 MPI processes
  type: fieldsplit
    FieldSplit with Schur preconditioner, factorization FULL
    Preconditioner for the Schur complement formed from A11
    Split info:
    Split number 0 Defined by IS
    Split number 1 Defined by IS
    KSP solver for A00 block
      KSP Object:      (gcrSchur_fieldsplit_a_00_)       1 MPI processes
        type: preonly
        maximum iterations=10000, initial guess is zero
        tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
        left preconditioning
        using NONE norm type for convergence test
      PC Object:      (gcrSchur_fieldsplit_a_00_)       1 MPI processes
        type: lu
          LU: out-of-place factorization
          tolerance for zero pivot 2.22045e-14
          matrix ordering: nd
          factor fill ratio given 5, needed 1.03902
            Factored matrix follows:
              Mat Object:               1 MPI processes
                type: seqaij
                rows=81, cols=81, bs=3
                package used to perform factorization: petsc
                total: nonzeros=1917, allocated nonzeros=1917
                total number of mallocs used during MatSetValues calls =0
                  using I-node routines: found 27 nodes, limit used is 5
        linear system matrix = precond matrix:
        Mat Object:        (gcrSchur_fieldsplit_a_00_)         1 MPI processes
          type: seqaij
          rows=81, cols=81, bs=3
          total: nonzeros=1845, allocated nonzeros=1845
          total number of mallocs used during MatSetValues calls =0
            not using I-node routines
    KSP solver for S = A11 - A10 inv(A00) A01 
      KSP Object:      (gcrSchur_fieldsplit_schur_)       1 MPI processes
        type: gcr
          GCR: restart = 30 
          GCR: restarts performed = 2 
        maximum iterations=10000, initial guess is zero
        tolerances:  relative=1e-12, absolute=1e-12, divergence=10000
        right preconditioning
        using UNPRECONDITIONED norm type for convergence test
      PC Object:      (gcrSchur_fieldsplit_schur_)       1 MPI processes
        type: jacobi
        linear system matrix followed by preconditioner matrix:
        Mat Object:        (gcrSchur_fieldsplit_schur_)         1 MPI processes
          type: schurcomplement
          rows=8, cols=8
            Schur complement A11 - A10 inv(A00) A01
            A11
              Mat Object:              (gcrSchur_fieldsplit_schur_)               1 MPI processes
                type: seqsbaij
                rows=8, cols=8
                total: nonzeros=8, allocated nonzeros=8
                total number of mallocs used during MatSetValues calls =0
                    block size is 1
            A10
              Mat Object:              (gcrSchur_BlocsHDiag_)               1 MPI processes
                type: seqaij
                rows=8, cols=81
                total: nonzeros=144, allocated nonzeros=144
                total number of mallocs used during MatSetValues calls =0
                  not using I-node routines
            KSP of A00
              KSP Object:              (gcrSchur_fieldsplit_a_00_)               1 MPI processes
                type: preonly
                maximum iterations=10000, initial guess is zero
                tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
                left preconditioning
                using NONE norm type for convergence test
              PC Object:              (gcrSchur_fieldsplit_a_00_)               1 MPI processes
                type: lu
                  LU: out-of-place factorization
                  tolerance for zero pivot 2.22045e-14
                  matrix ordering: nd
                  factor fill ratio given 5, needed 1.03902
                    Factored matrix follows:
                      Mat Object:                       1 MPI processes
                        type: seqaij
                        rows=81, cols=81, bs=3
                        package used to perform factorization: petsc
                        total: nonzeros=1917, allocated nonzeros=1917
                        total number of mallocs used during MatSetValues calls =0
                          using I-node routines: found 27 nodes, limit used is 5
                linear system matrix = precond matrix:
                Mat Object:                (gcrSchur_fieldsplit_a_00_)                 1 MPI processes
                  type: seqaij
                  rows=81, cols=81, bs=3
                  total: nonzeros=1845, allocated nonzeros=1845
                  total number of mallocs used during MatSetValues calls =0
                    not using I-node routines
            A01
              Mat Object:              (gcrSchur_BlocsHDiag_)               1 MPI processes
                type: seqaij
                rows=81, cols=8
                total: nonzeros=144, allocated nonzeros=144
                total number of mallocs used during MatSetValues calls =0
                  not using I-node routines
        Mat Object:        (gcrSchur_fieldsplit_schur_)         1 MPI processes
          type: seqsbaij
          rows=8, cols=8
          total: nonzeros=8, allocated nonzeros=8
          total number of mallocs used during MatSetValues calls =0
              block size is 1
  linear system matrix = precond matrix:
  Mat Object:   1 MPI processes
    type: nest
    rows=89, cols=89
      Matrix object: 
        type=nest, rows=2, cols=2 
        MatNest structure: 
        (0,0) : prefix="gcrSchur_fieldsplit_a_00_", type=seqaij, rows=81, cols=81 
        (0,1) : prefix="gcrSchur_BlocsHDiag_", type=seqaij, rows=81, cols=8 
        (1,0) : prefix="gcrSchur_BlocsHDiag_", type=seqaij, rows=8, cols=81 
        (1,1) : prefix="gcrSchur_fieldsplit_schur_", type=seqsbaij, rows=8, cols=8 
::fin VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 0.00749 SelfUser: 0.005 SelfSys: 0.003 ChildUser: 0 Childsys: 0 </etiq_60>
 KSPSolve nonzeros identical : TRUE
 KSPSolve values identical   : FALSE
[0] PetscCommDuplicate(): Using internal PETSc communicator 139814426697728 68462368
 Relative solve error  : 3.06564e-16 / 2.66726e-16

::fin VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 0.00933 SelfUser: 0.007 SelfSys: 0.003 ChildUser: 0 Childsys: 0 </etiq_57>
chrono::SolveurLinPETSc::CL+VecDDLs:AssembleurGD::debut VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 <etiq_61>
::fin VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 0.000186 SelfUser: 0 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_61>
chrono::SolveurLinPETSc::appliqueCorrectionAuProbleme:AssembleurGD::debut VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 <etiq_62>
::fin VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 0.008276 SelfUser: 0.008 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_62>
::fin VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 0.018656 SelfUser: 0.016 SelfSys: 0.003 ChildUser: 0 Childsys: 0 </etiq_56>
chrono::SolveurStatNlinPETSc::PPIteration:AssembleurGD::debut VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 <etiq_38>
chrono::GestionPrePostTraitement::executePrePostTrait:GESTIONPREPOSTDEFAUT:FinIterationNlin::************ProblemeGD_aPPExecutePostTraitementIteration************::debut VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 <etiq_63>
chrono::PP::************ProblemeGD_aPPExecutePostTraitementIteration************::effectueCalcul::debut VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 <etiq_64>
::fin VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 0.000183 SelfUser: 0 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_64>
::fin VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 0.000562 SelfUser: 0.001 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_63>
::fin VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 0.000933 SelfUser: 0.001 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_38>
SolveurStatNlinPETSc::BouclePointFixe:AssembleurGD iteration #    2,
chrono::AnalyseProcessusIteratif::CC:reqConvAtteinte:SolveurStatNlinPETSc::BouclePointFixe:AssembleurGD::debut VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 <etiq_66>
chrono::CC::PreTrait:CCNL2Res:aPPMAJDeplacementNewtonPrecedent::debut VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 <etiq_67>
chrono::PP::aPPMAJDeplacementNewtonPrecedent::effectueCalcul::debut VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 <etiq_36>
::fin VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 0.000251 SelfUser: 0 SelfSys: 0.001 ChildUser: 0 Childsys: 0 </etiq_36>
::fin VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 0.00062 SelfUser: 0 SelfSys: 0.001 ChildUser: 0 Childsys: 0 </etiq_67>
chrono::CC::PostTrait:CCNInf:aPPMAJNormeInfCorrection::debut VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 <etiq_68>
chrono::PP::aPPMAJNormeInfCorrection::effectueCalcul::debut VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 <etiq_69>
::fin VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 0.000186 SelfUser: 0 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_69>
::fin VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 0.000561 SelfUser: 0 SelfSys: 0.001 ChildUser: 0 Childsys: 0 </etiq_68>
 CCNInf(1)[81]= 0.00932268
chrono::CC::PostTrait:CCNL2CorRel:aPPMAJNormeL2CorrectionRelative::debut VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 <etiq_70>
chrono::PP::aPPMAJNormeL2CorrectionRelative::effectueCalcul::debut VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 <etiq_71>
::fin VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 0.000187 SelfUser: 0 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_71>
::fin VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 0.000559 SelfUser: 0 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_70>
 CCNL2CorRel(0)= 0.0632437
chrono::CC::PostTrait:CCNInfRes:aPPMAJNormeInfResidu::debut VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 <etiq_72>
chrono::PP::aPPMAJNormeInfResidu::effectueCalcul::debut VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 <etiq_73>
::fin VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 0.000185 SelfUser: 0 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_73>
::fin VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 0.000554 SelfUser: 0 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_72>
 CCNInfRés(0)[21]= 0.00141465
chrono::CC::PostTrait:CCNL2Res:aPPMAJNormeL2Residu::debut VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 <etiq_74>
chrono::PP::aPPMAJNormeL2Residu::effectueCalcul::debut VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 <etiq_75>
::fin VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 0.000185 SelfUser: 0 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_75>
::fin VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 0.000554 SelfUser: 0.001 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_74>
 CCNL2Rés(1)= 0.000447447
::fin VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 0.012174 SelfUser: 0.01 SelfSys: 0.002 ChildUser: 0 Childsys: 0 </etiq_66>

chrono::SolveurStatNlinPETSc::PPIteration:AssembleurGD::debut VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 <etiq_38>
chrono::GestionPrePostTraitement::executePrePostTrait:GESTIONPREPOSTDEFAUT:DebutIterationNlin::************ProblemeGD_aPPExecutePreTraitementIteration************::debut VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 <etiq_39>
chrono::PP::************ProblemeGD_aPPExecutePreTraitementIteration************::effectueCalcul::debut VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 <etiq_40>
chrono::PP::aPPMAJNoIterationNewton::effectueCalcul::debut VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 <etiq_41>
::fin VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 0.000187 SelfUser: 0 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_41>
chrono::PP::aPPMAJNoIterationCumule::effectueCalcul::debut VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 <etiq_42>
chrono::PP::PostTrait:aPPMAJNoIterationCumule:aPPMAJNoIterationCumuleAuxVoisins::debut VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 <etiq_43>
chrono::PP::aPPMAJNoIterationCumuleAuxVoisins::effectueCalcul::debut VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 <etiq_44>
::fin VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 0.000199 SelfUser: 0 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_44>
::fin VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 0.000581 SelfUser: 0 SelfSys: 0.001 ChildUser: 0 Childsys: 0 </etiq_43>
::fin VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 0.000948 SelfUser: 0 SelfSys: 0.001 ChildUser: 0 Childsys: 0 </etiq_42>
::fin VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 0.001696 SelfUser: 0 SelfSys: 0.001 ChildUser: 0 Childsys: 0 </etiq_40>
::fin VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 0.002146 SelfUser: 0.001 SelfSys: 0.002 ChildUser: 0 Childsys: 0 </etiq_39>
::fin VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 0.002518 SelfUser: 0.001 SelfSys: 0.002 ChildUser: 0 Childsys: 0 </etiq_38>
chrono::SolveurStatNlinPETSc::faisAssemblage:AssembleurGD::debut VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 <etiq_45>
 Construction (creeMatriceEtAssigneProfile) of the matrix with prefix: gcrSchur_
 Matrix type                 : nest
 New matrix                  : FALSE
 Problem name                : AssembleurGD
 Solver library              : mumps
chrono::SolveurLinPETSc::faisAssemblagePrive::appliqueChamp:AssembleurGD::debut VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 <etiq_46>
::fin VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 0.000312 SelfUser: 0 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_46>
chrono::SolveurLinPETSc::faisAssemblagePrive::debutAssemblage:AssembleurGD::debut VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 <etiq_47>
::fin VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 0.000307 SelfUser: 0 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_47>
chrono::SolveurLinPETSc::faisAssemblagePrive::assembleMatriceEtResidu:AssembleurGD::debut VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 <etiq_48>
chrono::ProblemeEF::assemblePriveDomaine:AssembleurGD::debut VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 <etiq_49>
chrono::ProblemeEF::assemblePrive:Mat:Vec:AssembleurGD::debut VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 <etiq_50>
::fin VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 0.041738 SelfUser: 0.042 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_50>
::fin VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 0.042774 SelfUser: 0.043 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_49>
chrono::ProblemeEF::assemblePrivePeau:AssembleurGD::debut VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 <etiq_51>
::fin VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 0.000183 SelfUser: 0 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_51>
::fin VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 0.046532 SelfUser: 0.047 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_48>
chrono::SolveurLinPETSc::faisAssemblagePrive::appliqueMatrice:AssembleurGD::debut VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 <etiq_52>
chrono::MatricePETSc::mettreAZeroLignes::debut VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 <etiq_53>
[0] MatAssemblyEnd_SeqAIJ(): Matrix size: 81 X 81; storage space: 0 unneeded,1845 used
[0] MatAssemblyEnd_SeqAIJ(): Number of mallocs during MatSetValues() is 0
[0] MatAssemblyEnd_SeqAIJ(): Maximum nonzeros in any row is 54
[0] MatCheckCompressedRow(): Found the ratio (num_zerorows 0)/(num_localrows 81) < 0.6. Do not use CompressedRow routines.
[0] MatAssemblyEnd_SeqAIJ(): Matrix size: 81 X 81; storage space: 0 unneeded,1845 used
[0] MatAssemblyEnd_SeqAIJ(): Number of mallocs during MatSetValues() is 0
[0] MatAssemblyEnd_SeqAIJ(): Maximum nonzeros in any row is 54
[0] MatCheckCompressedRow(): Found the ratio (num_zerorows 0)/(num_localrows 81) < 0.6. Do not use CompressedRow routines.
::fin VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 0.000937 SelfUser: 0 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_53>
chrono::MatricePETSc::mettreAZeroLignes::debut VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 <etiq_53>
[0] MatAssemblyEnd_SeqAIJ(): Matrix size: 81 X 8; storage space: 0 unneeded,144 used
[0] MatAssemblyEnd_SeqAIJ(): Number of mallocs during MatSetValues() is 0
[0] MatAssemblyEnd_SeqAIJ(): Maximum nonzeros in any row is 8
[0] MatCheckCompressedRow(): Found the ratio (num_zerorows 27)/(num_localrows 81) < 0.6. Do not use CompressedRow routines.
[0] MatAssemblyEnd_SeqAIJ(): Matrix size: 81 X 8; storage space: 0 unneeded,144 used
[0] MatAssemblyEnd_SeqAIJ(): Number of mallocs during MatSetValues() is 0
[0] MatAssemblyEnd_SeqAIJ(): Maximum nonzeros in any row is 8
[0] MatCheckCompressedRow(): Found the ratio (num_zerorows 27)/(num_localrows 81) < 0.6. Do not use CompressedRow routines.
::fin VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 0.000857 SelfUser: 0.001 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_53>
[0] MatAssemblyEnd_SeqAIJ(): Matrix size: 8 X 81; storage space: 0 unneeded,144 used
[0] MatAssemblyEnd_SeqAIJ(): Number of mallocs during MatSetValues() is 0
[0] MatAssemblyEnd_SeqAIJ(): Maximum nonzeros in any row is 24
[0] MatCheckCompressedRow(): Found the ratio (num_zerorows 0)/(num_localrows 8) < 0.6. Do not use CompressedRow routines.
[0] MatAssemblyEnd_SeqSBAIJ(): Matrix size: 8 X 8, block size 1; storage space: 0 unneeded, 8 used
[0] MatAssemblyEnd_SeqSBAIJ(): Number of mallocs during MatSetValues is 0
[0] MatAssemblyEnd_SeqSBAIJ(): Most nonzeros blocks in any row is 1
[0] MatAssemblyEnd_SeqAIJ(): Matrix size: 81 X 81; storage space: 0 unneeded,1845 used
[0] MatAssemblyEnd_SeqAIJ(): Number of mallocs during MatSetValues() is 0
[0] MatAssemblyEnd_SeqAIJ(): Maximum nonzeros in any row is 54
[0] MatCheckCompressedRow(): Found the ratio (num_zerorows 0)/(num_localrows 81) < 0.6. Do not use CompressedRow routines.
[0] MatAssemblyEnd_SeqAIJ(): Matrix size: 81 X 8; storage space: 0 unneeded,144 used
[0] MatAssemblyEnd_SeqAIJ(): Number of mallocs during MatSetValues() is 0
[0] MatAssemblyEnd_SeqAIJ(): Maximum nonzeros in any row is 8
[0] MatCheckCompressedRow(): Found the ratio (num_zerorows 27)/(num_localrows 81) < 0.6. Do not use CompressedRow routines.
[0] MatAssemblyEnd_SeqAIJ(): Matrix size: 8 X 81; storage space: 0 unneeded,144 used
[0] MatAssemblyEnd_SeqAIJ(): Number of mallocs during MatSetValues() is 0
[0] MatAssemblyEnd_SeqAIJ(): Maximum nonzeros in any row is 24
[0] MatCheckCompressedRow(): Found the ratio (num_zerorows 0)/(num_localrows 8) < 0.6. Do not use CompressedRow routines.
[0] MatAssemblyEnd_SeqSBAIJ(): Matrix size: 8 X 8, block size 1; storage space: 0 unneeded, 8 used
[0] MatAssemblyEnd_SeqSBAIJ(): Number of mallocs during MatSetValues is 0
[0] MatAssemblyEnd_SeqSBAIJ(): Most nonzeros blocks in any row is 1
::fin VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 0.026958 SelfUser: 0.025 SelfSys: 0.002 ChildUser: 0 Childsys: 0 </etiq_52>
chrono::SolveurLinPETSc::faisAssemblagePrive::appliqueResidu:AssembleurGD::debut VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 <etiq_54>
[0] VecAssemblyBegin_MPI(): Stash has 0 entries, uses 0 mallocs.
[0] VecAssemblyBegin_MPI(): Block-Stash has 0 entries, uses 0 mallocs.
::fin VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 0.00904 SelfUser: 0.009 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_54>
chrono::SolveurLinPETSc::faisAssemblagePrive::finAssemblageAssembleurGD::debut VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 <etiq_55>
[0] VecAssemblyBegin_MPI(): Stash has 0 entries, uses 0 mallocs.
[0] VecAssemblyBegin_MPI(): Block-Stash has 0 entries, uses 0 mallocs.
::fin VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 0.000285 SelfUser: 0 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_55>
::fin VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 0.084839 SelfUser: 0.082 SelfSys: 0.003 ChildUser: 0 Childsys: 0 </etiq_45>
chrono::SolveurStatNlinPETSc::resoudre_et_RechercheLineaire:AssembleurGD::debut VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 <etiq_56>
chrono::SolveurLinPETSc::resoudre_Factorisation_et_DR:AssembleurGD::debut VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 <etiq_57>
chrono::asgnOperateurKSP::debut VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 <etiq_58>
::fin VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 0.000224 SelfUser: 0.001 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_58>
chrono::SolveurLinPETSc::resoudre:KSPSolve:AssembleurGD::debut VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 <etiq_60>
[0] PetscCommDuplicate(): Using internal PETSc communicator 139814426695680 76804080
KSP Object:(gcrSchur_) 1 MPI processes
  type: gcr
    GCR: restart = 30 
    GCR: restarts performed = 2 
  maximum iterations=100, initial guess is zero
  tolerances:  relative=1e-11, absolute=1e-11, divergence=10000
  right preconditioning
  using UNPRECONDITIONED norm type for convergence test
PC Object:(gcrSchur_) 1 MPI processes
  type: fieldsplit
    FieldSplit with Schur preconditioner, factorization FULL
    Preconditioner for the Schur complement formed from A11
    Split info:
    Split number 0 Defined by IS
    Split number 1 Defined by IS
    KSP solver for A00 block
      KSP Object:      (gcrSchur_fieldsplit_a_00_)       1 MPI processes
        type: preonly
        maximum iterations=10000, initial guess is zero
        tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
        left preconditioning
        using NONE norm type for convergence test
      PC Object:      (gcrSchur_fieldsplit_a_00_)       1 MPI processes
        type: lu
          LU: out-of-place factorization
          tolerance for zero pivot 2.22045e-14
          matrix ordering: nd
          factor fill ratio given 5, needed 1.03902
            Factored matrix follows:
              Mat Object:               1 MPI processes
                type: seqaij
                rows=81, cols=81, bs=3
                package used to perform factorization: petsc
                total: nonzeros=1917, allocated nonzeros=1917
                total number of mallocs used during MatSetValues calls =0
                  using I-node routines: found 27 nodes, limit used is 5
        linear system matrix = precond matrix:
        Mat Object:        (gcrSchur_fieldsplit_a_00_)         1 MPI processes
          type: seqaij
          rows=81, cols=81, bs=3
          total: nonzeros=1845, allocated nonzeros=1845
          total number of mallocs used during MatSetValues calls =0
            not using I-node routines
    KSP solver for S = A11 - A10 inv(A00) A01 
      KSP Object:      (gcrSchur_fieldsplit_schur_)       1 MPI processes
        type: gcr
          GCR: restart = 30 
          GCR: restarts performed = 2 
        maximum iterations=10000, initial guess is zero
        tolerances:  relative=1e-12, absolute=1e-12, divergence=10000
        right preconditioning
        using UNPRECONDITIONED norm type for convergence test
      PC Object:      (gcrSchur_fieldsplit_schur_)       1 MPI processes
        type: jacobi
        linear system matrix followed by preconditioner matrix:
        Mat Object:        (gcrSchur_fieldsplit_schur_)         1 MPI processes
          type: schurcomplement
          rows=8, cols=8
            Schur complement A11 - A10 inv(A00) A01
            A11
              Mat Object:              (gcrSchur_fieldsplit_schur_)               1 MPI processes
                type: seqsbaij
                rows=8, cols=8
                total: nonzeros=8, allocated nonzeros=8
                total number of mallocs used during MatSetValues calls =0
                    block size is 1
            A10
              Mat Object:              (gcrSchur_BlocsHDiag_)               1 MPI processes
                type: seqaij
                rows=8, cols=81
                total: nonzeros=144, allocated nonzeros=144
                total number of mallocs used during MatSetValues calls =0
                  not using I-node routines
            KSP of A00
              KSP Object:              (gcrSchur_fieldsplit_a_00_)               1 MPI processes
                type: preonly
                maximum iterations=10000, initial guess is zero
                tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
                left preconditioning
                using NONE norm type for convergence test
              PC Object:              (gcrSchur_fieldsplit_a_00_)               1 MPI processes
                type: lu
                  LU: out-of-place factorization
                  tolerance for zero pivot 2.22045e-14
                  matrix ordering: nd
                  factor fill ratio given 5, needed 1.03902
                    Factored matrix follows:
                      Mat Object:                       1 MPI processes
                        type: seqaij
                        rows=81, cols=81, bs=3
                        package used to perform factorization: petsc
                        total: nonzeros=1917, allocated nonzeros=1917
                        total number of mallocs used during MatSetValues calls =0
                          using I-node routines: found 27 nodes, limit used is 5
                linear system matrix = precond matrix:
                Mat Object:                (gcrSchur_fieldsplit_a_00_)                 1 MPI processes
                  type: seqaij
                  rows=81, cols=81, bs=3
                  total: nonzeros=1845, allocated nonzeros=1845
                  total number of mallocs used during MatSetValues calls =0
                    not using I-node routines
            A01
              Mat Object:              (gcrSchur_BlocsHDiag_)               1 MPI processes
                type: seqaij
                rows=81, cols=8
                total: nonzeros=144, allocated nonzeros=144
                total number of mallocs used during MatSetValues calls =0
                  not using I-node routines
        Mat Object:        (gcrSchur_fieldsplit_schur_)         1 MPI processes
          type: seqsbaij
          rows=8, cols=8
          total: nonzeros=8, allocated nonzeros=8
          total number of mallocs used during MatSetValues calls =0
              block size is 1
  linear system matrix = precond matrix:
  Mat Object:   1 MPI processes
    type: nest
    rows=89, cols=89
      Matrix object: 
        type=nest, rows=2, cols=2 
        MatNest structure: 
        (0,0) : prefix="gcrSchur_fieldsplit_a_00_", type=seqaij, rows=81, cols=81 
        (0,1) : prefix="gcrSchur_BlocsHDiag_", type=seqaij, rows=81, cols=8 
        (1,0) : prefix="gcrSchur_BlocsHDiag_", type=seqaij, rows=8, cols=81 
        (1,1) : prefix="gcrSchur_fieldsplit_schur_", type=seqsbaij, rows=8, cols=8 

 Configuration (ecritInfoKSP) du KSP : 
 Nom du solveur        : mon_solvlin
 Librairie du Solveur  : petsc
 Type du solveur       : gcr (gcrSchur_)
 Type du précond.      : fieldsplit (gcrSchur_)
 Type Matrice          : nest
 MatNonzeroState       : 0
 Type Matrice          : nest
[0] PCSetUp(): Setting up PC with same nonzero pattern
  Residual norms for gcrSchur_ solve.
  0 KSP Residual norm 3.649269236024e-07 
[0] PCSetUp(): Leaving PC with identical preconditioner since operator is unchanged
[0] PCSetUp(): Setting up PC with same nonzero pattern
[0] PCSetUp(): Leaving PC with identical preconditioner since operator is unchanged
[0] PCSetUp(): Setting up PC with same nonzero pattern
[0] PCSetUp(): Leaving PC with identical preconditioner since operator is unchanged
    Residual norms for gcrSchur_fieldsplit_schur_ solve.
    0 KSP Residual norm 6.412889148832e-09 
[0] PCSetUp(): Leaving PC with identical preconditioner since operator is unchanged
[0] PCSetUp(): Leaving PC with identical preconditioner since operator is unchanged
    1 KSP Residual norm 3.715668495120e-10 
[0] PCSetUp(): Leaving PC with identical preconditioner since operator is unchanged
[0] PCSetUp(): Leaving PC with identical preconditioner since operator is unchanged
    2 KSP Residual norm 4.606225538561e-21 
[0] KSPConvergedDefault(): Linear solver has converged. Residual norm 4.606225538561e-21 is less than absolute tolerance 1.000000000000e-12 at iteration 2
[0] PCSetUp(): Leaving PC with identical preconditioner since operator is unchanged
  1 KSP Residual norm 4.606135580612e-21 
[0] KSPConvergedDefault(): Linear solver has converged. Residual norm 4.606135580612e-21 is less than absolute tolerance 1.000000000000e-11 at iteration 1
KSP Object:(gcrSchur_) 1 MPI processes
  type: gcr
    GCR: restart = 30 
    GCR: restarts performed = 3 
  maximum iterations=100, initial guess is zero
  tolerances:  relative=1e-11, absolute=1e-11, divergence=10000
  right preconditioning
  using UNPRECONDITIONED norm type for convergence test
PC Object:(gcrSchur_) 1 MPI processes
  type: fieldsplit
    FieldSplit with Schur preconditioner, factorization FULL
    Preconditioner for the Schur complement formed from A11
    Split info:
    Split number 0 Defined by IS
    Split number 1 Defined by IS
    KSP solver for A00 block
      KSP Object:      (gcrSchur_fieldsplit_a_00_)       1 MPI processes
        type: preonly
        maximum iterations=10000, initial guess is zero
        tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
        left preconditioning
        using NONE norm type for convergence test
      PC Object:      (gcrSchur_fieldsplit_a_00_)       1 MPI processes
        type: lu
          LU: out-of-place factorization
          tolerance for zero pivot 2.22045e-14
          matrix ordering: nd
          factor fill ratio given 5, needed 1.03902
            Factored matrix follows:
              Mat Object:               1 MPI processes
                type: seqaij
                rows=81, cols=81, bs=3
                package used to perform factorization: petsc
                total: nonzeros=1917, allocated nonzeros=1917
                total number of mallocs used during MatSetValues calls =0
                  using I-node routines: found 27 nodes, limit used is 5
        linear system matrix = precond matrix:
        Mat Object:        (gcrSchur_fieldsplit_a_00_)         1 MPI processes
          type: seqaij
          rows=81, cols=81, bs=3
          total: nonzeros=1845, allocated nonzeros=1845
          total number of mallocs used during MatSetValues calls =0
            not using I-node routines
    KSP solver for S = A11 - A10 inv(A00) A01 
      KSP Object:      (gcrSchur_fieldsplit_schur_)       1 MPI processes
        type: gcr
          GCR: restart = 30 
          GCR: restarts performed = 3 
        maximum iterations=10000, initial guess is zero
        tolerances:  relative=1e-12, absolute=1e-12, divergence=10000
        right preconditioning
        using UNPRECONDITIONED norm type for convergence test
      PC Object:      (gcrSchur_fieldsplit_schur_)       1 MPI processes
        type: jacobi
        linear system matrix followed by preconditioner matrix:
        Mat Object:        (gcrSchur_fieldsplit_schur_)         1 MPI processes
          type: schurcomplement
          rows=8, cols=8
            Schur complement A11 - A10 inv(A00) A01
            A11
              Mat Object:              (gcrSchur_fieldsplit_schur_)               1 MPI processes
                type: seqsbaij
                rows=8, cols=8
                total: nonzeros=8, allocated nonzeros=8
                total number of mallocs used during MatSetValues calls =0
                    block size is 1
            A10
              Mat Object:              (gcrSchur_BlocsHDiag_)               1 MPI processes
                type: seqaij
                rows=8, cols=81
                total: nonzeros=144, allocated nonzeros=144
                total number of mallocs used during MatSetValues calls =0
                  not using I-node routines
            KSP of A00
              KSP Object:              (gcrSchur_fieldsplit_a_00_)               1 MPI processes
                type: preonly
                maximum iterations=10000, initial guess is zero
                tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
                left preconditioning
                using NONE norm type for convergence test
              PC Object:              (gcrSchur_fieldsplit_a_00_)               1 MPI processes
                type: lu
                  LU: out-of-place factorization
                  tolerance for zero pivot 2.22045e-14
                  matrix ordering: nd
                  factor fill ratio given 5, needed 1.03902
                    Factored matrix follows:
                      Mat Object:                       1 MPI processes
                        type: seqaij
                        rows=81, cols=81, bs=3
                        package used to perform factorization: petsc
                        total: nonzeros=1917, allocated nonzeros=1917
                        total number of mallocs used during MatSetValues calls =0
                          using I-node routines: found 27 nodes, limit used is 5
                linear system matrix = precond matrix:
                Mat Object:                (gcrSchur_fieldsplit_a_00_)                 1 MPI processes
                  type: seqaij
                  rows=81, cols=81, bs=3
                  total: nonzeros=1845, allocated nonzeros=1845
                  total number of mallocs used during MatSetValues calls =0
                    not using I-node routines
            A01
              Mat Object:              (gcrSchur_BlocsHDiag_)               1 MPI processes
                type: seqaij
                rows=81, cols=8
                total: nonzeros=144, allocated nonzeros=144
                total number of mallocs used during MatSetValues calls =0
                  not using I-node routines
        Mat Object:        (gcrSchur_fieldsplit_schur_)         1 MPI processes
          type: seqsbaij
          rows=8, cols=8
          total: nonzeros=8, allocated nonzeros=8
          total number of mallocs used during MatSetValues calls =0
              block size is 1
  linear system matrix = precond matrix:
  Mat Object:   1 MPI processes
    type: nest
    rows=89, cols=89
      Matrix object: 
        type=nest, rows=2, cols=2 
        MatNest structure: 
        (0,0) : prefix="gcrSchur_fieldsplit_a_00_", type=seqaij, rows=81, cols=81 
        (0,1) : prefix="gcrSchur_BlocsHDiag_", type=seqaij, rows=81, cols=8 
        (1,0) : prefix="gcrSchur_BlocsHDiag_", type=seqaij, rows=8, cols=81 
        (1,1) : prefix="gcrSchur_fieldsplit_schur_", type=seqsbaij, rows=8, cols=8 
::fin VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 0.007458 SelfUser: 0.005 SelfSys: 0.002 ChildUser: 0 Childsys: 0 </etiq_60>
 KSPSolve non zero id. : VRAI
 KSPSolve valeurs  id. : FAUX
[0] PetscCommDuplicate(): Using internal PETSc communicator 139814426697728 68462368
 Err. rel. resolution  : 1.89337e-14 / 1.26223e-14

::fin VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 0.009294 SelfUser: 0.007 SelfSys: 0.002 ChildUser: 0 Childsys: 0 </etiq_57>
chrono::SolveurLinPETSc::CL+VecDDLs:AssembleurGD::debut VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 <etiq_61>
::fin VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 0.000189 SelfUser: 0.001 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_61>
chrono::SolveurLinPETSc::appliqueCorrectionAuProbleme:AssembleurGD::debut VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 <etiq_62>
::fin VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 0.008102 SelfUser: 0.008 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_62>
::fin VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 0.018403 SelfUser: 0.016 SelfSys: 0.002 ChildUser: 0 Childsys: 0 </etiq_56>
chrono::SolveurStatNlinPETSc::PPIteration:AssembleurGD::debut VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 <etiq_38>
chrono::GestionPrePostTraitement::executePrePostTrait:GESTIONPREPOSTDEFAUT:FinIterationNlin::************ProblemeGD_aPPExecutePostTraitementIteration************::debut VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 <etiq_63>
chrono::PP::************ProblemeGD_aPPExecutePostTraitementIteration************::effectueCalcul::debut VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 <etiq_64>
::fin VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 0.000181 SelfUser: 0 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_64>
::fin VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 0.000545 SelfUser: 0.001 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_63>
::fin VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 0.000928 SelfUser: 0.001 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_38>
SolveurStatNlinPETSc::BouclePointFixe:AssembleurGD itération #    3,chrono::AnalyseProcessusIteratif::CC:reqConvAtteinte:SolveurStatNlinPETSc::BouclePointFixe:AssembleurGD::debut VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 <etiq_66>
chrono::CC::PreTrait:CCNL2Res:aPPMAJDeplacementNewtonPrecedent::debut VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 <etiq_67>
chrono::PP::aPPMAJDeplacementNewtonPrecedent::effectueCalcul::debut VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 <etiq_36>
::fin VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 0.000243 SelfUser: 0 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_36>
::fin VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 0.000621 SelfUser: 0 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_67>
chrono::CC::PostTrait:CCNInf:aPPMAJNormeInfCorrection::debut VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 <etiq_68>
chrono::PP::aPPMAJNormeInfCorrection::effectueCalcul::debut VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 <etiq_69>
::fin VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 0.000185 SelfUser: 0 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_69>
::fin VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 0.000552 SelfUser: 0.001 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_68>
 CCNInf(1)[84]= 5.13693e-07chrono::CC::PostTrait:CCNL2CorRel:aPPMAJNormeL2CorrectionRelative::debut VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 <etiq_70>
chrono::PP::aPPMAJNormeL2CorrectionRelative::effectueCalcul::debut VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 <etiq_71>
::fin VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 0.000185 SelfUser: 0.001 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_71>
::fin VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 0.000558 SelfUser: 0.001 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_70>
 CCNL2CorRel(0)= 3.83885e-06chrono::CC::PostTrait:CCNInfRes:aPPMAJNormeInfResidu::debut VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 <etiq_72>
chrono::PP::aPPMAJNormeInfResidu::effectueCalcul::debut VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 <etiq_73>
::fin VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 0.000187 SelfUser: 0 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_73>
::fin VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 0.000568 SelfUser: 0.001 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_72>
 CCNInfRés(0)[46]= 1.39608e-07chrono::CC::PostTrait:CCNL2Res:aPPMAJNormeL2Residu::debut VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 <etiq_74>
chrono::PP::aPPMAJNormeL2Residu::effectueCalcul::debut VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 <etiq_75>
::fin VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 0.000226 SelfUser: 0 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_75>
::fin VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 0.000587 SelfUser: 0 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_74>
 CCNL2Rés(1)= 3.86822e-08::fin VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 0.012286 SelfUser: 0.011 SelfSys: 0.002 ChildUser: 0 Childsys: 0 </etiq_66>

chrono::SolveurStatNlinPETSc::PPIteration:AssembleurGD::debut VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 <etiq_38>
chrono::GestionPrePostTraitement::executePrePostTrait:GESTIONPREPOSTDEFAUT:DebutIterationNlin::************ProblemeGD_aPPExecutePreTraitementIteration************::debut VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 <etiq_39>
chrono::PP::************ProblemeGD_aPPExecutePreTraitementIteration************::effectueCalcul::debut VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 <etiq_40>
chrono::PP::aPPMAJNoIterationNewton::effectueCalcul::debut VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 <etiq_41>
::fin VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 0.000185 SelfUser: 0.001 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_41>
chrono::PP::aPPMAJNoIterationCumule::effectueCalcul::debut VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 <etiq_42>
chrono::PP::PostTrait:aPPMAJNoIterationCumule:aPPMAJNoIterationCumuleAuxVoisins::debut VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 <etiq_43>
chrono::PP::aPPMAJNoIterationCumuleAuxVoisins::effectueCalcul::debut VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 <etiq_44>
::fin VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 0.000198 SelfUser: 0 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_44>
::fin VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 0.00057 SelfUser: 0 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_43>
::fin VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 0.000936 SelfUser: 0.001 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_42>
::fin VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 0.001682 SelfUser: 0.002 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_40>
::fin VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 0.002056 SelfUser: 0.002 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_39>
::fin VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 0.002435 SelfUser: 0.002 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_38>
chrono::SolveurStatNlinPETSc::faisAssemblage:AssembleurGD::debut VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 <etiq_45>
 Construction (creeMatriceEtAssigneProfile) de la matrice de prefixe: gcrSchur_
 Type Matrice                : nest
 Nouvelle Matrice            : FAUX
 Nom du probleme             : AssembleurGD
 Librairie du Solveur        : mumps
chrono::SolveurLinPETSc::faisAssemblagePrive::appliqueChamp:AssembleurGD::debut VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 <etiq_46>
::fin VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 0.000303 SelfUser: 0 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_46>
chrono::SolveurLinPETSc::faisAssemblagePrive::debutAssemblage:AssembleurGD::debut VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 <etiq_47>
::fin VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 0.000309 SelfUser: 0 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_47>
chrono::SolveurLinPETSc::faisAssemblagePrive::assembleMatriceEtResidu:AssembleurGD::debut VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 <etiq_48>
chrono::ProblemeEF::assemblePriveDomaine:AssembleurGD::debut VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 <etiq_49>
chrono::ProblemeEF::assemblePrive:Mat:Vec:AssembleurGD::debut VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 <etiq_50>
::fin VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 0.041683 SelfUser: 0.041 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_50>
::fin VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 0.042739 SelfUser: 0.042 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_49>
chrono::ProblemeEF::assemblePrivePeau:AssembleurGD::debut VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 <etiq_51>
::fin VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 0.000184 SelfUser: 0 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_51>
::fin VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 0.046532 SelfUser: 0.046 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_48>
chrono::SolveurLinPETSc::faisAssemblagePrive::appliqueMatrice:AssembleurGD::debut VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 <etiq_52>
chrono::MatricePETSc::mettreAZeroLignes::debut VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 <etiq_53>
[0] MatAssemblyEnd_SeqAIJ(): Matrix size: 81 X 81; storage space: 0 unneeded,1845 used
[0] MatAssemblyEnd_SeqAIJ(): Number of mallocs during MatSetValues() is 0
[0] MatAssemblyEnd_SeqAIJ(): Maximum nonzeros in any row is 54
[0] MatCheckCompressedRow(): Found the ratio (num_zerorows 0)/(num_localrows 81) < 0.6. Do not use CompressedRow routines.
[0] MatAssemblyEnd_SeqAIJ(): Matrix size: 81 X 81; storage space: 0 unneeded,1845 used
[0] MatAssemblyEnd_SeqAIJ(): Number of mallocs during MatSetValues() is 0
[0] MatAssemblyEnd_SeqAIJ(): Maximum nonzeros in any row is 54
[0] MatCheckCompressedRow(): Found the ratio (num_zerorows 0)/(num_localrows 81) < 0.6. Do not use CompressedRow routines.
::fin VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 0.00088 SelfUser: 0.001 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_53>
chrono::MatricePETSc::mettreAZeroLignes::debut VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 <etiq_53>
[0] MatAssemblyEnd_SeqAIJ(): Matrix size: 81 X 8; storage space: 0 unneeded,144 used
[0] MatAssemblyEnd_SeqAIJ(): Number of mallocs during MatSetValues() is 0
[0] MatAssemblyEnd_SeqAIJ(): Maximum nonzeros in any row is 8
[0] MatCheckCompressedRow(): Found the ratio (num_zerorows 27)/(num_localrows 81) < 0.6. Do not use CompressedRow routines.
[0] MatAssemblyEnd_SeqAIJ(): Matrix size: 81 X 8; storage space: 0 unneeded,144 used
[0] MatAssemblyEnd_SeqAIJ(): Number of mallocs during MatSetValues() is 0
[0] MatAssemblyEnd_SeqAIJ(): Maximum nonzeros in any row is 8
[0] MatCheckCompressedRow(): Found the ratio (num_zerorows 27)/(num_localrows 81) < 0.6. Do not use CompressedRow routines.
::fin VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 0.000853 SelfUser: 0 SelfSys: 0.001 ChildUser: 0 Childsys: 0 </etiq_53>
[0] MatAssemblyEnd_SeqAIJ(): Matrix size: 8 X 81; storage space: 0 unneeded,144 used
[0] MatAssemblyEnd_SeqAIJ(): Number of mallocs during MatSetValues() is 0
[0] MatAssemblyEnd_SeqAIJ(): Maximum nonzeros in any row is 24
[0] MatCheckCompressedRow(): Found the ratio (num_zerorows 0)/(num_localrows 8) < 0.6. Do not use CompressedRow routines.
[0] MatAssemblyEnd_SeqSBAIJ(): Matrix size: 8 X 8, block size 1; storage space: 0 unneeded, 8 used
[0] MatAssemblyEnd_SeqSBAIJ(): Number of mallocs during MatSetValues is 0
[0] MatAssemblyEnd_SeqSBAIJ(): Most nonzeros blocks in any row is 1
[0] MatAssemblyEnd_SeqAIJ(): Matrix size: 81 X 81; storage space: 0 unneeded,1845 used
[0] MatAssemblyEnd_SeqAIJ(): Number of mallocs during MatSetValues() is 0
[0] MatAssemblyEnd_SeqAIJ(): Maximum nonzeros in any row is 54
[0] MatCheckCompressedRow(): Found the ratio (num_zerorows 0)/(num_localrows 81) < 0.6. Do not use CompressedRow routines.
[0] MatAssemblyEnd_SeqAIJ(): Matrix size: 81 X 8; storage space: 0 unneeded,144 used
[0] MatAssemblyEnd_SeqAIJ(): Number of mallocs during MatSetValues() is 0
[0] MatAssemblyEnd_SeqAIJ(): Maximum nonzeros in any row is 8
[0] MatCheckCompressedRow(): Found the ratio (num_zerorows 27)/(num_localrows 81) < 0.6. Do not use CompressedRow routines.
[0] MatAssemblyEnd_SeqAIJ(): Matrix size: 8 X 81; storage space: 0 unneeded,144 used
[0] MatAssemblyEnd_SeqAIJ(): Number of mallocs during MatSetValues() is 0
[0] MatAssemblyEnd_SeqAIJ(): Maximum nonzeros in any row is 24
[0] MatCheckCompressedRow(): Found the ratio (num_zerorows 0)/(num_localrows 8) < 0.6. Do not use CompressedRow routines.
[0] MatAssemblyEnd_SeqSBAIJ(): Matrix size: 8 X 8, block size 1; storage space: 0 unneeded, 8 used
[0] MatAssemblyEnd_SeqSBAIJ(): Number of mallocs during MatSetValues is 0
[0] MatAssemblyEnd_SeqSBAIJ(): Most nonzeros blocks in any row is 1
::fin VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 0.026892 SelfUser: 0.026 SelfSys: 0.001 ChildUser: 0 Childsys: 0 </etiq_52>
chrono::SolveurLinPETSc::faisAssemblagePrive::appliqueResidu:AssembleurGD::debut VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 <etiq_54>
[0] VecAssemblyBegin_MPI(): Stash has 0 entries, uses 0 mallocs.
[0] VecAssemblyBegin_MPI(): Block-Stash has 0 entries, uses 0 mallocs.
::fin VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 0.00914 SelfUser: 0.009 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_54>
chrono::SolveurLinPETSc::faisAssemblagePrive::finAssemblageAssembleurGD::debut VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 <etiq_55>
[0] VecAssemblyBegin_MPI(): Stash has 0 entries, uses 0 mallocs.
[0] VecAssemblyBegin_MPI(): Block-Stash has 0 entries, uses 0 mallocs.
::fin VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 0.000316 SelfUser: 0 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_55>
::fin VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 0.08494 SelfUser: 0.083 SelfSys: 0.002 ChildUser: 0 Childsys: 0 </etiq_45>
chrono::SolveurStatNlinPETSc::resoudre_et_RechercheLineaire:AssembleurGD::debut VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 <etiq_56>
chrono::SolveurLinPETSc::resoudre_Factorisation_et_DR:AssembleurGD::debut VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 <etiq_57>
chrono::asgnOperateurKSP::debut VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 <etiq_58>
::fin VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 0.000224 SelfUser: 0 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_58>
chrono::SolveurLinPETSc::resoudre:KSPSolve:AssembleurGD::debut VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 <etiq_60>
[0] PetscCommDuplicate(): Using internal PETSc communicator 139814426695680 76804080
KSP Object:(gcrSchur_) 1 MPI processes
  type: gcr
    GCR: restart = 30 
    GCR: restarts performed = 3 
  maximum iterations=100, initial guess is zero
  tolerances:  relative=1e-11, absolute=1e-11, divergence=10000
  right preconditioning
  using UNPRECONDITIONED norm type for convergence test
PC Object:(gcrSchur_) 1 MPI processes
  type: fieldsplit
    FieldSplit with Schur preconditioner, factorization FULL
    Preconditioner for the Schur complement formed from A11
    Split info:
    Split number 0 Defined by IS
    Split number 1 Defined by IS
    KSP solver for A00 block
      KSP Object:      (gcrSchur_fieldsplit_a_00_)       1 MPI processes
        type: preonly
        maximum iterations=10000, initial guess is zero
        tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
        left preconditioning
        using NONE norm type for convergence test
      PC Object:      (gcrSchur_fieldsplit_a_00_)       1 MPI processes
        type: lu
          LU: out-of-place factorization
          tolerance for zero pivot 2.22045e-14
          matrix ordering: nd
          factor fill ratio given 5, needed 1.03902
            Factored matrix follows:
              Mat Object:               1 MPI processes
                type: seqaij
                rows=81, cols=81, bs=3
                package used to perform factorization: petsc
                total: nonzeros=1917, allocated nonzeros=1917
                total number of mallocs used during MatSetValues calls =0
                  using I-node routines: found 27 nodes, limit used is 5
        linear system matrix = precond matrix:
        Mat Object:        (gcrSchur_fieldsplit_a_00_)         1 MPI processes
          type: seqaij
          rows=81, cols=81, bs=3
          total: nonzeros=1845, allocated nonzeros=1845
          total number of mallocs used during MatSetValues calls =0
            not using I-node routines
    KSP solver for S = A11 - A10 inv(A00) A01 
      KSP Object:      (gcrSchur_fieldsplit_schur_)       1 MPI processes
        type: gcr
          GCR: restart = 30 
          GCR: restarts performed = 3 
        maximum iterations=10000, initial guess is zero
        tolerances:  relative=1e-12, absolute=1e-12, divergence=10000
        right preconditioning
        using UNPRECONDITIONED norm type for convergence test
      PC Object:      (gcrSchur_fieldsplit_schur_)       1 MPI processes
        type: jacobi
        linear system matrix followed by preconditioner matrix:
        Mat Object:        (gcrSchur_fieldsplit_schur_)         1 MPI processes
          type: schurcomplement
          rows=8, cols=8
            Schur complement A11 - A10 inv(A00) A01
            A11
              Mat Object:              (gcrSchur_fieldsplit_schur_)               1 MPI processes
                type: seqsbaij
                rows=8, cols=8
                total: nonzeros=8, allocated nonzeros=8
                total number of mallocs used during MatSetValues calls =0
                    block size is 1
            A10
              Mat Object:              (gcrSchur_BlocsHDiag_)               1 MPI processes
                type: seqaij
                rows=8, cols=81
                total: nonzeros=144, allocated nonzeros=144
                total number of mallocs used during MatSetValues calls =0
                  not using I-node routines
            KSP of A00
              KSP Object:              (gcrSchur_fieldsplit_a_00_)               1 MPI processes
                type: preonly
                maximum iterations=10000, initial guess is zero
                tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
                left preconditioning
                using NONE norm type for convergence test
              PC Object:              (gcrSchur_fieldsplit_a_00_)               1 MPI processes
                type: lu
                  LU: out-of-place factorization
                  tolerance for zero pivot 2.22045e-14
                  matrix ordering: nd
                  factor fill ratio given 5, needed 1.03902
                    Factored matrix follows:
                      Mat Object:                       1 MPI processes
                        type: seqaij
                        rows=81, cols=81, bs=3
                        package used to perform factorization: petsc
                        total: nonzeros=1917, allocated nonzeros=1917
                        total number of mallocs used during MatSetValues calls =0
                          using I-node routines: found 27 nodes, limit used is 5
                linear system matrix = precond matrix:
                Mat Object:                (gcrSchur_fieldsplit_a_00_)                 1 MPI processes
                  type: seqaij
                  rows=81, cols=81, bs=3
                  total: nonzeros=1845, allocated nonzeros=1845
                  total number of mallocs used during MatSetValues calls =0
                    not using I-node routines
            A01
              Mat Object:              (gcrSchur_BlocsHDiag_)               1 MPI processes
                type: seqaij
                rows=81, cols=8
                total: nonzeros=144, allocated nonzeros=144
                total number of mallocs used during MatSetValues calls =0
                  not using I-node routines
        Mat Object:        (gcrSchur_fieldsplit_schur_)         1 MPI processes
          type: seqsbaij
          rows=8, cols=8
          total: nonzeros=8, allocated nonzeros=8
          total number of mallocs used during MatSetValues calls =0
              block size is 1
  linear system matrix = precond matrix:
  Mat Object:   1 MPI processes
    type: nest
    rows=89, cols=89
      Matrix object: 
        type=nest, rows=2, cols=2 
        MatNest structure: 
        (0,0) : prefix="gcrSchur_fieldsplit_a_00_", type=seqaij, rows=81, cols=81 
        (0,1) : prefix="gcrSchur_BlocsHDiag_", type=seqaij, rows=81, cols=8 
        (1,0) : prefix="gcrSchur_BlocsHDiag_", type=seqaij, rows=8, cols=81 
        (1,1) : prefix="gcrSchur_fieldsplit_schur_", type=seqsbaij, rows=8, cols=8 

 Configuration (ecritInfoKSP) du KSP : 
 Nom du solveur        : mon_solvlin
 Librairie du Solveur  : petsc
 Type du solveur       : gcr (gcrSchur_)
 Type du précond.      : fieldsplit (gcrSchur_)
 Type Matrice          : nest
 MatNonzeroState       : 0
 Type Matrice          : nest
[0] PCSetUp(): Setting up PC with same nonzero pattern
  Residual norms for gcrSchur_ solve.
  0 KSP Residual norm 3.305223331380e-15 
[0] KSPConvergedDefault(): Linear solver has converged. Residual norm 3.305223331380e-15 is less than absolute tolerance 1.000000000000e-11 at iteration 0
KSP Object:(gcrSchur_) 1 MPI processes
  type: gcr
    GCR: restart = 30 
    GCR: restarts performed = 3 
  maximum iterations=100, initial guess is zero
  tolerances:  relative=1e-11, absolute=1e-11, divergence=10000
  right preconditioning
  using UNPRECONDITIONED norm type for convergence test
PC Object:(gcrSchur_) 1 MPI processes
  type: fieldsplit
    FieldSplit with Schur preconditioner, factorization FULL
    Preconditioner for the Schur complement formed from A11
    Split info:
    Split number 0 Defined by IS
    Split number 1 Defined by IS
    KSP solver for A00 block
      KSP Object:      (gcrSchur_fieldsplit_a_00_)       1 MPI processes
        type: preonly
        maximum iterations=10000, initial guess is zero
        tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
        left preconditioning
        using NONE norm type for convergence test
      PC Object:      (gcrSchur_fieldsplit_a_00_)       1 MPI processes
        type: lu
          LU: out-of-place factorization
          tolerance for zero pivot 2.22045e-14
          matrix ordering: nd
          factor fill ratio given 5, needed 1.03902
            Factored matrix follows:
              Mat Object:               1 MPI processes
                type: seqaij
                rows=81, cols=81, bs=3
                package used to perform factorization: petsc
                total: nonzeros=1917, allocated nonzeros=1917
                total number of mallocs used during MatSetValues calls =0
                  using I-node routines: found 27 nodes, limit used is 5
        linear system matrix = precond matrix:
        Mat Object:        (gcrSchur_fieldsplit_a_00_)         1 MPI processes
          type: seqaij
          rows=81, cols=81, bs=3
          total: nonzeros=1845, allocated nonzeros=1845
          total number of mallocs used during MatSetValues calls =0
            not using I-node routines
    KSP solver for S = A11 - A10 inv(A00) A01 
      KSP Object:      (gcrSchur_fieldsplit_schur_)       1 MPI processes
        type: gcr
          GCR: restart = 30 
          GCR: restarts performed = 3 
        maximum iterations=10000, initial guess is zero
        tolerances:  relative=1e-12, absolute=1e-12, divergence=10000
        right preconditioning
        using UNPRECONDITIONED norm type for convergence test
      PC Object:      (gcrSchur_fieldsplit_schur_)       1 MPI processes
        type: jacobi
        linear system matrix followed by preconditioner matrix:
        Mat Object:        (gcrSchur_fieldsplit_schur_)         1 MPI processes
          type: schurcomplement
          rows=8, cols=8
            Schur complement A11 - A10 inv(A00) A01
            A11
              Mat Object:              (gcrSchur_fieldsplit_schur_)               1 MPI processes
                type: seqsbaij
                rows=8, cols=8
                total: nonzeros=8, allocated nonzeros=8
                total number of mallocs used during MatSetValues calls =0
                    block size is 1
            A10
              Mat Object:              (gcrSchur_BlocsHDiag_)               1 MPI processes
                type: seqaij
                rows=8, cols=81
                total: nonzeros=144, allocated nonzeros=144
                total number of mallocs used during MatSetValues calls =0
                  not using I-node routines
            KSP of A00
              KSP Object:              (gcrSchur_fieldsplit_a_00_)               1 MPI processes
                type: preonly
                maximum iterations=10000, initial guess is zero
                tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
                left preconditioning
                using NONE norm type for convergence test
              PC Object:              (gcrSchur_fieldsplit_a_00_)               1 MPI processes
                type: lu
                  LU: out-of-place factorization
                  tolerance for zero pivot 2.22045e-14
                  matrix ordering: nd
                  factor fill ratio given 5, needed 1.03902
                    Factored matrix follows:
                      Mat Object:                       1 MPI processes
                        type: seqaij
                        rows=81, cols=81, bs=3
                        package used to perform factorization: petsc
                        total: nonzeros=1917, allocated nonzeros=1917
                        total number of mallocs used during MatSetValues calls =0
                          using I-node routines: found 27 nodes, limit used is 5
                linear system matrix = precond matrix:
                Mat Object:                (gcrSchur_fieldsplit_a_00_)                 1 MPI processes
                  type: seqaij
                  rows=81, cols=81, bs=3
                  total: nonzeros=1845, allocated nonzeros=1845
                  total number of mallocs used during MatSetValues calls =0
                    not using I-node routines
            A01
              Mat Object:              (gcrSchur_BlocsHDiag_)               1 MPI processes
                type: seqaij
                rows=81, cols=8
                total: nonzeros=144, allocated nonzeros=144
                total number of mallocs used during MatSetValues calls =0
                  not using I-node routines
        Mat Object:        (gcrSchur_fieldsplit_schur_)         1 MPI processes
          type: seqsbaij
          rows=8, cols=8
          total: nonzeros=8, allocated nonzeros=8
          total number of mallocs used during MatSetValues calls =0
              block size is 1
  linear system matrix = precond matrix:
  Mat Object:   1 MPI processes
    type: nest
    rows=89, cols=89
      Matrix object: 
        type=nest, rows=2, cols=2 
        MatNest structure: 
        (0,0) : prefix="gcrSchur_fieldsplit_a_00_", type=seqaij, rows=81, cols=81 
        (0,1) : prefix="gcrSchur_BlocsHDiag_", type=seqaij, rows=81, cols=8 
        (1,0) : prefix="gcrSchur_BlocsHDiag_", type=seqaij, rows=8, cols=81 
        (1,1) : prefix="gcrSchur_fieldsplit_schur_", type=seqsbaij, rows=8, cols=8 
::fin VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 0.00474 SelfUser: 0.004 SelfSys: 0.001 ChildUser: 0 Childsys: 0 </etiq_60>
 KSPSolve non zero id. : VRAI
 KSPSolve valeurs  id. : FAUX
[0] PetscCommDuplicate(): Using internal PETSc communicator 139814426697728 68462368
 Err. rel. resolution  : 1 / 1

::fin VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 0.006545 SelfUser: 0.005 SelfSys: 0.002 ChildUser: 0 Childsys: 0 </etiq_57>
chrono::SolveurLinPETSc::CL+VecDDLs:AssembleurGD::debut VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 <etiq_61>
::fin VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 0.000186 SelfUser: 0 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_61>
chrono::SolveurLinPETSc::appliqueCorrectionAuProbleme:AssembleurGD::debut VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 <etiq_62>
::fin VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 0.00819 SelfUser: 0.008 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_62>
::fin VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 0.01575 SelfUser: 0.014 SelfSys: 0.002 ChildUser: 0 Childsys: 0 </etiq_56>
chrono::SolveurStatNlinPETSc::PPIteration:AssembleurGD::debut VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 <etiq_38>
chrono::GestionPrePostTraitement::executePrePostTrait:GESTIONPREPOSTDEFAUT:FinIterationNlin::************ProblemeGD_aPPExecutePostTraitementIteration************::debut VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 <etiq_63>
chrono::PP::************ProblemeGD_aPPExecutePostTraitementIteration************::effectueCalcul::debut VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 <etiq_64>
::fin VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 0.000182 SelfUser: 0 SelfSys: 0.001 ChildUser: 0 Childsys: 0 </etiq_64>
::fin VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 0.000587 SelfUser: 0 SelfSys: 0.001 ChildUser: 0 Childsys: 0 </etiq_63>
::fin VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 0.000956 SelfUser: 0 SelfSys: 0.001 ChildUser: 0 Childsys: 0 </etiq_38>
SolveurStatNlinPETSc::BouclePointFixe:AssembleurGD itération #    4,chrono::AnalyseProcessusIteratif::CC:reqConvAtteinte:SolveurStatNlinPETSc::BouclePointFixe:AssembleurGD::debut VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 <etiq_66>
chrono::CC::PreTrait:CCNL2Res:aPPMAJDeplacementNewtonPrecedent::debut VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 <etiq_67>
chrono::PP::aPPMAJDeplacementNewtonPrecedent::effectueCalcul::debut VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 <etiq_36>
::fin VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 0.000284 SelfUser: 0 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_36>
::fin VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 0.000653 SelfUser: 0 SelfSys: 0.001 ChildUser: 0 Childsys: 0 </etiq_67>
chrono::CC::PostTrait:CCNInf:aPPMAJNormeInfCorrection::debut VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 <etiq_68>
chrono::PP::aPPMAJNormeInfCorrection::effectueCalcul::debut VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 <etiq_69>
::fin VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 0.000191 SelfUser: 0.001 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_69>
::fin VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 0.000559 SelfUser: 0.001 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_68>
 CCNInf(1)[0]= 0chrono::CC::PostTrait:CCNL2CorRel:aPPMAJNormeL2CorrectionRelative::debut VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 <etiq_70>
chrono::PP::aPPMAJNormeL2CorrectionRelative::effectueCalcul::debut VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 <etiq_71>
::fin VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 0.000187 SelfUser: 0 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_71>
::fin VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 0.00056 SelfUser: 0.001 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_70>
 CCNL2CorRel(1)= 0chrono::CC::PostTrait:CCNInfRes:aPPMAJNormeInfResidu::debut VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 <etiq_72>
chrono::PP::aPPMAJNormeInfResidu::effectueCalcul::debut VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 <etiq_73>
::fin VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 0.000186 SelfUser: 0 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_73>
::fin VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 0.000555 SelfUser: 0 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_72>
 CCNInfRés(1)[50]= 1.11196e-15chrono::CC::PostTrait:CCNL2Res:aPPMAJNormeL2Residu::debut VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 <etiq_74>
chrono::PP::aPPMAJNormeL2Residu::effectueCalcul::debut VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 <etiq_75>
::fin VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 0.000186 SelfUser: 0 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_75>
::fin VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 0.000575 SelfUser: 0 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_74>
 CCNL2Rés(1)= 3.50353e-16::fin VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 0.012192 SelfUser: 0.01 SelfSys: 0.002 ChildUser: 0 Childsys: 0 </etiq_66>

chrono::SolveurStatNlinPETSc::PostTraitParPasDeTemps:AssembleurGD::debut VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 <etiq_76>
chrono::GestionPrePostTraitement::executePrePostTrait:GESTIONPREPOSTDEFAUT:FinPasDeTemps::************ProblemeGD_aPPExecutePostTraitement************::debut VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 <etiq_77>
chrono::PP::************ProblemeGD_aPPExecutePostTraitement************::effectueCalcul::debut VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 <etiq_78>
chrono::PP::PreTrait:************ProblemeGD_aPPExecutePostTraitement************:aPPCalculPsiMoyen::debut VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 <etiq_79>
chrono::PP::aPPCalculPsiMoyen::effectueCalcul::debut VmSize: 580396 VmRSS: 128992 VmPeak: 647920 VmData: 30300 VmHWM: 192648 <etiq_80>
::fin VmSize: 580396 VmRSS: 129028 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 0.00289 SelfUser: 0.003 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_80>
::fin VmSize: 580396 VmRSS: 129028 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 0.003268 SelfUser: 0.003 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_79>
chrono::PP::PreTrait:************ProblemeGD_aPPExecutePostTraitement************:aPPCalculVolumeNonDeforme::debut VmSize: 580396 VmRSS: 129028 VmPeak: 647920 VmData: 30300 VmHWM: 192648 <etiq_81>
chrono::PP::aPPCalculVolumeNonDeforme::effectueCalcul::debut VmSize: 580396 VmRSS: 129028 VmPeak: 647920 VmData: 30300 VmHWM: 192648 <etiq_82>
::fin VmSize: 580396 VmRSS: 129028 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 0.000943 SelfUser: 0 SelfSys: 0.001 ChildUser: 0 Childsys: 0 </etiq_82>
::fin VmSize: 580396 VmRSS: 129028 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 0.001361 SelfUser: 0 SelfSys: 0.001 ChildUser: 0 Childsys: 0 </etiq_81>
chrono::PP::PreTrait:************ProblemeGD_aPPExecutePostTraitement************:aPPCalculVolumeDeforme::debut VmSize: 580396 VmRSS: 129028 VmPeak: 647920 VmData: 30300 VmHWM: 192648 <etiq_83>
chrono::PP::aPPCalculVolumeDeforme::effectueCalcul::debut VmSize: 580396 VmRSS: 129028 VmPeak: 647920 VmData: 30300 VmHWM: 192648 <etiq_84>
::fin VmSize: 580396 VmRSS: 129028 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 0.002386 SelfUser: 0.003 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_84>
::fin VmSize: 580396 VmRSS: 129028 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 0.002763 SelfUser: 0.003 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_83>
chrono::ProblemeGD::executePostTraitement::debut VmSize: 580396 VmRSS: 129028 VmPeak: 647920 VmData: 30300 VmHWM: 192648 <etiq_85>
chrono::PP::ResiduDansChampReactionsNodales::effectueCalcul::debut VmSize: 580396 VmRSS: 129028 VmPeak: 647920 VmData: 30300 VmHWM: 192648 <etiq_86>
chrono::ProblemeEF::assemblePriveDomaine:AssembleurGD::debut VmSize: 580396 VmRSS: 129028 VmPeak: 647920 VmData: 30300 VmHWM: 192648 <etiq_49>
chrono::ProblemeEF::assemblePrive:Vec:AssembleurGD::debut VmSize: 580396 VmRSS: 129028 VmPeak: 647920 VmData: 30300 VmHWM: 192648 <etiq_87>
::fin VmSize: 580396 VmRSS: 129028 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 0.030955 SelfUser: 0.03 SelfSys: 0.001 ChildUser: 0 Childsys: 0 </etiq_87>
::fin VmSize: 580396 VmRSS: 129028 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 0.03191 SelfUser: 0.03 SelfSys: 0.001 ChildUser: 0 Childsys: 0 </etiq_49>
[0] VecAssemblyBegin_MPI(): Stash has 0 entries, uses 0 mallocs.
[0] VecAssemblyBegin_MPI(): Block-Stash has 0 entries, uses 0 mallocs.
::fin VmSize: 580396 VmRSS: 129028 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 0.035811 SelfUser: 0.035 SelfSys: 0.001 ChildUser: 0 Childsys: 0 </etiq_86>
::fin VmSize: 580396 VmRSS: 129028 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 0.036312 SelfUser: 0.035 SelfSys: 0.001 ChildUser: 0 Childsys: 0 </etiq_85>
::fin VmSize: 580396 VmRSS: 129028 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 0.044626 SelfUser: 0.042 SelfSys: 0.002 ChildUser: 0 Childsys: 0 </etiq_78>
::fin VmSize: 580396 VmRSS: 129028 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 0.045013 SelfUser: 0.042 SelfSys: 0.003 ChildUser: 0 Childsys: 0 </etiq_77>
chrono::GestionPrePostTraitement::executePrePostTrait:GESTIONPREPOSTDEFAUT:FinPasDeTemps::aPPExportVolume::debut VmSize: 580396 VmRSS: 129028 VmPeak: 647920 VmData: 30300 VmHWM: 192648 <etiq_88>
chrono::PP::aPPExportVolume::effectueCalcul::debut VmSize: 580396 VmRSS: 129028 VmPeak: 647920 VmData: 30300 VmHWM: 192648 <etiq_89>
chrono::PP::PreTrait:aPPExportVolume:aPPCalculGradDefMoyen::debut VmSize: 580396 VmRSS: 129028 VmPeak: 647920 VmData: 30300 VmHWM: 192648 <etiq_90>
chrono::PP::aPPCalculGradDefMoyen::effectueCalcul::debut VmSize: 580396 VmRSS: 129028 VmPeak: 647920 VmData: 30300 VmHWM: 192648 <etiq_91>
::fin VmSize: 580396 VmRSS: 129088 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 0.002911 SelfUser: 0.002 SelfSys: 0.001 ChildUser: 0 Childsys: 0 </etiq_91>
::fin VmSize: 580396 VmRSS: 129088 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 0.003289 SelfUser: 0.002 SelfSys: 0.001 ChildUser: 0 Childsys: 0 </etiq_90>
chrono::PP::PreTrait:aPPExportVolume:aPPCalculPKIIMoyen::debut VmSize: 580396 VmRSS: 129088 VmPeak: 647920 VmData: 30300 VmHWM: 192648 <etiq_92>
chrono::PP::aPPCalculPKIIMoyen::effectueCalcul::debut VmSize: 580396 VmRSS: 129088 VmPeak: 647920 VmData: 30300 VmHWM: 192648 <etiq_93>
::fin VmSize: 580396 VmRSS: 129088 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 0.003516 SelfUser: 0.003 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_93>
::fin VmSize: 580396 VmRSS: 129088 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 0.003905 SelfUser: 0.003 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_92>
chrono::PP::PreTrait:aPPExportVolume:aPPCalculPsiMoyen::debut VmSize: 580396 VmRSS: 129088 VmPeak: 647920 VmData: 30300 VmHWM: 192648 <etiq_94>
chrono::PP::aPPCalculPsiMoyen::effectueCalcul::debut VmSize: 580396 VmRSS: 129088 VmPeak: 647920 VmData: 30300 VmHWM: 192648 <etiq_80>
::fin VmSize: 580396 VmRSS: 129088 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 0.002656 SelfUser: 0.002 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_80>
::fin VmSize: 580396 VmRSS: 129088 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 0.003036 SelfUser: 0.003 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_94>
chrono::PP::PreTrait:aPPExportVolume:aPPCalculVolumeNonDeforme::debut VmSize: 580396 VmRSS: 129088 VmPeak: 647920 VmData: 30300 VmHWM: 192648 <etiq_95>
chrono::PP::aPPCalculVolumeNonDeforme::effectueCalcul::debut VmSize: 580396 VmRSS: 129088 VmPeak: 647920 VmData: 30300 VmHWM: 192648 <etiq_82>
::fin VmSize: 580396 VmRSS: 129088 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 0.000905 SelfUser: 0.001 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_82>
::fin VmSize: 580396 VmRSS: 129088 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 0.001286 SelfUser: 0.001 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_95>
chrono::PP::PreTrait:aPPExportVolume:aPPCalculVolumeDeforme::debut VmSize: 580396 VmRSS: 129088 VmPeak: 647920 VmData: 30300 VmHWM: 192648 <etiq_96>
chrono::PP::aPPCalculVolumeDeforme::effectueCalcul::debut VmSize: 580396 VmRSS: 129088 VmPeak: 647920 VmData: 30300 VmHWM: 192648 <etiq_84>
::fin VmSize: 580396 VmRSS: 129088 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 0.00236 SelfUser: 0.002 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_84>
::fin VmSize: 580396 VmRSS: 129088 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 0.002745 SelfUser: 0.002 SelfSys: 0.001 ChildUser: 0 Childsys: 0 </etiq_96>
chrono::Maillage::exporteParallele::debut VmSize: 580396 VmRSS: 129192 VmPeak: 647920 VmData: 30300 VmHWM: 192648 <etiq_97>
::fin VmSize: 580396 VmRSS: 129312 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 0.004005 SelfUser: 0.004 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_97>
chrono::PP::PostTrait:aPPExportVolume:aPPEcrisFichierChampsPourAdaptation::debut VmSize: 580396 VmRSS: 129408 VmPeak: 647920 VmData: 30300 VmHWM: 192648 <etiq_98>
chrono::PP::aPPEcrisFichierChampsPourAdaptation::effectueCalcul::debut VmSize: 580396 VmRSS: 129408 VmPeak: 647920 VmData: 30300 VmHWM: 192648 <etiq_99>
::fin VmSize: 580396 VmRSS: 129532 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 0.000594 SelfUser: 0.001 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_99>
::fin VmSize: 580396 VmRSS: 129532 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 0.00101 SelfUser: 0.001 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_98>
::fin VmSize: 580396 VmRSS: 129532 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 0.026378 SelfUser: 0.023 SelfSys: 0.003 ChildUser: 0 Childsys: 0 </etiq_89>
::fin VmSize: 580396 VmRSS: 129532 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 0.026761 SelfUser: 0.023 SelfSys: 0.003 ChildUser: 0 Childsys: 0 </etiq_88>
::fin VmSize: 580396 VmRSS: 129532 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 0.072381 SelfUser: 0.066 SelfSys: 0.006 ChildUser: 0 Childsys: 0 </etiq_76>
chrono::SolveurStatNlinPETSc::EcritResuPostParPasDeTemps:AssembleurGD::debut VmSize: 580396 VmRSS: 129532 VmPeak: 647920 VmData: 30300 VmHWM: 192648 <etiq_100>
::fin VmSize: 580396 VmRSS: 129532 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 0.000185 SelfUser: 0 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_100>
chrono::PP::aPPExportGIREFCCCorrection::effectueCalcul::debut VmSize: 580396 VmRSS: 129532 VmPeak: 647920 VmData: 30300 VmHWM: 192648 <etiq_101>
::fin VmSize: 580396 VmRSS: 129532 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 0.000418 SelfUser: 0.001 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_101>
chrono::PP::aPPExportGIREFCCResidu::effectueCalcul::debut VmSize: 580396 VmRSS: 129532 VmPeak: 647920 VmData: 30300 VmHWM: 192648 <etiq_102>
::fin VmSize: 580396 VmRSS: 129532 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 0.000371 SelfUser: 0 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_102>
::fin VmSize: 580396 VmRSS: 129532 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 1.12123 SelfUser: 0.672 SelfSys: 0.066 ChildUser: 0 Childsys: 0 </etiq_11>
chrono::SolveurLinPETSc::detruitKSP::debut VmSize: 578344 VmRSS: 129656 VmPeak: 647920 VmData: 30300 VmHWM: 192648 <etiq_8>
::fin VmSize: 578344 VmRSS: 129656 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 0.000225 SelfUser: 0 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_8>
chrono::SolveurLinPETSc::detruitKSP::debut VmSize: 578344 VmRSS: 129656 VmPeak: 647920 VmData: 30300 VmHWM: 192648 <etiq_8>
::fin VmSize: 578344 VmRSS: 129656 VmPeak: 647920 VmData: 30300 VmHWM: 192648 WC: 0.000189 SelfUser: 0 SelfSys: 0 ChildUser: 0 Childsys: 0 </etiq_8>
Destructeur DDLsNumerotation
Destructeur DDLsNumerotation
Destructeur DDLsNumerotation
Destructeur DDLsNumerotation
Destructeur DDLsNumerotation
chrono::SolveurLinPETSc::detruitKSP::debut VmSize: 563980 VmRSS: 129728 VmPeak: 647920 VmData: 30300 VmHWM: 192648 <etiq_8>
::fin VmSize: 563792 VmRSS: 124952 VmPeak: 647920 VmData: 30112 VmHWM: 192648 WC: 0.001961 SelfUser: 0.001 SelfSys: 0.001 ChildUser: 0 Childsys: 0 </etiq_8>
::fin VmSize: 563792 VmRSS: 124964 VmPeak: 647920 VmData: 30112 VmHWM: 192648 WC: 2.4343 SelfUser: 1.9 SelfSys: 0.149 ChildUser: 0 Childsys: 0 </etiq_0>
[0] Petsc_DelComm_Inner(): Removing reference to PETSc communicator embedded in a user MPI_Comm 73355168
[0] Petsc_DelComm_Outer(): User MPI_Comm 68383344 is being freed after removing reference from inner PETSc comm to this outer comm
[0] PetscFinalize(): PetscFinalize() called
[0] Petsc_DelViewer(): Removing viewer data attribute in an MPI_Comm 68462368
[0] Petsc_DelComm_Inner(): Removing reference to PETSc communicator embedded in a user MPI_Comm 68462368
[0] Petsc_DelComm_Outer(): User MPI_Comm 139814426697728 is being freed after removing reference from inner PETSc comm to this outer comm
[0] PetscCommDestroy(): Deleting PETSc MPI_Comm 68462368
[0] Petsc_DelCounter(): Deleting counter data in an MPI_Comm 68462368
[0] Petsc_DelThreadComm(): Deleting thread communicator data in an MPI_Comm 68462368
[0] Petsc_DelViewer(): Removing viewer data attribute in an MPI_Comm 68462368
[0] Petsc_DelViewer(): Removing viewer data attribute in an MPI_Comm 73355168
[0] PetscCommDestroy(): Deleting PETSc MPI_Comm 73355168
[0] Petsc_DelCounter(): Deleting counter data in an MPI_Comm 73355168
[0] Petsc_DelThreadComm(): Deleting thread communicator data in an MPI_Comm 73355168
[0] Petsc_DelViewer(): Removing viewer data attribute in an MPI_Comm 73355168
[0] Petsc_DelViewer(): Removing viewer data attribute in an MPI_Comm 76804080
[0] Petsc_DelComm_Inner(): Removing reference to PETSc communicator embedded in a user MPI_Comm 76804080
[0] Petsc_DelComm_Outer(): User MPI_Comm 139814426695680 is being freed after removing reference from inner PETSc comm to this outer comm
[0] PetscCommDestroy(): Deleting PETSc MPI_Comm 76804080
[0] Petsc_DelCounter(): Deleting counter data in an MPI_Comm 76804080
[0] Petsc_DelThreadComm(): Deleting thread communicator data in an MPI_Comm 76804080
[0] Petsc_DelViewer(): Removing viewer data attribute in an MPI_Comm 76804080
WARNING! There are options you set that were not used!
WARNING! could be spelling mistake, etc!
Option left: name:-Options_ProjectionL2ksp_atol value: 1e-15
Option left: name:-Options_ProjectionL2ksp_divtol value: 1e+12
Option left: name:-Options_ProjectionL2ksp_max_it value: 10000
Option left: name:-Options_ProjectionL2ksp_rtol value: 1e-15
Option left: name:-Options_ProjectionL2pc_hypre_type value: boomeramg
Option left: name:-gcrSchur_fieldsplit_a_00_mat_bcs_columnmajor (no value)
Option left: name:-gcrSchur_fieldsplit_a_00_mat_mkl_pardiso_6 value: 0
Option left: name:-gcrSchur_fieldsplit_a_00_mat_pardiso_69 value: 11
Option left: name:-gcrSchur_fieldsplit_a_00_mg_coarse_pc_factor_mat_solver_package value: mumps
Option left: name:-gcrSchur_fieldsplit_a_00_pc_ml_maxNlevels value: 2
Option left: name:-gcrSchur_fieldsplit_a_11_mat_bcs_columnmajor (no value)
Option left: name:-gcrSchur_fieldsplit_a_11_mat_mkl_pardiso_6 value: 0
Option left: name:-gcrSchur_fieldsplit_a_11_mat_mumps_icntl_14 value: 33
Option left: name:-gcrSchur_fieldsplit_a_11_mat_pardiso_69 value: 11
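
As a minimal sketch of how the a_11 prefix seen in the "Option left" list just above gets attached to a matrix (only the prefix string and the 8x8 size are taken from this log; the creation and assembly calls are purely illustrative, not our actual code):

/* Give a matrix its own options prefix so that MatSetFromOptions()
   consumes the matching -gcrSchur_fieldsplit_a_11_* options for it. */
#include <petscmat.h>

int main(int argc, char **argv)
{
  Mat            A11;
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc, &argv, NULL, NULL);if (ierr) return ierr;

  ierr = MatCreate(PETSC_COMM_WORLD, &A11);CHKERRQ(ierr);
  ierr = MatSetSizes(A11, PETSC_DECIDE, PETSC_DECIDE, 8, 8);CHKERRQ(ierr);

  /* Prefix chosen to differ from the Schur KSP prefix
     ("gcrSchur_fieldsplit_schur_") so the two option sets stay separate. */
  ierr = MatSetOptionsPrefix(A11, "gcrSchur_fieldsplit_a_11_");CHKERRQ(ierr);
  ierr = MatSetFromOptions(A11);CHKERRQ(ierr);
  ierr = MatSetUp(A11);CHKERRQ(ierr);

  /* ... set values, assemble, build the MatNest, set up the fieldsplit PC ... */

  ierr = MatDestroy(&A11);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return 0;
}

Running with, e.g., -gcrSchur_fieldsplit_a_11_mat_type sbaij then applies to this matrix only, since the options database is matched against whatever prefix is set at the moment MatSetFromOptions() is called.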

