On Tue, Aug 28, 2018 at 5:34 AM Ali Reza Khaz'ali <arkhazali@cc.iut.ac.ir> wrote:
> Actually you do not need my new branch to achieve what you desire. All you need in your main program is something like
>
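>    /* lens[] holds the size of each block; the block count and sizes below are illustrative */
>    PetscInt lens[3] = {4,6,8};
>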
>    ierr = SNESCreate(PETSC_COMM_WORLD,&snes);CHKERRQ(ierr);
>    ierr = SNESGetKSP(snes,&ksp);CHKERRQ(ierr);
>    ierr = KSPGetPC(ksp,&pc);CHKERRQ(ierr);
>    ierr = PCSetType(pc,PCBJACOBI);CHKERRQ(ierr);
>    ierr = PCBJacobiSetTotalBlocks(pc,3,lens);CHKERRQ(ierr); /* here you set your block sizes to whatever you need */
>
> Then simply do not call PCBJacobiGetSubKSP(); instead, use the options database to set the inner solver with -sub_ksp_type preonly -sub_pc_type lu.
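>
> For example, the command line for a run might then look like (the executable name is a placeholder):
>
>    ./compsim -sub_ksp_type preonly -sub_pc_type lu -sub_pc_factor_mat_solver_type mkl_pardiso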
>
> I have updated the branch to move the PCBJacobiSetTotalBlocks() call into the main program, but I left the callback in there for setting the inner solver types (though, as I just said, you do not need the callback, since you can control the solver from the options database). The callback is needed if, for example, you wish to use a different solver on different blocks (which is not your case).
>
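> A minimal sketch of that per-block configuration (declarations omitted; with SNES, the sub-KSPs exist only after the preconditioner is set up inside the solve, which is what the callback provides a hook for):
>
>    ierr = PCBJacobiGetSubKSP(pc,&nlocal,&first,&subksp);CHKERRQ(ierr);
>    for (i=0; i<nlocal; i++) {
>      ierr = KSPSetType(subksp[i],KSPPREONLY);CHKERRQ(ierr);
>      ierr = KSPGetPC(subksp[i],&subpc);CHKERRQ(ierr);
>      ierr = PCSetType(subpc,(i % 2) ? PCLU : PCILU);CHKERRQ(ierr); /* e.g., alternate solvers across blocks */
>    }
>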
>    Barry
>
> PETSc developers - do you think we should put the callback functionality into PETSc? It allows doing things that are otherwise not doable, but it is rather ugly (perhaps too specialized).

It works! Thanks a lot. Here is the log of a 30x30x10 system (18000 blocks, with a GMRES solver). I would like to have variable-sized block preconditioners and solvers in PETSc; their applications are wider than may first appear. If it is possible, I would like to contribute to the PETSc code, to build a variable-sized block Jacobi and block ILU(k) as a first step (if I can, of course). Where can I start?

Okay, the best place to start, I think, is to make an example which shows what you want to demonstrate. Then we can offer feedback, and if anything should move to the library, we can do that in a subsequent contribution.

The best way, I think, to make an example is to fork the PETSc repository on Bitbucket (or GitHub) and add your example code to the relevant directory, such as

  $PETSC_DIR/src/snes/examples/tutorials

It will build with

  make ex1001

or whatever number you choose, and then you make a Pull Request (there is documentation here: https://bitbucket.org/petsc/petsc/wiki/pull-request-instructions-git). Note there is a Developer's Manual which has things like code structure guidelines.

  Thanks,

     Matt
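Concretely, the workflow might look like this (the fork URL and example name are placeholders for your own):

  git clone https://bitbucket.org/<your-username>/petsc.git
  cd petsc/src/snes/examples/tutorials
  # add your example as, say, ex1001.c, then build and run it
  make ex1001
  ./ex1001 -snes_view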

SNES Object: 1 MPI processes
  type: newtonls
  maximum iterations=2000, maximum function evaluations=2000
  tolerances: relative=0.0001, absolute=1e-05, solution=1e-05
  total number of linear solver iterations=3
  total number of function evaluations=2
  norm schedule ALWAYS
  SNESLineSearch Object: 1 MPI processes
    type: bt
      interpolation: cubic
      alpha=1.000000e-04
    maxstep=1.000000e+08, minlambda=1.000000e-12
    tolerances: relative=1.000000e-08, absolute=1.000000e-15, lambda=1.000000e-08
    maximum iterations=40
  KSP Object: 1 MPI processes
    type: gmres
      restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
      happy breakdown tolerance 1e-30
    maximum iterations=5000, initial guess is zero
    tolerances: relative=1e-05, absolute=1e-06, divergence=10000.
    left preconditioning
    using PRECONDITIONED norm type for convergence test
  PC Object: 1 MPI processes
    type: bjacobi
      number of blocks = 18000
      Local solve is same for all blocks, in the following KSP and PC objects:
      KSP Object: (sub_) 1 MPI processes
        type: preonly
        maximum iterations=10000, initial guess is zero
        tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
        left preconditioning
        using NONE norm type for convergence test
      PC Object: (sub_) 1 MPI processes
        type: lu
          out-of-place factorization
          tolerance for zero pivot 2.22045e-14
          matrix ordering: nd
          factor fill ratio given 0., needed 0.
            Factored matrix follows:
              Mat Object: 1 MPI processes
                type: mkl_pardiso
                rows=6, cols=6
                package used to perform factorization: mkl_pardiso
                total: nonzeros=26, allocated nonzeros=26
                total number of mallocs used during MatSetValues calls =0
                MKL_PARDISO run parameters:
                  MKL_PARDISO phase: 33
                  MKL_PARDISO iparm[1]: 1
                  MKL_PARDISO iparm[2]: 2
                  MKL_PARDISO iparm[3]: 1
                  MKL_PARDISO iparm[4]: 0
                  MKL_PARDISO iparm[5]: 0
                  MKL_PARDISO iparm[6]: 0
                  MKL_PARDISO iparm[7]: 0
                  MKL_PARDISO iparm[8]: 0
                  MKL_PARDISO iparm[9]: 0
                  MKL_PARDISO iparm[10]: 13
                  MKL_PARDISO iparm[11]: 1
                  MKL_PARDISO iparm[12]: 0
                  MKL_PARDISO iparm[13]: 1
                  MKL_PARDISO iparm[14]: 0
                  MKL_PARDISO iparm[15]: 144
                  MKL_PARDISO iparm[16]: 144
                  MKL_PARDISO iparm[17]: 0
                  MKL_PARDISO iparm[18]: 37
                  MKL_PARDISO iparm[19]: 0
                  MKL_PARDISO iparm[20]: 0
                  MKL_PARDISO iparm[21]: 0
                  MKL_PARDISO iparm[22]: 0
                  MKL_PARDISO iparm[23]: 0
                  MKL_PARDISO iparm[24]: 0
                  MKL_PARDISO iparm[25]: 0
                  MKL_PARDISO iparm[26]: 0
                  MKL_PARDISO iparm[27]: 0
                  MKL_PARDISO iparm[28]: 0
                  MKL_PARDISO iparm[29]: 0
                  MKL_PARDISO iparm[30]: 0
                  MKL_PARDISO iparm[31]: 0
                  MKL_PARDISO iparm[32]: 0
                  MKL_PARDISO iparm[33]: 0
                  MKL_PARDISO iparm[34]: -1
                  MKL_PARDISO iparm[35]: 1
                  MKL_PARDISO iparm[36]: 0
                  MKL_PARDISO iparm[37]: 0
                  MKL_PARDISO iparm[38]: 0
                  MKL_PARDISO iparm[39]: 0
                  MKL_PARDISO iparm[40]: 0
                  MKL_PARDISO iparm[41]: 0
                  MKL_PARDISO iparm[42]: 0
                  MKL_PARDISO iparm[43]: 0
                  MKL_PARDISO iparm[44]: 0
                  MKL_PARDISO iparm[45]: 0
                  MKL_PARDISO iparm[46]: 0
                  MKL_PARDISO iparm[47]: 0
                  MKL_PARDISO iparm[48]: 0
                  MKL_PARDISO iparm[49]: 0
                  MKL_PARDISO iparm[50]: 0
                  MKL_PARDISO iparm[51]: 0
                  MKL_PARDISO iparm[52]: 0
                  MKL_PARDISO iparm[53]: 0
                  MKL_PARDISO iparm[54]: 0
                  MKL_PARDISO iparm[55]: 0
                  MKL_PARDISO iparm[56]: 0
                  MKL_PARDISO iparm[57]: -1
                  MKL_PARDISO iparm[58]: 0
                  MKL_PARDISO iparm[59]: 0
                  MKL_PARDISO iparm[60]: 0
                  MKL_PARDISO iparm[61]: 144
                  MKL_PARDISO iparm[62]: 145
                  MKL_PARDISO iparm[63]: 21
                  MKL_PARDISO iparm[64]: 0
                  MKL_PARDISO maxfct: 1
                  MKL_PARDISO mnum: 1
                  MKL_PARDISO mtype: 11
                  MKL_PARDISO n: 6
                  MKL_PARDISO nrhs: 1
                  MKL_PARDISO msglvl: 0
        linear system matrix = precond matrix:
        Mat Object: 1 MPI processes
          type: seqaij
          rows=6, cols=6
          total: nonzeros=26, allocated nonzeros=26
          total number of mallocs used during MatSetValues calls =0
            using I-node routines: found 4 nodes, limit used is 5
    linear system matrix = precond matrix:
    Mat Object: 1 MPI processes
      type: seqaij
      rows=108000, cols=108000
      total: nonzeros=2868000, allocated nonzeros=8640000
      total number of mallocs used during MatSetValues calls =0
        not using I-node routines
************************************************************************************************************************
***               WIDEN YOUR WINDOW TO 120 CHARACTERS.  Use 'enscript -r -fCourier9' to print this document          ***
************************************************************************************************************************

---------------------------------------------- PETSc Performance Summary: ----------------------------------------------

E:\Documents\Visual Studio 2015\Projects\compsim\x64\Release\compsim.exe on a named ALIREZA-PC with 1 processor, by AliReza Tue Aug 28 13:57:09 2018
Using Petsc Development GIT revision: v3.9.3-1238-gce82fdcfd6  GIT Date: 2018-08-27 15:47:19 -0500

                         Max       Max/Min     Avg       Total
Time (sec):           1.353e+02      1.000   1.353e+02
Objects:              1.980e+05      1.000   1.980e+05
Flop:                 2.867e+07      1.000   2.867e+07  2.867e+07
Flop/sec:             2.119e+05      1.000   2.119e+05  2.119e+05
MPI Messages:         0.000e+00      0.000   0.000e+00  0.000e+00
MPI Message Lengths:  0.000e+00      0.000   0.000e+00  0.000e+00
MPI Reductions:       0.000e+00      0.000

Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract)
                          e.g., VecAXPY() for real vectors of length N --> 2N flop
                          and VecAXPY() for complex vectors of length N --> 8N flop

Summary of Stages:   ----- Time ------  ----- Flop ------   --- Messages ---  -- Message Lengths --  -- Reductions --
                        Avg     %Total     Avg     %Total    Count   %Total     Avg        %Total     Count   %Total
 0:      Main Stage: 1.3529e+02 100.0%  2.8668e+07 100.0%  0.000e+00   0.0%  0.000e+00        0.0%  0.000e+00   0.0%

------------------------------------------------------------------------------------------------------------------------
See the 'Profiling' chapter of the users' manual for details on interpreting output.
Phase summary info:
   Count: number of times phase was executed
   Time and Flop: Max - maximum over all processors
                  Ratio - ratio of maximum to minimum over all processors
   Mess: number of messages sent
   AvgLen: average message length (bytes)
   Reduct: number of global reductions
   Global: entire computation
   Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop().
      %T - percent time in this phase         %F - percent flop in this phase
      %M - percent messages in this phase     %L - percent message lengths in this phase
      %R - percent reductions in this phase
   Total Mflop/s: 10e-6 * (sum of flop over all processors)/(max time over all processors)
------------------------------------------------------------------------------------------------------------------------
Event                Count      Time (sec)     Flop                              --- Global ---  --- Stage ----  Total
                   Max Ratio  Max     Ratio   Max  Ratio  Mess   AvgLen  Reduct  %T %F %M %L %R  %T %F %M %L %R Mflop/s
------------------------------------------------------------------------------------------------------------------------

--- Event Stage 0: Main Stage

BuildTwoSidedF         2 1.0 1.2701e-04 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
SNESSolve              1 1.0 1.1583e+02 1.0 2.87e+07 1.0 0.0e+00 0.0e+00 0.0e+00 86 100 0  0  0  86 100 0  0  0     0
SNESFunctionEval       2 1.0 5.4101e+00 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  4  0  0  0  0   4  0  0  0  0     0
SNESJacobianEval       1 1.0 9.3770e+01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 69  0  0  0  0  69  0  0  0  0     0
SNESLineSearch         1 1.0 3.1033e+00 1.0 6.82e+06 1.0 0.0e+00 0.0e+00 0.0e+00  2 24  0  0  0   2 24  0  0  0     2
VecDot                 1 1.0 1.8688e-04 1.0 2.16e+05 1.0 0.0e+00 0.0e+00 0.0e+00  0  1  0  0  0   0  1  0  0  0  1156
VecMDot                3 1.0 9.9299e-04 1.0 1.30e+06 1.0 0.0e+00 0.0e+00 0.0e+00  0  5  0  0  0   0  5  0  0  0  1305
VecNorm                7 1.0 6.0845e-03 1.0 1.51e+06 1.0 0.0e+00 0.0e+00 0.0e+00  0  5  0  0  0   0  5  0  0  0   248
VecScale               4 1.0 1.4437e+00 1.0 4.32e+05 1.0 0.0e+00 0.0e+00 0.0e+00  1  2  0  0  0   1  2  0  0  0     0
VecCopy                3 1.0 1.6059e+00 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  1  0  0  0  0   1  0  0  0  0     0
VecSet             90002 1.0 1.3843e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecAXPY                1 1.0 3.1733e-01 1.0 2.16e+05 1.0 0.0e+00 0.0e+00 0.0e+00  0  1  0  0  0   0  1  0  0  0     1
VecWAXPY               1 1.0 2.2665e-04 1.0 1.08e+05 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0   477
VecMAXPY               4 1.0 8.6085e-04 1.0 1.94e+06 1.0 0.0e+00 0.0e+00 0.0e+00  0  7  0  0  0   0  7  0  0  0  2258
VecAssemblyBegin       2 1.0 1.6379e-04 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecAssemblyEnd         2 1.0 1.4112e-05 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecReduceArith         2 1.0 3.1304e-04 1.0 4.32e+05 1.0 0.0e+00 0.0e+00 0.0e+00  0  2  0  0  0   0  2  0  0  0  1380
VecReduceComm          1 1.0 2.1382e-06 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecNormalize           4 1.0 1.4441e+00 1.0 1.30e+06 1.0 0.0e+00 0.0e+00 0.0e+00  1  5  0  0  0   1  5  0  0  0     1
MatMult                4 1.0 2.0402e-02 1.0 2.25e+07 1.0 0.0e+00 0.0e+00 0.0e+00  0 79  0  0  0   0 79  0  0  0  1103
MatSolve           72000 1.0 5.3514e+00 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  4  0  0  0  0   4  0  0  0  0     0
MatLUFactorSym     18000 1.0 1.9405e+00 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  1  0  0  0  0   1  0  0  0  0     0
MatLUFactorNum     18000 1.0 1.8373e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatAssemblyBegin   18002 1.0 1.0409e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatAssemblyEnd     18002 1.0 3.3879e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatGetRowIJ        18000 1.0 3.1819e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatCreateSubMats       1 1.0 3.7015e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatGetOrdering     18000 1.0 3.0787e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatZeroEntries         1 1.0 2.7952e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatView                3 1.0 2.9153e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
KSPSetUp           18001 1.0 7.5898e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
KSPSolve               1 1.0 1.6244e+01 1.0 2.16e+07 1.0 0.0e+00 0.0e+00 0.0e+00 12 75  0  0  0  12 75  0  0  0     1
KSPGMRESOrthog         3 1.0 8.4669e-02 1.0 2.59e+06 1.0 0.0e+00 0.0e+00 0.0e+00  0  9  0  0  0   0  9  0  0  0    31
PCSetUp            18001 1.0 3.3536e+00 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  2  0  0  0  0   2  0  0  0  0     0
PCSetUpOnBlocks        1 1.0 2.5973e+00 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  2  0  0  0  0   2  0  0  0  0     0
PCApply                4 1.0 6.2752e+00 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  5  0  0  0  0   5  0  0  0  0     0
PCApplyOnBlocks    72000 1.0 5.9278e+00 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  4  0  0  0  0   4  0  0  0  0     0
------------------------------------------------------------------------------------------------------------------------

Memory usage is given in bytes:

Object Type          Creations   Destructions     Memory  Descendants' Mem.
Reports information only for process 0.

--- Event Stage 0: Main Stage

                SNES         1              0            0        0.
              DMSNES         1              0            0        0.
      SNESLineSearch         1              0            0        0.
              Vector     36020              0            0        0.
              Matrix     36001              0            0        0.
    Distributed Mesh         2              0            0        0.
           Index Set     90000          36000     29088000        0.
   Star Forest Graph         4              0            0        0.
     Discrete System         2              0            0        0.
       Krylov Solver     18001              0            0        0.
     DMKSP interface         1              0            0        0.
      Preconditioner     18001              0            0        0.
              Viewer         1              0            0        0.
========================================================================================================================
Average time to get PetscTime(): 1.28294e-07
#PETSc Option Table entries:
-ksp_atol 1e-6
-ksp_rtol 1e-5
-snes_rtol 1e-4
-sub_ksp_type preonly
-sub_pc_factor_mat_solver_type mkl_pardiso
-sub_pc_type lu
#End of PETSc Option Table entries
Compiled without FORTRAN kernels
Compiled with full precision matrices (default)
sizeof(short) 2 sizeof(int) 4 sizeof(long) 4 sizeof(void*) 8 sizeof(PetscScalar) 8 sizeof(PetscInt) 4
Configure options: --prefix=/home/alireza/PetscGit
  --with-mkl_pardiso-dir=/cygdrive/E/Program_Files_x86/IntelSWTools/compilers_and_libraries/windows/mkl
  --with-hypre-include=/cygdrive/E/hypre-2.11.2/Builds/Bins/include
  --with-hypre-lib=/cygdrive/E/hypre-2.11.2/Builds/Bins/lib/HYPRE.lib
  --with-ml-include=/cygdrive/E/Trilinos-master/Bins/include
  --with-ml-lib=/cygdrive/E/Trilinos-master/Bins/lib/ml.lib
  --with-openmp --with-cc="win32fe icl" --with-fc="win32fe ifort"
  --with-mpi-include=/cygdrive/E/Program_Files_x86/IntelSWTools/compilers_and_libraries/windows/mpi/intel64/include
  --with-mpi-lib=/cygdrive/E/Program_Files_x86/IntelSWTools/compilers_and_libraries/windows/mpi/intel64/lib/impi.lib
  --with-mpi-mpiexec=/cygdrive/E/Program_Files_x86/IntelSWTools/compilers_and_libraries/windows/mpi/intel64/bin/mpiexec.exe
  --with-debugging=0
  --with-blas-lib=/cygdrive/E/Program_Files_x86/IntelSWTools/compilers_and_libraries/windows/mkl/lib/intel64_win/mkl_rt.lib
  --with-lapack-lib=/cygdrive/E/Program_Files_x86/IntelSWTools/compilers_and_libraries/windows/mkl/lib/intel64_win/mkl_rt.lib
  -CFLAGS="-O2 -MT -wd4996 -Qopenmp" -CXXFLAGS="-O2 -MT -wd4996 -Qopenmp" -FFLAGS="-MT -O2 -Qopenmp"
-----------------------------------------
Libraries compiled on 2018-08-27 22:42:15 on AliReza-PC
Machine characteristics: CYGWIN_NT-6.1-2.10.0-0.325-5-3-x86_64-64bit
Using PETSc directory: /home/alireza/PetscGit
Using PETSc arch:
-----------------------------------------

Using C compiler: /home/alireza/PETSc/lib/petsc/bin/win32fe/win32fe icl -O2 -MT -wd4996 -Qopenmp
Using Fortran compiler: /home/alireza/PETSc/lib/petsc/bin/win32fe/win32fe ifort -MT -O2 -Qopenmp -fpp
-----------------------------------------

Using include paths: -I/home/alireza/PetscGit/include
  -I/cygdrive/E/Program_Files_x86/IntelSWTools/compilers_and_libraries/windows/mkl/include
  -I/cygdrive/E/hypre-2.11.2/Builds/Bins/include
  -I/cygdrive/E/Trilinos-master/Bins/include
  -I/cygdrive/E/Program_Files_x86/IntelSWTools/compilers_and_libraries/windows/mpi/intel64/include
-----------------------------------------

Using C linker: /home/alireza/PETSc/lib/petsc/bin/win32fe/win32fe icl
Using Fortran linker: /home/alireza/PETSc/lib/petsc/bin/win32fe/win32fe ifort
Using libraries: -L/home/alireza/PetscGit/lib -L/home/alireza/PetscGit/lib -lpetsc
  /cygdrive/E/hypre-2.11.2/Builds/Bins/lib/HYPRE.lib
  /cygdrive/E/Trilinos-master/Bins/lib/ml.lib
  /cygdrive/E/Program_Files_x86/IntelSWTools/compilers_and_libraries/windows/mkl/lib/intel64_win/mkl_rt.lib
  /cygdrive/E/Program_Files_x86/IntelSWTools/compilers_and_libraries/windows/mkl/lib/intel64_win/mkl_rt.lib
  /cygdrive/E/Program_Files_x86/IntelSWTools/compilers_and_libraries/windows/mpi/intel64/lib/impi.lib
  Gdi32.lib User32.lib Advapi32.lib Kernel32.lib Ws2_32.lib
-----------------------------------------

--
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/