[petsc-users] Reply: Question with fieldsplit in PETSc
Zhu, Qiming
qiming2 at illinois.edu
Sat Feb 23 16:29:01 CST 2019
Dear Dr. Knepley,
Thank you for your detailed answer and help with this problem. I see your point now, and I agree that it is better to build the problem directly as a single MPIAIJ matrix for fieldsplit. Thank you for all your help. Have a nice day.
Yours sincerely,
Qiming Zhu
________________________________
From: Matthew Knepley <knepley at gmail.com>
Sent: February 23, 2019 15:09:17
To: Zhu, Qiming
Cc: petsc-users at mcs.anl.gov
Subject: Re: [petsc-users] Question with fieldsplit in PETSc
On Thu, Feb 21, 2019 at 3:45 PM Zhu, Qiming via petsc-users <petsc-users at mcs.anl.gov> wrote:
Dear all,
Sorry to disturb you. I am a PETSc user trying to use fieldsplit to precondition a Navier-Stokes problem, and I have run into a problem. I first define a nest matrix and then get the ISs from the nested matrix, but this only works on one process: in the parallel case I get a zero solution. Are there any special requirements on the IS definition for fieldsplit? I include one code here. If you have any idea, I hope you can reply soon. Thank you very much for your help.
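The setup is roughly like the following sketch (a simplified illustration only, not the attached code; the block names and sizes are illustrative):

  Mat A00, A01, A10, A11, blocks[4], Anest;
  IS  isrow[2];
  KSP ksp;
  PC  pc;

  /* ... assemble the four parallel MPIAIJ blocks A00, A01, A10, A11 ... */

  /* Stick the blocks together as a 2x2 nest */
  blocks[0] = A00; blocks[1] = A01; blocks[2] = A10; blocks[3] = A11;
  MatCreateNest(PETSC_COMM_WORLD, 2, NULL, 2, NULL, blocks, &Anest);

  /* Get the row index sets of the two blocks back from the nest */
  MatNestGetISs(Anest, isrow, NULL);

  /* Use those ISs to define the two fieldsplit fields */
  KSPCreate(PETSC_COMM_WORLD, &ksp);
  KSPSetOperators(ksp, Anest, Anest);
  KSPGetPC(ksp, &pc);
  PCSetType(pc, PCFIELDSPLIT);
  PCFieldSplitSetIS(pc, "0", isrow[0]);
  PCFieldSplitSetIS(pc, "1", isrow[1]);
  KSPSetFromOptions(ksp);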
I cleaned up the code a little so I could see what was going on. I attach my version here. If you run on 1 proc, you get what you expect:
master *:~/Downloads/tmp/blaise$ $PETSC_DIR/../bin/mpiexec -n 1 ./ex5 -ksp_monitor_true_residual -ksp_view_mat -sol_view -rhs_view -sys_view
A00 block print here.
Mat Object: 1 MPI processes
type: mpiaij
row 0: (0, 1.)
row 1: (1, 2.)
row 2: (2, 3.)
row 3: (3, 4.)
A01 block print here.
Mat Object: 1 MPI processes
type: mpiaij
row 0:
row 1:
row 2:
row 3:
A10 block print here.
Mat Object: 1 MPI processes
type: mpiaij
row 0:
row 1:
row 2:
row 3:
A11 block print here.
Mat Object: 1 MPI processes
type: mpiaij
row 0: (0, 5.)
row 1: (1, 6.)
row 2: (2, 7.)
row 3: (3, 8.)
IS Object: 1 MPI processes
type: stride
Index set is permutation
Number of indices in (stride) set 4
0 0
1 1
2 2
3 3
IS Object: 1 MPI processes
type: stride
Number of indices in (stride) set 4
0 4
1 5
2 6
3 7
0 KSP preconditioned resid norm 2.828427124746e+00 true resid norm 1.428285685709e+01 ||r(i)||/||b|| 1.000000000000e+00
1 KSP preconditioned resid norm 4.154074181055e-16 true resid norm 3.475547814546e-15 ||r(i)||/||b|| 2.433370192898e-16
Mat Object: 1 MPI processes
type: nest
Matrix object:
type=nest, rows=2, cols=2
MatNest structure:
(0,0) : prefix="fieldsplit_0_", type=mpiaij, rows=4, cols=4
(0,1) : type=mpiaij, rows=4, cols=4
(1,0) : type=mpiaij, rows=4, cols=4
(1,1) : prefix="fieldsplit_1_", type=mpiaij, rows=4, cols=4
Vec Object: Rhs 1 MPI processes
type: mpi
Process [0]
1.
2.
3.
4.
5.
6.
7.
8.
Vec Object: Sol 1 MPI processes
type: mpi
Process [0]
1.
1.
1.
1.
1.
1.
1.
1.
Mat Object: System Matrix 1 MPI processes
type: seqaij
row 0: (0, 1.)
row 1: (1, 2.)
row 2: (2, 3.)
row 3: (3, 4.)
row 4: (4, 5.)
row 5: (5, 6.)
row 6: (6, 7.)
row 7: (7, 8.)
If you run on 2 procs, you get the "wrong" answer. This is because your matrix is not in the order you think it is. I show this by converting to AIJ and printing it. This happens because you are sticking together _parallel_ matrices with Nest, so the local parts become contiguous:
master *:~/Downloads/tmp/blaise$ $PETSC_DIR/../bin/mpiexec -n 2 ./ex5 -ksp_monitor_true_residual -ksp_view_mat -sol_view -rhs_view -sys_view
A00 block print here.
Mat Object: 2 MPI processes
type: mpiaij
row 0: (0, 1.)
row 1: (1, 2.)
row 2: (2, 3.)
row 3: (3, 4.)
A01 block print here.
Mat Object: 2 MPI processes
type: mpiaij
row 0:
row 1:
row 2:
row 3:
A10 block print here.
Mat Object: 2 MPI processes
type: mpiaij
row 0:
row 1:
row 2:
row 3:
A11 block print here.
Mat Object: 2 MPI processes
type: mpiaij
row 0: (0, 5.)
row 1: (1, 6.)
row 2: (2, 7.)
row 3: (3, 8.)
IS Object: 2 MPI processes
type: stride
[0] Index set is permutation
[0] Number of indices in (stride) set 2
[0] 0 0
[0] 1 1
[1] Number of indices in (stride) set 2
[1] 0 4
[1] 1 5
IS Object: 2 MPI processes
type: stride
[0] Number of indices in (stride) set 2
[0] 0 2
[0] 1 3
[1] Number of indices in (stride) set 2
[1] 0 6
[1] 1 7
0 KSP preconditioned resid norm 3.135637450698e+00 true resid norm 1.428285685709e+01 ||r(i)||/||b|| 1.000000000000e+00
1 KSP preconditioned resid norm -0.000000000000e+00 true resid norm 1.620317160370e-15 ||r(i)||/||b|| 1.134448924737e-16
Mat Object: 2 MPI processes
type: nest
Matrix object:
type=nest, rows=2, cols=2
MatNest structure:
(0,0) : prefix="fieldsplit_0_", type=mpiaij, rows=4, cols=4
(0,1) : type=mpiaij, rows=4, cols=4
(1,0) : type=mpiaij, rows=4, cols=4
(1,1) : prefix="fieldsplit_1_", type=mpiaij, rows=4, cols=4
Vec Object: Rhs 2 MPI processes
type: mpi
Process [0]
1.
2.
3.
4.
Process [1]
5.
6.
7.
8.
Vec Object: Sol 2 MPI processes
type: mpi
Process [0]
1.
1.
0.6
0.666667
Process [1]
1.66667
1.5
1.
1.
Mat Object: System Matrix 2 MPI processes
type: mpiaij
row 0: (0, 1.)
row 1: (1, 2.)
row 2: (2, 5.)
row 3: (3, 6.)
row 4: (4, 3.)
row 5: (5, 4.)
row 6: (6, 7.)
row 7: (7, 8.)
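For reference, the "System Matrix" print above comes from converting the nest to AIJ, roughly like this sketch (Anest stands for the nest operator; the name is illustrative):

  Mat Aaij;

  /* Flatten the nest into an ordinary AIJ matrix and print it, so the
     actual global row ordering becomes visible */
  MatConvert(Anest, MATAIJ, MAT_INITIAL_MATRIX, &Aaij);
  PetscObjectSetName((PetscObject)Aaij, "System Matrix");
  MatView(Aaij, PETSC_VIEWER_STDOUT_WORLD);
  MatDestroy(&Aaij);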
In general, I think it's a bad idea to use Nest. Just build an AIJ matrix the way you want and make some ISes, as in the sketch below.
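A minimal sketch of that approach (the interleaved u/p ordering, the 8x8 size, and the names isu, isp, and ksp are illustrative assumptions, not taken from your code):

  Mat      A;
  IS       isu, isp;
  PC       pc;
  PetscInt rstart, rend, nloc;

  /* One parallel AIJ matrix holding both fields in the global ordering you choose,
     here interleaved as u0 p0 u1 p1 ... with 8 global rows */
  MatCreateAIJ(PETSC_COMM_WORLD, PETSC_DECIDE, PETSC_DECIDE, 8, 8, 2, NULL, 2, NULL, &A);
  /* ... MatSetValues() with global indices for every block ... */
  MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);
  MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);

  /* Each process lists only the rows it owns: even rows are field 0, odd rows field 1
     (this assumes every process owns an even number of rows) */
  MatGetOwnershipRange(A, &rstart, &rend);
  nloc = (rend - rstart) / 2;
  ISCreateStride(PETSC_COMM_WORLD, nloc, rstart,     2, &isu);
  ISCreateStride(PETSC_COMM_WORLD, nloc, rstart + 1, 2, &isp);

  /* Hand the splits to fieldsplit; ksp is an existing KSP that uses A as its operator */
  KSPGetPC(ksp, &pc);
  PCSetType(pc, PCFIELDSPLIT);
  PCFieldSplitSetIS(pc, "0", isu);
  PCFieldSplitSetIS(pc, "1", isp);

This way the same ISs describe the same global rows no matter how many processes you run on, which is exactly the property the nest layout loses.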
Thanks,
Matt
Yours sincerely,
Qiming Zhu,
--
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener
https://www.cse.buffalo.edu/~knepley/