<div dir="ltr"><div><div><div><div><div><div><div><div><div>Dear Karli,<br></div>Thanks for your suggestions! <br></div>Right now I test 'src/ksp/ksp/examples/tutorials/ex11.c' with n=2<br></div>I rewrite the sample into a c++ class, and run it as mpirun -np 2 .....<br>
</div>I notice that the vector x is stored in 2 processors.<br></div>But the matrix A seems not as you told me 'MatView should report your matrix R as being equally split over the MPI ranks.'<br><br>Norm of error 1.19381e-16 iterations 4<br>
Vector Object: 2 MPI processes<br> type: mpi<br>Process [0]<br>4.16334e-17<br>0 + 1.38778e-17 i<br>Process [1]<br>0<br>1.11022e-16<br>Matrix Object: 1 MPI processes<br> type: mpiaij<br>row 0: (0, -7.11111 + 0.0800036 i) (1, -1) (2, -1) <br>
row 1: (0, -1) (1, -7.11111 + 0.00111359 i) (3, -1) <br>row 2: (0, -1) (2, -7.11111 + 0.0601271 i) (3, -1) <br>row 3: (1, -1) (2, -1) (3, -7.11111 + 0.0943018 i)<br><br></div>I thought Matrix Object should have 2 MPI processes.<br>
And again, thanks for your help.

Best regards,
Yan


On Sun, Jan 19, 2014 at 6:34 PM, Karl Rupp <rupp@mcs.anl.gov> wrote:

> Hi,
>
> On 01/19/2014 10:39 AM, yan chang wrote:
>
>> I have used Eigen (a C++ template library) to set up a spin dynamics
>> simulation program, and now I would like to speed up the simulation in
>> parallel using PETSc, mostly using sparse Mat and Vec operations. After
>> some rewriting (just replacing the matrix operations with PETSc), I can
>> first set up a spin system and calculate its relaxation matrix R.
>>
>> PetscErrorCode ierr;
>> PetscInitialize(&argc, &argv, (char*) 0, help);
>> SpinSystem sys;
>> sys.set_magnet(14.1);
>> sys.set_isotopes("1H 13C");
>> sys.set_basis("sphten_liouv");
>> sys.set_zeeman_eigs("(7 15 -22) (11 18 -29)");
>> sys.set_zeeman_euler("(1.0472 0.7854 0.6283) (0.5236 0.4488 0.3927)");
>> sys.set_coordinates("1 (0 0 0) 2 (0 0 1.02)");
>> sys.create();
>> Relaxation relax(sys);
>> Mat R = relax.Rmat();
>> ierr = MatView(R, PETSC_VIEWER_STDOUT_WORLD);
>> ierr = MatDestroy(&R);
>> CHKERRQ(ierr);
>> ierr = PetscFinalize();
>>
>> And it runs fine with mpirun -np 1 ./Release/spin-scenarios-petsc.
>
> Please don't forget to check the error code through CHKERRQ after *each* function call.
>
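> As an illustration, here is how the tail of your snippet would look with full error checking (just a sketch; main() and the Mat R are assumed to be as in your code above):
>
> PetscErrorCode ierr;
> ierr = PetscInitialize(&argc, &argv, (char*) 0, help); if (ierr) return ierr;
> /* ... set up the spin system and obtain R as before ... */
> ierr = MatView(R, PETSC_VIEWER_STDOUT_WORLD); CHKERRQ(ierr); /* check every call */
> ierr = MatDestroy(&R); CHKERRQ(ierr);
> ierr = PetscFinalize();
> return ierr;
>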
>> However, if I set np to 2 or more, the program generates as many copies
>> of the spin system object as there are processes, and it is time
>> consuming to get the result. I am quite confused because, in this
>> implementation, the Mat and Vec variables are not sequential as they are
>> in Eigen, so could you give me some advice on how to organize my program
>> based on PETSc?
>
> a) Did you verify that you are using the correct 'mpirun'? You need to use the one you specified for the PETSc installation. Otherwise, the same executable will simply be launched multiple times in sequential mode. Hint: MatView should report your matrix R as being equally split over the MPI ranks.
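>
> A quick diagnostic (just a sketch) is to print the communicator size from inside the program:
>
> PetscMPIInt size;
> ierr = MPI_Comm_size(PETSC_COMM_WORLD, &size); CHKERRQ(ierr);
> ierr = PetscPrintf(PETSC_COMM_WORLD, "Running on %d MPI ranks\n", size); CHKERRQ(ierr);
>
> If this prints 1 even though you launched with 'mpirun -np 2', the mpirun you are calling does not belong to the MPI library PETSc was built against.
>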
> b) To get reasonable performance and scalability, you need to account for the multiple MPI ranks in your SpinSystem and Relaxation objects from the start. Each MPI rank should then set up its own part of the global system. Have a look at, e.g., the examples in src/ksp/ksp/examples/tutorials/ or src/snes/examples/tutorials/ to get an idea of how this is done. Keep in mind that the distributed matrix is decomposed by rows, i.e. each MPI rank stores a horizontal strip of the full matrix R, but PETSc will take care of the details if you set values in rows which are not locally owned.
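>
> As a rough sketch of that row-wise setup (N and the loop body are placeholders, not taken from your code):
>
> Mat R;
> PetscInt rstart, rend, i;
> ierr = MatCreate(PETSC_COMM_WORLD, &R); CHKERRQ(ierr); /* parallel communicator, not PETSC_COMM_SELF */
> ierr = MatSetSizes(R, PETSC_DECIDE, PETSC_DECIDE, N, N); CHKERRQ(ierr); /* global size N x N */
> ierr = MatSetFromOptions(R); CHKERRQ(ierr);
> ierr = MatSetUp(R); CHKERRQ(ierr);
> ierr = MatGetOwnershipRange(R, &rstart, &rend); CHKERRQ(ierr); /* this rank owns rows [rstart, rend) */
> for (i = rstart; i < rend; ++i) {
>   /* fill row i via MatSetValues(); values in rows owned by other
>      ranks may also be set and are communicated during assembly */
> }
> ierr = MatAssemblyBegin(R, MAT_FINAL_ASSEMBLY); CHKERRQ(ierr);
> ierr = MatAssemblyEnd(R, MAT_FINAL_ASSEMBLY); CHKERRQ(ierr);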
>
>> I am a beginner with both Open MPI and PETSc. Also, after this simple
>> setup part there will be more time-consuming work to do, which is why I
>> am switching from Eigen to PETSc.
>> Thanks a lot!
>
> My advice is to familiarize yourself with some of the simpler tutorials to see how such systems are set up in parallel. Then incrementally migrate these techniques over to your use case.
>
> Best regards,
> Karli