[petsc-users] general problem with application based on PETSc

Karl Rupp rupp at mcs.anl.gov
Sun Jan 19 04:34:32 CST 2014


Hi,

On 01/19/2014 10:39 AM, yan chang wrote:
> I have used Eigen (a C++ template library) to set up a spin dynamics
> simulation program, and now I would like to speed up the simulation in
> parallel using PETSc, mostly using sparse Mat and Vec operations. After
> some rewriting (just replacing the matrix operations with PETSc), I can
> first set up a spin system and calculate its relaxation matrix R.
>
> PetscErrorCode ierr;
> PetscInitialize(&argc, &argv, (char*) 0, help);
>    SpinSystem sys;
>    sys.set_magnet(14.1);
>    sys.set_isotopes("1H 13C");
>    sys.set_basis("sphten_liouv");
>    sys.set_zeeman_eigs("(7 15 -22) (11 18 -29)");
>    sys.set_zeeman_euler("(1.0472 0.7854 0.6283) (0.5236 0.4488 0.3927)");
>    sys.set_coordinates("1 (0 0 0) 2 (0 0 1.02)");
>    sys.create();
>    Relaxation relax(sys);
>    Mat R = relax.Rmat();
>    ierr = MatView(R, PETSC_VIEWER_STDOUT_WORLD);
>    ierr = MatDestroy(&R);
>    CHKERRQ(ierr);
>    ierr = PetscFinalize();
>
> Running it with mpirun -np 1 ./Release/spin-scenarios-petsc works fine.

Please don't forget to check the error code through CHKERRQ after *each* 
function call.
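For instance, a stripped-down main() along the following lines checks every 
return code (this is only a sketch; the matrix created here is a small 
placeholder rather than your actual relaxation matrix R):

   #include <petscmat.h>

   int main(int argc, char **argv)
   {
     PetscErrorCode ierr;
     Mat            R;

     ierr = PetscInitialize(&argc, &argv, (char*)0, NULL); if (ierr) return ierr;
     /* placeholder matrix instead of relax.Rmat() */
     ierr = MatCreate(PETSC_COMM_WORLD, &R);CHKERRQ(ierr);
     ierr = MatSetSizes(R, PETSC_DECIDE, PETSC_DECIDE, 16, 16);CHKERRQ(ierr);
     ierr = MatSetFromOptions(R);CHKERRQ(ierr);
     ierr = MatSetUp(R);CHKERRQ(ierr);
     ierr = MatAssemblyBegin(R, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
     ierr = MatAssemblyEnd(R, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
     ierr = MatView(R, PETSC_VIEWER_STDOUT_WORLD);CHKERRQ(ierr);
     ierr = MatDestroy(&R);CHKERRQ(ierr);
     ierr = PetscFinalize();
     return ierr;
   }

This way a failure in any PETSc call produces a full error trace instead 
of propagating silently.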


> However, if I set np to 2 or more, the program creates that many copies
> of the spin system object and it takes a long time to get the result. I
> am confused because the Mat and Vec variables in this implementation are
> not sequential as in Eigen, so could you give me some advice on how to
> organize my program based on PETSc?

a) Did you verify that you are using the correct 'mpirun'? You need to 
use the one you specified for the PETSc installation. Otherwise, the 
same executable will be called multiple times in sequential mode. Hint: 
MatView should report your matrix R as being equally split over the MPI 
ranks.
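A quick sanity check from within the code (just a small sketch to drop in 
right after PetscInitialize) is to have every rank print the size of 
PETSC_COMM_WORLD; with the wrong mpirun, each process reports a 
communicator size of 1:

   PetscMPIInt size, rank;
   ierr = MPI_Comm_size(PETSC_COMM_WORLD, &size);CHKERRQ(ierr);
   ierr = MPI_Comm_rank(PETSC_COMM_WORLD, &rank);CHKERRQ(ierr);
   ierr = PetscPrintf(PETSC_COMM_SELF, "rank %d of %d\n", rank, size);CHKERRQ(ierr);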

b) To get reasonable performance and scalability you need to account for 
the multiple MPI ranks already in your SpinSystem and Relaxation objects: 
each MPI rank should set up only its part of the global system. Have a 
look at e.g. the examples in src/ksp/ksp/examples/tutorials/ or 
src/snes/examples/tutorials to get an idea of how this is done. Keep in 
mind that the distributed matrix is decomposed by rows, i.e. each MPI 
rank stores a horizontal strip of the full matrix R, but PETSc will take 
care of the details if you set values in rows that are not locally owned.
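Very roughly, the assembly of a distributed R follows the pattern below 
(a sketch only; the global size, preallocation numbers, and the entries 
set are placeholders, not your actual relaxation data):

   Mat      R;
   PetscInt n = 100, rstart, rend, i;   /* n: global dimension, placeholder */

   ierr = MatCreate(PETSC_COMM_WORLD, &R);CHKERRQ(ierr);
   ierr = MatSetSizes(R, PETSC_DECIDE, PETSC_DECIDE, n, n);CHKERRQ(ierr);
   ierr = MatSetFromOptions(R);CHKERRQ(ierr);
   ierr = MatMPIAIJSetPreallocation(R, 5, NULL, 2, NULL);CHKERRQ(ierr);
   ierr = MatSeqAIJSetPreallocation(R, 5, NULL);CHKERRQ(ierr);
   ierr = MatGetOwnershipRange(R, &rstart, &rend);CHKERRQ(ierr);
   for (i = rstart; i < rend; ++i) {      /* only the locally owned rows */
     PetscScalar v = 1.0;                 /* placeholder entry */
     ierr = MatSetValues(R, 1, &i, 1, &i, &v, INSERT_VALUES);CHKERRQ(ierr);
   }
   ierr = MatAssemblyBegin(R, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
   ierr = MatAssemblyEnd(R, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);

Each rank only touches the rows it owns here; if you do set entries in 
rows owned by other ranks, the MatAssembly calls will communicate them 
for you.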


> I am a beginner with both Open MPI and PETSc. Also, after the simple
> setup part there will be more time-consuming work to do; that is why I
> switched from Eigen to PETSc.
> Thanks a lot!

My advice is to familiarize yourself with some of the simpler tutorials 
to see how such systems are set up in parallel, then incrementally carry 
these techniques over to your use case.

Best regards,
Karli


