PETSc and parallel direct solvers
Mehdi Bostandoost
mbostandoust at yahoo.com
Tue May 13 23:15:47 CDT 2008
Hi,
For my master's thesis I needed to use the PETSc direct solvers, so I prepared a short report, which I have attached to this email.
Note: the cluster we had was not a good one, and the report goes back four years, to when I was using PETSc 2.1.6.
I thought it might be helpful.
Regards,
Mehdi
Lisandro Dalcin <dalcinl at gmail.com> wrote:
On 5/13/08, Lars Rindorf wrote:
> Dear Lisandro
>
> I also suspected that PETSc was using the default LU factorization, and indeed it reports 'type: lu' instead of 'type: mumps'. So you are right. I will try again later on a Linux computer to compare UMFPACK and MUMPS.
Indeed. Tell me your conclusions; I would love to know your results...
> In the comparison between UMFPACK and MUMPS you sent me (http://istanbul.be.itu.edu.tr/~huseyin/doc/frontal/node12.html), UMFPACK and MUMPS are almost equal in performance (they spell it 'MUPS'; their reference on 'MUPS' is from 1989, so maybe MUPS is a predecessor of MUMPS). If they are almost equal, then MUMPS is good enough for my purposes.
>
Well, the first author of the 1989 reference seems to be the same
Patrick Amestoy listed here:
http://graal.ens-lyon.fr/MUMPS/index.php?page=credits. As I warned,
the link is dated. Better to give it a try yourself!
Regards
> Thanks. KR, Lars
>
>
>
> -----Original message-----
> From: Lisandro Dalcin [mailto:dalcinl at gmail.com]
>
> Sent: 13 May 2008 20:04
> To: Lars Rindorf
> Cc: petsc-users at mcs.anl.gov
> Subject: Re: PETSc and parallel direct solvers
>
>
> On 5/13/08, Lars Rindorf wrote:
> > Dear Lisandro
> >
> > I have tried to compare MUMPS with UMFPACK for one of my systems. UMFPACK is four times faster (134 s) than MUMPS (581 s). I did not add the 'MatConvert' line to my program. I have given up on Cygwin and will receive a Linux computer later this week; then I will try it again. Do you think the missing 'MatConvert' line could cause the long calculation time? Or, rather, would including the missing line give a fourfold improvement in MUMPS performance?
>
> Perhaps I'm missing something, but if you are using petsc-2.3.3 or earlier (in petsc-dev there is now a MatSetSolverType; I have not found the time to look at it yet), then if you do not convert the matrix to the 'aijmumps' format, I guess PETSc ends up using the default, PETSc built-in LU factorization, and not MUMPS at all!
>
> To be completely sure about what your program is actually using, add '-ksp_view' to the command line. Then you can easily tell whether you are using MUMPS or not.
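>
> For instance, taking the run line from the earlier mail quoted below and simply appending '-ksp_view' (the process count of 2 is just an example), the PC section of the printed solver description shows which factorization is actually in use:
>
>   $ mpiexec -n 2 ./yourprogram -matconvert_type aijmumps \
>       -ksp_type preonly -pc_type lu -ksp_view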
>
> Finally, a disclaimer. I have never tried UMFPACK, so I have no idea whether it is actually faster or slower than MUMPS. But I want to make sure you are actually trying MUMPS. As you can see, selecting the LU solver in PETSc was a bit contrived; that is the reason Barry Smith reimplemented all of this, adding the MatSetSolverType() stuff.
>
> I'm posting this to petsc-users; please, any PETSc developer/user, correct me if I'm wrong in any of my comments above. I do not use direct methods frequently.
>
>
> Regards,
>
>
> >
> > Kind regards, Lars
> >
> > -----Original message-----
> > From: Lisandro Dalcin [mailto:dalcinl at gmail.com]
> > Sent: 13 May 2008 18:54
> > To: Lars Rindorf
> > Subject: PETSc and parallel direct solvers
> >
> >
> > Dear Lars, I saw your post to petsc-users; it bounced because you have
> > to subscribe to the list.
> >
> > I never used UMFPACK, but I have tried MUMPS with PETSc, and it seems to work just fine. Could you give it a try to see if it works for you?
> >
> > I usually do this to easily switch to MUMPS. First, in the source
> > code, after assembling your matrix, add the following:
> >
> > MatConvert(A, MATSAME, MAT_REUSE_MATRIX, &A); /* in-place conversion; the actual target type is taken from -matconvert_type at run time */
> >
> > And then, when you actually run your program, add the following to the command line:
> >
> > $ mpiexec -n <nprocs> ./yourprogram -matconvert_type aijmumps \
> >     -ksp_type preonly -pc_type lu
> >
> > This way, you will actually use MUMPS whenever you pass the '-matconvert_type aijmumps' option. If you run sequentially and do not pass the matconvert option, then PETSc will use its default LU factorization. Of course, you can also use MUMPS sequentially; depending on your hardware and compiler optimizations, MUMPS can be faster than the PETSc built-in linear solvers by a factor of two.
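> >
> > For context, a minimal sketch of how that MatConvert call fits into a
> > 2.3.3-era solve might look as follows (the variable names and the already
> > assembled A, b and x are assumptions for illustration, not taken from this
> > thread; error checking with CHKERRQ is omitted):
> >
> >   KSP ksp;
> >   MatConvert(A, MATSAME, MAT_REUSE_MATRIX, &A);     /* target type taken from -matconvert_type */
> >   KSPCreate(PETSC_COMM_WORLD, &ksp);
> >   KSPSetOperators(ksp, A, A, SAME_NONZERO_PATTERN); /* 2.3.3-era signature */
> >   KSPSetFromOptions(ksp);                           /* picks up -ksp_type, -pc_type, -ksp_view */
> >   KSPSolve(ksp, b, x);
> >   KSPDestroy(ksp);                                  /* 2.3.3-era call, takes the KSP itself */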
> >
> >
> > --
> > Lisandro Dalcín
> > ---------------
> > Centro Internacional de Métodos Computacionales en Ingeniería (CIMEC)
> > Instituto de Desarrollo Tecnológico para la Industria Química (INTEC)
> > Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET)
> > PTLC - Güemes 3450, (3000) Santa Fe, Argentina
> > Tel/Fax: +54-(0)342-451.1594
> >
>
>
> --
> Lisandro Dalcín
> ---------------
> Centro Internacional de Métodos Computacionales en Ingeniería (CIMEC)
> Instituto de Desarrollo Tecnológico para la Industria Química (INTEC)
> Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET)
> PTLC - Güemes 3450, (3000) Santa Fe, Argentina
> Tel/Fax: +54-(0)342-451.1594
>
>
--
Lisandro Dalcín
---------------
Centro Internacional de Métodos Computacionales en Ingeniería (CIMEC)
Instituto de Desarrollo Tecnológico para la Industria Química (INTEC)
Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET)
PTLC - Güemes 3450, (3000) Santa Fe, Argentina
Tel/Fax: +54-(0)342-451.1594
-------------- next part --------------
A non-text attachment was scrubbed...
Name: Performance of PETSC direct solvers on the Beowulf Cluster.pdf
Type: application/pdf
Size: 127056 bytes
Desc: Performance of PETSC direct solvers on the Beowulf Cluster.pdf
URL: <http://lists.mcs.anl.gov/pipermail/petsc-users/attachments/20080513/122dfb1b/attachment.pdf>