PETSc and parallel direct solvers

Lisandro Dalcin dalcinl at gmail.com
Tue May 13 18:54:07 CDT 2008


On 5/13/08, Lars Rindorf <Lars.Rindorf at teknologisk.dk> wrote:
> Dear Lisandro
>
>  I also suspected that PETSc was using the default LU factorization; in fact, PETSc reports 'type: lu' instead of 'type: mumps'. So you are right. I will try again later on a Linux computer to compare UMFPACK and MUMPS.

Indeed. Tell me your conclusions; I would love to know your results...

>  In the comparison between UMFPACK and MUMPS you sent me (http://istanbul.be.itu.edu.tr/~huseyin/doc/frontal/node12.html), UMFPACK and MUMPS are almost equal in performance (they spell it 'MUPS'; their reference for MUPS is from 1989, so perhaps MUPS is a predecessor of MUMPS). If they are almost equal, then MUMPS is good enough for my purposes.
>

Well, the first author in the 1989 reference seems to be the same
Patrick Amestoy credited here:
http://graal.ens-lyon.fr/MUMPS/index.php?page=credits. As I warned,
the link is dated. Better to give it a try yourself!

Regards

>  Thanks. KR, Lars
>
>
>
>  -----Original message-----
>  From: Lisandro Dalcin [mailto:dalcinl at gmail.com]
>  Sent: 13 May 2008 20:04
>  To: Lars Rindorf
>  Cc: petsc-users at mcs.anl.gov
>  Subject: Re: PETSc and parallel direct solvers
>
>
>  On 5/13/08, Lars Rindorf <Lars.Rindorf at teknologisk.dk> wrote:
>  > Dear Lisandro
>  >
>  >  I have tried to compare MUMPS with UMFPACK for one of my systems. UMFPACK is four times faster (134 s) than MUMPS (581 s). I did not add the 'MatConvert' line in my program. I've given up on Cygwin, and I will receive a Linux computer later this week; then I will try again. Do you think that the missing 'MatConvert' line could cause the long calculation time? Or, rather, would including the missing line give a four-fold enhancement of the MUMPS performance?
>
>  Perhaps I'm missing something, but if you are using petsc-2.3.3 or below (petsc-dev now has MatSetSolverType(); I have not found the time to look at it), then if you do not convert the matrix to the 'aijmumps' format, I guess PETSc ends up using its default, built-in LU factorization, and not MUMPS at all!
>
>  To be completely sure about what your program is actually using, add '-ksp_view' to the command line. Then you will easily see whether you are using MUMPS or not.
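>
>  For example (the program name and process count are just placeholders), the check looks like this:
>
>  $ mpiexec -n <np> ./yourprogram -matconvert_type aijmumps -ksp_type preonly -pc_type lu -ksp_view
>
>  The PC section of the output should report the type of the matrix being factored; if you see a plain 'aij' matrix under 'type: lu', then MUMPS is not being used.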
>
>  Finally, a disclaimer: I have never tried UMFPACK, so I have no idea whether it is actually faster or slower than MUMPS. But I want to make sure you are actually trying MUMPS. As you can see, selecting the LU solver in PETSc was a bit contrived; that's the reason Barry Smith reimplemented all of this, adding the MatSetSolverType() stuff.
>
>  I'm posting this to petsc-users; please, any PETSc developer/user, correct me if I'm wrong in any of the comments above. I do not frequently use direct methods.
>
>
>  Regards,
>
>
>  >
>  >  Kind regards, Lars
>  >
>  >  -----Original message-----
>  >  From: Lisandro Dalcin [mailto:dalcinl at gmail.com]
>  >  Sent: 13 May 2008 18:54
>  >  To: Lars Rindorf
>  >  Subject: PETSc and parallel direct solvers
>  >
>  >
>  >  Dear Lars, I saw your post to petsc-users; it bounced because you have to subscribe to the list.
>  >
>  >  I have never used UMFPACK, but I have tried MUMPS with PETSc, and it seems to work just fine. Could you give it a try to see if it works for you?
>  >
>  >  I usually do this to switch easily to MUMPS. First, in the source code, after assembling your matrix, add the following:
>  >
>  >  MatConvert(A, MATSAME, MAT_REUSE_MATRIX, &A);
>  >
>  >  And then, when you actually run your program, add the following to the command line:
>  >
>  >  $ mpiexec -n <np> ./yourprogram -matconvert_type aijmumps -ksp_type preonly -pc_type lu
>  >
>  >  This way, you will actually use MUMPS if you pass the '-matconvert_type aijmumps' option ('-ksp_type preonly -pc_type lu' makes the KSP a pure direct solve: a single application of the LU preconditioner). If you run sequentially and do not pass the matconvert option, then PETSc will use its default LU factorization. Of course, you can also use MUMPS sequentially; depending on your hardware and compiler optimizations, MUMPS can be faster than the PETSc built-in linear solvers by a factor of two.
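>  >
>  >  For reference, here is a minimal, self-contained sketch of where that call fits in a driver. This is only a sketch, assuming the petsc-2.3.3-era C API (object-valued Destroy calls, four-argument KSPSetOperators); the diagonal test matrix and the size n are placeholders for your actual assembled system:
>  >
>  >  #include "petscksp.h"
>  >
>  >  int main(int argc, char **argv)
>  >  {
>  >    Mat A; Vec x, b; KSP ksp;
>  >    PetscInt i, n = 100, Istart, Iend;
>  >    PetscScalar v = 2.0;
>  >
>  >    PetscInitialize(&argc, &argv, PETSC_NULL, PETSC_NULL);
>  >
>  >    MatCreate(PETSC_COMM_WORLD, &A);
>  >    MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, n, n);
>  >    MatSetFromOptions(A);
>  >    MatGetOwnershipRange(A, &Istart, &Iend);
>  >    for (i = Istart; i < Iend; i++)       /* placeholder assembly: a diagonal matrix */
>  >      MatSetValues(A, 1, &i, 1, &i, &v, INSERT_VALUES);
>  >    MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);
>  >    MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);
>  >
>  >    /* in-place conversion; '-matconvert_type aijmumps' selects the target type */
>  >    MatConvert(A, MATSAME, MAT_REUSE_MATRIX, &A);
>  >
>  >    VecCreate(PETSC_COMM_WORLD, &b);
>  >    VecSetSizes(b, PETSC_DECIDE, n);
>  >    VecSetFromOptions(b);
>  >    VecDuplicate(b, &x);
>  >    VecSet(b, 1.0);
>  >
>  >    KSPCreate(PETSC_COMM_WORLD, &ksp);
>  >    KSPSetOperators(ksp, A, A, SAME_NONZERO_PATTERN);
>  >    KSPSetFromOptions(ksp);               /* picks up -ksp_type preonly -pc_type lu */
>  >    KSPSolve(ksp, b, x);
>  >
>  >    KSPDestroy(ksp); VecDestroy(b); VecDestroy(x); MatDestroy(A);
>  >    PetscFinalize();
>  >    return 0;
>  >  }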


-- 
Lisandro Dalcín
---------------
Centro Internacional de Métodos Computacionales en Ingeniería (CIMEC)
Instituto de Desarrollo Tecnológico para la Industria Química (INTEC)
Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET)
PTLC - Güemes 3450, (3000) Santa Fe, Argentina
Tel/Fax: +54-(0)342-451.1594



