[petsc-users] petsc-users Digest, Vol 40, Issue 107

Pham Van pham_ha at yahoo.com
Sat Apr 28 12:22:28 CDT 2012


I think the sentence:

The matrix has row and column ownership ranges.

is very important. I did not get it from the manual (maybe my fault).

Kind regards,
Pham Van Ha



________________________________
 From: "petsc-users-request at mcs.anl.gov" <petsc-users-request at mcs.anl.gov>
To: petsc-users at mcs.anl.gov 
Sent: Sunday, April 29, 2012 12:00 AM
Subject: petsc-users Digest, Vol 40, Issue 107
 
Send petsc-users mailing list submissions to
    petsc-users at mcs.anl.gov

To subscribe or unsubscribe via the World Wide Web, visit
    https://lists.mcs.anl.gov/mailman/listinfo/petsc-users
or, via email, send a message with subject or body 'help' to
    petsc-users-request at mcs.anl.gov

You can reach the person managing the list at
    petsc-users-owner at mcs.anl.gov

When replying, please edit your Subject line so it is more specific
than "Re: Contents of petsc-users digest..."


Today's Topics:

   1. Re:  Preallocation for rectangular matrix (Jed Brown)
   2. Re:  mumps solve with same nonzero pattern (Alexander Grayver)
   3.  superlu dist colperm by default (Wen Jiang)


----------------------------------------------------------------------

Message: 1
Date: Sat, 28 Apr 2012 05:05:22 -0500
From: Jed Brown <jedbrown at mcs.anl.gov>
Subject: Re: [petsc-users] Preallocation for rectangular matrix
To: Pham Van <pham_ha at yahoo.com>, PETSc users list
    <petsc-users at mcs.anl.gov>
Message-ID:
    <CAM9tzSnXqs5Uzx1z9Lu2BMGFrRe3PsjQpbJaMcdeDPZmQDb7vQ at mail.gmail.com>
Content-Type: text/plain; charset="utf-8"

On Sat, Apr 28, 2012 at 03:32, Pham Van <pham_ha at yahoo.com> wrote:

> I am trying to create a rectangular matrix whose number of rows is much
> bigger than its number of columns.
>
> I create two matrices together, one square and one rectangular, with the
> same number of rows. To my surprise, creating the smaller (rectangular)
> matrix takes much more time than the bigger (square) one. I did
> preallocate both matrices with predefined numbers of diagonal and
> off-diagonal entries. But then again, I did not know how to define the
> diagonal part of a rectangular matrix. Only a very small top part of the
> matrix is "diagonal".
>
> I have tried both methods: first, setting the diagonal and off-diagonal
> parts as if it were a square matrix; second, setting only a small top
> part of the matrix as diagonal. Neither method works.
>
> Does anyone know how to preallocate a rectangular matrix?
>
> By the way, the same code runs pretty fast with a single process and
> painfully slowly when two processes are employed.
>

The matrix has row and column ownership ranges. The "diagonal" part is any
entry that lies in both the row and column ownership range. The
"off-diagonal" part is in the row ownership range, but not in the column
ownership range.

Run with -mat_new_nonzero_allocation_err or

ierr = MatSetOption(B,MAT_NEW_NONZERO_ALLOCATION_ERR,PETSC_TRUE);CHKERRQ(ierr);

to find which entries start going outside your preallocation.
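
Concretely, the counting might look like the following minimal sketch
(assuming the PETSc 3.1-era C API, where MatCreateAIJ was still called
MatCreateMPIAIJ; the global sizes M and N are hypothetical, and the
per-row column loop depends entirely on your discretization):

Mat            B;
PetscInt       M = 100000, N = 100;                /* hypothetical sizes, M >> N */
PetscInt       m = PETSC_DECIDE, n = PETSC_DECIDE; /* local sizes                */
PetscInt       rstart, rend, cstart, cend, i, *d_nnz, *o_nnz;
PetscErrorCode ierr;

/* Compute the row and column ownership ranges before creating the matrix */
ierr = PetscSplitOwnership(PETSC_COMM_WORLD,&m,&M);CHKERRQ(ierr);
ierr = PetscSplitOwnership(PETSC_COMM_WORLD,&n,&N);CHKERRQ(ierr);
ierr = MPI_Scan(&m,&rend,1,MPIU_INT,MPI_SUM,PETSC_COMM_WORLD);CHKERRQ(ierr);
ierr = MPI_Scan(&n,&cend,1,MPIU_INT,MPI_SUM,PETSC_COMM_WORLD);CHKERRQ(ierr);
rstart = rend - m; cstart = cend - n;

ierr = PetscMalloc(m*sizeof(PetscInt),&d_nnz);CHKERRQ(ierr);
ierr = PetscMalloc(m*sizeof(PetscInt),&o_nnz);CHKERRQ(ierr);
for (i=0; i<m; i++) {            /* local row i is global row rstart+i */
  d_nnz[i] = 0; o_nnz[i] = 0;
  /* For each column j with a nonzero in global row rstart+i (application
     specific): if (cstart <= j && j < cend) d_nnz[i]++; else o_nnz[i]++; */
}
ierr = MatCreateMPIAIJ(PETSC_COMM_WORLD,m,n,M,N,0,d_nnz,0,o_nnz,&B);CHKERRQ(ierr);
ierr = PetscFree(d_nnz);CHKERRQ(ierr);
ierr = PetscFree(o_nnz);CHKERRQ(ierr);

The point is that the "diagonal" block is defined by the column ownership
range [cstart,cend) on each process, not by i == j, so rows far below the
square top part can still have d_nnz entries. With correct counts, assembly
triggers no mallocs, which is exactly what MAT_NEW_NONZERO_ALLOCATION_ERR
verifies.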

------------------------------

Message: 2
Date: Sat, 28 Apr 2012 16:03:58 +0200
From: Alexander Grayver <agrayver at gfz-potsdam.de>
Subject: Re: [petsc-users] mumps solve with same nonzero pattern
To: PETSc users list <petsc-users at mcs.anl.gov>
Message-ID: <4F9BF8CE.7030005 at gfz-potsdam.de>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed

This is not quite normal.

Right now I am dealing with a 0.65-million-unknown problem, and the
symbolic factorization takes 15 seconds on 64 cores (four 16-core nodes).

As Hong suggested, you should try different ordering techniques; METIS is
usually among the best (-mat_mumps_icntl_7 5). Also, do any timing with an
optimized build (the --with-debugging=0 configure flag).

You should be able to get output and statistics from MUMPS with:
-mat_mumps_icntl_3 2 -mat_mumps_icntl_4 2

If you do not know these options, see the MUMPS manual. Without any output
from MUMPS it is not really clear what is happening.
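
For example, a complete set of options might look like this (a hypothetical
invocation; ./myapp stands in for your executable, and
-pc_factor_mat_solver_package is the PETSc 3.1-era option name):

mpiexec -n 64 ./myapp -ksp_type preonly -pc_type lu \
    -pc_factor_mat_solver_package mumps \
    -mat_mumps_icntl_7 5 -mat_mumps_icntl_3 2 -mat_mumps_icntl_4 2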

On 27.04.2012 19:49, Wen Jiang wrote:
> Thanks for your reply. I also tested exactly the same problem with
> SuperLU_DIST. Both MUMPS and SuperLU_DIST use sequential symbolic
> factorization, yet SuperLU_DIST takes only 20 seconds while MUMPS takes
> almost 700 seconds. I am wondering whether such a big difference is
> possible. Do those two direct solvers use quite different algorithms?
>
> Also, since I might have to solve systems with the same nonzero structure
> many times in different places, I am wondering whether I could save the
> symbolic factorization output somewhere and read it back in for future
> solves. Thanks.
>
> Regards,
> Wen
>


-- 
Regards,
Alexander



------------------------------

Message: 3
Date: Sat, 28 Apr 2012 13:00:06 -0400
From: Wen Jiang <jiangwen84 at gmail.com>
Subject: [petsc-users] superlu dist colperm by default
To: petsc-users at mcs.anl.gov
Message-ID:
    <CAMJxm+DQwYW-6prQHAXPCuC3buvkq7md5yVaHqTMtprBjEjDsA at mail.gmail.com>
Content-Type: text/plain; charset="iso-8859-1"

Hi,

I am using SuperLU_DIST with PETSc 3.1; the version installed by PETSc is
SuperLU_DIST_2.4-hg-v2. Does anyone know which colperm method SuperLU_DIST
uses under PETSc's default settings? I could not get that information from
-mat_superlu_dist_statprint, whose output I attached below. Thanks.

Regards,
Wen

***************************************************************************************************************
PC Object:
  type: lu
    LU: out-of-place factorization
    tolerance for zero pivot 1e-12
    matrix ordering: natural
    factor fill ratio given 0, needed 0
      Factored matrix follows:
        Matrix Object:
          type=mpiaij, rows=215883, cols=215883
          package used to perform factorization: superlu_dist
          total: nonzeros=0, allocated nonzeros=431766
            SuperLU_DIST run parameters:
              Process grid nprow 8 x npcol 8
              Equilibrate matrix TRUE
              Matrix input mode 1
              Replace tiny pivots TRUE
              Use iterative refinement FALSE
              Processors in row 8 col partition 8
              Row permutation LargeDiag
              Parallel symbolic factorization FALSE
              Repeated factorization SamePattern
******************************************************************************************************************
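
For reference, the column permutation can also be set explicitly instead of
relying on the default (assuming the PETSc 3.1-era SuperLU_DIST option names;
MMD_AT_PLUS_A here is just one of the recognized choices):

-pc_type lu -pc_factor_mat_solver_package superlu_dist \
    -mat_superlu_dist_colperm MMD_AT_PLUS_A -mat_superlu_dist_statprint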

------------------------------

_______________________________________________
petsc-users mailing list
petsc-users at mcs.anl.gov
https://lists.mcs.anl.gov/mailman/listinfo/petsc-users


End of petsc-users Digest, Vol 40, Issue 107
********************************************

