[petsc-users] Scatter scipy Sparse matrix to petsc local processor

Matthew Knepley knepley at gmail.com
Fri Apr 6 08:22:17 CDT 2018


On Fri, Apr 6, 2018 at 9:17 AM, Yongxiang Wu <yongxiang27 at gmail.com> wrote:

> Hello Matt,
> Thanks for your quick reply. My situation is a bit different. What I am
> solving is a generalized eigenvalue problem. I only recently switched from
> scipy.eigs to slepc, because slepc supports parallel eigenvalue problems.
> In other words, I am adapting my old code to use the slepc/petsc parallel
> support. As a first test, I saved the matrix from the eigenvalue problem
> solved with the old code, and now read this matrix back in to validate the
> slepc/petsc version. That is why I need to scatter it to the processors.
> Do you have any idea how I can achieve this?
>

Ah, the best way to do this is to first convert your matrix from the format
it is in to PETSc binary format. Then you can load it in parallel
automatically. So, I would:

  a) Load my matrix
  b) Create a serial PETSc matrix from that using petsc4py
  c) Save that matrix using a binary viewer
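
As a rough petsc4py sketch of (a)-(c) (assuming L0 is your scipy.sparse CSR
matrix, and 'L0.dat' is just an example file name):

    from petsc4py import PETSc

    # (a) load the matrix; here L0 is assumed to already be a CSR matrix
    # (indptr/indices/data may need casting to PETSc's index/scalar types)

    # (b) create a serial (COMM_SELF) PETSc matrix from the CSR arrays
    A = PETSc.Mat().createAIJ(size=L0.shape,
                              csr=(L0.indptr, L0.indices, L0.data),
                              comm=PETSc.COMM_SELF)
    A.assemble()

    # (c) save it with a binary viewer
    viewer = PETSc.Viewer().createBinary('L0.dat', 'w', comm=PETSc.COMM_SELF)
    A.view(viewer)
    viewer.destroy()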

Then in your parallel code you can just MatLoad() that matrix, and it will
automatically be distributed.
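
On the parallel side, something like this (again just a sketch, reading back
the same example file):

    from petsc4py import PETSc

    # open the file written above; MatLoad distributes the rows automatically
    viewer = PETSc.Viewer().createBinary('L0.dat', 'r', comm=PETSc.COMM_WORLD)
    A = PETSc.Mat().create(comm=PETSc.COMM_WORLD)
    A.setType(PETSc.Mat.Type.AIJ)
    A.load(viewer)
    viewer.destroy()

    rstart, rend = A.getOwnershipRange()  # each rank owns a block of rows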

  Thanks,

     Matt


> with regards
> Yongxiang
>
> On 6 April 2018 at 15:00, Matthew Knepley <knepley at gmail.com> wrote:
>
>> On Fri, Apr 6, 2018 at 8:44 AM, Yongxiang Wu <yongxiang27 at gmail.com>
>> wrote:
>>
>>> Hello, everyone,
>>>
>>> I already have a scipy sparse square matrix L0. Since my problem is
>>> large, a parallel run is preferred. My question is, how can I scatter my
>>> L0 to each of the processors? In the following code, I can get the
>>> indices of the local part of the matrix. In the tutorial, the matrix
>>> elements are assigned values directly, but in my case the matrix is so
>>> large that assigning each element in a loop (the commented code) is not
>>> efficient. Is there a function that would do the MPI scatter work?
>>>
>>
>> Hi Yongxiang,
>>
>> It would be really anomalous for what you propose to result in any
>> speedup at all. If the matrix is large, it will not fit on one process.
>> Any speedup from using more processes will be eaten up by the time to
>> communicate the matrix. I would focus on generating the matrix in
>> parallel.
>>
>>   Thanks,
>>
>>      Matt
>>
>>
>>> With regards and Thanks.
>>>
>>>     import sys, slepc4py
>>>     slepc4py.init(sys.argv)
>>>     from petsc4py import PETSc
>>>     from slepc4py import SLEPc
>>>
>>>     opts = PETSc.Options()
>>>     opts.setValue('-st_pc_factor_mat_solver_package','mumps')
>>>
>>>     A = PETSc.Mat().createAIJ(size=L0.shape,comm=PETSc.COMM_WORLD)
>>>     A.setUp()
>>>
>>>     Istart, Iend = A.getOwnershipRange()
>>>     # for I in range(Istart, Iend):
>>>     #     for J in range(0, L0.shape[0]):
>>>     #         A[I, J] = L0[I, J]
>>>
>>> The following code would make the assignment from the scipy sparse matrix
>>> L0 to the PETSc matrix A, but this only works for one process.
>>>
>>>     A = PETSc.Mat().createAIJ(size=L0.shape,
>>>                                    csr=(L0.indptr, L0.indices,
>>>                                         L0.data), comm=PETSc.COMM_WORLD)
>>>
>>>
>>>
>>
>>
>> --
>> What most experimenters take for granted before they begin their
>> experiments is infinitely more interesting than any results to which their
>> experiments lead.
>> -- Norbert Wiener
>>
>> https://www.cse.buffalo.edu/~knepley/
>>
>
>


-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/

