[petsc-users] How to create a local to global mapping and construct matrix correctly

Barry Smith bsmith at mcs.anl.gov
Fri Sep 16 21:28:17 CDT 2016


> On Sep 16, 2016, at 9:00 PM, Ji Zhang <gotofd at gmail.com> wrote:
> 
> Thanks for your previous suggestion; the construction from the little m's into the big M has been accomplished.
> 
> In an MPI program, an arbitrary matrix is stored across different CPUs (e.g. 3), and each CPU only contains part of it. So I think the matrix has two kinds of indices: a local one indicating the location of a value on the corresponding CPU, and a global one indicating its location in the whole matrix. I would like to know the relation between them and find a way to convert an index from one to the other.

   This depends on what the little matrices are that you are putting into the large matrix. For PDE-type problems you can look at the PETSc KSP and SNES tutorial examples, but for your problem I don't know.
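
   If, once you have decided on a layout, you want an explicit mapping object, PETSc does provide ISLocalToGlobalMapping (PETSc.LGMap in petsc4py). A minimal sketch, where the global indices assigned to each process are made up purely for illustration:

import sys, petsc4py
petsc4py.init(sys.argv)
from petsc4py import PETSc

rank = PETSc.COMM_WORLD.getRank()

# each process lists, for every local index, the global index it corresponds to;
# here every process simply owns two consecutive global indices (an assumption)
globals_of_locals = [2*rank, 2*rank + 1]
lgmap = PETSc.LGMap().create(globals_of_locals, comm=PETSc.COMM_WORLD)

print(rank, lgmap.apply([0, 1]))   # local indices -> global indices
# the reverse direction is ISGlobalToLocalMappingApply() on the C side

   The content of the mapping (which global index each local index refers to) still has to come from your application; PETSc only stores and applies it.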
> 
> I have one more question: the function VecGetArray() only returns a pointer to the local data array. What should I do if I need access to the whole data array?

  VecScatterCreateToAll(), but note this is not scalable for very large problems.
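
  In petsc4py that functionality is reachable via PETSc.Scatter.toAll(); a rough sketch (the vector and its size are just placeholders):

from petsc4py import PETSc

v = PETSc.Vec().createMPI(8, comm=PETSc.COMM_WORLD)
v.set(1.0)
v.assemble()

# create a scatter that gathers the whole distributed vector
# into a sequential vector that lives on every process
scatter, v_all = PETSc.Scatter.toAll(v)
scatter.scatter(v, v_all, PETSc.InsertMode.INSERT_VALUES, PETSc.ScatterMode.FORWARD)

full = v_all.getArray()   # every process now sees all entries of v

  As noted above, this duplicates the entire vector on every process, so it is best reserved for small problems or debugging.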
> 
> Wayne
> 
> On Sat, Sep 17, 2016 at 9:00 AM, Barry Smith <bsmith at mcs.anl.gov> wrote:
> 
> > On Sep 16, 2016, at 7:52 PM, Ji Zhang <gotofd at gmail.com> wrote:
> >
> > Sorry. What I mean is that, for example, I have a matrix
> >           [a1, a2, a3]
> >     mij = [b1, b2, b3] ,
> >           [c1, c2, c3]
> > and it is distributed over 3 CPUs. Thus, the part of mij on CPU 2 is
> >     mij_2 = [b1, b2, b3] .
> >
> > The local index of element b1 is (1, 1) and its global index is (2, 1). How can I get the global index from the local index, and the local index from the global index?
> 
>    That is something your code needs to generate and deal with; it is not something PETSc can do for you directly. You are defining the little m's and the big M and deciding where to put the little m's into the big M. PETSc (we) has no idea what the little m's represent in terms of the big M or where they belong; that is completely the business of your application.
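> 
>    If you do choose the simplest convention, a contiguous split of the rows of the big M across processes (which is what PETSc uses by default for mpidense), then the ownership range gives you the conversion directly. A sketch of that idea, with a made-up 4 x 4 matrix:
> 
> from petsc4py import PETSc
> 
> M = PETSc.Mat().create(comm=PETSc.COMM_WORLD)
> M.setSizes(((None, 4), (None, 4)))
> M.setType('mpidense')
> M.setUp()
> 
> # global rows [rstart, rend) are stored on this process
> rstart, rend = M.getOwnershipRange()
> 
> local_row = 0
> global_row = rstart + local_row      # local -> global (valid for owned rows only)
> back_to_local = global_row - rstart  # global -> local, only if this process owns the row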
> 
>    Barry
> 
> 
> 
> >
> > Thanks.
> > 2016-09-17
> > Best,
> > Regards,
> > Zhang Ji
> > Beijing Computational Science Research Center
> > E-mail: gotofd at gmail.com
> >
> >
> >
> >
> > Wayne
> >
> > On Sat, Sep 17, 2016 at 2:24 AM, Barry Smith <bsmith at mcs.anl.gov> wrote:
> >
> >    "Gives wrong answers" is not very informative. What answer do you expect and what answer do you get?
> >
> >   Note that every process is looping over all of mSizes:
> >
> > for i in range(len(mSizes)):
> >     for j in range(len(mSizes)):
> >
> >   Is this what you want? It doesn't seem likely that you want all processes to generate all information in the matrix. Each process should be doing a subset of the generation.
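> >
> >   As one possible illustration (reusing the variable names from the script quoted below, and assuming the rows of each mij and of M are distributed contiguously), each process could insert only the rows it actually holds of every block, offsetting its local rows by the block's ownership range:
> >
> > for i in range(len(mSizes)):
> >     for j in range(len(mSizes)):
> >         sub = mij[i*len(mSizes)+j]
> >         sub_rstart, sub_rend = sub.getOwnershipRange()  # rows of this block held locally
> >         temp_m = sub.getDenseArray()                    # only the locally owned rows
> >         cols = np.arange(mLocations[j], mLocations[j+1], dtype='int32')
> >         for k in range(temp_m.shape[0]):
> >             # local row k of block (i, j) is global row mLocations[i] + sub_rstart + k of M
> >             M.setValues(mLocations[i] + sub_rstart + k, cols, temp_m[k, :])
> > M.assemble()
> >
> >   Whether that split of the work is the right one depends entirely on how your application defines the blocks.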
> >
> >    Barry
> >
> > > On Sep 16, 2016, at 11:03 AM, Ji Zhang <gotofd at gmail.com> wrote:
> > >
> > > Dear all,
> > >
> > > I have a number of small 'mpidense' matrices mij, and I want to assemble them into a big 'mpidense' matrix M like this:
> > >      [  m11  m12  m13  ]
> > > M =  |  m21  m22  m23  |   ,
> > >      [  m31  m32  m33  ]
> > >
> > > A short demo is below. I'm using Python, but the syntax is similar to the C interface.
> > > import numpy as np
> > > from petsc4py import PETSc
> > > import sys, petsc4py
> > >
> > >
> > > petsc4py.init(sys.argv)
> > > mSizes = (2, 2)
> > > mij = []
> > >
> > > # create sub-matrices mij
> > > for i in range(len(mSizes)):
> > >     for j in range(len(mSizes)):
> > >         temp_m = PETSc.Mat().create(comm=PETSc.COMM_WORLD)
> > >         temp_m.setSizes(((None, mSizes[i]), (None, mSizes[j])))
> > >         temp_m.setType('mpidense')
> > >         temp_m.setFromOptions()
> > >         temp_m.setUp()
> > >         temp_m[:, :] = np.random.random_sample((mSizes[i], mSizes[j]))
> > >         temp_m.assemble()
> > >         temp_m.view()
> > >         mij.append(temp_m)
> > >
> > > # Now we have four sub-matrices. I would like to assemble them into a big matrix M.
> > > M = PETSc.Mat().create(comm=PETSc.COMM_WORLD)
> > > M.setSizes(((None, np.sum(mSizes)), (None, np.sum(mSizes))))
> > > M.setType('mpidense')
> > > M.setFromOptions()
> > > M.setUp()
> > > mLocations = np.insert(np.cumsum(mSizes), 0, 0)    # block offsets: mLocations = [0, 2, 4]
> > > for i in range(len(mSizes)):
> > >     for j in range(len(mSizes)):
> > >         temp_m = mij[i*len(mSizes)+j].getDenseArray()
> > >         for k in range(temp_m.shape[0]):
> > >             M.setValues(mLocations[i]+k, np.arange(mLocations[j],mLocations[j+1],dtype='int32'), temp_m[k, :])
> > > M.assemble()
> > > M.view()
> > > The code works well on a single CPU, but gives wrong answers with 2 or more cores.
> > >
> > > Thanks.
> > > 2016-09-17
> > > Best,
> > > Regards,
> > > Zhang Ji
> > > Beijing Computational Science Research Center
> > > E-mail: gotofd at gmail.com
> > >
> > >
> >
> >
> 
> 


