[petsc-users] petsc4py Vec().setSizes()
Weston Lowrie
wlowrie at uw.edu
Tue Feb 5 07:59:38 CST 2013
That makes sense now. It was just creating 'mpi_size' (in my case 4)
separate Vec objects and distributing them evenly among the processes.
That is why it appeared to give the correct sizes.
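(Presumably, for example, rank 0 passed 4*35675 = 142700 as its global size and
then got 142700/4 = 35675 back as its local share, which is exactly the local
size I had intended.)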
Good to know about the petsc4py "size" argument. It was unclear to me that
it could be given in several different forms, but that makes sense as a way
to keep the API general and include as much of the functionality of the
C++/Fortran APIs as possible.
I have been referencing this:
http://packages.python.org/petsc4py/apiref/index.html
where it shows:
setSizes(self, size, bsize=None)
Maybe there is a better reference somewhere, or I missed where it describes
the size argument?
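For my own notes, here is a minimal sketch of the forms I now understand
setSizes() to accept (the sizes below are just placeholder values, and I'm
assuming the number of processes divides the global length evenly):

from petsc4py import PETSc

comm = PETSc.COMM_WORLD
nglobal = 100                        # placeholder global length
nlocal = nglobal // comm.getSize()   # assumes an even split across processes

# 1) global size only: PETSc decides how to split it across processes
a = PETSc.Vec().create(comm=comm)
a.setSizes(nglobal, bsize=1)
a.setFromOptions()

# 2) (local, global) pair: both sizes given explicitly
b = PETSc.Vec().create(comm=comm)
b.setSizes((nlocal, nglobal), bsize=1)
b.setFromOptions()

# 3) local size only: pass None (or PETSc.DECIDE) for the global size
c = PETSc.Vec().create(comm=comm)
c.setSizes((nlocal, None), bsize=1)
c.setFromOptions()

PETSc.Sys.syncPrint("rank:", comm.getRank(), "range:", c.getOwnershipRange())
PETSc.Sys.syncFlush()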
Thanks,
Wes
On Mon, Feb 4, 2013 at 6:40 PM, Jed Brown <jedbrown at mcs.anl.gov> wrote:
> petsc4py is "too clever" in the sense that it tries to interpret many
> different kinds of "sizes" arguments. You can always pass the pair
> (localsize, globalsize), but you can also pass only a global size (in which
> case the vector will be split apart). If you want to set only the local
> size, you should pass (localsize, None).
>
> Your example is invalid, with each process passing different global sizes.
> petsc-dev will now error if you do this.
>
> I changed your example to:
>
>
> X = PETSc.Vec().create(comm=PETSc.COMM_WORLD)
> X.setSizes((sizes[mpi_rank],PETSc.DECIDE),bsize=1)
> X.setFromOptions()
> ilow,ihigh = X.getOwnershipRange()
>
> PETSc.Sys.syncPrint("rank: ",mpi_rank,"low/high: ",ilow,ihigh)
> PETSc.Sys.syncFlush()
>
> and now get the output:
>
> rank: 0 low/high: 0 35675
> rank: 1 low/high: 35675 401185
> rank: 2 low/high: 401185 766927
> rank: 3 low/high: 766927 802370
>
>
>
> On Mon, Feb 4, 2013 at 4:01 PM, Weston Lowrie <wlowrie at uw.edu> wrote:
>
>> Hi,
>> I'm confused about what the Vec().setSizes() routine is doing in petsc4py.
>> Consider this example:
>>
>> #!/usr/bin/env python
>> import sys,os
>> from petsc4py import PETSc
>> from numpy import *
>>
>> mpi_rank = PETSc.COMM_WORLD.getRank()
>> mpi_size = PETSc.COMM_WORLD.getSize()
>>
>> sizes = zeros(4)
>> sizes[0] = 35675
>> sizes[1] = 365510
>> sizes[2] = 365742
>> sizes[3] = 35443
>>
>> X = PETSc.Vec().create(comm=PETSc.COMM_WORLD)
>> X.setSizes(mpi_size*sizes[mpi_rank],bsize=1)
>> X.setFromOptions()
>> ilow,ihigh = X.getOwnershipRange()
>>
>> print "rank: ",mpi_rank,"low/high: ",ilow,ihigh
>>
>>
>>
>> Why is it that, when setting the local sizes explicitly, I need to
>> multiply by mpi_size? My understanding is that this routine tells PETSc
>> what the local size on each processor core should be, but it seems to
>> divide the value by the total number of processor cores.
>>
>> Thanks,
>> Wes
>>
>
>