question concerning building vectors

Barry Smith bsmith at mcs.anl.gov
Fri Jul 21 21:24:57 CDT 2006


   Mat,

     There is no "built in" way to go from a bunch of separately created
small vectors to one large vector, because the default VECMPI in
PETSc requires all the vector entries to be stored contiguously.

But if you start with the large vector you can chop up the big array
and use a piece of it for each "subvector". This is basically the code
Berend sent you before. Just call VecGetArray() on the large vector and then
use VecCreateSeqWithArray() to provide the storage for each subvector, passing
in the appropriately offset pointer into the array you got from
VecGetArray().
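
A rough, untested sketch of that approach (the sizes and the number of
subvectors are made up, and the calls follow the PETSc generation used in the
code below in this thread, e.g. VecDestroy() taking a Vec rather than a Vec*):

#include <petscvec.h>

int main(int argc, char **argv)
{
  Vec          big, sub[2];
  PetscScalar *array;
  PetscInt     nlocal = 8, sublen[2] = {3, 5}, offset = 0, i;
  int          ierr;

  ierr = PetscInitialize(&argc, &argv, PETSC_NULL, PETSC_NULL); CHKERRQ(ierr);

  /* the large vector owns all the storage, contiguously */
  ierr = VecCreateMPI(PETSC_COMM_WORLD, nlocal, PETSC_DETERMINE, &big); CHKERRQ(ierr);

  /* each subvector is a sequential vector wrapping an offset piece of that storage */
  ierr = VecGetArray(big, &array); CHKERRQ(ierr);
  for (i = 0; i < 2; i++) {
    ierr = VecCreateSeqWithArray(PETSC_COMM_SELF, sublen[i], array + offset, &sub[i]); CHKERRQ(ierr);
    offset += sublen[i];
  }

  /* ... work with sub[0] and sub[1]; the entries live inside big ... */

  for (i = 0; i < 2; i++) { ierr = VecDestroy(sub[i]); CHKERRQ(ierr); }
  ierr = VecRestoreArray(big, &array); CHKERRQ(ierr);
  ierr = VecDestroy(big); CHKERRQ(ierr);
  ierr = PetscFinalize(); CHKERRQ(ierr);
  return 0;
}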

   Good luck,

   Barry


On Thu, 20 Jul 2006, Matt Funk wrote:

> Hi,
>
> i was wondering if there exists a function that lets me "assemble" a vector.
> Basically i would like to do the following:
>
> I have several PETSc vectors on the same proc. I would like to be able to
> assemble all of these into one vector if possible without having to copy the
> data. Is that possible?
>
> thanks for all the help
> mat
>
>
> On Thursday 20 July 2006 14:28, Matt Funk wrote:
>> One other thing,
>>
>> in your function VecOverGlobalToBlockGlobalBegin and
>> VecOverGlobalToBlockGlobalEnd:
>>
>> i assume that Vec *A is the pointer to the blockglobal vector, and
>> further that Vec **B is an array of pointers to MPI vectors, where each
>> element of the array is an MPI vector associated with one subblock?
>>
>> if that is so, then this is what i believe your functions are doing (please
>> correct me if i am wrong):
>>
>> VecOverGlobalToBlockGlobalBegin: splits the blockglobal vector A into its
>> MPI subblock vectors
>>
>> VecOverGlobalToBlockGlobalEnd: restores the vectors
>>
>> And in between these two function calls you can mess with the MPI subblock
>> vectors?
>>
>> mat
>>
>>
>> then you iterate over all blocks (i assume this is the gluing part?)
>>
>> On Thursday 20 July 2006 12:21, Berend van Wachem wrote:
>>> Hi Mat,
>>>
>>> I understand it's confusing in the beginning - it took me a while to
>>> grasp it as well.
>>>
>>>> So what i was thinking was to create a DA and a local sequential vector
>>>> on each subblock (and here i would specify m=n=1 since the local
>>>> Seqvector/DA resides only on one proc, which would give me multiple
>>>> Seqvectors/DA per processor), and then as you say, somehow glue
>>>> together the "super"-vector (the global vector) ... :). Though that is
>>>> the part which i am stuck on right now.
>>>>
>>> > Why did you have to create a communicator for each subblock?
>>>
>>> If everything is balanced and the blocks have the same size, you could
>>> specify m=n=1, but you would still need to make a separate communicator
>>> for each of the blocks; how would the DA otherwise know on which
>>> processor it lies? I created the communicators with
>>>
>>>      ierr = MPI_Comm_group(PETSC_COMM_WORLD, &tmpGroup);
>>>      CHKERRQ(ierr);
>>>      for (i = 0; i < Mesh->TOTBlocks; i++)
>>>      {
>>>          ierr = MPI_Group_incl(tmpGroup, Mesh->CFDParBlocks[i].NProc,
>>>                                Mesh->CFDParBlocks[i].Proclist,
>>>                                &Mesh->CFDParBlocks[i].BlockGroup);
>>>          CHKERRQ(ierr);
>>>
>>>          ierr = MPI_Comm_create(PETSC_COMM_WORLD,
>>>                                 Mesh->CFDParBlocks[i].BlockGroup,
>>>                                 &(Mesh->CFDParBlocks[i].BlockComm));
>>>          CHKERRQ(ierr);
>>>      }
>>>
>>>
>>> Another tip on communicators: it is also convenient to have a
>>> communicator for one processor for data processing from file. For
>>> instance, I have the initial data in one file. I make a DA, similar to
>>> the DA I want to use later on, but for just one processor. I read the
>>> data, and then save the DA to disk. Then I make the real DA with the
>>> communicator and the number of processors I want to have and use DALoad
>>> to read the DA - and all the data is transferred to the appropriate
>>> processor.
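
A rough sketch of that preprocessing trick, with a made-up grid size and file
name. Note that this sketch saves and reloads the DA's global vector with
VecView()/VecLoadIntoVector() instead of saving the DA itself and using
DALoad() as described above; either way the entries end up on the appropriate
processors.

#include <petscda.h>

int PreprocessAndReload(void)
{
  DA          daSerial, daParallel;
  Vec         vSerial, vParallel;
  PetscViewer viewer;
  PetscMPIInt rank;
  int         ierr;

  PetscFunctionBegin;
  ierr = MPI_Comm_rank(PETSC_COMM_WORLD, &rank); CHKERRQ(ierr);

  /* step 1: a one-process DA (on PETSC_COMM_SELF) prepares the data and writes it */
  if (!rank) {
    ierr = DACreate2d(PETSC_COMM_SELF, DA_NONPERIODIC, DA_STENCIL_STAR,
                      64, 64, 1, 1, 1, 1, PETSC_NULL, PETSC_NULL, &daSerial); CHKERRQ(ierr);
    ierr = DACreateGlobalVector(daSerial, &vSerial); CHKERRQ(ierr);
    /* ... fill vSerial from the initial data file ... */
    ierr = PetscViewerBinaryOpen(PETSC_COMM_SELF, "initial.dat", FILE_MODE_WRITE, &viewer); CHKERRQ(ierr);
    ierr = VecView(vSerial, viewer); CHKERRQ(ierr);
    ierr = PetscViewerDestroy(viewer); CHKERRQ(ierr);
    ierr = VecDestroy(vSerial); CHKERRQ(ierr);
    ierr = DADestroy(daSerial); CHKERRQ(ierr);
  }
  ierr = MPI_Barrier(PETSC_COMM_WORLD); CHKERRQ(ierr);

  /* step 2: the real DA on all processes; loading scatters the data to each one */
  ierr = DACreate2d(PETSC_COMM_WORLD, DA_NONPERIODIC, DA_STENCIL_STAR,
                    64, 64, PETSC_DECIDE, PETSC_DECIDE, 1, 1, PETSC_NULL, PETSC_NULL, &daParallel); CHKERRQ(ierr);
  ierr = DACreateGlobalVector(daParallel, &vParallel); CHKERRQ(ierr);
  ierr = PetscViewerBinaryOpen(PETSC_COMM_WORLD, "initial.dat", FILE_MODE_READ, &viewer); CHKERRQ(ierr);
  ierr = VecLoadIntoVector(viewer, vParallel); CHKERRQ(ierr);
  ierr = PetscViewerDestroy(viewer); CHKERRQ(ierr);

  /* ... use vParallel ... */
  PetscFunctionReturn(0);
}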
>>>
>>> > Do you create an MPI vector for your global vector?
>>>
>>> Yes - because  I want to get a solution over the COMPLETE problem in one
>>> matrix. If your blocks are in any way connected, you will want to do
>>> this as well.
>>>
>>> "Gluing" the DA vectors into one big one is not difficult if you use
>>> the lengths of the various DA vectors and glue them together (see the
>>> code I sent in a previous email).
>>>
>>> One thing you also have to think about is cross-block addressing: a
>>> point in one block may have a neighbour it depends upon in another
>>> block. In the code I sent you, this address jump is called "daoffset" and
>>> is a member of the block structure (I am an old C++ programmer so I use
>>> mostly structures).
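
A purely hypothetical illustration of that address jump (only "daoffset"
appears in the structures in this thread; the block dimensions used here are
assumed):

/* where does local point (i,j) of a block live in the glued-together array? */
struct BlockSketch {
  int daoffset;   /* start of this block's entries in the big array */
  int mx, my;     /* local grid dimensions of the block (assumed fields) */
};

static int BlockEntryIndex(const struct BlockSketch *b, int i, int j)
{
  return b->daoffset + j * b->mx + i;   /* assumes row-major storage within the block */
}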
>>>
>>>> Sorry for pestering you again, but since you are in Sweden (or your
>>>> email is anyway) i think i might not get you again today ... :)
>>>
>>> No problem - just ask. It took me a while before I understood it, and the
>>> best thing is to look at the examples that come with petsc and maybe the
>>> code examples I sent you; please let me know if you want me to send more.
>>>
>>> Good luck,
>>>
>>> Berend.
>>>
>>>> thanks for all your help
>>>> mat
>>>>
>>>> Quoting Berend van Wachem <berend at chalmers.se>:
>>>>> Hi Matt,
>>>>>
>>>>> Yes, I create a DA for each domain (I call it a block in the code),
>>>>> where M and N are the dimensions of that block.
>>>>>
>>>>> In my problem I do specify m and n, by a simple algorithm; I try to
>>>>> find a solution so that each processor has about the same number of
>>>>> nodes in total. I do this by taking the first block, looking at how
>>>>> many processors it would get (by dividing its number of nodes by the
>>>>> total number of nodes and multiplying by the number of processors),
>>>>> and filling it up from there. For each block I create a separate
>>>>> communicator as well, so each block has its own communicator.
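
A sketch of that balancing idea (not the actual code from this thread; the
names, the rounding, and the requirement that there are at least as many
processes as blocks are all illustrative):

/* give each block a share of the processes proportional to its node count,
   filling the blocks up one after another */
static void AssignProcsToBlocks(int nblocks, const int nodes[], int nprocs,
                                int firstproc[], int blockprocs[])
{
  int i, total = 0, next = 0;

  for (i = 0; i < nblocks; i++) total += nodes[i];
  for (i = 0; i < nblocks; i++) {
    int share = (int)((double)nodes[i] * nprocs / total + 0.5);
    if (share < 1) share = 1;                       /* every block needs at least one process */
    if (share > nprocs - next - (nblocks - 1 - i))  /* keep one process for each remaining block */
      share = nprocs - next - (nblocks - 1 - i);
    if (i == nblocks - 1) share = nprocs - next;    /* the last block takes whatever is left */
    firstproc[i]  = next;    /* first rank (in PETSC_COMM_WORLD) assigned to block i */
    blockprocs[i] = share;   /* number of ranks assigned to block i */
    next += share;
  }
}

The resulting ranks per block would then feed the NProc/Proclist fields used
when building each block's communicator, as in the code quoted earlier.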
>>>>>
>>>>> From the vectors of the various blocks I glue together one larger
>>>>> vector on which the computations are done, with these functions,
>>>>>
>>>>> #undef __FUNCT__
>>>>> #define __FUNCT__ "VecOverGlobalToBlockGlobalBegin"
>>>>> int VecOverGlobalToBlockGlobalBegin(Vec* A, Vec** B, char *name, struct CFDMesh *Mesh)
>>>>> {
>>>>>    int ierr,i;
>>>>>    double *GPtr;
>>>>>
>>>>>    PetscFunctionBegin;
>>>>>
>>>>>    ierr=VecGetArray(*A,&GPtr); CHKERRQ(ierr);
>>>>>    AddToListofLocalVectors(&Mesh->BGCC,Mesh->NMyBlocks,B,GPtr,name);
>>>>>
>>>>>    for (i=0; i<Mesh->NMyBlocks; i++)
>>>>>    {
>>>>>       ierr=DAGetGlobalVector(Mesh->CFDBlocks[i].da,&((*B)[i])); CHKERRQ(ierr);
>>>>>       ierr=VecPlaceArray((*B)[i], GPtr+Mesh->CFDBlocks[i].daoffset); CHKERRQ(ierr);
>>>>>    }
>>>>>    PetscFunctionReturn(0);
>>>>> }
>>>>>
>>>>>
>>>>> #undef __FUNCT__
>>>>> #define __FUNCT__ "VecOverGlobalToBlockGlobalEnd"
>>>>> int VecOverGlobalToBlockGlobalEnd(Vec* A, Vec** B, struct CFDMesh *Mesh)
>>>>> {
>>>>>       int ierr,i;
>>>>>       double *GPtr;
>>>>>
>>>>>       PetscFunctionBegin;
>>>>>
>>>>>       if (!ExistListOfLocalVectors(Mesh->BGCC,*B,&GPtr))
>>>>>       {
>>>>>          ErrorH(1,"Trying to delete a non existing vector from BGCC list");
>>>>>       }
>>>>>
>>>>>       for (i=0; i<Mesh->NMyBlocks; i++)
>>>>>       {
>>>>>          ierr=VecResetArray((*B)[i]); CHKERRQ(ierr);
>>>>>          ierr=DARestoreGlobalVector(Mesh->CFDBlocks[i].da,&((*B)[i])); CHKERRQ(ierr);
>>>>>       }
>>>>>       ierr=VecRestoreArray(*A,&GPtr); CHKERRQ(ierr);
>>>>>       DeleteFromListOfLocalVectors(&Mesh->BGCC,*B);
>>>>>
>>>>>       PetscFunctionReturn(0);
>>>>> }
>>>>>
>>>>>> Thanks for the response (and sorry for the late 'thanks'),
>>>>>>
>>>>>> i can see how this is supposed to work. However, my question is then
>>>>>> with the creation of the distributed array objects. I take it from
>>>>>> your code that you create a DA for each patch (?). M and N then would
>>>>>> be the dimensions of my 2D patch. My question is what i would be
>>>>>> specifying for n and m. I am a little confused in general about what
>>>>>> m and n are. It says they denote the process partition in each
>>>>>> direction and that m*n must equal the total number of processes in
>>>>>> the communicator.
>>>>>> Sorry, i am almost sure this is something simple.
>>>>>>
>>>>>> thanks
>>>>>> mat
>>>>>>
>>>>>> On Thursday 13 July 2006 23:13, Berend van Wachem wrote:
>>>>>>>> Hi,
>>>>>>>>
>>>>>>>> i was wondering if it is possible (or advisable) to use a
>>>>>>>> distributed array for when my domain is not necessarily rectangular
>>>>>>>> but my subdomains are rectangular?
>>>>>>>>
>>>>>>>> thanks
>>>>>>>> mat
>>>>>>>
>>>>>>> Hi Matt,
>>>>>>>
>>>>>>> This is exactly what I do; I work with a multi-block CFD problem,
>>>>>>> where the total geometry consists of a number of structured (not
>>>>>>> necessarily rectangular) blocks. Each block has its own DA.
>>>>>>>
>>>>>>> How I solved it, with help from the petsc team, is by making
>>>>>>> functions which translate vectors from the complete domain (called
>>>>>>> block global in my code) to each block (called block local in my
>>>>>>> code). The matrix computation is done at the blockglobal level, and
>>>>>>> I can manipulate the vectors at the block local level. Just to give
>>>>>>> an example, I attach the code at the end of this email. If you want
>>>>>>> more help/information please let me know.
>>>>>>>
>>>>>>> Good luck,
>>>>>>>
>>>>>>> Berend.
>>>>>>>
>>>>>>>
>>>>>>> #undef __FUNCT__
>>>>>>> #define __FUNCT__ "VecBlockGlobalToBlockLocalBegin"
>>>>>>> int VecBlockGlobalToBlockLocalBegin(Vec **B, Vec **A, char* name, struct CFDMesh *Mesh)
>>>>>>> {
>>>>>>>   int ierr, i;
>>>>>>>   double *tmpPTR=NULL;
>>>>>>>
>>>>>>>   PetscFunctionBegin;
>>>>>>>
>>>>>>>   AddToListofLocalVectors(&Mesh->BLCC,Mesh->NMyBlocks,A,tmpPTR,name);
>>>>>>>
>>>>>>>   for (i=0; i<Mesh->NMyBlocks; i++)
>>>>>>>   {
>>>>>>>      ierr=DAGetLocalVector(Mesh->CFDBlocks[i].da, &((*A)[i])); CHKERRQ(ierr);
>>>>>>>      ierr=DAGlobalToLocalBegin(Mesh->CFDBlocks[i].da,(*B)[i],INSERT_VALUES,(*A)[i]); CHKERRQ(ierr);
>>>>>>>      ierr=DAGlobalToLocalEnd(Mesh->CFDBlocks[i].da,(*B)[i],INSERT_VALUES,(*A)[i]); CHKERRQ(ierr);
>>>>>>>   }
>>>>>>>
>>>>>>>   PetscFunctionReturn(0);
>>>>>>> }
>>>>>>>
>>>>>>> #undef __FUNCT__
>>>>>>> #define __FUNCT__ "VecBlockGlobalToBlockLocalEnd"
>>>>>>> int VecBlockGlobalToBlockLocalEnd(Vec **B, Vec **A, struct CFDMesh *Mesh, int CopyData)
>>>>>>> {
>>>>>>>   int i,ierr;
>>>>>>>
>>>>>>>   PetscFunctionBegin;
>>>>>>>
>>>>>>>   for (i=0; i<Mesh->NMyBlocks; i++)
>>>>>>>   {
>>>>>>>     if (CopyData==MF_COPY_DATA)
>>>>>>>     {
>>>>>>>        ierr=DALocalToGlobal(Mesh->CFDBlocks[i].da,(*A)[i],INSERT_VALUES,(*B)[i]); CHKERRQ(ierr);
>>>>>>>     }
>>>>>>>     ierr=DARestoreLocalVector(Mesh->CFDBlocks[i].da,&((*A)[i])); CHKERRQ(ierr);
>>>>>>>   }
>>>>>>>   DeleteFromListOfLocalVectors(&Mesh->BLCC,*A);
>>>>>>>
>>>>>>>   PetscFunctionReturn(0);
>>>>>>>
>>>>>>> }
>>>>>
>>>>> --
>>>>>   /\-==-==-==-==-==-==-==-==-==-==-==-==-==-==-==-==-==-=-\
>>>>>   L_@~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~@
>>>>>
>>>>>     | Berend van Wachem                                    |
>>>>>     | Multiphase Flow Group                                |
>>>>>     | Chalmers University of Technology                    |
>>>>>     |
>>>>>     | Please note that my email address has changed to:    |
>>>>>     | Berend at chalmers.se                                   |
>>>>>     |
>>>>>     | Please make the appropriate changes in your address  |
>>>>>     | list.                                                |
>>>>>
>>>>>   __@~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~@
>>>>>   \/______________________________________________________/
>
>



