distributed array

mafunk at nmsu.edu
Thu Jul 20 09:31:14 CDT 2006


Hi Berend,

I think I am getting there. Sorry for being a little slow (it's the first
time I am using PETSc). 

I need to look at the functions you sent me and see what the PETSc calls are.
My problem differs from yours to some degree in that I already have
load-balanced subblocks, and multiple subblocks may already reside on the same
processor. 

So what I was thinking was to create a DA and a local sequential vector on each
subblock (here I would specify m=n=1, since each local SeqVector/DA resides on
only one process, which would give me multiple SeqVectors/DAs per processor),
and then, as you say, somehow glue together the "super"-vector (the global
vector) ... :). That gluing is the part I am stuck on right now.
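
Just so my plan is concrete, here is an untested sketch of what I mean (the
array names and block sizes are made up; the calls follow the PETSc 2.3-style
DA interface):

#include "petscda.h"

/* untested sketch: one sequential DA (and its vector) per subblock that
   this process owns; the sizes blockM[b] x blockN[b] are placeholders */
int CreateBlockDAs(int nMyBlocks, const int blockM[], const int blockN[],
                   DA da[], Vec bvec[])
{
   int b, ierr;

   PetscFunctionBegin;
   for (b = 0; b < nMyBlocks; b++) {
      /* the DA lives on this process only, so the process partition is m = n = 1 */
      ierr = DACreate2d(PETSC_COMM_SELF, DA_NONPERIODIC, DA_STENCIL_BOX,
                        blockM[b], blockN[b], /* grid size of subblock b */
                        1, 1,                 /* m = n = 1               */
                        1, 1,                 /* 1 dof, stencil width 1  */
                        PETSC_NULL, PETSC_NULL, &da[b]); CHKERRQ(ierr);
      ierr = DACreateGlobalVector(da[b], &bvec[b]); CHKERRQ(ierr);
   }
   /* ... and then somehow glue these pieces into one parallel "super"-vector */
   PetscFunctionReturn(0);
}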

So I guess the questions I have left for now are:
Do you create an MPI vector for your global vector?
Why did you have to create a communicator for each subblock?

Sorry for pestering you again, but since you are in Sweden (or your email is,
anyway) I think I might not reach you again today ... :)


thanks for all your help
mat



Quoting Berend van Wachem <berend at chalmers.se>:

> Hi Matt,
> 
> Yes, I create a DA for each domain (I call it a block in the code), where M 
> and N are the dimensions of that block.
> 
> In my problem I do specify m and n, using a simple algorithm: I try to find 
> a partition such that each processor ends up with about the same number of 
> nodes in total. I do this by taking the first block, looking at how many 
> processors it should get (its number of nodes divided by the total number 
> of nodes, times the number of processors), and filling it up from there. 
> I also create a separate communicator for each block, so each block has 
> its own communicator.
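> 
> Just as an illustration (a sketch, not necessarily how my code does it),
> such a per-block communicator can be made with MPI_Comm_split, using the
> block a rank works on as the colour:
> 
> #include <mpi.h>
> 
> /* sketch: all ranks assigned to the same block end up in one communicator */
> int CreateBlockComm(int myblock, MPI_Comm *blockcomm)
> {
>    int rank;
>    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
>    return MPI_Comm_split(MPI_COMM_WORLD, myblock, rank, blockcomm);
> }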
> 
>  From the vectors of the various blocks I glue together one larger 
> vector, on which the computations are done, using these functions:
> 
> #undef __FUNCT__
> #define __FUNCT__ "VecOverGlobalToBlockGlobalBegin"
> int VecOverGlobalToBlockGlobalBegin(Vec* A, Vec** B, char *name, struct CFDMesh *Mesh)
> {
>     int ierr,i;
>     double *GPtr;
> 
>     PetscFunctionBegin;
> 
>     /* get direct access to the storage of the over-global vector */
>     ierr=VecGetArray(*A,&GPtr); CHKERRQ(ierr);
>     AddToListofLocalVectors(&Mesh->BGCC,Mesh->NMyBlocks,B,GPtr,name);
> 
>     for (i=0; i<Mesh->NMyBlocks; i++)
>     {
>        /* let the block-global vector of each block point into its own
>           slice of that storage */
>        ierr=DAGetGlobalVector(Mesh->CFDBlocks[i].da,&((*B)[i])); CHKERRQ(ierr);
>        ierr=VecPlaceArray((*B)[i], GPtr+Mesh->CFDBlocks[i].daoffset); CHKERRQ(ierr);
>     }
>     PetscFunctionReturn(0);
> }
> 
> 
> #undef __FUNCT__
> #define __FUNCT__ "VecOverGlobalToBlockGlobalEnd"
> int VecOverGlobalToBlockGlobalEnd(Vec* A, Vec** B, struct CFDMesh *Mesh)
> {
>        int ierr,i;
>        double *GPtr;
> 
>        PetscFunctionBegin;
> 
>        if (!ExistListOfLocalVectors(Mesh->BGCC,*B,&GPtr))
>        {
>           ErrorH(1,"Trying to delete a non-existing vector from BGCC list");
>        }
> 
>        for (i=0; i<Mesh->NMyBlocks; i++)
>        {
>           /* detach the block-global vectors from the shared storage again */
>           ierr=VecResetArray((*B)[i]); CHKERRQ(ierr);
>           ierr=DARestoreGlobalVector(Mesh->CFDBlocks[i].da,&((*B)[i])); CHKERRQ(ierr);
>        }
>        ierr=VecRestoreArray(*A,&GPtr); CHKERRQ(ierr);
>        DeleteFromListOfLocalVectors(&Mesh->BGCC,*B);
> 
>        PetscFunctionReturn(0);
> }
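> 
> A typical use of this pair, just as a simplified sketch, is to bracket the
> per-block work:
> 
>     Vec  X;   /* the over-global vector                          */
>     Vec *Xb;  /* one block-global vector per block, sharing the
>                  storage of X                                    */
> 
>     ierr=VecOverGlobalToBlockGlobalBegin(&X,&Xb,"X",Mesh); CHKERRQ(ierr);
>     for (i=0; i<Mesh->NMyBlocks; i++)
>     {
>        /* work on Xb[i] through the DA of block i */
>     }
>     ierr=VecOverGlobalToBlockGlobalEnd(&X,&Xb,Mesh); CHKERRQ(ierr);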
> 
> 
> > Thanks for the response (and sorry for the late 'thanks'),
> > 
> > I can see how this is supposed to work. However, my question is then with 
> > the creation of the distributed array objects. I take it from your code 
> > that you create a DA for each patch (?). M and N would then be the 
> > dimensions of my 2D patch. My question is what I would be specifying for 
> > m and n. I am a little confused in general about what m and n are. The 
> > documentation says they denote the process partition in each direction 
> > and that m*n must equal the total number of processes in the communicator.
> > Sorry, I am almost sure this is something simple. 
> > 
> > thanks
> > mat
> > 
> > On Thursday 13 July 2006 23:13, Berend van Wachem wrote:
> > 
> >> > Hi,
> >> >
> >> > I was wondering if it is possible (or advisable) to use a distributed
> >> > array when my domain is not necessarily rectangular but my subdomains
> >> > are rectangular?
> >> >
> >> > thanks
> >> > mat
> >>
> >>Hi Matt,
> >>
> >>This is exactly what I do; I work with a multi-block CFD problem, where
> >>the total geometry consists of a number of structured (not necessarily
> >>rectangular) blocks. Each block has its own DA.
> >>
> >>How I solved it, with help from the PETSc team, is by making functions
> >>which translate vectors from the complete domain (called block global in
> >>my code) to each block (called block local in my code). The matrix
> >>computation is done at the block global level, and I can manipulate the
> >>vectors at the block local level. Just to give an example, I attach the
> >>code at the end of this email. If you want more help/information, please
> >>let me know.
> >>
> >>Good luck,
> >>
> >>Berend.
> >>
> >>
> >>#undef __FUNCT__
> >>#define __FUNCT__ "VecBlockGlobalToBlockLocalBegin"
> >>int VecBlockGlobalToBlockLocalBegin(Vec **B, Vec **A, char* name, struct CFDMesh *Mesh)
> >>{
> >>    int ierr, i;
> >>    double *tmpPTR=NULL;
> >>
> >>    PetscFunctionBegin;
> >>
> >>    AddToListofLocalVectors(&Mesh->BLCC,Mesh->NMyBlocks,A,tmpPTR,name);
> >>
> >>    for (i=0; i<Mesh->NMyBlocks; i++)
> >>    {
> >>       /* get a ghosted local vector for each block and fill it from the
> >>          block-global vector, including the ghost points */
> >>       ierr=DAGetLocalVector(Mesh->CFDBlocks[i].da, &((*A)[i])); CHKERRQ(ierr);
> >>       ierr=DAGlobalToLocalBegin(Mesh->CFDBlocks[i].da,(*B)[i],INSERT_VALUES,(*A)[i]); CHKERRQ(ierr);
> >>       ierr=DAGlobalToLocalEnd(Mesh->CFDBlocks[i].da,(*B)[i],INSERT_VALUES,(*A)[i]); CHKERRQ(ierr);
> >>    }
> >>
> >>    PetscFunctionReturn(0);
> >>}
> >>
> >>#undef __FUNCT__
> >>#define __FUNCT__ "VecBlockGlobalToBlockLocalEnd"
> >>int VecBlockGlobalToBlockLocalEnd(Vec **B, Vec **A, struct CFDMesh *Mesh, int CopyData)
> >>{
> >>    int i,ierr;
> >>
> >>    PetscFunctionBegin;
> >>
> >>    for (i=0; i<Mesh->NMyBlocks; i++)
> >>    {
> >>      if (CopyData==MF_COPY_DATA)
> >>      {
> >>         /* copy the (possibly modified) local values back into the
> >>            block-global vector */
> >>         ierr=DALocalToGlobal(Mesh->CFDBlocks[i].da,(*A)[i],INSERT_VALUES,(*B)[i]); CHKERRQ(ierr);
> >>      }
> >>      ierr=DARestoreLocalVector(Mesh->CFDBlocks[i].da,&((*A)[i])); CHKERRQ(ierr);
> >>    }
> >>    DeleteFromListOfLocalVectors(&Mesh->BLCC,*A);
> >>
> >>    PetscFunctionReturn(0);
> >>}
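> >>
> >>Just as a sketch of how these are used (simplified): with MF_COPY_DATA the
> >>modified local values are copied back, otherwise the local vectors are
> >>simply discarded.
> >>
> >>   ierr=VecBlockGlobalToBlockLocalBegin(&Xb,&Xl,"X",Mesh); CHKERRQ(ierr);
> >>   for (i=0; i<Mesh->NMyBlocks; i++)
> >>   {
> >>      /* compute on Xl[i], which includes the ghost points of block i */
> >>   }
> >>   ierr=VecBlockGlobalToBlockLocalEnd(&Xb,&Xl,Mesh,MF_COPY_DATA); CHKERRQ(ierr);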
> > 
> > 
> 
> 
> -- 
>    /\-==-==-==-==-==-==-==-==-==-==-==-==-==-==-==-==-==-=-\
>    L_@~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~@
>      | Berend van Wachem                                    |
>      | Multiphase Flow Group                                |
>      | Chalmers University of Technology                    |
>      |                                                      |
>      | Please note that my email address has changed to:    |
>      | Berend at chalmers.se                                   |
>      |                                                      |
>      | Please make the appropriate changes in your address  |
>      | list.                                                |
>      |                                                      |
>    __@~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~@
>    \/______________________________________________________/
> 
> 




