distributed array
Matt Funk
mafunk at nmsu.edu
Thu Jul 20 14:06:14 CDT 2006
Hi Berend,
your code examples helped me a lot to get a glimpse of how some of this
stuff works, and I think I mostly understand what your functions are doing.
However, after going through your code I rethought what I am trying to do,
and I am not sure whether using DAs is the best approach or not.
The reason I initially thought of using DAs was their description in the
user manual: "... are intended for use with logically regular rectangular
grids when communication of nonlocal data is needed before certain local
computations can occur."
Well, my problem is such that I already have my right hand side. What I need
to do is transfer it to a PETSc vector and build the matrix.
So I am not sure what the advantage of a DA would be for me, since my right
hand side is ready to be used once I get to PETSc.
My problem is different from yours (I think, anyway) in that each of your
blocks is distributed over several processors, whereas for me a single
processor can own several blocks.
What I am not sure of is whether that changes how I should approach the
problem compared to what you are doing.
So the most basic question I have (and maybe somebody else has an answer to
this as well) is: what is the advantage of a DA over a regular vector when
my right hand side is already given? When I solve the linear system, I
believe PETSc will take care of the nonlocal communication that has to be
done during the solve for me, right?
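Just to make clear what I mean, the workflow I have in mind is roughly the
following sketch (made-up global size and preallocation numbers, not code
from my application):

  Vec rhs, sol;
  Mat A;
  KSP ksp;
  int ierr, N = 10000;        /* global problem size, known from my grid */

  ierr = VecCreateMPI(PETSC_COMM_WORLD, PETSC_DECIDE, N, &rhs); CHKERRQ(ierr);
  ierr = VecDuplicate(rhs, &sol); CHKERRQ(ierr);
  /* copy my already computed right hand side into rhs with VecSetValues(),
     followed by VecAssemblyBegin()/VecAssemblyEnd() */

  ierr = MatCreateMPIAIJ(PETSC_COMM_WORLD, PETSC_DECIDE, PETSC_DECIDE, N, N,
                         5, PETSC_NULL, 2, PETSC_NULL, &A); CHKERRQ(ierr);
  /* MatSetValues() row by row, then MatAssemblyBegin()/MatAssemblyEnd() */

  ierr = KSPCreate(PETSC_COMM_WORLD, &ksp); CHKERRQ(ierr);
  ierr = KSPSetOperators(ksp, A, A, SAME_NONZERO_PATTERN); CHKERRQ(ierr);
  ierr = KSPSetFromOptions(ksp); CHKERRQ(ierr);
  ierr = KSPSolve(ksp, rhs, sol); CHKERRQ(ierr);

That is, a plain MPI vector and matrix, and the KSP solve does whatever
communication is needed.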
mat
On Thursday 20 July 2006 12:21, Berend van Wachem wrote:
> Hi Mat,
>
> I understand it's confusing in the beginning - it took me a while to
> grasp it as well.
>
> > So what I was thinking was to create a DA and a local sequential vector
> > on each subblock (and here I would specify m=n=1, since the local
> > SeqVector/DA resides only on one proc, which would give me multiple
> > SeqVectors/DAs per processor), and then, as you say, somehow glue
> > together the "super"-vector (the global vector) ... :). That is the part
> > which I am stuck on right now.
> >
> > Why did you have to create a communicator for each subblock?
>
> If everything is balanced and the blocks have the same size, you could
> specify m=n=1, but you would still need to make a separate communicator
> for each of the blocks; how else would the DA know which processors it
> lies on? I created the communicators with
>
> ierr = MPI_Comm_group(PETSC_COMM_WORLD, &tmpGroup); CHKERRQ(ierr);
> for (i = 0; i < Mesh->TOTBlocks; i++)
> {
>   ierr = MPI_Group_incl(tmpGroup, Mesh->CFDParBlocks[i].NProc,
>                         Mesh->CFDParBlocks[i].Proclist,
>                         &Mesh->CFDParBlocks[i].BlockGroup); CHKERRQ(ierr);
>
>   ierr = MPI_Comm_create(PETSC_COMM_WORLD, Mesh->CFDParBlocks[i].BlockGroup,
>                          &(Mesh->CFDParBlocks[i].BlockComm)); CHKERRQ(ierr);
> }
>
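> As a sketch of what the per-block DA creation then looks like (the M and N
> members and the dof/stencil-width arguments here are just placeholders for
> this example), each block's DA lives on that block's communicator:
>
>   /* only on the processes that are members of this block's communicator */
>   ierr = DACreate2d(Mesh->CFDParBlocks[i].BlockComm, DA_NONPERIODIC,
>                     DA_STENCIL_STAR,
>                     Mesh->CFDParBlocks[i].M, Mesh->CFDParBlocks[i].N,
>                     PETSC_DECIDE, PETSC_DECIDE, 1, 1,
>                     PETSC_NULL, PETSC_NULL,
>                     &Mesh->CFDBlocks[i].da); CHKERRQ(ierr);
>
> With one processor per block, PETSC_DECIDE simply gives m = n = 1.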
>
> Another tip on communicators: it is also convenient to have a communicator
> for just one processor, for data processing from file. For instance, I have
> the initial data in one file. I make a DA, similar to the DA I want to use
> later on, but for just one processor. I read the data and then save the DA
> to disk. Then I make the real DA with the communicator and the number of
> processors I want to have, and use DALoad to read the DA - and all the data
> is transferred to the appropriate processor.
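>
> As a rough sketch of that (placeholder sizes and file name; here I use
> VecView/VecLoadIntoVector to move the data, DALoad plays the analogous
> role for the DA itself):
>
>   int         rank;
>   PetscViewer vw;
>   MPI_Comm_rank(PETSC_COMM_WORLD, &rank);
>   if (rank == 0)
>   {
>     DA  da1;
>     Vec v1;
>     /* same M, N, dof and stencil as the parallel DA, but one processor */
>     ierr = DACreate2d(PETSC_COMM_SELF, DA_NONPERIODIC, DA_STENCIL_STAR,
>                       M, N, 1, 1, 1, 1, PETSC_NULL, PETSC_NULL, &da1);
>     CHKERRQ(ierr);
>     ierr = DACreateGlobalVector(da1, &v1); CHKERRQ(ierr);
>     /* ... read the initial data file and fill v1 ... */
>     ierr = PetscViewerBinaryOpen(PETSC_COMM_SELF, "initial.dat",
>                                  FILE_MODE_WRITE, &vw); CHKERRQ(ierr);
>     ierr = VecView(v1, vw); CHKERRQ(ierr);
>     ierr = PetscViewerDestroy(vw); CHKERRQ(ierr);
>   }
>   ierr = MPI_Barrier(PETSC_COMM_WORLD); CHKERRQ(ierr);
>
>   /* then everyone; da is the real (parallel) DA, v its global vector */
>   ierr = DACreateGlobalVector(da, &v); CHKERRQ(ierr);
>   ierr = PetscViewerBinaryOpen(PETSC_COMM_WORLD, "initial.dat",
>                                FILE_MODE_READ, &vw); CHKERRQ(ierr);
>   ierr = VecLoadIntoVector(vw, v); CHKERRQ(ierr);
>   ierr = PetscViewerDestroy(vw); CHKERRQ(ierr);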
>
> > Do you create an MPI vector for your global vector?
>
> Yes - because I want to get a solution over the COMPLETE problem in one
> matrix. If your blocks are in any way connected, you will want to do
> this as well.
>
> "Gluing" the DA vectors into one big one is not difficult if you use the
> lengths of the various DA vectors and concatenate them (see the code I
> sent in a previous email).
>
> One thing you also have to think about is cross-block addressing: a point
> in one block may have a neighbour it depends upon in another block. In the
> code I sent you, this address jump is called "daoffset" and is a member of
> the block structure (I am an old C++ programmer, so I mostly use
> structures).
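>
> (The way daoffset is used with VecPlaceArray in the code further down, it
> is just the running sum of the local lengths of the block global vectors
> on that processor; roughly something like
>
>   PetscInt offset = 0;
>   for (i = 0; i < Mesh->NMyBlocks; i++)
>   {
>     Vec      tmp;
>     PetscInt nloc;
>     Mesh->CFDBlocks[i].daoffset = offset;
>     ierr = DAGetGlobalVector(Mesh->CFDBlocks[i].da, &tmp); CHKERRQ(ierr);
>     ierr = VecGetLocalSize(tmp, &nloc); CHKERRQ(ierr);
>     ierr = DARestoreGlobalVector(Mesh->CFDBlocks[i].da, &tmp); CHKERRQ(ierr);
>     offset += nloc;
>   }
>
> but check that against how it is actually set in the code I sent you.)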
>
> > Sorry for pestering you again, but since you are in Sweden (or your email
> > is, anyway) I think I might not get you again today ... :)
>
> No problem - just ask. It took me a while before I understood it. The best
> thing is to look at the examples that come with PETSc and maybe the code
> examples I sent you; please let me know if you want me to send more.
>
> Good luck,
>
> Berend.
>
> > thanks for all your help
> > mat
> >
> > Quoting Berend van Wachem <berend at chalmers.se>:
> >>Hi Matt,
> >>
> >>Yes, I create a DA for each domain (I call it a block in the code),
> >>where M and N are the dimensions of that block.
> >>
> >>In my problem I do specify m and n, using a simple algorithm: I try to
> >>find a distribution such that each processor gets about the same number
> >>of nodes in total. I do this by taking the first block, looking at how
> >>many processors it should get (its number of nodes divided by the total
> >>number of nodes, times the number of processors) and filling it up from
> >>there. For each block I create a separate communicator as well, so each
> >>block has its own communicator.
> >>
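> >>A rough sketch of that assignment (made-up names; it also ignores corner
> >>cases where the rounding runs out of processes):
> >>
> >>  int next = 0;                            /* next free process rank */
> >>  for (i = 0; i < NBlocks; i++)
> >>  {
> >>    /* share of the total nodes, rounded, at least one process */
> >>    int np = (int)((double)BlockNodes[i] / TotalNodes * NProcsTotal + 0.5);
> >>    if (np < 1) np = 1;
> >>    if (np > NProcsTotal - next) np = NProcsTotal - next;
> >>    if (i == NBlocks - 1) np = NProcsTotal - next;   /* use up the rest */
> >>    Blocks[i].NProc = np;
> >>    for (j = 0; j < np; j++) Blocks[i].Proclist[j] = next + j;
> >>    next += np;
> >>  }
> >>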
> >> From the vectors of the various blocks I glue together one larger
> >>vector on which the computations are done, with these functions,
> >>
> >>#undef __FUNCT__
> >>#define __FUNCT__ "VecOverGlobalToBlockGlobalBegin"
> >>/* Split the overall (over global) vector A into one DA global vector per
> >>   block in B; the block vectors alias A's storage via VecPlaceArray. */
> >>int VecOverGlobalToBlockGlobalBegin(Vec *A, Vec **B, char *name,
> >>                                    struct CFDMesh *Mesh)
> >>{
> >>  int    ierr, i;
> >>  double *GPtr;
> >>
> >>  PetscFunctionBegin;
> >>
> >>  ierr = VecGetArray(*A, &GPtr); CHKERRQ(ierr);
> >>  AddToListofLocalVectors(&Mesh->BGCC, Mesh->NMyBlocks, B, GPtr, name);
> >>
> >>  for (i = 0; i < Mesh->NMyBlocks; i++)
> >>  {
> >>    ierr = DAGetGlobalVector(Mesh->CFDBlocks[i].da, &((*B)[i]));
> >>    CHKERRQ(ierr);
> >>    /* point block i's vector at its slice of the big array */
> >>    ierr = VecPlaceArray((*B)[i], GPtr + Mesh->CFDBlocks[i].daoffset);
> >>    CHKERRQ(ierr);
> >>  }
> >>  PetscFunctionReturn(0);
> >>}
> >>
> >>
> >>#undef __FUNCT__
> >>#define __FUNCT__ "VecOverGlobalToBlockGlobalEnd"
> >>/* Release the block views created by VecOverGlobalToBlockGlobalBegin and
> >>   give the storage back to the overall vector A. */
> >>int VecOverGlobalToBlockGlobalEnd(Vec *A, Vec **B, struct CFDMesh *Mesh)
> >>{
> >>  int    ierr, i;
> >>  double *GPtr;
> >>
> >>  PetscFunctionBegin;
> >>
> >>  if (!ExistListOfLocalVectors(Mesh->BGCC, *B, &GPtr))
> >>  {
> >>    ErrorH(1, "Trying to delete a non existing vector from BGCC list");
> >>  }
> >>
> >>  for (i = 0; i < Mesh->NMyBlocks; i++)
> >>  {
> >>    ierr = VecResetArray((*B)[i]); CHKERRQ(ierr);
> >>    ierr = DARestoreGlobalVector(Mesh->CFDBlocks[i].da, &((*B)[i]));
> >>    CHKERRQ(ierr);
> >>  }
> >>  ierr = VecRestoreArray(*A, &GPtr); CHKERRQ(ierr);
> >>  DeleteFromListOfLocalVectors(&Mesh->BGCC, *B);
> >>
> >>  PetscFunctionReturn(0);
> >>}
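> >>
> >>Typical use of this pair is then (a sketch, with made-up vector names):
> >>
> >>  Vec  Phi;       /* the overall (over global) vector               */
> >>  Vec *PhiBlock;  /* one block global vector per block on this proc */
> >>
> >>  ierr = VecOverGlobalToBlockGlobalBegin(&Phi, &PhiBlock, "Phi", Mesh);
> >>  CHKERRQ(ierr);
> >>  for (i = 0; i < Mesh->NMyBlocks; i++)
> >>  {
> >>    /* work on PhiBlock[i] as an ordinary DA global vector of block i */
> >>  }
> >>  ierr = VecOverGlobalToBlockGlobalEnd(&Phi, &PhiBlock, Mesh);
> >>  CHKERRQ(ierr);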
> >>
> >>>Thanks for the response (and sorry for the late 'thanks'),
> >>>
> >>>I can see how this is supposed to work. However, my question is then
> >>>with the creation of the distributed array objects. I take it from your
> >>>code that you create a DA for each patch (?). M and N would then be the
> >>>dimensions of my 2D patch. My question is what I would be specifying for
> >>>m and n. I am a little confused in general about what m and n are. It
> >>>says they denote the process partition in each direction and that m*n
> >>>must equal the total number of processes in the communicator.
> >>>Sorry, I am almost sure this is something simple.
> >>>
> >>>thanks
> >>>mat
> >>>
> >>>On Thursday 13 July 2006 23:13, Berend van Wachem wrote:
> >>>>>Hi,
> >>>>>
> >>>>>I was wondering if it is possible (or advisable) to use a distributed
> >>>>>array when my domain is not necessarily rectangular but my subdomains
> >>>>>are rectangular?
> >>>>>
> >>>>>thanks
> >>>>>mat
> >>>>
> >>>>Hi Matt,
> >>>>
> >>>>This is exactly what I do; I work with a multi-block CFD problem, where
> >>>>the total geometry consists of a number of structured (not necessarily
> >>>>rectangular) blocks. Each block has its own DA.
> >>>>
> >>>>How I solved it, with help from the PETSc team, is by making functions
> >>>>which translate vectors from the complete domain (called block global
> >>>>in my code) to each block (called block local in my code). The matrix
> >>>>computation is done at the block global level, and I can manipulate the
> >>>>vectors at the block local level. Just to give an example, I attach the
> >>>>code at the end of this email. If you want more help/information,
> >>>>please let me know.
> >>>>
> >>>>Good luck,
> >>>>
> >>>>Berend.
> >>>>
> >>>>
> >>>>#undef __FUNCT__
> >>>>#define __FUNCT__ "VecBlockGlobalToBlockLocalBegin"
> >>>>/* Fill one DA local vector (with ghost points) per block in A from the
> >>>>   block global vectors in B. */
> >>>>int VecBlockGlobalToBlockLocalBegin(Vec **B, Vec **A, char *name,
> >>>>                                    struct CFDMesh *Mesh)
> >>>>{
> >>>>  int    ierr, i;
> >>>>  double *tmpPTR = NULL;
> >>>>
> >>>>  PetscFunctionBegin;
> >>>>
> >>>>  AddToListofLocalVectors(&Mesh->BLCC, Mesh->NMyBlocks, A, tmpPTR, name);
> >>>>
> >>>>  for (i = 0; i < Mesh->NMyBlocks; i++)
> >>>>  {
> >>>>    ierr = DAGetLocalVector(Mesh->CFDBlocks[i].da, &((*A)[i]));
> >>>>    CHKERRQ(ierr);
> >>>>    /* scatter global values (including ghosts) into the local vector */
> >>>>    ierr = DAGlobalToLocalBegin(Mesh->CFDBlocks[i].da, (*B)[i],
> >>>>                                INSERT_VALUES, (*A)[i]); CHKERRQ(ierr);
> >>>>    ierr = DAGlobalToLocalEnd(Mesh->CFDBlocks[i].da, (*B)[i],
> >>>>                              INSERT_VALUES, (*A)[i]); CHKERRQ(ierr);
> >>>>  }
> >>>>
> >>>>  PetscFunctionReturn(0);
> >>>>}
> >>>>
> >>>>#undef __FUNCT__
> >>>>#define __FUNCT__ "VecBlockGlobalToBlockLocalEnd"
> >>>>/* Optionally copy the block local vectors back to the block global
> >>>>   vectors and release the local work vectors. */
> >>>>int VecBlockGlobalToBlockLocalEnd(Vec **B, Vec **A, struct CFDMesh *Mesh,
> >>>>                                  int CopyData)
> >>>>{
> >>>>  int i, ierr;
> >>>>
> >>>>  PetscFunctionBegin;
> >>>>
> >>>>  for (i = 0; i < Mesh->NMyBlocks; i++)
> >>>>  {
> >>>>    if (CopyData == MF_COPY_DATA)
> >>>>    {
> >>>>      ierr = DALocalToGlobal(Mesh->CFDBlocks[i].da, (*A)[i],
> >>>>                             INSERT_VALUES, (*B)[i]); CHKERRQ(ierr);
> >>>>    }
> >>>>    ierr = DARestoreLocalVector(Mesh->CFDBlocks[i].da, &((*A)[i]));
> >>>>    CHKERRQ(ierr);
> >>>>  }
> >>>>  DeleteFromListOfLocalVectors(&Mesh->BLCC, *A);
> >>>>
> >>>>  PetscFunctionReturn(0);
> >>>>}
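> >>>>
> >>>>Typical use is (a sketch, with made-up vector names; PhiBG holds the
> >>>>block global vectors of the blocks on this processor):
> >>>>
> >>>>  Vec *PhiBG;  /* block global vectors (from the overall vector)   */
> >>>>  Vec *PhiBL;  /* block local vectors, including the ghost points  */
> >>>>
> >>>>  ierr = VecBlockGlobalToBlockLocalBegin(&PhiBG, &PhiBL, "Phi", Mesh);
> >>>>  CHKERRQ(ierr);
> >>>>  for (i = 0; i < Mesh->NMyBlocks; i++)
> >>>>  {
> >>>>    /* read/update the ghost values of block i through PhiBL[i] */
> >>>>  }
> >>>>  ierr = VecBlockGlobalToBlockLocalEnd(&PhiBG, &PhiBL, Mesh, MF_COPY_DATA);
> >>>>  CHKERRQ(ierr);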
> >>
> >>--
> >> /\-==-==-==-==-==-==-==-==-==-==-==-==-==-==-==-==-==-=-\
> >> L_@~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~@
> >>
> >> | Berend van Wachem |
> >> | Multiphase Flow Group |
> >> | Chalmers University of Technology |
> >> |
> >> | Please note that my email address has changed to: |
> >> | Berend at chalmers.se |
> >> |
> >> | Please make the appropriate changes in your address |
> >> | list. |
> >>
> >> __@~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~@
> >> \/______________________________________________________/