distributed array
Matt Funk
mafunk at nmsu.edu
Wed Jul 19 16:04:46 CDT 2006
Thanks for the response (and sorry for the late 'thanks').
I can see how this is supposed to work. However, my question is then about the
creation of the distributed array objects. I take it from your code that you
create a DA for each patch(?), and that M and N would then be the dimensions
of my 2D patch. My question is what I would be specifying for m and n; I am a
little confused about what m and n are in general. The documentation says they
denote the process partition in each direction and that m*n must equal the
total number of processes in the communicator.
Sorry, I am almost sure this is something simple.
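
For reference, here is roughly the call I am asking about. The grid sizes are
made up, and if I read the manual page correctly, passing PETSC_DECIDE for m
and n lets PETSc choose the partition:

DA  da;
int ierr;
int M = 64, N = 32;   /* grid points of one patch in x and y (made up) */

/* m and n are the number of processes in the x- and y-direction;
   m*n must equal the size of the communicator. PETSC_DECIDE lets
   PETSc pick a compatible split. */
ierr = DACreate2d(PETSC_COMM_WORLD, DA_NONPERIODIC, DA_STENCIL_STAR,
                  M, N, PETSC_DECIDE, PETSC_DECIDE,
                  1, 1, PETSC_NULL, PETSC_NULL, &da);
CHKERRQ(ierr);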
thanks
mat
On Thursday 13 July 2006 23:13, Berend van Wachem wrote:
> > Hi,
> >
> > I was wondering if it is possible (or advisable) to use a distributed
> > array for when my domain is not necessarily rectangular but my
> > subdomains are rectangular?
> >
> > thanks
> > mat
>
> Hi Matt,
>
> This is exactly what I do; I work with a multi-block CFD problem, where
> the total geometry consists of a number of structured (not necessarily
> rectangular) blocks. Each block has its own DA.
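>
> Roughly, the per-block setup looks like this (the M and N fields on the
> block struct are just for illustration here; CFDBlocks[i].da is the DA
> used in the code below):
>
> for (i = 0; i < Mesh->NMyBlocks; i++)
> {
>   ierr = DACreate2d(PETSC_COMM_WORLD, DA_NONPERIODIC, DA_STENCIL_BOX,
>                     Mesh->CFDBlocks[i].M, Mesh->CFDBlocks[i].N,
>                     PETSC_DECIDE, PETSC_DECIDE, 1, 1,
>                     PETSC_NULL, PETSC_NULL, &Mesh->CFDBlocks[i].da);
>   CHKERRQ(ierr);
> }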
>
> How I solved it, with help from the PETSc team, is by writing functions
> that translate vectors from the complete domain (called block-global in
> my code) to each block (called block-local in my code). The matrix
> computation is done at the block-global level, and I can manipulate the
> vectors at the block-local level. Just to give an example, I attach the
> code at the end of this email. If you want more help/information, please
> let me know.
>
> Good luck,
>
> Berend.
>
>
> #undef __FUNCT__
> #define __FUNCT__ "VecBlockGlobalToBlockLocalBegin"
> /* For each block, get a ghosted block-local work vector from the
>    block's DA and scatter the block-global values into it. */
> int VecBlockGlobalToBlockLocalBegin(Vec **B, Vec **A, char *name,
>                                     struct CFDMesh *Mesh)
> {
>   int ierr, i;
>   double *tmpPTR = NULL;
>
>   PetscFunctionBegin;
>
>   AddToListofLocalVectors(&Mesh->BLCC, Mesh->NMyBlocks, A, tmpPTR, name);
>
>   for (i = 0; i < Mesh->NMyBlocks; i++)
>   {
>     ierr = DAGetLocalVector(Mesh->CFDBlocks[i].da, &((*A)[i]));
>     CHKERRQ(ierr);
>
>     /* Scatter global values, including ghost points, into the local vector. */
>     ierr = DAGlobalToLocalBegin(Mesh->CFDBlocks[i].da, (*B)[i],
>                                 INSERT_VALUES, (*A)[i]);
>     CHKERRQ(ierr);
>     ierr = DAGlobalToLocalEnd(Mesh->CFDBlocks[i].da, (*B)[i],
>                               INSERT_VALUES, (*A)[i]);
>     CHKERRQ(ierr);
>   }
>
>   PetscFunctionReturn(0);
> }
>
> #undef __FUNCT__
> #define __FUNCT__ "VecBlockGlobalToBlockLocalEnd"
> /* Optionally copy the block-local values back into the block-global
>    vectors, then return the local work vectors to their DAs. */
> int VecBlockGlobalToBlockLocalEnd(Vec **B, Vec **A, struct CFDMesh *Mesh,
>                                   int CopyData)
> {
>   int i, ierr;
>
>   PetscFunctionBegin;
>
>   for (i = 0; i < Mesh->NMyBlocks; i++)
>   {
>     if (CopyData == MF_COPY_DATA)
>     {
>       ierr = DALocalToGlobal(Mesh->CFDBlocks[i].da, (*A)[i],
>                              INSERT_VALUES, (*B)[i]);
>       CHKERRQ(ierr);
>     }
>     ierr = DARestoreLocalVector(Mesh->CFDBlocks[i].da, &((*A)[i]));
>     CHKERRQ(ierr);
>   }
>   DeleteFromListOfLocalVectors(&Mesh->BLCC, *A);
>
>   PetscFunctionReturn(0);
> }
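>
> A call site for this pair looks something like the following sketch (the
> vector names and the "U" label are only an example):
>
> Vec *Uglobal;  /* one block-global vector per block, created elsewhere */
> Vec *Ulocal;   /* filled in by the Begin call */
>
> ierr = VecBlockGlobalToBlockLocalBegin(&Uglobal, &Ulocal, "U", Mesh);
> CHKERRQ(ierr);
>
> /* ... work on the ghosted block-local vectors Ulocal[0..NMyBlocks-1],
>    e.g. through DAVecGetArray on each block's DA ... */
>
> ierr = VecBlockGlobalToBlockLocalEnd(&Uglobal, &Ulocal, Mesh, MF_COPY_DATA);
> CHKERRQ(ierr);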