A simple question.

F.E. Karaoulanis fkar at nemesis-project.org
Mon Apr 2 19:58:55 CDT 2007


I am new to PETSc, so my question may sound trivial. This is what I've
done:

1. I configured PETSc on WinXP under cygwin, using:
***************************************************************************
./config/configure.py               \
--with-cc='win32fe cl'              \
--with-cxx='win32fe cl'             \
--with-fortran=0                    \
--with-parmetis=0                   \
--download-c-blas-lapack=yes        \
--with-mpi-dir=/cygdrive/C/MPICH2                           
***************************************************************************

2. I wrote and compiled the following (simple) code:
***************************************************************************
static char help[] = "Testing PETSc installation.";
#include "petscksp.h"
#include <iostream>
using namespace std;
#undef __FUNCT__
#define __FUNCT__ "main"
int main(int argc,char **args)
{
    Vec x;
    PetscInt n =10;
    PetscErrorCode ierr;
    PetscMPIInt size;
    PetscScalar one=1.0;
    double vd[3]={2.,2.,2.};
    PetscInt indices[3]={0,5,7};
    PetscScalar* v=&vd[0];

    ierr = PetscInitialize(&argc,&args,(char *)0,help);  CHKERRQ(ierr);
    ierr = MPI_Comm_size(PETSC_COMM_WORLD,&size);        CHKERRQ(ierr);
    printf("Number of processors : %d\n",size);
    ierr = VecCreate(PETSC_COMM_WORLD,&x);               CHKERRQ(ierr);
    ierr = VecSetSizes(x,PETSC_DECIDE,n);                CHKERRQ(ierr);
    ierr = VecSetType(x,"mpi");                          CHKERRQ(ierr);
    ierr = VecSetFromOptions(x);                         CHKERRQ(ierr);
    ierr = VecSet(x,one);                                CHKERRQ(ierr);
    ierr = VecSetValues(x,3,indices,v,ADD_VALUES);       CHKERRQ(ierr);
    ierr = VecAssemblyBegin(x);                          CHKERRQ(ierr);
    ierr = VecAssemblyEnd(x);                            CHKERRQ(ierr);

    ierr = VecView(x,PETSC_VIEWER_STDOUT_WORLD);         CHKERRQ(ierr);
    ierr = VecDestroy(x);                                CHKERRQ(ierr);
    ierr = PetscFinalize();                              CHKERRQ(ierr);
    return 0;
}
***************************************************************************

3. And now I am running it on a dual-core processor with mpiexec -n 1 and
-n 2. These are the results:
***************************************************************************
Number of processors : 1
Process [0]
3
1
1
1
1
3
1
3
1
1
***************************************************************************
Number of processors : 2
Process [0]
5
1
1
1
1
Process [1]
5
1
5
1
1
Number of processors : 2
***************************************************************************

This is not what I was really looking for. I expected the same output on
both runs (as far as the vector entries are concerned), and no duplicate
printing (like the second "Number of processors : 2").

Do you think this is an MPI setup problem, or have I not really understood
what the above code does?

Kind regards,

Fotios Karaoulanis.

ps. Congratulations on your excellent work!



-- 
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Fotios E. Karaoulanis
Dipl. Civil Engineer, MSc TUM
tel     +30 2310 458913
fax     +30 2310 458913
mob     +30 6948 179452
e-mail  fkar at nemesis-project.org
--------------------------------------------
Consider visiting www.nemesis-project.org.
Home of an experimental finite element code.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~



