MPICH error when using petsc-3.0.0-p4

Wee-Beng TAY zonexo at gmail.com
Mon Apr 13 20:49:59 CDT 2009


Hi Satish,

I have now changed to MPI_DOUBLE_PRECISION and it works. Removing the
-r8 option does not seem to have any effect.
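
For reference, the send/receive pair that works now looks roughly like
this (a sketch based on the subroutine quoted further below, with only
the datatype argument changed):

    CALL MPI_ISEND(v_ast(1,jend),size_x,MPI_DOUBLE_PRECISION,inext,1,MPI_COMM_WORLD,isend1,ierr)
    CALL MPI_IRECV(v_ast(1,jsta-1),size_x,MPI_DOUBLE_PRECISION,iprev,1,MPI_COMM_WORLD,irecv1,ierr)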

Btw, Barry,

I tried leaving some variables undeclared, and CVF refuses to compile the
file, saying that the variables are not explicitly declared. So I believe
having an "implicit none" at the top should be enough.
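
For example, a minimal sketch (not from my actual code, just an
illustration) of what CVF now catches:

    subroutine demo
      implicit none           ! every variable must be declared explicitly
      real(8) :: x
      x = 1.0d0
      ! y = 2.0d0             ! CVF would reject this line: y is not declared
    end subroutine demo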

Thank you very much and have a nice day!

Yours sincerely,

Wee-Beng Tay



Satish Balay wrote:
> MPIU_SCALAR is defined to be MPI_DOUBLE_PRECISION
>
> I guess MPI_REAL/MPI_REAL8 behavior is affected by -r8.
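>
> For example (just to illustrate what -r8 changes):
>
>   real    :: a   ! default real: normally 4 bytes, promoted to 8 bytes by -r8
>   real(8) :: b   ! explicitly 8 bytes regardless of -r8
>
> so which MPI datatype matches a given buffer can depend on that flag.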
>
> Satish
>
> On Tue, 14 Apr 2009, Wee-Beng TAY wrote:
>
>> Hi Satish,
>>
>> Moreover, I found that as long as I use MPIU_SCALAR instead of MPI_REAL8,
>> there's no problem. Btw, I'm using MPICH 1.2.5.
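>>
>> In case it is useful, the working call looks roughly like this (a sketch;
>> the same variables as in the subroutine quoted further below, with only
>> the datatype changed):
>>
>> CALL MPI_ISEND(v_ast(1,jend),size_x,MPIU_SCALAR,inext,1,MPI_COMM_WORLD,isend1,ierr)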
>>
>> Thank you very much and have a nice day!
>>
>> Yours sincerely,
>>
>> Wee-Beng Tay
>>
>>
>>
>> Satish Balay wrote:
>>     
>>> (--with-scalar-type=[real,complex], --with-precision=[single,double])
>>>
>>> Depending upon how petsc is configured, PetscScalar can be either
>>> real4, real8, or complex8. With the most common usage [i.e. the
>>> default build], PetscScalar is real8.
>>>
>>> Earlier you claimed: MPI_ISEND(v_ast) worked with MPI_REAL but not
>>> MPI_REAL8. This suggested that v_ast was declared as real4 - not real8.
>>>
>>> Satish
>>>
>>> On Tue, 14 Apr 2009, Wee-Beng TAY wrote:
>>>
>>>> Hi,
>>>>
>>>> So when must I use PetscScalar, and when can I just use real(8)? I ask
>>>> because most of my variables are real(8), except for the Mat and Vec
>>>> required for solving the linear equations. Is there a rule? I initially
>>>> thought they were the same, and if I remember correctly this worked
>>>> fine in PETSc 2.3.3.
>>>>
>>>> My v_ast is now defined as:
>>>>
>>>> PetscScalar, allocatable ::    u_ast(:,:), v_ast(:,:)
>>>>
>>>> allocate (u_ast(0:size_x+1,jsta_ext:jend_ext), STAT=status(1))
>>>> allocate (v_ast(0:size_x+1,jsta_ext:jend_ext), STAT=status(2))
>>>>
>>>> if (status(1)/=0 .or. status(2)/=0) STOP "Cannot allocate memory"
>>>>
>>>> Thank you very much and have a nice day!
>>>>
>>>> Yours sincerely,
>>>>
>>>> Wee-Beng Tay
>>>>
>>>>
>>>>
>>>> Satish Balay wrote:
>>>>
>>>>> And if v_ast() is used with PETSc - then it should be defined
>>>>> 'PetscScalar'
>>>>>
>>>>> and then you use MPI_ISEND(....,MPIU_SCALAR,...)
>>>>>
>>>>> Satish
>>>>>
>>>>>
>>>>> On Mon, 13 Apr 2009, Barry Smith wrote:
>>>>>
>>>>>>   Where is your implicit none that all sane programmers begin their
>>>>>> Fortran
>>>>>> subroutines with?
>>>>>>
>>>>>>
>>>>>> On Apr 13, 2009, at 11:11 AM, Wee-Beng TAY wrote:
>>>>>>
>>>>>>> Hi Satish,
>>>>>>>
>>>>>>> Compiling and building now worked without error. However, when I run,
>>>>>>> I get the error:
>>>>>>>
>>>>>>> 0 - MPI_ISEND : Datatype is MPI_TYPE_NULL
>>>>>>> [0]  Aborting program !
>>>>>>> [0] Aborting program!
>>>>>>> Error 323, process 0, host GOTCHAMA-E73BB3:
>>>>>>>
>>>>>>> The problem lies in this subroutine:
>>>>>>>
>>>>>>> subroutine v_ast_row_copy
>>>>>>>
>>>>>>> #include "finclude/petsc.h"
>>>>>>> #include "finclude/petscvec.h"
>>>>>>> #include "finclude/petscmat.h"
>>>>>>> #include "finclude/petscksp.h"
>>>>>>> #include "finclude/petscpc.h"
>>>>>>> #include "finclude/petscsys.h"
>>>>>>>
>>>>>>> !to copy data of jend row to others
>>>>>>>
>>>>>>> integer :: inext,iprev,istatus(MPI_STATUS_SIZE),irecv1,ierr,isend1
>>>>>>>
>>>>>>> inext = myid + 1
>>>>>>> iprev = myid - 1
>>>>>>>
>>>>>>> if (myid == num_procs - 1) inext = MPI_PROC_NULL
>>>>>>>
>>>>>>> if (myid == 0) iprev = MPI_PROC_NULL
>>>>>>>
>>>>>>> CALL MPI_ISEND(v_ast(1,jend),size_x,MPI_REAL8,inext,1,MPI_COMM_WORLD,isend1,ierr)
>>>>>>> CALL MPI_IRECV(v_ast(1,jsta-1),size_x,MPI_REAL8,iprev,1,MPI_COMM_WORLD,irecv1,ierr)
>>>>>>> CALL MPI_WAIT(isend1, istatus, ierr)
>>>>>>> CALL MPI_WAIT(irecv1, istatus, ierr)
>>>>>>>
>>>>>>> end subroutine v_ast_row_copy
>>>>>>>
>>>>>>>
>>>>>>> I copied this subroutine from the RS/6000 MPI manual and it used to
>>>>>>> work. I wonder whether this is an MPI or a PETSc problem? It is strange
>>>>>>> because I already specify the type as MPI_REAL8; however, changing it
>>>>>>> to MPI_REAL solves the problem.
>>>>>>>
>>>>>>> If this is an MPI problem, you can just ignore it; I'll check it in an
>>>>>>> MPI forum.
>>>>>>>
>>>>>>>
>>>>>>> Thank you very much and have a nice day!
>>>>>>>
>>>>>>> Yours sincerely,
>>>>>>>
>>>>>>> Wee-Beng Tay

