[Nek5000-users] outpost parallel error

nek5000-users at lists.mcs.anl.gov
Sun Oct 26 12:20:00 CDT 2014


Hi Paul,
yes, all the arrays are declared. However, I have declared some arrays 
with the same names as arrays that appear in common-block declarations. 
For instance, I declare real*8 arrays named u, v, u0 and v0. Could these 
conflict with the common /scruz/ declarations found in other source files?
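To make the question concrete, this is roughly what I have in my routine
(a minimal sketch only; the subroutine name and the common block /myarpwk/
below are illustrative, not taken from my actual code):

      subroutine my_eig_arrays_example      ! illustrative name only
      include 'SIZE'                        ! provides lx1, ly1, lz1, lelt

c     What I currently have: local arrays named u, v, u0, v0
      real*8 u (lx1*ly1*lz1*lelt), v (lx1*ly1*lz1*lelt)
      real*8 u0(lx1*ly1*lz1*lelt), v0(lx1*ly1*lz1*lelt)

c     If reusing these names is risky, a possible workaround would be
c     to rename them and keep them in a uniquely named common block,
c     so they cannot clash with anything declared in /scruz/ or other
c     Nek5000 commons, e.g.:
c
c     real*8 eu (lx1*ly1*lz1*lelt), ev (lx1*ly1*lz1*lelt)
c     real*8 eu0(lx1*ly1*lz1*lelt), ev0(lx1*ly1*lz1*lelt)
c     common /myarpwk/ eu, ev, eu0, ev0

      return
      end
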
Thank you,
Giuseppe



On 26/10/2014 17:30, nek5000-users at lists.mcs.anl.gov wrote:
> Are all your arrays properly declared in the calling routine?
>
> Just trying to identify the potential issues...
>
> Best,
>
> Paul
>
> ________________________________________
> From: nek5000-users-bounces at lists.mcs.anl.gov [nek5000-users-bounces at lists.mcs.anl.gov] on behalf of nek5000-users at lists.mcs.anl.gov [nek5000-users at lists.mcs.anl.gov]
> Sent: Sunday, October 26, 2014 9:39 AM
> To: nek5000-users at lists.mcs.anl.gov
> Subject: Re: [Nek5000-users] outpost parallel error
>
> My userchk is structured this way:
>
> ...
>          call pdnaupd(...)
> ...
> c   here call to nek subroutines
> c   for matrix-vector products
> ...
> c    if outpost is called here, it works (serial and parallel)
> c   call to Arpack postprocessing routine:
>          call pdneupd(...,zz,...)
> c   zz is a matrix storing the eigenvectors
> c   ARPACK-to-Nek format conversion, then:
>          call outpost(v1,v2,...)      ! this gets stuck in parallel
>
> Note that with 1 rank this code works well. If I comment out the
> call to pdneupd, then it also works in parallel, although of course it
> no longer gives any useful results.
> Giuseppe
>
>
> On 26/10/2014 14:36, nek5000-users at lists.mcs.anl.gov wrote:
>> If you comment out the arpack call, does it work?
>>
>> ________________________________________
>> From: nek5000-users-bounces at lists.mcs.anl.gov [nek5000-users-bounces at lists.mcs.anl.gov] on behalf of nek5000-users at lists.mcs.anl.gov [nek5000-users at lists.mcs.anl.gov]
>> Sent: Sunday, October 26, 2014 2:36 AM
>> To: nek5000-users at lists.mcs.anl.gov
>> Subject: Re: [Nek5000-users] outpost parallel error
>>
>> Hi Paul,
>> thanks for the answer.
>> I tried both removing the .sch file before each run and using outpost
>> instead of outpost2.
>> Unfortunately, neither of these solved my problem.
>> I have checked that outpost works in parallel when it is called before
>> the ARPACK pdneupd subroutine, so my guess is that pdneupd modifies
>> some flag that forces outpost to write the .sch file, but only when
>> running on 2 or more ranks.
>> If somebody has further comments, please let me know.
>> Best,
>> Giuseppe
>>
>>
>>
>> On 25/10/2014 20:39, nek5000-users at lists.mcs.anl.gov wrote:
>>> Hi Giuseppe,
>>>
>>> It looks as though you didn't remove the .sch file prior to the start
>>> of your run?
>>>
>>> Nek is designed not to clobber .sch files because they can contain
>>> information that might have resulted from very long runs.  The onus
>>> is thus on the user to manage those files.  The usual nek scripts move
>>> those files to blah.sch1 if there is already a blah.sch file.
>>>
>>> It's possible, however, that something is messed up in the logic of outpost2().
>>> That routine rarely gets exercised.  Is there a reason not to use outpost()?
>>>
>>> Paul
>>>
>>> ________________________________________
>>> From: nek5000-users-bounces at lists.mcs.anl.gov [nek5000-users-bounces at lists.mcs.anl.gov] on behalf of nek5000-users at lists.mcs.anl.gov [nek5000-users at lists.mcs.anl.gov]
>>> Sent: Saturday, October 25, 2014 8:58 AM
>>> To: nek5000-users at lists.mcs.anl.gov
>>> Subject: [Nek5000-users] outpost parallel error
>>>
>>> Dear Neks,
>>> in the usr file, after doing some manipulations on a known solution
>>> (loaded from file with load_fld(...)), I save the computed values with a
>>> call to outpost2, like:
>>>              call outpost2(u1(1,1,1,1),v1(1,1,1,1),
>>>          &  w1(1,1,1,1),p1(1,1,1,1),
>>>          &  p1(1,1,1,1),1,'arp')
>>> While this works correctly with 1 rank, in parallel it fails,
>>> printing only the message:
>>> schfile:/.../myrun.sch
>>> and then it hangs, with all processes consuming all the CPU time
>>> until they are killed (the .sch file is left empty).
>>> Does anybody have an idea on how this could be solved?
>>> Best,
>>> Giuseppe


