[Nek5000-users] Calling subroutine "hpts" for several files
nek5000-users at lists.mcs.anl.gov
Tue Oct 17 05:11:39 CDT 2017
> Is it important that each point of "pts" is within the local piece of "wrk" for a processor?
NO
> * If it is not important, could "intpts" be called from only one process (say nid == 0) in order to simplify the code?
All ranks have to call intpts() but you can pass an arbitrary list of points (including no points).
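For example (a sketch only; npts0 and a pts array filled on rank 0 are
placeholders, and the call is the one from interp_v quoted below):

      n = 0                     ! ranks without points pass n = 0
      if (nid.eq.0) n = npts0   ! here rank 0 owns the whole list
      call intpts(wrk,3,pts,n,uvw,.true.,.true.,ihandle)
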
> Also: Have I understood the hemi example correctly?
You have to ask yourself ;)
Cheers,
Stefan
-----Original message-----
> From:nek5000-users at lists.mcs.anl.gov <nek5000-users at lists.mcs.anl.gov>
> Sent: Tuesday 17th October 2017 10:36
> To: nek5000-users at lists.mcs.anl.gov
> Subject: Re: [Nek5000-users] Calling subroutine "hpts" for several files
>
> Thanks. After running the hemi example and looking into hemi.usr, I have some questions about how the interpolation works in parallel.
> It seems that the interpolation is done by a call to the subroutine "intpts" from within the subroutine "interp_v":
>
> "call intpts(wrk,3,pts,n,uvw,.true.,.true.,ihandle)"
>
> This call interpolates velocities in wrk to positions in pts and outputs in uvw.
> The positions "pts" are the positions of Lagrangian particles.
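>
> As I understand the layout (my reading of hemi.usr, so I may have it wrong):
>
>       real wrk(lx1*ly1*lz1*lelt,3)   ! local copy of (vx,vy,vz)
>       real pts(ldim,n)               ! this rank's particle positions
>       real uvw(ldim,n)               ! interpolated velocities (output)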
>
> Each process has a local array "pts" and sees a local piece of the velocity field, "wrk".
> Is it important that each point of "pts" is within the local piece of "wrk" for a processor?
>
> * If this is important, how can I see that this is accounted for in the code?
>
> * If it is not important, could "intpts" be called from only one process (say nid == 0) in order to simplify the code?
>
> Also: Have I understood the hemi example correctly?
>
> Best,
>
> Johan
> -----------
> From: Nek5000-users <nek5000-users-bounces at lists.mcs.anl.gov> on behalf of nek5000-users at lists.mcs.anl.gov <nek5000-users at lists.mcs.anl.gov>
> Sent: Monday, October 16, 2017 2:26:20 PM
> To: nek5000-users at lists.mcs.anl.gov
> Subject: Re: [Nek5000-users] Calling subroutine "hpts" for several files
> Dear Johan,
>
> I would look at the hemi example.
>
> There it shows how to interpolate a list of values
> (interp_v is a routine inside hemi.usr).
>
> Once you have a list, you can write it out yourself
> (again, as shown in hemi.usr).
>
> Make certain that your interrogation list is _not_ repeated
> on every processor. (Otherwise you end up doing P times
> more work.)
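>
> For instance, split a global n-point list so that each rank keeps
> only its own chunk (a sketch; xglob/yglob/zglob stand in for your
> global coordinate arrays, and nid/np are the usual Nek5000 rank id
> and rank count):
>
>       mp = n/np                      ! points per rank
>       i0 = nid*mp                    ! this rank's offset
>       if (nid.eq.np-1) mp = n - i0   ! last rank takes the remainder
>       do i=1,mp
>          pts(1,i) = xglob(i0+i)
>          pts(2,i) = yglob(i0+i)
>          pts(3,i) = zglob(i0+i)
>       enddo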
>
> hth,
> Paul
>
> -----------
> From: Nek5000-users <nek5000-users-bounces at lists.mcs.anl.gov> on behalf of nek5000-users at lists.mcs.anl.gov <nek5000-users at lists.mcs.anl.gov>
> Sent: Monday, October 16, 2017 4:04:02 AM
> To: nek5000-users at lists.mcs.anl.gov
> Subject: [Nek5000-users] Calling subroutine "hpts" for several files
> Dear Neks,
>
> I am post-processing my data using the subroutine "hpts" to extract field values at specific points.
> I want to loop over many files and extract the field values for each file.
> Therefore I want to rename the file "hpts.out" that is created every time I call hpts.
> This code snippet is in userchk, and it looks like this:
>
>       do i=1,100
>
>          write(fname,'(I0.5)') i
>          fname = 'cav0.f'//trim(fname)
>
>          call load_fld(fname)
>
>          call hpts
>
>          call nekgsync
>          if (nid==0) then
>             call rename('hpts.out',trim(fname)//'_hpts')
>          endif
>          call nekgsync
>
>       enddo
>
> My basename is "cav", so fname is "cav0.fXXXXX" for XXXXX = 00001, ..., 00100.
> The code is intended to rename the hpts.out written for each file "cav0.fXXXXX" into "cav0.fXXXXX_hpts".
>
> However, this does not work.
> I get only one file; "cav0.f00001_hpts".
> When I open this file, it contains the data from "cav0.f00100", the last file that was opened in the loop!
>
> This makes me wonder whether hpts.out is properly closed after being written to.
> Also, how can I change my code to make it work as I intended?
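>
> If the file really is left open, would closing its unit before the
> rename be the right fix? Something like this (untested, and assuming
> hpts writes hpts.out on Fortran unit 50 and that only rank 0 writes,
> which I would still have to verify in postpro.f):
>
>       call hpts
>       call nekgsync
>       if (nid==0) then
>          close(50)   ! assumed: the unit hpts.out is attached to
>          call rename('hpts.out',trim(fname)//'_hpts')
>       endif
>       call nekgsync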
>
> Best,
>
> Johan
>
> _______________________________________________
> Nek5000-users mailing list
> Nek5000-users at lists.mcs.anl.gov
> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users