From nek5000-users at lists.mcs.anl.gov Thu Jan 5 02:46:37 2012
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Thu, 05 Jan 2012 09:46:37 +0100
Subject: [Nek5000-users] Restart problem
In-Reply-To: 
References: <1323181417.5193.13.camel@skagsnebb.mech.kth.se>
Message-ID: <1325753197.8552.9.camel@skagsnebb.mech.kth.se>

On Thu, 2011-12-22 at 22:22 -0600, nek5000-users at lists.mcs.anl.gov wrote:

Hi

Thank you Paul. I've made a short test with jet in crossflow and it seems
to work, but I sometimes get the following warning:

WARNING: restart file has a NSPCAL > LDIMT
read only part of the fld-data!
WARNING: NPSCAL read from restart file differs from
currently used NPSCAL!

What's strange is that this warning doesn't show up at every restart. I'm
going to make some longer tests now.
Regards
Adam

> > Hi
> >
> > I simulate a jet in crossflow problem with nek5000 and I've got serious
> > problems with restarting the simulation. It causes strong spurious
> > velocity oscillations and I cannot get rid of them. I've implemented the
> > restarting procedure described in prepost.f, but it doesn't help much.
> > Playing with the file format (parameters p66, p67) and projection (p94, p95)
> > I can only decrease the amplitude of the oscillations, but they are still
> > there. Surprisingly, saving output files in double precision (p63=8) makes
> > everything worse. Has anybody had similar problems?
> > Best regards
> >
> > Adam
>
> Hi,
>
> I think that the full restart capability should now work with
> the current version of the source.
>
> There is a 2D example in the repo, with a README that I also
> provide below.
>
> Basically, you will now save 4 files each time you wish to
> checkpoint. The files come in two sets, A and B, and the A
> set is then overwritten by the 3rd checkpoint, etc., so that
> you have at most 8 checkpoint files on hand at any one time.
> The files are 64 bit and thus cannot be used by VisIt --- thus,
> they are truly designated as checkpoint/restart files and not
> analysis files. More information in the README below.
>
> Please let me know if you have comments or questions.
>
> Best regards,
>
> Paul
>
> -------------------------------------------------------------------------
> >From the examples/cyl_restart directory:
>
> SET UP:
> =======
>
> This directory contains an example of full restart capabilities for
> Nek5000.
>
> The model flow is a von Karman street in the wake of a 2D cylinder.
> The quantity of interest is taken to be the lift, which is monitored
> via "grep agy logfile" in the run_test script. A matlab file, doit.m,
> can be used to analyze the output files containing the lift history
> of the four cases. The cases are:
>
> ca - initial run (no projection)
> cb - restart run for ca case
>
> pa - initial run (with projection)
> pb - restart run for pa case
>
> BACKGROUND:
> ===========
>
> Timestepping in Nek5000 is based on BDFk/EXTk (k=3, typ.), which uses kth-order
> backward-difference formulae (BDFk) to evaluate velocity time derivatives and
> kth-order extrapolation (EXTk) for explicit evaluation of the nonlinear and
> pressure boundary terms. Under normal conditions, the velocity and pressure
> for preceding timesteps are required to advance the solution at each step.
>
> At startup, the timestepper is typically bootstrapped using a lower-order
> BDF/EXT formula that, given the artificiality of most initial conditions,
> is usually adequate.
The velocity field often has enough inertia and > sufficient signature such that the same bootstrap procedure also works when > restarting from an existing solution (i.e., a .fnnnnn or .fldnn file, stored > in 32-bit precision). > > For some cases, it is important to have reproducibility of the time history > to the working precision (14 digits, typ.) of the code. The full restart > feature is designed to provide this capability. The main features of > full restart are: > > .Preserve alternating sets of snapshots (4 per set) in 64-bit precision. > (Alternating sets are saved in case the job fails in the middle of > saving a set.) > > .Use the most recent set to restart the computation by overwriting > the solution for the first steps, 0 through 3, with the preserved > snapshots. > > > Full restart is triggered through the .usr file. In the given example > cases, "ca" and "cb" the restart-save is illustrated in ca.usr and the > actual restart, plus the save, is illustrated in cb.usr. For these cases, > the restart is encapsulated in the user-provided routine "my_full_restart" > shown below, along with the calling format in userchk: > > > c----------------------------------------------------------------------- > subroutine userchk > include 'SIZE' > include 'TOTAL' > > logical if_drag_out,if_torq_out > > call my_full_restart > > scale = 1. > if_drag_out = .true. > if_torq_out = .false. > call torque_calc(scale,x0,if_drag_out,if_torq_out) > > return > end > c----------------------------------------------------------------------- > subroutine my_full_restart > > character*80 s80(4) > > call blank(s80,4*80) > s80(1) ='rs8ca0.f00005' > s80(2) ='rs8ca0.f00006' > s80(3) ='rs8ca0.f00007' > s80(4) ='rs8ca0.f00008' > > call full_restart(s80,4) ! Will overload 5-8 onto steps 0-3 > > > iosave = iostep ! Trigger save based on iostep > call full_restart_save(iosave) > > return > end > c----------------------------------------------------------------------- > > > Note that in the example above, the set enumerated 5--8 is used to restart > the computation. This set is generated by first running the "ca" case. > > Note that the frequency of the restart output is coincident with the > standard output frequency of field files (snapshots). This might be too > frequent if one is, say, making a movie where snapshots are typically > dumped every 10 steps. It would make more sense in this case to set > iosave=1000, say. > > Note also that if one is initiating a computation from something other > than the full restart mode then the full_restart() call should be commented > out. > > > COMMENTS: > ========= > > Full reproducibility of the solution is predicated on having sufficient > history information to replicate the state of "a" when running "b". > While such replication is possible, it does preclude acceleration of the > iterative solvers by projection onto prior solution spaces [1,2], since > these projections typically retain relatively long sequences of information > (e.g., spanning tens of steps) to maximally extract all the regularity in the > solution history. Consequently, _full_ reproducibility is not retained with > projection turned on. In this case, the solution is reproduced only to the > tolerance of the iterative solvers, which is in any case the maximum level > of accuracy attainable in the solution. To illustrate the difference, > we provide a test case pairing, "pa" and "pb", which is essentially the > same as the ca/cb pair save that projection is turned on for pa/pb. 
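>
> (A minimal sketch of the iosave=1000 variant mentioned above, with the
> restart load guarded so it can be disabled for a fresh start.  The
> subroutine name, the if_restart flag, and the hard-wired file names are
> illustrative assumptions, not part of the distributed example.)
>
> c-----------------------------------------------------------------------
>       subroutine my_full_restart_1000   ! illustrative sketch, untested
> c
> c     Same pattern as my_full_restart above, but the checkpoint
> c     frequency is decoupled from iostep, and the restart load is
> c     skipped when starting from scratch.
> c
>       character*80 s80(4)
>
>       logical if_restart                ! user-chosen flag (assumption),
>       save    if_restart                ! not a Nek5000 variable
>       data    if_restart /.true./       ! set .false. for a fresh start
>
>       if (if_restart) then
>          call blank(s80,4*80)
>          s80(1) ='rs8ca0.f00005'
>          s80(2) ='rs8ca0.f00006'
>          s80(3) ='rs8ca0.f00007'
>          s80(4) ='rs8ca0.f00008'
>          call full_restart(s80,4)       ! overload set 5-8 onto steps 0-3
>       endif
>
>       iosave = 1000                     ! checkpoint every 1000 steps
>       call full_restart_save(iosave)
>
>       return
>       end
> c-----------------------------------------------------------------------
>
> With this naming, the first checkpoint of case "ca" presumably writes
> rs8ca0.f00001 ... rs8ca0.f00004 (set A), the second writes
> rs8ca0.f00005 ... rs8ca0.f00008 (set B), the third overwrites set A,
> and so on; at restart time, full_restart() is pointed at the most
> recent complete set, as in the example above.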
> > > > > _______________________________________________ > Nek5000-users mailing list > Nek5000-users at lists.mcs.anl.gov > https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users From nek5000-users at lists.mcs.anl.gov Thu Jan 5 03:13:14 2012 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Thu, 5 Jan 2012 03:13:14 -0600 (CST) Subject: [Nek5000-users] Restart problem In-Reply-To: <1325753197.8552.9.camel@skagsnebb.mech.kth.se> References: <1323181417.5193.13.camel@skagsnebb.mech.kth.se> <1325753197.8552.9.camel@skagsnebb.mech.kth.se> Message-ID: Do you have any passive scalars? There are some issues w.r.t passive scalar i/o that we're working to resolve. Paul On Thu, 5 Jan 2012, nek5000-users at lists.mcs.anl.gov wrote: > On Thu, 2011-12-22 at 22:22 -0600, nek5000-users at lists.mcs.anl.gov > wrote: > > Hi > > Thank you Paul. I've made short test with jet in crossflow and it seem > to work, but I've got sometimes following warning > > WARNING: restart file has a NSPCAL > LDIMT > read only part of the fld-data! > WARNING: NPSCAL read from restart file differs from > currently used NPSCAL! > > What's strange this warning doesn't show up at every restart. I'm going > to make some longer tests now. > Regards > Adam > > > >>> Hi >>> >>> I simulate jet in crossflow problem with nek5000 and I've got serius >>> problems with restarting the simulation. It causes strong spurious >>> velocity oscillation and I cannot get rid of them. I've implemented >>> restarting procedure described in prepost.f, but it doesn't help much. >>> Playing with file format (parameters p66, p67) and projection (p94, p95) >>> I can only decrease oscillations amplitude, but they are still there. >>> Surprisingly saving output files in double precision (p63=8) makes >>> everything worse. Has anybody got similar problems? >>> Best regards >>> >>> Adam >> >> Hi, >> >> I think that the full restart capability should now work with >> the current version of the source. >> >> There is a 2D example in the repo, with a README that I also >> provide below. >> >> Basically, you will now save 4 files each time you wish to >> checkpoint. The files come in two sets, A and B, and the A >> set is then overwritten by the 3rd checkpoint, etc. so that >> you have at most 8 checkpoint files on hand at any one time. >> The files are 64 bit and thus cannot be used by VisIt --- thus, >> they are truly designated as checkpoint/restart files and not >> analysis files. More information in the README below. >> >> Please let me know if you have comments or questions. >> >> Best regards, >> >> Paul >> >> ------------------------------------------------------------------------- >>> From the examples/cyl_restart directory: >> >> SET UP: >> ======= >> >> This directory contains an example of full restart capabilities for >> Nek5000. >> >> The model flow is a von Karman street in the wake of a 2D cylinder. >> The quantity of interest is taken to be the lift, which is monitored >> via "grep agy logfile" in the run_test script. A matlab file, doit.m, >> can be used to analyze the output files containing the lift history >> of the four cases. 
The cases are: >> >> ca - initial run (no projection) >> cb - restart run for ca case >> >> pa - initial run (with projection) >> pb - restart run for pa case >> >> BACKGROUND: >> =========== >> >> Timestepping in Nek5000 is based on BDFk/EXTk (k=3, typ.), which uses kth-order >> backward-difference formulae (BDFk) to evaluate velocity time derivatives and >> kth-order extrapolation (EXTk) for explicit evaluation of the nonlinear and >> pressure boundary terms. Under normal conditions, the velocity and pressure >> for preceding timesteps are required to advance the the solution at each step. >> >> At startup, the timestepper is typically bootstrapped using a lower-order >> BDF/EXT formula that, given the artificiality of most initial conditions, >> is typically adequate. The velocity field often has enough inertia and >> sufficient signature such that the same bootstrap procedure also works when >> restarting from an existing solution (i.e., a .fnnnnn or .fldnn file, stored >> in 32-bit precision). >> >> For some cases, it is important to have reproducibility of the time history >> to the working precision (14 digits, typ.) of the code. The full restart >> feature is designed to provide this capability. The main features of >> full restart are: >> >> .Preserve alternating sets of snapshots (4 per set) in 64-bit precision. >> (Alternating sets are saved in case the job fails in the middle of >> saving a set.) >> >> .Use the most recent set to restart the computation by overwriting >> the solution for the first steps, 0 through 3, with the preserved >> snapshots. >> >> >> Full restart is triggered through the .usr file. In the given example >> cases, "ca" and "cb" the restart-save is illustrated in ca.usr and the >> actual restart, plus the save, is illustrated in cb.usr. For these cases, >> the restart is encapsulated in the user-provided routine "my_full_restart" >> shown below, along with the calling format in userchk: >> >> >> c----------------------------------------------------------------------- >> subroutine userchk >> include 'SIZE' >> include 'TOTAL' >> >> logical if_drag_out,if_torq_out >> >> call my_full_restart >> >> scale = 1. >> if_drag_out = .true. >> if_torq_out = .false. >> call torque_calc(scale,x0,if_drag_out,if_torq_out) >> >> return >> end >> c----------------------------------------------------------------------- >> subroutine my_full_restart >> >> character*80 s80(4) >> >> call blank(s80,4*80) >> s80(1) ='rs8ca0.f00005' >> s80(2) ='rs8ca0.f00006' >> s80(3) ='rs8ca0.f00007' >> s80(4) ='rs8ca0.f00008' >> >> call full_restart(s80,4) ! Will overload 5-8 onto steps 0-3 >> >> >> iosave = iostep ! Trigger save based on iostep >> call full_restart_save(iosave) >> >> return >> end >> c----------------------------------------------------------------------- >> >> >> Note that in the example above, the set enumerated 5--8 is used to restart >> the computation. This set is generated by first running the "ca" case. >> >> Note that the frequency of the restart output is coincident with the >> standard output frequency of field files (snapshots). This might be too >> frequent if one is, say, making a movie where snapshots are typically >> dumped every 10 steps. It would make more sense in this case to set >> iosave=1000, say. >> >> Note also that if one is initiating a computation from something other >> than the full restart mode then the full_restart() call should be commented >> out. 
>>
>> COMMENTS:
>> =========
>>
>> Full reproducibility of the solution is predicated on having sufficient
>> history information to replicate the state of "a" when running "b".
>> While such replication is possible, it does preclude acceleration of the
>> iterative solvers by projection onto prior solution spaces [1,2], since
>> these projections typically retain relatively long sequences of information
>> (e.g., spanning tens of steps) to maximally extract all the regularity in the
>> solution history. Consequently, _full_ reproducibility is not retained with
>> projection turned on. In this case, the solution is reproduced only to the
>> tolerance of the iterative solvers, which is in any case the maximum level
>> of accuracy attainable in the solution. To illustrate the difference,
>> we provide a test case pairing, "pa" and "pb", which is essentially the
>> same as the ca/cb pair save that projection is turned on for pa/pb.
>>
>>
>>
>>
>> _______________________________________________
>> Nek5000-users mailing list
>> Nek5000-users at lists.mcs.anl.gov
>> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users
>
>
> _______________________________________________
> Nek5000-users mailing list
> Nek5000-users at lists.mcs.anl.gov
> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users
>

From nek5000-users at lists.mcs.anl.gov Mon Jan 9 08:09:41 2012
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Mon, 09 Jan 2012 19:39:41 +0530
Subject: [Nek5000-users] Compilation problem due to large LHIS
Message-ID: <4F0AF525.2070304@iitk.ac.in>

Dear Nek devs,

I need to put LHIS in the SIZE file to a large number (52428800), but with
this number I cannot seem to compile, no matter how much I increase the
number of processors (LP) and decrease LELT. The maximum LP that I tried
is 2048, with LELT=77.

I have attached the compile log and the SIZE file. Could I get around this
problem somehow?

Thanks,
Mani
-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: compile_log
URL: 
-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: high_ray.usr
URL: 
-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: SIZE
URL: 

From nek5000-users at lists.mcs.anl.gov Mon Jan 9 09:44:08 2012
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Mon, 9 Jan 2012 09:44:08 -0600 (CST)
Subject: [Nek5000-users] Compilation problem due to large LHIS
In-Reply-To: <4F0AF525.2070304@iitk.ac.in>
References: <4F0AF525.2070304@iitk.ac.in>
Message-ID: 

Mani,

Why do you need it so large?

Is it possible you can partition this across your processor set?

Best regards,

Paul

On Mon, 9 Jan 2012, nek5000-users at lists.mcs.anl.gov wrote:

> Dear Nek devs,
>
> I need to put LHIS in the SIZE file to a large number (52428800), but with
> this number I cannot seem to compile despite any amount of increasing the
> processors and decreasing LELT. The maximum LP that I tried is 2048 and
> LELT=77.
>
> I have attached the compile log and the SIZE file. Could I get around this
> problem somehow?
> > Thanks, > Mani > From nek5000-users at lists.mcs.anl.gov Mon Jan 9 11:41:24 2012 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Mon, 9 Jan 2012 23:11:24 +0530 Subject: [Nek5000-users] Compilation problem due to large LHIS In-Reply-To: References: <4F0AF525.2070304@iitk.ac.in> Message-ID: Hi Paul, We are trying to compute the energy spectrum by transforming the fields onto a particular basis set. The geometry is a box of size 5 x 5 x 1 with 157464 elements. The code to transform onto the basis set takes in hpts.out as the input. The size of the transform is 640 x 640 x 128 which gives 52428800. So you're saying that hpts() has to be parallelized? Thanks, Mani On Jan 9, 2012 9:14 PM, wrote: > > Mani, > > Why do you need it so large? > > Is it possible you can partition this across your processor set? > > Best regards, > > Paul > > > On Mon, 9 Jan 2012, nek5000-users at lists.mcs.anl.**govwrote: > > Dear Nek devs, >> >> I need to put LHIS in the SIZE file to a large number (52428800), but >> with this number I cannot seem to compile despite any amount of increasing >> the processors and decreasing LELT. The maximum LP that I tried is 2048 and >> LELT=77. >> >> I have attached the compile log and the SIZE file. Could I get around >> this problem somehow? >> >> Thanks, >> Mani >> >> ______________________________**_________________ > Nek5000-users mailing list > Nek5000-users at lists.mcs.anl.**gov > https://lists.mcs.anl.gov/**mailman/listinfo/nek5000-users > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nek5000-users at lists.mcs.anl.gov Mon Jan 9 11:50:48 2012 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Mon, 9 Jan 2012 11:50:48 -0600 (CST) Subject: [Nek5000-users] Compilation problem due to large LHIS In-Reply-To: References: <4F0AF525.2070304@iitk.ac.in> Message-ID: Yes... looking at the code, I see this needs to be done. Fortunately, it is easy to parallelize. We'll look into it... Paul On Mon, 9 Jan 2012, nek5000-users at lists.mcs.anl.gov wrote: > Hi Paul, > > We are trying to compute the energy spectrum by transforming the fields > onto a particular basis set. The geometry is a box of size 5 x 5 x 1 with > 157464 elements. The code to transform onto the basis set takes in hpts.out > as the input. The size of the transform is 640 x 640 x 128 which > gives 52428800. So you're saying that hpts() has to be parallelized? > > Thanks, > > Mani > On Jan 9, 2012 9:14 PM, wrote: > >> >> Mani, >> >> Why do you need it so large? >> >> Is it possible you can partition this across your processor set? >> >> Best regards, >> >> Paul >> >> >> On Mon, 9 Jan 2012, nek5000-users at lists.mcs.anl.**govwrote: >> >> Dear Nek devs, >>> >>> I need to put LHIS in the SIZE file to a large number (52428800), but >>> with this number I cannot seem to compile despite any amount of increasing >>> the processors and decreasing LELT. The maximum LP that I tried is 2048 and >>> LELT=77. >>> >>> I have attached the compile log and the SIZE file. Could I get around >>> this problem somehow? 
>>>
>>> Thanks,
>>> Mani
>>>
>>> _______________________________________________
>> Nek5000-users mailing list
>> Nek5000-users at lists.mcs.anl.gov
>> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users
>>
>

From nek5000-users at lists.mcs.anl.gov Mon Jan 9 15:07:02 2012
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Mon, 9 Jan 2012 22:07:02 +0100
Subject: [Nek5000-users] Compilation problem due to large LHIS
In-Reply-To: 
References: <4F0AF525.2070304@iitk.ac.in>
Message-ID: 

I guess one way to do this is to recycle my g2gi() code. It's fully
parallel.

-Stefan

On 1/9/12, nek5000-users at lists.mcs.anl.gov wrote:
>
> Yes... looking at the code, I see this needs to be done.
>
> Fortunately, it is easy to parallelize.
>
> We'll look into it...
>
> Paul
>
>
> On Mon, 9 Jan 2012, nek5000-users at lists.mcs.anl.gov wrote:
>
>> Hi Paul,
>>
>> We are trying to compute the energy spectrum by transforming the fields
>> onto a particular basis set. The geometry is a box of size 5 x 5 x 1 with
>> 157464 elements. The code to transform onto the basis set takes in
>> hpts.out as the input. The size of the transform is 640 x 640 x 128 which
>> gives 52428800. So you're saying that hpts() has to be parallelized?
>>
>> Thanks,
>>
>> Mani
>> On Jan 9, 2012 9:14 PM, wrote:
>>
>>>
>>> Mani,
>>>
>>> Why do you need it so large?
>>>
>>> Is it possible you can partition this across your processor set?
>>>
>>> Best regards,
>>>
>>> Paul
>>>
>>>
>>> On Mon, 9 Jan 2012, nek5000-users at lists.mcs.anl.gov wrote:
>>>
>>> Dear Nek devs,
>>>>
>>>> I need to put LHIS in the SIZE file to a large number (52428800), but
>>>> with this number I cannot seem to compile despite any amount of
>>>> increasing the processors and decreasing LELT. The maximum LP that I
>>>> tried is 2048 and LELT=77.
>>>>
>>>> I have attached the compile log and the SIZE file. Could I get around
>>>> this problem somehow?
>>>>
>>>> Thanks,
>>>> Mani
>>>>
>>>> _______________________________________________
>>> Nek5000-users mailing list
>>> Nek5000-users at lists.mcs.anl.gov
>>> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users
>>>
>>
> _______________________________________________
> Nek5000-users mailing list
> Nek5000-users at lists.mcs.anl.gov
> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users
>

From nek5000-users at lists.mcs.anl.gov Tue Jan 10 05:02:49 2012
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Tue, 10 Jan 2012 12:02:49 +0100
Subject: [Nek5000-users] run-time hang up in gs_setup
Message-ID: <1326193369.5547.30.camel@damavand.mech.kth.se>

Dear NEKs;

I am trying to run a simulation of turbulent flow in a straight pipe at
high Reynolds number (Re_tau = 1000). After generating the grid with
PRENEK and extruding it using n2to3, the mesh ended up with 4,495,920
elements. It compiled properly; however, when I try to run it, it hangs
at the last stage:
########################################################################

verify mesh topology
-1.000000000000000 1.000000000000000 Xrange
-1.000000000000000 1.000000000000000 Yrange
0.000000000000000 25.00000000000000 Zrange
done :: verify mesh topology

E-solver strategy: 1 itr
mg_nx: 1 3
mg_ny: 1 3
mg_nz: 1 3
call usrsetvert
done :: usrsetvert

gs_setup: 866937 unique labels shared
pairwise times (avg, min, max): 0.000241442 0.00019722 0.000265908
crystal router : 0.000458177 0.000445795 0.000471807
used all_to_all method: pairwise
setupds time 5.6048E-02 seconds 1 2 4565612 4495920
setvert3d: 4 86046564 122013924 86046564 86046564
call usrsetvert
done :: usrsetvert

gs_setup: 8041169 unique labels shared
pairwise times (avg, min, max): 0.00050716 0.000427103 0.00056479
crystal router : 0.0040165 0.00392921 0.00411811
used all_to_all method: pairwise
setupds time 1.0465E+00 seconds 2 4 86046564 4495920
setup h1 coarse grid, nx_crs= 2
call usrsetvert
done :: usrsetvert

gs_setup: 866937 unique labels shared
pairwise times (avg, min, max): 0.000233683 0.000197816 0.00024941
crystal router : 0.000466869 0.00045588 0.000478101
used all_to_all method: pairwise
########################################################################

I was wondering if you could help me with that. I attached the run
logfile and also genmap.out.

Many thanks
Azad

-------------- next part --------------
Input (.rea) file name: PIPE
Input mesh tolerance (default 0.2):
NOTE: smaller is better, but generous is more forgiving for bad meshes.
0.1
reading .rea file data ...
4495920 3 4495920 F nelt,ndim,nelv,ifre2 start locglob_lexico: 8 4495920 35967360 0.10000000 locglob: 1 1 35967360 locglob: 2 340 35967360 locglob: 3 8490 35967360 locglob: 1 4457250 35967360 locglob: 2 4574325 35967360 locglob: 3 4574325 35967360 locglob: 1 4574325 35967360 locglob: 2 4574325 35967360 locglob: 3 4574325 35967360 done locglob_lexico: 4574325 4574325 35967360 8 start periodic vtx: 4495920 4574325 1000 5 5 3 0.00000000E+00 1000 shift 2000 5 5 3 1.19209290E-07 2000 shift 3000 5 5 3 1.19209290E-07 3000 shift 4000 5 5 3 0.00000000E+00 4000 shift 5000 5 5 3 0.00000000E+00 5000 shift 6000 5 5 3 0.00000000E+00 6000 shift 7000 5 5 3 0.00000000E+00 7000 shift 8000 5 5 3 5.96046448E-08 8000 shift 4487760 6 5 3 1.19209290E-07 9000 shift 4488760 6 5 3 1.19209290E-07 10000 shift 4489760 6 5 3 0.00000000E+00 11000 shift 4490760 6 5 3 0.00000000E+00 12000 shift 4491760 6 5 3 1.49011612E-08 13000 shift 4492760 6 5 3 2.98023224E-08 14000 shift 4493760 6 5 3 0.00000000E+00 15000 shift 4494760 6 5 3 0.00000000E+00 16000 shift 4495760 6 5 3 1.49011612E-08 17000 shift done periodic vtx start rec_bisect: 4495920 done: 0.0% not connected 1 2 1 0 done: 1.0% done: 2.0% done: 3.0% not connected 1 2 2 0 done: 4.0% not connected 1 2 3 0 done: 5.0% done: 6.0% done: 7.0% not connected 1 2 4 0 not connected 1 2 5 0 done: 8.0% not connected 1 2 6 0 done: 9.0% done: 10.0% done: 11.0% done: 12.0% done: 13.0% done: 14.0% not connected 1 2 7 0 done: 15.0% done: 16.0% done: 17.0% not connected 1 2 8 0 done: 18.0% done: 19.0% not connected 1 2 9 0 done: 20.0% not connected 1 2 10 0 done: 21.0% not connected 1 2 11 0 done: 22.0% not connected 1 2 12 0 not connected 1 2 13 0 done: 23.0% not connected 1 2 14 0 not connected 1 2 15 0 done: 24.0% not connected 1 2 16 0 not connected 1 2 17 0 done: 25.0% not connected 1 2 18 0 not connected 1 2 19 0 not connected 1 2 20 0 not connected 1 2 21 0 done: 26.0% not connected 1 2 22 0 not connected 1 2 23 0 done: 27.0% done: 28.0% not connected 1 2 24 0 done: 29.0% done: 30.0% not connected 1 2 25 0 done: 31.0% done: 32.0% not connected 1 2 26 0 done: 33.0% done: 34.0% not connected 1 2 27 0 done: 35.0% done: 36.0% done: 37.0% not connected 1 2 28 0 done: 38.0% done: 39.0% done: 40.0% done: 41.0% not connected 1 2 29 0 done: 42.0% done: 43.0% not connected 1 2 30 0 done: 44.0% not connected 1 2 31 0 done: 45.0% done: 46.0% not connected 1 2 32 0 not connected 1 2 33 0 done: 47.0% not connected 1 2 34 0 done: 48.0% done: 49.0% not connected 1 2 35 0 not connected 1 2 36 0 done: 50.0% not connected 1 2 37 0 done: 51.0% not connected 1 2 38 0 done: 52.0% not connected 1 2 39 0 done: 53.0% done: 54.0% done: 55.0% done: 56.0% done: 57.0% done: 58.0% not connected 1 2 40 0 done: 59.0% not connected 1 2 41 0 done: 60.0% not connected 1 2 42 0 not connected 1 2 43 0 done: 61.0% not connected 1 2 44 0 done: 62.0% not connected 1 2 45 0 not connected 1 2 46 0 done: 63.0% done: 64.0% done: 65.0% done: 66.0% not connected 1 2 47 0 not connected 1 2 48 0 done: 67.0% done: 68.0% done: 69.0% done: 70.0% not connected 1 2 49 0 done: 71.0% not connected 1 2 50 0 done: 72.0% not connected 1 2 51 0 done: 73.0% done: 74.0% not connected 1 2 52 0 done: 75.0% not connected 1 2 53 0 not connected 1 2 54 0 done: 76.0% done: 77.0% done: 78.0% not connected 1 2 55 0 done: 79.0% not connected 1 2 56 0 not connected 1 2 57 0 done: 80.0% not connected 1 2 58 0 done: 81.0% done: 82.0% not connected 1 2 59 0 done: 83.0% done: 84.0% not connected 1 2 60 0 done: 85.0% not connected 1 2 61 0 not connected 
1 2 62 0 done: 86.0% not connected 1 2 63 0 done: 87.0% done: 88.0% done: 89.0% done: 90.0% done: 91.0% done: 92.0% not connected 1 2 64 0 done: 93.0% not connected 1 2 65 0 done: 94.0% done: 95.0% done: 96.0% done: 97.0% done: 98.0% done: 99.0% done: 100.0% done rec_bisect writing PIPE.map -------------- next part -------------- PE 0: MPICH/GNI environment settings: PE 0: MPICH_GNI_RECV_CQ_SIZE = 40960 PE 0: MPICH_GNI_LOCAL_CQ_SIZE = 8192 PE 0: MPICH_GNI_DEBUG_LEVEL = 0 PE 0: MPICH_GNI_MAX_VSHORT_MSG_SIZE = 464 PE 0: MPICH_GNI_MAX_EAGER_MSG_SIZE = 8192 PE 0: MPICH_GNI_NUM_BUFS = 64 PE 0: MPICH_GNI_NUM_MBOXES = -1 PE 0: MPICH_GNI_RDMA_THRESHOLD = 1024 PE 0: MPICH_GNI_RCVCQ_PROCNUM = 1 PE 0: MPICH_GNI_NDREG_ENTRIES(req.) = 151 PE 0: MPICH_GNI_NDREG_MAXSIZE = 524288 PE 0: MPICH_GNI_NDREG_LAZYMEM = LAZY_ALL PE 0: MPICH_GNI_DMAPP_INTEROP = 1 PE 0: MPICH_GNI_DYNAMIC_CONN = 1 PE 0: MPICH_GNI_MAX_NUM_RETRIES = 16 PE 0: MPICH_GNI_FORK_MODE = PARTCOPY PE 0: MPICH_GNI_MBOX_PLACEMENT = PROC PE 0: MPICH_GNI_VC_MSG_PROTOCOL = MBOX PE 0: MPICH_GNI_BTE_MULTI_CHANNEL = 1 PE 0: MPICH_GNI_LMT_GET_PATH = 1 PE 0: MPICH_GNI_LMT_PATH = 1 /----------------------------------------------------------\ | _ __ ______ __ __ ______ ____ ____ ____ | | / | / // ____// //_/ / ____/ / __ \ / __ \ / __ \ | | / |/ // __/ / ,< /___ \ / / / // / / // / / / | | / /| // /___ / /| | ____/ / / /_/ // /_/ // /_/ / | | /_/ |_//_____//_/ |_|/_____/ \____/ \____/ \____/ | | | |----------------------------------------------------------| | | | NEK5000: Open Source Spectral Element Solver | | COPYRIGHT (c) 2008-2010 UCHICAGO ARGONNE, LLC | | Version: 1.0rc1 / SVN r618 | | Web: http://nek5000.mcs.anl.gov | | | \----------------------------------------------------------/ Number of processors: 4096 REAL wdsize : 8 INTEGER wdsize : 4 Beginning session: /cfs/klemming/nobackup/a/anoorani/pipe/Retau1000/new/pipe.rea timer accuracy: 0.0000000E+00 sec read .rea file read .re2 file byte swap: F 6.543210 -2.9312772E+35 nelgt/nelgv/lelt: 4495920 4495920 1100 lx1 /lx2 /lx3 : 4 2 2 mapping elements to processors element load imbalance: 1 1097 1098 done :: mapping elements to processors reading mesh reading curved sides reading bc for ifld 1 done :: read .re2 file 0 objects found done :: read .rea file 114.58 sec setup mesh topology Right-handed check complete for 4495920 elements. OK. setvert3d: 4 86046564 122013924 86046564 86046564 call usrsetvert done :: usrsetvert gs_setup: 13275258 unique labels shared pairwise times (avg, min, max): 0.000561914 0.000431108 0.000673413 crystal router : 0.00190646 0.00186119 0.00197728 used all_to_all method: pairwise setupds time 3.6859E-01 seconds 0 4 86046564 4495920 8 max multiplicity done :: setup mesh topology call usrdat done :: usrdat generate geomerty data vol_t,vol_v: 78.53235632719681 78.53235632719681 done :: generate geomerty data call usrdat2 done :: usrdat2 regenerate geomerty data 1 vol_t,vol_v: 78.53235632719681 78.53235632719681 done :: regenerate geomerty data 1 verify mesh topology -1.000000000000000 1.000000000000000 Xrange -1.000000000000000 1.000000000000000 Yrange 0.000000000000000 25.00000000000000 Zrange done :: verify mesh topology 118 Parameters from file:/cfs/klemming/nobackup/a/anoorani/pipe/Retau1000/new/pipe.rea 1 1.00000 P001: DENSITY 2 -18850. 
P002: VISCOS 7 1.00000 P007: RHOCP 8 1.00000 P008: CONDUCT 11 100.0 P011: NSTEPS 12 -1.0000E-05 P012: DT 15 50.0000 P015: IOSTEP 17 1.00000 P017: 18 0.500000E-01 P018: GRID < 0 --> # cells on screen 19 -1.00000 P019: INTYPE 20 10.0000 P020: NORDER 21 0.100000E-05 P021: DIVERGENCE 22 0.100000E-06 P022: HELMHOLTZ 24 0.100000E-01 P024: TOLREL 25 0.100000E-01 P025: TOLABS 26 1.00000 P026: COURANT/NTAU 27 3.00000 P027: TORDER 28 0.00000 P028: TORDER: mesh velocity (0: p28=p27) 54 -3.00000 P054: fixed flow rate dir: |p54|=1,2,3=x,y,z 55 1.00000 P055: vol.flow rate (p54>0) or Ubar (p54<0) 65 1.00000 P065: #iofiles (eg, 0 or 64); <0 --> sep. dirs 66 4.00000 P066: output : <0=ascii, else binary 67 4.00000 P067: restart: <0=ascii, else binary 69 50000.0 P069: : : frequency of srf dump 93 20.0000 P093: Number of previous pressure solns saved 94 3.00000 P094: start projecting velocity after p94 step 95 5.00000 P095: start projecting pressure after p95 step 99 3.00000 P099: dealiasing: <0--> off/3--> old/4--> new 102 1.00000 P102: Dump out divergence at each time step 103 0.050000 P103: weight of stabilizing filter (.01) IFTRAN = T IFFLOW = T IFHEAT = F IFSPLIT = F IFLOMACH = F IFUSERVP = F IFUSERMV = F IFSTRS = F IFCHAR = T IFCYCLIC = F IFAXIS = F IFMVBD = F IFMELT = F IFMODEL = F IFKEPS = F IFMOAB = F IFSYNC = T IFVCOR = T IFINTQ = F IFCWUZ = F IFSWALL = F IFGEOM = F IFSURT = F IFWCNO = F IFTMSH for field 1 = F IFADVC for field 1 = T IFNONL for field 1 = F Dealiasing enabled, lxd= 6 Estimated eigenvalues EIGAA = 1.650197855862141 EIGGA = 12385388.91135105 EIGAE = 1.5791367041742974E-002 EIGAS = 7.9744816586921851E-004 EIGGE = 12385388.91135105 EIGGS = 2.000000000000000 verify mesh topology -1.000000000000000 1.000000000000000 Xrange -1.000000000000000 1.000000000000000 Yrange 0.000000000000000 25.00000000000000 Zrange done :: verify mesh topology E-solver strategy: 1 itr mg_nx: 1 3 mg_ny: 1 3 mg_nz: 1 3 call usrsetvert done :: usrsetvert gs_setup: 1393086 unique labels shared pairwise times (avg, min, max): 0.000286875 0.000198293 0.000347185 crystal router : 0.000353284 0.000346708 0.000362897 used all_to_all method: pairwise setupds time 3.3200E-02 seconds 1 2 4565612 4495920 setvert3d: 4 86046564 122013924 86046564 86046564 call usrsetvert done :: usrsetvert gs_setup: 13275258 unique labels shared pairwise times (avg, min, max): 0.000565474 0.000431323 0.000666308 crystal router : 0.00192211 0.00188022 0.00198519 used all_to_all method: pairwise setupds time 3.6968E-01 seconds 2 4 86046564 4495920 setup h1 coarse grid, nx_crs= 2 call usrsetvert done :: usrsetvert gs_setup: 1393086 unique labels shared pairwise times (avg, min, max): 0.00027245 0.000212216 0.000325584 crystal router : 0.000303 0.000294399 0.000315809 used all_to_all method: crystal router gs_setup: 1393086 unique labels shared pairwise times (avg, min, max): 0.000276989 0.000202703 0.000333595 crystal router : 0.000296147 0.000291204 0.000304508 used all_to_all method: crystal router Application 727540 exit codes: 143 Application 727540 resources: utime ~406010s, stime ~1268s From nek5000-users at lists.mcs.anl.gov Tue Jan 10 06:01:29 2012 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Tue, 10 Jan 2012 06:01:29 -0600 (CST) Subject: [Nek5000-users] run-time hang up in gs_setup In-Reply-To: <1326193369.5547.30.camel@damavand.mech.kth.se> References: <1326193369.5547.30.camel@damavand.mech.kth.se> Message-ID: Hi Azad, You are in record-setting territory for element counts! 
:) Are you using the amg-based coarse-grid solver? It is certain that you will need to do this (and, therefore, you will need matlab to process the AMG operators). There is some discussion of the steps on the wiki page. We can walk you through this process if you have any questions. What value of lx1 are you using? I would recommend fewer elements and a higher value of lx1. I think it will be easier to manage the data, etc. Paul On Tue, 10 Jan 2012, nek5000-users at lists.mcs.anl.gov wrote: > Dear NEKs; > > I am trying to run a simulation of a turbulent flow in a straight pipe > in high Reynolds number (Re_tau = 1000). After generating the grid with > PRENEK and extrude it using n2to3, the mesh ended up with 4,495,920 > elements. It compiled properly; however, trying to run it, hanged up in > the last stage: > ######################################################################## > > verify mesh topology > -1.000000000000000 1.000000000000000 Xrange > -1.000000000000000 1.000000000000000 Yrange > 0.000000000000000 25.00000000000000 Zrange > done :: verify mesh topology > > E-solver strategy: 1 itr > mg_nx: 1 3 > mg_ny: 1 3 > mg_nz: 1 3 > call usrsetvert > done :: usrsetvert > > gs_setup: 866937 unique labels shared > pairwise times (avg, min, max): 0.000241442 0.00019722 0.000265908 > crystal router : 0.000458177 0.000445795 0.000471807 > used all_to_all method: pairwise > setupds time 5.6048E-02 seconds 1 2 4565612 4495920 > setvert3d: 4 86046564 122013924 86046564 86046564 > call usrsetvert > done :: usrsetvert > > gs_setup: 8041169 unique labels shared > pairwise times (avg, min, max): 0.00050716 0.000427103 0.00056479 > crystal router : 0.0040165 0.00392921 0.00411811 > used all_to_all method: pairwise > setupds time 1.0465E+00 seconds 2 4 86046564 4495920 > setup h1 coarse grid, nx_crs= 2 > call usrsetvert > done :: usrsetvert > > gs_setup: 866937 unique labels shared > pairwise times (avg, min, max): 0.000233683 0.000197816 0.00024941 > crystal router : 0.000466869 0.00045588 0.000478101 > used all_to_all method: pairwise > ######################################################################## > > > I was wondering if you could help me with that. I attached the run > logfile and also genmap.out. > > Many thanks > Azad > From nek5000-users at lists.mcs.anl.gov Tue Jan 10 06:35:22 2012 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Tue, 10 Jan 2012 13:35:22 +0100 Subject: [Nek5000-users] run-time hang up in gs_setup In-Reply-To: References: <1326193369.5547.30.camel@damavand.mech.kth.se> Message-ID: Hi Azad, We have seen similar situations. I think this has to do with a known bug. Unfortunately this bug is hard to reproduce and we haven't managed to fix it yet. -Stefan On 1/10/12, nek5000-users at lists.mcs.anl.gov wrote: > > Hi Azad, > > You are in record-setting territory for element counts! :) > > Are you using the amg-based coarse-grid solver? > It is certain that you will need to do this (and, > therefore, you will need matlab to process the AMG > operators). There is some discussion of the steps > on the wiki page. We can walk you through this process > if you have any questions. > > What value of lx1 are you using? > > I would recommend fewer elements and a higher value of lx1. > I think it will be easier to manage the data, etc. 
>
> Paul
>
>
>
>
>
> On Tue, 10 Jan 2012, nek5000-users at lists.mcs.anl.gov wrote:
>
>> Dear NEKs;
>>
>> I am trying to run a simulation of a turbulent flow in a straight pipe
>> in high Reynolds number (Re_tau = 1000). After generating the grid with
>> PRENEK and extrude it using n2to3, the mesh ended up with 4,495,920
>> elements. It compiled properly; however, trying to run it, hanged up in
>> the last stage:
>> ########################################################################
>>
>> verify mesh topology
>> -1.000000000000000 1.000000000000000 Xrange
>> -1.000000000000000 1.000000000000000 Yrange
>> 0.000000000000000 25.00000000000000 Zrange
>> done :: verify mesh topology
>>
>> E-solver strategy: 1 itr
>> mg_nx: 1 3
>> mg_ny: 1 3
>> mg_nz: 1 3
>> call usrsetvert
>> done :: usrsetvert
>>
>> gs_setup: 866937 unique labels shared
>> pairwise times (avg, min, max): 0.000241442 0.00019722 0.000265908
>> crystal router : 0.000458177 0.000445795 0.000471807
>> used all_to_all method: pairwise
>> setupds time 5.6048E-02 seconds 1 2 4565612 4495920
>> setvert3d: 4 86046564 122013924 86046564 86046564
>> call usrsetvert
>> done :: usrsetvert
>>
>> gs_setup: 8041169 unique labels shared
>> pairwise times (avg, min, max): 0.00050716 0.000427103 0.00056479
>> crystal router : 0.0040165 0.00392921 0.00411811
>> used all_to_all method: pairwise
>> setupds time 1.0465E+00 seconds 2 4 86046564 4495920
>> setup h1 coarse grid, nx_crs= 2
>> call usrsetvert
>> done :: usrsetvert
>>
>> gs_setup: 866937 unique labels shared
>> pairwise times (avg, min, max): 0.000233683 0.000197816 0.00024941
>> crystal router : 0.000466869 0.00045588 0.000478101
>> used all_to_all method: pairwise
>> ########################################################################
>>
>>
>> I was wondering if you could help me with that. I attached the run
>> logfile and also genmap.out.
>>
>> Many thanks
>> Azad
>>
> _______________________________________________
> Nek5000-users mailing list
> Nek5000-users at lists.mcs.anl.gov
> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users
>

From nek5000-users at lists.mcs.anl.gov Tue Jan 10 10:58:47 2012
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Tue, 10 Jan 2012 17:58:47 +0100
Subject: [Nek5000-users] run-time hang up in gs_setup
Message-ID: <1326214727.2600.282.camel@damavand.mech.kth.se>

Dear Paul and Stefan;

Thanks very much for looking into it. I use polynomial order 7 (lx1=8).
For the coarse-grid solver I actually used XXt. I also tried to use AMG,
but unfortunately neither v619 nor the latest version would compile its
Matlab files; they always give me this error (in Matlab R2011a):
##############################################
...
sparsification tolerance [1e-4]: stol = 0.0001

------------------------------------------------------------------------
Segmentation violation detected at Tue Jan 10 15:56:46 2012
------------------------------------------------------------------------
....
Abnormal termination:
Segmentation violation
....
#############################################
I have been to the web page "amg_matlab Matlab based tool to generate
AMG solver inputfiles" (http://nek5000.mcs.anl.gov/index.php/Amg_matlab),
which gives me an empty link.

I had an old version of the .dat files needed to run AMG (amg_Aff.dat,
amgdmp_i.dat, amg.dat, amg_AfP.dat, amgdmp_p.dat, amgdmp_j.dat,
amg_W.dat); I tried those and got this error:

############################################
...
AMG: reading through row 142800, pass 119/121
AMG: reading through row 144000, pass 120/121
AMG: reading through row 144540, pass 121/121
ERROR (proc 0000,
/afs/pdc.kth.se/home/a/anoorani/codes/latest_nek/nek5_svn/trunk/nek/jl/amg.c:468):
AMG: missing data for some rows

call exitt: dying ...
############################################

I think AMG could be a way to overcome this problem, though I have not
managed to get a run with it. I will look into using a higher polynomial
order, to see whether it reduces the number of elements dramatically or
at least resolves this issue.

Best regards
Azad

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
>Hi Azad,
>
>We have seen similar situations. I think this has to do with a known
>bug. Unfortunately this bug is hard to reproduce and we haven't
>managed to fix it yet.
>
>-Stefan
>
>On 1/10/12, nek5000-users at lists.mcs.anl.gov wrote:
>
>Hi Azad,
>
>You are in record-setting territory for element counts! :)
>
>Are you using the amg-based coarse-grid solver?
>It is certain that you will need to do this (and,
>therefore, you will need matlab to process the AMG
>operators). There is some discussion of the steps
>on the wiki page. We can walk you through this process
>if you have any questions.
>
>What value of lx1 are you using?
>
>I would recommend fewer elements and a higher value of lx1.
>I think it will be easier to manage the data, etc.
>
>Paul
>
>
>
>
>
>On Tue, 10 Jan 2012, nek5000-users at lists.mcs.anl.gov wrote:
>
> Dear NEKs;
>
> I am trying to run a simulation of a turbulent flow in a straight pipe
> in high Reynolds number (Re_tau = 1000). After generating the grid with
> PRENEK and extrude it using n2to3, the mesh ended up with 4,495,920
> elements. It compiled properly; however, trying to run it, hanged up in
> the last stage:
> ########################################################################
>
> verify mesh topology
> -1.000000000000000 1.000000000000000 Xrange
> -1.000000000000000 1.000000000000000 Yrange
> 0.000000000000000 25.00000000000000 Zrange
> done :: verify mesh topology
>
> E-solver strategy: 1 itr
> mg_nx: 1 3
> mg_ny: 1 3
> mg_nz: 1 3
> call usrsetvert
> done :: usrsetvert
>
> gs_setup: 866937 unique labels shared
> pairwise times (avg, min, max): 0.000241442 0.00019722 0.000265908
> crystal router : 0.000458177 0.000445795 0.000471807
> used all_to_all method: pairwise
> setupds time 5.6048E-02 seconds 1 2 4565612 4495920
> setvert3d: 4 86046564 122013924 86046564 86046564
> call usrsetvert
> done :: usrsetvert
>
> gs_setup: 8041169 unique labels shared
> pairwise times (avg, min, max): 0.00050716 0.000427103 0.00056479
> crystal router : 0.0040165 0.00392921 0.00411811
> used all_to_all method: pairwise
> setupds time 1.0465E+00 seconds 2 4 86046564 4495920
> setup h1 coarse grid, nx_crs= 2
> call usrsetvert
> done :: usrsetvert
>
> gs_setup: 866937 unique labels shared
> pairwise times (avg, min, max): 0.000233683 0.000197816 0.00024941
> crystal router : 0.000466869 0.00045588 0.000478101
> used all_to_all method: pairwise
> ########################################################################
>
>
> I was wondering if you could help me with that. I attached the run
> logfile and also genmap.out.
>
> Many thanks
> Azad
>

From nek5000-users at lists.mcs.anl.gov Tue Jan 10 12:13:45 2012
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Tue, 10 Jan 2012 19:13:45 +0100
Subject: [Nek5000-users] run-time hang up in gs_setup
In-Reply-To: <1326214727.2600.282.camel@damavand.mech.kth.se>
References: <1326214727.2600.282.camel@damavand.mech.kth.se>
Message-ID: 

Hi Azad,

your choice of lx1=8 is fine (it's our preferred sweet spot). If you
have a large element count (say > 300'000) the factorization in the
XXt setup phase may take hours; I guess that's why it looks like it's
hanging. Then again, there is a known bug which looks the same, so I
can't tell exactly what's causing your problem.

I just updated the Wiki: https://nek5000.mcs.anl.gov/index.php/Amg_matlab

Can you verify whether it still fails?

-Stefan

On 1/10/12, nek5000-users at lists.mcs.anl.gov wrote:
> Dear Paul and Stefan;
>
> Thanks very much for looking into it. I use polynomial order 7th
> (lx1=8). For the coarse-grid solver I actually used XXt. I also tried to
> use AMG, but unfortunately neither v619 nor the latest version could
> have compiled its matlab files and always gives me this error (in
> matlab/R2011a):
> ##############################################
> ...
> sparsification tolerance [1e-4]: stol = 0.0001
>
> ------------------------------------------------------------------------
> Segmentation violation detected at Tue Jan 10 15:56:46 2012
> ------------------------------------------------------------------------
> ....
> Abnormal termination:
> Segmentation violation
> ....
> #############################################
> I have been in the web page: "amg_matlab Matlab based tool to generate
> AMG solver inputfiles" (http://nek5000.mcs.anl.gov/index.php/Amg_matlab)
> which gives me an empty link.
>
> I had an old version of the .dat files needed to run AMG, which I tried
> those as (amg_Aff.dat, amgdmp_i.dat, amg.dat, amg_AfP.dat, amgdmp_p.dat,
> amgdmp_j.dat, amg_W.dat) and I have got this error:
>
> ############################################
> ...
> AMG: reading through row 142800, pass 119/121
> AMG: reading through row 144000, pass 120/121
> AMG: reading through row 144540, pass 121/121
> ERROR (proc 0000,
> /afs/pdc.kth.se/home/a/anoorani/codes/latest_nek/nek5_svn/trunk/nek/jl/amg.c:468):
> AMG: missing data for some rows
>
> call exitt: dying ...
> ############################################
>
> I think AMG could be a possibility to overcome this problem, though I
> could not manage to get a run with that one. I look into the problem
> with higher polynomial order to see if it reduces the number of elements
> dramatically, or at least resolve this issue.
>
> Best regards
> Azad
>
> %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
>>Hi Azad,
>>
>>We have seen similar situations. I think this has to do with a known
>>bug. Unfortunately this bug is hard to reproduce and we haven't
>>managed to fix it yet.
>>
>>-Stefan
>>
>>On 1/10/12, nek5000-users at lists.mcs.anl.gov wrote:
>>
>>Hi Azad,
>>
>>You are in record-setting territory for element counts! :)
>>
>>Are you using the amg-based coarse-grid solver?
>>It is certain that you will need to do this (and,
>>therefore, you will need matlab to process the AMG
>>operators). There is some discussion of the steps
>>on the wiki page. We can walk you through this process
>>if you have any questions.
>>
>>What value of lx1 are you using?
>>
>>I would recommend fewer elements and a higher value of lx1.
>>I think it will be easier to manage the data, etc. >> >>Paul >> >> >> >> >> >>On Tue, 10 Jan 2012, nek5000-users at lists.mcs.anl.gov wrote: >> >> Dear NEKs; >> >> I am trying to run a simulation of a turbulent flow in a straight pipe >> in high Reynolds number (Re_tau = 1000). After generating the grid with >> PRENEK and extrude it using n2to3, the mesh ended up with 4,495,920 >> elements. It compiled properly; however, trying to run it, hanged up in >> the last stage: >> ######################################################################## >> >> verify mesh topology >> -1.000000000000000 1.000000000000000 Xrange >> -1.000000000000000 1.000000000000000 Yrange >> 0.000000000000000 25.00000000000000 Zrange >> done :: verify mesh topology >> >> E-solver strategy: 1 itr >> mg_nx: 1 3 >> mg_ny: 1 3 >> mg_nz: 1 3 >> call usrsetvert >> done :: usrsetvert >> >> gs_setup: 866937 unique labels shared >> pairwise times (avg, min, max): 0.000241442 0.00019722 0.000265908 >> crystal router : 0.000458177 0.000445795 0.000471807 >> used all_to_all method: pairwise >> setupds time 5.6048E-02 seconds 1 2 4565612 4495920 >> setvert3d: 4 86046564 122013924 86046564 86046564 >> call usrsetvert >> done :: usrsetvert >> >> gs_setup: 8041169 unique labels shared >> pairwise times (avg, min, max): 0.00050716 0.000427103 0.00056479 >> crystal router : 0.0040165 0.00392921 0.00411811 >> used all_to_all method: pairwise >> setupds time 1.0465E+00 seconds 2 4 86046564 4495920 >> setup h1 coarse grid, nx_crs= 2 >> call usrsetvert >> done :: usrsetvert >> >> gs_setup: 866937 unique labels shared >> pairwise times (avg, min, max): 0.000233683 0.000197816 0.00024941 >> crystal router : 0.000466869 0.00045588 0.000478101 >> used all_to_all method: pairwise >> ######################################################################## >> >> >> I was wondering if you could help me with that. I attached the run >> logfile and also genmap.out. >> >> Many thanks >> Azad >> > > _______________________________________________ > Nek5000-users mailing list > Nek5000-users at lists.mcs.anl.gov > https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users > From nek5000-users at lists.mcs.anl.gov Tue Jan 10 13:25:58 2012 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Tue, 10 Jan 2012 13:25:58 -0600 (CST) Subject: [Nek5000-users] run-time hang up in gs_setup In-Reply-To: <1326214727.2600.282.camel@damavand.mech.kth.se> Message-ID: <1339418296.129208.1326223558062.JavaMail.root@zimbra.anl.gov> Hi Azad, I believe old AMG files should work up to and including revision 707 in case you want to check AMG quickly. Best. Aleks ----- Original Message ----- From: nek5000-users at lists.mcs.anl.gov To: nek5000-users at lists.mcs.anl.gov Sent: Tuesday, January 10, 2012 10:58:47 AM Subject: Re: [Nek5000-users] run-time hang up in gs_setup Dear Paul and Stefan; Thanks very much for looking into it. I use polynomial order 7th (lx1=8). For the coarse-grid solver I actually used XXt. I also tried to use AMG, but unfortunately neither v619 nor the latest version could have compiled its matlab files and always gives me this error (in matlab/R2011a): ############################################## ... sparsification tolerance [1e-4]: stol = 0.0001 ------------------------------------------------------------------------ Segmentation violation detected at Tue Jan 10 15:56:46 2012 ------------------------------------------------------------------------ .... Abnormal termination: Segmentation violation .... 
############################################# I have been in the web page: "amg_matlab Matlab based tool to generate AMG solver inputfiles" (http://nek5000.mcs.anl.gov/index.php/Amg_matlab) which gives me an empty link. I had an old version of the .dat files needed to run AMG, which I tried those as (amg_Aff.dat, amgdmp_i.dat, amg.dat, amg_AfP.dat, amgdmp_p.dat, amgdmp_j.dat, amg_W.dat) and I have got this error: ############################################ ... AMG: reading through row 142800, pass 119/121 AMG: reading through row 144000, pass 120/121 AMG: reading through row 144540, pass 121/121 ERROR (proc 0000, /afs/pdc.kth.se/home/a/anoorani/codes/latest_nek/nek5_svn/trunk/nek/jl/amg.c:468): AMG: missing data for some rows call exitt: dying ... ############################################ I think AMG could be a possibility to overcome this problem, though I could not manage to get a run with that one. I look into the problem with higher polynomial order to see if it reduces the number of elements dramatically, or at least resolve this issue. Best regards Azad %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% >Hi Azad, > >We have seen similar situations. I think this has to do with a known >bug. Unfortunately this bug is hard to reproduce and we haven't >managed to fix it yet. > >-Stefan > >On 1/10/12, nek5000-users at lists.mcs.anl.gov > wrote: > > >Hi Azad, > >You are in record-setting territory for element counts! :) > >Are you using the amg-based coarse-grid solver? >It is certain that you will need to do this (and, >therefore, you will need matlab to process the AMG >operators). There is some discussion of the steps >on the wiki page. We can walk you through this process >if you have any questions. > >What value of lx1 are you using? > >I would recommend fewer elements and a higher value of lx1. >I think it will be easier to manage the data, etc. > >Paul > > > > > >On Tue, 10 Jan 2012, nek5000-users at lists.mcs.anl.gov wrote: > > Dear NEKs; > > I am trying to run a simulation of a turbulent flow in a straight pipe > in high Reynolds number (Re_tau = 1000). After generating the grid with > PRENEK and extrude it using n2to3, the mesh ended up with 4,495,920 > elements. 
It compiled properly; however, trying to run it, hanged up in
> the last stage:
> ########################################################################
>
> verify mesh topology
> -1.000000000000000 1.000000000000000 Xrange
> -1.000000000000000 1.000000000000000 Yrange
> 0.000000000000000 25.00000000000000 Zrange
> done :: verify mesh topology
>
> E-solver strategy: 1 itr
> mg_nx: 1 3
> mg_ny: 1 3
> mg_nz: 1 3
> call usrsetvert
> done :: usrsetvert
>
> gs_setup: 866937 unique labels shared
> pairwise times (avg, min, max): 0.000241442 0.00019722 0.000265908
> crystal router : 0.000458177 0.000445795 0.000471807
> used all_to_all method: pairwise
> setupds time 5.6048E-02 seconds 1 2 4565612 4495920
> setvert3d: 4 86046564 122013924 86046564 86046564
> call usrsetvert
> done :: usrsetvert
>
> gs_setup: 8041169 unique labels shared
> pairwise times (avg, min, max): 0.00050716 0.000427103 0.00056479
> crystal router : 0.0040165 0.00392921 0.00411811
> used all_to_all method: pairwise
> setupds time 1.0465E+00 seconds 2 4 86046564 4495920
> setup h1 coarse grid, nx_crs= 2
> call usrsetvert
> done :: usrsetvert
>
> gs_setup: 866937 unique labels shared
> pairwise times (avg, min, max): 0.000233683 0.000197816 0.00024941
> crystal router : 0.000466869 0.00045588 0.000478101
> used all_to_all method: pairwise
> ########################################################################
>
>
> I was wondering if you could help me with that. I attached the run
> logfile and also genmap.out.
>
> Many thanks
> Azad
>
_______________________________________________
Nek5000-users mailing list
Nek5000-users at lists.mcs.anl.gov
https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users

From nek5000-users at lists.mcs.anl.gov Wed Jan 11 10:57:31 2012
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Wed, 11 Jan 2012 10:57:31 -0600
Subject: [Nek5000-users] "usrdiv" not working with Pn/Pn-2 formulation
Message-ID: 

Hello,

I was trying to use the "subroutine fill_div(usrdiv)" with the Pn/Pn-2
formulation for a backward-facing step simulation. This used to work
before; now it is not working in my simulation. I did a little bit of
digging by reverting to old Nek revisions and found the following:

1. Until revision 739, "usrdiv" was used in the main code via the
following lines for the Pn/Pn-2 formulation, in file navier4.f:

subroutine incompr()
.......
.......
.......
call add2col2(respr,bm2,usrdiv,ntot2) ! User-defined divergence
.......
.......

2. The "subroutine incompr()" is called from "subroutine plan3()",
which is in planx.f.

3. From revision 740, "subroutine plan3()" calls "subroutine incomprn()"
and not "subroutine incompr()", and incompr() is deleted from navier4.f.

4. "usrdiv" is missing in the new incomprn() subroutine. Hence, the
value of "usrdiv" computed in the *.usr file is not added to the main
solution.

Just wanted to point this problem out; it may be useful for others
running turbulent simulations. I got around this problem by
interpolating an old solution onto the new grid.

Regards,
Harish.
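(For anyone hitting the same regression before it is fixed upstream, a
minimal, untested sketch of the code-level workaround implied by points
1-4 above. It assumes respr, bm2, and the usrdiv array are accessible
inside incomprn() exactly as they were in the old incompr(); verify the
declarations against revision 739 of navier4.f before trying this.)

c-----------------------------------------------------------------------
c     Untested sketch: restore the user-defined divergence term in
c     incomprn(), mirroring the pre-r740 incompr().  All names here
c     (respr, bm2, usrdiv, nx2/ny2/nz2/nelv) are assumed to be in
c     scope in incomprn() as they were in incompr().
c
      ntot2 = nx2*ny2*nz2*nelv
      call add2col2 (respr,bm2,usrdiv,ntot2) ! respr = respr + bm2*usrdiv
c
c     Place the call where the old incompr() had it: after the pressure
c     residual respr is assembled and before the pressure solve.
c-----------------------------------------------------------------------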
From nek5000-users at lists.mcs.anl.gov Wed Jan 11 11:40:10 2012
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Wed, 11 Jan 2012 18:40:10 +0100
Subject: [Nek5000-users] run-time hang up in gs_setup
In-Reply-To:
References:
Message-ID: <20120111184010.12494dzon8lwzuka@www.mech.kth.se>

Dear Stefan and Aleks;

Thanks for updating the wiki page regarding AMG. I presume there must be one more step, though: copying the generated files from amg_matlab to the run directory? (Or should they remain there, with the generated .dat files put in place after running the third step?)

By the way, none of the versions I tried worked (even 707!), even though I tried a range of Matlab versions. With the old version I had, I compiled again and got the four files, which was rather fast (ending with the message: Error contraction factor: 0.47...). I used them, and every time the run crashed:

###########################################################################
AMG: reading through row 144540, pass 121/121
AMG: reading 0.071106 MB of W
AMG: reading 0.115601 MB of AfP
AMG: reading 0.132477 MB of Aff
AMG level 1 F-vars: 440159
AMG level 2 F-vars: 55146
AMG level 3 F-vars: 28480
AMG level 4 F-vars: 17051
AMG level 5 F-vars: 7524
AMG level 6 F-vars: 5711
AMG level 7 F-vars: 5763
AMG level 8 F-vars: 28583
AMG level 9 F-vars: 5380
AMG level 10 F-vars: 5737
Application 731033 exit codes: 139
Application 731033 exit signals: Killed
Application 731033 resources: utime ~417s, stime ~3s
##########################################################################

Can you help me with this? I believe this case is still doable with a correct AMG setup.

Many thanks
Azad

> Hi Azad,
>
> I believe old AMG files should work up to and including revision 707,
> in case you want to check AMG quickly.
>
> Best,
> Aleks
>
> ----- Original Message -----
> From: nek5000-users at lists.mcs.anl.gov
> To: nek5000-users at lists.mcs.anl.gov
> Sent: Tuesday, January 10, 2012 10:58:47 AM
> Subject: Re: [Nek5000-users] run-time hang up in gs_setup
>
> Hi Azad,
>
> your choice of lx1=8 is fine (it's our preferred sweet spot). If you
> have a large element count (say > 300'000), the factorization in the
> XXt setup phase may take hours. I guess that's why it looks like it's
> hanging. Again, there is a known bug which looks the same, so I can't
> tell exactly what's causing your problem.
>
> I just updated the Wiki: https://nek5000.mcs.anl.gov/index.php/Amg_matlab
>
> Can you verify that it still fails?
>
> -Stefan
From nek5000-users at lists.mcs.anl.gov Thu Jan 12 11:09:48 2012
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Thu, 12 Jan 2012 18:09:48 +0100
Subject: [Nek5000-users] run-time hang up in gs_setup
In-Reply-To: <20120111184010.12494dzon8lwzuka@www.mech.kth.se>
References: <20120111184010.12494dzon8lwzuka@www.mech.kth.se>
Message-ID:

Hi Azad,

Can you try to run the turbChannel example using AMG and the latest version of the repo? Let me know if this works for you.

Stefan
From nek5000-users at lists.mcs.anl.gov Thu Jan 12 12:08:59 2012
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Thu, 12 Jan 2012 12:08:59 -0600
Subject: [Nek5000-users] "usrdiv" not working with Pn/Pn-2 formulation
In-Reply-To:
References:
Message-ID:

Thanks Harish,

We have changed induct.f to include the usrdiv call. The current revision now reflects this change.

-Katie

From nek5000-users at lists.mcs.anl.gov Thu Jan 12 13:04:33 2012
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Thu, 12 Jan 2012 13:04:33 -0600
Subject: [Nek5000-users] "usrdiv" not working with Pn/Pn-2 formulation
In-Reply-To:
References:
Message-ID:

You are welcome!

Harish.
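For reference, the user-side half of the mechanism Harish describes lives in the *.usr file. A minimal sketch follows, assuming (as the thread's wording "subroutine fill_div(usrdiv)" suggests) that the routine receives the mesh-2 divergence array to fill; the spatial profile below is purely illustrative and not from the original case.

c-----------------------------------------------------------------------
      subroutine fill_div(div)  ! user-provided, in the .usr file
c     Fill div (the usrdiv array on mesh 2) with the desired
c     user-defined divergence; zero everywhere means the usual
c     incompressible flow.
      include 'SIZE'
      include 'TOTAL'
      real div(lx2*ly2*lz2*lelv)

      ntot2 = nx2*ny2*nz2*nelv
      call rzero(div,ntot2)               ! divergence-free by default

      do i=1,ntot2                        ! illustrative only: weak
         if (xm2(i,1,1,1).gt.0.) div(i) = 1.e-3  ! expansion for x > 0
      enddo

      return
      end
c-----------------------------------------------------------------------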
From nek5000-users at lists.mcs.anl.gov Thu Jan 12 13:38:17 2012
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Thu, 12 Jan 2012 20:38:17 +0100
Subject: [Nek5000-users] run-time hang up in gs_setup
In-Reply-To:
References:
Message-ID: <20120112203817.15641380tw4ubq21@www.mech.kth.se>

Hi Stefan;

Unfortunately it does not work. First of all, ./run in amg_matlab does not produce anything (latest version). Using my version of the files (running turbChannel), the simulation crashed with this error:

###############################################
AMG level 8: 3 iterations with rho = 0.680429
AMG level 9: 2 iterations with rho = 0.560188
AMG: 144540 rows
AMG: reading through row 1200, pass 1/121
ERROR (proc 0000, /scratch/azad/codes/late/nek5_svn/trunk/nek/jl/amg.c:875):
AMG: data has more rows than given problem

call exitt: dying ...
###############################################

Regards
Azad
From nek5000-users at lists.mcs.anl.gov Thu Jan 12 14:09:12 2012
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Thu, 12 Jan 2012 14:09:12 -0600
Subject: [Nek5000-users] run-time hang up in gs_setup
In-Reply-To: <20120112203817.15641380tw4ubq21@www.mech.kth.se>
References: <20120112203817.15641380tw4ubq21@www.mech.kth.se>
Message-ID:

Hi Azad,

Earlier you asked whether a step was missing in the wiki with regard to the AMG files/scripts. The wiki has just been updated.

Can you tell me exactly what you did to run the turbChannel test? And what output did you get from the amg_matlab/run step?

thanks,
Katie
first of all, ./run in amg_matlab does not > produce anything (latest version). Using my version of the files (running > turbChannel), the simulation crashed giving me this error: > > ##############################**################# > AMG level 8: 3 iterations with rho = 0.680429 > AMG level 9: 2 iterations with rho = 0.560188 > AMG: 144540 rows > AMG: reading through row 1200, pass 1/121 > ERROR (proc 0000, /scratch/azad/codes/late/nek5_**svn/trunk/nek/jl/amg.c:875): > AMG: data > has more rows than given problem > > call exitt: dying ... > ##############################**################# > > Regards > Azad > > Quoting nek5000-users-request at lists.**mcs.anl.gov > : > > Send Nek5000-users mailing list submissions to >> nek5000-users at lists.mcs.anl.**gov >> >> To subscribe or unsubscribe via the World Wide Web, visit >> https://lists.mcs.anl.gov/**mailman/listinfo/nek5000-users >> or, via email, send a message with subject or body 'help' to >> nek5000-users-request at lists.**mcs.anl.gov >> >> You can reach the person managing the list at >> nek5000-users-owner at lists.mcs.**anl.gov >> >> When replying, please edit your Subject line so it is more specific >> than "Re: Contents of Nek5000-users digest..." >> >> >> Today's Topics: >> >> 1. Re: run-time hang up in gs_setup (nek5000-users at lists.mcs.anl.**gov >> ) >> >> >> ------------------------------**------------------------------** >> ---------- >> >> Message: 1 >> Date: Thu, 12 Jan 2012 18:09:48 +0100 >> >> From: nek5000-users at lists.mcs.anl.**gov >> Subject: Re: [Nek5000-users] run-time hang up in gs_setup >> To: nek5000-users at lists.mcs.anl.**gov >> Message-ID: >> > gmail.com >> > >> >> Content-Type: text/plain; charset=ISO-8859-1 >> >> Hi Azad, >> >> can you try to run the turbChannel example using AMG and the latest >> version of the repo. Let me know if this works for you. >> >> Stefan >> >> On 1/11/12, nek5000-users at lists.mcs.anl.**gov >> > >> wrote: >> >>> Dear Stefan and Aleks; >>> >>> Thanks for updating the wiki webpage regarding the AMG, although, I >>> persume there must be another step also there exist, namely: copy the >>> generated files from the amg_matlb to the running directory? (Or they >>> should be remained there and one puts the generated .dat files after >>> running the 3rd step?). By the way non of the versions I tried working >>> (even 707!) despite the fact that I had a range of matlab versions >>> tried. Hanging with the old version I had I compiled again and have >>> got the four files which was rather fast (with the message at the end: >>> Error contraction factor: 0.47...) 
I used them and every time during >>> the run-time it crashed simply: >>> ##############################**##############################** >>> ############### >>> AMG: reading through row 144540, pass 121/121 >>> AMG: reading 0.071106 MB of W >>> AMG: reading 0.115601 MB of AfP >>> AMG: reading 0.132477 MB of Aff >>> AMG level 1 F-vars: 440159 >>> AMG level 2 F-vars: 55146 >>> AMG level 3 F-vars: 28480 >>> AMG level 4 F-vars: 17051 >>> AMG level 5 F-vars: 7524 >>> AMG level 6 F-vars: 5711 >>> AMG level 7 F-vars: 5763 >>> AMG level 8 F-vars: 28583 >>> AMG level 9 F-vars: 5380 >>> AMG level 10 F-vars: 5737 >>> Application 731033 exit codes: 139 >>> Application 731033 exit signals: Killed >>> Application 731033 resources: utime ~417s, stime ~3s >>> ##############################**##############################** >>> ############## >>> >>> Can you help me with that cause I believe this case still doable with >>> correct AMG scheme. >>> >>> Many thanks >>> Azad >>> >>> >>> >>>> Hi Azad, >>>> >>>> I believe old AMG files should work up to and including revision 707 >>>> in case >you want to check AMG quickly. >>>> >>>> Best. >>>> Aleks >>>> >>>> >>>> ----- Original Message ----- >>>> From: nek5000-users at lists.mcs.anl.gov >>>> To: nek5000-users at lists.mcs.anl.gov >>>> Sent: Tuesday, January 10, 2012 10:58:47 AM >>>> Subject: Re: [Nek5000-users] run-time hang up in gs_setup >>>> >>>> Hi Azad, >>>> >>>> your choice of lx1=8 is fine (it's our preferred sweet spot). If you >>>> have a large element count (say > 300'000) the factorization in the >>>> XXt setup phase may take hours. I guess that's why it looks like it's >>>> hanging. Again, there is a known bug which looks the same. So can't >>>> tell exactly what's causing your problem. >>>> >>>> I just updated the Wiki: https://nek5000.mcs.anl.gov/** >>>> index.php/Amg_matlab >>>> >>>> Can you verify that it still fails. >>>> >>>> -Stefan >>>> >>>> Quoting nek5000-users-request at lists.**mcs.anl.gov >>>> : >>>> >>>> Send Nek5000-users mailing list submissions to >>>> nek5000-users at lists.mcs.anl.**gov >>>> >>>> To subscribe or unsubscribe via the World Wide Web, visit >>>> https://lists.mcs.anl.gov/**mailman/listinfo/nek5000-users >>>> or, via email, send a message with subject or body 'help' to >>>> nek5000-users-request at lists.**mcs.anl.gov >>>> >>>> You can reach the person managing the list at >>>> nek5000-users-owner at lists.mcs.**anl.gov >>>> >>>> When replying, please edit your Subject line so it is more specific >>>> than "Re: Contents of Nek5000-users digest..." >>>> >>>> >>>> Today's Topics: >>>> >>>> 1. Re: run-time hang up in gs_setup (nek5000-users at lists.mcs.anl.** >>>> gov ) >>>> 2. Re: run-time hang up in gs_setup (nek5000-users at lists.mcs.anl.** >>>> gov ) >>>> 3. Re: run-time hang up in gs_setup (nek5000-users at lists.mcs.anl.** >>>> gov ) >>>> >>>> >>>> ------------------------------**------------------------------** >>>> ---------- >>>> >>>> Message: 1 >>>> Date: Tue, 10 Jan 2012 06:01:29 -0600 (CST) >>>> From: nek5000-users at lists.mcs.anl.**gov >>>> Subject: Re: [Nek5000-users] run-time hang up in gs_setup >>>> To: nek5000-users at lists.mcs.anl.**gov >>>> Message-ID: >>>> > >>>> Content-Type: TEXT/PLAIN; charset=US-ASCII; format=flowed >>>> >>>> >>>> Hi Azad, >>>> >>>> You are in record-setting territory for element counts! :) >>>> >>>> Are you using the amg-based coarse-grid solver? >>>> It is certain that you will need to do this (and, >>>> therefore, you will need matlab to process the AMG >>>> operators). 
>>> There is some discussion of the steps
>>> on the wiki page. We can walk you through this process
>>> if you have any questions.
>>>
>>> What value of lx1 are you using?
>>>
>>> I would recommend fewer elements and a higher value of lx1.
>>> I think it will be easier to manage the data, etc.
>>>
>>> Paul
>>>
>>> On Tue, 10 Jan 2012, nek5000-users at lists.mcs.anl.gov wrote:
>>>
>>>> Dear NEKs;
>>>>
>>>> I am trying to run a simulation of turbulent flow in a straight pipe
>>>> at high Reynolds number (Re_tau = 1000). After generating the grid
>>>> with PRENEK and extruding it with n2to3, the mesh ended up with
>>>> 4,495,920 elements. It compiled properly; however, when I try to run
>>>> it, it hangs at the last stage:
>>>>
>>>> ##########################################################
>>>> verify mesh topology
>>>> -1.000000000000000 1.000000000000000 Xrange
>>>> -1.000000000000000 1.000000000000000 Yrange
>>>> 0.000000000000000 25.00000000000000 Zrange
>>>> done :: verify mesh topology
>>>>
>>>> E-solver strategy: 1 itr
>>>> mg_nx: 1 3
>>>> mg_ny: 1 3
>>>> mg_nz: 1 3
>>>> call usrsetvert
>>>> done :: usrsetvert
>>>>
>>>> gs_setup: 866937 unique labels shared
>>>> pairwise times (avg, min, max): 0.000241442 0.00019722 0.000265908
>>>> crystal router : 0.000458177 0.000445795 0.000471807
>>>> used all_to_all method: pairwise
>>>> setupds time 5.6048E-02 seconds 1 2 4565612 4495920
>>>> setvert3d: 4 86046564 122013924 86046564 86046564
>>>> call usrsetvert
>>>> done :: usrsetvert
>>>>
>>>> gs_setup: 8041169 unique labels shared
>>>> pairwise times (avg, min, max): 0.00050716 0.000427103 0.00056479
>>>> crystal router : 0.0040165 0.00392921 0.00411811
>>>> used all_to_all method: pairwise
>>>> setupds time 1.0465E+00 seconds 2 4 86046564 4495920
>>>> setup h1 coarse grid, nx_crs= 2
>>>> call usrsetvert
>>>> done :: usrsetvert
>>>>
>>>> gs_setup: 866937 unique labels shared
>>>> pairwise times (avg, min, max): 0.000233683 0.000197816 0.00024941
>>>> crystal router : 0.000466869 0.00045588 0.000478101
>>>> used all_to_all method: pairwise
>>>> ##########################################################
>>>>
>>>> I was wondering if you could help me with that. I attached the run
>>>> logfile and also genmap.out.
>>>>
>>>> Many thanks
>>>> Azad
>>>
>>> ------------------------------
>>>
>>> Message: 2
>>> Date: Tue, 10 Jan 2012 13:35:22 +0100
>>> From: nek5000-users at lists.mcs.anl.gov
>>> Subject: Re: [Nek5000-users] run-time hang up in gs_setup
>>>
>>> Hi Azad,
>>>
>>> We have seen similar situations. I think this has to do with a known
>>> bug. Unfortunately this bug is hard to reproduce and we haven't
>>> managed to fix it yet.
>>>
>>> -Stefan
>>>
>>> On 1/10/12, nek5000-users at lists.mcs.anl.gov wrote:
>>>> [...]
>>>
>>> ------------------------------
>>> Message: 3
>>> Date: Tue, 10 Jan 2012 17:58:47 +0100
>>> From: nek5000-users at lists.mcs.anl.gov
>>> Subject: Re: [Nek5000-users] run-time hang up in gs_setup
>>>
>>> Dear Paul and Stefan;
>>>
>>> Thanks very much for looking into it. I use 7th-order polynomials
>>> (lx1=8). For the coarse-grid solver I actually used XXt. I also tried
>>> AMG, but unfortunately neither v619 nor the latest version would get
>>> through its matlab files; both always give me this error (in
>>> matlab/R2011a):
>>>
>>> ###############################################
>>> ...
>>> sparsification tolerance [1e-4]: stol = 0.0001
>>>
>>> --------------------------------------------------------------------
>>> Segmentation violation detected at Tue Jan 10 15:56:46 2012
>>> --------------------------------------------------------------------
>>> ....
>>> Abnormal termination:
>>> Segmentation violation
>>> ....
>>> ###############################################
>>>
>>> I have been to the web page "amg_matlab: Matlab based tool to generate
>>> AMG solver inputfiles" (http://nek5000.mcs.anl.gov/index.php/Amg_matlab),
>>> which gives me an empty link.
>>>
>>> I had an old version of the .dat files needed to run AMG (amg_Aff.dat,
>>> amgdmp_i.dat, amg.dat, amg_AfP.dat, amgdmp_p.dat, amgdmp_j.dat,
>>> amg_W.dat), which I tried; with those I got this error:
>>>
>>> ###############################################
>>> ...
>>> AMG: reading through row 142800, pass 119/121
>>> AMG: reading through row 144000, pass 120/121
>>> AMG: reading through row 144540, pass 121/121
>>> ERROR (proc 0000, /afs/pdc.kth.se/home/a/anoorani/codes/latest_nek/nek5_svn/trunk/nek/jl/amg.c:468):
>>> AMG: missing data for some rows
>>>
>>> call exitt: dying ...
>>> ###############################################
>>>
>>> I think AMG could be a way to overcome this problem, though I could
>>> not manage to get a run with it. I will look into higher polynomial
>>> order to see if it reduces the number of elements dramatically, or at
>>> least resolves this issue.
>>>
>>> Best regards
>>> Azad
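As a rough illustration of the "fewer elements, higher lx1" suggestion above: for a conforming hex mesh the unique gridpoint count is roughly E*(lx1-1)^3, so the same point budget can be met with far fewer elements at higher order. The sketch below is back-of-the-envelope only (exact counts depend on the mesh and boundary conditions), and the lx1=12 target is just an example:

    % Back-of-the-envelope resolution bookkeeping (illustrative only).
    E   = 4495920;                       % element count reported in this thread
    lx1 = 8;                             % GLL points per element edge
    n   = E*(lx1-1)^3;                   % approx. unique gridpoints, ~1.5e9 here
    E12 = round(E*((lx1-1)/(12-1))^3);   % ~1.2e6 elements for the same budget at lx1=12
    fprintf('n ~ %.2e points, E at lx1=12 ~ %d elements\n', n, E12);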
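On the earlier question of where the generated files belong: assuming, as the thread suggests but does not confirm, that the solver reads the AMG input files from the case's run directory, the staging step can be scripted. A sketch only; the destination path is a made-up example, and the file names follow those listed earlier in this thread:

    % Hypothetical staging step: copy the AMG inputs produced by amg_matlab
    % into the Nek5000 case directory (the path below is an example).
    files  = {'amg.dat','amg_Aff.dat','amg_AfP.dat','amg_W.dat'};
    rundir = '/scratch/azad/turbChannel';      % replace with your case directory
    for k = 1:numel(files)
        copyfile(files{k}, fullfile(rundir, files{k}));
    end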
From nek5000-users at lists.mcs.anl.gov Thu Jan 12 14:34:35 2012
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Thu, 12 Jan 2012 21:34:35 +0100
Subject: [Nek5000-users] Nek5000-users Digest, Vol 35, Issue 9
Message-ID: <20120112213435.25972q7ffsnuvb2z@www.mech.kth.se>

Hi Katie;

Thanks for updating the page. I did exactly what is suggested on the
updated page. After ./run of the script in amg_matlab I got this error:

#########################################################
Computing Lagrange multiplier governing matrix skeleton ... nnz = 25.
Computing Lagrange multiplier governing matrix (C code) ... done.
CG to obtain Lagrange multipliers ... 1 iterations.
Computing interpolation weights (C code) ... done.
Sparsifying R_f A P: compression = 0
simple_sparsify: nnzs 4/4 (1)
Level 6, dim(A) = 2
Computing S = I - D^{-1/2} A D^{-1/2} ... done, nnz(S) = 2.
Running coarsening heuristic ...
ratio = 0, n = 0, max Gershgorin radius = 1
ratio = 0.5, n = 1, max Gershgorin radius = 0
connectivity = ??? Error using ==> fprintf
Function is not defined for sparse inputs.

Error in ==> coarse_par at 37
fprintf(1,'%g\n',max(lanczos(sparse(D*S*D))));

Error in ==> amg_setup at 30
[C F] = coarse_par(A,tolc);

Error in ==> go at 20
data = amg_setup(A, full(0*A(:,1)+1),ctol, tol, stol, wtol);
#########################################################

I tried it for the turbChannel test case as well as some other
simplified test cases and got the same errors.

Regards
Azad

Quoting Nek5000-users Digest, Vol 35, Issue 9:

> Message: 1
> Date: Thu, 12 Jan 2012 14:09:12 -0600
> From: nek5000-users at lists.mcs.anl.gov
> Subject: Re: [Nek5000-users] run-time hang up in gs_setup
>
> Hi Azad,
>
> Earlier you had asked if a step was missing in the wiki with regard to
> the amg files/scripts. The wiki has just been updated.
>
> Can you tell me exactly what you did to run the turbChannel test? And
> what output you had from the amg_matlab/run step?
>
> thanks,
> Katie
>
> On Thu, Jan 12, 2012 at 1:38 PM, wrote:
>> [...]
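The "??? Error using ==> fprintf / Function is not defined for sparse inputs" above can be reproduced outside amg_matlab: max() of a sparse matrix returns a result that is itself sparse, and some MATLAB releases refuse sparse arguments to fprintf. A minimal sketch, independent of the amg_matlab internals:

    % Minimal reproduction of the failure mode hit at coarse_par line 37.
    S = sparse([2 0; 0 3]);       % any sparse matrix
    m = max(max(S));              % the max of a sparse matrix is itself sparse
    % fprintf(1,'%g\n', m);      % fails in some releases with the error above
    fprintf(1,'%g\n', full(m));   % converting to dense first prints 3 as expected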
From nek5000-users at lists.mcs.anl.gov Thu Jan 12 15:12:23 2012
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Thu, 12 Jan 2012 15:12:23 -0600
Subject: [Nek5000-users] Nek5000-users Digest, Vol 35, Issue 9
In-Reply-To: <20120112213435.25972q7ffsnuvb2z@www.mech.kth.se>
References: <20120112213435.25972q7ffsnuvb2z@www.mech.kth.se>

Hi Azad,

This error means that your version of matlab doesn't like the fprintf()
in coarse_par.m, line 37. Since this is just a print statement, you can
comment it out by adding % in front of line 37, so it looks like:

% fprintf(1,'%g\n',max(lanczos(sparse(D*S*D))));

Then this should work, at least for turbChannel.

good luck,
Katie

On Thu, Jan 12, 2012 at 2:34 PM, wrote:
>
> Hi Katie;
>
> Thanks for updating the page. I did exactly what is suggested on the
> updated page. After ./run of the script in amg_matlab I got this error:
>
> [...]
> connectivity = ??? Error using ==> fprintf
> Function is not defined for sparse inputs.
>
> Error in ==> coarse_par at 37
> fprintf(1,'%g\n',max(lanczos(sparse(D*S*D))));
> [...]
>
> Regards
> Azad
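If one would rather keep the connectivity diagnostic than silence it, an alternative along the same lines (a sketch, not from the thread) is to convert the sparse result to a dense scalar before printing:

    % Alternative to commenting out coarse_par.m line 37: wrap the sparse
    % result in full() so fprintf receives a dense scalar.
    fprintf(1,'%g\n', full(max(lanczos(sparse(D*S*D)))));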
Re: run-time hang up in gs_setup (nek5000-users at lists.mcs.anl.**gov >> ) >> >> >> ------------------------------**------------------------------** >> ---------- >> >> Message: 1 >> Date: Thu, 12 Jan 2012 14:09:12 -0600 >> From: nek5000-users at lists.mcs.anl.**gov >> Subject: Re: [Nek5000-users] run-time hang up in gs_setup >> To: nek5000-users at lists.mcs.anl.**gov >> Message-ID: >> > aATYkEDCifYsw at mail.gmail.com > >> Content-Type: text/plain; charset="iso-8859-1" >> >> Hi Azad, >> >> Earlier you had asked if a step was missing in the wiki with regards to >> the >> amg files/scripts. The has just been updated. >> >> Can you tell me exactly what you did to run the turbChannel test? And >> what >> output you had from the amg_matlab/run step? >> >> thanks, >> Katie >> >> On Thu, Jan 12, 2012 at 1:38 PM, > >> wrote: >> >> Hi Stefan; >>> >>> Unfortunately it does not work. first of all, ./run in amg_matlab does >>> not >>> produce anything (latest version). Using my version of the files (running >>> turbChannel), the simulation crashed giving me this error: >>> >>> ##############################****################# >>> AMG level 8: 3 iterations with rho = 0.680429 >>> AMG level 9: 2 iterations with rho = 0.560188 >>> AMG: 144540 rows >>> AMG: reading through row 1200, pass 1/121 >>> ERROR (proc 0000, /scratch/azad/codes/late/nek5_** >>> **svn/trunk/nek/jl/amg.c:875): >>> AMG: data >>> has more rows than given problem >>> >>> call exitt: dying ... >>> ##############################****################# >>> >>> Regards >>> Azad >>> >>> Quoting nek5000-users-request at lists.****mcs.anl.gov >>> >>> > >>> : >>> >>> Send Nek5000-users mailing list submissions to >>> >>>> nek5000-users at lists.mcs.anl.****gov>>> anl.gov > >>>> >>>> To subscribe or unsubscribe via the World Wide Web, visit >>>> https://lists.mcs.anl.gov/****mailman/listinfo/nek5000-users >>>> ** >>>> **> >>>> or, via email, send a message with subject or body 'help' to >>>> nek5000-users-request at lists.****mcs.anl.gov < >>>> nek5000-users-**request at lists.mcs.anl.gov >>>> > >>>> >>>> You can reach the person managing the list at >>>> nek5000-users-owner at lists.mcs.****anl.gov>>> lists.mcs.anl.gov > >>>> >>>> When replying, please edit your Subject line so it is more specific >>>> than "Re: Contents of Nek5000-users digest..." >>>> >>>> >>>> Today's Topics: >>>> >>>> 1. Re: run-time hang up in gs_setup (nek5000-users at lists.mcs.anl.*** >>>> *gov >>>> > >>>> ) >>>> >>>> >>>> ------------------------------****----------------------------**--** >>>> ---------- >>>> >>>> Message: 1 >>>> Date: Thu, 12 Jan 2012 18:09:48 +0100 >>>> >>>> From: nek5000-users at lists.mcs.anl.****gov >>> **gov > >>>> Subject: Re: [Nek5000-users] run-time hang up in gs_setup >>>> To: nek5000-users at lists.mcs.anl.****gov >>> gov > >>>> Message-ID: >>>> <**CAGTrLsaGQkuowxvfpK3AxvCtgCEk****as9YBJkX024qF2Sh7RcmdA at mail.* >>>> *** >>>> gmail.com>>> A at mail.gmail.com >>>> > >>>> > >>>> >>>> Content-Type: text/plain; charset=ISO-8859-1 >>>> >>>> Hi Azad, >>>> >>>> can you try to run the turbChannel example using AMG and the latest >>>> version of the repo. Let me know if this works for you. 
>>>> >>>> Stefan >>>> >>>> On 1/11/12, nek5000-users at lists.mcs.anl.****gov< >>>> nek5000-users at lists.mcs.**anl.gov > >>>> >>>> >> >>>> wrote: >>>> >>>> Dear Stefan and Aleks; >>>>> >>>>> Thanks for updating the wiki webpage regarding the AMG, although, I >>>>> persume there must be another step also there exist, namely: copy the >>>>> generated files from the amg_matlb to the running directory? (Or they >>>>> should be remained there and one puts the generated .dat files after >>>>> running the 3rd step?). By the way non of the versions I tried working >>>>> (even 707!) despite the fact that I had a range of matlab versions >>>>> tried. Hanging with the old version I had I compiled again and have >>>>> got the four files which was rather fast (with the message at the end: >>>>> Error contraction factor: 0.47...) I used them and every time during >>>>> the run-time it crashed simply: >>>>> ##############################****############################**##** >>>>> ############### >>>>> AMG: reading through row 144540, pass 121/121 >>>>> AMG: reading 0.071106 MB of W >>>>> AMG: reading 0.115601 MB of AfP >>>>> AMG: reading 0.132477 MB of Aff >>>>> AMG level 1 F-vars: 440159 >>>>> AMG level 2 F-vars: 55146 >>>>> AMG level 3 F-vars: 28480 >>>>> AMG level 4 F-vars: 17051 >>>>> AMG level 5 F-vars: 7524 >>>>> AMG level 6 F-vars: 5711 >>>>> AMG level 7 F-vars: 5763 >>>>> AMG level 8 F-vars: 28583 >>>>> AMG level 9 F-vars: 5380 >>>>> AMG level 10 F-vars: 5737 >>>>> Application 731033 exit codes: 139 >>>>> Application 731033 exit signals: Killed >>>>> Application 731033 resources: utime ~417s, stime ~3s >>>>> ##############################****############################**##** >>>>> ############## >>>>> >>>>> Can you help me with that cause I believe this case still doable with >>>>> correct AMG scheme. >>>>> >>>>> Many thanks >>>>> Azad >>>>> >>>>> >>>>> >>>>> Hi Azad, >>>>>> >>>>>> I believe old AMG files should work up to and including revision 707 >>>>>> in case >you want to check AMG quickly. >>>>>> >>>>>> Best. >>>>>> Aleks >>>>>> >>>>>> >>>>>> ----- Original Message ----- >>>>>> From: nek5000-users at lists.mcs.anl.gov >>>>>> To: nek5000-users at lists.mcs.anl.gov >>>>>> Sent: Tuesday, January 10, 2012 10:58:47 AM >>>>>> Subject: Re: [Nek5000-users] run-time hang up in gs_setup >>>>>> >>>>>> Hi Azad, >>>>>> >>>>>> your choice of lx1=8 is fine (it's our preferred sweet spot). If you >>>>>> have a large element count (say > 300'000) the factorization in the >>>>>> XXt setup phase may take hours. I guess that's why it looks like it's >>>>>> hanging. Again, there is a known bug which looks the same. So can't >>>>>> tell exactly what's causing your problem. >>>>>> >>>>>> I just updated the Wiki: https://nek5000.mcs.anl.gov/** >>>>>> index.php/Amg_matlab >>>>> index.php/Amg_matlab >>>>>> > >>>>>> >>>>>> Can you verify that it still fails. 
>>>>>> >>>>>> -Stefan >>>>>> >>>>>> Quoting nek5000-users-request at lists.****mcs.anl.gov >>>>>> >>>>>> > >>>>>> : >>>>>> >>>>>> Send Nek5000-users mailing list submissions to >>>>>> nek5000-users at lists.mcs.anl.****gov>>>>> anl.gov > >>>>>> >>>>>> To subscribe or unsubscribe via the World Wide Web, visit >>>>>> https://lists.mcs.anl.gov/****mailman/listinfo/nek5000-users >>>>>> ** >>>>>> **> >>>>>> or, via email, send a message with subject or body 'help' to >>>>>> nek5000-users-request at lists.****mcs.anl.gov >>>>>> >>>>>> > >>>>>> >>>>>> You can reach the person managing the list at >>>>>> nek5000-users-owner at lists.mcs.****anl.gov>>>>> *lists.mcs.anl.gov > >>>>>> >>>>>> When replying, please edit your Subject line so it is more specific >>>>>> than "Re: Contents of Nek5000-users digest..." >>>>>> >>>>>> >>>>>> Today's Topics: >>>>>> >>>>>> 1. Re: run-time hang up in gs_setup (nek5000-users at lists.mcs.anl.*** >>>>>> * >>>>>> gov >>>>>> >) >>>>>> 2. Re: run-time hang up in gs_setup (nek5000-users at lists.mcs.anl.*** >>>>>> * >>>>>> gov >>>>>> >) >>>>>> 3. Re: run-time hang up in gs_setup (nek5000-users at lists.mcs.anl.*** >>>>>> * >>>>>> gov >>>>>> >) >>>>>> >>>>>> >>>>>> ------------------------------****----------------------------**--** >>>>>> ---------- >>>>>> >>>>>> Message: 1 >>>>>> Date: Tue, 10 Jan 2012 06:01:29 -0600 (CST) >>>>>> From: nek5000-users at lists.mcs.anl.****gov>>>>> anl.gov > >>>>>> Subject: Re: [Nek5000-users] run-time hang up in gs_setup >>>>>> To: nek5000-users at lists.mcs.anl.****gov >>>>> **gov > >>>>>> Message-ID: >>>>> Pine.LNX.**4.64.1201100557250.6026 at v8.**mcs.anl.gov >>>>>> > >>>>>> > >>>>>> Content-Type: TEXT/PLAIN; charset=US-ASCII; format=flowed >>>>>> >>>>>> >>>>>> Hi Azad, >>>>>> >>>>>> You are in record-setting territory for element counts! :) >>>>>> >>>>>> Are you using the amg-based coarse-grid solver? >>>>>> It is certain that you will need to do this (and, >>>>>> therefore, you will need matlab to process the AMG >>>>>> operators). There is some discussion of the steps >>>>>> on the wiki page. We can walk you through this process >>>>>> if you have any questions. >>>>>> >>>>>> What value of lx1 are you using? >>>>>> >>>>>> I would recommend fewer elements and a higher value of lx1. >>>>>> I think it will be easier to manage the data, etc. >>>>>> >>>>>> Paul >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> On Tue, 10 Jan 2012, nek5000-users at lists.mcs.anl.****gov< >>>>>> nek5000-users at lists.mcs.**anl.gov >>>>>> >wrote: >>>>>> >>>>>> Dear NEKs; >>>>>> >>>>>>> >>>>>>> I am trying to run a simulation of a turbulent flow in a straight >>>>>>> pipe >>>>>>> in high Reynolds number (Re_tau = 1000). After generating the grid >>>>>>> with >>>>>>> PRENEK and extrude it using n2to3, the mesh ended up with 4,495,920 >>>>>>> elements. 
It compiled properly; however, trying to run it, hanged up >>>>>>> in >>>>>>> the last stage: >>>>>>> ##############################****############################**##** >>>>>>> ############ >>>>>>> >>>>>>> verify mesh topology >>>>>>> -1.000000000000000 1.000000000000000 Xrange >>>>>>> -1.000000000000000 1.000000000000000 Yrange >>>>>>> 0.000000000000000 25.00000000000000 Zrange >>>>>>> done :: verify mesh topology >>>>>>> >>>>>>> E-solver strategy: 1 itr >>>>>>> mg_nx: 1 3 >>>>>>> mg_ny: 1 3 >>>>>>> mg_nz: 1 3 >>>>>>> call usrsetvert >>>>>>> done :: usrsetvert >>>>>>> >>>>>>> gs_setup: 866937 unique labels shared >>>>>>> pairwise times (avg, min, max): 0.000241442 0.00019722 0.000265908 >>>>>>> crystal router : 0.000458177 0.000445795 0.000471807 >>>>>>> used all_to_all method: pairwise >>>>>>> setupds time 5.6048E-02 seconds 1 2 4565612 4495920 >>>>>>> setvert3d: 4 86046564 122013924 86046564 86046564 >>>>>>> call usrsetvert >>>>>>> done :: usrsetvert >>>>>>> >>>>>>> gs_setup: 8041169 unique labels shared >>>>>>> pairwise times (avg, min, max): 0.00050716 0.000427103 0.00056479 >>>>>>> crystal router : 0.0040165 0.00392921 0.00411811 >>>>>>> used all_to_all method: pairwise >>>>>>> setupds time 1.0465E+00 seconds 2 4 86046564 4495920 >>>>>>> setup h1 coarse grid, nx_crs= 2 >>>>>>> call usrsetvert >>>>>>> done :: usrsetvert >>>>>>> >>>>>>> gs_setup: 866937 unique labels shared >>>>>>> pairwise times (avg, min, max): 0.000233683 0.000197816 0.00024941 >>>>>>> crystal router : 0.000466869 0.00045588 0.000478101 >>>>>>> used all_to_all method: pairwise >>>>>>> ##############################****############################**##** >>>>>>> ############ >>>>>>> >>>>>>> >>>>>>> I was wondering if you could help me with that. I attached the run >>>>>>> logfile and also genmap.out. >>>>>>> >>>>>>> Many thanks >>>>>>> Azad >>>>>>> >>>>>>> >>>>>>> >>>>>> ------------------------------ >>>>>> >>>>>> Message: 2 >>>>>> Date: Tue, 10 Jan 2012 13:35:22 +0100 >>>>>> From: nek5000-users at lists.mcs.anl.****gov>>>>> anl.gov > >>>>>> Subject: Re: [Nek5000-users] run-time hang up in gs_setup >>>>>> To: nek5000-users at lists.mcs.anl.****gov >>>>> **gov > >>>>>> Message-ID: >>>>>> <**CAGTrLsaexkteQN1Y1NQ3FYz7Q2ab*** >>>>>> *b5YSLzOv+zeTwdvYXpD3Fw at mail.**** >>>>>> gmail.com>>>>> 2BzeTwdvYXpD3Fw at mail.gmail.com >>>>>> **> >>>>>> > >>>>>> Content-Type: text/plain; charset=ISO-8859-1 >>>>>> >>>>>> Hi Azad, >>>>>> >>>>>> We have seen similar situations. I think this has to do with a known >>>>>> bug. Unfortunately this bug is hard to reproduce and we haven't >>>>>> managed to fix it yet. >>>>>> >>>>>> -Stefan >>>>>> >>>>>> On 1/10/12, nek5000-users at lists.mcs.anl.****gov< >>>>>> nek5000-users at lists.mcs.**anl.gov > >>>>>> >>>>> gov >> >>>>>> wrote: >>>>>> >>>>>> >>>>>>> Hi Azad, >>>>>>> >>>>>>> You are in record-setting territory for element counts! :) >>>>>>> >>>>>>> Are you using the amg-based coarse-grid solver? >>>>>>> It is certain that you will need to do this (and, >>>>>>> therefore, you will need matlab to process the AMG >>>>>>> operators). There is some discussion of the steps >>>>>>> on the wiki page. We can walk you through this process >>>>>>> if you have any questions. >>>>>>> >>>>>>> What value of lx1 are you using? >>>>>>> >>>>>>> I would recommend fewer elements and a higher value of lx1. >>>>>>> I think it will be easier to manage the data, etc. 
>>>>>>> >>>>>>> Paul >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> On Tue, 10 Jan 2012, nek5000-users at lists.mcs.anl.****gov< >>>>>>> nek5000-users at lists.mcs.**anl.gov >>>>>>> >wrote: >>>>>>> >>>>>>> Dear NEKs; >>>>>>> >>>>>>>> >>>>>>>> I am trying to run a simulation of a turbulent flow in a straight >>>>>>>> pipe >>>>>>>> in high Reynolds number (Re_tau = 1000). After generating the grid >>>>>>>> with >>>>>>>> PRENEK and extrude it using n2to3, the mesh ended up with 4,495,920 >>>>>>>> elements. It compiled properly; however, trying to run it, hanged up >>>>>>>> in >>>>>>>> the last stage: >>>>>>>> ##############################****############################** >>>>>>>> ##** >>>>>>>> ############ >>>>>>>> >>>>>>>> verify mesh topology >>>>>>>> -1.000000000000000 1.000000000000000 Xrange >>>>>>>> -1.000000000000000 1.000000000000000 Yrange >>>>>>>> 0.000000000000000 25.00000000000000 Zrange >>>>>>>> done :: verify mesh topology >>>>>>>> >>>>>>>> E-solver strategy: 1 itr >>>>>>>> mg_nx: 1 3 >>>>>>>> mg_ny: 1 3 >>>>>>>> mg_nz: 1 3 >>>>>>>> call usrsetvert >>>>>>>> done :: usrsetvert >>>>>>>> >>>>>>>> gs_setup: 866937 unique labels shared >>>>>>>> pairwise times (avg, min, max): 0.000241442 0.00019722 0.000265908 >>>>>>>> crystal router : 0.000458177 0.000445795 0.000471807 >>>>>>>> used all_to_all method: pairwise >>>>>>>> setupds time 5.6048E-02 seconds 1 2 4565612 4495920 >>>>>>>> setvert3d: 4 86046564 122013924 86046564 86046564 >>>>>>>> call usrsetvert >>>>>>>> done :: usrsetvert >>>>>>>> >>>>>>>> gs_setup: 8041169 unique labels shared >>>>>>>> pairwise times (avg, min, max): 0.00050716 0.000427103 0.00056479 >>>>>>>> crystal router : 0.0040165 0.00392921 0.00411811 >>>>>>>> used all_to_all method: pairwise >>>>>>>> setupds time 1.0465E+00 seconds 2 4 86046564 4495920 >>>>>>>> setup h1 coarse grid, nx_crs= 2 >>>>>>>> call usrsetvert >>>>>>>> done :: usrsetvert >>>>>>>> >>>>>>>> gs_setup: 866937 unique labels shared >>>>>>>> pairwise times (avg, min, max): 0.000233683 0.000197816 0.00024941 >>>>>>>> crystal router : 0.000466869 0.00045588 0.000478101 >>>>>>>> used all_to_all method: pairwise >>>>>>>> ##############################****############################** >>>>>>>> ##** >>>>>>>> ############ >>>>>>>> >>>>>>>> >>>>>>>> I was wondering if you could help me with that. I attached the run >>>>>>>> logfile and also genmap.out. >>>>>>>> >>>>>>>> Many thanks >>>>>>>> Azad >>>>>>>> >>>>>>>> ______________________________****_________________ >>>>>>>> >>>>>>> Nek5000-users mailing list >>>>>>> Nek5000-users at lists.mcs.anl.****gov >>>>>> gov > >>>>>>> https://lists.mcs.anl.gov/****mailman/listinfo/nek5000-users >>>>>>> ** >>>>>>> **> >>>>>>> >>>>>>> >>>>>>> >>>>>> ------------------------------ >>>>>> >>>>>> Message: 3 >>>>>> Date: Tue, 10 Jan 2012 17:58:47 +0100 >>>>>> From: nek5000-users at lists.mcs.anl.****gov>>>>> anl.gov > >>>>>> Subject: Re: [Nek5000-users] run-time hang up in gs_setup >>>>>> To: nek5000-users at lists.mcs.anl.****gov >>>>> **gov > >>>>>> Message-ID: <1326214727.2600.282.camel@**d**amavand.mech.kth.se >>>>>> <1326214727**.2600.282.camel at damavand.mech.**kth.se<1326214727.2600.282.camel at damavand.mech.kth.se> >>>>>> > >>>>>> > >>>>>> Content-Type: text/plain; charset="UTF-8" >>>>>> >>>>>> Dear Paul and Stefan; >>>>>> >>>>>> Thanks very much for looking into it. I use polynomial order 7th >>>>>> (lx1=8). For the coarse-grid solver I actually used XXt. 
I also tried >>>>>> to >>>>>> use AMG, but unfortunately neither v619 nor the latest version could >>>>>> have compiled its matlab files and always gives me this error (in >>>>>> matlab/R2011a): >>>>>> ##############################****################ >>>>>> ... >>>>>> sparsification tolerance [1e-4]: stol = 0.0001 >>>>>> >>>>>> ------------------------------****----------------------------**--** >>>>>> ------------ >>>>>> Segmentation violation detected at Tue Jan 10 15:56:46 2012 >>>>>> ------------------------------****----------------------------**--** >>>>>> ------------ >>>>>> .... >>>>>> Abnormal termination: >>>>>> Segmentation violation >>>>>> .... >>>>>> ##############################****############### >>>>>> I have been in the web page: "amg_matlab Matlab based tool to generate >>>>>> AMG solver inputfiles" (http://nek5000.mcs.anl.gov/** >>>>>> index.php/Amg_matlab >>>>> index.php/Amg_matlab >>>>>> >) >>>>>> which gives me an empty link. >>>>>> >>>>>> I had an old version of the .dat files needed to run AMG, which I >>>>>> tried >>>>>> those as (amg_Aff.dat, amgdmp_i.dat, amg.dat, amg_AfP.dat, >>>>>> amgdmp_p.dat, >>>>>> amgdmp_j.dat, amg_W.dat) and I have got this error: >>>>>> >>>>>> ##############################****############## >>>>>> ... >>>>>> AMG: reading through row 142800, pass 119/121 >>>>>> AMG: reading through row 144000, pass 120/121 >>>>>> AMG: reading through row 144540, pass 121/121 >>>>>> ERROR (proc >>>>>> 0000, >>>>>> /afs/pdc.kth.se/home/a/****anoorani/codes/latest_nek/** >>>>>> nek5_svn/trunk/nek/jl/amg.c:****468>>>>> anoorani/codes/latest_nek/**nek5_svn/trunk/nek/jl/amg.c:**468 >>>>>> > >>>>>> ): >>>>>> AMG: missing data for some >>>>>> rows >>>>>> >>>>>> call exitt: dying ... >>>>>> ##############################****############## >>>>>> >>>>>> I think AMG could be a possibility to overcome this problem, though I >>>>>> could not manage to get a run with that one. I look into the problem >>>>>> with higher polynomial order to see if it reduces the number of >>>>>> elements >>>>>> dramatically, or at least resolve this issue. >>>>>> >>>>>> Best regards >>>>>> Azad >>>>>> >>>>>> %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%****%%%%%%%%%%%%%%%%%%%%%%%%% >>>>>> >>>>>> Hi Azad, >>>>>>> >>>>>>> We have seen similar situations. I think this has to do with a known >>>>>>> bug. Unfortunately this bug is hard to reproduce and we haven't >>>>>>> managed to fix it yet. >>>>>>> >>>>>>> -Stefan >>>>>>> >>>>>>> On 1/10/12, nek5000-users at lists.mcs.anl.gov >>>>>>> wrote: >>>>>>> >>>>>>> >>>>>>> Hi Azad, >>>>>>> >>>>>>> You are in record-setting territory for element counts! :) >>>>>>> >>>>>>> Are you using the amg-based coarse-grid solver? >>>>>>> It is certain that you will need to do this (and, >>>>>>> therefore, you will need matlab to process the AMG >>>>>>> operators). There is some discussion of the steps >>>>>>> on the wiki page. We can walk you through this process >>>>>>> if you have any questions. >>>>>>> >>>>>>> What value of lx1 are you using? >>>>>>> >>>>>>> I would recommend fewer elements and a higher value of lx1. >>>>>>> I think it will be easier to manage the data, etc. >>>>>>> >>>>>>> Paul >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> On Tue, 10 Jan 2012, nek5000-users at lists.mcs.anl.gov wrote: >>>>>>> >>>>>>> Dear NEKs; >>>>>>> >>>>>>> I am trying to run a simulation of a turbulent flow in a straight >>>>>>> pipe >>>>>>> in high Reynolds number (Re_tau = 1000). 
From nek5000-users at lists.mcs.anl.gov Thu Jan 12 15:40:57 2012
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Thu, 12 Jan 2012 22:40:57 +0100
Subject: [Nek5000-users] Nek5000-users Digest, Vol 35, Issue 10
Message-ID: <20120112224057.12451h955usfkx2h@www.mech.kth.se>

Hi Katie;

The AMG scheme is working properly now. Thanks very much for your help.

Best regards
Azad

Quoting nek5000-users-request at lists.mcs.anl.gov:

> Message: 1
> Date: Thu, 12 Jan 2012 21:34:35 +0100
> From: nek5000-users at lists.mcs.anl.gov
> Subject: Re: [Nek5000-users] Nek5000-users Digest, Vol 35, Issue 9
> To: nek5000-users at lists.mcs.anl.gov
> Message-ID: <20120112213435.25972q7ffsnuvb2z@www.mech.kth.se>
> Content-Type: text/plain; charset=ISO-8859-1; DelSp="Yes"; format="flowed"
>
> Hi Katie;
>
> Thanks for updating the page. I did exactly what is suggested on the
> updated page. After ./run of the script in amg_matlab I got this error:
>
> #########################################################
> Computing Lagrange multiplier governing matrix skeleton ... nnz = 25.
> Computing Lagrange multiplier governing matrix (C code) ... done.
> CG to obtain Lagrange multipliers ... 1 iterations.
> Computing interpolation weights (C code) ... done.
> Sparsifying R_f A P: compression = 0
> simple_sparsify: nnzs 4/4 (1)
> Level 6, dim(A) = 2
> Computing S = I - D^{-1/2} A D^{-1/2} ... done, nnz(S) = 2.
> Running coarsening heuristic ...
> ratio = 0, n = 0, max Gershgorin radius = 1
> ratio = 0.5, n = 1, max Gershgorin radius = 0
> connectivity = ???
> Error using ==> fprintf
> Function is not defined for sparse inputs.
>
> Error in ==> coarse_par at 37
> fprintf(1,'%g\n',max(lanczos(sparse(D*S*D))));
>
> Error in ==> amg_setup at 30
> [C F] = coarse_par(A,tolc);
>
> Error in ==> go at 20
> data = amg_setup(A, full(0*A(:,1)+1),ctol, tol, stol, wtol);
> ############################################################
>
> I tried it for the turbChannel test case as well as some other
> simplified test cases. I got the same errors.
>
> Regards
> Azad
>
> Quoting nek5000-users-request at lists.mcs.anl.gov:
>
>> Message: 1
>> Date: Thu, 12 Jan 2012 14:09:12 -0600
>> From: nek5000-users at lists.mcs.anl.gov
>> Subject: Re: [Nek5000-users] run-time hang up in gs_setup
>> To: nek5000-users at lists.mcs.anl.gov
>> Content-Type: text/plain; charset="iso-8859-1"
>>
>> Hi Azad,
>>
>> Earlier you had asked if a step was missing in the wiki with regard to
>> the amg files/scripts. The page has just been updated.
>>
>> Can you tell me exactly what you did to run the turbChannel test? And
>> what output you had from the amg_matlab/run step?
>>
>> thanks,
>> Katie
>>
>> On Thu, Jan 12, 2012 at 1:38 PM, wrote:
>>
>>> Hi Stefan;
>>>
>>> Unfortunately it does not work. First of all, ./run in amg_matlab does
>>> not produce anything (latest version). Using my version of the files
>>> (running turbChannel), the simulation crashed, giving me this error:
>>>
>>> ###############################################
>>> AMG level 8: 3 iterations with rho = 0.680429
>>> AMG level 9: 2 iterations with rho = 0.560188
>>> AMG: 144540 rows
>>> AMG: reading through row 1200, pass 1/121
>>> ERROR (proc 0000,
>>> /scratch/azad/codes/late/nek5_svn/trunk/nek/jl/amg.c:875):
>>> AMG: data has more rows than given problem
>>>
>>> call exitt: dying ...
>>> ###############################################
>>>
>>> Regards
>>> Azad
>>>
>>> Quoting nek5000-users-request at lists.mcs.anl.gov:
>>>
>>>> Message: 1
>>>> Date: Thu, 12 Jan 2012 18:09:48 +0100
>>>> From: nek5000-users at lists.mcs.anl.gov
>>>> Subject: Re: [Nek5000-users] run-time hang up in gs_setup
>>>> To: nek5000-users at lists.mcs.anl.gov
>>>> Content-Type: text/plain; charset=ISO-8859-1
>>>>
>>>> Hi Azad,
>>>>
>>>> can you try to run the turbChannel example using AMG and the latest
>>>> version of the repo. Let me know if this works for you.
>>>>
>>>> Stefan
>>>>
>>>> On 1/11/12, nek5000-users at lists.mcs.anl.gov wrote:
>>>>
>>>>> Dear Stefan and Aleks;
>>>>>
>>>>> Thanks for updating the wiki webpage regarding the AMG, although I
>>>>> presume there must be another step as well, namely: copy the
>>>>> generated files from amg_matlab to the run directory? (Or should
>>>>> they remain there, and one puts the generated .dat files in place
>>>>> after running the 3rd step?) By the way, none of the versions I
>>>>> tried worked (even 707!), despite the fact that I tried a range of
>>>>> matlab versions. Sticking with the old version I had, I compiled
>>>>> again and got the four files, which was rather fast (with the
>>>>> message at the end: Error contraction factor: 0.47...). I used
>>>>> them, and every time it simply crashed at run time:
>>>>> ########################################################################
>>>>> AMG: reading through row 144540, pass 121/121
>>>>> AMG: reading 0.071106 MB of W
>>>>> AMG: reading 0.115601 MB of AfP
>>>>> AMG: reading 0.132477 MB of Aff
>>>>> AMG level 1 F-vars: 440159
>>>>> AMG level 2 F-vars: 55146
>>>>> AMG level 3 F-vars: 28480
>>>>> AMG level 4 F-vars: 17051
>>>>> AMG level 5 F-vars: 7524
>>>>> AMG level 6 F-vars: 5711
>>>>> AMG level 7 F-vars: 5763
>>>>> AMG level 8 F-vars: 28583
>>>>> AMG level 9 F-vars: 5380
>>>>> AMG level 10 F-vars: 5737
>>>>> Application 731033 exit codes: 139
>>>>> Application 731033 exit signals: Killed
>>>>> Application 731033 resources: utime ~417s, stime ~3s
>>>>> ########################################################################
>>>>>
>>>>> Can you help me with that, because I believe this case is still
>>>>> doable with a correct AMG scheme.
>>>>>
>>>>> Many thanks
>>>>> Azad
>>>>>
>>>>>> Hi Azad,
>>>>>>
>>>>>> I believe old AMG files should work up to and including revision
>>>>>> 707, in case you want to check AMG quickly.
>>>>>>
>>>>>> Best.
>>>>>> Aleks
>>>>>>
>>>>>> ----- Original Message -----
>>>>>> From: nek5000-users at lists.mcs.anl.gov
>>>>>> To: nek5000-users at lists.mcs.anl.gov
>>>>>> Sent: Tuesday, January 10, 2012 10:58:47 AM
>>>>>> Subject: Re: [Nek5000-users] run-time hang up in gs_setup
>>>>>>
>>>>>> Hi Azad,
>>>>>>
>>>>>> your choice of lx1=8 is fine (it's our preferred sweet spot). If
>>>>>> you have a large element count (say > 300'000) the factorization in
>>>>>> the XXt setup phase may take hours. I guess that's why it looks
>>>>>> like it's hanging. Again, there is a known bug which looks the
>>>>>> same, so I can't tell exactly what's causing your problem.
>>>>>>
>>>>>> I just updated the Wiki:
>>>>>> https://nek5000.mcs.anl.gov/index.php/Amg_matlab
>>>>>>
>>>>>> Can you verify that it still fails?
>>>>>>
>>>>>> -Stefan
>
> ------------------------------
>
> Message: 2
> Date: Thu, 12 Jan 2012 15:12:23 -0600
> From: nek5000-users at lists.mcs.anl.gov
> Subject: Re: [Nek5000-users] Nek5000-users Digest, Vol 35, Issue 9
> To: nek5000-users at lists.mcs.anl.gov
> Content-Type: text/plain; charset="iso-8859-1"
>
> Hi Azad,
>
> This error means that your version of matlab doesn't like the fprintf()
> in coarse_par.m, line 37.
>
> Since this is just a print statement, you can comment it out by adding
> % in front of line 37, so that it looks like:
>
> % fprintf(1,'%g\n',max(lanczos(sparse(D*S*D))));
>
> Then this should work, at least for turbChannel.
>
> good luck,
> Katie
>
> End of Nek5000-users Digest, Vol 35, Issue 10
> *********************************************

From nek5000-users at lists.mcs.anl.gov Fri Jan 13 14:54:46 2012
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Fri, 13 Jan 2012 13:54:46 -0700
Subject: [Nek5000-users] userf for temperature

Hello All.

I'd like to add a user-defined forcing for my temperature equation. It's
clear how to do this for the momentum equation via ffx,ffy,ffz in
subroutine userf, but what about for temperature?

Thanks.
--Mike

From nek5000-users at lists.mcs.anl.gov Fri Jan 13 14:57:24 2012
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Fri, 13 Jan 2012 14:57:24 -0600
Subject: [Nek5000-users] userf for temperature

Hi Mike,

I believe it is set using "qvol" in userq.

Josh

On Fri, Jan 13, 2012 at 2:54 PM, wrote:
> I'd like to add a user-defined forcing for my temperature equation.
> It's clear how to do this for the momentum equation via ffx,ffy,ffz in
> subroutine userf, but what about for temperature?

-- 
Josh Camp

"All that is necessary for the triumph of evil is that good men do
nothing" -- Edmund Burke

From nek5000-users at lists.mcs.anl.gov Fri Jan 13 14:58:10 2012
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Fri, 13 Jan 2012 14:58:10 -0600 (CST)
Subject: [Nek5000-users] userf for temperature

Hi Mike,

userq() provides the equivalent functionality for T that userf does for
the fluid. You simply state:

     qvol = 1.   (say)

for uniform volumetric heating.

Paul
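(A minimal userq along these lines might look as sketched below. This
is only a sketch, following the standard .usr-file template, in which
the NEKUSE include supplies qvol, the point indices ix,iy,iz, and the
coordinates x, y, z at the current quadrature point.)

c-----------------------------------------------------------------------
      subroutine userq (ix,iy,iz,ieg)
      include 'SIZE'
      include 'TOTAL'
      include 'NEKUSE'

c     Volumetric source for the current scalar at point (ix,iy,iz) of
c     global element ieg.  A constant gives uniform heating; spatially
c     varying sources can be built from the NEKUSE coordinates x, y, z.
      qvol = 1.0

      return
      end
c-----------------------------------------------------------------------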
> --Mike

--
Josh Camp

"All that is necessary for the triumph of evil is that good men do nothing" -- Edmund Burke

From nek5000-users at lists.mcs.anl.gov Fri Jan 13 14:58:10 2012
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Fri, 13 Jan 2012 14:58:10 -0600 (CST)
Subject: [Nek5000-users] userf for temperature
In-Reply-To:
References:
Message-ID:

Hi Mike,

userq() provides for T the equivalent functionality that userf provides
for the fluid. You simply state:

   qvol = 1.   (say)

for uniform volumetric heating.

Paul

On Fri, 13 Jan 2012, nek5000-users at lists.mcs.anl.gov wrote:

> Hello All.
>
> I'd like to add a user-defined forcing for my temperature equation. It's clear how to do this for the momentum equation via ffx,ffy,ffz in subroutine userf, but what about for temperature?
>
> Thanks.
> --Mike
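A minimal sketch of the routine Josh and Paul refer to, assuming the standard template .usr interface in which NEKUSE supplies qvol (and the local point data); names follow the template .usr files and are illustrative rather than taken from this thread:

c-----------------------------------------------------------------------
      subroutine userq (ix,iy,iz,ieg)
      include 'SIZE'
      include 'TOTAL'
      include 'NEKUSE'

c     Volumetric source for the temperature (or passive-scalar)
c     equation -- the analogue of ffx/ffy/ffz in userf.
      qvol = 1.0    ! uniform volumetric heating, per Paul's example

      return
      end
c-----------------------------------------------------------------------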
From nek5000-users at lists.mcs.anl.gov Mon Jan 16 03:46:12 2012
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Mon, 16 Jan 2012 10:46:12 +0100
Subject: [Nek5000-users] Restart problem
In-Reply-To:
References: <1323181417.5193.13.camel@skagsnebb.mech.kth.se> <1325753197.8552.9.camel@skagsnebb.mech.kth.se>
Message-ID: <1326707172.2534.52.camel@skagsnebb.mech.kth.se>

Hi

I tested the new restart both with and without projection and it works.
What is the problem with passive scalars? I use them, but they are not
crucial at this point. However, I thought to use them as storage for
eigenvalues for my Arnoldi implementation.

Best regards

Adam

On Thu, 2012-01-05 at 03:13 -0600, nek5000-users at lists.mcs.anl.gov wrote:
> Do you have any passive scalars?
>
> There are some issues w.r.t. passive scalar i/o that we're working
> to resolve.
>
> Paul
[remainder of quoted thread and full-restart README trimmed]

From nek5000-users at lists.mcs.anl.gov Mon Jan 16 07:54:10 2012
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Mon, 16 Jan 2012 07:54:10 -0600 (CST)
Subject: [Nek5000-users] Restart problem
In-Reply-To: <1326707172.2534.52.camel@skagsnebb.mech.kth.se>
References: <1323181417.5193.13.camel@skagsnebb.mech.kth.se> <1325753197.8552.9.camel@skagsnebb.mech.kth.se> <1326707172.2534.52.camel@skagsnebb.mech.kth.se>
Message-ID:

Hi Adam,

How many vectors do you intend to store in the restart file? I think we can readily accommodate this.

Paul

On Mon, 16 Jan 2012, nek5000-users at lists.mcs.anl.gov wrote:

> Hi
>
> I tested the new restart both with and without projection and it works.
> What is the problem with passive scalars? I use them, but they are not
> crucial at this point. However, I thought to use them as storage for
> eigenvalues for my Arnoldi implementation.
>
> Best regards
>
> Adam
[remainder of quoted thread trimmed]
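As an illustration of the use Adam describes -- parking extra vectors in passive-scalar slots so they travel with the checkpoint files -- a hedged sketch, assuming the standard t(lx1,ly1,lz1,lelt,ldimt) layout (slot 1 = temperature, slots 2..ldimt = passive scalars) and Nek's copy(a,b,n) utility; stash_vector is a hypothetical helper, not a routine from this thread:

c-----------------------------------------------------------------------
      subroutine stash_vector (v,ips)
      include 'SIZE'
      include 'TOTAL'
      real v(lx1*ly1*lz1*lelt)
      integer ips

c     Copy work vector v into passive-scalar slot ips (2 <= ips <= ldimt)
c     so that it is written out along with the checkpoint fields.
c     (Hypothetical helper; assumes ldimt in SIZE is large enough.)
      n = nx1*ny1*nz1*nelt
      call copy(t(1,1,1,1,ips),v,n)    ! copy(a,b,n): a <- b

      return
      end
c-----------------------------------------------------------------------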
From nek5000-users at lists.mcs.anl.gov Thu Jan 19 07:38:13 2012
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Thu, 19 Jan 2012 13:38:13 +0000
Subject: [Nek5000-users] Add a forcing term on scalar equation
Message-ID:

Hi

I would like to add a forcing term to the scalar equation. According to the Nekton manual, Chapter 5, the forcing term is treated implicitly and the convective term is integrated using the third-order Adams-Bashforth scheme (AB3). For my case, the forcing term is a nonlinear function of the scalar field, so I cannot use the implicit scheme and I should lump it in with the AB3 terms. Therefore, could you please point me to where in the code the AB3 treatment of the convective term for the passive scalar is implemented?

Cheers
Iman

From nek5000-users at lists.mcs.anl.gov Thu Jan 19 09:20:43 2012
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Thu, 19 Jan 2012 09:20:43 -0600 (CST)
Subject: [Nek5000-users] Add a forcing term on scalar equation
In-Reply-To:
References:
Message-ID:

It's actually evaluated explicitly, so you should be OK.

Paul

On Thu, 19 Jan 2012, nek5000-users at lists.mcs.anl.gov wrote:

> Hi
>
> I would like to add a forcing term to the scalar equation.
> According to the Nekton manual, Chapter 5, the forcing term is treated
> implicitly and the convective term is integrated using the third-order
> Adams-Bashforth scheme (AB3).
> For my case, the forcing term is a nonlinear function of the scalar field,
> so I cannot use the implicit scheme and I should lump it in with the AB3 terms.
> Therefore, could you please point me to where in the code the AB3 treatment
> of the convective term for the passive scalar is implemented?
>
> Cheers
> Iman
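Since the source term is evaluated explicitly, a scalar-dependent qvol can go straight into userq. A hedged sketch, again assuming the template .usr interface in which NEKUSE provides the local scalar value as temp; the reaction term itself is purely illustrative:

c-----------------------------------------------------------------------
      subroutine userq (ix,iy,iz,ieg)
      include 'SIZE'
      include 'TOTAL'
      include 'NEKUSE'

c     Nonlinear source for the scalar equation; because the term is
c     treated explicitly, it may depend on the current scalar value.
      qvol = temp*(1.0 - temp)    ! illustrative logistic-type reaction

      return
      end
c-----------------------------------------------------------------------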
From nek5000-users at lists.mcs.anl.gov Thu Jan 19 09:36:49 2012
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Thu, 19 Jan 2012 16:36:49 +0100
Subject: [Nek5000-users] Failure with TORDER = 3 (P027)
In-Reply-To:
References: <20111207145358.57299dpe5pezsqgm@www.mech.kth.se> <20111207163805.11226tijj6k67nfh@www.mech.kth.se> <20111209150722.80955p873qx25fyy@www.mech.kth.se>
Message-ID: <20120119163649.20565e8hnalboa6p@www.mech.kth.se>

Dear Neks,

I am using revision 758 of the code with the new full-restart option, and my pipe flow simulation now runs fine with TORDER = 3 and with the projection parameters, i.e. P094 and P095, set to non-zero values.

Regards
George

Quoting nek5000-users at lists.mcs.anl.gov:

> George,
>
> Can you do a run using a single 4-byte .fXXXXX file and just one
> IO-node? Also, turn off the characteristics scheme (IFCHAR). Then try
> to do a restart again.
>
> Cheers,
> Stefan
>
> On 12/9/11, nek5000-users at lists.mcs.anl.gov wrote:
>>
>> Hi Stefan,
>>
>> Here is a part of a file I obtained from a run that failed with TORDER = 3.
>>
>> Regards
>> George
>>
>> -------------------------------------------------------------------------------
>> 118 Parameters from file:/ >> 1 1.00000 P001: DENSITY >> 2 -9500. P002: VISCOS >> 7 1.00000 P007: RHOCP >> 8 1.00000 P008: CONDUCT >> 11 500.0 P011: NSTEPS >> 12 -5.000E-04 P012: DT >> 15 500.00 P015: IOSTEP >> 17 1.00000 P017: >> 18 0.500000E-01 P018: GRID < 0 --> # cells on screen >> 19 -1.00000 P019: INTYPE >> 20 10.0000 P020: NORDER >> 21 0.100000E-05 P021: DIVERGENCE >> 22 9.920000E-08 P022: HELMHOLTZ >> 24 0.100000E-01 P024: TOLREL >> 25 0.100000E-01 P025: TOLABS >> 26 1.00000 P026: COURANT/NTAU >> 27 3.00000 P027: TORDER >> 28 0.00000 P028: TORDER: mesh velocity (0: p28=p27) >> 54 -3.00000 P054: fixed flow rate dir: |p54|=1,2,3=x,y,z >> 55 1.00000 P055: vol.flow rate (p54>0) or Ubar (p54<0) >> 63 8.00000 P063: =8 --> force 8-byte output >> 65 6.00000 P065: #iofiles (eg, 0 or 64); <0 --> sep.
dirs >> 66 6.00000 P066: output : <0=ascii, else binary >> 67 6.00000 P067: restart: <0=ascii, else binary >> 68 500.00 P068: iastep: freq for avg_all (0=iostep) >> 69 50000.0 P069: : : frequency of srf dump >> 93 20.0000 P093: Number of previous pressure solns saved >> 99 3.00000 P099: dealiasing: <0--> off/3--> old/4--> new >> 102 1.00000 P102: Dump out divergence at each time step >> 103 0.05000 P103: weight of stabilizing filter (.01) >> >> IFTRAN = T >> IFFLOW = T >> IFHEAT = F >> IFSPLIT = F >> IFLOMACH = F >> IFUSERVP = F >> IFUSERMV = F >> IFSTRS = F >> IFCHAR = T >> IFCYCLIC = F >> IFAXIS = F >> IFMVBD = F >> IFMELT = F >> IFMODEL = F >> IFKEPS = F >> IFMOAB = F >> IFNEKNEK = F >> IFSYNC = T >> >> IFVCOR = T >> IFINTQ = F >> IFCWUZ = F >> IFSWALL = F >> IFGEOM = F >> IFSURT = F >> IFWCNO = F >> >> IFTMSH for field 1 = F >> IFADVC for field 1 = T >> IFNONL for field 1 = F >> >> Dealiasing enabled, lxd= 12 >> >> Estimated eigenvalues >> EIGAA = 1.650197855862139 >> EIGGA = 71694413.86227663 >> EIGAE = 1.5791367041742943E-002 >> EIGAS = 7.9744816586921753E-004 >> EIGGE = 71694413.86227663 >> EIGGS = 2.000000000000000 >> >> verify mesh topology >> -1.000000000000000 1.000000000000000 Xrange >> -1.000000000000000 1.000000000000000 Yrange >> 0.000000000000000 25.00000000000002 Zrange >> done :: verify mesh topology >> >> E-solver strategy: 1 itr >> mg_nx: 1 5 7 >> mg_ny: 1 5 7 >> mg_nz: 1 5 7 >> call usrsetvert >> done :: usrsetvert >> >> gs_setup: 277536 unique labels shared >> pairwise times (avg, min, max): 0.000236133 0.000198293 0.000261211 >> crystal router : 0.000244021 0.000238085 0.00025022 >> used all_to_all method: crystal router >> setupds time 2.1331E-02 seconds 1 2 875808 853632 >> setvert3d: 4 16416864 23245920 16416864 16416864 >> call usrsetvert >> done :: usrsetvert >> >> gs_setup: 2635744 unique labels shared >> pairwise times (avg, min, max): 0.0004331 0.000362206 0.000494504 >> crystal router : 0.001126 0.0011049 0.00114682 >> used all_to_all method: pairwise >> setupds time 1.9091E-01 seconds 2 4 16416864 853632 >> setvert3d: 6 52620192 107252640 52620192 52620192 >> call usrsetvert >> done :: usrsetvert >> >> gs_setup: 7399328 unique labels shared >> pairwise times (avg, min, max): 0.000524018 0.000427318 0.000591493 >> crystal router : 0.00345075 0.0033884 0.0035347 >> used all_to_all method: pairwise >> setupds time 5.6218E-01 seconds 3 6 52620192 853632 >> setvert3d: 8 109485792 293870304 109485792 109485792 >> call usrsetvert >> done :: usrsetvert >> >> gs_setup: 14568288 unique labels shared >> pairwise times (avg, min, max): 0.00098448 0.000790119 0.00117922 >> crystal router : 0.00694697 0.00683801 0.00708301 >> used all_to_all method: pairwise >> setupds time 1.4705E+00 seconds 4 8 109485792 853632 >> setup h1 coarse grid, nx_crs= 2 >> call usrsetvert >> done :: usrsetvert >> >> gs_setup: 277536 unique labels shared >> pairwise times (avg, min, max): 0.000271898 0.000193095 0.000345087 >> crystal router : 0.000370127 0.000366497 0.000374007 >> used all_to_all method: pairwise >> done :: setup h1 coarse grid 562.8824191093445 sec >> >> call usrdat3 >> done :: usrdat3 >> >> set initial conditions >> Checking restart options: pipe?.f00001 >> Reading checkpoint data >> 0 0 OPEN: pipe0.f00001 >> byte swap: F 6.543210 -2.9312772E+35 >> 850 0 OPEN: pipe5.f00001 >> 510 0 OPEN: pipe3.f00001 >> 170 0 OPEN: pipe1.f00001 >> 1020 0 OPEN: pipe6.f00001 >> 680 0 OPEN: pipe4.f00001 >> 340 0 OPEN: pipe2.f00001 >> >> 0 1.6225E+02 done :: Read checkpoint data >> avg 
data-throughput = -65.6MBps >> io-nodes = 6 >> >> xyz min -1.0000 -1.0000 0.0000 >> uvwpt min -0.43349 -0.45564 -0.77820E-01 0.69058E+08 0.0000 >> xyz max 1.0000 1.0000 25.000 >> uvwpt max 0.44557 0.38210 1.4216 0.69058E+08 0.0000 >> Restart: recompute geom. factors. >> regenerate geomerty data 1 >> vol_t,vol_v: 78.53976641971477 78.53976641971477 >> done :: regenerate geomerty data 1 >> >> done :: set initial conditions >> >> call userchk >> done :: userchk >> >> gridpoints unique/tot: 293870304 437059584 >> dofs: 291725280 184384512 >> >> Initial time: 0.1622500E+03 >> Initialization successfully completed 616.10 sec >> >> Starting time loop ... >> >> DT/DTCFL/DTFS/DTINIT 0.500E-03 0.494-323 0.299-316 0.500E-03 >> Step 1, t= 1.6225050E+02, DT= 5.0000000E-04, C= 0.251 0.0000E+00 >> 0.0000E+00 >> Solving for fluid >> 9.9200000000000002E-008 p22 1 1 >> 1 1 Helmholtz VELX F: 1.0654E+00 9.9200E-08 1.0526E-04 >> 2.0000E+03 >> 1 2 Helmholtz VELX F: 1.5163E-02 9.9200E-08 1.0526E-04 >> 2.0000E+03 >> 1 3 Helmholtz VELX F: 1.6029E-03 9.9200E-08 1.0526E-04 >> 2.0000E+03 >> 1 4 Helmholtz VELX F: 3.9700E-04 9.9200E-08 1.0526E-04 >> 2.0000E+03 >> 1 5 Helmholtz VELX F: 1.4559E-04 9.9200E-08 1.0526E-04 >> 2.0000E+03 >> 1 6 Helmholtz VELX F: 4.8307E-05 9.9200E-08 1.0526E-04 >> 2.0000E+03 >> 1 7 Helmholtz VELX F: 1.5822E-05 9.9200E-08 1.0526E-04 >> 2.0000E+03 >> 1 8 Helmholtz VELX F: 4.7557E-06 9.9200E-08 1.0526E-04 >> 2.0000E+03 >> 1 9 Helmholtz VELX F: 1.4659E-06 9.9200E-08 1.0526E-04 >> 2.0000E+03 >> 1 10 Helmholtz VELX F: 5.6372E-07 9.9200E-08 1.0526E-04 >> 2.0000E+03 >> 1 11 Helmholtz VELX F: 1.6238E-07 9.9200E-08 1.0526E-04 >> 2.0000E+03 >> 1 12 Helmholtz VELX F: 4.9454E-08 9.9200E-08 1.0526E-04 >> 2.0000E+03 >> 1 Hmholtz VELX: 11 4.9454E-08 1.0654E+00 9.9200E-08 >> 9.9200000000000002E-008 p22 1 1 >> 1 1 Helmholtz VELY F: 1.0592E+00 9.9200E-08 1.0526E-04 >> 2.0000E+03 >> 1 2 Helmholtz VELY F: 1.5106E-02 9.9200E-08 1.0526E-04 >> 2.0000E+03 >> 1 3 Helmholtz VELY F: 1.6168E-03 9.9200E-08 1.0526E-04 >> 2.0000E+03 >> 1 4 Helmholtz VELY F: 3.9446E-04 9.9200E-08 1.0526E-04 >> 2.0000E+03 >> 1 5 Helmholtz VELY F: 1.4562E-04 9.9200E-08 1.0526E-04 >> 2.0000E+03 >> 1 6 Helmholtz VELY F: 4.9132E-05 9.9200E-08 1.0526E-04 >> 2.0000E+03 >> 1 7 Helmholtz VELY F: 1.5898E-05 9.9200E-08 1.0526E-04 >> 2.0000E+03 >> 1 8 Helmholtz VELY F: 4.7011E-06 9.9200E-08 1.0526E-04 >> 2.0000E+03 >> 1 9 Helmholtz VELY F: 1.4592E-06 9.9200E-08 1.0526E-04 >> 2.0000E+03 >> 1 10 Helmholtz VELY F: 5.6658E-07 9.9200E-08 1.0526E-04 >> 2.0000E+03 >> 1 11 Helmholtz VELY F: 1.6209E-07 9.9200E-08 1.0526E-04 >> 2.0000E+03 >> 1 12 Helmholtz VELY F: 4.8705E-08 9.9200E-08 1.0526E-04 >> 2.0000E+03 >> 1 Hmholtz VELY: 11 4.8705E-08 1.0592E+00 9.9200E-08 >> 9.9200000000000002E-008 p22 1 1 >> 1 1 Helmholtz VELZ F: 9.0867E-01 9.9200E-08 1.0526E-04 >> 2.0000E+03 >> 1 2 Helmholtz VELZ F: 1.5203E-02 9.9200E-08 1.0526E-04 >> 2.0000E+03 >> 1 3 Helmholtz VELZ F: 2.3594E-03 9.9200E-08 1.0526E-04 >> 2.0000E+03 >> 1 4 Helmholtz VELZ F: 5.4341E-04 9.9200E-08 1.0526E-04 >> 2.0000E+03 >> 1 5 Helmholtz VELZ F: 1.9420E-04 9.9200E-08 1.0526E-04 >> 2.0000E+03 >> 1 6 Helmholtz VELZ F: 6.9938E-05 9.9200E-08 1.0526E-04 >> 2.0000E+03 >> 1 7 Helmholtz VELZ F: 2.1336E-05 9.9200E-08 1.0526E-04 >> 2.0000E+03 >> 1 8 Helmholtz VELZ F: 6.4972E-06 9.9200E-08 1.0526E-04 >> 2.0000E+03 >> 1 9 Helmholtz VELZ F: 2.1068E-06 9.9200E-08 1.0526E-04 >> 2.0000E+03 >> 1 10 Helmholtz VELZ F: 7.2366E-07 9.9200E-08 1.0526E-04 >> 2.0000E+03 >> 1 11 Helmholtz VELZ F: 2.2873E-07 9.9200E-08 1.0526E-04 
>> 2.0000E+03 >> 1 12 Helmholtz VELZ F: 6.6523E-08 9.9200E-08 1.0526E-04 >> 2.0000E+03 >> 1 Hmholtz VELZ: 11 6.6523E-08 9.0867E-01 9.9200E-08 >> 1 1.00000E-06 8.15804E-04 1.72021E-03 4.74246E-01 1 Divergence >> 2 1.00000E-06 4.25351E-04 1.72021E-03 2.47267E-01 1 Divergence >> 3 1.00000E-06 2.20250E-04 1.72021E-03 1.28037E-01 1 Divergence >> 4 1.00000E-06 1.10200E-04 1.72021E-03 6.40617E-02 1 Divergence >> 5 1.00000E-06 6.66356E-05 1.72021E-03 3.87369E-02 1 Divergence >> 6 1.00000E-06 4.55137E-05 1.72021E-03 2.64582E-02 1 Divergence >> 7 1.00000E-06 3.45979E-05 1.72021E-03 2.01126E-02 1 Divergence >> 8 1.00000E-06 2.74987E-05 1.72021E-03 1.59856E-02 1 Divergence >> 9 1.00000E-06 2.25703E-05 1.72021E-03 1.31207E-02 1 Divergence >> 10 1.00000E-06 1.84355E-05 1.72021E-03 1.07170E-02 1 Divergence >> 11 1.00000E-06 1.51102E-05 1.72021E-03 8.78394E-03 1 Divergence >> 12 1.00000E-06 1.23753E-05 1.72021E-03 7.19407E-03 1 Divergence >> 13 1.00000E-06 9.99015E-06 1.72021E-03 5.80751E-03 1 Divergence >> 14 1.00000E-06 7.91532E-06 1.72021E-03 4.60136E-03 1 Divergence >> 15 1.00000E-06 6.25368E-06 1.72021E-03 3.63541E-03 1 Divergence >> 16 1.00000E-06 4.91692E-06 1.72021E-03 2.85832E-03 1 Divergence >> 17 1.00000E-06 3.87115E-06 1.72021E-03 2.25039E-03 1 Divergence >> 18 1.00000E-06 3.04686E-06 1.72021E-03 1.77121E-03 1 Divergence >> 19 1.00000E-06 2.41971E-06 1.72021E-03 1.40663E-03 1 Divergence >> 20 1.00000E-06 1.93080E-06 1.72021E-03 1.12242E-03 1 Divergence >> 21 1.00000E-06 1.69768E-06 1.72021E-03 9.86902E-04 1 Divergence >> 22 1.00000E-06 1.48272E-06 1.72021E-03 8.61940E-04 1 Divergence >> 23 1.00000E-06 1.31245E-06 1.72021E-03 7.62959E-04 1 Divergence >> 24 1.00000E-06 1.15596E-06 1.72021E-03 6.71990E-04 1 Divergence >> 25 1.00000E-06 9.86100E-07 1.72021E-03 5.73243E-04 1 Divergence >> 1 U-PRES gmres: 25 9.8610E-07 1.0000E-06 1.7202E-03 >> 9.0149E+00 1.6742E+01 >> 1 DNORM, DIVEX 9.8609999662049055E-007 >> 9.8609999670093433E-007 >> 9.9200000000000002E-008 p22 1 1 >> 1 1 Helmholtz VELX F: 0.0000E+00 9.9200E-08 1.0526E-04 >> 2.0000E+03 >> 1 Hmholtz VELX: 0 0.0000E+00 0.0000E+00 9.9200E-08 >> 9.9200000000000002E-008 p22 1 1 >> 1 1 Helmholtz VELY F: 0.0000E+00 9.9200E-08 1.0526E-04 >> 2.0000E+03 >> 1 Hmholtz VELY: 0 0.0000E+00 0.0000E+00 9.9200E-08 >> 9.9200000000000002E-008 p22 1 1 >> 1 1 Helmholtz VELZ F: 9.9993E-01 9.9200E-08 1.0526E-04 >> 2.0000E+03 >> 1 2 Helmholtz VELZ F: 3.4255E-02 9.9200E-08 1.0526E-04 >> 2.0000E+03 >> 1 3 Helmholtz VELZ F: 8.5689E-03 9.9200E-08 1.0526E-04 >> 2.0000E+03 >> 1 4 Helmholtz VELZ F: 2.0449E-03 9.9200E-08 1.0526E-04 >> 2.0000E+03 >> 1 5 Helmholtz VELZ F: 8.2452E-04 9.9200E-08 1.0526E-04 >> 2.0000E+03 >> 1 6 Helmholtz VELZ F: 2.5912E-04 9.9200E-08 1.0526E-04 >> 2.0000E+03 >> 1 7 Helmholtz VELZ F: 8.5857E-05 9.9200E-08 1.0526E-04 >> 2.0000E+03 >> 1 8 Helmholtz VELZ F: 2.4937E-05 9.9200E-08 1.0526E-04 >> 2.0000E+03 >> 1 9 Helmholtz VELZ F: 8.7854E-06 9.9200E-08 1.0526E-04 >> 2.0000E+03 >> 1 10 Helmholtz VELZ F: 3.0249E-06 9.9200E-08 1.0526E-04 >> 2.0000E+03 >> 1 11 Helmholtz VELZ F: 9.2479E-07 9.9200E-08 1.0526E-04 >> 2.0000E+03 >> 1 12 Helmholtz VELZ F: 3.0301E-07 9.9200E-08 1.0526E-04 >> 2.0000E+03 >> 1 13 Helmholtz VELZ F: 1.0306E-07 9.9200E-08 1.0526E-04 >> 2.0000E+03 >> 1 14 Helmholtz VELZ F: 3.4387E-08 9.9200E-08 1.0526E-04 >> 2.0000E+03 >> 1 Hmholtz VELZ: 13 3.4387E-08 9.9993E-01 9.9200E-08 >> 1 1.00000E-04 8.48221E-11 1.31021E-10 6.47394E-01 0 Divergence >> 0 U-PRES gmres: 1 8.4822E-11 1.0000E-04 1.3102E-10 >> 3.6466E-01 6.1275E-01 >> 1 1.57007E-03 
2.50000E+01 1.00000E+00 basflow Z >> 1 0.1622505E+03 6.74973E-03 1.05976E-05 3.14158E+00 >> 3.14159E+00 volflow Z >> 1 1.6225E+02 2.5581E+01 Fluid done >> filt amp 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0500 >> filt trn 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 0.9500 >> schfile: >> /cfs/klemming/nobackup/g/georgeek/Pipe_550/pipe.sch >> Step 2, t= 1.6225100E+02, DT= 5.0000000E-04, C= 0.252 2.9590E+01 >> 2.9590E+01 >> Solving for fluid >> 9.9200000000000002E-008 p22 2 1 >> 2 Hmholtz VELX: 10 2.6583E-08 1.5981E+00 9.9200E-08 >> 9.9200000000000002E-008 p22 2 1 >> 2 Hmholtz VELY: 10 2.6270E-08 1.5887E+00 9.9200E-08 >> 9.9200000000000002E-008 p22 2 1 >> 2 Hmholtz VELZ: 10 3.7961E-08 1.3628E+00 9.9200E-08 >> 2 U-PRES gmres: 26 8.8941E-07 1.0000E-06 8.7570E-04 >> 9.3625E+00 1.7342E+01 >> 2 DNORM, DIVEX 8.8940552367860238E-007 >> 8.8940552232510758E-007 >> 9.9200000000000002E-008 p22 2 1 >> 2 Hmholtz VELX: 0 0.0000E+00 0.0000E+00 9.9200E-08 >> 9.9200000000000002E-008 p22 2 1 >> 2 Hmholtz VELY: 0 0.0000E+00 0.0000E+00 9.9200E-08 >> 9.9200000000000002E-008 p22 2 1 >> 2 Hmholtz VELZ: 11 2.9548E-08 9.9993E-01 9.9200E-08 >> 0 U-PRES gmres: 1 5.0819E-11 1.0000E-04 7.6670E-11 >> 3.6476E-01 6.1121E-01 >> 2 1.04680E-03 2.50000E+01 1.00000E+00 basflow Z >> 2 0.1622510E+03 6.74912E-03 7.06497E-06 3.14158E+00 >> 3.14159E+00 volflow Z >> 2 1.6225E+02 2.5493E+01 Fluid done >> Step 3, t= 1.6225150E+02, DT= 5.0000000E-04, C= 0.253 6.2005E+01 >> 3.2415E+01 >> Solving for fluid >> 9.9200000000000002E-008 p22 3 1 >> 3 Hmholtz VELX: 9 5.9973E-08 1.9551E+00 9.9200E-08 >> 9.9200000000000002E-008 p22 3 1 >> 3 Hmholtz VELY: 9 5.9734E-08 1.9436E+00 9.9200E-08 >> 9.9200000000000002E-008 p22 3 1 >> 3 Hmholtz VELZ: 9 6.7060E-08 1.6769E+00 9.9200E-08 >> 3 U-PRES gmres: 43 9.7341E-07 1.0000E-06 5.8253E-03 >> 1.5480E+01 2.9163E+01 >> 3 DNORM, DIVEX 9.7340773053122238E-007 >> 9.7340773046898007E-007 >> 9.9200000000000002E-008 p22 3 1 >> 3 Hmholtz VELX: 0 0.0000E+00 0.0000E+00 9.9200E-08 >> 9.9200000000000002E-008 p22 3 1 >> 3 Hmholtz VELY: 0 0.0000E+00 0.0000E+00 9.9200E-08 >> 9.9200000000000002E-008 p22 3 1 >> 3 Hmholtz VELZ: 10 3.2545E-08 9.9993E-01 9.9200E-08 >> 0 U-PRES gmres: 1 3.1707E-11 1.0000E-04 3.8884E-11 >> 3.6436E-01 6.1134E-01 >> 3 8.56501E-04 2.50000E+01 1.00000E+00 basflow Z >> 3 0.1622515E+03 6.74739E-03 5.77915E-06 3.14158E+00 >> 3.14159E+00 volflow Z >> 3 1.6225E+02 3.6833E+01 Fluid done >> Step 4, t= 1.6225200E+02, DT= 5.0000000E-04, C= 0.253 1.0909E+02 >> 4.7088E+01 >> Solving for fluid >> 9.9200000000000002E-008 p22 4 1 >> 4 Hmholtz VELX: 9 5.1382E-08 1.9533E+00 9.9200E-08 >> 9.9200000000000002E-008 p22 4 1 >> 4 Hmholtz VELY: 9 5.1245E-08 1.9419E+00 9.9200E-08 >> 9.9200000000000002E-008 p22 4 1 >> 4 Hmholtz VELZ: 9 5.8121E-08 1.6656E+00 9.9200E-08 >> 4 U-PRES gmres: 20 9.8736E-07 1.0000E-06 5.8401E-04 >> 7.2002E+00 1.3507E+01 >> 4 DNORM, DIVEX 9.8735913161531008E-007 >> 9.8735912866392627E-007 >> 4 0.1622520E+03 6.74462E-03 5.77677E-06 3.14158E+00 >> 3.14159E+00 volflow Z >> 4 1.6225E+02 1.8072E+01 Fluid done >> Step 5, t= 1.6225250E+02, DT= 5.0000000E-04, C= 0.254 1.3731E+02 >> 2.8220E+01 >> Solving for fluid >> 9.9200000000000002E-008 p22 5 1 >> 5 Hmholtz VELX: 9 5.5907E-08 1.9533E+00 9.9200E-08 >> 9.9200000000000002E-008 p22 5 1 >> 5 Hmholtz VELY: 9 5.5659E-08 1.9419E+00 9.9200E-08 >> 9.9200000000000002E-008 p22 5 1 >> 5 Hmholtz VELZ: 9 5.9684E-08 1.6655E+00 9.9200E-08 >> 5 U-PRES gmres: 16 9.2427E-07 1.0000E-06 1.5556E-04 >> 5.7643E+00 1.0499E+01 >> 5 DNORM, DIVEX 
9.2426901539348950E-007 >> 9.2426901307669303E-007 >> 5 0.1622525E+03 6.74218E-03 5.77469E-06 3.14158E+00 >> 3.14159E+00 volflow Z >> 5 1.6225E+02 1.5064E+01 Fluid done >> Step 6, t= 1.6225300E+02, DT= 5.0000000E-04, C= 0.255 1.6256E+02 >> 2.5244E+01 >> Solving for fluid >> 9.9200000000000002E-008 p22 6 1 >> 6 Hmholtz VELX: 9 6.0915E-08 1.9533E+00 9.9200E-08 >> 9.9200000000000002E-008 p22 6 1 >> 6 Hmholtz VELY: 9 6.0642E-08 1.9418E+00 9.9200E-08 >> 9.9200000000000002E-008 p22 6 1 >> 6 Hmholtz VELZ: 9 5.9861E-08 1.6655E+00 9.9200E-08 >> 6 U-PRES gmres: 14 8.6298E-07 1.0000E-06 1.2403E-04 >> 5.0438E+00 9.0451E+00 >> 6 DNORM, DIVEX 8.6297683366363408E-007 >> 8.6297681628504936E-007 >> 6 0.1622530E+03 6.73975E-03 5.77261E-06 3.14158E+00 >> 3.14159E+00 volflow Z >> 6 1.6225E+02 1.3610E+01 Fluid done >> Step 7, t= 1.6225350E+02, DT= 5.0000000E-04, C= 0.255 1.8632E+02 >> 2.3767E+01 >> Solving for fluid >> 9.9200000000000002E-008 p22 7 1 >> 7 Hmholtz VELX: 9 7.2389E-08 1.9533E+00 9.9200E-08 >> 9.9200000000000002E-008 p22 7 1 >> 7 Hmholtz VELY: 9 7.1978E-08 1.9418E+00 9.9200E-08 >> 9.9200000000000002E-008 p22 7 1 >> 7 Hmholtz VELZ: 9 6.0336E-08 1.6654E+00 9.9200E-08 >> 7 U-PRES gmres: 14 7.8284E-07 1.0000E-06 1.2790E-04 >> 5.0456E+00 9.0568E+00 >> 7 DNORM, DIVEX 7.8284171959673956E-007 >> 7.8284171260985997E-007 >> 7 0.1622535E+03 6.73731E-03 5.77052E-06 3.14158E+00 >> 3.14159E+00 volflow Z >> 7 1.6225E+02 1.3620E+01 Fluid done >> Step 8, t= 1.6225400E+02, DT= 5.0000000E-04, C= 0.256 2.1010E+02 >> 2.3780E+01 >> Solving for fluid >> 9.9200000000000002E-008 p22 8 1 >> 8 Hmholtz VELX: 9 8.0530E-08 1.9533E+00 9.9200E-08 >> 9.9200000000000002E-008 p22 8 1 >> 8 Hmholtz VELY: 9 8.0474E-08 1.9418E+00 9.9200E-08 >> 9.9200000000000002E-008 p22 8 1 >> 8 Hmholtz VELZ: 9 6.1838E-08 1.6654E+00 9.9200E-08 >> 8 U-PRES gmres: 13 9.3871E-07 1.0000E-06 8.2637E-05 >> 4.6862E+00 8.3450E+00 >> 8 DNORM, DIVEX 9.3870751505892285E-007 >> 9.3870750972531774E-007 >> 8 0.1622540E+03 6.73487E-03 5.76843E-06 3.14158E+00 >> 3.14159E+00 volflow Z >> 8 1.6225E+02 1.2911E+01 Fluid done >> Step 9, t= 1.6225450E+02, DT= 5.0000000E-04, C= 0.257 2.3318E+02 >> 2.3081E+01 >> Solving for fluid >> 9.9200000000000002E-008 p22 9 1 >> 9 Hmholtz VELX: 9 8.5197E-08 1.9533E+00 9.9200E-08 >> 9.9200000000000002E-008 p22 9 1 >> 9 Hmholtz VELY: 9 8.4881E-08 1.9418E+00 9.9200E-08 >> 9.9200000000000002E-008 p22 9 1 >> 9 Hmholtz VELZ: 9 6.3849E-08 1.6654E+00 9.9200E-08 >> 9 U-PRES gmres: 10 8.0419E-07 1.0000E-06 5.5243E-05 >> 3.6054E+00 6.2748E+00 >> 9 DNORM, DIVEX 8.0418938467388352E-007 >> 8.0418939044644440E-007 >> 9 0.1622545E+03 6.73242E-03 5.76633E-06 3.14158E+00 >> 3.14159E+00 volflow Z >> 9 1.6225E+02 1.0838E+01 Fluid done >> Step 10, t= 1.6225500E+02, DT= 5.0000000E-04, C= 0.257 2.5418E+02 >> 2.0995E+01 >> Solving for fluid >> 9.9200000000000002E-008 p22 10 1 >> 10 Hmholtz VELX: 9 8.7316E-08 1.9533E+00 9.9200E-08 >> 9.9200000000000002E-008 p22 10 1 >> 10 Hmholtz VELY: 9 8.7257E-08 1.9418E+00 9.9200E-08 >> 9.9200000000000002E-008 p22 10 1 >> 10 Hmholtz VELZ: 9 6.5276E-08 1.6654E+00 9.9200E-08 >> 10 U-PRES gmres: 14 7.9070E-07 1.0000E-06 5.7631E-05 >> 5.0451E+00 9.0540E+00 >> 10 DNORM, DIVEX 7.9069874056473697E-007 >> 7.9069875187583686E-007 >> 10 0.1622550E+03 6.72998E-03 5.76424E-06 3.14158E+00 >> 3.14159E+00 volflow Z >> 10 1.6225E+02 1.3620E+01 Fluid done >> Step 11, t= 1.6225550E+02, DT= 5.0000000E-04, C= 0.258 2.7885E+02 >> 2.4675E+01 >> Solving for fluid >> 11 100 **ERROR**: Failed in HMHOLTZ: VELX 7.9521E+07 >> 1.9533E+00 9.9200E-08 >> 
11 100 **ERROR**: Failed in HMHOLTZ: VELY 2.8223E+03 >> 1.9418E+00 9.9200E-08 >> 11 Hmholtz VELZ: 9 6.6283E-08 1.6654E+00 9.9200E-08 >> 11 U-PRES gmres: 100 1.6296E+02 1.0000E-06 2.6253E+12 >> 3.6002E+01 6.8265E+01 >> 11 DNORM, DIVEX 54998323.86255041 162.9642641112922 >> 11 0.1622555E+03 4.32525E-03 3.70458E-06 3.14159E+00 >> 3.14159E+00 volflow Z >> 11 1.6226E+02 9.4526E+01 Fluid done >> CFL, Ctarg! 11083557069312.66 1.000000000000000 >> call outfld: ifpsco: F >> >> 12 1.6226E+02 Write checkpoint: >> >> call outfld: ifpsco: F >> >> 12 1.6226E+02 Write checkpoint: >> 0 12 OPEN: pipe0.f00001 >> 850 12 OPEN: pipe5.f00001 >> 510 12 OPEN: pipe3.f00001 >> 170 12 OPEN: pipe1.f00001 >> 1020 12 OPEN: pipe6.f00001 >> 680 12 OPEN: pipe4.f00001 >> 340 12 OPEN: pipe2.f00001 >> >> 12 1.6226E+02 done :: Write checkpoint >> file size = 234.E+02MB >> >> 899 Emergency exit: 12 time = 162.2554999999999 >> >> 512 Emergency exit: 12 time = 162.2554999999999 >> >> 459 Emergency exit: 12 time = 162.2554999999999 >> Latest solution and data are dumped for post-processing. >> *** STOP *** >> 461 Emergency exit: 12 time = 162.2554999999999 >> Latest solution and data are dumped for post-processing. >> *** STOP ***
>>
>> -------------------------------------------------------------------------------
>>
>> Quoting nek5000-users at lists.mcs.anl.gov:
>>
>>> George,
>>>
>>> Can you provide more details? A logfile would be helpful.
>>>
>>> Cheers,
>>> Stefan

From nek5000-users at lists.mcs.anl.gov Thu Jan 19 09:39:35 2012
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Thu, 19 Jan 2012 09:39:35 -0600 (CST)
Subject: [Nek5000-users] Failure with TORDER = 3 (P027)
In-Reply-To: <20120119163649.20565e8hnalboa6p@www.mech.kth.se>
References: <20111207145358.57299dpe5pezsqgm@www.mech.kth.se> <20111207163805.11226tijj6k67nfh@www.mech.kth.se> <20111209150722.80955p873qx25fyy@www.mech.kth.se> <20120119163649.20565e8hnalboa6p@www.mech.kth.se>
Message-ID:

Thank you George - so do I interpret this to mean that all is ok?

Paul

On Thu, 19 Jan 2012, nek5000-users at lists.mcs.anl.gov wrote:

> Dear Neks,
>
> I am using revision 758 of the code with the new full-restart option, and
> my pipe flow simulation now runs fine with TORDER = 3 and with the
> projection parameters, i.e. P094 and P095, set to non-zero values.
>
> Regards
> George
[remainder of quoted thread and logfile trimmed]
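To make Stefan's earlier diagnostic suggestion concrete: a sketch of how it maps onto the parameter block shown in George's log, assuming (from the annotations in the log itself) that P063 left at 0 gives 4-byte .fXXXXX output, P065=1 writes through a single I/O node, and IFCHAR is the logical switch for the characteristics scheme; the exact .rea lines are case-specific.

  63   0.00000     P063: =8 --> force 8-byte output   (0 --> 4-byte output)
  65   1.00000     P065: #iofiles (eg, 0 or 64); <0 --> sep. dirs   (1 --> one I/O node)

  IFCHAR = F       (characteristics scheme off)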
>>> >>> Regards >>> George >>> >>> ------------------------------------------------------------------------------- >>> 118 Parameters from file:/ >>> 1 1.00000 P001: DENSITY >>> 2 -9500. P002: VISCOS >>> 7 1.00000 P007: RHOCP >>> 8 1.00000 P008: CONDUCT >>> 11 500.0 P011: NSTEPS >>> 12 -5.000E-04 P012: DT >>> 15 500.00 P015: IOSTEP >>> 17 1.00000 P017: >>> 18 0.500000E-01 P018: GRID < 0 --> # cells on screen >>> 19 -1.00000 P019: INTYPE >>> 20 10.0000 P020: NORDER >>> 21 0.100000E-05 P021: DIVERGENCE >>> 22 9.920000E-08 P022: HELMHOLTZ >>> 24 0.100000E-01 P024: TOLREL >>> 25 0.100000E-01 P025: TOLABS >>> 26 1.00000 P026: COURANT/NTAU >>> 27 3.00000 P027: TORDER >>> 28 0.00000 P028: TORDER: mesh velocity (0: p28=p27) >>> 54 -3.00000 P054: fixed flow rate dir: |p54|=1,2,3=x,y,z >>> 55 1.00000 P055: vol.flow rate (p54>0) or Ubar (p54<0) >>> 63 8.00000 P063: =8 --> force 8-byte output >>> 65 6.00000 P065: #iofiles (eg, 0 or 64); <0 --> sep. dirs >>> 66 6.00000 P066: output : <0=ascii, else binary >>> 67 6.00000 P067: restart: <0=ascii, else binary >>> 68 500.00 P068: iastep: freq for avg_all (0=iostep) >>> 69 50000.0 P069: : : frequency of srf dump >>> 93 20.0000 P093: Number of previous pressure solns saved >>> 99 3.00000 P099: dealiasing: <0--> off/3--> old/4--> new >>> 102 1.00000 P102: Dump out divergence at each time step >>> 103 0.05000 P103: weight of stabilizing filter (.01) >>> >>> IFTRAN = T >>> IFFLOW = T >>> IFHEAT = F >>> IFSPLIT = F >>> IFLOMACH = F >>> IFUSERVP = F >>> IFUSERMV = F >>> IFSTRS = F >>> IFCHAR = T >>> IFCYCLIC = F >>> IFAXIS = F >>> IFMVBD = F >>> IFMELT = F >>> IFMODEL = F >>> IFKEPS = F >>> IFMOAB = F >>> IFNEKNEK = F >>> IFSYNC = T >>> >>> IFVCOR = T >>> IFINTQ = F >>> IFCWUZ = F >>> IFSWALL = F >>> IFGEOM = F >>> IFSURT = F >>> IFWCNO = F >>> >>> IFTMSH for field 1 = F >>> IFADVC for field 1 = T >>> IFNONL for field 1 = F >>> >>> Dealiasing enabled, lxd= 12 >>> >>> Estimated eigenvalues >>> EIGAA = 1.650197855862139 >>> EIGGA = 71694413.86227663 >>> EIGAE = 1.5791367041742943E-002 >>> EIGAS = 7.9744816586921753E-004 >>> EIGGE = 71694413.86227663 >>> EIGGS = 2.000000000000000 >>> >>> verify mesh topology >>> -1.000000000000000 1.000000000000000 Xrange >>> -1.000000000000000 1.000000000000000 Yrange >>> 0.000000000000000 25.00000000000002 Zrange >>> done :: verify mesh topology >>> >>> E-solver strategy: 1 itr >>> mg_nx: 1 5 7 >>> mg_ny: 1 5 7 >>> mg_nz: 1 5 7 >>> call usrsetvert >>> done :: usrsetvert >>> >>> gs_setup: 277536 unique labels shared >>> pairwise times (avg, min, max): 0.000236133 0.000198293 0.000261211 >>> crystal router : 0.000244021 0.000238085 0.00025022 >>> used all_to_all method: crystal router >>> setupds time 2.1331E-02 seconds 1 2 875808 853632 >>> setvert3d: 4 16416864 23245920 16416864 16416864 >>> call usrsetvert >>> done :: usrsetvert >>> >>> gs_setup: 2635744 unique labels shared >>> pairwise times (avg, min, max): 0.0004331 0.000362206 0.000494504 >>> crystal router : 0.001126 0.0011049 0.00114682 >>> used all_to_all method: pairwise >>> setupds time 1.9091E-01 seconds 2 4 16416864 853632 >>> setvert3d: 6 52620192 107252640 52620192 52620192 >>> call usrsetvert >>> done :: usrsetvert >>> >>> gs_setup: 7399328 unique labels shared >>> pairwise times (avg, min, max): 0.000524018 0.000427318 0.000591493 >>> crystal router : 0.00345075 0.0033884 0.0035347 >>> used all_to_all method: pairwise >>> setupds time 5.6218E-01 seconds 3 6 52620192 853632 >>> setvert3d: 8 109485792 293870304 109485792 109485792 >>> call usrsetvert >>> done :: 
usrsetvert >>> >>> gs_setup: 14568288 unique labels shared >>> pairwise times (avg, min, max): 0.00098448 0.000790119 0.00117922 >>> crystal router : 0.00694697 0.00683801 0.00708301 >>> used all_to_all method: pairwise >>> setupds time 1.4705E+00 seconds 4 8 109485792 853632 >>> setup h1 coarse grid, nx_crs= 2 >>> call usrsetvert >>> done :: usrsetvert >>> >>> gs_setup: 277536 unique labels shared >>> pairwise times (avg, min, max): 0.000271898 0.000193095 0.000345087 >>> crystal router : 0.000370127 0.000366497 0.000374007 >>> used all_to_all method: pairwise >>> done :: setup h1 coarse grid 562.8824191093445 sec >>> >>> call usrdat3 >>> done :: usrdat3 >>> >>> set initial conditions >>> Checking restart options: pipe?.f00001 >>> Reading checkpoint data >>> 0 0 OPEN: pipe0.f00001 >>> byte swap: F 6.543210 -2.9312772E+35 >>> 850 0 OPEN: pipe5.f00001 >>> 510 0 OPEN: pipe3.f00001 >>> 170 0 OPEN: pipe1.f00001 >>> 1020 0 OPEN: pipe6.f00001 >>> 680 0 OPEN: pipe4.f00001 >>> 340 0 OPEN: pipe2.f00001 >>> >>> 0 1.6225E+02 done :: Read checkpoint data >>> avg data-throughput = -65.6MBps >>> io-nodes = 6 >>> >>> xyz min -1.0000 -1.0000 0.0000 >>> uvwpt min -0.43349 -0.45564 -0.77820E-01 0.69058E+08 0.0000 >>> xyz max 1.0000 1.0000 25.000 >>> uvwpt max 0.44557 0.38210 1.4216 0.69058E+08 0.0000 >>> Restart: recompute geom. factors. >>> regenerate geomerty data 1 >>> vol_t,vol_v: 78.53976641971477 78.53976641971477 >>> done :: regenerate geomerty data 1 >>> >>> done :: set initial conditions >>> >>> call userchk >>> done :: userchk >>> >>> gridpoints unique/tot: 293870304 437059584 >>> dofs: 291725280 184384512 >>> >>> Initial time: 0.1622500E+03 >>> Initialization successfully completed 616.10 sec >>> >>> Starting time loop ... >>> >>> DT/DTCFL/DTFS/DTINIT 0.500E-03 0.494-323 0.299-316 0.500E-03 >>> Step 1, t= 1.6225050E+02, DT= 5.0000000E-04, C= 0.251 0.0000E+00 >>> 0.0000E+00 >>> Solving for fluid >>> 9.9200000000000002E-008 p22 1 1 >>> 1 1 Helmholtz VELX F: 1.0654E+00 9.9200E-08 1.0526E-04 >>> 2.0000E+03 >>> 1 2 Helmholtz VELX F: 1.5163E-02 9.9200E-08 1.0526E-04 >>> 2.0000E+03 >>> 1 3 Helmholtz VELX F: 1.6029E-03 9.9200E-08 1.0526E-04 >>> 2.0000E+03 >>> 1 4 Helmholtz VELX F: 3.9700E-04 9.9200E-08 1.0526E-04 >>> 2.0000E+03 >>> 1 5 Helmholtz VELX F: 1.4559E-04 9.9200E-08 1.0526E-04 >>> 2.0000E+03 >>> 1 6 Helmholtz VELX F: 4.8307E-05 9.9200E-08 1.0526E-04 >>> 2.0000E+03 >>> 1 7 Helmholtz VELX F: 1.5822E-05 9.9200E-08 1.0526E-04 >>> 2.0000E+03 >>> 1 8 Helmholtz VELX F: 4.7557E-06 9.9200E-08 1.0526E-04 >>> 2.0000E+03 >>> 1 9 Helmholtz VELX F: 1.4659E-06 9.9200E-08 1.0526E-04 >>> 2.0000E+03 >>> 1 10 Helmholtz VELX F: 5.6372E-07 9.9200E-08 1.0526E-04 >>> 2.0000E+03 >>> 1 11 Helmholtz VELX F: 1.6238E-07 9.9200E-08 1.0526E-04 >>> 2.0000E+03 >>> 1 12 Helmholtz VELX F: 4.9454E-08 9.9200E-08 1.0526E-04 >>> 2.0000E+03 >>> 1 Hmholtz VELX: 11 4.9454E-08 1.0654E+00 9.9200E-08 >>> 9.9200000000000002E-008 p22 1 1 >>> 1 1 Helmholtz VELY F: 1.0592E+00 9.9200E-08 1.0526E-04 >>> 2.0000E+03 >>> 1 2 Helmholtz VELY F: 1.5106E-02 9.9200E-08 1.0526E-04 >>> 2.0000E+03 >>> 1 3 Helmholtz VELY F: 1.6168E-03 9.9200E-08 1.0526E-04 >>> 2.0000E+03 >>> 1 4 Helmholtz VELY F: 3.9446E-04 9.9200E-08 1.0526E-04 >>> 2.0000E+03 >>> 1 5 Helmholtz VELY F: 1.4562E-04 9.9200E-08 1.0526E-04 >>> 2.0000E+03 >>> 1 6 Helmholtz VELY F: 4.9132E-05 9.9200E-08 1.0526E-04 >>> 2.0000E+03 >>> 1 7 Helmholtz VELY F: 1.5898E-05 9.9200E-08 1.0526E-04 >>> 2.0000E+03 >>> 1 8 Helmholtz VELY F: 4.7011E-06 9.9200E-08 1.0526E-04 >>> 2.0000E+03 >>> 1 9 Helmholtz VELY 
F: 1.4592E-06 9.9200E-08 1.0526E-04 >>> 2.0000E+03 >>> 1 10 Helmholtz VELY F: 5.6658E-07 9.9200E-08 1.0526E-04 >>> 2.0000E+03 >>> 1 11 Helmholtz VELY F: 1.6209E-07 9.9200E-08 1.0526E-04 >>> 2.0000E+03 >>> 1 12 Helmholtz VELY F: 4.8705E-08 9.9200E-08 1.0526E-04 >>> 2.0000E+03 >>> 1 Hmholtz VELY: 11 4.8705E-08 1.0592E+00 9.9200E-08 >>> 9.9200000000000002E-008 p22 1 1 >>> 1 1 Helmholtz VELZ F: 9.0867E-01 9.9200E-08 1.0526E-04 >>> 2.0000E+03 >>> 1 2 Helmholtz VELZ F: 1.5203E-02 9.9200E-08 1.0526E-04 >>> 2.0000E+03 >>> 1 3 Helmholtz VELZ F: 2.3594E-03 9.9200E-08 1.0526E-04 >>> 2.0000E+03 >>> 1 4 Helmholtz VELZ F: 5.4341E-04 9.9200E-08 1.0526E-04 >>> 2.0000E+03 >>> 1 5 Helmholtz VELZ F: 1.9420E-04 9.9200E-08 1.0526E-04 >>> 2.0000E+03 >>> 1 6 Helmholtz VELZ F: 6.9938E-05 9.9200E-08 1.0526E-04 >>> 2.0000E+03 >>> 1 7 Helmholtz VELZ F: 2.1336E-05 9.9200E-08 1.0526E-04 >>> 2.0000E+03 >>> 1 8 Helmholtz VELZ F: 6.4972E-06 9.9200E-08 1.0526E-04 >>> 2.0000E+03 >>> 1 9 Helmholtz VELZ F: 2.1068E-06 9.9200E-08 1.0526E-04 >>> 2.0000E+03 >>> 1 10 Helmholtz VELZ F: 7.2366E-07 9.9200E-08 1.0526E-04 >>> 2.0000E+03 >>> 1 11 Helmholtz VELZ F: 2.2873E-07 9.9200E-08 1.0526E-04 >>> 2.0000E+03 >>> 1 12 Helmholtz VELZ F: 6.6523E-08 9.9200E-08 1.0526E-04 >>> 2.0000E+03 >>> 1 Hmholtz VELZ: 11 6.6523E-08 9.0867E-01 9.9200E-08 >>> 1 1.00000E-06 8.15804E-04 1.72021E-03 4.74246E-01 1 Divergence >>> 2 1.00000E-06 4.25351E-04 1.72021E-03 2.47267E-01 1 Divergence >>> 3 1.00000E-06 2.20250E-04 1.72021E-03 1.28037E-01 1 Divergence >>> 4 1.00000E-06 1.10200E-04 1.72021E-03 6.40617E-02 1 Divergence >>> 5 1.00000E-06 6.66356E-05 1.72021E-03 3.87369E-02 1 Divergence >>> 6 1.00000E-06 4.55137E-05 1.72021E-03 2.64582E-02 1 Divergence >>> 7 1.00000E-06 3.45979E-05 1.72021E-03 2.01126E-02 1 Divergence >>> 8 1.00000E-06 2.74987E-05 1.72021E-03 1.59856E-02 1 Divergence >>> 9 1.00000E-06 2.25703E-05 1.72021E-03 1.31207E-02 1 Divergence >>> 10 1.00000E-06 1.84355E-05 1.72021E-03 1.07170E-02 1 Divergence >>> 11 1.00000E-06 1.51102E-05 1.72021E-03 8.78394E-03 1 Divergence >>> 12 1.00000E-06 1.23753E-05 1.72021E-03 7.19407E-03 1 Divergence >>> 13 1.00000E-06 9.99015E-06 1.72021E-03 5.80751E-03 1 Divergence >>> 14 1.00000E-06 7.91532E-06 1.72021E-03 4.60136E-03 1 Divergence >>> 15 1.00000E-06 6.25368E-06 1.72021E-03 3.63541E-03 1 Divergence >>> 16 1.00000E-06 4.91692E-06 1.72021E-03 2.85832E-03 1 Divergence >>> 17 1.00000E-06 3.87115E-06 1.72021E-03 2.25039E-03 1 Divergence >>> 18 1.00000E-06 3.04686E-06 1.72021E-03 1.77121E-03 1 Divergence >>> 19 1.00000E-06 2.41971E-06 1.72021E-03 1.40663E-03 1 Divergence >>> 20 1.00000E-06 1.93080E-06 1.72021E-03 1.12242E-03 1 Divergence >>> 21 1.00000E-06 1.69768E-06 1.72021E-03 9.86902E-04 1 Divergence >>> 22 1.00000E-06 1.48272E-06 1.72021E-03 8.61940E-04 1 Divergence >>> 23 1.00000E-06 1.31245E-06 1.72021E-03 7.62959E-04 1 Divergence >>> 24 1.00000E-06 1.15596E-06 1.72021E-03 6.71990E-04 1 Divergence >>> 25 1.00000E-06 9.86100E-07 1.72021E-03 5.73243E-04 1 Divergence >>> 1 U-PRES gmres: 25 9.8610E-07 1.0000E-06 1.7202E-03 >>> 9.0149E+00 1.6742E+01 >>> 1 DNORM, DIVEX 9.8609999662049055E-007 >>> 9.8609999670093433E-007 >>> 9.9200000000000002E-008 p22 1 1 >>> 1 1 Helmholtz VELX F: 0.0000E+00 9.9200E-08 1.0526E-04 >>> 2.0000E+03 >>> 1 Hmholtz VELX: 0 0.0000E+00 0.0000E+00 9.9200E-08 >>> 9.9200000000000002E-008 p22 1 1 >>> 1 1 Helmholtz VELY F: 0.0000E+00 9.9200E-08 1.0526E-04 >>> 2.0000E+03 >>> 1 Hmholtz VELY: 0 0.0000E+00 0.0000E+00 9.9200E-08 >>> 9.9200000000000002E-008 p22 1 1 >>> 1 1 Helmholtz VELZ F: 
9.9993E-01 9.9200E-08 1.0526E-04 >>> 2.0000E+03 >>> 1 2 Helmholtz VELZ F: 3.4255E-02 9.9200E-08 1.0526E-04 >>> 2.0000E+03 >>> 1 3 Helmholtz VELZ F: 8.5689E-03 9.9200E-08 1.0526E-04 >>> 2.0000E+03 >>> 1 4 Helmholtz VELZ F: 2.0449E-03 9.9200E-08 1.0526E-04 >>> 2.0000E+03 >>> 1 5 Helmholtz VELZ F: 8.2452E-04 9.9200E-08 1.0526E-04 >>> 2.0000E+03 >>> 1 6 Helmholtz VELZ F: 2.5912E-04 9.9200E-08 1.0526E-04 >>> 2.0000E+03 >>> 1 7 Helmholtz VELZ F: 8.5857E-05 9.9200E-08 1.0526E-04 >>> 2.0000E+03 >>> 1 8 Helmholtz VELZ F: 2.4937E-05 9.9200E-08 1.0526E-04 >>> 2.0000E+03 >>> 1 9 Helmholtz VELZ F: 8.7854E-06 9.9200E-08 1.0526E-04 >>> 2.0000E+03 >>> 1 10 Helmholtz VELZ F: 3.0249E-06 9.9200E-08 1.0526E-04 >>> 2.0000E+03 >>> 1 11 Helmholtz VELZ F: 9.2479E-07 9.9200E-08 1.0526E-04 >>> 2.0000E+03 >>> 1 12 Helmholtz VELZ F: 3.0301E-07 9.9200E-08 1.0526E-04 >>> 2.0000E+03 >>> 1 13 Helmholtz VELZ F: 1.0306E-07 9.9200E-08 1.0526E-04 >>> 2.0000E+03 >>> 1 14 Helmholtz VELZ F: 3.4387E-08 9.9200E-08 1.0526E-04 >>> 2.0000E+03 >>> 1 Hmholtz VELZ: 13 3.4387E-08 9.9993E-01 9.9200E-08 >>> 1 1.00000E-04 8.48221E-11 1.31021E-10 6.47394E-01 0 Divergence >>> 0 U-PRES gmres: 1 8.4822E-11 1.0000E-04 1.3102E-10 >>> 3.6466E-01 6.1275E-01 >>> 1 1.57007E-03 2.50000E+01 1.00000E+00 basflow Z >>> 1 0.1622505E+03 6.74973E-03 1.05976E-05 3.14158E+00 >>> 3.14159E+00 volflow Z >>> 1 1.6225E+02 2.5581E+01 Fluid done >>> filt amp 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0500 >>> filt trn 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 0.9500 >>> schfile: >>> /cfs/klemming/nobackup/g/georgeek/Pipe_550/pipe.sch >>> Step 2, t= 1.6225100E+02, DT= 5.0000000E-04, C= 0.252 2.9590E+01 >>> 2.9590E+01 >>> Solving for fluid >>> 9.9200000000000002E-008 p22 2 1 >>> 2 Hmholtz VELX: 10 2.6583E-08 1.5981E+00 9.9200E-08 >>> 9.9200000000000002E-008 p22 2 1 >>> 2 Hmholtz VELY: 10 2.6270E-08 1.5887E+00 9.9200E-08 >>> 9.9200000000000002E-008 p22 2 1 >>> 2 Hmholtz VELZ: 10 3.7961E-08 1.3628E+00 9.9200E-08 >>> 2 U-PRES gmres: 26 8.8941E-07 1.0000E-06 8.7570E-04 >>> 9.3625E+00 1.7342E+01 >>> 2 DNORM, DIVEX 8.8940552367860238E-007 >>> 8.8940552232510758E-007 >>> 9.9200000000000002E-008 p22 2 1 >>> 2 Hmholtz VELX: 0 0.0000E+00 0.0000E+00 9.9200E-08 >>> 9.9200000000000002E-008 p22 2 1 >>> 2 Hmholtz VELY: 0 0.0000E+00 0.0000E+00 9.9200E-08 >>> 9.9200000000000002E-008 p22 2 1 >>> 2 Hmholtz VELZ: 11 2.9548E-08 9.9993E-01 9.9200E-08 >>> 0 U-PRES gmres: 1 5.0819E-11 1.0000E-04 7.6670E-11 >>> 3.6476E-01 6.1121E-01 >>> 2 1.04680E-03 2.50000E+01 1.00000E+00 basflow Z >>> 2 0.1622510E+03 6.74912E-03 7.06497E-06 3.14158E+00 >>> 3.14159E+00 volflow Z >>> 2 1.6225E+02 2.5493E+01 Fluid done >>> Step 3, t= 1.6225150E+02, DT= 5.0000000E-04, C= 0.253 6.2005E+01 >>> 3.2415E+01 >>> Solving for fluid >>> 9.9200000000000002E-008 p22 3 1 >>> 3 Hmholtz VELX: 9 5.9973E-08 1.9551E+00 9.9200E-08 >>> 9.9200000000000002E-008 p22 3 1 >>> 3 Hmholtz VELY: 9 5.9734E-08 1.9436E+00 9.9200E-08 >>> 9.9200000000000002E-008 p22 3 1 >>> 3 Hmholtz VELZ: 9 6.7060E-08 1.6769E+00 9.9200E-08 >>> 3 U-PRES gmres: 43 9.7341E-07 1.0000E-06 5.8253E-03 >>> 1.5480E+01 2.9163E+01 >>> 3 DNORM, DIVEX 9.7340773053122238E-007 >>> 9.7340773046898007E-007 >>> 9.9200000000000002E-008 p22 3 1 >>> 3 Hmholtz VELX: 0 0.0000E+00 0.0000E+00 9.9200E-08 >>> 9.9200000000000002E-008 p22 3 1 >>> 3 Hmholtz VELY: 0 0.0000E+00 0.0000E+00 9.9200E-08 >>> 9.9200000000000002E-008 p22 3 1 >>> 3 Hmholtz VELZ: 10 3.2545E-08 9.9993E-01 9.9200E-08 >>> 0 U-PRES gmres: 1 3.1707E-11 1.0000E-04 3.8884E-11 >>> 3.6436E-01 6.1134E-01 >>> 3 
8.56501E-04 2.50000E+01 1.00000E+00 basflow Z >>> 3 0.1622515E+03 6.74739E-03 5.77915E-06 3.14158E+00 >>> 3.14159E+00 volflow Z >>> 3 1.6225E+02 3.6833E+01 Fluid done >>> Step 4, t= 1.6225200E+02, DT= 5.0000000E-04, C= 0.253 1.0909E+02 >>> 4.7088E+01 >>> Solving for fluid >>> 9.9200000000000002E-008 p22 4 1 >>> 4 Hmholtz VELX: 9 5.1382E-08 1.9533E+00 9.9200E-08 >>> 9.9200000000000002E-008 p22 4 1 >>> 4 Hmholtz VELY: 9 5.1245E-08 1.9419E+00 9.9200E-08 >>> 9.9200000000000002E-008 p22 4 1 >>> 4 Hmholtz VELZ: 9 5.8121E-08 1.6656E+00 9.9200E-08 >>> 4 U-PRES gmres: 20 9.8736E-07 1.0000E-06 5.8401E-04 >>> 7.2002E+00 1.3507E+01 >>> 4 DNORM, DIVEX 9.8735913161531008E-007 >>> 9.8735912866392627E-007 >>> 4 0.1622520E+03 6.74462E-03 5.77677E-06 3.14158E+00 >>> 3.14159E+00 volflow Z >>> 4 1.6225E+02 1.8072E+01 Fluid done >>> Step 5, t= 1.6225250E+02, DT= 5.0000000E-04, C= 0.254 1.3731E+02 >>> 2.8220E+01 >>> Solving for fluid >>> 9.9200000000000002E-008 p22 5 1 >>> 5 Hmholtz VELX: 9 5.5907E-08 1.9533E+00 9.9200E-08 >>> 9.9200000000000002E-008 p22 5 1 >>> 5 Hmholtz VELY: 9 5.5659E-08 1.9419E+00 9.9200E-08 >>> 9.9200000000000002E-008 p22 5 1 >>> 5 Hmholtz VELZ: 9 5.9684E-08 1.6655E+00 9.9200E-08 >>> 5 U-PRES gmres: 16 9.2427E-07 1.0000E-06 1.5556E-04 >>> 5.7643E+00 1.0499E+01 >>> 5 DNORM, DIVEX 9.2426901539348950E-007 >>> 9.2426901307669303E-007 >>> 5 0.1622525E+03 6.74218E-03 5.77469E-06 3.14158E+00 >>> 3.14159E+00 volflow Z >>> 5 1.6225E+02 1.5064E+01 Fluid done >>> Step 6, t= 1.6225300E+02, DT= 5.0000000E-04, C= 0.255 1.6256E+02 >>> 2.5244E+01 >>> Solving for fluid >>> 9.9200000000000002E-008 p22 6 1 >>> 6 Hmholtz VELX: 9 6.0915E-08 1.9533E+00 9.9200E-08 >>> 9.9200000000000002E-008 p22 6 1 >>> 6 Hmholtz VELY: 9 6.0642E-08 1.9418E+00 9.9200E-08 >>> 9.9200000000000002E-008 p22 6 1 >>> 6 Hmholtz VELZ: 9 5.9861E-08 1.6655E+00 9.9200E-08 >>> 6 U-PRES gmres: 14 8.6298E-07 1.0000E-06 1.2403E-04 >>> 5.0438E+00 9.0451E+00 >>> 6 DNORM, DIVEX 8.6297683366363408E-007 >>> 8.6297681628504936E-007 >>> 6 0.1622530E+03 6.73975E-03 5.77261E-06 3.14158E+00 >>> 3.14159E+00 volflow Z >>> 6 1.6225E+02 1.3610E+01 Fluid done >>> Step 7, t= 1.6225350E+02, DT= 5.0000000E-04, C= 0.255 1.8632E+02 >>> 2.3767E+01 >>> Solving for fluid >>> 9.9200000000000002E-008 p22 7 1 >>> 7 Hmholtz VELX: 9 7.2389E-08 1.9533E+00 9.9200E-08 >>> 9.9200000000000002E-008 p22 7 1 >>> 7 Hmholtz VELY: 9 7.1978E-08 1.9418E+00 9.9200E-08 >>> 9.9200000000000002E-008 p22 7 1 >>> 7 Hmholtz VELZ: 9 6.0336E-08 1.6654E+00 9.9200E-08 >>> 7 U-PRES gmres: 14 7.8284E-07 1.0000E-06 1.2790E-04 >>> 5.0456E+00 9.0568E+00 >>> 7 DNORM, DIVEX 7.8284171959673956E-007 >>> 7.8284171260985997E-007 >>> 7 0.1622535E+03 6.73731E-03 5.77052E-06 3.14158E+00 >>> 3.14159E+00 volflow Z >>> 7 1.6225E+02 1.3620E+01 Fluid done >>> Step 8, t= 1.6225400E+02, DT= 5.0000000E-04, C= 0.256 2.1010E+02 >>> 2.3780E+01 >>> Solving for fluid >>> 9.9200000000000002E-008 p22 8 1 >>> 8 Hmholtz VELX: 9 8.0530E-08 1.9533E+00 9.9200E-08 >>> 9.9200000000000002E-008 p22 8 1 >>> 8 Hmholtz VELY: 9 8.0474E-08 1.9418E+00 9.9200E-08 >>> 9.9200000000000002E-008 p22 8 1 >>> 8 Hmholtz VELZ: 9 6.1838E-08 1.6654E+00 9.9200E-08 >>> 8 U-PRES gmres: 13 9.3871E-07 1.0000E-06 8.2637E-05 >>> 4.6862E+00 8.3450E+00 >>> 8 DNORM, DIVEX 9.3870751505892285E-007 >>> 9.3870750972531774E-007 >>> 8 0.1622540E+03 6.73487E-03 5.76843E-06 3.14158E+00 >>> 3.14159E+00 volflow Z >>> 8 1.6225E+02 1.2911E+01 Fluid done >>> Step 9, t= 1.6225450E+02, DT= 5.0000000E-04, C= 0.257 2.3318E+02 >>> 2.3081E+01 >>> Solving for fluid >>> 
9.9200000000000002E-008 p22 9 1 >>> 9 Hmholtz VELX: 9 8.5197E-08 1.9533E+00 9.9200E-08 >>> 9.9200000000000002E-008 p22 9 1 >>> 9 Hmholtz VELY: 9 8.4881E-08 1.9418E+00 9.9200E-08 >>> 9.9200000000000002E-008 p22 9 1 >>> 9 Hmholtz VELZ: 9 6.3849E-08 1.6654E+00 9.9200E-08 >>> 9 U-PRES gmres: 10 8.0419E-07 1.0000E-06 5.5243E-05 >>> 3.6054E+00 6.2748E+00 >>> 9 DNORM, DIVEX 8.0418938467388352E-007 >>> 8.0418939044644440E-007 >>> 9 0.1622545E+03 6.73242E-03 5.76633E-06 3.14158E+00 >>> 3.14159E+00 volflow Z >>> 9 1.6225E+02 1.0838E+01 Fluid done >>> Step 10, t= 1.6225500E+02, DT= 5.0000000E-04, C= 0.257 2.5418E+02 >>> 2.0995E+01 >>> Solving for fluid >>> 9.9200000000000002E-008 p22 10 1 >>> 10 Hmholtz VELX: 9 8.7316E-08 1.9533E+00 9.9200E-08 >>> 9.9200000000000002E-008 p22 10 1 >>> 10 Hmholtz VELY: 9 8.7257E-08 1.9418E+00 9.9200E-08 >>> 9.9200000000000002E-008 p22 10 1 >>> 10 Hmholtz VELZ: 9 6.5276E-08 1.6654E+00 9.9200E-08 >>> 10 U-PRES gmres: 14 7.9070E-07 1.0000E-06 5.7631E-05 >>> 5.0451E+00 9.0540E+00 >>> 10 DNORM, DIVEX 7.9069874056473697E-007 >>> 7.9069875187583686E-007 >>> 10 0.1622550E+03 6.72998E-03 5.76424E-06 3.14158E+00 >>> 3.14159E+00 volflow Z >>> 10 1.6225E+02 1.3620E+01 Fluid done >>> Step 11, t= 1.6225550E+02, DT= 5.0000000E-04, C= 0.258 2.7885E+02 >>> 2.4675E+01 >>> Solving for fluid >>> 11 100 **ERROR**: Failed in HMHOLTZ: VELX 7.9521E+07 >>> 1.9533E+00 9.9200E-08 >>> 11 100 **ERROR**: Failed in HMHOLTZ: VELY 2.8223E+03 >>> 1.9418E+00 9.9200E-08 >>> 11 Hmholtz VELZ: 9 6.6283E-08 1.6654E+00 9.9200E-08 >>> 11 U-PRES gmres: 100 1.6296E+02 1.0000E-06 2.6253E+12 >>> 3.6002E+01 6.8265E+01 >>> 11 DNORM, DIVEX 54998323.86255041 162.9642641112922 >>> 11 0.1622555E+03 4.32525E-03 3.70458E-06 3.14159E+00 >>> 3.14159E+00 volflow Z >>> 11 1.6226E+02 9.4526E+01 Fluid done >>> CFL, Ctarg! 11083557069312.66 1.000000000000000 >>> call outfld: ifpsco: F >>> >>> 12 1.6226E+02 Write checkpoint: >>> >>> call outfld: ifpsco: F >>> >>> 12 1.6226E+02 Write checkpoint: >>> 0 12 OPEN: pipe0.f00001 >>> 850 12 OPEN: pipe5.f00001 >>> 510 12 OPEN: pipe3.f00001 >>> 170 12 OPEN: pipe1.f00001 >>> 1020 12 OPEN: pipe6.f00001 >>> 680 12 OPEN: pipe4.f00001 >>> 340 12 OPEN: pipe2.f00001 >>> >>> 12 1.6226E+02 done :: Write checkpoint >>> file size = 234.E+02MB >>> >>> 899 Emergency exit: 12 time = 162.2554999999999 >>> >>> 512 Emergency exit: 12 time = 162.2554999999999 >>> >>> 459 Emergency exit: 12 time = 162.2554999999999 >>> Latest solution and data are dumped for post-processing. >>> *** STOP *** >>> 461 Emergency exit: 12 time = 162.2554999999999 >>> Latest solution and data are dumped for post-processing. >>> *** STOP *** >>> >>> ------------------------------------------------------------------------------- >>> >>> Quoting nek5000-users at lists.mcs.anl.gov: >>> >>>> George, >>>> >>>> Can you provide more details? A logfile would be helpful. >>>> >>>> Cheers, >>>> Stefan >>> >>> >>> ---------------------------------------------------------------- >>> This message was sent using IMP, the Internet Messaging Program.
>>> _______________________________________________ >>> Nek5000-users mailing list >>> Nek5000-users at lists.mcs.anl.gov >>> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users >>> >> _______________________________________________ >> Nek5000-users mailing list >> Nek5000-users at lists.mcs.anl.gov >> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users >> > > > > ---------------------------------------------------------------- > This message was sent using IMP, the Internet Messaging Program. > _______________________________________________ > Nek5000-users mailing list > Nek5000-users at lists.mcs.anl.gov > https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users From nek5000-users at lists.mcs.anl.gov Thu Jan 19 09:42:59 2012 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Thu, 19 Jan 2012 16:42:59 +0100 Subject: [Nek5000-users] Failure with TORDER = 3 (P027) In-Reply-To: References: <20111207145358.57299dpe5pezsqgm@www.mech.kth.se> <20111207163805.11226tijj6k67nfh@www.mech.kth.se> <20111209150722.80955p873qx25fyy@www.mech.kth.se> <20120119163649.20565e8hnalboa6p@www.mech.kth.se> Message-ID: <20120119164259.13191840zlrruqrn@www.mech.kth.se> Hi Paul, Yes, all is ok, and thanks for the help. George Quoting nek5000-users at lists.mcs.anl.gov: > > Thank you George - so do I interpret this to mean that > all is ok? > > Paul > > > On Thu, 19 Jan 2012, nek5000-users at lists.mcs.anl.gov wrote: > >> >> Dear Neks, >> >> I am using the 758 version of the code with the new full-restart >> option, and my pipe flow simulation runs quite fine with TORDER = >> 3, and the projection parameters, i.e. P094 and P095 set to >> non-zero values. >> >> Regards >> George >> >> >> Quoting nek5000-users at lists.mcs.anl.gov: >> >>> George, >>> >>> Can you do a run using single 4 byte .fXXXXX file using just one >>> IO-node. Also, turn off the characteristics scheme (IFCHAR). Then try >>> to do a restart again. >>> >>> Cheers, >>> Stefan >>> >>> On 12/9/11, nek5000-users at lists.mcs.anl.gov >>> wrote: >>>> >>>> Hi Stefan, >>>> >>>> Here is a part of a file I obtained from a run that failed with >>>> TORDER = 3. >>>> >>>> Regards >>>> George >>>> >>>> ------------------------------------------------------------------------------- >>>> 118 Parameters from file:/ >>>> 1 1.00000 P001: DENSITY >>>> 2 -9500. P002: VISCOS >>>> 7 1.00000 P007: RHOCP >>>> 8 1.00000 P008: CONDUCT >>>> 11 500.0 P011: NSTEPS >>>> 12 -5.000E-04 P012: DT >>>> 15 500.00 P015: IOSTEP >>>> 17 1.00000 P017: >>>> 18 0.500000E-01 P018: GRID < 0 --> # cells on screen >>>> 19 -1.00000 P019: INTYPE >>>> 20 10.0000 P020: NORDER >>>> 21 0.100000E-05 P021: DIVERGENCE >>>> 22 9.920000E-08 P022: HELMHOLTZ >>>> 24 0.100000E-01 P024: TOLREL >>>> 25 0.100000E-01 P025: TOLABS >>>> 26 1.00000 P026: COURANT/NTAU >>>> 27 3.00000 P027: TORDER >>>> 28 0.00000 P028: TORDER: mesh velocity (0: p28=p27) >>>> 54 -3.00000 P054: fixed flow rate dir: |p54|=1,2,3=x,y,z >>>> 55 1.00000 P055: vol.flow rate (p54>0) or Ubar (p54<0) >>>> 63 8.00000 P063: =8 --> force 8-byte output >>>> 65 6.00000 P065: #iofiles (eg, 0 or 64); <0 --> sep. 
dirs >>>> 66 6.00000 P066: output : <0=ascii, else binary >>>> 67 6.00000 P067: restart: <0=ascii, else binary >>>> 68 500.00 P068: iastep: freq for avg_all (0=iostep) >>>> 69 50000.0 P069: : : frequency of srf dump >>>> 93 20.0000 P093: Number of previous pressure solns saved >>>> 99 3.00000 P099: dealiasing: <0--> off/3--> old/4--> new >>>> 102 1.00000 P102: Dump out divergence at each time step >>>> 103 0.05000 P103: weight of stabilizing filter (.01) >>>> >>>> IFTRAN = T >>>> IFFLOW = T >>>> IFHEAT = F >>>> IFSPLIT = F >>>> IFLOMACH = F >>>> IFUSERVP = F >>>> IFUSERMV = F >>>> IFSTRS = F >>>> IFCHAR = T >>>> IFCYCLIC = F >>>> IFAXIS = F >>>> IFMVBD = F >>>> IFMELT = F >>>> IFMODEL = F >>>> IFKEPS = F >>>> IFMOAB = F >>>> IFNEKNEK = F >>>> IFSYNC = T >>>> >>>> IFVCOR = T >>>> IFINTQ = F >>>> IFCWUZ = F >>>> IFSWALL = F >>>> IFGEOM = F >>>> IFSURT = F >>>> IFWCNO = F >>>> >>>> IFTMSH for field 1 = F >>>> IFADVC for field 1 = T >>>> IFNONL for field 1 = F >>>> >>>> Dealiasing enabled, lxd= 12 >>>> >>>> Estimated eigenvalues >>>> EIGAA = 1.650197855862139 >>>> EIGGA = 71694413.86227663 >>>> EIGAE = 1.5791367041742943E-002 >>>> EIGAS = 7.9744816586921753E-004 >>>> EIGGE = 71694413.86227663 >>>> EIGGS = 2.000000000000000 >>>> >>>> verify mesh topology >>>> -1.000000000000000 1.000000000000000 Xrange >>>> -1.000000000000000 1.000000000000000 Yrange >>>> 0.000000000000000 25.00000000000002 Zrange >>>> done :: verify mesh topology >>>> >>>> E-solver strategy: 1 itr >>>> mg_nx: 1 5 7 >>>> mg_ny: 1 5 7 >>>> mg_nz: 1 5 7 >>>> call usrsetvert >>>> done :: usrsetvert >>>> >>>> gs_setup: 277536 unique labels shared >>>> pairwise times (avg, min, max): 0.000236133 0.000198293 0.000261211 >>>> crystal router : 0.000244021 0.000238085 0.00025022 >>>> used all_to_all method: crystal router >>>> setupds time 2.1331E-02 seconds 1 2 875808 853632 >>>> setvert3d: 4 16416864 23245920 16416864 16416864 >>>> call usrsetvert >>>> done :: usrsetvert >>>> >>>> gs_setup: 2635744 unique labels shared >>>> pairwise times (avg, min, max): 0.0004331 0.000362206 0.000494504 >>>> crystal router : 0.001126 0.0011049 0.00114682 >>>> used all_to_all method: pairwise >>>> setupds time 1.9091E-01 seconds 2 4 16416864 853632 >>>> setvert3d: 6 52620192 107252640 52620192 52620192 >>>> call usrsetvert >>>> done :: usrsetvert >>>> >>>> gs_setup: 7399328 unique labels shared >>>> pairwise times (avg, min, max): 0.000524018 0.000427318 0.000591493 >>>> crystal router : 0.00345075 0.0033884 0.0035347 >>>> used all_to_all method: pairwise >>>> setupds time 5.6218E-01 seconds 3 6 52620192 853632 >>>> setvert3d: 8 109485792 293870304 109485792 109485792 >>>> call usrsetvert >>>> done :: usrsetvert >>>> >>>> gs_setup: 14568288 unique labels shared >>>> pairwise times (avg, min, max): 0.00098448 0.000790119 0.00117922 >>>> crystal router : 0.00694697 0.00683801 0.00708301 >>>> used all_to_all method: pairwise >>>> setupds time 1.4705E+00 seconds 4 8 109485792 853632 >>>> setup h1 coarse grid, nx_crs= 2 >>>> call usrsetvert >>>> done :: usrsetvert >>>> >>>> gs_setup: 277536 unique labels shared >>>> pairwise times (avg, min, max): 0.000271898 0.000193095 0.000345087 >>>> crystal router : 0.000370127 0.000366497 0.000374007 >>>> used all_to_all method: pairwise >>>> done :: setup h1 coarse grid 562.8824191093445 sec >>>> >>>> call usrdat3 >>>> done :: usrdat3 >>>> >>>> set initial conditions >>>> Checking restart options: pipe?.f00001 >>>> Reading checkpoint data >>>> 0 0 OPEN: pipe0.f00001 >>>> byte swap: F 6.543210 -2.9312772E+35 >>>> 
850 0 OPEN: pipe5.f00001 >>>> 510 0 OPEN: pipe3.f00001 >>>> 170 0 OPEN: pipe1.f00001 >>>> 1020 0 OPEN: pipe6.f00001 >>>> 680 0 OPEN: pipe4.f00001 >>>> 340 0 OPEN: pipe2.f00001 >>>> >>>> 0 1.6225E+02 done :: Read checkpoint data >>>> avg data-throughput = -65.6MBps >>>> io-nodes = 6 >>>> >>>> xyz min -1.0000 -1.0000 0.0000 >>>> uvwpt min -0.43349 -0.45564 -0.77820E-01 0.69058E+08 0.0000 >>>> xyz max 1.0000 1.0000 25.000 >>>> uvwpt max 0.44557 0.38210 1.4216 0.69058E+08 0.0000 >>>> Restart: recompute geom. factors. >>>> regenerate geomerty data 1 >>>> vol_t,vol_v: 78.53976641971477 78.53976641971477 >>>> done :: regenerate geomerty data 1 >>>> >>>> done :: set initial conditions >>>> >>>> call userchk >>>> done :: userchk >>>> >>>> gridpoints unique/tot: 293870304 437059584 >>>> dofs: 291725280 184384512 >>>> >>>> Initial time: 0.1622500E+03 >>>> Initialization successfully completed 616.10 sec >>>> >>>> Starting time loop ... >>>> >>>> DT/DTCFL/DTFS/DTINIT 0.500E-03 0.494-323 0.299-316 0.500E-03 >>>> Step 1, t= 1.6225050E+02, DT= 5.0000000E-04, C= 0.251 0.0000E+00 >>>> 0.0000E+00 >>>> Solving for fluid >>>> 9.9200000000000002E-008 p22 1 1 >>>> 1 1 Helmholtz VELX F: 1.0654E+00 9.9200E-08 1.0526E-04 >>>> 2.0000E+03 >>>> 1 2 Helmholtz VELX F: 1.5163E-02 9.9200E-08 1.0526E-04 >>>> 2.0000E+03 >>>> 1 3 Helmholtz VELX F: 1.6029E-03 9.9200E-08 1.0526E-04 >>>> 2.0000E+03 >>>> 1 4 Helmholtz VELX F: 3.9700E-04 9.9200E-08 1.0526E-04 >>>> 2.0000E+03 >>>> 1 5 Helmholtz VELX F: 1.4559E-04 9.9200E-08 1.0526E-04 >>>> 2.0000E+03 >>>> 1 6 Helmholtz VELX F: 4.8307E-05 9.9200E-08 1.0526E-04 >>>> 2.0000E+03 >>>> 1 7 Helmholtz VELX F: 1.5822E-05 9.9200E-08 1.0526E-04 >>>> 2.0000E+03 >>>> 1 8 Helmholtz VELX F: 4.7557E-06 9.9200E-08 1.0526E-04 >>>> 2.0000E+03 >>>> 1 9 Helmholtz VELX F: 1.4659E-06 9.9200E-08 1.0526E-04 >>>> 2.0000E+03 >>>> 1 10 Helmholtz VELX F: 5.6372E-07 9.9200E-08 1.0526E-04 >>>> 2.0000E+03 >>>> 1 11 Helmholtz VELX F: 1.6238E-07 9.9200E-08 1.0526E-04 >>>> 2.0000E+03 >>>> 1 12 Helmholtz VELX F: 4.9454E-08 9.9200E-08 1.0526E-04 >>>> 2.0000E+03 >>>> 1 Hmholtz VELX: 11 4.9454E-08 1.0654E+00 9.9200E-08 >>>> 9.9200000000000002E-008 p22 1 1 >>>> 1 1 Helmholtz VELY F: 1.0592E+00 9.9200E-08 1.0526E-04 >>>> 2.0000E+03 >>>> 1 2 Helmholtz VELY F: 1.5106E-02 9.9200E-08 1.0526E-04 >>>> 2.0000E+03 >>>> 1 3 Helmholtz VELY F: 1.6168E-03 9.9200E-08 1.0526E-04 >>>> 2.0000E+03 >>>> 1 4 Helmholtz VELY F: 3.9446E-04 9.9200E-08 1.0526E-04 >>>> 2.0000E+03 >>>> 1 5 Helmholtz VELY F: 1.4562E-04 9.9200E-08 1.0526E-04 >>>> 2.0000E+03 >>>> 1 6 Helmholtz VELY F: 4.9132E-05 9.9200E-08 1.0526E-04 >>>> 2.0000E+03 >>>> 1 7 Helmholtz VELY F: 1.5898E-05 9.9200E-08 1.0526E-04 >>>> 2.0000E+03 >>>> 1 8 Helmholtz VELY F: 4.7011E-06 9.9200E-08 1.0526E-04 >>>> 2.0000E+03 >>>> 1 9 Helmholtz VELY F: 1.4592E-06 9.9200E-08 1.0526E-04 >>>> 2.0000E+03 >>>> 1 10 Helmholtz VELY F: 5.6658E-07 9.9200E-08 1.0526E-04 >>>> 2.0000E+03 >>>> 1 11 Helmholtz VELY F: 1.6209E-07 9.9200E-08 1.0526E-04 >>>> 2.0000E+03 >>>> 1 12 Helmholtz VELY F: 4.8705E-08 9.9200E-08 1.0526E-04 >>>> 2.0000E+03 >>>> 1 Hmholtz VELY: 11 4.8705E-08 1.0592E+00 9.9200E-08 >>>> 9.9200000000000002E-008 p22 1 1 >>>> 1 1 Helmholtz VELZ F: 9.0867E-01 9.9200E-08 1.0526E-04 >>>> 2.0000E+03 >>>> 1 2 Helmholtz VELZ F: 1.5203E-02 9.9200E-08 1.0526E-04 >>>> 2.0000E+03 >>>> 1 3 Helmholtz VELZ F: 2.3594E-03 9.9200E-08 1.0526E-04 >>>> 2.0000E+03 >>>> 1 4 Helmholtz VELZ F: 5.4341E-04 9.9200E-08 1.0526E-04 >>>> 2.0000E+03 >>>> 1 5 Helmholtz VELZ F: 1.9420E-04 9.9200E-08 1.0526E-04 >>>> 2.0000E+03 
>>>> 1 6 Helmholtz VELZ F: 6.9938E-05 9.9200E-08 1.0526E-04 >>>> 2.0000E+03 >>>> 1 7 Helmholtz VELZ F: 2.1336E-05 9.9200E-08 1.0526E-04 >>>> 2.0000E+03 >>>> 1 8 Helmholtz VELZ F: 6.4972E-06 9.9200E-08 1.0526E-04 >>>> 2.0000E+03 >>>> 1 9 Helmholtz VELZ F: 2.1068E-06 9.9200E-08 1.0526E-04 >>>> 2.0000E+03 >>>> 1 10 Helmholtz VELZ F: 7.2366E-07 9.9200E-08 1.0526E-04 >>>> 2.0000E+03 >>>> 1 11 Helmholtz VELZ F: 2.2873E-07 9.9200E-08 1.0526E-04 >>>> 2.0000E+03 >>>> 1 12 Helmholtz VELZ F: 6.6523E-08 9.9200E-08 1.0526E-04 >>>> 2.0000E+03 >>>> 1 Hmholtz VELZ: 11 6.6523E-08 9.0867E-01 9.9200E-08 >>>> 1 1.00000E-06 8.15804E-04 1.72021E-03 4.74246E-01 1 Divergence >>>> 2 1.00000E-06 4.25351E-04 1.72021E-03 2.47267E-01 1 Divergence >>>> 3 1.00000E-06 2.20250E-04 1.72021E-03 1.28037E-01 1 Divergence >>>> 4 1.00000E-06 1.10200E-04 1.72021E-03 6.40617E-02 1 Divergence >>>> 5 1.00000E-06 6.66356E-05 1.72021E-03 3.87369E-02 1 Divergence >>>> 6 1.00000E-06 4.55137E-05 1.72021E-03 2.64582E-02 1 Divergence >>>> 7 1.00000E-06 3.45979E-05 1.72021E-03 2.01126E-02 1 Divergence >>>> 8 1.00000E-06 2.74987E-05 1.72021E-03 1.59856E-02 1 Divergence >>>> 9 1.00000E-06 2.25703E-05 1.72021E-03 1.31207E-02 1 Divergence >>>> 10 1.00000E-06 1.84355E-05 1.72021E-03 1.07170E-02 1 Divergence >>>> 11 1.00000E-06 1.51102E-05 1.72021E-03 8.78394E-03 1 Divergence >>>> 12 1.00000E-06 1.23753E-05 1.72021E-03 7.19407E-03 1 Divergence >>>> 13 1.00000E-06 9.99015E-06 1.72021E-03 5.80751E-03 1 Divergence >>>> 14 1.00000E-06 7.91532E-06 1.72021E-03 4.60136E-03 1 Divergence >>>> 15 1.00000E-06 6.25368E-06 1.72021E-03 3.63541E-03 1 Divergence >>>> 16 1.00000E-06 4.91692E-06 1.72021E-03 2.85832E-03 1 Divergence >>>> 17 1.00000E-06 3.87115E-06 1.72021E-03 2.25039E-03 1 Divergence >>>> 18 1.00000E-06 3.04686E-06 1.72021E-03 1.77121E-03 1 Divergence >>>> 19 1.00000E-06 2.41971E-06 1.72021E-03 1.40663E-03 1 Divergence >>>> 20 1.00000E-06 1.93080E-06 1.72021E-03 1.12242E-03 1 Divergence >>>> 21 1.00000E-06 1.69768E-06 1.72021E-03 9.86902E-04 1 Divergence >>>> 22 1.00000E-06 1.48272E-06 1.72021E-03 8.61940E-04 1 Divergence >>>> 23 1.00000E-06 1.31245E-06 1.72021E-03 7.62959E-04 1 Divergence >>>> 24 1.00000E-06 1.15596E-06 1.72021E-03 6.71990E-04 1 Divergence >>>> 25 1.00000E-06 9.86100E-07 1.72021E-03 5.73243E-04 1 Divergence >>>> 1 U-PRES gmres: 25 9.8610E-07 1.0000E-06 1.7202E-03 >>>> 9.0149E+00 1.6742E+01 >>>> 1 DNORM, DIVEX 9.8609999662049055E-007 >>>> 9.8609999670093433E-007 >>>> 9.9200000000000002E-008 p22 1 1 >>>> 1 1 Helmholtz VELX F: 0.0000E+00 9.9200E-08 1.0526E-04 >>>> 2.0000E+03 >>>> 1 Hmholtz VELX: 0 0.0000E+00 0.0000E+00 9.9200E-08 >>>> 9.9200000000000002E-008 p22 1 1 >>>> 1 1 Helmholtz VELY F: 0.0000E+00 9.9200E-08 1.0526E-04 >>>> 2.0000E+03 >>>> 1 Hmholtz VELY: 0 0.0000E+00 0.0000E+00 9.9200E-08 >>>> ------------------------------------------------------------------------------- >>>> >>>> Quoting nek5000-users at lists.mcs.anl.gov: >>>> >>>>> George, >>>>> >>>>> Can you provide more details? A logfile would be helpful. >>>>> >>>>> Cheers, >>>>> Stefan >>>> >>>> >>>> ---------------------------------------------------------------- >>>> This message was sent using IMP, the Internet Messaging Program. >>>> _______________________________________________ >>>> Nek5000-users mailing list >>>> Nek5000-users at lists.mcs.anl.gov >>>> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users >>>> >>> _______________________________________________ >>> Nek5000-users mailing list >>> Nek5000-users at lists.mcs.anl.gov >>> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users >>> >> >> >> >> ---------------------------------------------------------------- >> This message was sent using IMP, the Internet Messaging Program.
>> _______________________________________________
>> Nek5000-users mailing list
>> Nek5000-users at lists.mcs.anl.gov
>> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users
> _______________________________________________
> Nek5000-users mailing list
> Nek5000-users at lists.mcs.anl.gov
> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users

----------------------------------------------------------------
This message was sent using IMP, the Internet Messaging Program.

From nek5000-users at lists.mcs.anl.gov  Fri Jan 20 03:05:23 2012
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Fri, 20 Jan 2012 09:05:23 +0000
Subject: [Nek5000-users] Add a forcing term on scalar equation
In-Reply-To: References: , Message-ID:

Thanks.
So I just used "USERQ" to implement the forcing term in the scalar field explicitly.

In the next step, I want to couple the scalar field to the Navier-Stokes equation using a forcing term.
In this case, the forcing term is a function of the scalar field and I would like to implement it implicitly.
As far as I know, if I use USERF, the scheme is explicit. So how can I add a forcing term to the N-S equation implicitly?

Iman

________________________________________
From: nek5000-users-bounces at lists.mcs.anl.gov [nek5000-users-bounces at lists.mcs.anl.gov] on behalf of nek5000-users at lists.mcs.anl.gov [nek5000-users at lists.mcs.anl.gov]
Sent: Thursday, January 19, 2012 4:20 PM
To: nek5000-users at lists.mcs.anl.gov
Subject: Re: [Nek5000-users] Add a forcing term on scalar equation

It's actually evaluated explicitly, so you should be ok.

Paul

On Thu, 19 Jan 2012, nek5000-users at lists.mcs.anl.gov wrote:

> Hi
>
> I would like to add a forcing term to the scalar equation.
> According to the Nekton manual, Chapter 5, the forcing term is treated implicitly and the convective term is integrated using the third-order Adams-Bashforth scheme (AB3).
> For my case, the forcing term is a nonlinear function of the scalar field, so I cannot use the implicit scheme and I should lump it with the AB3.
> Therefore, could you please guide me and indicate where in the code I can find the implementation of AB3 for the convective term of the passive scalar?
>
> Cheers
> Iman

_______________________________________________
Nek5000-users mailing list
Nek5000-users at lists.mcs.anl.gov
https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users

From nek5000-users at lists.mcs.anl.gov  Fri Jan 20 07:25:48 2012
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Fri, 20 Jan 2012 07:25:48 -0600 (CST)
Subject: [Nek5000-users] Add a forcing term on scalar equation
In-Reply-To: References: , Message-ID:

Iman,

These can be coupled explicitly, but not implicitly. Usually, however, this is sufficient. The equations are of the form:

   u_t = NS + f(u,T)

   T_t = energy eq. + q(u,T)

f & q explicit. The code will take care of the bookkeeping.

Paul

On Fri, 20 Jan 2012, nek5000-users at lists.mcs.anl.gov wrote:

> Thanks.
> So I just used "USERQ" to implement the forcing term in the scalar field explicitly.
>
> In the next step, I want to couple the scalar field to the Navier-Stokes equation using a forcing term.
> In this case, the forcing term is a function of the scalar field and I would like to implement it implicitly.
> As far as I know, if I use USERF, the scheme is explicit. So how can I add a forcing term to the N-S equation implicitly?
>
> Iman

_______________________________________________
Nek5000-users mailing list
Nek5000-users at lists.mcs.anl.gov
https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users
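A minimal .usr-file sketch of the explicit coupling Paul describes above. The buoyancy-type forcing and the quadratic sink are illustrative assumptions only, not something prescribed in this thread; ffx/ffy/ffz, qvol and temp are the standard NEKUSE variables.

c-----------------------------------------------------------------------
      subroutine userf(ix,iy,iz,ieg)      ! f(u,T) in  u_t = NS + f
      include 'SIZE'
      include 'NEKUSE'

      ffx = 0.0
      ffy = temp     ! e.g. Boussinesq-type buoyancy; evaluated explicitly
      ffz = 0.0

      return
      end
c-----------------------------------------------------------------------
      subroutine userq(ix,iy,iz,ieg)      ! q(u,T) in  T_t = energy eq. + q
      include 'SIZE'
      include 'NEKUSE'

      qvol = -temp*temp    ! illustrative nonlinear sink; also explicit
      return
      end
c-----------------------------------------------------------------------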
From nek5000-users at lists.mcs.anl.gov  Fri Jan 20 09:20:19 2012
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Fri, 20 Jan 2012 15:20:19 +0000
Subject: [Nek5000-users] Add a forcing term on scalar equation
In-Reply-To: References: , , Message-ID:

Paul,

Numerically, the momentum equation has an explicit forcing term; only, it is based on the T at the same time level. However, I would like to first solve the equation for the scalar.

In other words, I want to first solve the equation for T, use it to calculate f(T), and afterward solve the momentum equation ( u_t = NS + f(T) ). So how can I do that?

Iman

________________________________________
From: nek5000-users-bounces at lists.mcs.anl.gov [nek5000-users-bounces at lists.mcs.anl.gov] on behalf of nek5000-users at lists.mcs.anl.gov [nek5000-users at lists.mcs.anl.gov]
Sent: Friday, January 20, 2012 2:25 PM
To: nek5000-users at lists.mcs.anl.gov
Subject: Re: [Nek5000-users] Add a forcing term on scalar equation

Iman,

These can be coupled explicitly, but not implicitly. Usually, however, this is sufficient. The equations are of the form:

   u_t = NS + f(u,T)

   T_t = energy eq. + q(u,T)

f & q explicit. The code will take care of the bookkeeping.

Paul
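Written out schematically, the ordering Iman is asking for would be the following (this is an illustration of one possible time-stepping, not a statement of what the code actually does internally):

   step n -> n+1:
   (1) scalar:    T^(n+1)  from  T_t = energy eq. + q,  with the
                  velocity taken from previous steps (lagged or
                  extrapolated);
   (2) momentum:  u^(n+1)  from  u_t = NS + f(T),  with f assembled
                  in userf from the newest available T and treated
                  explicitly (EXTk), as Paul notes above.

If the scalar really is advanced before the velocity within each step, the temp seen by userf at step n+1 would already be T^(n+1).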
_______________________________________________
Nek5000-users mailing list
Nek5000-users at lists.mcs.anl.gov
https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users

From nek5000-users at lists.mcs.anl.gov  Fri Jan 20 10:12:21 2012
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Fri, 20 Jan 2012 10:12:21 -0600 (CST)
Subject: [Nek5000-users] Add a forcing term on scalar equation
In-Reply-To: References: , , Message-ID:

Hi Iman,

There are some minor bookkeeping issues here...

Just to make certain we're on the same page, are you currently using Pn-Pn-2, or Pn-Pn ?

Paul

On Fri, 20 Jan 2012, nek5000-users at lists.mcs.anl.gov wrote:

> Paul,
>
> Numerically, the momentum equation has an explicit forcing term; only, it
> is based on the T at the same time level. However, I would like to first
> solve the equation for the scalar.
>
> In other words, I want to first solve the equation for T, use it to
> calculate f(T), and afterward solve the momentum equation
> ( u_t = NS + f(T) ). So how can I do that?
>
> Iman
> _______________________________________________
> Nek5000-users mailing list
> Nek5000-users at lists.mcs.anl.gov
> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users

From nek5000-users at lists.mcs.anl.gov  Fri Jan 20 10:24:57 2012
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Fri, 20 Jan 2012 17:24:57 +0100
Subject: [Nek5000-users] Add a forcing term on scalar equation
In-Reply-To: References: Message-ID:

Hi Iman,

I don't know what type of system you are trying to solve, but I want to draw your attention to the following option:

You can use the PnPn formulation. First, a non-linear coupled scalar system is solved using CVODE (a stiffly stable integrator). Then, the hydrodynamic equations (velocity and pressure) are solved. The two systems are decoupled using a high-order splitting approach.

We do the same for chemically reactive flows.

Cheers,
Stefan

On 1/20/12, nek5000-users at lists.mcs.anl.gov wrote:
>
> Hi Iman,
>
> There are some minor bookkeeping issues here...
>
> Just to make certain we're on the same page, are you currently
> using Pn-Pn-2, or Pn-Pn ?
>
> Paul
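One possible reading of the splitting Stefan describes, written out per time step (the extrapolated velocity u* and the exact ordering are assumptions here; his message does not spell them out):

   (1) CVODE integrates the coupled scalar system
          dc/dt = Nabla^2 c + NL(u*,c)
       implicitly over [t_n, t_(n+1)], with u* built from the velocity
       history -- this is what makes stiff, nonlinear scalar coupling
       tractable;
   (2) the velocity/pressure system is then advanced by the PnPn
       splitting scheme, with f(c^(n+1)) available as an explicit
       forcing.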
_______________________________________________
Nek5000-users mailing list
Nek5000-users at lists.mcs.anl.gov
https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users

From nek5000-users at lists.mcs.anl.gov  Mon Jan 23 04:15:06 2012
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Mon, 23 Jan 2012 10:15:06 +0000
Subject: [Nek5000-users] Add a forcing term on scalar equation
In-Reply-To: References: , Message-ID:

Dear Stefan,

We are solving a system like this:

   Dc/Dt = Nabla^2 c + NL(u,c)

   DU/Dt = Nabla^2 u - grad p + f(c),

where we want to use the c just computed. It's okay to solve the scalar equation explicitly, according to previous tests with another fully spectral (Fourier) code. But it's difficult to get stable results if the momentum equation is fully explicit.

Two questions: would it be a problem to use Pn-Pn? Would it be slower or less accurate?

In the code, can we change the forcing term to be implicit, i.e., use Crank-Nicolson for f(c) (f contains derivatives)? We understood that userf adds terms to the AB part, while we want something implicit.

Cheers
Iman

________________________________________
From: nek5000-users-bounces at lists.mcs.anl.gov [nek5000-users-bounces at lists.mcs.anl.gov] on behalf of nek5000-users at lists.mcs.anl.gov [nek5000-users at lists.mcs.anl.gov]
Sent: Friday, January 20, 2012 5:24 PM
To: nek5000-users at lists.mcs.anl.gov
Subject: Re: [Nek5000-users] Add a forcing term on scalar equation

Hi Iman,

I don't know what type of system you are trying to solve, but I want to draw your attention to the following option:

You can use the PnPn formulation. First, a non-linear coupled scalar system is solved using CVODE (a stiffly stable integrator). Then, the hydrodynamic equations (velocity and pressure) are solved. The two systems are decoupled using a high-order splitting approach.

We do the same for chemically reactive flows.

Cheers,
Stefan

_______________________________________________
Nek5000-users mailing list
Nek5000-users at lists.mcs.anl.gov
https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users
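For concreteness, the two treatments of the forcing that Iman contrasts look schematically like this (constant-step EXT3 weights shown; the actual coefficients depend on the step-size history):

   explicit, what userf feeds into the extrapolation machinery:
      f* = 3 f(c^n) - 3 f(c^(n-1)) + f(c^(n-2))

   Crank-Nicolson on the forcing alone:
      f* = ( f(c^(n+1)) + f(c^n) ) / 2

Since c^(n+1) is computed before the momentum solve in the splitting sketched above, the Crank-Nicolson average is at least computable; whether it can be injected cleanly past the code's own extrapolation bookkeeping is exactly the question left open here.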
From nek5000-users at lists.mcs.anl.gov  Mon Jan 23 18:08:22 2012
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Mon, 23 Jan 2012 19:08:22 -0500
Subject: [Nek5000-users] Lyapunov Exponents
Message-ID: <20120123190822.10765gj9bpojerna@webmail.vt.edu>

Hi,

I want to compute the perturbation vectors and Lyapunov exponents in a simulation of Rayleigh-Benard convection, but I realized that the "computelyap" subroutine in perturb.f is never called anywhere in the code. I wonder if you could help me handle the calculation of the Lyapunov exponents and store the corresponding perturbation fld files and the lyp file.

Regards,
Alireza

From nek5000-users at lists.mcs.anl.gov  Mon Jan 23 18:17:04 2012
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Mon, 23 Jan 2012 18:17:04 -0600 (CST)
Subject: [Nek5000-users] Lyapunov Exponents
In-Reply-To: <20120123190822.10765gj9bpojerna@webmail.vt.edu>
References: <20120123190822.10765gj9bpojerna@webmail.vt.edu>
Message-ID:

Hi Alireza,

this routine was donated by some of our users and is definitely on the experimental side of things... My suggestion would be to look carefully at it and see if you can understand it. As I recall, a key point was being able to come up with a meaningful norm when both velocity and temperature are involved in the eigenvector.

Paul

On Mon, 23 Jan 2012, nek5000-users at lists.mcs.anl.gov wrote:

> Hi,
>
> I want to compute the perturbation vectors and Lyapunov exponents in a
> simulation of Rayleigh-Benard convection, but I realized that the
> "computelyap" subroutine in perturb.f is never called anywhere in the
> code. I wonder if you could help me handle the calculation of the
> Lyapunov exponents and store the corresponding perturbation fld files
> and the lyp file.
>
> Regards,
> Alireza

_______________________________________________
Nek5000-users mailing list
Nek5000-users at lists.mcs.anl.gov
https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users
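Until computelyap is wired in, a Benettin-style running estimate can be scripted from userchk. Everything below is a sketch under assumptions: that the case runs in perturbation mode with vxp/vyp/vzp holding the perturbation velocity, that a mass-matrix-weighted energy norm of the velocity perturbation alone is acceptable (it sidesteps the velocity-plus-temperature norm issue Paul raises), and that accumulation starts at t = 0.

c-----------------------------------------------------------------------
      subroutine userchk
      include 'SIZE'
      include 'TOTAL'

      real lsum, r0
      save lsum, r0
      data lsum /0.0/, r0 /1.0e-6/  ! r0: norm the perturbation is reset to

      n = nx1*ny1*nz1*nelv

c     energy norm of the perturbation (velocity part only)
      enorm = glsc3(vxp,bm1,vxp,n) + glsc3(vyp,bm1,vyp,n)
     $      + glsc3(vzp,bm1,vzp,n)
      pnorm = sqrt(enorm)

c     accumulate the log growth, then rescale back to r0 (Benettin et al.)
      if (pnorm.gt.0.) then
         lsum  = lsum + log(pnorm/r0)
         scale = r0/pnorm
         call cmult(vxp,scale,n)
         call cmult(vyp,scale,n)
         call cmult(vzp,scale,n)
         if (nid.eq.0 .and. istep.gt.0) write(6,1) istep, time, lsum/time
    1    format(i9,1p2e14.6,'  running Lyapunov estimate')
      endif

      return
      end
c-----------------------------------------------------------------------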
From nek5000-users at lists.mcs.anl.gov  Mon Jan 23 20:00:10 2012
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Mon, 23 Jan 2012 21:00:10 -0500
Subject: [Nek5000-users] Most up to date build instructions?
Message-ID: <4F1E10AA.3000107@ornl.gov>

Everyone,

I recently downloaded Nek5000 and tried to build it following the instructions on the wiki, but it failed while running makenek. From looking at the script, it appears that makenek no longer functions in the way described on the wiki. Are there updated instructions or a binary build available?

Building on 64-bit RHEL-6.

Jay

From nek5000-users at lists.mcs.anl.gov  Mon Jan 23 21:19:51 2012
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Mon, 23 Jan 2012 21:19:51 -0600 (CST)
Subject: [Nek5000-users] Most up to date build instructions?
In-Reply-To: <4F1E10AA.3000107@ornl.gov>
References: <4F1E10AA.3000107@ornl.gov>
Message-ID:

Hi Jay,

What was the failure stmt?

Did you modify the source path in makenek to point to the location of your source?

I just checked out the latest version and it seemed to work fine. I modified the path since, in my case, I did not have this newest version sitting in $HOME/nek5_svn/...

Please let me know if these suggestions help.

Regards,

Paul

On Mon, 23 Jan 2012, nek5000-users at lists.mcs.anl.gov wrote:

> Everyone,
>
> I recently downloaded Nek5000 and tried to build it following the
> instructions on the wiki, but it failed while running makenek. From
> looking at the script, it appears that makenek no longer functions in
> the way described on the wiki. Are there updated instructions or a
> binary build available?
>
> Building on 64-bit RHEL-6.
>
> Jay

_______________________________________________
Nek5000-users mailing list
Nek5000-users at lists.mcs.anl.gov
https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users

From nek5000-users at lists.mcs.anl.gov  Tue Jan 24 02:08:54 2012
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Tue, 24 Jan 2012 09:08:54 +0100
Subject: [Nek5000-users] Help us to write a tutorial
In-Reply-To: References: Message-ID: <1327392534.5134.16.camel@skagsnebb.mech.kth.se>

Hi Stefan,

We have quite a number of Nekton users in Stockholm, so I think we could help with writing the tutorial. Have you already started to work on it? How can we help you?

Best regards
Adam

On Thu, 2011-12-29 at 13:45 +0100, nek5000-users at lists.mcs.anl.gov wrote:
> Dear Nek users,
>
> We are looking for help from the community to provide a tutorial (from
> users to users) to get new Nek users started. The tutorial will be
> published on our Wiki. A good starting point is the Lid-driven cavity
> flow tutorial of OpenFOAM:
> http://www.openfoam.org/docs/user/tutorials.php
>
> You can get yourself a Wiki account to set up the tutorial:
> http://nek5000.mcs.anl.gov/index.php?title=LidDrivenCavity&action=edit&redlink=1
>
> Thank you very much for your support!
> Stefan

_______________________________________________
Nek5000-users mailing list
Nek5000-users at lists.mcs.anl.gov
https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users

From nek5000-users at lists.mcs.anl.gov  Thu Jan 26 13:19:40 2012
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Thu, 26 Jan 2012 14:19:40 -0500
Subject: [Nek5000-users] Most up to date build instructions?
In-Reply-To: References: <4F1E10AA.3000107@ornl.gov>
Message-ID: <4F21A74C.10708@ornl.gov>

Paul,

Thanks for getting back to me on this and sorry for the slow response.

I had not modified the source path, and fixing that at least got makenek started. I am now getting a message about incorrect usage.
Running the following command from a build directory just below the source fails:

   ../makenek ../zero
   makenek - automatic build tool for Nek5000
   FATAL ERROR: Cannot find SIZE!

Any thoughts?

Jay

On 01/23/2012 10:19 PM, nek5000-users at lists.mcs.anl.gov wrote:
> [Paul's reply and the quoted thread snipped; see above]

From nek5000-users at lists.mcs.anl.gov Thu Jan 26 13:27:01 2012
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Thu, 26 Jan 2012 13:27:01 -0600 (CST)
Subject: [Nek5000-users] Most up to date build instructions?
In-Reply-To: <4F21A74C.10708@ornl.gov>
References: <4F1E10AA.3000107@ornl.gov> <4F21A74C.10708@ornl.gov>
Message-ID:

Hi Jay,

Yes .. you need SIZE in your working directory. Each of the examples should have this... I recommend copying the entire directory contents from an example into your working directory.

Paul

On Thu, 26 Jan 2012, nek5000-users at lists.mcs.anl.gov wrote:
> [Jay's message and the quoted thread snipped; see above]
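To make Paul's suggestion concrete, here is a minimal sketch of setting up a fresh working directory from one of the shipped examples; the commands combine Paul's advice with the ones Aleks gives in the next message, and the destination directory $HOME/mycase is purely illustrative:

   # copy an example (including its SIZE, .rea, .usr, .map files)
   cp -r $HOME/nek5_svn/examples/eddy $HOME/mycase
   cd $HOME/mycase
   # bring in the build script and build against the case name
   cp $HOME/nek5_svn/trunk/nek/makenek .
   ./makenek eddy_uv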
From nek5000-users at lists.mcs.anl.gov Thu Jan 26 13:31:22 2012
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Thu, 26 Jan 2012 13:31:22 -0600 (CST)
Subject: [Nek5000-users] Most up to date build instructions?
In-Reply-To: <4F21A74C.10708@ornl.gov>
Message-ID: <148039255.182910.1327606282117.JavaMail.root@zimbra.anl.gov>

Hi Jay,

Try compiling one of the examples, like examples/eddy, meaning:

   cd nek5_svn/examples/eddy
   cp nek5_svn/trunk/nek/makenek .
   ./makenek eddy_uv

Best.
Aleks

----- Original Message -----
From: nek5000-users at lists.mcs.anl.gov
To: nek5000-users at lists.mcs.anl.gov
Sent: Thursday, January 26, 2012 1:19:40 PM
Subject: Re: [Nek5000-users] Most up to date build instructions?

[Jay's message and the quoted thread snipped; see above]

From nek5000-users at lists.mcs.anl.gov Mon Jan 30 04:23:40 2012
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Mon, 30 Jan 2012 11:23:40 +0100
Subject: [Nek5000-users] Running NEK on CRAY; compilation & running issues
Message-ID: <4F266FAC.1080103@iag.uni-stuttgart.de>

Dear NEKs,

I'm currently trying to compile NEK via crayftn to run on an XE6 and I run into some issues. Some compilation problems I solved (with the kind help of CRAY personnel). I'll mention them here as they can be part of the problem, I assume.

1) In makenek.inc, changed

      *ftn*) P="-r8 -Mpreprocess"

   to

      *ftn*) P="-s real64 -eZ -em"

   for double-precision reals and to invoke the preprocessor
   (Cray uses the ftn command as a wrapper).

2) Changed the subroutine name 'gsync' to 'gsync_nek' in all source
   files, as crayftn has a built-in routine of the same name that
   conflicts with Nek's gsync.

3) Changed calls to subroutine 'multd' from

      CALL MULTD (TA1,TVX,RXM2,SXM2,TXM2,1)

   to

      CALL MULTD (TA1,TVX,RXM2,SXM2,TXM2,1,0)

   in navier1.f, as crayftn complains about a missing seventh
   argument 'iflg' (the calls come from subroutine 'tmultd', which
   does not seem to be called in the code, though).
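For readers making the same change, a sketch of where the flag substitution in item 1) might sit inside makenek.inc; only the *ftn*) branch and its flags come from the message above, while the surrounding case-statement syntax and the $F77 variable name are assumptions that may differ between source revisions:

   # makenek.inc, compiler-flag selection (sketch; context assumed)
   case $F77 in
       ...
       *ftn*) P="-s real64 -eZ -em"   # Cray ftn: 64-bit reals, run preprocessor
              ;;
       ...
   esac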
In the SIZE file I'm setting lelt=300, lelv=lelt, lp=64, lelg=300.
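As a sketch of what those settings look like in a SIZE file (the parameter names are the ones given above; the complete file has many more parameters, so treat this as a fragment only):

      parameter (lelg = 300)        ! max number of global elements
      parameter (lp   = 64)         ! max number of MPI ranks
      parameter (lelt = 300)        ! max elements per rank
      parameter (lelv = lelt)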
With these changes I'm running the eddy_uv example via 'aprun -n 64 -N 32 ./nek5000' on 64 processors and I'm getting the following output:

   [NEK5000 banner: Open Source Spectral Element Solver,
    COPYRIGHT (c) 2008-2010 UCHICAGO ARGONNE, LLC,
    Version: 1.0rc1 / SVN r730, Web: http://nek5000.mcs.anl.gov]

   Number of processors: 64
   REAL    wdsize : 8
   INTEGER wdsize : 4

   Beginning session:
   /zhome/academic/HLRS/iag/iagoschm/run_nek/eddy_example/eddy_uv.rea

   timer accuracy: 2.8610229E-07 sec

   read .rea file
   nelgt/nelgv/lelt: 256 256 300
   lx1 /lx2 /lx3   : 8 6 8

   mapping elements to processors
   0, 2*4, 2*256 NELV
   1, 2*4, 2*256 NELV
   8, 2*4, 2*256 NELV
   [... one NELV line per rank, ranks 0-63, snipped; tokens such as
    2*4 are Fortran list-directed repeat counts, i.e. "4 4" ...]
   56, 2*4, 2*256 NELV
   25, 0, 2*4, 256 NELT FAIL
   24, 0, 2*4, 256 NELT FAIL
   Check that .map file and .rea file agree
   [... interleaved "NELT FAIL" / "Check that .map file and .rea file
    agree" lines for ranks 0-31, snipped ...]
   2*0, 2*4, 256 NELT FB
   1, 0, 2*4, 256 NELT FB
   [... NELT FB lines for ranks 2-30, snipped ...]
   31, 0, 2*4, 256 NELT FB

   call exitt: dying ...

   backtrace(): obtained 1 stack frames.
   [0x662a40]

   total elapsed time          : 7.82199E-02 sec
   total solver time incl. I/O : 0.00000E+00 sec
   time/timestep               : 0.00000E+00 sec
   CPU seconds/timestep/gridpt : 0.00000E+00 sec

   39, 0, 2*4, 256 NELT FAIL
   38, 0, 2*4, 256 NELT FAIL
   54, 196, 2*4, 256 NELT FAIL
   [... "NELT FAIL" / "Check ..." lines for ranks 32-48, snipped;
    rank 54 is the only one reporting a nonzero second field (196) ...]
   32, 0, 2*4, 256 NELT FB
   [... NELT FB lines for ranks 33-63, snipped ...]
   63, 3*4, 256 NELT FB
   Application 419539 resources: utime ~8s, stime ~13s
Does the code do the element mapping in parallel here? I'm not sure what's happening, because the .map and .rea files do in fact agree. The error seems to occur when processor #25 is mapped for a second time. The same thing happens with my own cases, which were previously running on a similar machine with Nek compiled with PGI.

Any help would be greatly appreciated.

Oliver

From nek5000-users at lists.mcs.anl.gov Mon Jan 30 06:57:13 2012
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Mon, 30 Jan 2012 06:57:13 -0600 (CST)
Subject: [Nek5000-users] Running NEK on CRAY; compilation & running issues
In-Reply-To: <4F266FAC.1080103@iag.uni-stuttgart.de>
References: <4F266FAC.1080103@iag.uni-stuttgart.de>
Message-ID:

Oliver,

Thanks for your comments. We'll certainly take care of the multd() and gsync() issues.

We have what I believe is an XE6 on site, and several users are using it regularly. They may have some comments about the flags, etc. and the issues that you are running into. Hopefully, someone will get back to you early today.

Regards,

Paul

On Mon, 30 Jan 2012, nek5000-users at lists.mcs.anl.gov wrote:
> [Oliver's full message and output quoted; snipped, see above]
From nek5000-users at lists.mcs.anl.gov Mon Jan 30 12:02:23 2012
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Mon, 30 Jan 2012 19:02:23 +0100
Subject: [Nek5000-users] Running NEK on CRAY; compilation & running issues
In-Reply-To: 
References: <4F266FAC.1080103@iag.uni-stuttgart.de>
Message-ID:

Hi Oliver,

Can you try to use the PGI compiler instead of the Cray one? I just want to check whether this is a compiler-specific issue.

Cheers,
Stefan
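As a sketch of how one typically switches compilers on a Cray XE6 before rebuilding; the PrgEnv module names are the usual Cray defaults and the makenek 'clean' step is assumed here, neither comes from this thread, so adjust to your site:

   # assumed Cray defaults; module names vary by site
   module swap PrgEnv-cray PrgEnv-pgi
   ./makenek clean          # 'clean' target assumed; removes old objects
   ./makenek eddy_uv        # ftn/cc now wrap the PGI compilers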
On 1/30/12, nek5000-users at lists.mcs.anl.gov wrote:
> [Paul's reply and the fully quoted thread snipped; see above]

From nek5000-users at lists.mcs.anl.gov Mon Jan 30 13:43:50 2012
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Mon, 30 Jan 2012 13:43:50 -0600 (CST)
Subject: [Nek5000-users] Running NEK on CRAY; compilation & running issues
In-Reply-To: 
Message-ID: <304748520.192593.1327952630495.JavaMail.root@zimbra.anl.gov>

Hi Oliver,

I have been using Cray XE6/XT6 successfully with the PGI compilers, specified in makenek with

   # Fortran compiler
   F77="ftn"
   # C compiler
   CC="cc"

and the submission script I use is below.

Best.
Aleks

   rm *batch*
   echo $1       >  SESSION.NAME
   echo `pwd`'/' >> SESSION.NAME
   touch $1.rea
   rm -f ioinfo
   mv -f $1.log.$2 $1.log1.$2
   mv -f $1.his $1.his1
   mv -f $1.sch $1.sch1
   rm -f logfile
   echo '' > $1.log.$2
   echo "#!/bin/bash"                        >  $1.batch
   echo "#PBS -l mppwidth="$2                >> $1.batch
   echo "#PBS -l walltime="$3":"$4":00"      >> $1.batch
   echo "#PBS -j oe"                         >> $1.batch
   echo cd `pwd`                             >> $1.batch
   echo aprun -n $2 ./nek5000 ">>" $1.log.$2 >> $1.batch
   echo "exit 0;"                            >> $1.batch
   qsub -q batch $1.batch
   sleep 3
   ln $1.log.$2 logfile
   ##
   ##  usage:  neke case cores hours minutes
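As a usage note, assuming the script above is saved as an executable file named neke (per its trailing comment), a hypothetical invocation for the eddy_uv case on 64 cores with a 30-minute walltime would be:

   ./neke eddy_uv 64 0 30   # case=eddy_uv, 64 cores, walltime 0:30:00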
----- Original Message -----
From: nek5000-users at lists.mcs.anl.gov
To: nek5000-users at lists.mcs.anl.gov
Sent: Monday, January 30, 2012 12:02:23 PM
Subject: Re: [Nek5000-users] Running NEK on CRAY; compilation & running issues

[Stefan's reply and the fully quoted thread snipped; see above]

_______________________________________________
Nek5000-users mailing list
Nek5000-users at lists.mcs.anl.gov
https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users