From nek5000-users at lists.mcs.anl.gov Mon Sep 3 09:06:16 2012
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Mon, 3 Sep 2012 14:06:16 +0000
Subject: [Nek5000-users] 2D Axisymmetric in semi-circular annulus
In-Reply-To: References: Message-ID:

Hi Aleks,

I've tested this update and it appears to have solved that problem.
Thanks for looking into this!

Best wishes,
Adrian

> Hi Adrian,
>
> We have committed fixes that resolve the issues you had earlier (revision 856).
>
> Let us know if this works for you.
>
> Best.
> Aleks
>
> ----- Original Message -----
> From: nek5000-users at lists.mcs.anl.gov
> To: nek5000-users at lists.mcs.anl.gov
> Sent: Tuesday, August 28, 2012 2:19:00 AM
> Subject: Re: [Nek5000-users] 2D Axisymmetric in semi-circular annulus
>
> OK --- that does not appear to be the issue... will dig a bit deeper.
>
> Paul
>
> On Tue, 28 Aug 2012, nek5000-users at lists.mcs.anl.gov wrote:
>
>> Hi Adrian,
>>
>> My suspicion is that the issue is somehow related to the SYM bc + curve sides
>> + ifaxis. Generally, SYM on curve sides implies using the stress formulation.
>>
>> At present, I'm not certain of the status of the combination:
>>
>> IFSTRS T
>>
>> IFAXIS T
>>
>> We can look into this as well.
>>
>> Sorry that this has been holding you up.
>>
>> Paul
>>
>> On Mon, 27 Aug 2012, nek5000-users at lists.mcs.anl.gov wrote:
>>
>>> Dear Nek users,
>>>
>>> I have been attempting to set up a 2D axisymmetric problem with an azimuthal
>>> velocity in a semi-circular annulus container (the 2D axisymmetric version
>>> of a spherical shell), but have encountered problems with the terms evolving
>>> the azimuthal velocity blowing up near the axis, and wondered if anybody
>>> would be able to help with this.
>>>
>>> I have looked at the vortex2 example and have copied the relevant aspects to
>>> my .usr and .rea files. I have set IFAXIS=T to get the correct viscous
>>> operators for cylindrical geometry, where (x,y)->(z,R), as well as IFHEAT=T
>>> and IFAZIV=T, using temperature as the azimuthal velocity. My test mesh is
>>> created using prenek and contains 4 elements on a polar grid from r=0.5 to 1
>>> and theta=0 to 180. Since elements 2 & 4 have one side touching the R=0 (y=0)
>>> axis, I have used "A" boundary conditions, which are applied to side 1 of
>>> each of these elements, as is required. The remaining BCs are 'SYM' and 'I'
>>> for all of the exterior sides. The .rea file containing the mesh with 4
>>> elements is attached to this email. (In the SIZE file I have set
>>> lx1=20, lxd=30, lx2=lx1-2.)
>>>
>>> For a simple test, if I set the initial conditions to be a uniform rotation
>>> about the z-axis (setting temp = y), within several time-steps the code blows
>>> up, giving NaN's for each quantity (I have tested various initial conditions,
>>> which each give the same error). The line of code that appears to cause
>>> problems is line ~240 of hmholtz.f,
>>> TERM1 = BM1(I,J,1,IEL)*U(I,J,1,IEL)/YM1(I,J,1,IEL)**2,
>>> since, when this term is set to 0, the code does not produce NaN's. (Though
>>> the viscous operator for the azimuthal velocity is then incorrect...)
>>>
>>> When IFHEAT=IFAZIV=F, i.e. when I am not integrating the
>>> temperature/azimuthal velocity, the code runs without problem on the same
>>> mesh.
>>>
>>> I am wondering if there is a problem with how I have set up the mesh that
>>> causes this. And if so, is there a way to fix it?
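
The failure mode Adrian pinpoints is easy to reproduce in isolation. Below is a standalone sketch (plain Fortran; not Nek5000 source and not the fix committed in revision 856; it assumes that with IFAXIS=T the mass matrix BM1 carries a factor of the radius R) showing why the cited line is a 0/0 at the axis: for solid-body rotation u_theta = Omega*R, the quantity BM1*U/YM1**2 tends to a finite, nonzero limit as R -> 0, so zeroing the axis value -- Adrian's diagnostic workaround -- suppresses the NaN but loses part of the operator:

      program axis_term
c     Standalone illustration of the term at line ~240 of hmholtz.f:
c     TERM1 = BM1*U/YM1**2, evaluated along a radial line.  BM1 is
c     modelled here as R*(unit quadrature weight) -- an assumption
c     about the IFAXIS mass matrix -- and U as solid-body rotation,
c     U = Omega*R with Omega = 1.  For R > 0 the term is identically
c     1; at R = 0 it evaluates as 0/0, which is the NaN source.
      implicit none
      integer i, n
      parameter (n = 5)
      real*8 r(n), u, bm, term1, tol
      data r /0.0d0, 0.25d0, 0.5d0, 0.75d0, 1.0d0/
      tol = 1.0d-14
      do i = 1, n
         u  = r(i)
         bm = r(i)
         if (abs(r(i)) .gt. tol) then
            term1 = bm*u/r(i)**2
         else
c           guard: avoids the NaN, but the analytic limit is 1, not 0,
c           so this reproduces the "incorrect viscous operator" caveat
            term1 = 0.0d0
         endif
         write(*,'(a,f6.3,a,f6.3)') ' R = ', r(i), '   TERM1 = ', term1
      enddo
      end

A proper treatment has to supply the analytic limit at the axis point rather than zero, which is presumably what the committed fix does.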
>>> >>> Thanks, >>> Adrian >>> >>> Modifications to .usr file are: >>> >>> c----------------------------------------------------------------------- >>> subroutine userf (ix,iy,iz,eg) >>> include 'SIZE' >>> include 'TOTAL' >>> include 'NEKUSE' >>> integer e,f,eg >>> if(y.gt.0) ffy = temp*temp/y >>> return >>> end >>> c----------------------------------------------------------------------- >>> subroutine userq (ix,iy,iz,eg) >>> include 'SIZE' >>> include 'TOTAL' >>> include 'NEKUSE' >>> integer e,f,eg >>> if(y.gt.0) then >>> visc = param(2) >>> qvol = -uy*temp/y >>> endif >>> return >>> end >>> c----------------------------------------------------------------------- >>> subroutine useric (ix,iy,iz,ieg) >>> include 'SIZE' >>> include 'TOTAL' >>> include 'NEKUSE' >>> temp = y >>> return >>> end >>> c----------------------------------------------------------------------- >>> >>> My .rea file: >>> >>> ****** PARAMETERS ***** >>> 2.610000 NEKTON VERSION >>> 2 DIMENSIONAL RUN >>> 118 PARAMETERS FOLLOW >>> 1.00000 p001 DENSITY >>> 0.10000E-06 p002 VISCOS >>> 0.00000 p003 >>> 1.00000 p004 >>> 0.00000 p005 >>> 0.00000 p006 >>> 1.00000 p007 RHOCP >>> 0.100000E-06 p008 CONDUCT >>> 0.00000 p009 >>> 0.00000 p010 FINTIME >>> 0.100000E+08 p011 NSTEPS >>> 0.100000E-03 p012 DT >>> 0.00000 p013 IOCOMM >>> 0.00000 p014 IOTIME >>> 1000.00 p015 IOSTEP >>> 0.00000 p016 PSSOLVER: 0=default >>> 1.00000 p017 >>> 0.500000E-01 p018 GRID < 0 --> # cells on screen >>> -1.00000 p019 INTYPE >>> 5.00000 p020 NORDER >>> 0.100000E-06 p021 DIVERGENCE >>> 0.100000E-08 p022 HELMHOLTZ >>> 0.00000 p023 NPSCAL >>> 0.100000E-01 p024 TOLREL >>> 0.100000E-01 p025 TOLABS >>> 0.450000 p026 COURANT/NTAU >>> 3.00000 p027 TORDER >>> 1.00000 p028 TORDER: mesh velocity (0: p28=p27) >>> 0.00000 p029 = magnetic visc if > 0, = -1/Rm if < 0 >>> 0.00000 p030 > 0 ==> properties set in uservp() >>> 0.00000 p031 NPERT: #perturbation modes >>> 0.00000 p032 #BCs in re2 file, if > 0 >>> 0.00000 p033 >>> 0.00000 p034 >>> 0.00000 p035 >>> 0.00000 p036 >>> 0.00000 p037 >>> 0.00000 p038 >>> 0.00000 p039 >>> 0.00000 p040 >>> 0.00000 p041 1-->multiplicative SEMG >>> 0.00000 p042 0=gmres/1=pcg >>> 0.00000 p043 0=semg/1=schwarz >>> 0.00000 p044 0=E-based/1=A-based prec. >>> 0.00000 p045 Relaxation factor for DTFS >>> 0.00000 p046 reserved >>> 0.00000 p047 vnu: mesh matieral prop. >>> 0.00000 p048 >>> 0.00000 p049 >>> 0.00000 p050 >>> 0.00000 p051 >>> 0.00000 p052 IOHIS >>> 0.00000 p053 >>> 0.00000 p054 fixed flow rate dir: |p54|=1,2,3=x,y,z >>> 0.00000 p055 vol.flow rate (p54>0) or Ubar (p54<0) >>> 0.00000 p056 >>> 0.00000 p057 >>> 0.00000 p058 >>> 0.00000 p059 !=0 --> full Jac. eval. for each el. >>> 0.00000 p060 !=0 --> init. velocity to small nonzero >>> 0.00000 p061 >>> 0.00000 p062 >0 --> force byte_swap for output >>> 0.00000 p063 =8 --> force 8-byte output >>> 0.00000 p064 =1 --> perturbation restart >>> 0.00000 p065 #iofiles (eg, 0 or 64); <0 --> sep. 
dirs >>> 0.00000 p066 output : <0=ascii, else binary >>> 0.00000 p067 restart: <0=ascii, else binary >>> 0.00000 p068 iastep: freq for avg_all (0=iostep) >>> 0.00000 p069 >>> 0.00000 p070 >>> 0.00000 p071 >>> 0.00000 p072 >>> 0.00000 p073 >>> 0.00000 p074 verbose Helmholtz >>> 0.00000 p075 >>> 0.00000 p076 >>> 0.00000 p077 >>> 0.00000 p078 >>> 0.00000 p079 >>> 0.00000 p080 >>> 0.00000 p081 >>> 0.00000 p082 >>> 0.00000 p083 >>> 0.00000 p084 !=0 --> sets initial timestep if p12>0 >>> 0.00000 p085 dt ratio if p84 !=0, for timesteps>0 >>> 0.00000 p086 reserved >>> 0.00000 p087 >>> 0.00000 p088 >>> 0.00000 p089 >>> 0.00000 p090 >>> 0.00000 p091 >>> 0.00000 p092 >>> 20.0000 p093 Number of previous pressure solns saved >>> 5.00000 p094 start projecting velocity after p94 step >>> 5.00000 p095 start projecting pressure after p95 step >>> 0.00000 p096 >>> 0.00000 p097 >>> 0.00000 p098 >>> 3.00000 p099 dealiasing: <0--> off/3--> old/4--> new >>> 0.00000 p100 >>> 0.00000 p101 Number of additional modes to filter >>> 1.00000 p102 Dump out divergence at each time step >>> 0.100000E-01 p103 weight of stabilizing filter (.01) >>> 0.00000 p104 >>> 0.00000 p105 >>> 0.00000 p106 >>> 0.00000 p107 !=0 --> add to h2 array in hlmhotz eqn >>> 0.00000 p108 >>> 0.00000 p109 >>> 0.00000 p110 >>> 0.00000 p111 >>> 0.00000 p112 >>> 0.00000 p113 >>> 0.00000 p114 >>> 0.00000 p115 >>> 0.00000 p116 !=0: x elements for fast tensor product >>> 0.00000 p117 !=0: y elements for fast tensor product >>> 0.00000 p118 !=0: z elements for fast tensor product >>> 4 Lines of passive scalar data follows2 CONDUCT; 2RHOCP >>> 0.00000 0.00000 0.00000 0.00000 0.00000 >>> 0.00000 0.00000 0.00000 0.00000 >>> 0.00000 0.00000 0.00000 0.00000 0.00000 >>> 0.00000 0.00000 0.00000 0.00000 >>> 12 LOGICAL SWITCHES FOLLOW >>> T IFFLOW >>> T IFHEAT >>> T IFTRAN >>> T T F F F F F F F F F IFNAV & IFADVC (convection in P.S. fields) >>> F F T T T T T T T T T T IFTMSH (IF mesh for this field is T mesh) >>> T IFAXIS >>> T IFAZIV >>> T IFSTRS >>> F IFMGRID >>> F IFMODEL >>> F IFKEPS >>> F IFCHAR >>> 10.0000 10.0000 -4.00000 -5.50000 >>> XFAC,YFAC,XZERO,YZERO >>> **MESH DATA** 1st line is X of corner 1,2,3,4. 2nd line is Y. 
>>> 4 2 4 NEL,NDIM,NELV >>> ELEMENT 1 [ 1C] GROUP 0 >>> -0.4371138E-07 -0.2185569E-07 0.3535533 0.7071066 >>> 1.000000 0.5000000 0.3535533 0.7071066 >>> ELEMENT 2 [ 1D] GROUP 0 >>> 0.5000000 1.000000 0.7071066 0.3535533 >>> 0.000000 0.000000 0.7071066 0.3535533 >>> ELEMENT 3 [ 1D] GROUP 0 >>> 0.5962440E-08 0.1192488E-07 -0.7071066 -0.3535533 >>> 0.5000000 1.000000 0.7071066 0.3535533 >>> ELEMENT 4 [ 1 ] GROUP 0 >>> -1.000000 -0.5000000 -0.3535533 -0.7071066 >>> 0.8742278E-07 0.4371139E-07 0.3535533 0.7071066 >>> ***** CURVED SIDE DATA ***** >>> 8 Curved sides follow IEDGE,IEL,CURVE(I),I=1,5, CCURVE >>> 2 1 -0.500000 0.00000 0.00000 0.00000 0.00000 >>> C >>> 4 1 1.00000 0.00000 0.00000 0.00000 0.00000 >>> C >>> 2 2 1.00000 0.00000 0.00000 0.00000 0.00000 >>> C >>> 4 2 -0.500000 0.00000 0.00000 0.00000 0.00000 >>> C >>> 2 3 1.00000 0.00000 0.00000 0.00000 0.00000 >>> C >>> 4 3 -0.500000 0.00000 0.00000 0.00000 0.00000 >>> C >>> 2 4 -0.500000 0.00000 0.00000 0.00000 0.00000 >>> C >>> 4 4 1.00000 0.00000 0.00000 0.00000 0.00000 >>> C >>> ***** BOUNDARY CONDITIONS ***** >>> ***** FLUID BOUNDARY CONDITIONS ***** >>> E 1 1 3.00000 1.00000 0.00000 0.00000 0.00000 >>> SYM 1 2 0.00000 0.00000 0.00000 0.00000 0.00000 >>> E 1 3 2.00000 3.00000 0.00000 0.00000 0.00000 >>> SYM 1 4 0.00000 0.00000 0.00000 0.00000 0.00000 >>> A 2 1 0.00000 0.00000 0.00000 0.00000 0.00000 >>> SYM 2 2 0.00000 0.00000 0.00000 0.00000 0.00000 >>> E 2 3 1.00000 3.00000 0.00000 0.00000 0.00000 >>> SYM 2 4 0.00000 0.00000 0.00000 0.00000 0.00000 >>> E 3 1 1.00000 1.00000 0.00000 0.00000 0.00000 >>> SYM 3 2 0.00000 0.00000 0.00000 0.00000 0.00000 >>> E 3 3 4.00000 3.00000 0.00000 0.00000 0.00000 >>> SYM 3 4 0.00000 0.00000 0.00000 0.00000 0.00000 >>> A 4 1 0.00000 0.00000 0.00000 0.00000 0.00000 >>> SYM 4 2 0.00000 0.00000 0.00000 0.00000 0.00000 >>> E 4 3 3.00000 3.00000 0.00000 0.00000 0.00000 >>> SYM 4 4 0.00000 0.00000 0.00000 0.00000 0.00000 >>> ***** THERMAL BOUNDARY CONDITIONS ***** >>> E 1 1 3.00000 1.00000 0.00000 0.00000 0.00000 >>> I 1 2 0.00000 0.00000 0.00000 0.00000 0.00000 >>> E 1 3 2.00000 3.00000 0.00000 0.00000 0.00000 >>> I 1 4 0.00000 0.00000 0.00000 0.00000 0.00000 >>> A 2 1 0.00000 0.00000 0.00000 0.00000 0.00000 >>> I 2 2 0.00000 0.00000 0.00000 0.00000 0.00000 >>> E 2 3 1.00000 3.00000 0.00000 0.00000 0.00000 >>> I 2 4 0.00000 0.00000 0.00000 0.00000 0.00000 >>> E 3 1 1.00000 1.00000 0.00000 0.00000 0.00000 >>> I 3 2 0.00000 0.00000 0.00000 0.00000 0.00000 >>> E 3 3 4.00000 3.00000 0.00000 0.00000 0.00000 >>> I 3 4 0.00000 0.00000 0.00000 0.00000 0.00000 >>> A 4 1 0.00000 0.00000 0.00000 0.00000 0.00000 >>> I 4 2 0.00000 0.00000 0.00000 0.00000 0.00000 >>> E 4 3 3.00000 3.00000 0.00000 0.00000 0.00000 >>> I 4 4 0.00000 0.00000 0.00000 0.00000 0.00000 >>> 0 PRESOLVE/RESTART OPTIONS ***** >>> 7 INITIAL CONDITIONS ***** >>> C Default >>> C Default >>> C Default >>> C Default >>> C Default >>> C Default >>> C Default >>> ***** DRIVE FORCE DATA ***** BODY FORCE, FLOW, Q >>> 4 Lines of Drive force data follow >>> C >>> C >>> C >>> C >>> ***** Variable Property Data ***** Overrrides Parameter data. >>> 1 Lines follow. >>> 0 PACKETS OF DATA FOLLOW >>> ***** HISTORY AND INTEGRAL DATA ***** >>> 0 POINTS. 
Hcode, I,J,H,IEL
>>> ***** OUTPUT FIELD SPECIFICATION *****
>>> 6 SPECIFICATIONS FOLLOW
>>> T COORDINATES
>>> T VELOCITY
>>> T PRESSURE
>>> T TEMPERATURE
>>> F TEMPERATURE GRADIENT
>>> 0 PASSIVE SCALARS
>>> ***** OBJECT SPECIFICATION *****
>>> 0 Surface Objects
>>> 0 Volume Objects
>>> 0 Edge Objects
>>> 0 Point Objects

From nek5000-users at lists.mcs.anl.gov Tue Sep 4 01:14:21 2012
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Tue, 4 Sep 2012 06:14:21 +0000
Subject: [Nek5000-users] Reminder - please register for the nek5000 Development and User Group Meeting 2012!
Message-ID:

Dear nek5000 users, developers and friends!

We are pleased to invite you to attend the second nek5000 Development and User Meeting, to be hosted by the Aerothermochemistry and Combustion Systems Laboratory (LAV) at the Swiss Federal Institute of Technology (ETHZ) in Zurich, Switzerland, on December 7-8, 2012.

The objective is to bring developers and users of nek5000 together to promote collaborative activities, exchange information, and share experiences in using nek5000 in areas of common interest. The schedule is under development, but we are planning to have technical sessions on Friday, Dec. 7 and Saturday, Dec. 8, including hands-on sessions in the use of nek5000 and VisIt. One objective of the meeting is to assist users in defining their simulation problems and to address technical issues that they may face.

Attendees are invited to contribute short (15 min.) presentations on their applications of nek5000. Contributions should consist of an abstract (1-2 pages) and presentation slides.

Registration and up-to-date information can be found at

https://nek5000.mcs.anl.gov/index.php/Usermeeting2012

We look forward to your attendance!

Sincerely,
Nek5000 Development Team

From nek5000-users at lists.mcs.anl.gov Tue Sep 4 04:56:19 2012
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Tue, 04 Sep 2012 11:56:19 +0200
Subject: [Nek5000-users] 3D computations in 2D axisymmetric geometry
Message-ID:

Dear Neks,

My plan is to do 3D stability computations in an axisymmetric geometry, starting in October. Even if the geometry is 2D, the flow will be 3D, so I need a 3D mesh (not just a slice at constant theta). The cluster hours for that project will only last 5 months, so I need to get going in October, which means I should start thinking about the mesh already.

I have run planar geometries before (a channel junction as the last one), which were easy to mesh in prenek. I was wondering how other people running similar computations have created their meshes. What would be a good starting point? I have seen some examples of a pipe-flow mesh cross-section with a "rectangular mesh section" in the middle and curved elements outside. Is this the best way to go? Make a mesh for a pipe cross-section and then extrude it in the axial direction? My geometry will also have an inlet, but if I am able to make a mesh for a pipe and for an annulus, and incline them in the axial direction, I will be fine.

Thank you in advance,
Outi

P.S. I have asked about a 2D axisymmetric mesh before, but that time we chose a planar flow case instead (the channel junction), which I was able to mesh in prenek. The axisymmetric geometry I am considering now is not the same as the one I was considering before.

----------------------------------------------------------------
This message was sent using IMP, the Internet Messaging Program.
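
A common way to get the cross-section Outi describes -- a "rectangular mesh section" in the middle with curved elements outside -- is to mesh a square block of elements and morph it analytically onto the disc; for the subsequent extrusion of a 2D .rea file in the axial direction, the n2to3 utility shipped with the Nek5000 tools is the usual route. The following standalone sketch (not code from the distribution) shows the elliptical square-to-disc map; in an actual case the same two formulas would be applied to the xm1/ym1 coordinate arrays in usrdat2, followed by a call to fix_geom (an assumed workflow), and rescaled to the pipe radius:

      program square2disc
c     Map points of the square [-1,1]^2 onto the unit disc with the
c     elliptical mapping
c         xd = x*sqrt(1 - y**2/2),   yd = y*sqrt(1 - x**2/2),
c     which sends the square's boundary exactly onto the unit circle
c     while keeping an undistorted rectangular block near the centre.
      implicit none
      integer i, j, n
      parameter (n = 4)
      real*8 x, y, xd, yd
      do j = 0, n
         do i = 0, n
            x = -1.0d0 + 2.0d0*dble(i)/dble(n)
            y = -1.0d0 + 2.0d0*dble(j)/dble(n)
            xd = x*sqrt(1.0d0 - 0.5d0*y*y)
            yd = y*sqrt(1.0d0 - 0.5d0*x*x)
c           e.g. the corner (1,1) lands at (0.7071,0.7071), radius 1
            write(*,'(4f9.4)') x, y, xd, yd
         enddo
      enddo
      end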
From nek5000-users at lists.mcs.anl.gov Wed Sep 5 03:29:52 2012
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Wed, 05 Sep 2012 10:29:52 +0200
Subject: [Nek5000-users] Lift on a plane in the streamwise direction
Message-ID:

Hi Nek's,

I'm working on an accelerated flat plate in 3D. I'm trying to calculate the lift, but only on a single plane of the plate (in the streamwise direction), in order to build the C_L curve along the spanwise direction. I have already tried several methods, but none of them worked. For example, I used the routine hpts() to interpolate the pressure and the velocity gradient at points on the plate and then calculated the lift from those values.

Does somebody have an idea how to do this?

Best Regards,
Hugo

From nek5000-users at lists.mcs.anl.gov Mon Sep 10 15:54:26 2012
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Mon, 10 Sep 2012 22:54:26 +0200
Subject: [Nek5000-users] startup time nek5000
Message-ID:

Dear all,

We have just successfully compiled and run nek5000 on another cluster, using Intel MPI and the corresponding wrappers mpiifort and mpiicc. The code runs fine, without problems, but it stays for about 10 minutes (using 4096 cores) during the startup with the following output:

....
gs_setup: 559948 unique labels shared
pairwise times (avg, min, max): 0.000220039 0.000176096 0.000265098
crystal router : 0.000166412 0.000162292 0.000180507
used all_to_all method: crystal router

Attaching gdb tells me the following location:

(gdb) where
#0 0x00002adafd4b51db in MPIDI_CH3I_Progress () from /pdc/vol/intelmpi/4.0.3/lib64/libmpi.so.4
#1 0x00002adafd625fe6 in PMPI_Recv () from /pdc/vol/intelmpi/4.0.3/lib64/libmpi.so.4
#2 0x000000000083041c in orthogonalize ()
#3 0x000000000082ed23 in jl_crs_setup ()
#4 0x0000000000831d69 in crs_setup_ ()
#5 0x0000000000632760 in set_up_h1_crs_ ()
#6 0x000000000061feba in set_overlap_ ()
#7 0x000000000040b7c1 in nek_init_ ()
#8 0x000000000040a824 in MAIN__ ()
#9 0x000000000040472c in main ()

As I said, the code runs fine, and very fast, so no problem. Just wanted to ask whether these 10 minutes in the startup are to be expected, or whether we could try to bring that time down a bit. We restart every 24 hours or so, so it's not a big problem. I have to say that our problem size is very close to the memory available per core.

Thanks,
Philipp

From nek5000-users at lists.mcs.anl.gov Mon Sep 10 16:08:32 2012
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Mon, 10 Sep 2012 16:08:32 -0500 (CDT)
Subject: [Nek5000-users] startup time nek5000
In-Reply-To: References: Message-ID:

Dear Philipp,

This is generally expected for the direct, XX^T-based, coarse grid solve. How many elements in your problem?

The only alternative is to switch to AMG, but that is less automatic than XXT at this point. (It is faster for some problems, but I don't think it's faster for your class of problems. By "faster" here I refer to the execution phase rather than the setup costs.)

Best regards,
Paul

On Mon, 10 Sep 2012, nek5000-users at lists.mcs.anl.gov wrote:

> Dear all,
> We have just successfully compiled and run nek5000 on another cluster, using
> Intel MPI and the corresponding wrappers mpiifort and mpiicc. The code runs
> fine, without problem, but it stays for about 10 minutes (using 4096 cores)
> during the startup with the following output:
> ....
> gs_setup: 559948 unique labels shared
> pairwise times (avg, min, max): 0.000220039 0.000176096 0.000265098
> crystal router : 0.000166412 0.000162292 0.000180507
> used all_to_all method: crystal router
>
> Attaching gdb tells me the following location:
>
> (gdb) where
> #0 0x00002adafd4b51db in MPIDI_CH3I_Progress () from
> /pdc/vol/intelmpi/4.0.3/lib64/libmpi.so.4
> #1 0x00002adafd625fe6 in PMPI_Recv () from
> /pdc/vol/intelmpi/4.0.3/lib64/libmpi.so.4
> #2 0x000000000083041c in orthogonalize ()
> #3 0x000000000082ed23 in jl_crs_setup ()
> #4 0x0000000000831d69 in crs_setup_ ()
> #5 0x0000000000632760 in set_up_h1_crs_ ()
> #6 0x000000000061feba in set_overlap_ ()
> #7 0x000000000040b7c1 in nek_init_ ()
> #8 0x000000000040a824 in MAIN__ ()
> #9 0x000000000040472c in main ()
>
> As I said, the code runs fine, and very fast, so no problem. Just wanted to
> ask whether these 10 minutes in the startup would be to be expected, or
> whether we could try to bring that time down a bit. We restart every say 24
> hours so it's not a big problem. I have to say that our size is very close
> to the memory available per core.
>
> Thanks,
> Philipp
>
> _______________________________________________
> Nek5000-users mailing list
> Nek5000-users at lists.mcs.anl.gov
> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users
>

From nek5000-users at lists.mcs.anl.gov Tue Sep 11 03:33:21 2012
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Tue, 11 Sep 2012 10:33:21 +0200
Subject: [Nek5000-users] Compilation Error on BG/Q
Message-ID:

Hi neks!

On the BG/Q in Jülich we encounter several errors while compiling nek5000 (Revision: 853). Please find attached the compiler output, the makenek script and the makefile.
The installed compiler versions are "IBM XL C/C++ for Blue Gene, V12.1" and "IBM XL Fortran for Blue Gene, V14.1".
Did you come across similar problems on your BG/Q runs? Do you have any hint how I can fix this?

Thanks in advance
Fabian

-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: compiler.out
URL:
-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: makefile
URL:
-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: makenek
URL:

From nek5000-users at lists.mcs.anl.gov Tue Sep 11 03:47:27 2012
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Tue, 11 Sep 2012 10:47:27 +0200
Subject: [Nek5000-users] Compilation Error on BG/Q
In-Reply-To: References: Message-ID:

I cannot see any errors (only warnings) in your compiler.out?
-Stefan

On Tue, Sep 11, 2012 at 10:33 AM, wrote:
> Hi neks!
>
> On the BG/Q in Jülich we encounter several errors while compiling nek5000
> (Revision: 853). Please find attached the compiler output, the makenek
> script and the makefile.
> The installed compiler versions are "IBM XL C/C++ for Blue Gene, V12.1"
> and "IBM XL Fortran for Blue Gene, V14.1".
> Did you come across similar problems on your BG/Q runs? Do you have any
> hint how I can fix this?
>
> Thanks in advance
> Fabian
>
> _______________________________________________
> Nek5000-users mailing list
> Nek5000-users at lists.mcs.anl.gov
> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From nek5000-users at lists.mcs.anl.gov Tue Sep 11 07:10:23 2012
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Tue, 11 Sep 2012 14:10:23 +0200
Subject: [Nek5000-users] Compilation Error on BG/Q
In-Reply-To: References: Message-ID:

Do you get an executable from the compiler? In that case, just go ahead and give it a whirl.
-Stefan

On Tue, Sep 11, 2012 at 12:07 PM, wrote:

> Hi Stefan,
>
> Aren't those lines tagged with "(E)" errors (and "(W)" warnings)?
>
> "$NEKDIR/trunk/nek/plan4.f", line 408.19: 1516-023 (E) Subscript is out of bounds.
> "$NEKDIR/trunk/nek/plan4.f", line 409.28: 1516-023 (E) Subscript is out of bounds.
> "$NEKDIR/trunk/nek/plan4.f", line 413.21: 1516-023 (E) Subscript is out of bounds.
> "$NEKDIR/trunk/nek/plan4.f", line 414.30: 1516-023 (E) Subscript is out of bounds.
>
> and
>
> "$NEKDIR/trunk/nek/ic.f", line 935.29: 1516-023 (E) Subscript is out of bounds.
>
> and
>
> "$NEKDIR/trunk/nek/jl/poly_imp.h", line 8.28: 1506-1418 (E) Assignment
> between restrict pointers "w" and "data" is not allowed. Only
> outer-to-inner scope assignments between restrict pointers are allowed.
> ...
>
> In your opinion everything should work fine?
>
> Fabian
>
> On 09/11/2012 10:47 AM, nek5000-users at lists.mcs.anl.gov wrote:
>
> I cannot see any errors (only warnings) in your compiler.out?
> -Stefan
>
> On Tue, Sep 11, 2012 at 10:33 AM, wrote:
>
>> Hi neks!
>>
>> On the BG/Q in Jülich we encounter several errors while compiling nek5000
>> (Revision: 853). Please find attached the compiler output, the makenek
>> script and the makefile.
>> The installed compiler versions are "IBM XL C/C++ for Blue Gene, V12.1"
>> and "IBM XL Fortran for Blue Gene, V14.1".
>> Did you come across similar problems on your BG/Q runs? Do you have any
>> hint how I can fix this?
>>
>> Thanks in advance
>> Fabian
>>
>> _______________________________________________
>> Nek5000-users mailing list
>> Nek5000-users at lists.mcs.anl.gov
>> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users
>>
>
> _______________________________________________
> Nek5000-users mailing list
> Nek5000-users at lists.mcs.anl.gov
> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users
>
> _______________________________________________
> Nek5000-users mailing list
> Nek5000-users at lists.mcs.anl.gov
> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From nek5000-users at lists.mcs.anl.gov Tue Sep 11 07:13:38 2012
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Tue, 11 Sep 2012 14:13:38 +0200
Subject: [Nek5000-users] Compilation Error on BG/Q
In-Reply-To: References: Message-ID:

btw: What follows is a statement from the compiler manual:

(E) An error that the compiler can correct. The program should run correctly.

hth,
-Stefan

On Tue, Sep 11, 2012 at 2:10 PM, S K wrote:

> Do you get an executable from the compiler? In that case, just go ahead
> and give it a whirl.
> -Stefan
>
> On Tue, Sep 11, 2012 at 12:07 PM, wrote:
>
>> Hi Stefan,
>>
>> Aren't those lines tagged with "(E)" errors (and "(W)" warnings)?
>>
>> "$NEKDIR/trunk/nek/plan4.f", line 408.19: 1516-023 (E) Subscript is out of bounds.
>> "$NEKDIR/trunk/nek/plan4.f", line 409.28: 1516-023 (E) Subscript is out of bounds.
>> "$NEKDIR/trunk/nek/plan4.f", line 413.21: 1516-023 (E) Subscript is out of bounds.
>> "$NEKDIR/trunk/nek/plan4.f", line 414.30: 1516-023 (E) Subscript is out of bounds.
>>
>> and
>>
>> "$NEKDIR/trunk/nek/ic.f", line 935.29: 1516-023 (E) Subscript is out of bounds.
>>
>> and
>>
>> "$NEKDIR/trunk/nek/jl/poly_imp.h", line 8.28: 1506-1418 (E) Assignment
>> between restrict pointers "w" and "data" is not allowed. Only
>> outer-to-inner scope assignments between restrict pointers are allowed.
>> ...
>>
>> In your opinion everything should work fine?
>>
>> Fabian
>>
>> On 09/11/2012 10:47 AM, nek5000-users at lists.mcs.anl.gov wrote:
>>
>> I cannot see any errors (only warnings) in your compiler.out?
>> -Stefan
>>
>> On Tue, Sep 11, 2012 at 10:33 AM, wrote:
>>
>>> Hi neks!
>>>
>>> On the BG/Q in Jülich we encounter several errors while compiling
>>> nek5000 (Revision: 853). Please find attached the compiler output, the
>>> makenek script and the makefile.
>>> The installed compiler versions are "IBM XL C/C++ for Blue Gene, V12.1"
>>> and "IBM XL Fortran for Blue Gene, V14.1".
>>> Did you come across similar problems on your BG/Q runs? Do you have any
>>> hint how I can fix this?
>>>
>>> Thanks in advance
>>> Fabian
>>>
>>> _______________________________________________
>>> Nek5000-users mailing list
>>> Nek5000-users at lists.mcs.anl.gov
>>> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users
>>>
>>
>> _______________________________________________
>> Nek5000-users mailing list
>> Nek5000-users at lists.mcs.anl.gov
>> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users
>>
>> --
>> Dipl.-Ing. Fabian Hennig
>> Wissenschaftlicher Mitarbeiter
>> Institut für Technische Verbrennung (ITV)
>> RWTH Aachen
>> Templergraben 64
>> 52056 Aachen
>> Telefon: +49-241-80-94614
>> E-Mail: fhennig at itv.rwth-aachen.de
>>
>> _______________________________________________
>> Nek5000-users mailing list
>> Nek5000-users at lists.mcs.anl.gov
>> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users
>>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From nek5000-users at lists.mcs.anl.gov Tue Sep 11 07:29:02 2012
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Tue, 11 Sep 2012 14:29:02 +0200
Subject: [Nek5000-users] startup time nek5000
In-Reply-To: References: Message-ID:

Dear Paul,

the case that I am talking about is our largest pipe DNS, with about 1.2 million elements at polynomial order 11. The reason why I was asking is that we have not observed such a "long" time spent in the simulation startup before, not even for the same case run on other architectures (for instance a Cray XE6 with even more processes). Therefore I was suspecting either something with the Intel MPI or the InfiniBand-type network...

Best regards,
Philipp

-----Original Message-----
From: nek5000-users-bounces at lists.mcs.anl.gov [mailto:nek5000-users-bounces at lists.mcs.anl.gov] On Behalf Of nek5000-users at lists.mcs.anl.gov
Sent: den 10 september 2012 23:09
To: nek5000-users at lists.mcs.anl.gov
Subject: Re: [Nek5000-users] startup time nek5000

Dear Philipp,

This is generally expected for the direct, XX^T-based, coarse grid solve. How many elements in your problem?

The only alternative is to switch to AMG, but that is less automatic than XXT at this point. (It is faster for some problems, but I don't think it's faster for your class of problems.
By "faster" here I refer to the execution phase rather than the setup costs.)

Best regards,
Paul

On Mon, 10 Sep 2012, nek5000-users at lists.mcs.anl.gov wrote:

> Dear all,
> We have just successfully compiled and run nek5000 on another cluster,
> using Intel MPI and the corresponding wrappers mpiifort and mpiicc.
> The code runs fine, without problem, but it stays for about 10 minutes
> (using 4096 cores) during the startup with the following output:
> ....
> gs_setup: 559948 unique labels shared
> pairwise times (avg, min, max): 0.000220039 0.000176096 0.000265098
> crystal router : 0.000166412 0.000162292 0.000180507
> used all_to_all method: crystal router
>
> Attaching gdb tells me the following location:
>
> (gdb) where
> #0 0x00002adafd4b51db in MPIDI_CH3I_Progress () from
> /pdc/vol/intelmpi/4.0.3/lib64/libmpi.so.4
> #1 0x00002adafd625fe6 in PMPI_Recv () from
> /pdc/vol/intelmpi/4.0.3/lib64/libmpi.so.4
> #2 0x000000000083041c in orthogonalize ()
> #3 0x000000000082ed23 in jl_crs_setup ()
> #4 0x0000000000831d69 in crs_setup_ ()
> #5 0x0000000000632760 in set_up_h1_crs_ ()
> #6 0x000000000061feba in set_overlap_ ()
> #7 0x000000000040b7c1 in nek_init_ ()
> #8 0x000000000040a824 in MAIN__ ()
> #9 0x000000000040472c in main ()
>
> As I said, the code runs fine, and very fast, so no problem. Just
> wanted to ask whether these 10 minutes in the startup would be to be
> expected, or whether we could try to bring that time down a bit. We
> restart every say 24 hours so it's not a big problem. I have to say
> that our size is very close to the memory available per core.
>
> Thanks,
> Philipp
>
> _______________________________________________
> Nek5000-users mailing list
> Nek5000-users at lists.mcs.anl.gov
> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users
>

From nek5000-users at lists.mcs.anl.gov Tue Sep 11 07:56:43 2012
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Tue, 11 Sep 2012 14:56:43 +0200
Subject: [Nek5000-users] startup time nek5000
In-Reply-To: References: Message-ID:

Hi again,

I forgot this before; here I paste the complete output around the "critical" part:

gs_setup: 559948 unique labels shared
pairwise times (avg, min, max): 0.000216368 0.000171804 0.000280309
crystal router : 0.000162064 0.000158095 0.000166893
used all_to_all method: crystal router
done :: setup h1 coarse grid 728.853798866272 sec

So you can see it takes about 10 minutes. Again, it is not a big deal at all, just wondering whether this is fine.
Best regards, Paul On Mon, 10 Sep 2012, nek5000-users at lists.mcs.anl.gov wrote: > Dear all, > We have just successfully compiled and run nek5000 on another cluster, > using Intel MPI and the corresponding wrappers mpiifort and mpiicc. > The code runs fine, without problem, but it stays for about 10 minutes > (using 4096 cores) during the startup with the following output: > .... > gs_setup: 559948 unique labels shared > pairwise times (avg, min, max): 0.000220039 0.000176096 0.000265098 > crystal router : 0.000166412 0.000162292 0.000180507 > used all_to_all method: crystal router > > > Attaching gdb tells me the following location: > > (gdb) where > #0 0x00002adafd4b51db in MPIDI_CH3I_Progress () from > /pdc/vol/intelmpi/4.0.3/lib64/libmpi.so.4 > #1 0x00002adafd625fe6 in PMPI_Recv () from > /pdc/vol/intelmpi/4.0.3/lib64/libmpi.so.4 > #2 0x000000000083041c in orthogonalize () > #3 0x000000000082ed23 in jl_crs_setup () > #4 0x0000000000831d69 in crs_setup_ () > #5 0x0000000000632760 in set_up_h1_crs_ () > #6 0x000000000061feba in set_overlap_ () > #7 0x000000000040b7c1 in nek_init_ () > #8 0x000000000040a824 in MAIN__ () > #9 0x000000000040472c in main () > > As I said, the code runs fine, and very fast, so no problem. Just > wanted to ask whether these 10 minutes in the startup would be to be > expected, or whether we could try to bring that time down a bit. We > restart every say 24 hours so it's not a big problem. I have to say > that our size is very close to the memory available per core. > > Thanks, > Philipp > > _______________________________________________ > Nek5000-users mailing list > Nek5000-users at lists.mcs.anl.gov > https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users > _______________________________________________ Nek5000-users mailing list Nek5000-users at lists.mcs.anl.gov https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users From nek5000-users at lists.mcs.anl.gov Tue Sep 11 09:51:37 2012 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Tue, 11 Sep 2012 09:51:37 -0500 (CDT) Subject: [Nek5000-users] startup time nek5000 In-Reply-To: Message-ID: Hi Philipp, I have seen the setup times up to gs_setup: 569654 unique labels shared pairwise times (avg, min, max): 9.04538e-05 6.03561e-05 0.000142141 crystal router : 0.000198481 0.00019553 0.000204876 used all_to_all method: pairwise done :: setup h1 coarse grid 1128.82858828941153 sec on 16384 cores BG/P for a case with 700k elements and gridpoints unique/tot: 243765798 356072448 dofs: 233299737 150218064 Once I switched to AMG, I got done :: setup h1 coarse grid 117.992192675294092 sec Best. Aleks ----- Original Message ----- From: nek5000-users at lists.mcs.anl.gov To: nek5000-users at lists.mcs.anl.gov Sent: Tuesday, September 11, 2012 7:56:43 AM Subject: Re: [Nek5000-users] startup time nek5000 Hi again, I forgot before, here I paste the complete output around the "critical" parts: gs_setup: 559948 unique labels shared pairwise times (avg, min, max): 0.000216368 0.000171804 0.000280309 crystal router : 0.000162064 0.000158095 0.000166893 used all_to_all method: crystal router done :: setup h1 coarse grid 728.853798866272 sec So you can see it takes about 10 minutes. Again, it is not a bit deal at all, just wondering whether this is fine. 
Best, Philipp -----Original Message----- From: nek5000-users-bounces at lists.mcs.anl.gov [mailto:nek5000-users-bounces at lists.mcs.anl.gov] On Behalf Of nek5000-users at lists.mcs.anl.gov Sent: den 10 september 2012 23:09 To: nek5000-users at lists.mcs.anl.gov Subject: Re: [Nek5000-users] startup time nek5000 Dear Philipp, This is generally expected for the direct, XX^T-based, coarse grid solve. How many elements in your problem? The only alternative is to switch to AMG, but that is less automatic than XXT at this point. (It is faster for some problems, but I don't think it's faster for your class of problems. By "faster" here I refer to the execution phase rather than the setup costs.) Best regards, Paul On Mon, 10 Sep 2012, nek5000-users at lists.mcs.anl.gov wrote: > Dear all, > We have just successfully compiled and run nek5000 on another cluster, > using Intel MPI and the corresponding wrappers mpiifort and mpiicc. > The code runs fine, without problem, but it stays for about 10 minutes > (using 4096 cores) during the startup with the following output: > .... > gs_setup: 559948 unique labels shared > pairwise times (avg, min, max): 0.000220039 0.000176096 0.000265098 > crystal router : 0.000166412 0.000162292 0.000180507 > used all_to_all method: crystal router > > > Attaching gdb tells me the following location: > > (gdb) where > #0 0x00002adafd4b51db in MPIDI_CH3I_Progress () from > /pdc/vol/intelmpi/4.0.3/lib64/libmpi.so.4 > #1 0x00002adafd625fe6 in PMPI_Recv () from > /pdc/vol/intelmpi/4.0.3/lib64/libmpi.so.4 > #2 0x000000000083041c in orthogonalize () > #3 0x000000000082ed23 in jl_crs_setup () > #4 0x0000000000831d69 in crs_setup_ () > #5 0x0000000000632760 in set_up_h1_crs_ () > #6 0x000000000061feba in set_overlap_ () > #7 0x000000000040b7c1 in nek_init_ () > #8 0x000000000040a824 in MAIN__ () > #9 0x000000000040472c in main () > > As I said, the code runs fine, and very fast, so no problem. Just > wanted to ask whether these 10 minutes in the startup would be to be > expected, or whether we could try to bring that time down a bit. We > restart every say 24 hours so it's not a big problem. I have to say > that our size is very close to the memory available per core. > > Thanks, > Philipp > > _______________________________________________ > Nek5000-users mailing list > Nek5000-users at lists.mcs.anl.gov > https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users > _______________________________________________ Nek5000-users mailing list Nek5000-users at lists.mcs.anl.gov https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users _______________________________________________ Nek5000-users mailing list Nek5000-users at lists.mcs.anl.gov https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users From nek5000-users at lists.mcs.anl.gov Thu Sep 13 17:41:03 2012 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Fri, 14 Sep 2012 00:41:03 +0200 Subject: [Nek5000-users] Visit reader for double precision f files Message-ID: Dear all, We were wondering whether it would be possible to include support for 64 bit .f files in the Visit reader for Nek5000. So far, only 32 bit files are supported, which we are hardly using any longer except for those files to visualise with Visit; however we have quite a number of f files (snapshots) from various simulations that we use for different purposes such as the computation of spectra, POD modes etc. for which we naturally chose 64 bit due to accuracy consideration. 
To be able to directly visualise such files in VisIt without converting them to 32 bit would be very helpful indeed.

Another aspect that came up recently was the question of how to indicate in an f file whether or not the pressure is mapped to the Pn mesh when the Pn/Pn-2 method is used. Normally, restart files would not perform this mapping, while snapshot files would. We were wondering whether a simple solution could be to choose different letters in the header (say "P" for pressure on the Pn mesh and "Q" for pressure on the Pn-2 mesh)?

We could give both things a try here if you think it would not be too difficult...

Best regards,
Philipp

From nek5000-users at lists.mcs.anl.gov Thu Sep 13 17:46:11 2012
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Thu, 13 Sep 2012 15:46:11 -0700
Subject: [Nek5000-users] Visit reader for double precision f files
In-Reply-To: References: Message-ID:

Hi Philipp,

With regard to VisIt, I am happy to investigate. IIRC, this issue came up before and we decided that the reader did work. (But possibly it slipped off my queue and it doesn't work!!)

If you can make a file available to me (hchilds at lbl.gov), I can pursue it.

With respect to the file format issue, someone else more knowledgeable will need to answer.

Best,
Hank

On Thu, Sep 13, 2012 at 3:41 PM, wrote:

> Dear all,
> We were wondering whether it would be possible to include support for 64 bit
> .f files in the Visit reader for Nek5000. So far, only 32 bit files are
> supported, which we are hardly using any longer except for those files to
> visualise with Visit; however we have quite a number of f files (snapshots)
> from various simulations that we use for different purposes such as the
> computation of spectra, POD modes etc. for which we naturally chose 64 bit
> due to accuracy consideration. To be able to directly visualise such files
> in Visit without converting them to 32 bit would be very helpful indeed.
>
> Another aspect that came up recently was the question on how to indicate in
> a f file whether or not the pressure is mapped to the Pn mesh if the Pn/Pn-2
> method is used. Normally, restart files would not perform this mapping,
> snapshot files however would do. We were wondering whether a simple solution
> could be to choose different letters in the header (say "P" for pressure on
> the Pn mesh and "Q" for pressure on the Pn-2 mesh)?
>
> We could give a try for both things here if you think it would not be too
> difficult...
>
> Best regards,
> Philipp
>
> _______________________________________________
> Nek5000-users mailing list
> Nek5000-users at lists.mcs.anl.gov
> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users
>

From nek5000-users at lists.mcs.anl.gov Fri Sep 14 08:25:26 2012
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Fri, 14 Sep 2012 08:25:26 -0500 (CDT)
Subject: [Nek5000-users] Visit reader for double precision f files
In-Reply-To: References: Message-ID:

Hi Hank,

I'm wondering if there would be a possibility of adding a "gauss-to-gauss-lobatto" interpolator in the VisIt reader, which would be one way of addressing the pressure data problem.

I realize of course that VisIt at present has no knowledge of what a spectral element is and that this is a bit of a shift from our original approach, but perhaps it wouldn't be so hard.

We can discuss this off-line if you wish.

Thanks!
Paul On Thu, 13 Sep 2012, nek5000-users at lists.mcs.anl.gov wrote: > Hi Philip, > > With regards to VisIt, I am happy to investigate. IIRC, this issue came up > before and we decided that the reader did work. (But possibly it slipped > off my queue and it doesn't work!!) > > If you can make a file available to me (hchilds at lbl.gov), I can pursue. > > With respect to the file format issue, someone else more knowledgable will > need to answer. > > Best, > Hank > > On Thu, Sep 13, 2012 at 3:41 PM, wrote: > >> Dear all, >> We were wondering whether it would be possible to include support for 64 >> bit >> .f files in the Visit reader for Nek5000. So far, only 32 bit files are >> supported, which we are hardly using any longer except for those files to >> visualise with Visit; however we have quite a number of f files (snapshots) >> from various simulations that we use for different purposes such as the >> computation of spectra, POD modes etc. for which we naturally chose 64 bit >> due to accuracy consideration. To be able to directly visualise such files >> in Visit without converting them to 32 bit would be very helpful indeed. >> >> Another aspect that came up recently was the question on how to indicate in >> a f file whether or not the pressure is mapped to the Pn mesh if the >> Pn/Pn-2 >> method is used. Normally, restart files would not perform this mapping, >> snapshot files however would do. We were wondering whether a simple >> solution >> could be to choose different letters in the header (say "P" for pressure on >> the Pn mesh and "Q" for pressure on the Pn-2 mesh)? >> >> We could give a try for both things here if you think it would not be too >> difficult... >> >> Best regards, >> Philipp >> >> >> _______________________________________________ >> Nek5000-users mailing list >> Nek5000-users at lists.mcs.anl.gov >> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users >> > From nek5000-users at lists.mcs.anl.gov Fri Sep 14 16:09:10 2012 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Fri, 14 Sep 2012 23:09:10 +0200 Subject: [Nek5000-users] Visit reader for double precision f files In-Reply-To: References: Message-ID: Dear Hank, thanks a lot for your answer. Indeed, now that I checked myself it turns out that double precision files work perfectly in version 2.4.0 and onwards of visit, so this was false alarm, sorry! I will inquire why we thought in the first place that 64 bit would not work. Thanks again, Philipp On Fri, 14 Sep 2012 00:46:11 +0200, wrote: > Hi Philip, > > With regards to VisIt, I am happy to investigate. IIRC, this issue came > up > before and we decided that the reader did work. (But possibly it slipped > off my queue and it doesn't work!!) > > If you can make a file available to me (hchilds at lbl.gov), I can pursue. > > With respect to the file format issue, someone else more knowledgable > will > need to answer. > > Best, > Hank > > On Thu, Sep 13, 2012 at 3:41 PM, wrote: > >> Dear all, >> We were wondering whether it would be possible to include support for 64 >> bit >> .f files in the Visit reader for Nek5000. So far, only 32 bit files are >> supported, which we are hardly using any longer except for those files >> to >> visualise with Visit; however we have quite a number of f files >> (snapshots) >> from various simulations that we use for different purposes such as the >> computation of spectra, POD modes etc. for which we naturally chose 64 >> bit >> due to accuracy consideration. 
To be able to directly visualise such >> files >> in Visit without converting them to 32 bit would be very helpful indeed. >> >> Another aspect that came up recently was the question on how to >> indicate in >> a f file whether or not the pressure is mapped to the Pn mesh if the >> Pn/Pn-2 >> method is used. Normally, restart files would not perform this mapping, >> snapshot files however would do. We were wondering whether a simple >> solution >> could be to choose different letters in the header (say "P" for >> pressure on >> the Pn mesh and "Q" for pressure on the Pn-2 mesh)? >> >> We could give a try for both things here if you think it would not be >> too >> difficult... >> >> Best regards, >> Philipp >> >> >> _______________________________________________ >> Nek5000-users mailing list >> Nek5000-users at lists.mcs.anl.gov >> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users From nek5000-users at lists.mcs.anl.gov Fri Sep 14 16:18:14 2012 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Fri, 14 Sep 2012 16:18:14 -0500 (CDT) Subject: [Nek5000-users] Visit reader for double precision f files In-Reply-To: References: Message-ID: Hi Philipp, Hank, I _think_ there may still be an issue for the Pn-Pn-2 case if we are dumping out for the full-restart capability, in which case pressure does not occupy (N+1)^3 memory sites, but only (N-1)^3, with the remainder being zero. This would not influence analysis of velocity fields, however. Paul PS - it's also possible that this deficit of pressure data exists only for the restart files, rather than for standard files that are written in double precision, but I'd have to check on that. On Fri, 14 Sep 2012, nek5000-users at lists.mcs.anl.gov wrote: > Dear Hank, > thanks a lot for your answer. Indeed, now that I checked myself it turns out > that double precision files work perfectly in version 2.4.0 and onwards of > visit, so this was false alarm, sorry! I will inquire why we thought in the > first place that 64 bit would not work. > > Thanks again, > Philipp > > > > > On Fri, 14 Sep 2012 00:46:11 +0200, wrote: > >> Hi Philip, >> >> With regards to VisIt, I am happy to investigate. IIRC, this issue came up >> before and we decided that the reader did work. (But possibly it slipped >> off my queue and it doesn't work!!) >> >> If you can make a file available to me (hchilds at lbl.gov), I can pursue. >> >> With respect to the file format issue, someone else more knowledgable will >> need to answer. >> >> Best, >> Hank >> >> On Thu, Sep 13, 2012 at 3:41 PM, wrote: >> >>> Dear all, >>> We were wondering whether it would be possible to include support for 64 >>> bit >>> .f files in the Visit reader for Nek5000. So far, only 32 bit files are >>> supported, which we are hardly using any longer except for those files to >>> visualise with Visit; however we have quite a number of f files >>> (snapshots) >>> from various simulations that we use for different purposes such as the >>> computation of spectra, POD modes etc. for which we naturally chose 64 bit >>> due to accuracy consideration. To be able to directly visualise such files >>> in Visit without converting them to 32 bit would be very helpful indeed. >>> >>> Another aspect that came up recently was the question on how to indicate >>> in >>> a f file whether or not the pressure is mapped to the Pn mesh if the >>> Pn/Pn-2 >>> method is used. Normally, restart files would not perform this mapping, >>> snapshot files however would do. 
We were wondering whether a simple
>>> solution could be to choose different letters in the header (say "P" for pressure on
>>> the Pn mesh and "Q" for pressure on the Pn-2 mesh)?
>>>
>>> We could give a try for both things here if you think it would not be too
>>> difficult...
>>>
>>> Best regards,
>>> Philipp
>>>
>>> _______________________________________________
>>> Nek5000-users mailing list
>>> Nek5000-users at lists.mcs.anl.gov
>>> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users
> _______________________________________________
> Nek5000-users mailing list
> Nek5000-users at lists.mcs.anl.gov
> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users

From nek5000-users at lists.mcs.anl.gov Mon Sep 24 16:17:20 2012
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Mon, 24 Sep 2012 23:17:20 +0200
Subject: [Nek5000-users] how to choose lx1
Message-ID:

Hi all,

I'm trying to set up nek5000 for a high Rayleigh number problem. At first I used 8192 elements, 64x16x8 in x, y, z respectively, and I set lx1 to 12 (thus lxd=15). When I compute

(# of elements) x (order of polynomial + 1)^3 x 400 x 8 / (memory of a single proc.),

I find that approx. 32 procs will be fine to run the simulation.

I compiled the model and that was fine. But I can only run the model with 32 procs. If I try to increase the number of procs (48 or 64), it hangs at the following lines:

gs_setup: 670615 unique labels shared
pairwise times (avg, min, max): 0.106644 0.089896 0.1268
crystal router : 0.535782 0.507226 0.57275
used all_to_all method: pairwise
setupds time 2.8244E+01 seconds 4 8 5826591 43904
setup h1 coarse grid, nx_crs= 2
call usrsetvert
done :: usrsetvert

gs_setup: 13483 unique labels shared
pairwise times (avg, min, max): 0.0878268 0.065232 0.1044

I ran the model with 32 procs for a while, but I realized that it was too slow for my application, so I decided to set up another, more optimal version of it. What I read from the webpage is that I should choose lx1=7 or 9.

So I increased my total number of elements to 112*28*14=43904. Now if I set lx1=9, lxd=15 and lelt=650 to run the model on 72 procs, it still hangs at the same location.

If I set lx1=8 and lxd=12, lelt=1220, then the model does not compile.
It gives the following error; mpif90 -o nek5000 -fpp -DALLOW_USE_MPI -DALWAYS_USE_MPI obj/hc3d.o obj/drive.o obj/drive1.o obj/drive2.o obj/plan4.o obj/bdry.o obj/coef.o obj/conduct.o obj/connect1.o obj/connect2.o obj/dssum.o obj/edgec.o obj/eigsolv.o obj/gauss.o obj/genxyz.o obj/navier1.o obj/makeq.o obj/navier0.o obj/navier2.o obj/navier3.o obj/navier4.o obj/prepost.o obj/speclib.o obj/map2.o obj/turb.o obj/mvmesh.o obj/ic.o obj/ssolv.o obj/planx.o obj/math.o obj/mxm_wrapper.o obj/hmholtz.o obj/gfdm_par.o obj/gfdm_op.o obj/gfdm_solve.o obj/subs1.o obj/subs2.o obj/genbox.o obj/gmres.o obj/hsmg.o obj/convect.o obj/induct.o obj/perturb.o obj/navier5.o obj/navier6.o obj/navier7.o obj/navier8.o obj/fast3d.o obj/fasts.o obj/calcz.o obj/byte.o obj/chelpers.o obj/byte_mpi.o obj/postpro.o obj/cvode_driver.o obj/nek_comm.o obj/init_plugin.o obj/setprop.o obj/qthermal.o obj/cvode_aux.o obj/makeq_aux.o obj/papi.o obj/ssygv.o obj/dsygv.o obj/mxm_std.o obj/blas.o obj/comm_mpi.o obj/singlmesh.o obj/jl_gs.o obj/jl_sort.o obj/jl_sarray_transfer.o obj/jl_sarray_sort.o obj/jl_gs_local.o obj/jl_crystal.o obj/jl_comm.o obj/jl_tensor.o obj/jl_fail.o obj/jl_fcrystal.o obj/jl_findpts.o obj/jl_findpts_local.o obj/jl_obbox.o obj/jl_poly.o obj/jl_lob_bnd.o obj/jl_findpts_el_3.o obj/jl_findpts_el_2.o obj/jl_sparse_cholesky.o obj/jl_xxt.o obj/jl_fcrs.o obj/gmres.o: In function `gen_fast_g_': /home/ntnu/milicak/models/nek5_svn/trunk/nek/gmres.f:(.text+0xc4a9): relocation truncated to fit: R_X86_64_32S against symbol `fast1dsem_' defined in COMMON section in obj/fast3d.o I'm not quite sure what to do next. Thanks for any suggestions in advance, Mehmet From nek5000-users at lists.mcs.anl.gov Mon Sep 24 17:05:13 2012 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Mon, 24 Sep 2012 17:05:13 -0500 Subject: [Nek5000-users] how to choose lx1 In-Reply-To: References: Message-ID: Hi Mehmet, How long is nek at the gs_setup line? It can take several minutes for large cases. The last error you're having is because you've run out of memory. Remember that the memory footprint on each processor scales as lx1^3 * lelt. With certain compilers you can add the following option in makenek: G="-mcmodel=medium" This allows you compile larger arrays, and generally use more memory. Best, Justin On Mon, Sep 24, 2012 at 4:17 PM, wrote: > Hi all, > > I'm trying to setup nek5000 with a high Rayleigh number problem. > At first, I've used # of elements of 8192 with 64x16x8, in x,y,z > respectively. > And I set lex1 to 12 (thus lxd=15). When I compute > (# of elements)x(order of polynomial+1)^3x400x8/(memory of a single proc.), > I compute that aprox. 32 procs. will be fine to run the simulation. > > I compiled the model and that was fine. But I can only run the model with > 32 procs. > If I try to increase the number of procs. (48 or 64) it holds on the > following line; > > gs_setup: 670615 unique labels shared > pairwise times (avg, min, max): 0.106644 0.089896 0.1268 > crystal router : 0.535782 0.507226 0.57275 > used all_to_all method: pairwise > setupds time 2.8244E+01 seconds 4 8 5826591 43904 > setup h1 coarse grid, nx_crs= 2 > call usrsetvert > done :: usrsetvert > > gs_setup: 13483 unique labels shared > pairwise times (avg, min, max): 0.0878268 0.065232 0.1044 > > > I ran the model with 32 procs. for a while but I realized that it was too > slow for my application. > So I decided to setup another optimum version of it. What I read from the > webpage is that I should choose lx1=7 or 9. 
>
> So I increase my total number of elements to 112*28*14=43904.
> Now if I set lx1=9, lxd=15 and lelt=650 to run the model for 72 procs.
> It is still holding on the same location.
>
> If I set the lx1=8 and lxd=12,lelt=1220 then the model doesnot compile. It
> gives the following error;
>
> mpif90 -o nek5000 -fpp -DALLOW_USE_MPI -DALWAYS_USE_MPI obj/hc3d.o
> obj/drive.o obj/drive1.o obj/drive2.o obj/plan4.o obj/bdry.o obj/coef.o
> obj/conduct.o obj/connect1.o obj/connect2.o obj/dssum.o obj/edgec.o
> obj/eigsolv.o obj/gauss.o obj/genxyz.o obj/navier1.o obj/makeq.o
> obj/navier0.o obj/navier2.o obj/navier3.o obj/navier4.o obj/prepost.o
> obj/speclib.o obj/map2.o obj/turb.o obj/mvmesh.o obj/ic.o obj/ssolv.o
> obj/planx.o obj/math.o obj/mxm_wrapper.o obj/hmholtz.o obj/gfdm_par.o
> obj/gfdm_op.o obj/gfdm_solve.o obj/subs1.o obj/subs2.o obj/genbox.o
> obj/gmres.o obj/hsmg.o obj/convect.o obj/induct.o obj/perturb.o
> obj/navier5.o obj/navier6.o obj/navier7.o obj/navier8.o obj/fast3d.o
> obj/fasts.o obj/calcz.o obj/byte.o obj/chelpers.o obj/byte_mpi.o
> obj/postpro.o obj/cvode_driver.o obj/nek_comm.o obj/init_plugin.o
> obj/setprop.o obj/qthermal.o obj/cvode_aux.o obj/makeq_aux.o obj/papi.o
> obj/ssygv.o obj/dsygv.o obj/mxm_std.o obj/blas.o obj/comm_mpi.o
> obj/singlmesh.o obj/jl_gs.o obj/jl_sort.o obj/jl_sarray_transfer.o
> obj/jl_sarray_sort.o obj/jl_gs_local.o obj/jl_crystal.o obj/jl_comm.o
> obj/jl_tensor.o obj/jl_fail.o obj/jl_fcrystal.o obj/jl_findpts.o
> obj/jl_findpts_local.o obj/jl_obbox.o obj/jl_poly.o obj/jl_lob_bnd.o
> obj/jl_findpts_el_3.o obj/jl_findpts_el_2.o obj/jl_sparse_cholesky.o
> obj/jl_xxt.o obj/jl_fcrs.o
> obj/gmres.o: In function `gen_fast_g_':
> /home/ntnu/milicak/models/nek5_svn/trunk/nek/gmres.f:(.text+0xc4a9):
> relocation truncated to fit: R_X86_64_32S against symbol `fast1dsem_'
> defined in COMMON section in obj/fast3d.o
>
> I'm not quite sure what to do next.
> Thanks for any suggestions in advance,
> Mehmet
> _______________________________________________
> Nek5000-users mailing list
> Nek5000-users at lists.mcs.anl.gov
> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From nek5000-users at lists.mcs.anl.gov Mon Sep 24 17:30:10 2012
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Tue, 25 Sep 2012 00:30:10 +0200
Subject: [Nek5000-users] how to choose lx1
In-Reply-To: <5060DCC8.1000409@uib.no> References: <5060DCC8.1000409@uib.no> Message-ID:

Hi Justin,

Thanks for your quick email.

I killed the simulation after half an hour because it was still on the same line. I don't think my setup is that big, so I am not sure if I should wait longer or not.

Best,
Mehmet

>
> Hi Mehmet,
>
> How long is nek at the gs_setup line? It can take several minutes for large
> cases.
>
> The last error you're having is because you've run out of memory. Remember
> that the memory footprint on each processor scales as lx1^3 * lelt. With
> certain compilers you can add the following option in makenek:
>
> G="-mcmodel=medium"
>
> This allows you compile larger arrays, and generally use more memory.
>
> Best,
> Justin

-------------- next part --------------
An HTML attachment was scrubbed...
URL:
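
For concreteness, Mehmet's own rule of thumb from the start of this thread (his assumed figure of 400 double-precision words per grid point) puts both of his SIZE configurations right at the edge of a typical 2 GB-per-core budget:

  mem per rank ~ lelt x lx1^3 x 400 x 8 bytes

  lx1=9, lelt=650  :  650 x 729 x 3200 bytes ~ 1.5 GB
  lx1=8, lelt=1220 : 1220 x 512 x 3200 bytes ~ 2.0 GB

The second configuration pushes the statically allocated arrays past the 2 GB limit of the default x86-64 small code model, which is exactly what the "relocation truncated to fit: R_X86_64_32S" link error above signals, and why Justin's G="-mcmodel=medium" suggestion addresses the compile failure (the hang at gs_setup is a separate issue).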
Best,
Justin

From nek5000-users at lists.mcs.anl.gov  Tue Sep 25 07:15:58 2012
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Tue, 25 Sep 2012 12:15:58 +0000
Subject: [Nek5000-users] nek5000 Development and User meeting 2012 - Reminder and new registration deadline
Message-ID: 

Dear nek5000 users, developers and friends!

The deadline for registration for the 2012 meeting has been extended to
October 12. Please register as soon as possible so that we can proceed
with the rest of the organizational "details".

Sincerely,
Christos Frouzakis & Nek5000 Development Team

From nek5000-users at lists.mcs.anl.gov  Tue Sep 25 08:03:19 2012
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Tue, 25 Sep 2012 15:03:19 +0200
Subject: [Nek5000-users] compilation problems with IBM xlc compiler
In-Reply-To: <316A791B-D1EB-4715-BAAE-57F4C27E1A2A@ing.unipi.it>
References: <3D9AD86B-BCF4-4664-9586-574C30CA33E7@ing.unipi.it> <316A791B-D1EB-4715-BAAE-57F4C27E1A2A@ing.unipi.it>
Message-ID: 

Hi Neks,

I actually have the same problem on the IBM Power 6 at IDRIS (the French
supercomputing center) and have absolutely no idea how to solve it
properly. Could anyone give me a hint?

Regards,

On 3 March 2012 13:44, wrote:

> Hi again,
>
> I just found out the problem! In /usr/include/sys/types.h on the SP6
> machine I am using, there is a type named unique_id already defined,
> and it was used instead of the one defined in gs.c when compiling
> nek5000. For the moment I just changed the name in gs.c from unique_id
> to unique_id2, because I don't know how to override the definition in
> the types.h file.
> Now I only have a final link problem with the function .flush:
>
> ld: 0711-317 ERROR: Undefined symbol: .flush
>
> I think this will be easier to solve, and I will try to do so. Any
> suggestions?
>
> I take the occasion to thank all of you for having shared this great
> tool, nek5000, with the scientific community!!
>
> Regards,
>
> Simone
>
> > Hi Neks,
> >
> > I have the following problem when trying to compile Nek on an IBM
> > SP6 using the IBM compiler. When the compilation reaches the C
> > routine gs.c, I get the following errors:
> >
> > --------------------------------------------------------------------
> > mpcc -c -DUNDERSCORE -O2 -DPTRSIZE8 -DMPI -DLONGINT8 -DGLOBAL_LONG_LONG -DPREFIX=jl_ /sp6/userexternal/CODICI/nek5_svn/trunk/nek/jl/gs.c -o obj/jl_gs.o
> > /gpfs/prod/xlc10/usr/vacpp/bin/.orig/cc_r: 1501-245 (W) Warning: Hard ulimit has been reduced to less than RLIM_INFINITY. There may not be enough space to complete the compilation.
> > "/sp6/userexternal/CODICI/nek5_svn/trunk/nek/jl/gs.c", line 116.8: 1506-334 (S) Identifier unique_id has already been defined on line 649 of "/usr/include/sys/types.h".
> > "/sp6/userexternal/CODICI/nek5_svn/trunk/nek/jl/gs.c", line 124.13: 1506-022 (S) "id" is not a member of "struct unique_id".
> > "/sp6/userexternal/CODICI/nek5_svn/trunk/nek/jl/gs.c", line 125.13: 1506-022 (S) "work_proc" is not a member of "struct unique_id".
> > "/sp6/userexternal/CODICI/nek5_svn/trunk/nek/jl/gs.c", line 126.13: 1506-022 (S) "src_if" is not a member of "struct unique_id".
> > "/sp6/userexternal/CODICI/nek5_svn/trunk/nek/jl/gs.c", line 201.41: 1506-022 (S) "work_proc" is not a member of "struct unique_id".
> > "/sp6/userexternal/CODICI/nek5_svn/trunk/nek/jl/gs.c", line 203.53: 1506-022 (S) "src_if" is not a member of "struct unique_id".
> > "/sp6/userexternal/CODICI/nek5_svn/trunk/nek/jl/gs.c", line 203.53: 1506-022 (S) "src_if" is not a member of "struct unique_id".
> > "/sp6/userexternal/CODICI/nek5_svn/trunk/nek/jl/gs.c", line 203.47: 1506-022 (S) "id" is not a member of "struct unique_id".
> > "/sp6/userexternal/CODICI/nek5_svn/trunk/nek/jl/gs.c", line 203.47: 1506-022 (S) "id" is not a member of "struct unique_id".
> > "/sp6/userexternal/CODICI/nek5_svn/trunk/nek/jl/gs.c", line 206.24: 1506-022 (S) "id" is not a member of "struct unique_id".
> > "/sp6/userexternal/CODICI/nek5_svn/trunk/nek/jl/gs.c", line 207.17: 1506-022 (S) "src_if" is not a member of "struct unique_id".
> > "/sp6/userexternal/CODICI/nek5_svn/trunk/nek/jl/gs.c", line 207.32: 1506-022 (S) "src_if" is not a member of "struct unique_id".
> > "/sp6/userexternal/CODICI/nek5_svn/trunk/nek/jl/gs.c", line 210.30: 1506-022 (S) "id" is not a member of "struct unique_id".
> > "/sp6/userexternal/CODICI/nek5_svn/trunk/nek/jl/gs.c", line 220.24: 1506-022 (S) "id" is not a member of "struct unique_id".
> > "/sp6/userexternal/CODICI/nek5_svn/trunk/nek/jl/gs.c", line 221.23: 1506-022 (S) "work_proc" is not a member of "struct unique_id".
> > "/sp6/userexternal/CODICI/nek5_svn/trunk/nek/jl/gs.c", line 221.48: 1506-022 (S) "src_if" is not a member of "struct unique_id".
> > "/sp6/userexternal/CODICI/nek5_svn/trunk/nek/jl/gs.c", line 223.46: 1506-022 (S) "id" is not a member of "struct unique_id".
> > "/sp6/userexternal/CODICI/nek5_svn/trunk/nek/jl/gs.c", line 224.24: > 1506-022 (S) "work_proc" is not a member of "struct unique_id". > > "/sp6/userexternal/CODICI/nek5_svn/trunk/nek/jl/gs.c", line 224.48: > 1506-022 (S) "src_if" is not a member of "struct unique_id". > > make: *** [obj/jl_gs.o] Error 1 > > -bash-3.2$ > > > -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- > > > > I think that this is related to the compilation flags, which seems to be > the same as those employed with gcc......Did you encountered this problem > before? Do you have any suggestion to try to solve it? > > > > Thank you in advance for your help! > > > > Regards, > > Simone > > > > > > > > _______________________________________________ > > Nek5000-users mailing list > > Nek5000-users at lists.mcs.anl.gov > > https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users > > _______________________________________________ > Nek5000-users mailing list > Nek5000-users at lists.mcs.anl.gov > https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users > -- JC -------------- next part -------------- An HTML attachment was scrubbed... URL: From nek5000-users at lists.mcs.anl.gov Thu Sep 20 03:02:56 2012 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Thu, 20 Sep 2012 10:02:56 +0200 Subject: [Nek5000-users] neuman bc for the normal velocity component Message-ID: Dear Nekians, I'm relatively new to nek. I would like to compute the flow of a boundary layer and I would like to impose the following boundary condition on the top (2D) boundary, which is parallel to the wall: u = U_base and dv/dy = - dU_base/dx, where U_base is the known inviscid base flow. The velocity component u is parallel to the boundary, whereas v is normal. I tried to use 'ON ' but this did not give the result I was expecting (Am I doing smth. wrong?). At the boundary nodes u was equal to U_base, but at the very next node u was completly different (and v was of course completly wrong). Thanks for any help Joris From nek5000-users at lists.mcs.anl.gov Mon Sep 24 17:20:56 2012 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Tue, 25 Sep 2012 00:20:56 +0200 Subject: [Nek5000-users] how to choose lx1 Message-ID: Hi Justin, Thanks for your quick email. I killed the simulation after half an hour later, because it was still on the same line. I don't think my setup is that big. So I am not sure if I should wait more or not. Best, Mehmet > Hi Mehmet, How long is nek at the gs_setup line? It can take several minutes for large cases. The last error you're having is because you've run out of memory. Remember that the memory footprint on each processor scales as lx1^3 * lelt. With certain compilers you can add the following option in makenek: G="-mcmodel=medium" This allows you compile larger arrays, and generally use more memory. Best, Justin -------------- next part -------------- An HTML attachment was scrubbed... URL: From nek5000-users at lists.mcs.anl.gov Tue Sep 25 10:42:20 2012 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Tue, 25 Sep 2012 10:42:20 -0500 (CDT) Subject: [Nek5000-users] neuman bc for the normal velocity component In-Reply-To: References: Message-ID: Hi Joris, The usage you describe should work. 
I've put an example "blasius" in the examples directory.

Note that a common failure mode is one in which fluid is drawn _in_ to
the domain through the ON boundary. This can happen if the pressure is
such that the flow wants to come in rather than leave.

However, from what you describe, it sounds as if something else is going
wrong in your case. Perhaps looking at the example will help.

Paul

From nek5000-users at lists.mcs.anl.gov  Tue Sep 25 12:46:29 2012
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Tue, 25 Sep 2012 19:46:29 +0200
Subject: [Nek5000-users] neuman bc for the normal velocity component
In-Reply-To: 
References: 
Message-ID: 

Hi Paul,

thanks a lot for your help. Now I understand why 'ON ' did not work: the
inviscid flow for which we're trying to compute the boundary layer is a
pulse, which first has v > 0 (outflow) and then, after the crest has
passed, v < 0 (inflow). Therefore 'ON ' did not work.

I worked around it by imposing U_base and V_bl on the top boundary,
where U_base is the inviscid base flow and V_bl is the normal velocity
computed by a boundary-layer solver. However, just out of curiosity: do
you think imposing u = U_base and dv/dy = -dU_base/dx on the top
boundary would work in this case?

Thanks a lot!
Joris
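A minimal sketch of that workaround in the .usr file, assuming
hypothetical user functions ubase() and vbl() for the inviscid base flow
and the boundary-layer normal velocity, with the top boundary flagged
'v ' so that userbc supplies the Dirichlet data there:

c-----------------------------------------------------------------------
      subroutine userbc (ix,iy,iz,iside,ieg)
c     Sketch of the Dirichlet workaround described above.  ubase() and
c     vbl() are hypothetical user functions returning the inviscid base
c     flow and the boundary-layer normal velocity at this x and time.
      include 'SIZE'
      include 'TOTAL'
      include 'NEKUSE'
      real ubase,vbl
      ux = ubase(x,time)        ! u = U_base on the top boundary
      uy = vbl(x,time)          ! v = V_bl from the boundary-layer solver
      uz = 0.0
      return
      end
c-----------------------------------------------------------------------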
From nek5000-users at lists.mcs.anl.gov  Fri Sep 28 04:22:27 2012
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Fri, 28 Sep 2012 11:22:27 +0200
Subject: [Nek5000-users] Output of temperature gradient, scalar dissipation rate and convective flux
Message-ID: 

Hi,

thanks for the following suggestion concerning our post-processing,
where we need temperature gradients, the scalar dissipation rate and
convective heat fluxes:

      subroutine userchk
      :
      parameter (lt=lx1*ly1*lz1*lelt)
      common /mygrad/ tx(lt),ty(lt),tz(lt)
      :
      if (mod(istep,iostep).eq.0) then      ! , say,
         call gradm1(tx,ty,tz,t)
         call outpost(tx,ty,tz,pr,t,'gdt')  ! write to gdtblah.f000...
      endif
      :
      return
      end

It has, however, one shortcoming, namely the unnecessary additional
output of the pressure field, which we don't need for this analysis.

As far as I can see, when going down from the routine outpost -->
outpost2 --> prepost, this is a fixed structure, and the pressure field
cannot simply be substituted by another scalar field (e.g. the scalar
dissipation rate), since the latter might have a different dimension
(e.g. for the spaces Pn-Pn-2 which we are using).

--- Can this problem be solved without editing the routine prepost?
--- Is prepost_map inside prepost still necessary?

Thanks, Joerg.

From nek5000-users at lists.mcs.anl.gov  Fri Sep 28 04:50:22 2012
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Fri, 28 Sep 2012 04:50:22 -0500 (CDT)
Subject: [Nek5000-users] Output of temperature gradient, scalar dissipation rate and convective flux
In-Reply-To: 
Message-ID: 

Hi Joerg,

You can set ifpo=.false. just before the call to outpost. You can then
turn it back on.
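As a minimal sketch of that toggle, reusing the tx, ty, tz fields from
the userchk snippet above (ifpo is the logical flag that controls
pressure output):

      ifpo = .false.                       ! suppress pressure output
      call outpost(tx,ty,tz,pr,t,'gdt')    ! gdt file written without pressure
      ifpo = .true.                        ! restore the default for regular dumps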
You can also output multiple passive scalars. For this, however, you
need to call outpost2(), which has a slightly different interface. Have
a look at the source, and hopefully it will be clear how to proceed if
you wish to dump additional scalar fields that way.

For myself, I tend to just use outpost() and then insert whatever I need
in the vx-vy-vz slots, coupled with different prefixes, as in the
example below.

Paul

----- Original Message -----
From: nek5000-users at lists.mcs.anl.gov
To: nek5000-users at lists.mcs.anl.gov
Sent: Friday, September 28, 2012 10:22:27 AM
Subject: [Nek5000-users] Output of temperature gradient, scalar dissipation rate and convective flux

      if (mod(istep,iostep).eq.0) then      ! , say,
         call gradm1(tx,ty,tz,t)
         call outpost(tx,ty,tz,pr,t,'gdt')  ! write to gdtblah.f000...
      endif

From nek5000-users at lists.mcs.anl.gov  Fri Sep 28 08:06:12 2012
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Fri, 28 Sep 2012 15:06:12 +0200
Subject: [Nek5000-users] Output of temperature gradient, scalar dissipation rate and convective flux
In-Reply-To: 
References: 
Message-ID: 

Hi Paul,

seems to work:

      if ((mod(istep,iostep).eq.0).and.(istep.ge.iostep)) then

!-----Temperature gradient
         call gradm1(dtdx,dtdy,dtdz,t)

         do i=1,n

!-----Convective current
            velu   = vz(i,1,1,1)
            uzt(i) = t(i,1,1,1,1)*velu

!-----Thermal dissipation epsT=(grad T)**2/sqrt(RaPr)
            gradt2  = dtdx(i)**2+dtdy(i)**2+dtdz(i)**2
            epst(i) = gradt2*rapr

         enddo

!-------No pressure field, no scalar output
         ifpo = .false.
         ifto = .false.

         call outpost2(dtdx,dtdy,dtdz,pr,t,0,'gdt')
         call outpost2(   t, uzt,epsT,pr,t,0,'uzt')

         ifpo = .true.
         ifto = .true.

      endif

Best regards, Joerg.
From nek5000-users at lists.mcs.anl.gov  Fri Sep 28 08:21:08 2012
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Fri, 28 Sep 2012 08:21:08 -0500 (CDT)
Subject: [Nek5000-users] Output of temperature gradient, scalar dissipation rate and convective flux
In-Reply-To: 
References: 
Message-ID: 

Great -- Thanks Joerg!