From nek5000-users at lists.mcs.anl.gov Wed Aug 4 02:22:33 2010
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Wed, 4 Aug 2010 15:22:33 +0800
Subject: [Nek5000-users] Problem with the vortex breakdown example
Message-ID: 

Hello,

I ran the example case 'r1854' in \nek_svn\examples\escudier without changing anything, then loaded the results in VisIt 2.0. There seems to be some problem with the mesh; the image is attached. The nek source is the latest. How does this come about?

By the way, how can I make a similar cylindrical mesh? Are there examples or a detailed description?

Regards
Bofu Wang
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: mesh.png
Type: image/png
Size: 46105 bytes
Desc: not available
URL: 

From nek5000-users at lists.mcs.anl.gov Thu Aug 5 12:38:36 2010
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Thu, 5 Aug 2010 12:38:36 -0500 (CDT)
Subject: [Nek5000-users] Restart w/o saving over existing fld files?
Message-ID: <410747621.2314861281029916704.JavaMail.root@neo-mail-3>

Hello,

Is there a way to restart without having Nek save new blah.f files over existing ones, perhaps by telling it the number to start with?

Say I have run a simulation and have 50 fld files. I would like to restart from 50, but the next fld ideally should read 51, not 1. This makes it difficult to load into VisIt when doing restarts, I have found (unless there is an alternative trick to get around this?).

Thanks for any help with this,

Michael
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From nek5000-users at lists.mcs.anl.gov Thu Aug 5 12:49:00 2010
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Thu, 5 Aug 2010 10:49:00 -0700
Subject: [Nek5000-users] Restart w/o saving over existing fld files?
In-Reply-To: <410747621.2314861281029916704.JavaMail.root@neo-mail-3> References: <410747621.2314861281029916704.JavaMail.root@neo-mail-3> Message-ID: Hi Michael, You should be able to combine the restarts with a .visit file. So: % ls */*.nek3d run1/f.nek3d run2/f.nek3d % ls */*.nek3d > both_runs.visit % cat both_runs.visit run1/f.nek3d run2/f.nek3d Then you can open both_runs.visit and it will append the time series together. (Is this the problem you were encountering?) Best, Hank On Thu, Aug 5, 2010 at 10:38 AM, wrote: > Hello, > > Is there a way to restart without having Nek save new blah.f files over > existing ones, perhaps tell it the number to start with? > > Say I have run a simulation and have 50 fld files. I would like to restart > from 50, but the next fld ideally should read 51, not 1. This makes it > difficult to get into visit I have found when doing restarts (unless there > is alternative trick to get around this?) > > Thanks for any help with this , > > Michael > > _______________________________________________ > Nek5000-users mailing list > Nek5000-users at lists.mcs.anl.gov > https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nek5000-users at lists.mcs.anl.gov Thu Aug 5 13:27:05 2010 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Thu, 5 Aug 2010 13:27:05 -0500 (CDT) Subject: [Nek5000-users] Restart w/o saving over existing fld files? In-Reply-To: Message-ID: <1540073470.2326511281032825350.JavaMail.root@neo-mail-3> Hi Hank, Yes this is the exact issue, I will try it out. Thanks for the tip! - Michael ----- Original Message ----- From: nek5000-users at lists.mcs.anl.gov To: nek5000-users at lists.mcs.anl.gov Sent: Thursday, August 5, 2010 12:49:00 PM GMT -06:00 Guadalajara / Mexico City / Monterrey Subject: Re: [Nek5000-users] Restart w/o saving over existing fld files? 
Hi Michael, You should be able to combine the restarts with a .visit file. So: % ls */*.nek3d run1/f.nek3d run2/f.nek3d % ls */*.nek3d > both_runs.visit % cat both_runs.visit run1/f.nek3d run2/f.nek3d Then you can open both_runs.visit and it will append the time series together. (Is this the problem you were encountering?) Best, Hank On Thu, Aug 5, 2010 at 10:38 AM, < nek5000-users at lists.mcs.anl.gov > wrote: Hello, Is there a way to restart without having Nek save new blah.f files over existing ones, perhaps tell it the number to start with? Say I have run a simulation and have 50 fld files. I would like to restart from 50, but the next fld ideally should read 51, not 1. This makes it difficult to get into visit I have found when doing restarts (unless there is alternative trick to get around this?) Thanks for any help with this , Michael _______________________________________________ Nek5000-users mailing list Nek5000-users at lists.mcs.anl.gov https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users _______________________________________________ Nek5000-users mailing list Nek5000-users at lists.mcs.anl.gov https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users -------------- next part -------------- An HTML attachment was scrubbed... URL: From nek5000-users at lists.mcs.anl.gov Thu Aug 5 16:12:13 2010 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Thu, 5 Aug 2010 23:12:13 +0200 Subject: [Nek5000-users] Restart w/o saving over existing fld files? In-Reply-To: <410747621.2314861281029916704.JavaMail.root@neo-mail-3> References: <410747621.2314861281029916704.JavaMail.root@neo-mail-3> Message-ID: I typically use this 6-liner shell script to add an offset to my fld files. 
for i in `ls $1.f*`
do
   basename=`echo $i | awk 'BEGIN { FS = ".f" } ; { print $1 }'`
   oldid=`echo $i | awk 'BEGIN { FS = ".f" } ; { print $2 }'`
   mv $i `expr $oldid + $2 | xargs printf "$basename.f%04d\n"`
done

If you save the script as 'mvfld' and your fld files are {foo0.f0001, ...}, just run 'mvfld foo0 50' to add an offset of 50 to your fld files.

hth,
Stefan

On Aug 5, 2010, at 7:38 PM, wrote:

> Hello,
>
> Is there a way to restart without having Nek save new blah.f files over existing ones, perhaps tell it the number to start with?
>
> Say I have run a simulation and have 50 fld files. I would like to restart from 50, but the next fld ideally should read 51, not 1. This makes it difficult to get into visit I have found when doing restarts (unless there is alternative trick to get around this?)
>
> Thanks for any help with this ,
>
> Michael
> _______________________________________________
> Nek5000-users mailing list
> Nek5000-users at lists.mcs.anl.gov
> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From nek5000-users at lists.mcs.anl.gov Thu Aug 5 18:04:04 2010
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Fri, 06 Aug 2010 01:04:04 +0200
Subject: [Nek5000-users] Marangoni flow
Message-ID: <1281049444.7953.144.camel@localhost.localdomain>

Hello Paul,

Just wondering if you have had some time to think about the issue of including the Marangoni stress in a two-fluid model.

Cheers,
Frank
--
Frank Herbert Muldoon, Ph.D. Mechanical Engineering
Technische Universität Wien (Vienna Technical University)
Inst. f.
Strömungsmechanik und Wärmeübertragung (Institute of Fluid Mechanics and Heat Transfer)
Resselgasse 3
1040 Wien
Tel: +4315880132232
Fax: +4315880132299
Cell: +436765203470
fmuldoo (skype)
http://tetra.fluid.tuwien.ac.at/fmuldoo/public_html/webpage/frank-muldoon.html

From nek5000-users at lists.mcs.anl.gov Fri Aug 6 03:51:27 2010
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Fri, 06 Aug 2010 14:21:27 +0530
Subject: [Nek5000-users] Performance problem
Message-ID: <4C5BCD0F.8090306@iitk.ac.in>

Hi,

I'm solving for Rayleigh-Benard convection in a 3D box of 37632 4th-order elements. I fired the job on 512 processors on a machine with a quad-core, quad-socket configuration (32 nodes with 16 cores each) and a 20 Gbps InfiniBand interconnect. In 12 hours it has run 163 time steps. Is this normal, or is there maybe some way to improve performance? Attached is the SIZE file.

Regards,
Mani chandra
-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: SIZE
URL: 

From nek5000-users at lists.mcs.anl.gov Fri Aug 6 03:58:44 2010
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Fri, 6 Aug 2010 10:58:44 +0200
Subject: [Nek5000-users] Performance problem
In-Reply-To: <4C5BCD0F.8090306@iitk.ac.in>
References: <4C5BCD0F.8090306@iitk.ac.in>
Message-ID: <07A34D73-7E15-43DE-8F3C-8B4869244B62@lav.mavt.ethz.ch>

A logfile would help.
Stefan

On Aug 6, 2010, at 10:51 AM, wrote:

> Hi,
>
> I'm solving for Rayleigh-Benard convection in a 3D box of 37632, 4rth order elements. I fired the job on 512 processors on a machine with quad-core, quad socket configuration (32 nodes with 16 cores each ) with a 20 Gbps infiniband interconnect. In 12 hours it has run 163 time steps. Is this normal or is there maybe some way to improve performance? Attached is the SIZE file.
> > Regards, > Mani chandra > > _______________________________________________ > Nek5000-users mailing list > Nek5000-users at lists.mcs.anl.gov > https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users From nek5000-users at lists.mcs.anl.gov Fri Aug 6 04:12:34 2010 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Fri, 06 Aug 2010 14:42:34 +0530 Subject: [Nek5000-users] [*] Re: Performance problem In-Reply-To: <07A34D73-7E15-43DE-8F3C-8B4869244B62@lav.mavt.ethz.ch> References: <4C5BCD0F.8090306@iitk.ac.in> <07A34D73-7E15-43DE-8F3C-8B4869244B62@lav.mavt.ethz.ch> Message-ID: <4C5BD202.6090304@iitk.ac.in> On 08/06/2010 02:28 PM, nek5000-users at lists.mcs.anl.gov wrote: > A logfile would help. > Stefan > > On Aug 6, 2010, at 10:51 AM, wrote: > >> Hi, >> >> I'm solving for Rayleigh-Benard convection in a 3D box of 37632, 4rth order elements. I fired the job on 512 processors on a machine with quad-core, quad socket configuration (32 nodes with 16 cores each ) with a 20 Gbps infiniband interconnect. In 12 hours it has run 163 time steps. Is this normal or is there maybe some way to improve performance? Attached is the SIZE file. >> >> Regards, >> Mani chandra >> >> _______________________________________________ >> Nek5000-users mailing list >> Nek5000-users at lists.mcs.anl.gov >> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users > _______________________________________________ > Nek5000-users mailing list > Nek5000-users at lists.mcs.anl.gov > https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users > Hi, Attached is the output of the code. Mani -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... 
Name: output.file
URL: 

From nek5000-users at lists.mcs.anl.gov Fri Aug 6 04:13:18 2010
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Fri, 6 Aug 2010 11:13:18 +0200
Subject: [Nek5000-users] Performance problem
In-Reply-To: <4C5BCD0F.8090306@iitk.ac.in>
References: <4C5BCD0F.8090306@iitk.ac.in>
Message-ID: 

Dear Mani,

I haven't checked your logfile yet, but here are my first thoughts:

N=4 is low
Your polynomial order (N=4) is low, so the tensor-product formulation won't buy you much. The performance of all matrix-matrix multiplies (MxM) will be limited by memory access times. This is in particular a problem on multi-core and multi-socket machines; we have seen that the performance drop can be significant. On top of that, you carry around a large number of duplicate DOFs, and your surface-to-volume ratio is high (more communication).

Parallel Performance
Your gridpoint count per core (~4700) is quite small! On Blue Gene (BG) systems we can scale well (e.g. 70-80% parallel efficiency) with around 10k gridpoints per core. On other systems (e.g. Cray XT5) you need many more gridpoints per core (say 80k) because the network has a higher latency (NEK is sensitive to latency, not bandwidth) and the processors are much faster.

Cheers,
Stefan

On Aug 6, 2010, at 10:51 AM, wrote:

> Hi,
>
> I'm solving for Rayleigh-Benard convection in a 3D box of 37632, 4rth order elements. I fired the job on 512 processors on a machine with quad-core, quad socket configuration (32 nodes with 16 cores each ) with a 20 Gbps infiniband interconnect. In 12 hours it has run 163 time steps. Is this normal or is there maybe some way to improve performance? Attached is the SIZE file.
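[Editor's note: Stefan's ~4700-gridpoints-per-core figure can be reproduced with quick shell arithmetic from the numbers quoted in this thread (a sketch using only those values, not the actual SIZE file, which is not shown here):]

```shell
# Back-of-the-envelope gridpoints-per-core estimate (values from the thread).
nelt=37632    # total spectral elements, from the original post
N=4           # polynomial order, i.e. lx1 = N+1 = 5
ncores=512

# Unique gridpoints scale as nelt*N^3; locally stored points, which include
# the duplicated element-boundary values, scale as nelt*(N+1)^3.
unique=$((nelt * N * N * N))
stored=$((nelt * (N + 1) * (N + 1) * (N + 1)))

echo "unique gridpoints/core: $((unique / ncores))"   # 4704 -- the ~4700 above
echo "stored gridpoints/core: $((stored / ncores))"
```

At N=4 nearly half the stored points are duplicates, which is the "large number of duplicate DOF" cost Stefan mentions.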
>
> Regards,
> Mani chandra
>
> _______________________________________________
> Nek5000-users mailing list
> Nek5000-users at lists.mcs.anl.gov
> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users

From nek5000-users at lists.mcs.anl.gov Fri Aug 6 04:48:15 2010
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Fri, 6 Aug 2010 11:48:15 +0200
Subject: [Nek5000-users] Performance problem
In-Reply-To: 
References: <4C5BCD0F.8090306@iitk.ac.in>
Message-ID: <175E47B2-1097-4F88-AA80-2F3C14EA3EAB@lav.mavt.ethz.ch>

Ok, here are some suggestions to improve the performance:

- set the timestep, param(12), to -3e-5
- set param(102) and param(103) to 5 (this will turn on the residual projection)
- increase lgmres (in SIZE) to 40
- you may want to tune the Helmholtz (velocity) and pressure tolerances (e.g. 1e-8 and 1e-5)

btw: what's the Reynolds number of this flow?

Stefan

On Aug 6, 2010, at 11:13 AM, wrote:

> Dear Mani,
>
> I haven't checked your logfile yet but there are my first thoughts:
>
> N=4 is low
> Your polynomial order (N=4) is low and the tensor-product formulation won't buy you much. The performance of all matrix-matrix multiplies (MxM) will limited by the memory access times. This is in particular a problem on multi-core and multi-socket machines. We have seen that the performance drop can be significant.
> On top of that you carry around a large number of duplicate DOF and your surface to volume ratio is high (more communication).
>
> Parallel Performance
> Your gridpoints per core (~4700) is quite small!
> On Blue Gene (BG) systems we can scale well (e.g. 70-80% parallel efficiency) with around 10k gridpoints per core. On other system (e.g. Cray XT5) you need much more gridpoints per core (say 80k) because the network has a higher latency (NEK is sensitive to latency not bandwidth) and the processors are much faster.
> > Cheers, > Stefan > > On Aug 6, 2010, at 10:51 AM, wrote: > >> Hi, >> >> I'm solving for Rayleigh-Benard convection in a 3D box of 37632, 4rth order elements. I fired the job on 512 processors on a machine with quad-core, quad socket configuration (32 nodes with 16 cores each ) with a 20 Gbps infiniband interconnect. In 12 hours it has run 163 time steps. Is this normal or is there maybe some way to improve performance? Attached is the SIZE file. >> >> Regards, >> Mani chandra >> >> _______________________________________________ >> Nek5000-users mailing list >> Nek5000-users at lists.mcs.anl.gov >> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users > > _______________________________________________ > Nek5000-users mailing list > Nek5000-users at lists.mcs.anl.gov > https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users From nek5000-users at lists.mcs.anl.gov Fri Aug 6 04:55:57 2010 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Fri, 6 Aug 2010 04:55:57 -0500 (CDT) Subject: [Nek5000-users] Performance problem In-Reply-To: <175E47B2-1097-4F88-AA80-2F3C14EA3EAB@lav.mavt.ethz.ch> References: <4C5BCD0F.8090306@iitk.ac.in> <175E47B2-1097-4F88-AA80-2F3C14EA3EAB@lav.mavt.ethz.ch> Message-ID: > - set param(102) and param(103) to 5 (this will turn on the residual projection) Should this be param 94 & 95 ? - Paul On Fri, 6 Aug 2010, nek5000-users at lists.mcs.anl.gov wrote: > Ok here are some suggestions to improve the performance: > > - set timestep, param(12), to -3e-5 > - set param(102) and param(103) to 5 (this will turn on the residual projection) > - increase lgmres (in SIZE) to 40 > - you have want to tune the Helmholtz (velocity) and pressure tolerance (e.g. 1e-8 and 1e-5) > > btw: what's the Reynolds number of this flow? 
> > > Stefan > > > On Aug 6, 2010, at 11:13 AM, wrote: > >> Dear Mani, >> >> I haven't checked your logfile yet but there are my first thoughts: >> >> N=4 is low >> Your polynomial order (N=4) is low and the tensor-product formulation won't buy you much. The performance of all matrix-matrix multiplies (MxM) will limited by the memory access times. This is in particular a problem on multi-core and multi-socket machines. We have seen that the performance drop can be significant. >> On top of that you carry around a large number of duplicate DOF and your surface to volume ratio is high (more communication). I >> >> >> Parallel Performance >> Your gridpoints per core (~4700) is quite small! >> On Blue Gene (BG) systems we can scale well (e.g. 70-80% parallel efficiency) with around 10k gridpoints per core. On other system (e.g. Cray XT5) you need much more gridpoints per core (say 80k) because the network has a higher latency (NEK is sensitive to latency not bandwidth) and the processors are much faster. >> >> Cheers, >> Stefan >> >> On Aug 6, 2010, at 10:51 AM, wrote: >> >>> Hi, >>> >>> I'm solving for Rayleigh-Benard convection in a 3D box of 37632, 4rth order elements. I fired the job on 512 processors on a machine with quad-core, quad socket configuration (32 nodes with 16 cores each ) with a 20 Gbps infiniband interconnect. In 12 hours it has run 163 time steps. Is this normal or is there maybe some way to improve performance? Attached is the SIZE file. 
>>> >>> Regards, >>> Mani chandra >>> >>> _______________________________________________ >>> Nek5000-users mailing list >>> Nek5000-users at lists.mcs.anl.gov >>> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users >> >> _______________________________________________ >> Nek5000-users mailing list >> Nek5000-users at lists.mcs.anl.gov >> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users > > _______________________________________________ > Nek5000-users mailing list > Nek5000-users at lists.mcs.anl.gov > https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users > From nek5000-users at lists.mcs.anl.gov Fri Aug 6 05:00:36 2010 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Fri, 6 Aug 2010 12:00:36 +0200 Subject: [Nek5000-users] Performance problem In-Reply-To: References: <4C5BCD0F.8090306@iitk.ac.in> <175E47B2-1097-4F88-AA80-2F3C14EA3EAB@lav.mavt.ethz.ch> Message-ID: <69294E47-C3BB-4518-B8FF-94E01F1999B7@lav.mavt.ethz.ch> Yes! On Aug 6, 2010, at 11:55 AM, wrote: > > >> - set param(102) and param(103) to 5 (this will turn on the residual projection) > > Should this be param 94 & 95 ? > > - Paul > > > > > On Fri, 6 Aug 2010, nek5000-users at lists.mcs.anl.gov wrote: > >> Ok here are some suggestions to improve the performance: >> >> - set timestep, param(12), to -3e-5 >> - set param(102) and param(103) to 5 (this will turn on the residual projection) >> - increase lgmres (in SIZE) to 40 >> - you have want to tune the Helmholtz (velocity) and pressure tolerance (e.g. 1e-8 and 1e-5) >> >> btw: what's the Reynolds number of this flow? >> >> >> Stefan >> >> >> On Aug 6, 2010, at 11:13 AM, wrote: >> >>> Dear Mani, >>> >>> I haven't checked your logfile yet but there are my first thoughts: >>> >>> N=4 is low >>> Your polynomial order (N=4) is low and the tensor-product formulation won't buy you much. The performance of all matrix-matrix multiplies (MxM) will limited by the memory access times. 
This is in particular a problem on multi-core and multi-socket machines. We have seen that the performance drop can be significant. >>> On top of that you carry around a large number of duplicate DOF and your surface to volume ratio is high (more communication). I >>> >>> >>> Parallel Performance >>> Your gridpoints per core (~4700) is quite small! >>> On Blue Gene (BG) systems we can scale well (e.g. 70-80% parallel efficiency) with around 10k gridpoints per core. On other system (e.g. Cray XT5) you need much more gridpoints per core (say 80k) because the network has a higher latency (NEK is sensitive to latency not bandwidth) and the processors are much faster. >>> >>> Cheers, >>> Stefan >>> >>> On Aug 6, 2010, at 10:51 AM, wrote: >>> >>>> Hi, >>>> >>>> I'm solving for Rayleigh-Benard convection in a 3D box of 37632, 4rth order elements. I fired the job on 512 processors on a machine with quad-core, quad socket configuration (32 nodes with 16 cores each ) with a 20 Gbps infiniband interconnect. In 12 hours it has run 163 time steps. Is this normal or is there maybe some way to improve performance? Attached is the SIZE file. 
>>>> >>>> Regards, >>>> Mani chandra >>>> >>>> _______________________________________________ >>>> Nek5000-users mailing list >>>> Nek5000-users at lists.mcs.anl.gov >>>> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users >>> >>> _______________________________________________ >>> Nek5000-users mailing list >>> Nek5000-users at lists.mcs.anl.gov >>> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users >> >> _______________________________________________ >> Nek5000-users mailing list >> Nek5000-users at lists.mcs.anl.gov >> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users >> > _______________________________________________ > Nek5000-users mailing list > Nek5000-users at lists.mcs.anl.gov > https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users From nek5000-users at lists.mcs.anl.gov Fri Aug 6 06:04:03 2010 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Fri, 6 Aug 2010 06:04:03 -0500 (CDT) Subject: [Nek5000-users] Performance problem In-Reply-To: References: <4C5BCD0F.8090306@iitk.ac.in> <175E47B2-1097-4F88-AA80-2F3C14EA3EAB@lav.mavt.ethz.ch> Message-ID: It also looks like your tolerances are a bit tight. tolrel of 1.e-3 or 1.e-4 should be plenty (even 1.e-2). Paul On Fri, 6 Aug 2010, nek5000-users at lists.mcs.anl.gov wrote: > > >> - set param(102) and param(103) to 5 (this will turn on the residual >> projection) > > Should this be param 94 & 95 ? > > - Paul > > > > > On Fri, 6 Aug 2010, nek5000-users at lists.mcs.anl.gov wrote: > >> Ok here are some suggestions to improve the performance: >> >> - set timestep, param(12), to -3e-5 >> - set param(102) and param(103) to 5 (this will turn on the residual >> projection) >> - increase lgmres (in SIZE) to 40 >> - you have want to tune the Helmholtz (velocity) and pressure tolerance >> (e.g. 1e-8 and 1e-5) >> >> btw: what's the Reynolds number of this flow? 
>> >> >> Stefan >> >> >> On Aug 6, 2010, at 11:13 AM, >> wrote: >> >>> Dear Mani, >>> >>> I haven't checked your logfile yet but there are my first thoughts: >>> >>> N=4 is low >>> Your polynomial order (N=4) is low and the tensor-product formulation >>> won't buy you much. The performance of all matrix-matrix multiplies (MxM) >>> will limited by the memory access times. This is in particular a problem >>> on multi-core and multi-socket machines. We have seen that the performance >>> drop can be significant. >>> On top of that you carry around a large number of duplicate DOF and your >>> surface to volume ratio is high (more communication). I >>> >>> >>> Parallel Performance >>> Your gridpoints per core (~4700) is quite small! >>> On Blue Gene (BG) systems we can scale well (e.g. 70-80% parallel >>> efficiency) with around 10k gridpoints per core. On other system (e.g. >>> Cray XT5) you need much more gridpoints per core (say 80k) because the >>> network has a higher latency (NEK is sensitive to latency not bandwidth) >>> and the processors are much faster. >>> >>> Cheers, >>> Stefan >>> >>> On Aug 6, 2010, at 10:51 AM, wrote: >>> >>>> Hi, >>>> >>>> I'm solving for Rayleigh-Benard convection in a 3D box of 37632, 4rth >>>> order elements. I fired the job on 512 processors on a machine with >>>> quad-core, quad socket configuration (32 nodes with 16 cores each ) with >>>> a 20 Gbps infiniband interconnect. In 12 hours it has run 163 time steps. >>>> Is this normal or is there maybe some way to improve performance? >>>> Attached is the SIZE file. 
>>>>
>>>> Regards,
>>>> Mani chandra
>>>>
>>>> _______________________________________________
>>>> Nek5000-users mailing list
>>>> Nek5000-users at lists.mcs.anl.gov
>>>> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users
>>>
>>> _______________________________________________
>>> Nek5000-users mailing list
>>> Nek5000-users at lists.mcs.anl.gov
>>> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users
>>
>> _______________________________________________
>> Nek5000-users mailing list
>> Nek5000-users at lists.mcs.anl.gov
>> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users
>>
> _______________________________________________
> Nek5000-users mailing list
> Nek5000-users at lists.mcs.anl.gov
> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users
>

From nek5000-users at lists.mcs.anl.gov Fri Aug 6 06:09:13 2010
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Fri, 6 Aug 2010 06:09:13 -0500 (CDT)
Subject: [Nek5000-users] Performance problem
In-Reply-To: 
References: <4C5BCD0F.8090306@iitk.ac.in> <175E47B2-1097-4F88-AA80-2F3C14EA3EAB@lav.mavt.ethz.ch>
Message-ID: 

If you set p94 & p95 to 5 (say), I strongly recommend setting p12 (dt) to

-3e-5

in your case. The reason for this is that the projection scheme is much more stable for a fixed dt.

On the whole, however, Stefan's earlier comments about using, say, lx1=8 and fewer elements point to a better strategy. It's also possible that we should switch your coarse-grid solve at this scale to AMG.

Paul

On Fri, 6 Aug 2010, nek5000-users at lists.mcs.anl.gov wrote:

>
>> - set param(102) and param(103) to 5 (this will turn on the residual projection)
>
> Should this be param 94 & 95 ?
> > - Paul > > > > > On Fri, 6 Aug 2010, nek5000-users at lists.mcs.anl.gov wrote: > >> Ok here are some suggestions to improve the performance: >> >> - set timestep, param(12), to -3e-5 >> - set param(102) and param(103) to 5 (this will turn on the residual >> projection) >> - increase lgmres (in SIZE) to 40 >> - you have want to tune the Helmholtz (velocity) and pressure tolerance >> (e.g. 1e-8 and 1e-5) >> >> btw: what's the Reynolds number of this flow? >> >> >> Stefan >> >> >> On Aug 6, 2010, at 11:13 AM, >> wrote: >> >>> Dear Mani, >>> >>> I haven't checked your logfile yet but there are my first thoughts: >>> >>> N=4 is low >>> Your polynomial order (N=4) is low and the tensor-product formulation >>> won't buy you much. The performance of all matrix-matrix multiplies (MxM) >>> will limited by the memory access times. This is in particular a problem >>> on multi-core and multi-socket machines. We have seen that the performance >>> drop can be significant. >>> On top of that you carry around a large number of duplicate DOF and your >>> surface to volume ratio is high (more communication). I >>> >>> >>> Parallel Performance >>> Your gridpoints per core (~4700) is quite small! >>> On Blue Gene (BG) systems we can scale well (e.g. 70-80% parallel >>> efficiency) with around 10k gridpoints per core. On other system (e.g. >>> Cray XT5) you need much more gridpoints per core (say 80k) because the >>> network has a higher latency (NEK is sensitive to latency not bandwidth) >>> and the processors are much faster. >>> >>> Cheers, >>> Stefan >>> >>> On Aug 6, 2010, at 10:51 AM, wrote: >>> >>>> Hi, >>>> >>>> I'm solving for Rayleigh-Benard convection in a 3D box of 37632, 4rth >>>> order elements. I fired the job on 512 processors on a machine with >>>> quad-core, quad socket configuration (32 nodes with 16 cores each ) with >>>> a 20 Gbps infiniband interconnect. In 12 hours it has run 163 time steps. 
>>>> Is this normal or is there maybe some way to improve performance? >>>> Attached is the SIZE file. >>>> >>>> Regards, >>>> Mani chandra >>>> >>>> _______________________________________________ >>>> Nek5000-users mailing list >>>> Nek5000-users at lists.mcs.anl.gov >>>> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users >>> >>> _______________________________________________ >>> Nek5000-users mailing list >>> Nek5000-users at lists.mcs.anl.gov >>> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users >> >> _______________________________________________ >> Nek5000-users mailing list >> Nek5000-users at lists.mcs.anl.gov >> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users >> > _______________________________________________ > Nek5000-users mailing list > Nek5000-users at lists.mcs.anl.gov > https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users > From nek5000-users at lists.mcs.anl.gov Fri Aug 6 08:08:57 2010 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Fri, 6 Aug 2010 08:08:57 -0500 (CDT) Subject: [Nek5000-users] Performance problem In-Reply-To: References: <4C5BCD0F.8090306@iitk.ac.in> <175E47B2-1097-4F88-AA80-2F3C14EA3EAB@lav.mavt.ethz.ch> Message-ID: Mani, I think there must be something else wrong ... I'm seeing about 5 sec/step on a 64 proc. linux cluster. If you'd like to send me a gzip'd file w/ the essentials, contact me off-list (fischer at mcs.anl.gov) and I can take a closer look. Paul On Fri, 6 Aug 2010, nek5000-users at lists.mcs.anl.gov wrote: > > If you set p94 & p95 to 5 (say), I recommend strongly to set p12 (dt) to > > -3e-5 > > in your case. The reason for this is that the projection scheme is much > more stable for fixed dt. > > On the whole, however, Stefan's earlier comments about using, say, lx1=8 > and fewer elements is a better strategy. It's also possible that we should > switch your coarse-grid solve at this scale to AMG. 
> > Paul > > > On Fri, 6 Aug 2010, nek5000-users at lists.mcs.anl.gov wrote: > >> >> >>> - set param(102) and param(103) to 5 (this will turn on the residual >>> projection) >> >> Should this be param 94 & 95 ? >> >> - Paul >> >> >> >> >> On Fri, 6 Aug 2010, nek5000-users at lists.mcs.anl.gov wrote: >> >>> Ok here are some suggestions to improve the performance: >>> >>> - set timestep, param(12), to -3e-5 >>> - set param(102) and param(103) to 5 (this will turn on the residual >>> projection) >>> - increase lgmres (in SIZE) to 40 >>> - you have want to tune the Helmholtz (velocity) and pressure tolerance >>> (e.g. 1e-8 and 1e-5) >>> >>> btw: what's the Reynolds number of this flow? >>> >>> >>> Stefan >>> >>> >>> On Aug 6, 2010, at 11:13 AM, >>> wrote: >>> >>>> Dear Mani, >>>> >>>> I haven't checked your logfile yet but there are my first thoughts: >>>> >>>> N=4 is low >>>> Your polynomial order (N=4) is low and the tensor-product formulation >>>> won't buy you much. The performance of all matrix-matrix multiplies (MxM) >>>> will limited by the memory access times. This is in particular a problem >>>> on multi-core and multi-socket machines. We have seen that the >>>> performance drop can be significant. >>>> On top of that you carry around a large number of duplicate DOF and your >>>> surface to volume ratio is high (more communication). I >>>> >>>> >>>> Parallel Performance >>>> Your gridpoints per core (~4700) is quite small! >>>> On Blue Gene (BG) systems we can scale well (e.g. 70-80% parallel >>>> efficiency) with around 10k gridpoints per core. On other system (e.g. >>>> Cray XT5) you need much more gridpoints per core (say 80k) because the >>>> network has a higher latency (NEK is sensitive to latency not bandwidth) >>>> and the processors are much faster. >>>> >>>> Cheers, >>>> Stefan >>>> >>>> On Aug 6, 2010, at 10:51 AM, wrote: >>>> >>>>> Hi, >>>>> >>>>> I'm solving for Rayleigh-Benard convection in a 3D box of 37632, 4rth >>>>> order elements. 
I fired the job on 512 processors on a machine with >>>>> quad-core, quad socket configuration (32 nodes with 16 cores each ) with >>>>> a 20 Gbps infiniband interconnect. In 12 hours it has run 163 time >>>>> steps. Is this normal or is there maybe some way to improve performance? >>>>> Attached is the SIZE file. >>>>> >>>>> Regards, >>>>> Mani chandra >>>>> >>>>> _______________________________________________ >>>>> Nek5000-users mailing list >>>>> Nek5000-users at lists.mcs.anl.gov >>>>> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users >>>> >>>> _______________________________________________ >>>> Nek5000-users mailing list >>>> Nek5000-users at lists.mcs.anl.gov >>>> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users >>> >>> _______________________________________________ >>> Nek5000-users mailing list >>> Nek5000-users at lists.mcs.anl.gov >>> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users >>> >> _______________________________________________ >> Nek5000-users mailing list >> Nek5000-users at lists.mcs.anl.gov >> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users >> > _______________________________________________ > Nek5000-users mailing list > Nek5000-users at lists.mcs.anl.gov > https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users > From nek5000-users at lists.mcs.anl.gov Fri Aug 6 08:19:17 2010 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Fri, 6 Aug 2010 15:19:17 +0200 Subject: [Nek5000-users] Performance problem In-Reply-To: References: <4C5BCD0F.8090306@iitk.ac.in> <175E47B2-1097-4F88-AA80-2F3C14EA3EAB@lav.mavt.ethz.ch> Message-ID: <216ABACC-1300-49CA-85ED-A8E97AD81BFD@lav.mavt.ethz.ch> I guess your system has a problem. The TEMP solve takes ~0.14 sec for step 6 and 22.2 sec for step 7. Overall step 6 takes 8.4 sec and step 7 51.1 sec although the iteration counts are very similar! 
Stefan Check this: 0: Step 6, t= 2.0307248E-01, DT= 3.0347043E-05, C= 2.043 4.8383E+01 3.0220E+00 0: Solving for heat 0: Solving for fluid 0: 0.000000000000000E+000 p22 6 2 0: 6 Hmholtz TEMP: 64 7.0899E-08 7.6377E+01 7.7888E-08 0: 6 2.0307E-01 1.3816E-01 Heat done 0: 0.000000000000000E+000 p22 6 1 0: 6 Hmholtz VELX: 57 1.0070E-05 5.8954E+04 1.2278E-05 0: 0.000000000000000E+000 p22 6 1 0: 6 Hmholtz VELY: 56 1.1755E-05 5.8020E+04 1.2278E-05 0: 0.000000000000000E+000 p22 6 1 0: 6 Hmholtz VELZ: 57 1.0011E-05 7.7873E+04 1.2278E-05 0: 6 U-Pres gmres: 48 1.6044E-09 2.3009E-09 2.3009E+00 4.5887E+00 6.8322E+00 0: 6 DNORM, DIVEX 1.604430720383758E-009 1.604426975107925E-009 0: 6 2.0307E-01 7.4035E+00 Fluid done 0: Step 7, t= 2.0310283E-01, DT= 3.0347043E-05, C= 2.052 5.6850E+01 8.4673E+00 0: Solving for heat 0: Solving for fluid 0: 0.000000000000000E+000 p22 7 2 0: 7 Hmholtz TEMP: 64 6.9851E-08 7.6526E+01 7.8021E-08 0: 7 2.0310E-01 2.2240E+01 Heat done 0: 0.000000000000000E+000 p22 7 1 0: 7 Hmholtz VELX: 57 1.0101E-05 5.8874E+04 1.2295E-05 0: 0.000000000000000E+000 p22 7 1 0: 7 Hmholtz VELY: 56 1.1723E-05 5.7947E+04 1.2295E-05 0: 0.000000000000000E+000 p22 7 1 0: 7 Hmholtz VELZ: 57 9.9682E-06 7.7881E+04 1.2295E-05 0: 7 U-Pres gmres: 48 1.6001E-09 2.2892E-09 2.2892E+00 1.9881E+00 3.3138E+00 0: 7 DNORM, DIVEX 1.600110264916237E-009 1.600109913837109E-009 0: 7 2.0310E-01 1.9966E+01 Fluid done 0: Step 8, t= 2.0313318E-01, DT= 3.0347043E-05, C= 2.060 1.0837E+02 5.1516E+01 On Aug 6, 2010, at 3:08 PM, wrote: > > Mani, > > I think there must be something else wrong ... I'm seeing about > 5 sec/step on a 64 proc. linux cluster. > > If you'd like to send me a gzip'd file w/ the essentials, contact > me off-list (fischer at mcs.anl.gov) and I can take a closer look. > > Paul > > > On Fri, 6 Aug 2010, nek5000-users at lists.mcs.anl.gov wrote: > >> >> If you set p94 & p95 to 5 (say), I recommend strongly to set p12 (dt) to >> >> -3e-5 >> >> in your case. 
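In a .rea file, the settings discussed above correspond to parameter lines along these lines (values taken from the advice in this thread; the descriptive comment strings are approximate and vary between case files, so treat this as a sketch, not a template):

```
  -3.00000E-05     p012 DT          (negative => fixed dt, |p12| = 3e-5)
   1.00000E-05     p021 DIVERGENCE  (pressure tolerance)
   1.00000E-08     p022 HELMHOLTZ   (velocity tolerance)
   5.00000E+00     p094 # prior solutions for velocity projection
   5.00000E+00     p095 # prior solutions for pressure projection
```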
The reason for this is that the projection scheme is much >> more stable for fixed dt. >> >> On the whole, however, Stefan's earlier comments about using, say, lx1=8 >> and fewer elements are a better strategy. It's also possible that we should switch your coarse-grid solve at this scale to AMG. >> >> Paul >> >> >> On Fri, 6 Aug 2010, nek5000-users at lists.mcs.anl.gov wrote: >> >>>> - set param(102) and param(103) to 5 (this will turn on the residual projection) >>> >>> Should this be param 94 & 95 ? >>> >>> - Paul >>> On Fri, 6 Aug 2010, nek5000-users at lists.mcs.anl.gov wrote: >>>> Ok, here are some suggestions to improve the performance: >>>> - set the timestep, param(12), to -3e-5 >>>> - set param(102) and param(103) to 5 (this will turn on the residual projection) >>>> - increase lgmres (in SIZE) to 40 >>>> - you may want to tune the Helmholtz (velocity) and pressure tolerances (e.g. 1e-8 and 1e-5) >>>> btw: what's the Reynolds number of this flow? >>>> Stefan >>>> On Aug 6, 2010, at 11:13 AM, wrote: >>>>> Dear Mani, >>>>> I haven't checked your logfile yet, but here are my first thoughts: >>>>> N=4 is low >>>>> Your polynomial order (N=4) is low and the tensor-product formulation won't buy you much. The performance of all matrix-matrix multiplies (MxM) will be limited by memory access times. This is a particular problem on multi-core and multi-socket machines. We have seen that the performance drop can be significant. >>>>> On top of that, you carry around a large number of duplicate DOF, and your surface-to-volume ratio is high (more communication). >>>>> Parallel Performance >>>>> Your gridpoint count per core (~4700) is quite small! >>>>> On Blue Gene (BG) systems we can scale well (e.g. 70-80% parallel efficiency) with around 10k gridpoints per core. On other systems (e.g. Cray XT5) you need many more gridpoints per core (say 80k) because the network has higher latency (NEK is sensitive to latency, not bandwidth) and the processors are much faster. 
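Stefan's "~4700 gridpoints per core" follows directly from the run parameters quoted in this thread (37632 elements, polynomial order N=4, 512 cores), counting unique gridpoints as roughly E*N^3. A quick sketch, which also shows the duplicate-DOF overhead he mentions:

```python
E, N, P = 37632, 4, 512          # elements, polynomial order, MPI ranks (from the thread)

unique_points = E * N**3         # unique gridpoints: interface points counted once
stored_points = E * (N + 1)**3   # points each rank actually stores, duplicates included

print(unique_points // P)             # -> 4704, Stefan's "~4700 gridpoints per core"
print(stored_points / unique_points)  # -> 1.953125, nearly 2x duplicated storage at N=4
```

At N=8 the same ratio (9^3/8^3 ~ 1.42) is much milder, which is part of why higher order with fewer elements is the better strategy.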
>>>>> Cheers, >>>>> Stefan >>>>> On Aug 6, 2010, at 10:51 AM, wrote: >>>>>> Hi, >>>>>> >>>>>> I'm solving for Rayleigh-Benard convection in a 3D box of 37632, 4rth order elements. I fired the job on 512 processors on a machine with quad-core, quad socket configuration (32 nodes with 16 cores each ) with a 20 Gbps infiniband interconnect. In 12 hours it has run 163 time steps. Is this normal or is there maybe some way to improve performance? Attached is the SIZE file. >>>>>> Regards, >>>>>> Mani chandra >>>>>> _______________________________________________ >>>>>> Nek5000-users mailing list >>>>>> Nek5000-users at lists.mcs.anl.gov >>>>>> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users >>>>> _______________________________________________ >>>>> Nek5000-users mailing list >>>>> Nek5000-users at lists.mcs.anl.gov >>>>> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users >>>> _______________________________________________ >>>> Nek5000-users mailing list >>>> Nek5000-users at lists.mcs.anl.gov >>>> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users >>> _______________________________________________ >>> Nek5000-users mailing list >>> Nek5000-users at lists.mcs.anl.gov >>> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users >> _______________________________________________ >> Nek5000-users mailing list >> Nek5000-users at lists.mcs.anl.gov >> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users >> > _______________________________________________ > Nek5000-users mailing list > Nek5000-users at lists.mcs.anl.gov > https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users From nek5000-users at lists.mcs.anl.gov Fri Aug 6 10:57:23 2010 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Fri, 06 Aug 2010 21:27:23 +0530 Subject: [Nek5000-users] [*] Re: Performance problem In-Reply-To: <216ABACC-1300-49CA-85ED-A8E97AD81BFD@lav.mavt.ethz.ch> References: <4C5BCD0F.8090306@iitk.ac.in> 
<175E47B2-1097-4F88-AA80-2F3C14EA3EAB@lav.mavt.ethz.ch> <216ABACC-1300-49CA-85ED-A8E97AD81BFD@lav.mavt.ethz.ch> Message-ID: <4C5C30E3.8090307@iitk.ac.in> This is weird. I have a 2D equivalent of this system, with the same parameters, mesh structure and it is quite fast. On 08/06/2010 06:49 PM, nek5000-users at lists.mcs.anl.gov wrote: > I guess your system has a problem. The TEMP solve takes ~0.14 sec for step 6 and 22.2 sec for step 7. Overall step 6 takes 8.4 sec and step 7 51.1 sec although the iteration counts are very similar! > > Stefan > > > > Check this: > > 0: Step 6, t= 2.0307248E-01, DT= 3.0347043E-05, C= 2.043 4.8383E+01 3.0220E+00 > 0: Solving for heat > 0: Solving for fluid > 0: 0.000000000000000E+000 p22 6 2 > 0: 6 Hmholtz TEMP: 64 7.0899E-08 7.6377E+01 7.7888E-08 > 0: 6 2.0307E-01 1.3816E-01 Heat done > 0: 0.000000000000000E+000 p22 6 1 > 0: 6 Hmholtz VELX: 57 1.0070E-05 5.8954E+04 1.2278E-05 > 0: 0.000000000000000E+000 p22 6 1 > 0: 6 Hmholtz VELY: 56 1.1755E-05 5.8020E+04 1.2278E-05 > 0: 0.000000000000000E+000 p22 6 1 > 0: 6 Hmholtz VELZ: 57 1.0011E-05 7.7873E+04 1.2278E-05 > 0: 6 U-Pres gmres: 48 1.6044E-09 2.3009E-09 2.3009E+00 4.5887E+00 6.8322E+00 > 0: 6 DNORM, DIVEX 1.604430720383758E-009 1.604426975107925E-009 > 0: 6 2.0307E-01 7.4035E+00 Fluid done > 0: Step 7, t= 2.0310283E-01, DT= 3.0347043E-05, C= 2.052 5.6850E+01 8.4673E+00 > 0: Solving for heat > 0: Solving for fluid > 0: 0.000000000000000E+000 p22 7 2 > 0: 7 Hmholtz TEMP: 64 6.9851E-08 7.6526E+01 7.8021E-08 > 0: 7 2.0310E-01 2.2240E+01 Heat done > 0: 0.000000000000000E+000 p22 7 1 > 0: 7 Hmholtz VELX: 57 1.0101E-05 5.8874E+04 1.2295E-05 > 0: 0.000000000000000E+000 p22 7 1 > 0: 7 Hmholtz VELY: 56 1.1723E-05 5.7947E+04 1.2295E-05 > 0: 0.000000000000000E+000 p22 7 1 > 0: 7 Hmholtz VELZ: 57 9.9682E-06 7.7881E+04 1.2295E-05 > 0: 7 U-Pres gmres: 48 1.6001E-09 2.2892E-09 2.2892E+00 1.9881E+00 3.3138E+00 > 0: 7 DNORM, DIVEX 1.600110264916237E-009 1.600109913837109E-009 > 0: 7 2.0310E-01 
1.9966E+01 Fluid done > 0: Step 8, t= 2.0313318E-01, DT= 3.0347043E-05, C= 2.060 1.0837E+02 5.1516E+01 > > On Aug 6, 2010, at 3:08 PM, wrote: > >> Mani, >> >> I think there must be something else wrong ... I'm seeing about >> 5 sec/step on a 64 proc. linux cluster. >> >> If you'd like to send me a gzip'd file w/ the essentials, contact >> me off-list (fischer at mcs.anl.gov) and I can take a closer look. >> >> Paul >> >> >> On Fri, 6 Aug 2010, nek5000-users at lists.mcs.anl.gov wrote: >> >>> If you set p94& p95 to 5 (say), I recommend strongly to set p12 (dt) to >>> >>> -3e-5 >>> >>> in your case. The reason for this is that the projection scheme is much >>> more stable for fixed dt. >>> >>> On the whole, however, Stefan's earlier comments about using, say, lx1=8 >>> and fewer elements is a better strategy. It's also possible that we should switch your coarse-grid solve at this scale to AMG. >>> >>> Paul >>> >>> >>> On Fri, 6 Aug 2010, nek5000-users at lists.mcs.anl.gov wrote: >>> >>>>> - set param(102) and param(103) to 5 (this will turn on the residual projection) >>>> Should this be param 94& 95 ? >>>> >>>> - Paul >>>> On Fri, 6 Aug 2010, nek5000-users at lists.mcs.anl.gov wrote: >>>>> Ok here are some suggestions to improve the performance: >>>>> - set timestep, param(12), to -3e-5 >>>>> - set param(102) and param(103) to 5 (this will turn on the residual projection) >>>>> - increase lgmres (in SIZE) to 40 >>>>> - you have want to tune the Helmholtz (velocity) and pressure tolerance (e.g. 1e-8 and 1e-5) >>>>> btw: what's the Reynolds number of this flow? >>>>> Stefan >>>>> On Aug 6, 2010, at 11:13 AM, wrote: >>>>>> Dear Mani, >>>>>> I haven't checked your logfile yet but there are my first thoughts: >>>>>> N=4 is low >>>>>> Your polynomial order (N=4) is low and the tensor-product formulation won't buy you much. The performance of all matrix-matrix multiplies (MxM) will limited by the memory access times. 
This is in particular a problem on multi-core and multi-socket machines. We have seen that the performance drop can be significant. >>>>>> On top of that you carry around a large number of duplicate DOF and your surface to volume ratio is high (more communication). I >>>>>> Parallel Performance >>>>>> Your gridpoints per core (~4700) is quite small! >>>>>> On Blue Gene (BG) systems we can scale well (e.g. 70-80% parallel efficiency) with around 10k gridpoints per core. On other system (e.g. Cray XT5) you need much more gridpoints per core (say 80k) because the network has a higher latency (NEK is sensitive to latency not bandwidth) and the processors are much faster. >>>>>> Cheers, >>>>>> Stefan >>>>>> On Aug 6, 2010, at 10:51 AM, wrote: >>>>>>> Hi, >>>>>>> >>>>>>> I'm solving for Rayleigh-Benard convection in a 3D box of 37632, 4rth order elements. I fired the job on 512 processors on a machine with quad-core, quad socket configuration (32 nodes with 16 cores each ) with a 20 Gbps infiniband interconnect. In 12 hours it has run 163 time steps. Is this normal or is there maybe some way to improve performance? Attached is the SIZE file. 
>>>>>>> Regards, >>>>>>> Mani chandra >>>>>>> _______________________________________________ >>>>>>> Nek5000-users mailing list >>>>>>> Nek5000-users at lists.mcs.anl.gov >>>>>>> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users >>>>>> _______________________________________________ >>>>>> Nek5000-users mailing list >>>>>> Nek5000-users at lists.mcs.anl.gov >>>>>> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users >>>>> _______________________________________________ >>>>> Nek5000-users mailing list >>>>> Nek5000-users at lists.mcs.anl.gov >>>>> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users >>>> _______________________________________________ >>>> Nek5000-users mailing list >>>> Nek5000-users at lists.mcs.anl.gov >>>> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users >>> _______________________________________________ >>> Nek5000-users mailing list >>> Nek5000-users at lists.mcs.anl.gov >>> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users >>> >> _______________________________________________ >> Nek5000-users mailing list >> Nek5000-users at lists.mcs.anl.gov >> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users > _______________________________________________ > Nek5000-users mailing list > Nek5000-users at lists.mcs.anl.gov > https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users > From nek5000-users at lists.mcs.anl.gov Fri Aug 6 11:24:17 2010 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Fri, 6 Aug 2010 11:24:17 -0500 (CDT) Subject: [Nek5000-users] [*] Re: Performance problem In-Reply-To: <4C5C30E3.8090307@iitk.ac.in> References: <4C5BCD0F.8090306@iitk.ac.in> <175E47B2-1097-4F88-AA80-2F3C14EA3EAB@lav.mavt.ethz.ch> <216ABACC-1300-49CA-85ED-A8E97AD81BFD@lav.mavt.ethz.ch> <4C5C30E3.8090307@iitk.ac.in> Message-ID: 2D is generally always fast but more attention is required in 3D. 
Regardless of the parameter settings I would concur w/ Stefan's analysis that there is something happening on the machine. Are others occupying the resource during the run ? (Sometimes there are rogue processes on a cluster from a prior run, etc.) Paul On Fri, 6 Aug 2010, nek5000-users at lists.mcs.anl.gov wrote: > This is weird. I have a 2D equivalent of this system, with the same > parameters, mesh structure and it is quite fast. > On 08/06/2010 06:49 PM, nek5000-users at lists.mcs.anl.gov wrote: >> I guess your system has a problem. The TEMP solve takes ~0.14 sec for step >> 6 and 22.2 sec for step 7. Overall step 6 takes 8.4 sec and step 7 51.1 sec >> although the iteration counts are very similar! >> >> Stefan >> >> >> >> Check this: >> >> 0: Step 6, t= 2.0307248E-01, DT= 3.0347043E-05, C= 2.043 4.8383E+01 >> 3.0220E+00 >> 0: Solving for heat >> 0: Solving for fluid >> 0: 0.000000000000000E+000 p22 6 2 >> 0: 6 Hmholtz TEMP: 64 7.0899E-08 7.6377E+01 >> 7.7888E-08 >> 0: 6 2.0307E-01 1.3816E-01 Heat done >> 0: 0.000000000000000E+000 p22 6 1 >> 0: 6 Hmholtz VELX: 57 1.0070E-05 5.8954E+04 >> 1.2278E-05 >> 0: 0.000000000000000E+000 p22 6 1 >> 0: 6 Hmholtz VELY: 56 1.1755E-05 5.8020E+04 >> 1.2278E-05 >> 0: 0.000000000000000E+000 p22 6 1 >> 0: 6 Hmholtz VELZ: 57 1.0011E-05 7.7873E+04 >> 1.2278E-05 >> 0: 6 U-Pres gmres: 48 1.6044E-09 2.3009E-09 >> 2.3009E+00 4.5887E+00 6.8322E+00 >> 0: 6 DNORM, DIVEX 1.604430720383758E-009 >> 1.604426975107925E-009 >> 0: 6 2.0307E-01 7.4035E+00 Fluid done >> 0: Step 7, t= 2.0310283E-01, DT= 3.0347043E-05, C= 2.052 5.6850E+01 >> 8.4673E+00 >> 0: Solving for heat >> 0: Solving for fluid >> 0: 0.000000000000000E+000 p22 7 2 >> 0: 7 Hmholtz TEMP: 64 6.9851E-08 7.6526E+01 >> 7.8021E-08 >> 0: 7 2.0310E-01 2.2240E+01 Heat done >> 0: 0.000000000000000E+000 p22 7 1 >> 0: 7 Hmholtz VELX: 57 1.0101E-05 5.8874E+04 >> 1.2295E-05 >> 0: 0.000000000000000E+000 p22 7 1 >> 0: 7 Hmholtz VELY: 56 1.1723E-05 5.7947E+04 >> 1.2295E-05 >> 0: 
0.000000000000000E+000 p22 7 1 >> 0: 7 Hmholtz VELZ: 57 9.9682E-06 7.7881E+04 >> 1.2295E-05 >> 0: 7 U-Pres gmres: 48 1.6001E-09 2.2892E-09 >> 2.2892E+00 1.9881E+00 3.3138E+00 >> 0: 7 DNORM, DIVEX 1.600110264916237E-009 >> 1.600109913837109E-009 >> 0: 7 2.0310E-01 1.9966E+01 Fluid done >> 0: Step 8, t= 2.0313318E-01, DT= 3.0347043E-05, C= 2.060 1.0837E+02 >> 5.1516E+01 >> >> On Aug 6, 2010, at 3:08 PM, wrote: >> >>> Mani, >>> >>> I think there must be something else wrong ... I'm seeing about >>> 5 sec/step on a 64 proc. linux cluster. >>> >>> If you'd like to send me a gzip'd file w/ the essentials, contact >>> me off-list (fischer at mcs.anl.gov) and I can take a closer look. >>> >>> Paul >>> >>> >>> On Fri, 6 Aug 2010, nek5000-users at lists.mcs.anl.gov wrote: >>> >>>> If you set p94& p95 to 5 (say), I recommend strongly to set p12 (dt) to >>>> >>>> -3e-5 >>>> >>>> in your case. The reason for this is that the projection scheme is much >>>> more stable for fixed dt. >>>> >>>> On the whole, however, Stefan's earlier comments about using, say, lx1=8 >>>> and fewer elements is a better strategy. It's also possible that we >>>> should switch your coarse-grid solve at this scale to AMG. >>>> >>>> Paul >>>> >>>> >>>> On Fri, 6 Aug 2010, nek5000-users at lists.mcs.anl.gov wrote: >>>> >>>>>> - set param(102) and param(103) to 5 (this will turn on the residual >>>>>> projection) >>>>> Should this be param 94& 95 ? >>>>> >>>>> - Paul >>>>> On Fri, 6 Aug 2010, nek5000-users at lists.mcs.anl.gov wrote: >>>>>> Ok here are some suggestions to improve the performance: >>>>>> - set timestep, param(12), to -3e-5 >>>>>> - set param(102) and param(103) to 5 (this will turn on the residual >>>>>> projection) >>>>>> - increase lgmres (in SIZE) to 40 >>>>>> - you have want to tune the Helmholtz (velocity) and pressure tolerance >>>>>> (e.g. 1e-8 and 1e-5) >>>>>> btw: what's the Reynolds number of this flow? 
>>>>>> Stefan >>>>>> On Aug 6, 2010, at 11:13 AM, >>>>>> wrote: >>>>>>> Dear Mani, >>>>>>> I haven't checked your logfile yet but there are my first thoughts: >>>>>>> N=4 is low >>>>>>> Your polynomial order (N=4) is low and the tensor-product formulation >>>>>>> won't buy you much. The performance of all matrix-matrix multiplies >>>>>>> (MxM) will limited by the memory access times. This is in particular a >>>>>>> problem on multi-core and multi-socket machines. We have seen that the >>>>>>> performance drop can be significant. >>>>>>> On top of that you carry around a large number of duplicate DOF and >>>>>>> your surface to volume ratio is high (more communication). I >>>>>>> Parallel Performance >>>>>>> Your gridpoints per core (~4700) is quite small! >>>>>>> On Blue Gene (BG) systems we can scale well (e.g. 70-80% parallel >>>>>>> efficiency) with around 10k gridpoints per core. On other system (e.g. >>>>>>> Cray XT5) you need much more gridpoints per core (say 80k) because the >>>>>>> network has a higher latency (NEK is sensitive to latency not >>>>>>> bandwidth) and the processors are much faster. >>>>>>> Cheers, >>>>>>> Stefan >>>>>>> On Aug 6, 2010, at 10:51 AM, wrote: >>>>>>>> Hi, >>>>>>>> >>>>>>>> I'm solving for Rayleigh-Benard convection in a 3D box of 37632, >>>>>>>> 4rth order elements. I fired the job on 512 processors on a machine >>>>>>>> with quad-core, quad socket configuration (32 nodes with 16 cores >>>>>>>> each ) with a 20 Gbps infiniband interconnect. In 12 hours it has run >>>>>>>> 163 time steps. Is this normal or is there maybe some way to improve >>>>>>>> performance? Attached is the SIZE file. 
>>>>>>>> Regards, >>>>>>>> Mani chandra >>>>>>>> _______________________________________________ >>>>>>>> Nek5000-users mailing list >>>>>>>> Nek5000-users at lists.mcs.anl.gov >>>>>>>> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users >>>>>>> _______________________________________________ >>>>>>> Nek5000-users mailing list >>>>>>> Nek5000-users at lists.mcs.anl.gov >>>>>>> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users >>>>>> _______________________________________________ >>>>>> Nek5000-users mailing list >>>>>> Nek5000-users at lists.mcs.anl.gov >>>>>> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users >>>>> _______________________________________________ >>>>> Nek5000-users mailing list >>>>> Nek5000-users at lists.mcs.anl.gov >>>>> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users >>>> _______________________________________________ >>>> Nek5000-users mailing list >>>> Nek5000-users at lists.mcs.anl.gov >>>> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users >>>> >>> _______________________________________________ >>> Nek5000-users mailing list >>> Nek5000-users at lists.mcs.anl.gov >>> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users >> _______________________________________________ >> Nek5000-users mailing list >> Nek5000-users at lists.mcs.anl.gov >> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users >> > > _______________________________________________ > Nek5000-users mailing list > Nek5000-users at lists.mcs.anl.gov > https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users > From nek5000-users at lists.mcs.anl.gov Fri Aug 6 14:20:06 2010 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Sat, 7 Aug 2010 00:50:06 +0530 Subject: [Nek5000-users] [*] Re: [*] Re: Performance problem In-Reply-To: References: <4C5BCD0F.8090306@iitk.ac.in> <175E47B2-1097-4F88-AA80-2F3C14EA3EAB@lav.mavt.ethz.ch> <216ABACC-1300-49CA-85ED-A8E97AD81BFD@lav.mavt.ethz.ch> 
<4C5C30E3.8090307@iitk.ac.in> Message-ID: Hi, I will check with the sysadmin about rogue processes and get back to you. But the performance problem for this run was also there in a completely different cluster (which also has an infiniband interconnect). Mani > > 2D is generally always fast but more attention is required in 3D. > > Regardless of the parameter settings I would concur w/ Stefan's > analysis that there is something happening on the machine. > > Are others occupying the resource during the run ? (Sometimes > there are rogue processes on a cluster from a prior run, etc.) > > Paul > > On Fri, 6 Aug 2010, nek5000-users at lists.mcs.anl.gov wrote: > >> This is weird. I have a 2D equivalent of this system, with the same >> parameters, mesh structure and it is quite fast. >> On 08/06/2010 06:49 PM, nek5000-users at lists.mcs.anl.gov wrote: >>> I guess your system has a problem. The TEMP solve takes ~0.14 sec for >>> step >>> 6 and 22.2 sec for step 7. Overall step 6 takes 8.4 sec and step 7 51.1 >>> sec >>> although the iteration counts are very similar! 
>>> >>> Stefan >>> >>> >>> >>> Check this: >>> >>> 0: Step 6, t= 2.0307248E-01, DT= 3.0347043E-05, C= 2.043 >>> 4.8383E+01 >>> 3.0220E+00 >>> 0: Solving for heat >>> 0: Solving for fluid >>> 0: 0.000000000000000E+000 p22 6 2 >>> 0: 6 Hmholtz TEMP: 64 7.0899E-08 7.6377E+01 >>> 7.7888E-08 >>> 0: 6 2.0307E-01 1.3816E-01 Heat done >>> 0: 0.000000000000000E+000 p22 6 1 >>> 0: 6 Hmholtz VELX: 57 1.0070E-05 5.8954E+04 >>> 1.2278E-05 >>> 0: 0.000000000000000E+000 p22 6 1 >>> 0: 6 Hmholtz VELY: 56 1.1755E-05 5.8020E+04 >>> 1.2278E-05 >>> 0: 0.000000000000000E+000 p22 6 1 >>> 0: 6 Hmholtz VELZ: 57 1.0011E-05 7.7873E+04 >>> 1.2278E-05 >>> 0: 6 U-Pres gmres: 48 1.6044E-09 2.3009E-09 >>> 2.3009E+00 4.5887E+00 6.8322E+00 >>> 0: 6 DNORM, DIVEX 1.604430720383758E-009 >>> 1.604426975107925E-009 >>> 0: 6 2.0307E-01 7.4035E+00 Fluid done >>> 0: Step 7, t= 2.0310283E-01, DT= 3.0347043E-05, C= 2.052 >>> 5.6850E+01 >>> 8.4673E+00 >>> 0: Solving for heat >>> 0: Solving for fluid >>> 0: 0.000000000000000E+000 p22 7 2 >>> 0: 7 Hmholtz TEMP: 64 6.9851E-08 7.6526E+01 >>> 7.8021E-08 >>> 0: 7 2.0310E-01 2.2240E+01 Heat done >>> 0: 0.000000000000000E+000 p22 7 1 >>> 0: 7 Hmholtz VELX: 57 1.0101E-05 5.8874E+04 >>> 1.2295E-05 >>> 0: 0.000000000000000E+000 p22 7 1 >>> 0: 7 Hmholtz VELY: 56 1.1723E-05 5.7947E+04 >>> 1.2295E-05 >>> 0: 0.000000000000000E+000 p22 7 1 >>> 0: 7 Hmholtz VELZ: 57 9.9682E-06 7.7881E+04 >>> 1.2295E-05 >>> 0: 7 U-Pres gmres: 48 1.6001E-09 2.2892E-09 >>> 2.2892E+00 1.9881E+00 3.3138E+00 >>> 0: 7 DNORM, DIVEX 1.600110264916237E-009 >>> 1.600109913837109E-009 >>> 0: 7 2.0310E-01 1.9966E+01 Fluid done >>> 0: Step 8, t= 2.0313318E-01, DT= 3.0347043E-05, C= 2.060 >>> 1.0837E+02 >>> 5.1516E+01 >>> >>> On Aug 6, 2010, at 3:08 PM, wrote: >>> >>>> Mani, >>>> >>>> I think there must be something else wrong ... I'm seeing about >>>> 5 sec/step on a 64 proc. linux cluster. 
>>>> >>>> If you'd like to send me a gzip'd file w/ the essentials, contact >>>> me off-list (fischer at mcs.anl.gov) and I can take a closer look. >>>> >>>> Paul >>>> >>>> >>>> On Fri, 6 Aug 2010, nek5000-users at lists.mcs.anl.gov wrote: >>>> >>>>> If you set p94& p95 to 5 (say), I recommend strongly to set p12 (dt) >>>>> to >>>>> >>>>> -3e-5 >>>>> >>>>> in your case. The reason for this is that the projection scheme is >>>>> much >>>>> more stable for fixed dt. >>>>> >>>>> On the whole, however, Stefan's earlier comments about using, say, >>>>> lx1=8 >>>>> and fewer elements is a better strategy. It's also possible that we >>>>> should switch your coarse-grid solve at this scale to AMG. >>>>> >>>>> Paul >>>>> >>>>> >>>>> On Fri, 6 Aug 2010, nek5000-users at lists.mcs.anl.gov wrote: >>>>> >>>>>>> - set param(102) and param(103) to 5 (this will turn on the >>>>>>> residual >>>>>>> projection) >>>>>> Should this be param 94& 95 ? >>>>>> >>>>>> - Paul >>>>>> On Fri, 6 Aug 2010, nek5000-users at lists.mcs.anl.gov wrote: >>>>>>> Ok here are some suggestions to improve the performance: >>>>>>> - set timestep, param(12), to -3e-5 >>>>>>> - set param(102) and param(103) to 5 (this will turn on the >>>>>>> residual >>>>>>> projection) >>>>>>> - increase lgmres (in SIZE) to 40 >>>>>>> - you have want to tune the Helmholtz (velocity) and pressure >>>>>>> tolerance >>>>>>> (e.g. 1e-8 and 1e-5) >>>>>>> btw: what's the Reynolds number of this flow? >>>>>>> Stefan >>>>>>> On Aug 6, 2010, at 11:13 AM, >>>>>>> wrote: >>>>>>>> Dear Mani, >>>>>>>> I haven't checked your logfile yet but there are my first >>>>>>>> thoughts: >>>>>>>> N=4 is low >>>>>>>> Your polynomial order (N=4) is low and the tensor-product >>>>>>>> formulation >>>>>>>> won't buy you much. The performance of all matrix-matrix >>>>>>>> multiplies >>>>>>>> (MxM) will limited by the memory access times. This is in >>>>>>>> particular a >>>>>>>> problem on multi-core and multi-socket machines. 
We have seen that >>>>>>>> the >>>>>>>> performance drop can be significant. >>>>>>>> On top of that you carry around a large number of duplicate DOF >>>>>>>> and >>>>>>>> your surface to volume ratio is high (more communication). I >>>>>>>> Parallel Performance >>>>>>>> Your gridpoints per core (~4700) is quite small! >>>>>>>> On Blue Gene (BG) systems we can scale well (e.g. 70-80% parallel >>>>>>>> efficiency) with around 10k gridpoints per core. On other system >>>>>>>> (e.g. >>>>>>>> Cray XT5) you need much more gridpoints per core (say 80k) because >>>>>>>> the >>>>>>>> network has a higher latency (NEK is sensitive to latency not >>>>>>>> bandwidth) and the processors are much faster. >>>>>>>> Cheers, >>>>>>>> Stefan >>>>>>>> On Aug 6, 2010, at 10:51 AM, >>>>>>>> wrote: >>>>>>>>> Hi, >>>>>>>>> >>>>>>>>> I'm solving for Rayleigh-Benard convection in a 3D box of >>>>>>>>> 37632, >>>>>>>>> 4rth order elements. I fired the job on 512 processors on a >>>>>>>>> machine >>>>>>>>> with quad-core, quad socket configuration (32 nodes with 16 cores >>>>>>>>> each ) with a 20 Gbps infiniband interconnect. In 12 hours it has >>>>>>>>> run >>>>>>>>> 163 time steps. Is this normal or is there maybe some way to >>>>>>>>> improve >>>>>>>>> performance? Attached is the SIZE file. 
>>>>>>>>> Regards, >>>>>>>>> Mani chandra >>>>>>>>> _______________________________________________ >>>>>>>>> Nek5000-users mailing list >>>>>>>>> Nek5000-users at lists.mcs.anl.gov >>>>>>>>> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users >>>>>>>> _______________________________________________ >>>>>>>> Nek5000-users mailing list >>>>>>>> Nek5000-users at lists.mcs.anl.gov >>>>>>>> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users >>>>>>> _______________________________________________ >>>>>>> Nek5000-users mailing list >>>>>>> Nek5000-users at lists.mcs.anl.gov >>>>>>> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users >>>>>> _______________________________________________ >>>>>> Nek5000-users mailing list >>>>>> Nek5000-users at lists.mcs.anl.gov >>>>>> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users >>>>> _______________________________________________ >>>>> Nek5000-users mailing list >>>>> Nek5000-users at lists.mcs.anl.gov >>>>> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users >>>>> >>>> _______________________________________________ >>>> Nek5000-users mailing list >>>> Nek5000-users at lists.mcs.anl.gov >>>> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users >>> _______________________________________________ >>> Nek5000-users mailing list >>> Nek5000-users at lists.mcs.anl.gov >>> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users >>> >> >> _______________________________________________ >> Nek5000-users mailing list >> Nek5000-users at lists.mcs.anl.gov >> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users >> > _______________________________________________ > Nek5000-users mailing list > Nek5000-users at lists.mcs.anl.gov > https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users > From nek5000-users at lists.mcs.anl.gov Sun Aug 8 12:00:07 2010 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Sun, 8 Aug 2010 12:00:07 -0500 Subject: [Nek5000-users] Smooth Restart 
feature in Nek Message-ID: Hi, I found a function in nek which would let you do a restart from 3 previous fld files. This is how I called the function:
% ----
in usrchk:

prefix = 'rs6'
iosave = iostep
iosize = 8
nfld = 3
call restart_save(iosave,iosize,nfld)
%----
in rea file:

3 PRESOLVE/RESTART OPTIONS *****
rs6sqr0.f0004
rs6sqr0.f0005
rs6sqr0.f0006
%-----
This dumps out six rs6blah0.f***** files, which are overwritten once every 2*iostep I believe. When I try to restart using this, I still see that at the restart the pressure values are off. I am guessing that I am not calling it the right way in the rea file. Is there a separate function for loading multiple restart files that I would have to call in usrchk? Or is this behavior a by-product of SEM? Thanks for any help. Regards Shriram -------------- next part -------------- An HTML attachment was scrubbed... URL: From nek5000-users at lists.mcs.anl.gov Mon Aug 9 14:00:41 2010 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Mon, 9 Aug 2010 14:00:41 -0500 (CDT) Subject: [Nek5000-users] Smooth Restart feature in Nek In-Reply-To: References: Message-ID: Hi Shriram, We're working on this ... should have it set by tomorrow. Paul On Sun, 8 Aug 2010, nek5000-users at lists.mcs.anl.gov wrote: > Hi, > > I found a function in nek which would let you do a restart from 3 previous > fld files. This is how I called the function: > > % ---- > in usrchk: > > prefix = 'rs6' > iosave = iostep > iosize = 8 > nfld = 3 > call restart_save(iosave,iosize,nfld) . > %---- > in rea file : > > 3 PRESOLVE/RESTART OPTIONS ***** > rs6sqr0.f0004 > rs6sqr0.f0005 > rs6sqr0.f0006 > %----- > > This dumps out six rs6blah0.f***** and is overwritten once every 2*iostep I > believe. When I try to restart using this, I still see that at the restart, > the pressure values are off. I am guessing that I am not calling it the > right way in the rea file. 
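The save cadence Shriram describes can be sketched abstractly. Below is a hypothetical Python mirror (function names are mine) of the dump condition used by the full-restart routine attached later in this thread: `mod1` mimics Nek's 1-based modulo utility of the same name, and snapshots are written on the first `nbdinp-1` steps of each `iostep` cycle so that consecutive fields exist for restarting a multistep time integrator.

```python
def mod1(i, n):
    """Nek5000-style 1-based modulo: for n > 0 the result lies in 1..n."""
    if n == 0:
        return 0
    return (i - 1) % n + 1

def dumps_restart_snapshot(istep, iostep, nbdinp=3):
    """Mirror of the save condition in the attached .usr routine:
    dump when istep > iostep and mod1(istep, iostep) < nbdinp,
    i.e. the first nbdinp-1 steps after each iostep boundary."""
    return istep > iostep and mod1(istep, iostep) < nbdinp
```

For example, with iostep=10 and a 3rd-order integrator (nbdinp=3), snapshots are written at steps 11 and 12, then 21 and 22, and so on, with the file set recycled on each pass.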
Is there a separate function for loading multiple > restart files that I would have to call in usrchk ? > > Or is this behavior a by-product of SEM ? > > Thanks for any help. > > Regards > Shriram > From nek5000-users at lists.mcs.anl.gov Mon Aug 9 14:02:04 2010 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Mon, 9 Aug 2010 14:02:04 -0500 Subject: [Nek5000-users] Smooth Restart feature in Nek In-Reply-To: References: Message-ID: Hi Paul, Thanks a lot. That would be very helpful. Regards Shriram -------------- next part -------------- An HTML attachment was scrubbed... URL: From nek5000-users at lists.mcs.anl.gov Tue Aug 10 10:50:59 2010 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Tue, 10 Aug 2010 10:50:59 -0500 Subject: [Nek5000-users] Low Mach Number solver Message-ID: Hi, I am new to Nek5000, and had a question regarding the Low Mach Number solver. Does the Low Mach Number solver use an algorithm that splits pressure into a time dependent, but spatially independent component ( P_thermodynamic, used to calculate the density), and a spatially and time dependent component? If so, what is the variable name for P_thermodynamic? If you use another algorithm, could you please point me to a paper that explains it (if there is one). (The examples doc from the Nek5000 wiki currently doesn't have this info.) I am asking this to be able to correctly set/calculate rho*cp (UTRANS) in the uservp subroutine. Also, I am not directly able to enable the IFLOMACH option using prenek. (I need to change it manually in the .rea file by replacing the existing parameters with the ones from the lowMach_test example). I am currently using the version 2.6099999 of Nekton and the version of prenek that came with that. Is this something that will be implemented later? Thanks, Pradeep -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nek5000-users at lists.mcs.anl.gov Tue Aug 10 15:27:06 2010 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Tue, 10 Aug 2010 22:27:06 +0200 Subject: [Nek5000-users] Low Mach Number solver In-Reply-To: References: Message-ID: <90BA5E08-B7A8-4741-9BF7-756337094E2B@lav.mavt.ethz.ch> Hi Pradeep, you'll find all details about the implemented Low Mach number algorithm in: A.G. Tomboulides, J.C.Y. Lee, and S.A. Orszag. Numerical Simulation of Low Mach Number Reactive Flows. Journal of Scientific Computing, 12:139-167, 1997. Only the thermodynamic pressure (p0) matters for the thermodynamic properties (e.g. density). p0 is constant in space; in fact, the current implementation assumes that p0 is also constant in time (open system). Please note that prenek is quite outdated and does not support all new features (e.g. the Low Mach number formulation). We're working on more powerful meshing options which will replace prenek at some point. hth, Stefan On Aug 10, 2010, at 5:50 PM, wrote: > Hi, > > I am new to Nek5000, and had a question regarding the Low Mach Number solver. > > Does the Low Mach Number solver use an algorithm that splits pressure into a time dependent, but spatially independent component ( P_thermodynamic, used to calculate the density), and a spatially and time dependent component? > If so, what is the variable name for P_thermodynamic? > > If you use another algorithm, could you please point me to a paper that explains it (if there is one). (The examples doc from the Nek5000 wiki currently doesn't have this info.) > > I am asking this to be able to correctly set/calculate rho*cp (UTRANS) in the uservp subroutine. > > Also, I am not directly able to enable the IFLOMACH option using prenek. (I need to change it manually in the .rea file by replacing the existing parameters with the ones from the lowMach_test example). I am currently using the version 2.6099999 of Nekton and the version of prenek that came with that. 
Is this something that will be implemented later? > > Thanks, > Pradeep > > _______________________________________________ > Nek5000-users mailing list > Nek5000-users at lists.mcs.anl.gov > https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users From nek5000-users at lists.mcs.anl.gov Wed Aug 11 08:03:01 2010 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Wed, 11 Aug 2010 08:03:01 -0500 (CDT) Subject: [Nek5000-users] Smooth Restart feature in Nek In-Reply-To: References: Message-ID: Shriram, Attached is a full-restart routine. You need to update the repo to use it. It seems to work ok - hopefully the remarks in the attached .usr file are clear. Paul On Mon, 9 Aug 2010, nek5000-users at lists.mcs.anl.gov wrote: > Hi Paul, > > Thanks a lot. That would be very helpful. > > Regards > Shriram > -------------- next part -------------- c----------------------------------------------------------------------- c c .usr file with full-restart routines, called from userchk c c----------------------------------------------------------------------- subroutine my_full_restart_save ! Call this from userchk c Saves files for next full restart include 'SIZE' include 'TOTAL' include 'RESTART' ! max_rst c This is the full-restart save part: max_rst = 2*(nbdinp-1) ! max # of rst files saved nps1 = 0 if (ifheat) nps1 = 1 + npscal mostep = mod1(istep,iostep) if (istep.gt.iostep.and.mostep.lt.nbdinp) $ call outpost2(vx,vy,vz,pr,t,nps1,'rst') return end c----------------------------------------------------------------------- subroutine my_full_restart_load ! 
Call this from userchk include 'SIZE' include 'TOTAL' c Typical file sequence for Torder=3 c : c : crw-r--r-- 1 fischer mcsz 135252 Aug 11 04:42 x.fld89 crw-r--r-- 1 fischer mcsz 135252 Aug 11 04:43 x.fld90 crw-r--r-- 1 fischer mcsz 135252 Aug 11 04:43 x.fld91 crw-r--r-- 1 fischer mcsz 135252 Aug 11 04:43 rstx.fld01 crw-r--r-- 1 fischer mcsz 135252 Aug 11 04:43 rstx.fld02 crw-r--r-- 1 fischer mcsz 135252 Aug 11 04:43 x.fld92 crw-r--r-- 1 fischer mcsz 135252 Aug 11 04:43 rstx.fld03 crw-r--r-- 1 fischer mcsz 135252 Aug 11 04:43 rstx.fld04 c c To restart from such a sequence, assuming you have specified x.fld92 c in the .rea file, then you would use the following lines (uncommented): c c if (istep.eq.1) s80 ='rstx.fld03' ! This would be the case for Torder=3 c if (istep.eq.2) s80 ='rstx.fld04' ! using the second restart pair c c If restarting from x.fld91, use c c if (istep.eq.1) s80 ='rstx.fld01' ! This would be the case for Torder=3 c if (istep.eq.2) s80 ='rstx.fld02' ! using the second restart pair c c so that you get the two files (timesteps) immediately following x.fld91. c c A similar procedure should hold for the 0.f0000n files. c c For Torder=2, see comments below. c character*80 s80 call blank(s80,80) if (istep.eq.1) s80 ='rstx.fld01' ! This would be the case for Torder=3 if (istep.eq.2) s80 ='rstx.fld02' ! using the first restart pair c if (istep.eq.1) s80 ='rstx.fld03' ! This would be the case for Torder=3 c if (istep.eq.2) s80 ='rstx.fld04' ! using the second restart pair c if (istep.eq.1) s80 ='rstx.fld01' ! This would be the case for Torder=2, file 1 c if (istep.eq.1) s80 ='rstx.fld02' ! This would be the case for Torder=2, file 2 call bcast(s80,80) call chcopy(initc,s80,80) c time_curr = time nfiles = 1 call restart(nfiles) ! Note -- time is reset. if (nid.ne.0) time=0 time = glmax(time,1) ! Synchronize time across all processors c time = time_curr ! 
Preserve current simulation time return end c----------------------------------------------------------------------- subroutine uservp (ix,iy,iz,eg) include 'SIZE' include 'TOTAL' include 'NEKUSE' integer e,f,eg c e = gllel(eg) udiff =0. utrans=0. return end c----------------------------------------------------------------------- subroutine userf (ix,iy,iz,eg) include 'SIZE' include 'TOTAL' include 'NEKUSE' integer e,f,eg c e = gllel(eg) c Note: this is an acceleration term, NOT a force! c Thus, ffx will subsequently be multiplied by rho(x,t). ffx = 0.0 ffy = 0.0 ffz = 0.0 return end c----------------------------------------------------------------------- subroutine userq (ix,iy,iz,eg) include 'SIZE' include 'TOTAL' include 'NEKUSE' integer e,f,eg c e = gllel(eg) qvol = 0.0 return end c----------------------------------------------------------------------- subroutine userchk include 'SIZE' include 'TOTAL' logical if_drag_out,if_torq_out real x0(ldim) data x0 /ldim*0/ c Comment out the line below if not restarting if (istep.gt.0.and.istep.lt.nbdinp) call my_full_restart_load call my_full_restart_save ! save add'l files for full-restart scale = 1. if_drag_out = .true. if_torq_out = .false. call torque_calc(scale,x0,if_drag_out,if_torq_out) return end c----------------------------------------------------------------------- subroutine userbc (ix,iy,iz,iside,ieg) include 'SIZE' include 'TOTAL' include 'NEKUSE' ux=1.0 uy=0.0 uz=0.0 temp=0.0 return end c----------------------------------------------------------------------- subroutine useric (ix,iy,iz,ieg) include 'SIZE' include 'TOTAL' include 'NEKUSE' ux=1.0 uy=0.0 uz=0.0 temp=0 return end c----------------------------------------------------------------------- subroutine usrdat include 'SIZE' include 'TOTAL' c return end c----------------------------------------------------------------------- subroutine usrdat2 include 'SIZE' include 'TOTAL' param(66) = 4. ! These give the std nek binary i/o and are param(67) = 4. ! 
good default values call set_obj ! define objects for surface integrals return end c----------------------------------------------------------------------- subroutine usrdat3 include 'SIZE' include 'TOTAL' c return end c----------------------------------------------------------------------- subroutine set_obj ! define objects for surface integrals c include 'SIZE' include 'TOTAL' c integer e,f c c Define new objects nobj = 1 iobj = 0 do ii=nhis+1,nhis+nobj iobj = iobj+1 hcode(10,ii) = 'I' hcode( 1,ii) = 'F' ! 'F' hcode( 2,ii) = 'F' ! 'F' hcode( 3,ii) = 'F' ! 'F' lochis(1,ii) = iobj enddo nhis = nhis + nobj if (maxobj.lt.nobj) write(6,*) 'increase maxobj in SIZE' if (maxobj.lt.nobj) call exitt do e=1,nelv do f=1,2*ndim if (cbc(f,e,1).eq.'W ') then iobj = 0 c if (f.eq.1) iobj=1 ! lower wall c if (f.eq.3) iobj=2 ! upper wall iobj=1 ! cylinder wall if (iobj.gt.0) then nmember(iobj) = nmember(iobj) + 1 mem = nmember(iobj) ieg = lglel(e) object(iobj,mem,1) = ieg object(iobj,mem,2) = f c write(6,1) iobj,mem,f,ieg,e,nid,' OBJ' 1 format(6i9,a4) endif c endif enddo enddo c write(6,*) 'number',(nmember(k),k=1,4) c return end c----------------------------------------------------------------------- c c automatically added by makenek subroutine usrsetvert(glo_num,nel,nx,ny,nz) ! to modify glo_num integer*8 glo_num(1) return end From nek5000-users at lists.mcs.anl.gov Wed Aug 11 12:59:00 2010 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Wed, 11 Aug 2010 12:59:00 -0500 Subject: [Nek5000-users] Low Mach Number solver Message-ID: Hi Stefan, Thanks for your reply. Regards, Pradeep > ---------- Forwarded message ---------- > From: nek5000-users at lists.mcs.anl.gov > To: > Date: Tue, 10 Aug 2010 22:27:06 +0200 > Subject: Re: [Nek5000-users] Low Mach Number solver > Hi Pradeep, > > you'll find all details about the implemented Low Mach number algorithm in: > > A.G. Tomboulides, J.C.Y. Lee, and S.A. Orzag. 
Numerical Simulation of Low > Mach Number Reactive Flows. Journal of Scientific Computing, 1997:139?167, > 12. > > Only the thermodynamic pressure (p0) matters for the thermodynamic > properties (e.g. density). > p0 is constant in space, in fact the current implementation makes the > assumption that p0 is constant (open system). > > Please note that the prenek is quite outdated and does not support all new > features (e.g. Low Mach number formulation). We're working on more power > meshing options which will replace prenek at some point. > > hth, > Stefan > > > > On Aug 10, 2010, at 5:50 PM, wrote: > > > Hi, > > > > I am new to Nek5000, and had a question regarding the Low Mach Number > solver. > > > > Does the Low Mach Number solver use an algorithm that splits pressure > into a time dependent, but spatially independent component ( > P_thermodynamic, used to calculate the density), and a spatially and time > dependent component? > > If so, what is the variable name for P_thermodynamic? > > > > If you use another algorithm, could you please point me to a paper that > explains it (if there is one). (The examples doc from the Nek5000 wiki > currently doesn't have this info.) > > > > I am asking this to be able to correctly set/calculate rho*cp (UTRANS) in > the uservp subroutine. > > > > Also, I am not directly able to enable the IFLOMACH option using prenek. > (I need to change it manually in the .rea file by replacing the existing > parameters with the ones from the lowMach_test example). I am currently > using the version 2.6099999 of Nekton and the version of prenek that came > with that. Is this something that will be implemented later? 
> > > > Thanks, > > Pradeep > > > > _______________________________________________ > > Nek5000-users mailing list > > Nek5000-users at lists.mcs.anl.gov > > https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users > > > > > ---------- Forwarded message ---------- > From: nek5000-users at lists.mcs.anl.gov > To: nek5000-users at lists.mcs.anl.gov > Date: Wed, 11 Aug 2010 08:03:01 -0500 (CDT) > Subject: Re: [Nek5000-users] Smooth Restart feature in Nek > > Shriram, > > Attached is a full-restart routine. You need to update the repo > to use it. It seems to work ok - hopefully the remarks in the attached > .usr file are clear. > > Paul > > On Mon, 9 Aug 2010, nek5000-users at lists.mcs.anl.gov wrote: > > Hi Paul, >> >> Thanks a lot. That would be very helpful. >> >> Regards >> Shriram >> > > _______________________________________________ > Nek5000-users mailing list > Nek5000-users at lists.mcs.anl.gov > https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users > > -- Pradeep C. Rao Graduate Research Assistant for FT2L (http://www1.mengr.tamu.edu/FT2L/) Department of Mechanical Engineering Texas A&M University College Station, TX 77843-3123 428 Engineering Physics Building (713) 210-9769 -------------- next part -------------- An HTML attachment was scrubbed... URL: From nek5000-users at lists.mcs.anl.gov Wed Aug 11 16:34:09 2010 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Wed, 11 Aug 2010 15:34:09 -0600 Subject: [Nek5000-users] Reading binary data In-Reply-To: <1747B85B-7371-4604-800D-7D3E7CAD4BB5@nrel.gov> References: <1747B85B-7371-4604-800D-7D3E7CAD4BB5@nrel.gov> Message-ID: Hello All. I found this helpful message from Stefan regarding the structure of a binary field file. Could someone please tell me what the structure is when the geometry info is also contained? Thanks! --Mike nek5000-users at lists.mcs.anl.gov nek5000-users at lists.mcs.anl.gov > Thu May 6 07:18:03 CDT 2010 > ? 
Previous message: [Nek5000-users] Reading binary data > ? Next message: [Nek5000-users] History points > ? Messages sorted by: [ date ] [ thread ] [ subject ] [ author ] > Hi Fred, > > header: 132 bytes > endian test tag: 4 bytes > element mapping: nel* 4 bytes > data: nfields*nxyz*nel* wdsizo (where wdsizo is 4 or 8 bytes) > metadata (min/max values): nfields*2*nel * 4 bytes > > Stefan > From nek5000-users at lists.mcs.anl.gov Wed Aug 11 16:50:02 2010 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Wed, 11 Aug 2010 23:50:02 +0200 Subject: [Nek5000-users] Reading binary data In-Reply-To: References: <1747B85B-7371-4604-800D-7D3E7CAD4BB5@nrel.gov> Message-ID: <8DB5E0B8-F923-4450-8421-AF2220903306@lav.mavt.ethz.ch> Hi Mike, vector fields (e.g. mesh coordinates) are stored in the following way: LOOP over all elements LOOP i = {x,y,z} i for all GLL points (internal element points) ENDLOOP ENDLOOP 2D example with (E=2,N=2): x1_1 x2_1 x3_1 y1_1 y2_1 y3_1 x1_2 x2_2 x3_2 y1_2 y2_2 y3_2 where x2_1 means the x-coordinate of the 2nd GLL point of element 1. hth, Stefan On Aug 11, 2010, at 11:34 PM, wrote: > > Hello All. I found this helpful message from Stefan regarding the structure of a binary field file. 
Messages sorted by: [ date ] [ thread ] [ subject ] [ author ] >> Hi Fred, >> >> header: 132 bytes >> endian test tag: 4 bytes >> element mapping: nel* 4 bytes >> data: nfields*nxyz*nel* wdsizo (where wdsizo is 4 or 8 bytes) >> metadata (min/max values): nfields*2*nel * 4 bytes >> >> Stefan >> > > _______________________________________________ > Nek5000-users mailing list > Nek5000-users at lists.mcs.anl.gov > https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users From nek5000-users at lists.mcs.anl.gov Wed Aug 11 17:01:09 2010 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Wed, 11 Aug 2010 16:01:09 -0600 Subject: [Nek5000-users] Reading binary data In-Reply-To: <8DB5E0B8-F923-4450-8421-AF2220903306@lav.mavt.ethz.ch> References: <1747B85B-7371-4604-800D-7D3E7CAD4BB5@nrel.gov> <8DB5E0B8-F923-4450-8421-AF2220903306@lav.mavt.ethz.ch> Message-ID: <6B43509E-91FD-40B6-AD21-3B63EE8D7A27@nrel.gov> Stefan, Thanks. I figured that, but the binary file size doesn't seem to be adding up. For example, a field file that doesn't have geometry data is the size described by the initial thread, i.e. 132 bytes + 4 bytes + nel* 4 bytes + nfields*nxyz*nel* wdsizo + nfields*2*nel * 4 bytes My naive assumption was that a field file with xyz data would then have the additional space requirement of nx*ny*nz*nel*wdsizo but the file's bigger than that. What info am I missing? --Mike On Aug 11, 2010, at 3:50 PM, wrote: > Hi Mike, > > vector fields (e.g. mesh coordinates) are stored in the following way: > > LOOP over all elements > LOOP i = {x,y,z} > i for all GLL points (internal element points) > ENDLOOP > ENDLOOP > > 2D example with (E=2,N=2): > x1_1 x2_1 x3_1 y1_1 y2_1 y3_1 x1_1 x2_2 x3_2 y1_2 y2_2 y3_2 > > where x2_1 means the x-coordinate of the 2nd GLL point of element 1. > > hth, > Stefan > > On Aug 11, 2010, at 11:34 PM, wrote: > >> >> Hello All. I found this helpful message from Stefan regarding the structure of a binary field file. 
Could someone please tell me what the structure is when the geometry info is also contained? >> >> Thanks! >> --Mike >> >> nek5000-users at lists.mcs.anl.gov nek5000-users at lists.mcs.anl.gov >>> Thu May 6 07:18:03 CDT 2010 >>> ? Previous message: [Nek5000-users] Reading binary data >>> ? Next message: [Nek5000-users] History points >>> ? Messages sorted by: [ date ] [ thread ] [ subject ] [ author ] >>> Hi Fred, >>> >>> header: 132 bytes >>> endian test tag: 4 bytes >>> element mapping: nel* 4 bytes >>> data: nfields*nxyz*nel* wdsizo (where wdsizo is 4 or 8 bytes) >>> metadata (min/max values): nfields*2*nel * 4 bytes >>> >>> Stefan >>> >> >> _______________________________________________ >> Nek5000-users mailing list >> Nek5000-users at lists.mcs.anl.gov >> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users > > _______________________________________________ > Nek5000-users mailing list > Nek5000-users at lists.mcs.anl.gov > https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users From nek5000-users at lists.mcs.anl.gov Wed Aug 11 17:09:43 2010 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Thu, 12 Aug 2010 00:09:43 +0200 Subject: [Nek5000-users] Reading binary data In-Reply-To: <6B43509E-91FD-40B6-AD21-3B63EE8D7A27@nrel.gov> References: <1747B85B-7371-4604-800D-7D3E7CAD4BB5@nrel.gov> <8DB5E0B8-F923-4450-8421-AF2220903306@lav.mavt.ethz.ch> <6B43509E-91FD-40B6-AD21-3B63EE8D7A27@nrel.gov> Message-ID: No, if a field file only contains the geometry (mesh coordinates) nfields is equal to ndim (2 or 3). Stefan On Aug 12, 2010, at 12:01 AM, wrote: > Stefan, > > Thanks. I figured that, but the binary file size doesn't seem to be adding up. For example, a field file that doesn't have geometry data is the size described by the initial thread, i.e. 
> > 132 bytes + 4 bytes + nel* 4 bytes + nfields*nxyz*nel* wdsizo + nfields*2*nel * 4 bytes > > My naive assumption was that a field file with xyz data would then have the additional space requirement of > > nx*ny*nz*nel*wdsizo > > but the file's bigger than that. What info am I missing? > > --Mike > > On Aug 11, 2010, at 3:50 PM, wrote: > >> Hi Mike, >> >> vector fields (e.g. mesh coordinates) are stored in the following way: >> >> LOOP over all elements >> LOOP i = {x,y,z} >> i for all GLL points (internal element points) >> ENDLOOP >> ENDLOOP >> >> 2D example with (E=2,N=2): >> x1_1 x2_1 x3_1 y1_1 y2_1 y3_1 x1_1 x2_2 x3_2 y1_2 y2_2 y3_2 >> >> where x2_1 means the x-coordinate of the 2nd GLL point of element 1. >> >> hth, >> Stefan >> >> On Aug 11, 2010, at 11:34 PM, wrote: >> >>> >>> Hello All. I found this helpful message from Stefan regarding the structure of a binary field file. Could someone please tell me what the structure is when the geometry info is also contained? >>> >>> Thanks! >>> --Mike >>> >>> nek5000-users at lists.mcs.anl.gov nek5000-users at lists.mcs.anl.gov >>>> Thu May 6 07:18:03 CDT 2010 >>>> ? Previous message: [Nek5000-users] Reading binary data >>>> ? Next message: [Nek5000-users] History points >>>> ? 
Messages sorted by: [ date ] [ thread ] [ subject ] [ author ] >>>> Hi Fred, >>>> >>>> header: 132 bytes >>>> endian test tag: 4 bytes >>>> element mapping: nel* 4 bytes >>>> data: nfields*nxyz*nel* wdsizo (where wdsizo is 4 or 8 bytes) >>>> metadata (min/max values): nfields*2*nel * 4 bytes >>>> >>>> Stefan >>>> >>> >>> _______________________________________________ >>> Nek5000-users mailing list >>> Nek5000-users at lists.mcs.anl.gov >>> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users >> >> _______________________________________________ >> Nek5000-users mailing list >> Nek5000-users at lists.mcs.anl.gov >> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users > > _______________________________________________ > Nek5000-users mailing list > Nek5000-users at lists.mcs.anl.gov > https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users From nek5000-users at lists.mcs.anl.gov Wed Aug 11 17:21:54 2010 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Wed, 11 Aug 2010 16:21:54 -0600 Subject: [Nek5000-users] Reading binary data In-Reply-To: References: <1747B85B-7371-4604-800D-7D3E7CAD4BB5@nrel.gov> <8DB5E0B8-F923-4450-8421-AF2220903306@lav.mavt.ethz.ch> <6B43509E-91FD-40B6-AD21-3B63EE8D7A27@nrel.gov> Message-ID: Sorry -- I'm still not getting it. The first file had mesh coordinates and field information in it; here is what the header shows: #std 4 8 8 8 512 512 0.2000000000000E+02 1000 0 1 XUPS02 The size of this file is 9476232, but I can't figure out what is contributing to that size. For the next field file, the header shows: #std 4 8 8 8 512 512 0.1200000000000E+03 6000 0 1 UPS02 The size of this file is 6318216, which is exactly predicted by the expression below -- all is well. What is the makeup of the 3158016 bytes difference between these files? The mesh coordinates alone are only 512 * 8^3 * 4 * 3 (ndim) = 3145728 bytes Where and what are the remaining 12288 bytes? Thanks. 
On Aug 11, 2010, at 4:09 PM, wrote: > No, if a field file only contains the geometry (mesh coordinates) nfields is equal to ndim (2 or 3). > Stefan > > > On Aug 12, 2010, at 12:01 AM, wrote: > >> Stefan, >> >> Thanks. I figured that, but the binary file size doesn't seem to be adding up. For example, a field file that doesn't have geometry data is the size described by the initial thread, i.e. >> >> 132 bytes + 4 bytes + nel* 4 bytes + nfields*nxyz*nel* wdsizo + nfields*2*nel * 4 bytes >> >> My naive assumption was that a field file with xyz data would then have the additional space requirement of >> >> nx*ny*nz*nel*wdsizo >> >> but the file's bigger than that. What info am I missing? >> >> --Mike >> >> On Aug 11, 2010, at 3:50 PM, wrote: >> >>> Hi Mike, >>> >>> vector fields (e.g. mesh coordinates) are stored in the following way: >>> >>> LOOP over all elements >>> LOOP i = {x,y,z} >>> i for all GLL points (internal element points) >>> ENDLOOP >>> ENDLOOP >>> >>> 2D example with (E=2,N=2): >>> x1_1 x2_1 x3_1 y1_1 y2_1 y3_1 x1_1 x2_2 x3_2 y1_2 y2_2 y3_2 >>> >>> where x2_1 means the x-coordinate of the 2nd GLL point of element 1. >>> >>> hth, >>> Stefan >>> >>> On Aug 11, 2010, at 11:34 PM, wrote: >>> >>>> >>>> Hello All. I found this helpful message from Stefan regarding the structure of a binary field file. Could someone please tell me what the structure is when the geometry info is also contained? >>>> >>>> Thanks! >>>> --Mike >>>> >>>> nek5000-users at lists.mcs.anl.gov nek5000-users at lists.mcs.anl.gov >>>>> Thu May 6 07:18:03 CDT 2010 >>>>> ? Previous message: [Nek5000-users] Reading binary data >>>>> ? Next message: [Nek5000-users] History points >>>>> ? 
Messages sorted by: [ date ] [ thread ] [ subject ] [ author ] >>>>> Hi Fred, >>>>> >>>>> header: 132 bytes >>>>> endian test tag: 4 bytes >>>>> element mapping: nel* 4 bytes >>>>> data: nfields*nxyz*nel* wdsizo (where wdsizo is 4 or 8 bytes) >>>>> metadata (min/max values): nfields*2*nel * 4 bytes >>>>> >>>>> Stefan >>>>> >>>> >>>> _______________________________________________ >>>> Nek5000-users mailing list >>>> Nek5000-users at lists.mcs.anl.gov >>>> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users >>> >>> _______________________________________________ >>> Nek5000-users mailing list >>> Nek5000-users at lists.mcs.anl.gov >>> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users >> >> _______________________________________________ >> Nek5000-users mailing list >> Nek5000-users at lists.mcs.anl.gov >> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users > > _______________________________________________ > Nek5000-users mailing list > Nek5000-users at lists.mcs.anl.gov > https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users From nek5000-users at lists.mcs.anl.gov Thu Aug 12 02:06:45 2010 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Thu, 12 Aug 2010 09:06:45 +0200 Subject: [Nek5000-users] Reading binary data In-Reply-To: References: <1747B85B-7371-4604-800D-7D3E7CAD4BB5@nrel.gov> <8DB5E0B8-F923-4450-8421-AF2220903306@lav.mavt.ethz.ch> <6B43509E-91FD-40B6-AD21-3B63EE8D7A27@nrel.gov> Message-ID: The 12288 bytes is the metadata for all 3 mesh coordinates: ndim * nel * 2 * 4 bytes Stefan On Aug 12, 2010, at 12:21 AM, wrote: > Sorry -- I'm still not getting it. > > The first file had mesh coordinates and field information in it; here is what the header shows: > > #std 4 8 8 8 512 512 0.2000000000000E+02 1000 0 1 XUPS02 > > The size of this file is 9476232, but I can't figure out what is contributing to that size. 
For the next field file, the header shows: > > #std 4 8 8 8 512 512 0.1200000000000E+03 6000 0 1 UPS02 > > The size of this file is 6318216, which is exactly predicted by the expression below -- all is well. > > What is the makeup of the 3158016 bytes difference between these files? The mesh coordinates alone are only > > 512 * 8^3 * 4 * 3 (ndim) = 3145728 bytes > > Where and what are the remaining 12288 bytes? > > Thanks. > > > > On Aug 11, 2010, at 4:09 PM, wrote: > >> No, if a field file only contains the geometry (mesh coordinates) nfields is equal to ndim (2 or 3). >> Stefan >> >> >> On Aug 12, 2010, at 12:01 AM, wrote: >> >>> Stefan, >>> >>> Thanks. I figured that, but the binary file size doesn't seem to be adding up. For example, a field file that doesn't have geometry data is the size described by the initial thread, i.e. >>> >>> 132 bytes + 4 bytes + nel* 4 bytes + nfields*nxyz*nel* wdsizo + nfields*2*nel * 4 bytes >>> >>> My naive assumption was that a field file with xyz data would then have the additional space requirement of >>> >>> nx*ny*nz*nel*wdsizo >>> >>> but the file's bigger than that. What info am I missing? >>> >>> --Mike >>> >>> On Aug 11, 2010, at 3:50 PM, wrote: >>> >>>> Hi Mike, >>>> >>>> vector fields (e.g. mesh coordinates) are stored in the following way: >>>> >>>> LOOP over all elements >>>> LOOP i = {x,y,z} >>>> i for all GLL points (internal element points) >>>> ENDLOOP >>>> ENDLOOP >>>> >>>> 2D example with (E=2,N=2): >>>> x1_1 x2_1 x3_1 y1_1 y2_1 y3_1 x1_1 x2_2 x3_2 y1_2 y2_2 y3_2 >>>> >>>> where x2_1 means the x-coordinate of the 2nd GLL point of element 1. >>>> >>>> hth, >>>> Stefan >>>> >>>> On Aug 11, 2010, at 11:34 PM, wrote: >>>> >>>>> >>>>> Hello All. I found this helpful message from Stefan regarding the structure of a binary field file. Could someone please tell me what the structure is when the geometry info is also contained? >>>>> >>>>> Thanks! 
>>>>> --Mike >>>>> >>>>> nek5000-users at lists.mcs.anl.gov nek5000-users at lists.mcs.anl.gov >>>>>> Thu May 6 07:18:03 CDT 2010 >>>>>> ? Previous message: [Nek5000-users] Reading binary data >>>>>> ? Next message: [Nek5000-users] History points >>>>>> ? Messages sorted by: [ date ] [ thread ] [ subject ] [ author ] >>>>>> Hi Fred, >>>>>> >>>>>> header: 132 bytes >>>>>> endian test tag: 4 bytes >>>>>> element mapping: nel* 4 bytes >>>>>> data: nfields*nxyz*nel* wdsizo (where wdsizo is 4 or 8 bytes) >>>>>> metadata (min/max values): nfields*2*nel * 4 bytes >>>>>> >>>>>> Stefan >>>>>> >>>>> >>>>> _______________________________________________ >>>>> Nek5000-users mailing list >>>>> Nek5000-users at lists.mcs.anl.gov >>>>> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users >>>> >>>> _______________________________________________ >>>> Nek5000-users mailing list >>>> Nek5000-users at lists.mcs.anl.gov >>>> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users >>> >>> _______________________________________________ >>> Nek5000-users mailing list >>> Nek5000-users at lists.mcs.anl.gov >>> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users >> >> _______________________________________________ >> Nek5000-users mailing list >> Nek5000-users at lists.mcs.anl.gov >> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users > > _______________________________________________ > Nek5000-users mailing list > Nek5000-users at lists.mcs.anl.gov > https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users From nek5000-users at lists.mcs.anl.gov Thu Aug 12 08:58:51 2010 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Thu, 12 Aug 2010 07:58:51 -0600 Subject: [Nek5000-users] Reading binary data In-Reply-To: References: <1747B85B-7371-4604-800D-7D3E7CAD4BB5@nrel.gov> <8DB5E0B8-F923-4450-8421-AF2220903306@lav.mavt.ethz.ch> <6B43509E-91FD-40B6-AD21-3B63EE8D7A27@nrel.gov> , Message-ID: 
<0857E5BE05150D42B75A92CBC2B50BD1184E338C18@MAILBOX1.nrel.gov> Stefan, Ok, I see all the contributions to the field file when mesh data is included. The only thing I am still unsure of is the order of the components. Can you please check the order in my list below and tell me what should be moved? Field File with mesh data ============================ 1) header: 132 bytes 2) endian test tag: 4 bytes 3) element mapping: nel* 4 bytes 4) mesh data: nel * nx*ny*nz * ndim * wdsizo 5) field data: nfields*nxyz*nel* wdsizo (where wdsizo is 4 or 8 bytes) 6) field metadata (min/max values): nfields*2*nel * 4 bytes 7) mesh metadata: ndim*ne*2*4 Thanks again for all your help. --Mike ________________________________________ From: nek5000-users-bounces at lists.mcs.anl.gov [nek5000-users-bounces at lists.mcs.anl.gov] On Behalf Of nek5000-users at lists.mcs.anl.gov [nek5000-users at lists.mcs.anl.gov] Sent: Thursday, August 12, 2010 1:06 AM To: nek5000-users at lists.mcs.anl.gov Subject: Re: [Nek5000-users] Reading binary data The 12288 bytes is the metadata for all 3 mesh coordinates: ndim * nel * 2 * 4 bytes Stefan On Aug 12, 2010, at 12:21 AM, wrote: > Sorry -- I'm still not getting it. > > The first file had mesh coordinates and field information in it; here is what the header shows: > > #std 4 8 8 8 512 512 0.2000000000000E+02 1000 0 1 XUPS02 > > The size of this file is 9476232, but I can't figure out what is contributing to that size. For the next field file, the header shows: > > #std 4 8 8 8 512 512 0.1200000000000E+03 6000 0 1 UPS02 > > The size of this file is 6318216, which is exactly predicted by the expression below -- all is well. > > What is the makeup of the 3158016 bytes difference between these files? The mesh coordinates alone are only > > 512 * 8^3 * 4 * 3 (ndim) = 3145728 bytes > > Where and what are the remaining 12288 bytes? > > Thanks. 
> > > > On Aug 11, 2010, at 4:09 PM, wrote: > >> No, if a field file only contains the geometry (mesh coordinates) nfields is equal to ndim (2 or 3). >> Stefan >> >> >> On Aug 12, 2010, at 12:01 AM, wrote: >> >>> Stefan, >>> >>> Thanks. I figured that, but the binary file size doesn't seem to be adding up. For example, a field file that doesn't have geometry data is the size described by the initial thread, i.e. >>> >>> 132 bytes + 4 bytes + nel* 4 bytes + nfields*nxyz*nel* wdsizo + nfields*2*nel * 4 bytes >>> >>> My naive assumption was that a field file with xyz data would then have the additional space requirement of >>> >>> nx*ny*nz*nel*wdsizo >>> >>> but the file's bigger than that. What info am I missing? >>> >>> --Mike >>> >>> On Aug 11, 2010, at 3:50 PM, wrote: >>> >>>> Hi Mike, >>>> >>>> vector fields (e.g. mesh coordinates) are stored in the following way: >>>> >>>> LOOP over all elements >>>> LOOP i = {x,y,z} >>>> i for all GLL points (internal element points) >>>> ENDLOOP >>>> ENDLOOP >>>> >>>> 2D example with (E=2,N=2): >>>> x1_1 x2_1 x3_1 y1_1 y2_1 y3_1 x1_1 x2_2 x3_2 y1_2 y2_2 y3_2 >>>> >>>> where x2_1 means the x-coordinate of the 2nd GLL point of element 1. >>>> >>>> hth, >>>> Stefan >>>> >>>> On Aug 11, 2010, at 11:34 PM, wrote: >>>> >>>>> >>>>> Hello All. I found this helpful message from Stefan regarding the structure of a binary field file. Could someone please tell me what the structure is when the geometry info is also contained? >>>>> >>>>> Thanks! >>>>> --Mike >>>>> >>>>> nek5000-users at lists.mcs.anl.gov nek5000-users at lists.mcs.anl.gov >>>>>> Thu May 6 07:18:03 CDT 2010 >>>>>> ? Previous message: [Nek5000-users] Reading binary data >>>>>> ? Next message: [Nek5000-users] History points >>>>>> ? 
Messages sorted by: [ date ] [ thread ] [ subject ] [ author ] >>>>>> Hi Fred, >>>>>> >>>>>> header: 132 bytes >>>>>> endian test tag: 4 bytes >>>>>> element mapping: nel* 4 bytes >>>>>> data: nfields*nxyz*nel* wdsizo (where wdsizo is 4 or 8 bytes) >>>>>> metadata (min/max values): nfields*2*nel * 4 bytes >>>>>> >>>>>> Stefan >>>>>> >>>>> >>>>> _______________________________________________ >>>>> Nek5000-users mailing list >>>>> Nek5000-users at lists.mcs.anl.gov >>>>> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users >>>> >>>> _______________________________________________ >>>> Nek5000-users mailing list >>>> Nek5000-users at lists.mcs.anl.gov >>>> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users >>> >>> _______________________________________________ >>> Nek5000-users mailing list >>> Nek5000-users at lists.mcs.anl.gov >>> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users >> >> _______________________________________________ >> Nek5000-users mailing list >> Nek5000-users at lists.mcs.anl.gov >> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users > > _______________________________________________ > Nek5000-users mailing list > Nek5000-users at lists.mcs.anl.gov > https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users _______________________________________________ Nek5000-users mailing list Nek5000-users at lists.mcs.anl.gov https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users From nek5000-users at lists.mcs.anl.gov Thu Aug 12 10:20:46 2010 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Thu, 12 Aug 2010 17:20:46 +0200 Subject: [Nek5000-users] Reading binary data In-Reply-To: <0857E5BE05150D42B75A92CBC2B50BD1184E338C18@MAILBOX1.nrel.gov> References: <1747B85B-7371-4604-800D-7D3E7CAD4BB5@nrel.gov> <8DB5E0B8-F923-4450-8421-AF2220903306@lav.mavt.ethz.ch> <6B43509E-91FD-40B6-AD21-3B63EE8D7A27@nrel.gov> , <0857E5BE05150D42B75A92CBC2B50BD1184E338C18@MAILBOX1.nrel.gov> Message-ID: 
<28FDCE36-3D3D-4F69-B46A-BE7263225F78@lav.mavt.ethz.ch> Hi Mike, I guess the name nfields is a little bit misleading. It is actually the number of variables to dump (the mesh counts as ndim variables, same thing for the velocity) e.g. XUPTS03 => nfields = ndim + ndim + 5. The variable encoder (here: XUPTS03) has a fixed order and represents the order of the variables in the file. File Layout: header: 132 bytes endian test tag: 4 bytes element mapping: nel* 4 bytes data: nfields*nxyz*nel* wdsizo metadata (min/max values): nfields*2*nel * 4 bytes Does that make sense? Stefan On Aug 12, 2010, at 3:58 PM, wrote: > Stefan, > > Ok, I see all the contributions to the field file when mesh data is included. The only thing I am still unsure of is the order of the components. Can you please check the order in my list below and tell me what should be moved? > > Field File with mesh data > ============================ > 1) header: 132 bytes > 2) endian test tag: 4 bytes > 3) element mapping: nel* 4 bytes > 4) mesh data: nel * nx*ny*nz * ndim * wdsizo > 5) field data: nfields*nxyz*nel* wdsizo (where wdsizo is 4 or 8 bytes) > 6) field metadata (min/max values): nfields*2*nel * 4 bytes > 7) mesh metadata: ndim*nel*2*4 > > Thanks again for all your help. > > --Mike > > ________________________________________ > From: nek5000-users-bounces at lists.mcs.anl.gov [nek5000-users-bounces at lists.mcs.anl.gov] On Behalf Of nek5000-users at lists.mcs.anl.gov [nek5000-users at lists.mcs.anl.gov] > Sent: Thursday, August 12, 2010 1:06 AM > To: nek5000-users at lists.mcs.anl.gov > Subject: Re: [Nek5000-users] Reading binary data > > The 12288 bytes is the metadata for all 3 mesh coordinates: > ndim * nel * 2 * 4 bytes > > Stefan > > On Aug 12, 2010, at 12:21 AM, wrote: > >> Sorry -- I'm still not getting it. 
>> >> The first file had mesh coordinates and field information in it; here is what the header shows: >> >> #std 4 8 8 8 512 512 0.2000000000000E+02 1000 0 1 XUPS02 >> >> The size of this file is 9476232, but I can't figure out what is contributing to that size. For the next field file, the header shows: >> >> #std 4 8 8 8 512 512 0.1200000000000E+03 6000 0 1 UPS02 >> >> The size of this file is 6318216, which is exactly predicted by the expression below -- all is well. >> >> What is the makeup of the 3158016 bytes difference between these files? The mesh coordinates alone are only >> >> 512 * 8^3 * 4 * 3 (ndim) = 3145728 bytes >> >> Where and what are the remaining 12288 bytes? >> >> Thanks. >> >> >> >> On Aug 11, 2010, at 4:09 PM, wrote: >> >>> No, if a field file only contains the geometry (mesh coordinates) nfields is equal to ndim (2 or 3). >>> Stefan >>> >>> >>> On Aug 12, 2010, at 12:01 AM, wrote: >>> >>>> Stefan, >>>> >>>> Thanks. I figured that, but the binary file size doesn't seem to be adding up. For example, a field file that doesn't have geometry data is the size described by the initial thread, i.e. >>>> >>>> 132 bytes + 4 bytes + nel* 4 bytes + nfields*nxyz*nel* wdsizo + nfields*2*nel * 4 bytes >>>> >>>> My naive assumption was that a field file with xyz data would then have the additional space requirement of >>>> >>>> nx*ny*nz*nel*wdsizo >>>> >>>> but the file's bigger than that. What info am I missing? >>>> >>>> --Mike >>>> >>>> On Aug 11, 2010, at 3:50 PM, wrote: >>>> >>>>> Hi Mike, >>>>> >>>>> vector fields (e.g. mesh coordinates) are stored in the following way: >>>>> >>>>> LOOP over all elements >>>>> LOOP i = {x,y,z} >>>>> i for all GLL points (internal element points) >>>>> ENDLOOP >>>>> ENDLOOP >>>>> >>>>> 2D example with (E=2,N=2): >>>>> x1_1 x2_1 x3_1 y1_1 y2_1 y3_1 x1_1 x2_2 x3_2 y1_2 y2_2 y3_2 >>>>> >>>>> where x2_1 means the x-coordinate of the 2nd GLL point of element 1. 
>>>>> >>>>> hth, >>>>> Stefan >>>>> >>>>> On Aug 11, 2010, at 11:34 PM, wrote: >>>>> >>>>>> >>>>>> Hello All. I found this helpful message from Stefan regarding the structure of a binary field file. Could someone please tell me what the structure is when the geometry info is also contained? >>>>>> >>>>>> Thanks! >>>>>> --Mike >>>>>> >>>>>> nek5000-users at lists.mcs.anl.gov nek5000-users at lists.mcs.anl.gov >>>>>>> Thu May 6 07:18:03 CDT 2010 >>>>>>> ? Previous message: [Nek5000-users] Reading binary data >>>>>>> ? Next message: [Nek5000-users] History points >>>>>>> ? Messages sorted by: [ date ] [ thread ] [ subject ] [ author ] >>>>>>> Hi Fred, >>>>>>> >>>>>>> header: 132 bytes >>>>>>> endian test tag: 4 bytes >>>>>>> element mapping: nel* 4 bytes >>>>>>> data: nfields*nxyz*nel* wdsizo (where wdsizo is 4 or 8 bytes) >>>>>>> metadata (min/max values): nfields*2*nel * 4 bytes >>>>>>> >>>>>>> Stefan >>>>>>> >>>>>> >>>>>> _______________________________________________ >>>>>> Nek5000-users mailing list >>>>>> Nek5000-users at lists.mcs.anl.gov >>>>>> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users >>>>> >>>>> _______________________________________________ >>>>> Nek5000-users mailing list >>>>> Nek5000-users at lists.mcs.anl.gov >>>>> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users >>>> >>>> _______________________________________________ >>>> Nek5000-users mailing list >>>> Nek5000-users at lists.mcs.anl.gov >>>> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users >>> >>> _______________________________________________ >>> Nek5000-users mailing list >>> Nek5000-users at lists.mcs.anl.gov >>> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users >> >> _______________________________________________ >> Nek5000-users mailing list >> Nek5000-users at lists.mcs.anl.gov >> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users > > _______________________________________________ > Nek5000-users mailing list > Nek5000-users at lists.mcs.anl.gov 
> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users > _______________________________________________ > Nek5000-users mailing list > Nek5000-users at lists.mcs.anl.gov > https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users From nek5000-users at lists.mcs.anl.gov Fri Aug 13 13:27:40 2010 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Fri, 13 Aug 2010 12:27:40 -0600 Subject: [Nek5000-users] Reading binary data In-Reply-To: <8DB5E0B8-F923-4450-8421-AF2220903306@lav.mavt.ethz.ch> References: <1747B85B-7371-4604-800D-7D3E7CAD4BB5@nrel.gov> <8DB5E0B8-F923-4450-8421-AF2220903306@lav.mavt.ethz.ch> Message-ID: Stefan, I'm still having some problems reading the field data properly. So, please consider a field file that has BOTH mesh data and field data, and assume there are three fields stored (u,v,w,p). After the initial file data (header: 132 bytes, endian 4 bytes, mapping: nel* 4 bytes), does the mesh data come in single chunk before field data (as discussed in your message below): LOOP over all elements LOOP i = {x,y,z} i for all GLL points (internal element points) ENDLOOP ENDLOOP If that is indeed all before the field data, is the field data then organized as LOOP over all elements LOOP i = three fields i for all GLL points (internal element points) ENDLOOP ENDLOOP or, is it LOOP over all elements LOOP i = GLL points i for all fields ENDLOOP ENDLOOP ?? Is there by chance a detailed description for the format of the field file somewhere that I am missing? --Mike On Aug 11, 2010, at 3:50 PM, wrote: > Hi Mike, > > vector fields (e.g. mesh coordinates) are stored in the following way: > > LOOP over all elements > LOOP i = {x,y,z} > i for all GLL points (internal element points) > ENDLOOP > ENDLOOP > > 2D example with (E=2,N=2): > x1_1 x2_1 x3_1 y1_1 y2_1 y3_1 x1_1 x2_2 x3_2 y1_2 y2_2 y3_2 > > where x2_1 means the x-coordinate of the 2nd GLL point of element 1. 
> > hth, > Stefan > > On Aug 11, 2010, at 11:34 PM, wrote: > >> >> Hello All. I found this helpful message from Stefan regarding the structure of a binary field file. Could someone please tell me what the structure is when the geometry info is also contained? >> >> Thanks! >> --Mike >> >> nek5000-users at lists.mcs.anl.gov nek5000-users at lists.mcs.anl.gov >>> Thu May 6 07:18:03 CDT 2010 >>> ? Previous message: [Nek5000-users] Reading binary data >>> ? Next message: [Nek5000-users] History points >>> ? Messages sorted by: [ date ] [ thread ] [ subject ] [ author ] >>> Hi Fred, >>> >>> header: 132 bytes >>> endian test tag: 4 bytes >>> element mapping: nel* 4 bytes >>> data: nfields*nxyz*nel* wdsizo (where wdsizo is 4 or 8 bytes) >>> metadata (min/max values): nfields*2*nel * 4 bytes >>> >>> Stefan >>> >> >> _______________________________________________ >> Nek5000-users mailing list >> Nek5000-users at lists.mcs.anl.gov >> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users > > _______________________________________________ > Nek5000-users mailing list > Nek5000-users at lists.mcs.anl.gov > https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users From nek5000-users at lists.mcs.anl.gov Fri Aug 13 16:28:50 2010 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Fri, 13 Aug 2010 16:28:50 -0500 (CDT) Subject: [Nek5000-users] Reading binary data In-Reply-To: References: <1747B85B-7371-4604-800D-7D3E7CAD4BB5@nrel.gov> <8DB5E0B8-F923-4450-8421-AF2220903306@lav.mavt.ethz.ch> Message-ID: Hi Mike, I was able to track this down by cd nek5_svn/trunk/nek edit prepost.f search for ifxyo Near one of these I find: call mfo_outv(xm1,ym1,zm1,nout,nxo,nyo,nzo) and looking at the mfo_outv() [ multifile-output, output a vector ] routine, I find: j = 0 if (wdsizo.eq.4) then ! 
32-bit output do iel = 1,nel call copyx4 (u4(j+1),u(1,iel),nxyz) j = j + nxyz call copyx4 (u4(j+1),v(1,iel),nxyz) j = j + nxyz if(if3d) then call copyx4 (u4(j+1),w(1,iel),nxyz) j = j + nxyz which indicates that the vector fields are output a vx(:,:,:,1) nxyz points (32 bits/pt) vy(:,:,:,1) nxyz points vz(:,:,:,1) nxyz points vx(:,:,:,2) nxyz points vy(:,:,:,2) nxyz points vz(:,:,:,2) nxyz points : vx(:,:,:,nel) nxyz points vy(:,:,:,nel) nxyz points vz(:,:,:,nel) nxyz points etc. So, you would likely see, xyz (interleaved, as above) uvw (interleaved, as above) p (entire field) T (entire field) PS1 (entire field) PS2 (entire field) etc. Note that the order of the elements will likely not be sequential unless you are running on one processor. It will instead be ordered according to the header info. Paul On Fri, 13 Aug 2010, nek5000-users at lists.mcs.anl.gov wrote: > Stefan, > > I'm still having some problems reading the field data properly. So, please consider a field file that has BOTH mesh data and field data, and assume there are three fields stored (u,v,w,p). > > After the initial file data (header: 132 bytes, endian 4 bytes, mapping: nel* 4 bytes), does the mesh data come in single chunk before field data (as discussed in your message below): > > LOOP over all elements > LOOP i = {x,y,z} > i for all GLL points (internal element points) > ENDLOOP > ENDLOOP > > If that is indeed all before the field data, is the field data then organized as > > LOOP over all elements > LOOP i = three fields > i for all GLL points (internal element points) > ENDLOOP > ENDLOOP > > or, is it > > LOOP over all elements > LOOP i = GLL points > i for all fields > ENDLOOP > ENDLOOP > > ?? > > Is there by chance a detailed description for the format of the field file somewhere that I am missing? > > --Mike > > On Aug 11, 2010, at 3:50 PM, wrote: > >> Hi Mike, >> >> vector fields (e.g. 
mesh coordinates) are stored in the following way: >> >> LOOP over all elements >> LOOP i = {x,y,z} >> i for all GLL points (internal element points) >> ENDLOOP >> ENDLOOP >> >> 2D example with (E=2,N=2): >> x1_1 x2_1 x3_1 y1_1 y2_1 y3_1 x1_1 x2_2 x3_2 y1_2 y2_2 y3_2 >> >> where x2_1 means the x-coordinate of the 2nd GLL point of element 1. >> >> hth, >> Stefan >> >> On Aug 11, 2010, at 11:34 PM, wrote: >> >>> >>> Hello All. I found this helpful message from Stefan regarding the structure of a binary field file. Could someone please tell me what the structure is when the geometry info is also contained? >>> >>> Thanks! >>> --Mike >>> >>> nek5000-users at lists.mcs.anl.gov nek5000-users at lists.mcs.anl.gov >>>> Thu May 6 07:18:03 CDT 2010 >>>> ? Previous message: [Nek5000-users] Reading binary data >>>> ? Next message: [Nek5000-users] History points >>>> ? Messages sorted by: [ date ] [ thread ] [ subject ] [ author ] >>>> Hi Fred, >>>> >>>> header: 132 bytes >>>> endian test tag: 4 bytes >>>> element mapping: nel* 4 bytes >>>> data: nfields*nxyz*nel* wdsizo (where wdsizo is 4 or 8 bytes) >>>> metadata (min/max values): nfields*2*nel * 4 bytes >>>> >>>> Stefan >>>> >>> >>> _______________________________________________ >>> Nek5000-users mailing list >>> Nek5000-users at lists.mcs.anl.gov >>> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users >> >> _______________________________________________ >> Nek5000-users mailing list >> Nek5000-users at lists.mcs.anl.gov >> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users > > _______________________________________________ > Nek5000-users mailing list > Nek5000-users at lists.mcs.anl.gov > https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users > From nek5000-users at lists.mcs.anl.gov Fri Aug 13 23:42:44 2010 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Fri, 13 Aug 2010 23:42:44 -0500 Subject: [Nek5000-users] Smooth Restart feature in Nek In-Reply-To: References: 
Message-ID: Hi Paul, My apologies for the late reply. I just checked by making a trial case and it works. Thanks a lot . Regards Shriram -------------- next part -------------- An HTML attachment was scrubbed... URL: From nek5000-users at lists.mcs.anl.gov Sat Aug 14 18:49:10 2010 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Sat, 14 Aug 2010 18:49:10 -0500 Subject: [Nek5000-users] Uservp and drag_calc routine Message-ID: Hello, I turned on uservp through param(30) in rea file and found that it messes up my output from drag_calc(). Shown below are outputs with and w/o uservp [ Repo : 560 ] . Is there any specific flag for calculating integral quantities with uservp ? With Uservp : ----------------- Calculating eddy visosity ediff max :-> 1.41399999999999989E-003 5 2.2500E-02 3.0084E+00 9.7050E-01 3.0084E+00 9.7050E-01 umx 5 2.25000E-02 1.50000E+00 3.46825E-01 6.48743E+00 6.48743E+00 cdiv 5 1377 0.000E+00 6.316E+00 0.000E+00 4.436E-01 6.487E+00 1.203E-01divmnmx 5 2.2500000E-02 0.0000000E+00 0.0000000E+00 6.5247004E-17 drgxt***** 5 2.2500000E-02 0.0000000E+00 0.0000000E+00 1.0521227E-20 drgyt***** 5 2.2500000E-02 0.0000000E+00 0.0000000E+00 -1.0486441E-20 drgzt***** Without Uservp : --------------------- ediff max :-> 1.41399999999999989E-003 5 2.2500E-02 3.0107E+00 9.7141E-01 3.0107E+00 9.7141E-01 umx 5 2.25000E-02 1.50000E+00 3.46825E-01 6.48743E+00 6.48743E+00 cdiv 5 1377 0.000E+00 6.316E+00 0.000E+00 4.436E-01 6.487E+00 1.203E-01divmnmx 5 2.2500000E-02 1.3228757E+00 1.0841687E+00 2.3870700E-01 dragx 1 5 2.2500000E-02 -1.2363700E-04 -1.2361744E-04 -1.9553962E-08 dragy 1 5 2.2500000E-02 1.6762005E-15 1.5786015E-15 9.7598988E-17 dragz 1 check: 6.28249446E+00 3.43007686E-16 0.00000000E+00 5 5 2.2500000E-02 1.3228757E+00 1.0841687E+00 2.3870700E-01 drgxt 1 5 2.2500000E-02 -1.2363700E-04 -1.2361744E-04 -1.9553962E-08 drgyt 1 5 2.2500000E-02 1.6762005E-15 1.5786015E-15 9.7598988E-17 drgzt 1 Please let me know if rea/usr file is 
needed. Thanks. Regards Shriram -------------- next part -------------- An HTML attachment was scrubbed... URL: From nek5000-users at lists.mcs.anl.gov Sat Aug 14 18:51:54 2010 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Sat, 14 Aug 2010 18:51:54 -0500 (CDT) Subject: [Nek5000-users] Uservp and drag_calc routine In-Reply-To: References: Message-ID: Are you actually using a variable viscosity ? I don't know that I've coded up the drag for variable viscosity yet. Also, I prefer torq_calc() to drag_calc() --- the torq routine is somewhat newer and drag comes for free. Paul On Sat, 14 Aug 2010, nek5000-users at lists.mcs.anl.gov wrote: > Hello, > > I turned on uservp through param(30) in rea file and found that it messes up > my output from drag_calc(). Shown below are outputs with and w/o uservp [ > Repo : 560 ] . > Is there any specific flag for calculating integral quantities with uservp ? > > With Uservp : > ----------------- > > Calculating eddy visosity > ediff max :-> 1.41399999999999989E-003 > 5 2.2500E-02 3.0084E+00 9.7050E-01 3.0084E+00 9.7050E-01 umx > 5 2.25000E-02 1.50000E+00 3.46825E-01 6.48743E+00 6.48743E+00 > cdiv > 5 1377 0.000E+00 6.316E+00 0.000E+00 4.436E-01 6.487E+00 > 1.203E-01divmnmx > 5 2.2500000E-02 0.0000000E+00 0.0000000E+00 6.5247004E-17 > drgxt***** > 5 2.2500000E-02 0.0000000E+00 0.0000000E+00 1.0521227E-20 > drgyt***** > 5 2.2500000E-02 0.0000000E+00 0.0000000E+00 -1.0486441E-20 > drgzt***** > > Without Uservp : > --------------------- > > ediff max :-> 1.41399999999999989E-003 > 5 2.2500E-02 3.0107E+00 9.7141E-01 3.0107E+00 9.7141E-01 umx > 5 2.25000E-02 1.50000E+00 3.46825E-01 6.48743E+00 6.48743E+00 > cdiv > 5 1377 0.000E+00 6.316E+00 0.000E+00 4.436E-01 6.487E+00 > 1.203E-01divmnmx > 5 2.2500000E-02 1.3228757E+00 1.0841687E+00 2.3870700E-01 dragx > 1 > 5 2.2500000E-02 -1.2363700E-04 -1.2361744E-04 -1.9553962E-08 dragy > 1 > 5 2.2500000E-02 1.6762005E-15 1.5786015E-15 9.7598988E-17 dragz > 1 > 
check: 6.28249446E+00 3.43007686E-16 0.00000000E+00 5 > 5 2.2500000E-02 1.3228757E+00 1.0841687E+00 2.3870700E-01 drgxt > 1 > 5 2.2500000E-02 -1.2363700E-04 -1.2361744E-04 -1.9553962E-08 drgyt > 1 > 5 2.2500000E-02 1.6762005E-15 1.5786015E-15 9.7598988E-17 drgzt > 1 > > Please let me know if rea/usr file is needed. Thanks. > > Regards > Shriram > From nek5000-users at lists.mcs.anl.gov Sat Aug 14 19:40:07 2010 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Sat, 14 Aug 2010 19:40:07 -0500 Subject: [Nek5000-users] Uservp and drag_calc routine In-Reply-To: References: Message-ID: Paul, Yes. I am using a eddy viscosity model, similar to the one in turbchannel. I just tried using the torq_calc() but couldn't get it to print the drag values to logfile. This is how I tried. logical if_drag_out,if_torq_out real x0(ldim) data x0 /ldim*0/ scale = 1.0 if_drag_out = .true. if_torq_out = .false. call torque_calc(scale,x0,if_drag_out,if_torq_out) The full_restart.usr file served as a reference for me. Am I missing something ? Regards Shriram -------------- next part -------------- An HTML attachment was scrubbed... URL: From nek5000-users at lists.mcs.anl.gov Sat Aug 14 20:57:52 2010 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Sat, 14 Aug 2010 20:57:52 -0500 (CDT) Subject: [Nek5000-users] Uservp and drag_calc routine In-Reply-To: References: Message-ID: Shriram, I was getting drag values printed for my case... you need to also set the obj definitions. I did this through a routine called set_obj, which was in the same .usr file sent earlier ? Paul On Sat, 14 Aug 2010, nek5000-users at lists.mcs.anl.gov wrote: > Paul, > > Yes. I am using a eddy viscosity model, similar to the one in turbchannel. > > I just tried using the torq_calc() but couldn't get it to print the drag > values to logfile. This is how I tried. 
> > logical if_drag_out,if_torq_out > real x0(ldim) > data x0 /ldim*0/ > > scale = 1.0 > if_drag_out = .true. > if_torq_out = .false. > call torque_calc(scale,x0,if_drag_out,if_torq_out) > > The full_restart.usr file served as a reference for me. Am I missing > something ? > > > Regards > Shriram > From nek5000-users at lists.mcs.anl.gov Sat Aug 14 21:14:05 2010 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Sat, 14 Aug 2010 21:14:05 -0500 Subject: [Nek5000-users] Uservp and drag_calc routine In-Reply-To: References: Message-ID: Paul, Thanks for the reply. Yes. I set my wall as an obj through the same routine. It prints the value if param(30) is 0 but doesn't if param(30) is 1. I looked at the drag_calc() routine and found that it works for constant viscosity since param(2) is initialized to all the points while torq_calc allows variable viscosity through visc array and so thought it might work with param(30)=1. Regards Shriram -------------- next part -------------- An HTML attachment was scrubbed... URL: From nek5000-users at lists.mcs.anl.gov Sat Aug 14 21:44:49 2010 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Sat, 14 Aug 2010 21:44:49 -0500 (CDT) Subject: [Nek5000-users] Uservp and drag_calc routine In-Reply-To: References: Message-ID: Interesting... I'll try this. Paul On Sat, 14 Aug 2010, nek5000-users at lists.mcs.anl.gov wrote: > Paul, > > Thanks for the reply. > > Yes. I set my wall as an obj through the same routine. It prints the value > if param(30) is 0 but doesn't if param(30) is 1. > > I looked at the drag_calc() routine and found that it works for constant > viscosity since param(2) is initialized to all the points while torq_calc > allows variable viscosity through visc array and so thought it might work > with param(30)=1. 
> > Regards > Shriram > From nek5000-users at lists.mcs.anl.gov Sun Aug 15 20:43:55 2010 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Sun, 15 Aug 2010 20:43:55 -0500 (CDT) Subject: [Nek5000-users] Uservp and drag_calc routine In-Reply-To: References: Message-ID: Shriram, I'm not having this difficulty either with T IFUSERVP nor with p30=1 in the .rea file. If you'd like to tar up .rea/.usr/SIZE and send them to me off-list I can take a look. Paul On Sat, 14 Aug 2010, nek5000-users at lists.mcs.anl.gov wrote: > > Interesting... I'll try this. > > Paul > > > On Sat, 14 Aug 2010, nek5000-users at lists.mcs.anl.gov wrote: > >> Paul, >> >> Thanks for the reply. >> >> Yes. I set my wall as an obj through the same routine. It prints the value >> if param(30) is 0 but doesn't if param(30) is 1. >> >> I looked at the drag_calc() routine and found that it works for constant >> viscosity since param(2) is initialized to all the points while torq_calc >> allows variable viscosity through visc array and so thought it might work >> with param(30)=1. >> >> Regards >> Shriram >> > _______________________________________________ > Nek5000-users mailing list > Nek5000-users at lists.mcs.anl.gov > https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users > From nek5000-users at lists.mcs.anl.gov Mon Aug 16 10:25:44 2010 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Mon, 16 Aug 2010 10:25:44 -0500 (CDT) Subject: [Nek5000-users] userf / pressure drop BC Message-ID: <1788493858.114051281972344910.JavaMail.root@neo-mail-3.tamu.edu> Hello, I am having an issue with one of my simulations, where the flow wants to change directions. A brief description for the domain is: - Two inlets, one in the x-direction, the other in the z-direction. - The two inlets "feed into the main domain"... that is I give both an initial condition, and then apply a recycling BC to each individually. 
My question is: Is there a simple fix for this by use of the userf routine in the user file? I haven't used it before. What I ideally would like to do, is apply some pressure drop across the inlets or something to keep the flow from changing directions. I messed around with the initial conditions, and through some tweaking was able to get one of the inlets to work, but would like something more robust. Sorry if this doubles up on a previous question, I couldn't find any old messages about this and also I think the archive is down on the website. - Michael M. -------------- next part -------------- An HTML attachment was scrubbed... URL: From nek5000-users at lists.mcs.anl.gov Mon Aug 16 10:43:26 2010 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Mon, 16 Aug 2010 10:43:26 -0500 (CDT) Subject: [Nek5000-users] userf / pressure drop BC In-Reply-To: <1788493858.114051281972344910.JavaMail.root@neo-mail-3.tamu.edu> References: <1788493858.114051281972344910.JavaMail.root@neo-mail-3.tamu.edu> Message-ID: Michael, Depending on your domain, this should be possible with recycling and/or periodic bcs. It's crucial to understand your outflow conditions... If you tar your .usr/.rea/SIZE file and send off-list I'll take a look with Aleks, who's been working on the recycling stuff. Thanks, Paul On Mon, 16 Aug 2010, nek5000-users at lists.mcs.anl.gov wrote: > Hello, > > I am having an issue with one of my simulations, where the flow wants to change directions. A brief description for the domain is: > > - Two inlets, one in the x-direction, the other in the z-direction. > - The two inlets "feed into the main domain"... that is I give both an initial condition, and then apply a recycling BC to each individually. > > My question is: Is there a simple fix for this by use of the userf routine in the user file? I haven't used it before. 
What I ideally would like to do, is apply some pressure drop across the inlets or something to keep the flow from changing directions. I messed around with the initial conditions, and through some tweaking was able to get one of the inlets to work, but would like something more robust. > > Sorry if this doubles up on a previous question, I couldn't find any old messages about this and also I think the archive is down on the website. > > - Michael M. > > From nek5000-users at lists.mcs.anl.gov Tue Aug 17 11:40:06 2010 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Tue, 17 Aug 2010 10:40:06 -0600 Subject: [Nek5000-users] Reading binary data In-Reply-To: References: <1747B85B-7371-4604-800D-7D3E7CAD4BB5@nrel.gov> <8DB5E0B8-F923-4450-8421-AF2220903306@lav.mavt.ethz.ch> Message-ID: Paul, That info did the trick -- Thanks!! --Mike On Aug 13, 2010, at 3:28 PM, wrote: > > Hi Mike, > > I was able to track this down by > > cd nek5_svn/trunk/nek > > edit prepost.f > > search for ifxyo > > Near one of these I find: > > call mfo_outv(xm1,ym1,zm1,nout,nxo,nyo,nzo) > > > and looking at the mfo_outv() [ multifile-output, > output a vector ] routine, I find: > > > j = 0 > if (wdsizo.eq.4) then ! 32-bit output > do iel = 1,nel > call copyx4 (u4(j+1),u(1,iel),nxyz) > j = j + nxyz > call copyx4 (u4(j+1),v(1,iel),nxyz) > j = j + nxyz > if(if3d) then > call copyx4 (u4(j+1),w(1,iel),nxyz) > j = j + nxyz > > > which indicates that the vector fields are output a > > vx(:,:,:,1) nxyz points (32 bits/pt) > vy(:,:,:,1) nxyz points > vz(:,:,:,1) nxyz points > vx(:,:,:,2) nxyz points > vy(:,:,:,2) nxyz points > vz(:,:,:,2) nxyz points > : > vx(:,:,:,nel) nxyz points > vy(:,:,:,nel) nxyz points > vz(:,:,:,nel) nxyz points > > etc. > > So, you would likely see, > > xyz (interleaved, as above) > uvw (interleaved, as above) > p (entire field) > T (entire field) > PS1 (entire field) > PS2 (entire field) > > etc. 
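The interleaving Paul describes (per element: nxyz x-values, then nxyz y-values, then nxyz z-values) can be unpacked with a minimal sketch. Pure Python for illustration; the helper name is ours, not a Nek5000 routine, and a real reader would combine this with struct/numpy and the byte offsets from the header:

```python
def deinterleave(flat, nel, nxyz, ndim):
    """Split a flat element-interleaved vector buffer into one
    list per component (x, y, z), in element order."""
    comps = [[] for _ in range(ndim)]
    k = 0
    for _ in range(nel):
        for d in range(ndim):          # x block, then y block, (then z block)
            comps[d].extend(flat[k:k + nxyz])
            k += nxyz
    return comps

# Tiny 2D example: 2 elements, 3 points each
flat = [1, 2, 3, 10, 20, 30, 4, 5, 6, 40, 50, 60]
x, y = deinterleave(flat, nel=2, nxyz=3, ndim=2)
print(x)  # [1, 2, 3, 4, 5, 6]
print(y)  # [10, 20, 30, 40, 50, 60]
```

The scalar fields (p, T, PS1, ...) that follow need no de-interleaving; each is a contiguous run of nxyz*nel values.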
> > Note that the order of the elements will likely not be > sequential unless you are running on one processor. It > will instead be ordered according to the header info. > > Paul > > > > > > > On Fri, 13 Aug 2010, nek5000-users at lists.mcs.anl.gov wrote: > >> Stefan, >> >> I'm still having some problems reading the field data properly. So, please consider a field file that has BOTH mesh data and field data, and assume there are three fields stored (u,v,w,p). >> >> After the initial file data (header: 132 bytes, endian 4 bytes, mapping: nel* 4 bytes), does the mesh data come in single chunk before field data (as discussed in your message below): >> >> LOOP over all elements >> LOOP i = {x,y,z} >> i for all GLL points (internal element points) >> ENDLOOP >> ENDLOOP >> >> If that is indeed all before the field data, is the field data then organized as >> >> LOOP over all elements >> LOOP i = three fields >> i for all GLL points (internal element points) >> ENDLOOP >> ENDLOOP >> >> or, is it >> >> LOOP over all elements >> LOOP i = GLL points >> i for all fields >> ENDLOOP >> ENDLOOP >> >> ?? >> >> Is there by chance a detailed description for the format of the field file somewhere that I am missing? >> >> --Mike >> >> On Aug 11, 2010, at 3:50 PM, wrote: >> >>> Hi Mike, >>> >>> vector fields (e.g. mesh coordinates) are stored in the following way: >>> >>> LOOP over all elements >>> LOOP i = {x,y,z} >>> i for all GLL points (internal element points) >>> ENDLOOP >>> ENDLOOP >>> >>> 2D example with (E=2,N=2): >>> x1_1 x2_1 x3_1 y1_1 y2_1 y3_1 x1_1 x2_2 x3_2 y1_2 y2_2 y3_2 >>> >>> where x2_1 means the x-coordinate of the 2nd GLL point of element 1. >>> >>> hth, >>> Stefan >>> >>> On Aug 11, 2010, at 11:34 PM, wrote: >>> >>>> >>>> Hello All. I found this helpful message from Stefan regarding the structure of a binary field file. Could someone please tell me what the structure is when the geometry info is also contained? >>>> >>>> Thanks! 
>>>> --Mike >>>> >>>> nek5000-users at lists.mcs.anl.gov nek5000-users at lists.mcs.anl.gov >>>>> Thu May 6 07:18:03 CDT 2010 >>>>> ? Previous message: [Nek5000-users] Reading binary data >>>>> ? Next message: [Nek5000-users] History points >>>>> ? Messages sorted by: [ date ] [ thread ] [ subject ] [ author ] >>>>> Hi Fred, >>>>> >>>>> header: 132 bytes >>>>> endian test tag: 4 bytes >>>>> element mapping: nel* 4 bytes >>>>> data: nfields*nxyz*nel* wdsizo (where wdsizo is 4 or 8 bytes) >>>>> metadata (min/max values): nfields*2*nel * 4 bytes >>>>> >>>>> Stefan >>>>> >>>> >>>> _______________________________________________ >>>> Nek5000-users mailing list >>>> Nek5000-users at lists.mcs.anl.gov >>>> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users >>> >>> _______________________________________________ >>> Nek5000-users mailing list >>> Nek5000-users at lists.mcs.anl.gov >>> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users >> >> _______________________________________________ >> Nek5000-users mailing list >> Nek5000-users at lists.mcs.anl.gov >> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users > From nek5000-users at lists.mcs.anl.gov Thu Aug 19 10:55:23 2010 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Thu, 19 Aug 2010 09:55:23 -0600 (GMT-06:00) Subject: [Nek5000-users] postx & movie In-Reply-To: <1278183117.8879.7.camel@localhost.localdomain> Message-ID: <2138174108.1082321282233323238.JavaMail.root@zimbra.anl.gov> Hi Frank, If you still have the following problem then I think you can try to modify the following line in postnek9.f (line 939): call grab_window_raw(xpmn,ypmn,xpmx,ypmx,'raw\0') Depending on the compiler it is treated differently so at some point we will have better solution. 
Best, Aleks ----- Original Message ----- From: nek5000-users at lists.mcs.anl.gov To: nek5000-users at lists.mcs.anl.gov Sent: Saturday, July 3, 2010 1:51:57 PM GMT -06:00 US/Canada Central Subject: [Nek5000-users] postx & movie Hello all, When trying to make a movie using "postx", I am running into the following sorts of errors (output to the terminal from "postx"): convert -depth 8 -size 0590x0075 raw.rgb movie00085.gif convert: unable to open image `raw.rgb': No such file or directory. In the directory where "postx" is running, a "raw\0.rgb" appears but not a "raw.rgb". Cheers, Frank -- Frank Herbert Muldoon, Ph.D. Mechanical Engineering Technische Universität Wien (Technical University of Vienna) Inst. f. Strömungsmechanik und Wärmeübertragung (Institute of Fluid Mechanics and Heat Transfer) Resselgasse 3 1040 Wien Tel: +4315880132232 Fax: +4315880132299 Cell:+436765203470 fmuldoo (skype) http://tetra.fluid.tuwien.ac.at/fmuldoo/public_html/webpage/frank-muldoon.html _______________________________________________ Nek5000-users mailing list Nek5000-users at lists.mcs.anl.gov https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users From nek5000-users at lists.mcs.anl.gov Thu Aug 19 21:12:15 2010 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Thu, 19 Aug 2010 21:12:15 -0500 Subject: [Nek5000-users] Post-Processing / VisiT Message-ID: Hello All, I am doing an LES of a turbine blade and am specifically interested in the parameters (drag, pressure, etc.) on the blade. I have one time-averaged blah0.f0000* field file from Nek. I am trying to get a pressure plot on the blade, but am not sure how to isolate the blade in VisIt. If it were inside nek, I could easily isolate the blade by using the cbc array. I tried to look into the subset feature in VisIt, but that wasn't very relevant to what I needed. 
I am wondering if there is a flag that I could associate with the blade in nek and then if VisiT could read it, I can then look at the pressure plot just at the blade. Any suggestions would be helpful. Regards Shriram -------------- next part -------------- An HTML attachment was scrubbed... URL: From nek5000-users at lists.mcs.anl.gov Fri Aug 20 06:02:37 2010 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Fri, 20 Aug 2010 06:02:37 -0500 (CDT) Subject: [Nek5000-users] Post-Processing / VisiT In-Reply-To: References: Message-ID: What I normally do is to pick an isosurface of velocity magnitude ~ 0, then plot pressure on that. Presumably this could be coupled with a clip if you don't want the end-wall (but I'm not certain how to do the clip). Paul On Thu, 19 Aug 2010, nek5000-users at lists.mcs.anl.gov wrote: > Hello All, > > I am doing a LES in a turbine blade and specifically interested in > the parameters (drag, pressure etc) on the blade . I have one time averaged > blah0.f0000* field file from Nek . I am trying to get a pressure plot on > the blade, but not sure how to isolate the blade on VisiT. If it was inside > nek, I can easily isolate the blade by using the cbc array. I tried to look > into the subset feature in VisIT, but that wasn't very relevant to what I > needed. > > I am wondering if there is a flag that I could associate with the blade in > nek and then if VisiT could read it, I can then look at the pressure plot > just at the blade. > > Any suggestions would be helpful. > > > Regards > Shriram > From nek5000-users at lists.mcs.anl.gov Fri Aug 20 09:55:47 2010 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Fri, 20 Aug 2010 09:55:47 -0500 Subject: [Nek5000-users] Post-Processing / VisiT In-Reply-To: References: Message-ID: Hi Paul, Thanks. That helps. I got that to work. There is a Box operator that could be used to clip to the domain of interest. 
Any chance that it could be exported to a .txt file either in VisIt or postnek ? Thanks. Regards Shriram -------------- next part -------------- An HTML attachment was scrubbed... URL: From nek5000-users at lists.mcs.anl.gov Fri Aug 20 10:13:27 2010 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Fri, 20 Aug 2010 08:13:27 -0700 Subject: [Nek5000-users] Post-Processing / VisiT In-Reply-To: References: Message-ID: Hi Shriram, What are you envisioning being in the text file? The Clip location? Best, Hank On Fri, Aug 20, 2010 at 7:55 AM, wrote: > Hi Paul, > > Thanks. That helps. I got that to work. There is a Box operator that could > be used to clip to the domain of interest. > Any chance that it could be exported to a .txt file either in VisIt or > postnek ? Thanks. > > Regards > Shriram > > > _______________________________________________ > Nek5000-users mailing list > Nek5000-users at lists.mcs.anl.gov > https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nek5000-users at lists.mcs.anl.gov Fri Aug 20 10:26:25 2010 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Fri, 20 Aug 2010 10:26:25 -0500 Subject: [Nek5000-users] Post-Processing / VisiT In-Reply-To: References: Message-ID: Hi Hank, Thanks for the reply. I would like the pressure values and coordinates of the blade to be exported to a txt file. The way Paul suggested, I can now isolate the blade in VisIt itself. I was able to visualize the pressure on the blade by the following : slice plot of pressure -> sliced iso-surface of velocity mag ~0 But this gives me a color plot of pressure on the blade. For some comparisons, it would be nice to see the data. I thought if I had it as a txt file, I could import it to Matlab, do a spanwise average and also compare against some available experimental results. 
I would be happy to hear any suggestions / alternative approaches that you might have. Thanks. Regards Shriram -------------- next part -------------- An HTML attachment was scrubbed... URL: From nek5000-users at lists.mcs.anl.gov Fri Aug 20 21:04:48 2010 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Fri, 20 Aug 2010 21:04:48 -0500 Subject: [Nek5000-users] Nek on Ranger (TACC) Message-ID: Hi, When I try to run nek on Ranger, I always get the following error. I tried submitting 4way / 8 way / 16 way, but the error remains the same. This has been quite independent of the repo in the recent past (~ 1 month). ERROR (proc 4): /share/home/01420/user/nek5_svn/trunk/nek/jl/../jl2/sort_imp.h(475): allocation of 1337120 bytes failed ERROR (proc 20): /share/home/01420/user/nek5_svn/trunk/nek/jl/../jl2/sort_imp.h(475): allocation of 1337120 bytes failed .... ...... I contacted the HPC support at TACC and they suggested increasing memory per core, but that didn't help. Has anybody had this before ? Regards Shriram -------------- next part -------------- An HTML attachment was scrubbed... URL: From nek5000-users at lists.mcs.anl.gov Fri Aug 20 21:08:27 2010 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Fri, 20 Aug 2010 21:08:27 -0500 (CDT) Subject: [Nek5000-users] Nek on Ranger (TACC) In-Reply-To: References: Message-ID: Shriram, Do you know which compiler is being used ? We're encountering some similar problems on one of our platforms here. Paul On Fri, 20 Aug 2010, nek5000-users at lists.mcs.anl.gov wrote: > Hi, > > When I try to run nek on Ranger, I always get the following error. I tried > submitting 4way / 8 way / 16 way, but the error remains the same. This has > been quite independent of the repo in the recent past (~ 1 month). 
> > ERROR (proc 4): > /share/home/01420/user/nek5_svn/trunk/nek/jl/../jl2/sort_imp.h(475): > allocation of 1337120 bytes failed > > ERROR (proc 20): > /share/home/01420/user/nek5_svn/trunk/nek/jl/../jl2/sort_imp.h(475): > allocation of 1337120 bytes failed > > .... > ...... > > I contacted the HPC support at TACC and they suggested increasing memory per > core, but that didn't help. Has anybody had this before ? > > Regards > Shriram > From nek5000-users at lists.mcs.anl.gov Fri Aug 20 21:13:02 2010 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Fri, 20 Aug 2010 21:13:02 -0500 Subject: [Nek5000-users] Nek on Ranger (TACC) In-Reply-To: References: Message-ID: Hi Paul, I use PGI (version 7.1) and mvapich2 for MPI. I have used nek before (way back in April / May) on Ranger, but for the past month or so I haven't been able to run. Regards Shriram -------------- next part -------------- An HTML attachment was scrubbed... URL: From nek5000-users at lists.mcs.anl.gov Sat Aug 21 01:03:10 2010 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Sat, 21 Aug 2010 08:03:10 +0200 Subject: [Nek5000-users] Nek on Ranger (TACC) In-Reply-To: References: Message-ID: Hi Shriram, please run 'size nek5000' and post the output. Stefan On Sat, Aug 21, 2010 at 4:04 AM, wrote: > Hi, > > When I try to run nek on Ranger, I always get the following error. I tried > submitting 4way / 8 way / 16 way, but the error remains the same. This has > been quite independent of the repo in the recent past (~ 1 month). > > ERROR (proc 4): > /share/home/01420/user/nek5_svn/trunk/nek/jl/../jl2/sort_imp.h(475): > allocation of 1337120 bytes failed > > ERROR (proc 20): > /share/home/01420/user/nek5_svn/trunk/nek/jl/../jl2/sort_imp.h(475): > allocation of 1337120 bytes failed > > .... > ...... > > I contacted the HPC support at TACC and they suggested increasing memory > per core, but that didn't help. 
Has anybody had this before ? > > Regards > Shriram > > _______________________________________________ > Nek5000-users mailing list > Nek5000-users at lists.mcs.anl.gov > https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nek5000-users at lists.mcs.anl.gov Sat Aug 21 09:13:05 2010 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Sat, 21 Aug 2010 09:13:05 -0500 Subject: [Nek5000-users] Nek on Ranger (TACC) In-Reply-To: References: Message-ID: Hi Stefan, Please see below for the output : text data bss dec hex filename 2038927 153672 607192120 609384719 2452790f nek5000 Regards Shriram -------------- next part -------------- An HTML attachment was scrubbed... URL: From nek5000-users at lists.mcs.anl.gov Sat Aug 21 10:26:34 2010 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Sat, 21 Aug 2010 10:26:34 -0500 (CDT) Subject: [Nek5000-users] Nek on Ranger (TACC) In-Reply-To: References: Message-ID: Hi Shriram, I wonder if there is merit to trying the -mcmodel=medium flag... ? Paul On Sat, 21 Aug 2010, nek5000-users at lists.mcs.anl.gov wrote: > Hi Stefan, > > Please see below for the output : > > text data bss dec hex filename > 2038927 153672 607192120 609384719 2452790f nek5000 > > > Regards > Shriram > From nek5000-users at lists.mcs.anl.gov Sat Aug 21 11:13:49 2010 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Sat, 21 Aug 2010 11:13:49 -0500 Subject: [Nek5000-users] Nek on Ranger (TACC) In-Reply-To: References: Message-ID: Paul, I get the following error with the bigmem flag : pgf90-Error-Switches -fPIC and -mcmodel=medium are not supported together pgf90-Error-Switches -fPIC and -mcmodel=medium are not supported together make: *** [obj/drive2.o] Error 1 make: *** Waiting for unfinished jobs.... 
make: *** [obj/drive.o] Error 1 pgf90-Error-Switches -fPIC and -mcmodel=medium are not supported together make: *** [obj/drive1.o] Error 1 pgf90-Error-Switches -fPIC and -mcmodel=medium are not supported together Shall I try with some other compiler ? Regards Shriram -------------- next part -------------- An HTML attachment was scrubbed... URL: From nek5000-users at lists.mcs.anl.gov Sun Aug 22 02:41:11 2010 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Sun, 22 Aug 2010 09:41:11 +0200 Subject: [Nek5000-users] Nek on Ranger (TACC) In-Reply-To: References: Message-ID: The numbers are somehow not separated? Give it a try with PGI and OpenMPI. Stefan On Sat, Aug 21, 2010 at 4:13 PM, wrote: > Hi Stefan, > > Please see below for the output : > > text data bss dec hex filename > 2038927 153672 607192120 609384719 2452790f nek5000 > > > Regards > Shriram > > _______________________________________________ > Nek5000-users mailing list > Nek5000-users at lists.mcs.anl.gov > https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nek5000-users at lists.mcs.anl.gov Sun Aug 22 11:05:05 2010 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Sun, 22 Aug 2010 11:05:05 -0500 Subject: [Nek5000-users] Nek on Ranger (TACC) In-Reply-To: References: Message-ID: Hi Stefan, That works ! I tried with pgi7.1 and openmpi 1.3. Thanks ! Regards Shriram -------------- next part -------------- An HTML attachment was scrubbed... URL: From nek5000-users at lists.mcs.anl.gov Sun Aug 22 11:06:11 2010 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Sun, 22 Aug 2010 11:06:11 -0500 Subject: [Nek5000-users] Nek on Ranger (TACC) In-Reply-To: References: Message-ID: Hi again, I pasted the output of 'size nek5000' directly, So I guess its not separated. 
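For reference, the pasted `size` output does split cleanly on whitespace, and the interesting column is bss: the uninitialized static data, which for Nek5000 is essentially the arrays dimensioned in the SIZE file. A quick sanity check on the numbers from this thread (the 16-core / 32 GB figures for a Ranger node are an assumption about that machine's configuration):

```python
# Columns of `size nek5000` as pasted above:
# text, data, bss, dec (= text+data+bss), hex, filename.
line = "2038927 153672 607192120 609384719 2452790f nek5000"
text, data, bss = (int(v) for v in line.split()[:3])

assert text + data + bss == 609384719  # matches the dec column
per_rank_mib = (text + data + bss) / 2.0**20
print("static image per MPI rank: %.0f MiB" % per_rank_mib)  # -> 581 MiB
# At 16 ranks per node (assuming Ranger's 16-core, 32 GB nodes), that is
# roughly 9 GiB of static data before any runtime allocation such as the
# one failing in sort_imp.h -- consistent with the out-of-memory symptom.
```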
Regards Shriram Jagannathan On 22 August 2010 11:05, Shriram Jagannathan wrote: > Hi Stefan, > > That works ! > I tried with pgi7.1 and openmpi 1.3. Thanks ! > > > Regards > Shriram > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nek5000-users at lists.mcs.anl.gov Sun Aug 22 16:10:00 2010 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Sun, 22 Aug 2010 16:10:00 -0500 (CDT) Subject: [Nek5000-users] IFCHAR & Conj. HT? Message-ID: <1654984554.43511282511400903.JavaMail.root@neo-mail-3.tamu.edu> Hi Developers, I have just realized that IFCHAR is not compatible with conjugate heat transfer cases? Is there a reason/solution for this? Thanks, Michael -------------- next part -------------- An HTML attachment was scrubbed... URL: From nek5000-users at lists.mcs.anl.gov Mon Aug 23 13:17:11 2010 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Mon, 23 Aug 2010 13:17:11 -0500 Subject: [Nek5000-users] postpro.f compiler problems Message-ID: <4C72BB27.4050608@oddjob.uchicago.edu> Hi, I tried to update to the newest version of the code, but I am having trouble getting it to compile. It is having a problem with compiling postpro.f. It looks like the compiler fails at line 1364 when parameter is used to set a variable equal to a non-constant quantity instead of a constant one. It looks like this was added in version 537 by Stefan. I've included the end of the output from the compiler, which is the Portland group compiler on Franklin. At Aleks' suggestion I tried deleting the subroutine g2gi, but that did not fix the problem. I was finally able to get it to compile by using postpro.f from version 536 and deleting the subroutine hpts. That subroutine was causing a similar problem. 
Elizabeth PGF90-S-0050-Assumed size array, rst, is not a dummy argument (/global/homes/e/ehicks/nek5_svn/trunk/nek/postpro.f: 448) PGF90-S-0050-Assumed size array, dist, is not a dummy argument (/global/homes/e/ehicks/nek5_svn/trunk/nek/postpro.f: 448) PGF90-S-0050-Assumed size array, rcode, is not a dummy argument (/global/homes/e/ehicks/nek5_svn/trunk/nek/postpro.f: 449) PGF90-S-0050-Assumed size array, elid, is not a dummy argument (/global/homes/e/ehicks/nek5_svn/trunk/nek/postpro.f: 449) PGF90-S-0050-Assumed size array, proc, is not a dummy argument (/global/homes/e/ehicks/nek5_svn/trunk/nek/postpro.f: 449) 0 inform, 0 warnings, 5 severes, 0 fatal for intpts /opt/cray/xt-asyncpe/3.7/bin/ftn: INFO: linux target is being used ftn -c -O2 -r8 -Mpreprocess -DMPI -DLONGINT8 -DUNDERSCORE -DGLOBAL_LONG_LONG -I/scratch/scratchdirs/ehicks/nek/runs/run325 -I/global/homes/e/ehicks/nek5_svn/trunk/nek -I./ /global/homes/e/ehicks/nek5_svn/trunk/nek/qthermal.f -o obj/qthermal.o PGF90-S-0087-Non-constant expression where constant expression required (/global/homes/e/ehicks/nek5_svn/trunk/nek/postpro.f : 1346) 0 inform, 0 warnings, 1 severes, 0 fatal for g2gi make: *** [obj/postpro.o] Error 2 From nek5000-users at lists.mcs.anl.gov Mon Aug 23 14:48:44 2010 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Mon, 23 Aug 2010 14:48:44 -0500 Subject: [Nek5000-users] Temperature gradient at a point Message-ID: Hi, I wanted to know if there was a way to find the temperature gradient at a point. I need that information in the userbc function. I tried using gradm1(), but I am not sure how to get the value at a given point. Thanks, Pradeep -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nek5000-users at lists.mcs.anl.gov Mon Aug 23 14:50:39 2010 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Mon, 23 Aug 2010 14:50:39 -0500 (CDT) Subject: [Nek5000-users] Temperature gradient at a point In-Reply-To: References: Message-ID: Pradeep, if you give me some idea of the nature of your bc, I can perhaps help --- there are a large number of bc types already supported inside nek Paul On Mon, 23 Aug 2010, nek5000-users at lists.mcs.anl.gov wrote: > Hi, > > I wanted to know if there was a way to find the temperature gradient at a > point. I need that information in the userbc function. > > I tried using gradm1(), but I am not sure how to get the value at a given > point. > > Thanks, > Pradeep > From nek5000-users at lists.mcs.anl.gov Mon Aug 23 14:59:41 2010 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Mon, 23 Aug 2010 21:59:41 +0200 Subject: [Nek5000-users] postpro.f compiler problems In-Reply-To: <4C72BB27.4050608@oddjob.uchicago.edu> References: <4C72BB27.4050608@oddjob.uchicago.edu> Message-ID: Hi, please post your SIZE file. Stefan On Mon, Aug 23, 2010 at 8:17 PM, wrote: > Hi, > I tried to update to the newest version of the code, but I am having > trouble getting it to compile. It is having a problem with compiling > postpro.f. It looks like the compiler fails at line 1364 when parameter > is used to set a variable equal to a non-constant quantity instead of a > constant one. It looks like this was added in version 537 by Stefan. I've > included the end of the output from the compiler, which is the Portland > group compiler on Franklin. At Aleks' suggestion I tried deleting the > subroutine g2gi, but that did not fix the problem. I was finally able to get > it to compile by using postpro.f from version 536 and deleting the > subroutine hpts. That subroutine was causing a similar problem. 
Elizabeth > > > PGF90-S-0050-Assumed size array, rst, is not a dummy argument > (/global/homes/e/ehicks/nek5_svn/trunk/nek/postpro.f: 448) > PGF90-S-0050-Assumed size array, dist, is not a dummy argument > (/global/homes/e/ehicks/nek5_svn/trunk/nek/postpro.f: 448) > PGF90-S-0050-Assumed size array, rcode, is not a dummy argument > (/global/homes/e/ehicks/nek5_svn/trunk/nek/postpro.f: 449) > PGF90-S-0050-Assumed size array, elid, is not a dummy argument > (/global/homes/e/ehicks/nek5_svn/trunk/nek/postpro.f: 449) > PGF90-S-0050-Assumed size array, proc, is not a dummy argument > (/global/homes/e/ehicks/nek5_svn/trunk/nek/postpro.f: 449) > 0 inform, 0 warnings, 5 severes, 0 fatal for intpts > /opt/cray/xt-asyncpe/3.7/bin/ftn: INFO: linux target is being used > ftn -c -O2 -r8 -Mpreprocess -DMPI -DLONGINT8 -DUNDERSCORE > -DGLOBAL_LONG_LONG -I/scratch/scratchdirs/ehicks/nek/runs/run325 > -I/global/homes/e/ehicks/nek5_svn/trunk/nek -I./ > /global/homes/e/ehicks/nek5_svn/trunk/nek/qthermal.f -o obj/qthermal.o > PGF90-S-0087-Non-constant expression where constant expression required > (/global/homes/e/ehicks/nek5_svn/trunk/nek/postpro.f > : 1346) > 0 inform, 0 warnings, 1 severes, 0 fatal for g2gi > make: *** [obj/postpro.o] Error 2 > _______________________________________________ > Nek5000-users mailing list > Nek5000-users at lists.mcs.anl.gov > https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nek5000-users at lists.mcs.anl.gov Mon Aug 23 15:03:11 2010 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Mon, 23 Aug 2010 15:03:11 -0500 Subject: [Nek5000-users] Temperature gradient at a point In-Reply-To: References: Message-ID: Hi Paul, I am basically trying to solve a conjugate heat transfer problem in an iterative manner, for flow over an infinitely long cylinder (2D). 
I need to use the heat transfer at the boundary, to calculate the new temperature at the boundary for the next time step. The temperature for the next time step is solved for using this heat flux, by a function in the usr file using an FEM algorithm for the solid part (cylinder). The bc type I am using is Temperature - fortran function. Regards, Pradeep On Mon, Aug 23, 2010 at 2:50 PM, wrote: > > Pradeep, > > if you give me some idea of the nature of your bc, I can > perhaps help --- there are a large number of bc types already > supported inside nek > > Paul > > > > On Mon, 23 Aug 2010, nek5000-users at lists.mcs.anl.gov wrote: > > Hi, >> >> I wanted to know if there was a way to find the temperature gradient at a >> point. I need that information in the userbc function. >> >> I tried using gradm1(), but I am not sure how to get the value at a given >> point. >> >> Thanks, >> Pradeep >> >> _______________________________________________ > Nek5000-users mailing list > Nek5000-users at lists.mcs.anl.gov > https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users > -- Pradeep C. Rao Graduate Research Assistant for FT2L (http://www1.mengr.tamu.edu/FT2L/) Department of Mechanical Engineering Texas A&M University College Station, TX 77843-3123 428 Engineering Physics Building (713) 210-9769 -------------- next part -------------- An HTML attachment was scrubbed... URL: From nek5000-users at lists.mcs.anl.gov Mon Aug 23 15:21:50 2010 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Mon, 23 Aug 2010 15:21:50 -0500 Subject: [Nek5000-users] postpro.f compiler problems In-Reply-To: References: <4C72BB27.4050608@oddjob.uchicago.edu> Message-ID: <4C72D85E.3000708@oddjob.uchicago.edu> Here is the SIZE file. Elizabeth nek5000-users at lists.mcs.anl.gov wrote: > Hi, > > please post your SIZE file. 
> > Stefan > > > > On Mon, Aug 23, 2010 at 8:17 PM, > wrote: > > Hi, > I tried to update to the newest version of the code, but I am having > trouble getting it to compile. It is having a problem with compiling > postpro.f. It looks like the compiler fails at line 1364 when > parameter > is used to set a variable equal to a non-constant quantity instead > of a > constant one. It looks like this was added in version 537 by > Stefan. I've included the end of the output from the compiler, > which is the Portland group compiler on Franklin. At Aleks' > suggestion I tried deleting the subroutine g2gi, but that did not > fix the problem. I was finally able to get it to compile by using > postpro.f from version 536 and deleting the subroutine hpts. > That subroutine was causing a similar problem. Elizabeth > > > PGF90-S-0050-Assumed size array, rst, is not a dummy argument > (/global/homes/e/ehicks/nek5_svn/trunk/nek/postpro.f: 448) > PGF90-S-0050-Assumed size array, dist, is not a dummy argument > (/global/homes/e/ehicks/nek5_svn/trunk/nek/postpro.f: 448) > PGF90-S-0050-Assumed size array, rcode, is not a dummy argument > (/global/homes/e/ehicks/nek5_svn/trunk/nek/postpro.f: 449) > PGF90-S-0050-Assumed size array, elid, is not a dummy argument > (/global/homes/e/ehicks/nek5_svn/trunk/nek/postpro.f: 449) > PGF90-S-0050-Assumed size array, proc, is not a dummy argument > (/global/homes/e/ehicks/nek5_svn/trunk/nek/postpro.f: 449) > 0 inform, 0 warnings, 5 severes, 0 fatal for intpts > /opt/cray/xt-asyncpe/3.7/bin/ftn: INFO: linux target is being used > ftn -c -O2 -r8 -Mpreprocess -DMPI -DLONGINT8 -DUNDERSCORE > -DGLOBAL_LONG_LONG -I/scratch/scratchdirs/ehicks/nek/runs/run325 > -I/global/homes/e/ehicks/nek5_svn/trunk/nek -I./ > /global/homes/e/ehicks/nek5_svn/trunk/nek/qthermal.f -o obj/qthermal.o > PGF90-S-0087-Non-constant expression where constant expression > required (/global/homes/e/ehicks/nek5_svn/trunk/nek/postpro.f > : 1346) > 0 inform, 0 warnings, 1 severes, 0 
fatal for g2gi > make: *** [obj/postpro.o] Error 2 > _______________________________________________ > Nek5000-users mailing list > Nek5000-users at lists.mcs.anl.gov > > https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users > > > ------------------------------------------------------------------------ > > _______________________________________________ > Nek5000-users mailing list > Nek5000-users at lists.mcs.anl.gov > https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users > -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: SIZE URL: From nek5000-users at lists.mcs.anl.gov Mon Aug 23 15:47:47 2010 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Mon, 23 Aug 2010 22:47:47 +0200 Subject: [Nek5000-users] postpro.f compiler problems In-Reply-To: <4C72D85E.3000708@oddjob.uchicago.edu> References: <4C72BB27.4050608@oddjob.uchicago.edu> <4C72D85E.3000708@oddjob.uchicago.edu> Message-ID: Your compiler problem seems to be related to the fact that the lpart parameter is commented. Did you comment this parameter? c automatically added by makenek c parameter(lpart = 10000 ) ! max number of particles Stefan On Mon, Aug 23, 2010 at 10:21 PM, wrote: > Here is the SIZE file. > > Elizabeth > > > nek5000-users at lists.mcs.anl.gov wrote: > >> Hi, >> >> please post your SIZE file. >> >> Stefan >> >> >> >> On Mon, Aug 23, 2010 at 8:17 PM, > nek5000-users at lists.mcs.anl.gov>> wrote: >> >> Hi, >> I tried to update to the newest version of the code, but I am having >> trouble getting it to compile. It is having a problem with compiling >> postpro.f. It looks like the compiler fails at line 1364 when >> parameter >> is used to set a variable equal to a non-constant quantity instead >> of a >> constant one. It looks like this was added in version 537 by >> Stefan. I've included the end of the output from the compiler, >> which is the Portland group compiler on Franklin. 
At Aleks' >> suggestion I tried deleting the subroutine g2gi, but that did not >> fix the problem. I was finally able to get it to compile by using >> postpro.f from version 536 and deleting the subroutine hpts. That >> subroutine was causing a similar problem. Elizabeth >> >> >> PGF90-S-0050-Assumed size array, rst, is not a dummy argument >> (/global/homes/e/ehicks/nek5_svn/trunk/nek/postpro.f: 448) >> PGF90-S-0050-Assumed size array, dist, is not a dummy argument >> (/global/homes/e/ehicks/nek5_svn/trunk/nek/postpro.f: 448) >> PGF90-S-0050-Assumed size array, rcode, is not a dummy argument >> (/global/homes/e/ehicks/nek5_svn/trunk/nek/postpro.f: 449) >> PGF90-S-0050-Assumed size array, elid, is not a dummy argument >> (/global/homes/e/ehicks/nek5_svn/trunk/nek/postpro.f: 449) >> PGF90-S-0050-Assumed size array, proc, is not a dummy argument >> (/global/homes/e/ehicks/nek5_svn/trunk/nek/postpro.f: 449) >> 0 inform, 0 warnings, 5 severes, 0 fatal for intpts >> /opt/cray/xt-asyncpe/3.7/bin/ftn: INFO: linux target is being used >> ftn -c -O2 -r8 -Mpreprocess -DMPI -DLONGINT8 -DUNDERSCORE >> -DGLOBAL_LONG_LONG -I/scratch/scratchdirs/ehicks/nek/runs/run325 >> -I/global/homes/e/ehicks/nek5_svn/trunk/nek -I./ >> /global/homes/e/ehicks/nek5_svn/trunk/nek/qthermal.f -o obj/qthermal.o >> PGF90-S-0087-Non-constant expression where constant expression >> required (/global/homes/e/ehicks/nek5_svn/trunk/nek/postpro.f >> : 1346) >> 0 inform, 0 warnings, 1 severes, 0 fatal for g2gi >> make: *** [obj/postpro.o] Error 2 >> _______________________________________________ >> Nek5000-users mailing list >> Nek5000-users at lists.mcs.anl.gov >> >> >> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users >> >> >> ------------------------------------------------------------------------ >> >> >> _______________________________________________ >> Nek5000-users mailing list >> Nek5000-users at lists.mcs.anl.gov >> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users >> >> > > C 
Dimension file to be included > C > C HCUBE array dimensions > C > parameter (ldim=2) > parameter (lx1=16,ly1=lx1,lz1=1,lelt=36,lelv=lelt) > parameter (lxd=24,lyd=lxd,lzd=1) > parameter (lelx=1,lely=1,lelz=1) > c > parameter (lzl=3 + 2*(ldim-3)) > c > parameter (lx2=lx1-2) > parameter (ly2=ly1-2) > parameter (lz2=1) > parameter (lx3=lx2) > parameter (ly3=ly2) > parameter (lz3=lz2) > c > c parameter (lpelv=lelv,lpelt=lelt,lpert=3) ! perturbation > c parameter (lpx1=lx1,lpy1=ly1,lpz1=lz1) ! array sizes > c parameter (lpx2=lx2,lpy2=ly2,lpz2=lz2) > c > parameter (lpelv=1,lpelt=1,lpert=1) ! perturbation > parameter (lpx1=1,lpy1=1,lpz1=1) ! array sizes > parameter (lpx2=1,lpy2=1,lpz2=1) > c > c > c parameter (lbelv=lelv,lbelt=lelt) ! MHD > c parameter (lbx1=lx1,lby1=ly1,lbz1=lz1) ! array sizes > c parameter (lbx2=lx2,lby2=ly2,lbz2=lz2) > c > parameter (lbelv=1,lbelt=1) ! MHD > parameter (lbx1=1,lby1=1,lbz1=1) ! array sizes > parameter (lbx2=1,lby2=1,lbz2=1) > c > C LX1M=LX1 when there are moving meshes; =1 otherwise > parameter (lx1m=1,ly1m=1,lz1m=1) > parameter (ldimt= 1) ! 3 passive scalars + T > parameter (ldimt1=ldimt+1) > parameter (ldimt3=ldimt+3) > parameter (lp = 1024) > parameter (lelg = 1152) > c > c Note: In the new code, LELGEC should be about sqrt(LELG) > c > PARAMETER (LELGEC = 1) > PARAMETER (LXYZ2 = 1) > PARAMETER (LXZ21 = 1) > c > PARAMETER (LMAXV=LX1*LY1*LZ1*LELV) > PARAMETER (LMAXT=LX1*LY1*LZ1*LELT) > PARAMETER (LMAXP=LX2*LY2*LZ2*LELV) > PARAMETER (LXZ=LX1*LZ1) > PARAMETER (LORDER=3) > PARAMETER (MAXOBJ=1,MAXMBR=LELT*6,lhis=10) > C > C Common Block Dimensions > C > PARAMETER (LCTMP0 =2*LX1*LY1*LZ1*LELT) > PARAMETER (LCTMP1 =4*LX1*LY1*LZ1*LELT) > C > C The parameter LVEC controls whether an additional 42 field arrays > C are required for Steady State Solutions. If you are not using > C Steady State, it is recommended that LVEC=1. 
> C > PARAMETER (LVEC=1) > C > C Uzawa projection array dimensions > C > parameter (mxprev = 20) > parameter (lgmres = 30) > C > C Split projection array dimensions > C > parameter(lmvec = 1) > parameter(lsvec = 1) > parameter(lstore=lmvec*lsvec) > c > c NONCONFORMING STUFF > c > parameter (maxmor = lelt) > C > C Array dimensions > C > COMMON/DIMN/NELV,NELT,NX1,NY1,NZ1,NX2,NY2,NZ2 > $,NX3,NY3,NZ3,NDIM,NFIELD,NPERT,NID > $,NXD,NYD,NZD > > > > c automatically added by makenek > parameter(lxo = lx1) ! max output grid size (lxo>=lx1) > > c automatically added by makenek > c parameter(lpart = 10000 ) ! max number of particles > > c automatically added by makenek > integer ax1,ay1,az1,ax2,ay2,az2 > parameter (ax1=lx1,ay1=ly1,az1=lz1,ax2=lx2,ay2=ly2,az2=lz2) ! running > averages > > _______________________________________________ > Nek5000-users mailing list > Nek5000-users at lists.mcs.anl.gov > https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nek5000-users at lists.mcs.anl.gov Mon Aug 23 15:49:29 2010 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Mon, 23 Aug 2010 15:49:29 -0500 Subject: [Nek5000-users] postpro.f compiler problems In-Reply-To: References: <4C72BB27.4050608@oddjob.uchicago.edu> <4C72D85E.3000708@oddjob.uchicago.edu> Message-ID: <4C72DED9.1080905@oddjob.uchicago.edu> No, I didn't. I'll uncomment it and try again. Thank you! Elizabeth nek5000-users at lists.mcs.anl.gov wrote: > Your compiler problem seems to be related to the fact that the lpart > parameter is commented. Did you comment this parameter? > > c automatically added by makenek > c parameter(lpart = 10000 ) ! max number of particles > > Stefan > > > > On Mon, Aug 23, 2010 at 10:21 PM, > wrote: > > Here is the SIZE file. > > Elizabeth > > > nek5000-users at lists.mcs.anl.gov > wrote: > > Hi, > > please post your SIZE file. 
> > Stefan > > > > On Mon, Aug 23, 2010 at 8:17 PM, > > >> wrote: > > Hi, > I tried to update to the newest version of the code, but I > am having > trouble getting it to compile. It is having a problem with > compiling > postpro.f. It looks like the compiler fails at line 1364 when > parameter > is used to set a variable equal to a non-constant quantity > instead > of a > constant one. It looks like this was added in version 537 by > Stefan. I've included the end of the output from the compiler, > which is the Portland group compiler on Franklin. At Aleks' > suggestion I tried deleting the subroutine g2gi, but that > did not > fix the problem. I was finally able to get it to compile by > using > postpro.f from version 536 and deleting the subroutine > hpts. That subroutine was causing a similar problem. > Elizabeth > > > PGF90-S-0050-Assumed size array, rst, is not a dummy argument > (/global/homes/e/ehicks/nek5_svn/trunk/nek/postpro.f: 448) > PGF90-S-0050-Assumed size array, dist, is not a dummy argument > (/global/homes/e/ehicks/nek5_svn/trunk/nek/postpro.f: 448) > PGF90-S-0050-Assumed size array, rcode, is not a dummy argument > (/global/homes/e/ehicks/nek5_svn/trunk/nek/postpro.f: 449) > PGF90-S-0050-Assumed size array, elid, is not a dummy argument > (/global/homes/e/ehicks/nek5_svn/trunk/nek/postpro.f: 449) > PGF90-S-0050-Assumed size array, proc, is not a dummy argument > (/global/homes/e/ehicks/nek5_svn/trunk/nek/postpro.f: 449) > 0 inform, 0 warnings, 5 severes, 0 fatal for intpts > /opt/cray/xt-asyncpe/3.7/bin/ftn: INFO: linux target is > being used > ftn -c -O2 -r8 -Mpreprocess -DMPI -DLONGINT8 -DUNDERSCORE > -DGLOBAL_LONG_LONG > -I/scratch/scratchdirs/ehicks/nek/runs/run325 > -I/global/homes/e/ehicks/nek5_svn/trunk/nek -I./ > /global/homes/e/ehicks/nek5_svn/trunk/nek/qthermal.f -o > obj/qthermal.o > PGF90-S-0087-Non-constant expression where constant expression > required (/global/homes/e/ehicks/nek5_svn/trunk/nek/postpro.f > : 1346) > 0 inform, 0 
warnings, 1 severes, 0 fatal for g2gi > make: *** [obj/postpro.o] Error 2 > _______________________________________________ > Nek5000-users mailing list > Nek5000-users at lists.mcs.anl.gov > > > > > https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users > > > ------------------------------------------------------------------------ > > > > _______________________________________________ > Nek5000-users mailing list > Nek5000-users at lists.mcs.anl.gov > > https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users > > > > C Dimension file to be included > C > C HCUBE array dimensions > C > parameter (ldim=2) > parameter (lx1=16,ly1=lx1,lz1=1,lelt=36,lelv=lelt) > parameter (lxd=24,lyd=lxd,lzd=1) > parameter (lelx=1,lely=1,lelz=1) > c > parameter (lzl=3 + 2*(ldim-3)) > c > parameter (lx2=lx1-2) > parameter (ly2=ly1-2) > parameter (lz2=1) > parameter (lx3=lx2) > parameter (ly3=ly2) > parameter (lz3=lz2) > c > c parameter (lpelv=lelv,lpelt=lelt,lpert=3) ! perturbation > c parameter (lpx1=lx1,lpy1=ly1,lpz1=lz1) ! array sizes > c parameter (lpx2=lx2,lpy2=ly2,lpz2=lz2) > c > parameter (lpelv=1,lpelt=1,lpert=1) ! perturbation > parameter (lpx1=1,lpy1=1,lpz1=1) ! array sizes > parameter (lpx2=1,lpy2=1,lpz2=1) > c > c > c parameter (lbelv=lelv,lbelt=lelt) ! MHD > c parameter (lbx1=lx1,lby1=ly1,lbz1=lz1) ! array sizes > c parameter (lbx2=lx2,lby2=ly2,lbz2=lz2) > c > parameter (lbelv=1,lbelt=1) ! MHD > parameter (lbx1=1,lby1=1,lbz1=1) ! array sizes > parameter (lbx2=1,lby2=1,lbz2=1) > c > C LX1M=LX1 when there are moving meshes; =1 otherwise > parameter (lx1m=1,ly1m=1,lz1m=1) > parameter (ldimt= 1) ! 
3 passive > scalars + T > parameter (ldimt1=ldimt+1) > parameter (ldimt3=ldimt+3) > parameter (lp = 1024) > parameter (lelg = 1152) > c > c Note: In the new code, LELGEC should be about sqrt(LELG) > c > PARAMETER (LELGEC = 1) > PARAMETER (LXYZ2 = 1) > PARAMETER (LXZ21 = 1) > c > PARAMETER (LMAXV=LX1*LY1*LZ1*LELV) > PARAMETER (LMAXT=LX1*LY1*LZ1*LELT) > PARAMETER (LMAXP=LX2*LY2*LZ2*LELV) > PARAMETER (LXZ=LX1*LZ1) > PARAMETER (LORDER=3) > PARAMETER (MAXOBJ=1,MAXMBR=LELT*6,lhis=10) > C > C Common Block Dimensions > C > PARAMETER (LCTMP0 =2*LX1*LY1*LZ1*LELT) > PARAMETER (LCTMP1 =4*LX1*LY1*LZ1*LELT) > C > C The parameter LVEC controls whether an additional 42 field > arrays > C are required for Steady State Solutions. If you are not using > C Steady State, it is recommended that LVEC=1. > C > PARAMETER (LVEC=1) > C > C Uzawa projection array dimensions > C > parameter (mxprev = 20) > parameter (lgmres = 30) > C > C Split projection array dimensions > C > parameter(lmvec = 1) > parameter(lsvec = 1) > parameter(lstore=lmvec*lsvec) > c > c NONCONFORMING STUFF > c > parameter (maxmor = lelt) > C > C Array dimensions > C > COMMON/DIMN/NELV,NELT,NX1,NY1,NZ1,NX2,NY2,NZ2 > $,NX3,NY3,NZ3,NDIM,NFIELD,NPERT,NID > $,NXD,NYD,NZD > > > > c automatically added by makenek > parameter(lxo = lx1) ! max output grid size (lxo>=lx1) > > c automatically added by makenek > c parameter(lpart = 10000 ) ! max number of particles > > c automatically added by makenek > integer ax1,ay1,az1,ax2,ay2,az2 > parameter (ax1=lx1,ay1=ly1,az1=lz1,ax2=lx2,ay2=ly2,az2=lz2) ! 
> running averages > > _______________________________________________ > Nek5000-users mailing list > Nek5000-users at lists.mcs.anl.gov > > https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users > > > ------------------------------------------------------------------------ > > _______________________________________________ > Nek5000-users mailing list > Nek5000-users at lists.mcs.anl.gov > https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users > From nek5000-users at lists.mcs.anl.gov Mon Aug 23 15:55:37 2010 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Mon, 23 Aug 2010 15:55:37 -0500 (CDT) Subject: [Nek5000-users] IFCHAR & Conj. HT? In-Reply-To: <1654984554.43511282511400903.JavaMail.root@neo-mail-3.tamu.edu> Message-ID: <1076728210.77641282596937658.JavaMail.root@neo-mail-3.tamu.edu> Paul, Could you shine some light on this? - Michael ----- Original Message ----- From: nek5000-users at lists.mcs.anl.gov To: "Nekton User List" Sent: Sunday, August 22, 2010 4:10:00 PM GMT -06:00 US/Canada Central Subject: [Nek5000-users] IFCHAR & Conj. HT? Hi Developers, I have just realized that IFCHAR is not compatible with conjugate heat transfer cases? Is there a reason/solution for this? Thanks, Michael _______________________________________________ Nek5000-users mailing list Nek5000-users at lists.mcs.anl.gov https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users -------------- next part -------------- An HTML attachment was scrubbed... URL: From nek5000-users at lists.mcs.anl.gov Mon Aug 23 16:04:25 2010 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Mon, 23 Aug 2010 16:04:25 -0500 (CDT) Subject: [Nek5000-users] Temperature gradient at a point In-Reply-To: References: Message-ID: Hi Pradeep, Why not just solve the conjugate heat transfer problem directly using fluid + solid elements in nek? Also, nek supports full Robin boundary conditions if you wish to do a Newton law of cooling: k*dT/dn . 
n_hat = h*(T-Tinf), where Tinf is the external temperature and h is the heat transfer coefficient, both of which can be functions of time and space.

Regarding gradm1, you would call it from userchk, and store the output in arrays in a common block, e.g., as below.

Paul

      subroutine userchk
      :
      common /mygrad/ tx(lx1,ly1,lz1,lelt)
     $              , ty(lx1,ly1,lz1,lelt)
     $              , tz(lx1,ly1,lz1,lelt)

      call gradm1(tx,ty,tz,t)
      :
      :

      subroutine userbc (ix,iy,iz,iside,eg)
      include 'SIZE'
      include 'TOTAL'
      include 'NEKUSE'

      common /mygrad/ tx(lx1,ly1,lz1,lelt)
     $              , ty(lx1,ly1,lz1,lelt)
     $              , tz(lx1,ly1,lz1,lelt)

      integer e,eg

      e = gllel(eg)  ! global element number to processor-local el. #

      gtx=tx(ix,iy,iz,e)
      gty=ty(ix,iy,iz,e)
      gtz=tz(ix,iy,iz,e)

On Mon, 23 Aug 2010, nek5000-users at lists.mcs.anl.gov wrote: > Hi Paul, > > I am basically trying to solve a conjugate heat transfer problem in an > iterative manner, for flow over an infinitely long cylinder (2D). > > I need to use the heat transfer at the boundary, to calculate the new > temperature at the boundary for the next time step. The > temperature for the next time step is solved for using this heat flux, by a > function in the usr file using an FEM algorithm for the solid part > (cylinder). The bc type I am using is Temperature - fortran function. > > Regards, > Pradeep > > On Mon, Aug 23, 2010 at 2:50 PM, wrote: > >> >> Pradeep, >> >> if you give me some idea of the nature of your bc, I can >> perhaps help --- there are a large number of bc types already >> supported inside nek >> >> Paul >> >> >> >> On Mon, 23 Aug 2010, nek5000-users at lists.mcs.anl.gov wrote: >> >> Hi, >>> >>> I wanted to know if there was a way to find the temperature gradient at a >>> point. I need that information in the userbc function. >>> >>> I tried using gradm1(), but I am not sure how to get the value at a given >>> point.
>>> >>> Thanks, >>> Pradeep >>> >>> _______________________________________________ >> Nek5000-users mailing list >> Nek5000-users at lists.mcs.anl.gov >> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users >> > > > > -- > Pradeep C. Rao > Graduate Research Assistant for FT2L (http://www1.mengr.tamu.edu/FT2L/) > Department of Mechanical Engineering > Texas A&M University > College Station, TX 77843-3123 > > 428 Engineering Physics Building > (713) 210-9769 > uuuu c----------------------------------------------------------------------- C C USER SPECIFIED ROUTINES: C C - boundary conditions C - initial conditions C - variable properties C - local acceleration for fluid (a) C - forcing function for passive scalar (q) C - general purpose routine for checking errors etc. C c----------------------------------------------------------------------- subroutine uservp (ix,iy,iz,eg) include 'SIZE' include 'TOTAL' include 'NEKUSE' integer e,f,eg c e = gllel(eg) udiff =0. utrans=0. return end c----------------------------------------------------------------------- subroutine userf (ix,iy,iz,eg) include 'SIZE' include 'TOTAL' include 'NEKUSE' integer e,f,eg c e = gllel(eg) c Note: this is an acceleration term, NOT a force! c Thus, ffx will subsequently be multiplied by rho(x,t). 
ffx = 0.0 ffy = 0.0 ffz = 0.0 return end c----------------------------------------------------------------------- subroutine userq (ix,iy,iz,eg) include 'SIZE' include 'TOTAL' include 'NEKUSE' integer e,f,eg c e = gllel(eg) qvol = 0.0 return end c----------------------------------------------------------------------- subroutine userchk include 'SIZE' include 'TOTAL' return end c----------------------------------------------------------------------- subroutine userbc (ix,iy,iz,iside,ieg) include 'SIZE' include 'TOTAL' include 'NEKUSE' ux=0.0 uy=0.0 uz=0.0 temp=0.0 return end c----------------------------------------------------------------------- subroutine useric (ix,iy,iz,ieg) include 'SIZE' include 'TOTAL' include 'NEKUSE' ux=0.0 uy=0.0 uz=0.0 temp=0 return end c----------------------------------------------------------------------- subroutine usrdat include 'SIZE' include 'TOTAL' c return end c----------------------------------------------------------------------- subroutine usrdat2 include 'SIZE' include 'TOTAL' param(66) = 4. ! These give the std nek binary i/o and are param(67) = 4. ! good default values return end c----------------------------------------------------------------------- subroutine usrdat3 include 'SIZE' include 'TOTAL' c return end c----------------------------------------------------------------------- From nek5000-users at lists.mcs.anl.gov Mon Aug 23 16:10:26 2010 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Mon, 23 Aug 2010 16:10:26 -0500 (CDT) Subject: [Nek5000-users] IFCHAR & Conj. HT? In-Reply-To: <1076728210.77641282596937658.JavaMail.root@neo-mail-3.tamu.edu> References: <1076728210.77641282596937658.JavaMail.root@neo-mail-3.tamu.edu> Message-ID: Yes - it's not supported at the moment. There was a reason for it at the time -- but I confess I can't recall why just yet. I need to load that into cache but am a bit pre-occupied at the moment. 
I suggest setting ifchar to F and then reducing DT accordingly - that will allow you to move forward. Note that both of these features (particularly conj. heat transfer) are relatively new or newly resurrected, so we've simply not gotten to the point in the parameter space where both features are tested. If I can recall why it's not turned on it shouldn't be hard to resolve the issue. Hopefully in about 2.5 weeks (I'm on travel over the next couple of weeks). Thanks for your patience! Paul On Mon, 23 Aug 2010, nek5000-users at lists.mcs.anl.gov wrote: > > > Paul, > > > > Could you shine some light on this? > > > > - Michael > > > ----- Original Message ----- > From: nek5000-users at lists.mcs.anl.gov > To: "Nekton User List" > Sent: Sunday, August 22, 2010 4:10:00 PM GMT -06:00 US/Canada Central > Subject: [Nek5000-users] IFCHAR & Conj. HT? > > > Hi Developers, > > I have just realized that IFCHAR is not compatible with conjugate heat transfer cases? > > Is there a reason/solution for this? > > Thanks, > Michael > > _______________________________________________ Nek5000-users mailing list Nek5000-users at lists.mcs.anl.gov https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users From nek5000-users at lists.mcs.anl.gov Mon Aug 23 16:13:42 2010 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Mon, 23 Aug 2010 16:13:42 -0500 Subject: [Nek5000-users] postpro.f compiler problems In-Reply-To: <4C72DED9.1080905@oddjob.uchicago.edu> References: <4C72BB27.4050608@oddjob.uchicago.edu> <4C72D85E.3000708@oddjob.uchicago.edu> <4C72DED9.1080905@oddjob.uchicago.edu> Message-ID: <4C72E486.3080005@oddjob.uchicago.edu> Ok, it worked! Thank you, Stefan! nek5000-users at lists.mcs.anl.gov wrote: > No, I didn't. I'll uncomment it and try again. Thank you! > Elizabeth > > nek5000-users at lists.mcs.anl.gov wrote: >> Your compiler problem seems to be related to the fact that the lpart >> parameter is commented. Did you comment this parameter? 
>> >> c automatically added by makenek >> c parameter(lpart = 10000 ) ! max number of particles >> >> Stefan >> >> >> >> On Mon, Aug 23, 2010 at 10:21 PM, > > wrote: >> >> Here is the SIZE file. >> >> Elizabeth >> >> >> nek5000-users at lists.mcs.anl.gov >> wrote: >> >> Hi, >> >> please post your SIZE file. >> >> Stefan >> >> >> >> On Mon, Aug 23, 2010 at 8:17 PM, >> > >> > >> wrote: >> >> Hi, >> I tried to update to the newest version of the code, but I >> am having >> trouble getting it to compile. It is having a problem with >> compiling >> postpro.f. It looks like the compiler fails at line 1364 >> when >> parameter >> is used to set a variable equal to a non-constant quantity >> instead >> of a >> constant one. It looks like this was added in version 537 by >> Stefan. I've included the end of the output from the >> compiler, >> which is the Portland group compiler on Franklin. At Aleks' >> suggestion I tried deleting the subroutine g2gi, but that >> did not >> fix the problem. I was finally able to get it to compile by >> using >> postpro.f from version 536 and deleting the subroutine >> hpts. That subroutine was causing a similar problem. 
>> Elizabeth >> >> >> PGF90-S-0050-Assumed size array, rst, is not a dummy argument >> (/global/homes/e/ehicks/nek5_svn/trunk/nek/postpro.f: 448) >> PGF90-S-0050-Assumed size array, dist, is not a dummy >> argument >> (/global/homes/e/ehicks/nek5_svn/trunk/nek/postpro.f: 448) >> PGF90-S-0050-Assumed size array, rcode, is not a dummy >> argument >> (/global/homes/e/ehicks/nek5_svn/trunk/nek/postpro.f: 449) >> PGF90-S-0050-Assumed size array, elid, is not a dummy >> argument >> (/global/homes/e/ehicks/nek5_svn/trunk/nek/postpro.f: 449) >> PGF90-S-0050-Assumed size array, proc, is not a dummy >> argument >> (/global/homes/e/ehicks/nek5_svn/trunk/nek/postpro.f: 449) >> 0 inform, 0 warnings, 5 severes, 0 fatal for intpts >> /opt/cray/xt-asyncpe/3.7/bin/ftn: INFO: linux target is >> being used >> ftn -c -O2 -r8 -Mpreprocess -DMPI -DLONGINT8 -DUNDERSCORE >> -DGLOBAL_LONG_LONG >> -I/scratch/scratchdirs/ehicks/nek/runs/run325 >> -I/global/homes/e/ehicks/nek5_svn/trunk/nek -I./ >> /global/homes/e/ehicks/nek5_svn/trunk/nek/qthermal.f -o >> obj/qthermal.o >> PGF90-S-0087-Non-constant expression where constant >> expression >> required (/global/homes/e/ehicks/nek5_svn/trunk/nek/postpro.f >> : 1346) >> 0 inform, 0 warnings, 1 severes, 0 fatal for g2gi >> make: *** [obj/postpro.o] Error 2 >> _______________________________________________ >> Nek5000-users mailing list >> Nek5000-users at lists.mcs.anl.gov >> >> > > >> >> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users >> >> >> >> ------------------------------------------------------------------------ >> >> >> >> _______________________________________________ >> Nek5000-users mailing list >> Nek5000-users at lists.mcs.anl.gov >> >> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users >> >> >> C Dimension file to be included >> C >> C HCUBE array dimensions >> C >> parameter (ldim=2) >> parameter (lx1=16,ly1=lx1,lz1=1,lelt=36,lelv=lelt) >> parameter (lxd=24,lyd=lxd,lzd=1) >> parameter (lelx=1,lely=1,lelz=1) >> c >> 
parameter (lzl=3 + 2*(ldim-3)) >> c >> parameter (lx2=lx1-2) >> parameter (ly2=ly1-2) >> parameter (lz2=1) >> parameter (lx3=lx2) >> parameter (ly3=ly2) >> parameter (lz3=lz2) >> c >> c parameter (lpelv=lelv,lpelt=lelt,lpert=3) ! perturbation >> c parameter (lpx1=lx1,lpy1=ly1,lpz1=lz1) ! array sizes >> c parameter (lpx2=lx2,lpy2=ly2,lpz2=lz2) >> c >> parameter (lpelv=1,lpelt=1,lpert=1) ! perturbation >> parameter (lpx1=1,lpy1=1,lpz1=1) ! array sizes >> parameter (lpx2=1,lpy2=1,lpz2=1) >> c >> c >> c parameter (lbelv=lelv,lbelt=lelt) ! MHD >> c parameter (lbx1=lx1,lby1=ly1,lbz1=lz1) ! array sizes >> c parameter (lbx2=lx2,lby2=ly2,lbz2=lz2) >> c >> parameter (lbelv=1,lbelt=1) ! MHD >> parameter (lbx1=1,lby1=1,lbz1=1) ! array sizes >> parameter (lbx2=1,lby2=1,lbz2=1) >> c >> C LX1M=LX1 when there are moving meshes; =1 otherwise >> parameter (lx1m=1,ly1m=1,lz1m=1) >> parameter (ldimt= 1) ! 3 passive >> scalars + T >> parameter (ldimt1=ldimt+1) >> parameter (ldimt3=ldimt+3) >> parameter (lp = 1024) >> parameter (lelg = 1152) >> c >> c Note: In the new code, LELGEC should be about sqrt(LELG) >> c >> PARAMETER (LELGEC = 1) >> PARAMETER (LXYZ2 = 1) >> PARAMETER (LXZ21 = 1) >> c >> PARAMETER (LMAXV=LX1*LY1*LZ1*LELV) >> PARAMETER (LMAXT=LX1*LY1*LZ1*LELT) >> PARAMETER (LMAXP=LX2*LY2*LZ2*LELV) >> PARAMETER (LXZ=LX1*LZ1) >> PARAMETER (LORDER=3) >> PARAMETER (MAXOBJ=1,MAXMBR=LELT*6,lhis=10) >> C >> C Common Block Dimensions >> C >> PARAMETER (LCTMP0 =2*LX1*LY1*LZ1*LELT) >> PARAMETER (LCTMP1 =4*LX1*LY1*LZ1*LELT) >> C >> C The parameter LVEC controls whether an additional 42 field >> arrays >> C are required for Steady State Solutions. If you are not using >> C Steady State, it is recommended that LVEC=1. 
>> C >> PARAMETER (LVEC=1) >> C >> C Uzawa projection array dimensions >> C >> parameter (mxprev = 20) >> parameter (lgmres = 30) >> C >> C Split projection array dimensions >> C >> parameter(lmvec = 1) >> parameter(lsvec = 1) >> parameter(lstore=lmvec*lsvec) >> c >> c NONCONFORMING STUFF >> c >> parameter (maxmor = lelt) >> C >> C Array dimensions >> C >> COMMON/DIMN/NELV,NELT,NX1,NY1,NZ1,NX2,NY2,NZ2 >> $,NX3,NY3,NZ3,NDIM,NFIELD,NPERT,NID >> $,NXD,NYD,NZD >> >> >> >> c automatically added by makenek >> parameter(lxo = lx1) ! max output grid size (lxo>=lx1) >> >> c automatically added by makenek >> c parameter(lpart = 10000 ) ! max number of particles >> >> c automatically added by makenek >> integer ax1,ay1,az1,ax2,ay2,az2 >> parameter (ax1=lx1,ay1=ly1,az1=lz1,ax2=lx2,ay2=ly2,az2=lz2) ! >> running averages >> >> _______________________________________________ >> Nek5000-users mailing list >> Nek5000-users at lists.mcs.anl.gov >> >> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users >> >> >> ------------------------------------------------------------------------ >> >> _______________________________________________ >> Nek5000-users mailing list >> Nek5000-users at lists.mcs.anl.gov >> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users >> > _______________________________________________ > Nek5000-users mailing list > Nek5000-users at lists.mcs.anl.gov > https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users From nek5000-users at lists.mcs.anl.gov Mon Aug 23 16:13:48 2010 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Mon, 23 Aug 2010 16:13:48 -0500 Subject: [Nek5000-users] Temperature gradient at a point In-Reply-To: References: Message-ID: Hi Paul, Thanks for the detailed reply. The reason I'm not solving it as a conjugate heat transfer problem, is that thermal conductivity is a function of temperature based on some curve fit equations, and I am not sure how to implement that. 
Thanks, Pradeep On Mon, Aug 23, 2010 at 4:04 PM, wrote: > > Hi Pradeep, > > Why not just solve the conjugate heat transfer problem directly > using fluid + solid elements in nek? > > Also, nek supports full Robin boundary conditions if you wish > to do a Newton law of cooling: k*dT/dn . n_hat = h*(T-Tinf), where Tinf is > the external temperature and h is the heat transfer coefficient, both of > which can be functions of time and space. > > > Regarding gradm1, you would call it from userchk, and store > the output in arrays in a common block, e.g., as below. > > Paul > > subroutine userchk > : > common /mygrad/ tx(lx1,ly1,lz1,lelt) > $ , ty(lx1,ly1,lz1,lelt) > $ , tz(lx1,ly1,lz1,lelt) > > call gradm1(tx,ty,tz,t) > > : > : > > subroutine userbc (ix,iy,iz,iside,eg) > include 'SIZE' > include 'TOTAL' > include 'NEKUSE' > > common /mygrad/ tx(lx1,ly1,lz1,lelt) > $ , ty(lx1,ly1,lz1,lelt) > $ , tz(lx1,ly1,lz1,lelt) > > integer e,eg > > e = gllel(eg) ! global element number to processor-local el. # > > gtx=tx(ix,iy,iz,e) > gty=ty(ix,iy,iz,e) > gtz=tz(ix,iy,iz,e) > > > > > On Mon, 23 Aug 2010, nek5000-users at lists.mcs.anl.gov wrote: > > Hi Paul, >> >> I am basically trying to solve a conjugate heat transfer problem in an >> iterative manner, for flow over an infinitely long cylinder (2D). >> >> I need to use the heat transfer at the boundary, to calculate the new >> temperature at the boundary for the next time step. The >> temperature for the next time step is solved for using this heat flux, by >> a >> function in the usr file using an FEM algorithm for the solid part >> (cylinder). The bc type I am using is Temperature - fortran function. 
>> >> Regards, >> Pradeep >> >> On Mon, Aug 23, 2010 at 2:50 PM, wrote: >> >> >>> Pradeep, >>> >>> if you give me some idea of the nature of your bc, I can >>> perhaps help --- there are a large number of bc types already >>> supported inside nek >>> >>> Paul >>> >>> >>> >>> On Mon, 23 Aug 2010, nek5000-users at lists.mcs.anl.gov wrote: >>> >>> Hi, >>> >>>> >>>> I wanted to know if there was a way to find the temperature gradient at >>>> a >>>> point. I need that information in the userbc function. >>>> >>>> I tried using gradm1(), but I am not sure how to get the value at a >>>> given >>>> point. >>>> >>>> Thanks, >>>> Pradeep >>>> >>>> _______________________________________________ >>>> >>> Nek5000-users mailing list >>> Nek5000-users at lists.mcs.anl.gov >>> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users >>> >>> >> >> >> -- >> Pradeep C. Rao >> Graduate Research Assistant for FT2L (http://www1.mengr.tamu.edu/FT2L/) >> Department of Mechanical Engineering >> Texas A&M University >> College Station, TX 77843-3123 >> >> 428 Engineering Physics Building >> (713) 210-9769 >> >> uuuu > c----------------------------------------------------------------------- > C > C USER SPECIFIED ROUTINES: > C > C - boundary conditions > C - initial conditions > C - variable properties > C - local acceleration for fluid (a) > C - forcing function for passive scalar (q) > C - general purpose routine for checking errors etc. > C > c----------------------------------------------------------------------- > subroutine uservp (ix,iy,iz,eg) > include 'SIZE' > include 'TOTAL' > include 'NEKUSE' > > integer e,f,eg > c e = gllel(eg) > > udiff =0. > utrans=0. > return > end > c----------------------------------------------------------------------- > subroutine userf (ix,iy,iz,eg) > include 'SIZE' > include 'TOTAL' > include 'NEKUSE' > > integer e,f,eg > c e = gllel(eg) > > > c Note: this is an acceleration term, NOT a force! > c Thus, ffx will subsequently be multiplied by rho(x,t). 
> > > ffx = 0.0 > ffy = 0.0 > ffz = 0.0 > > return > end > c----------------------------------------------------------------------- > subroutine userq (ix,iy,iz,eg) > include 'SIZE' > include 'TOTAL' > include 'NEKUSE' > > integer e,f,eg > c e = gllel(eg) > > qvol = 0.0 > > return > end > c----------------------------------------------------------------------- > subroutine userchk > include 'SIZE' > include 'TOTAL' > return > end > c----------------------------------------------------------------------- > subroutine userbc (ix,iy,iz,iside,ieg) > include 'SIZE' > include 'TOTAL' > include 'NEKUSE' > ux=0.0 > uy=0.0 > uz=0.0 > temp=0.0 > return > end > c----------------------------------------------------------------------- > subroutine useric (ix,iy,iz,ieg) > include 'SIZE' > include 'TOTAL' > include 'NEKUSE' > ux=0.0 > uy=0.0 > uz=0.0 > temp=0 > return > end > c----------------------------------------------------------------------- > subroutine usrdat > include 'SIZE' > include 'TOTAL' > c > return > end > c----------------------------------------------------------------------- > subroutine usrdat2 > include 'SIZE' > include 'TOTAL' > > param(66) = 4. ! These give the std nek binary i/o and are > param(67) = 4. ! good default values > > return > end > c----------------------------------------------------------------------- > subroutine usrdat3 > include 'SIZE' > include 'TOTAL' > c > return > end > c----------------------------------------------------------------------- > > _______________________________________________ > Nek5000-users mailing list > Nek5000-users at lists.mcs.anl.gov > https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users > -- Pradeep C. Rao Graduate Research Assistant for FT2L (http://www1.mengr.tamu.edu/FT2L/) Department of Mechanical Engineering Texas A&M University College Station, TX 77843-3123 428 Engineering Physics Building (713) 210-9769 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nek5000-users at lists.mcs.anl.gov Mon Aug 23 16:23:19 2010 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Mon, 23 Aug 2010 16:23:19 -0500 (CDT) Subject: [Nek5000-users] Temperature gradient at a point In-Reply-To: References: Message-ID: OK - keep in mind that conductivity can be a function of whatever you'd like... You could thus conceivably time-step to the solution (if you're after a steady state solution), or step in a time-accurate way for an unsteady case. On Mon, 23 Aug 2010, nek5000-users at lists.mcs.anl.gov wrote: > Hi Paul, > > Thanks for the detailed reply. The reason I'm not solving it as a conjugate > heat transfer problem, is that thermal conductivity is a function of > temperature based on some curve fit equations, and I am not sure how to > implement that. > > Thanks, > Pradeep > > On Mon, Aug 23, 2010 at 4:04 PM, wrote: > >> >> Hi Pradeep, >> >> Why not just solve the conjugate heat transfer problem directly >> using fluid + solid elements in nek? >> >> Also, nek supports full Robin boundary conditions if you wish >> to do a Newton law of cooling: k*dT/dn . n_hat = h*(T-Tinf), where Tinf is >> the external temperature and h is the heat transfer coefficient, both of >> which can be functions of time and space. >> >> >> Regarding gradm1, you would call it from userchk, and store >> the output in arrays in a common block, e.g., as below. >> >> Paul >> >> subroutine userchk >> : >> common /mygrad/ tx(lx1,ly1,lz1,lelt) >> $ , ty(lx1,ly1,lz1,lelt) >> $ , tz(lx1,ly1,lz1,lelt) >> >> call gradm1(tx,ty,tz,t) >> >> : >> : >> >> subroutine userbc (ix,iy,iz,iside,eg) >> include 'SIZE' >> include 'TOTAL' >> include 'NEKUSE' >> >> common /mygrad/ tx(lx1,ly1,lz1,lelt) >> $ , ty(lx1,ly1,lz1,lelt) >> $ , tz(lx1,ly1,lz1,lelt) >> >> integer e,eg >> >> e = gllel(eg) ! global element number to processor-local el. 
# >> >> gtx=tx(ix,iy,iz,e) >> gty=ty(ix,iy,iz,e) >> gtz=tz(ix,iy,iz,e) >> >> >> >> >> On Mon, 23 Aug 2010, nek5000-users at lists.mcs.anl.gov wrote: >> >> Hi Paul, >>> >>> I am basically trying to solve a conjugate heat transfer problem in an >>> iterative manner, for flow over an infinitely long cylinder (2D). >>> >>> I need to use the heat transfer at the boundary, to calculate the new >>> temperature at the boundary for the next time step. The >>> temperature for the next time step is solved for using this heat flux, by >>> a >>> function in the usr file using an FEM algorithm for the solid part >>> (cylinder). The bc type I am using is Temperature - fortran function. >>> >>> Regards, >>> Pradeep >>> >>> On Mon, Aug 23, 2010 at 2:50 PM, wrote: >>> >>> >>>> Pradeep, >>>> >>>> if you give me some idea of the nature of your bc, I can >>>> perhaps help --- there are a large number of bc types already >>>> supported inside nek >>>> >>>> Paul >>>> >>>> >>>> >>>> On Mon, 23 Aug 2010, nek5000-users at lists.mcs.anl.gov wrote: >>>> >>>> Hi, >>>> >>>>> >>>>> I wanted to know if there was a way to find the temperature gradient at >>>>> a >>>>> point. I need that information in the userbc function. >>>>> >>>>> I tried using gradm1(), but I am not sure how to get the value at a >>>>> given >>>>> point. >>>>> >>>>> Thanks, >>>>> Pradeep >>>>> >>>>> _______________________________________________ >>>>> >>>> Nek5000-users mailing list >>>> Nek5000-users at lists.mcs.anl.gov >>>> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users >>>> >>>> >>> >>> >>> -- >>> Pradeep C. 
Rao >>> Graduate Research Assistant for FT2L (http://www1.mengr.tamu.edu/FT2L/) >>> Department of Mechanical Engineering >>> Texas A&M University >>> College Station, TX 77843-3123 >>> >>> 428 Engineering Physics Building >>> (713) 210-9769 >>> >>> uuuu >> c----------------------------------------------------------------------- >> C >> C USER SPECIFIED ROUTINES: >> C >> C - boundary conditions >> C - initial conditions >> C - variable properties >> C - local acceleration for fluid (a) >> C - forcing function for passive scalar (q) >> C - general purpose routine for checking errors etc. >> C >> c----------------------------------------------------------------------- >> subroutine uservp (ix,iy,iz,eg) >> include 'SIZE' >> include 'TOTAL' >> include 'NEKUSE' >> >> integer e,f,eg >> c e = gllel(eg) >> >> udiff =0. >> utrans=0. >> return >> end >> c----------------------------------------------------------------------- >> subroutine userf (ix,iy,iz,eg) >> include 'SIZE' >> include 'TOTAL' >> include 'NEKUSE' >> >> integer e,f,eg >> c e = gllel(eg) >> >> >> c Note: this is an acceleration term, NOT a force! >> c Thus, ffx will subsequently be multiplied by rho(x,t). 
>> >> >> ffx = 0.0 >> ffy = 0.0 >> ffz = 0.0 >> >> return >> end >> c----------------------------------------------------------------------- >> subroutine userq (ix,iy,iz,eg) >> include 'SIZE' >> include 'TOTAL' >> include 'NEKUSE' >> >> integer e,f,eg >> c e = gllel(eg) >> >> qvol = 0.0 >> >> return >> end >> c----------------------------------------------------------------------- >> subroutine userchk >> include 'SIZE' >> include 'TOTAL' >> return >> end >> c----------------------------------------------------------------------- >> subroutine userbc (ix,iy,iz,iside,ieg) >> include 'SIZE' >> include 'TOTAL' >> include 'NEKUSE' >> ux=0.0 >> uy=0.0 >> uz=0.0 >> temp=0.0 >> return >> end >> c----------------------------------------------------------------------- >> subroutine useric (ix,iy,iz,ieg) >> include 'SIZE' >> include 'TOTAL' >> include 'NEKUSE' >> ux=0.0 >> uy=0.0 >> uz=0.0 >> temp=0 >> return >> end >> c----------------------------------------------------------------------- >> subroutine usrdat >> include 'SIZE' >> include 'TOTAL' >> c >> return >> end >> c----------------------------------------------------------------------- >> subroutine usrdat2 >> include 'SIZE' >> include 'TOTAL' >> >> param(66) = 4. ! These give the std nek binary i/o and are >> param(67) = 4. ! good default values >> >> return >> end >> c----------------------------------------------------------------------- >> subroutine usrdat3 >> include 'SIZE' >> include 'TOTAL' >> c >> return >> end >> c----------------------------------------------------------------------- >> >> _______________________________________________ >> Nek5000-users mailing list >> Nek5000-users at lists.mcs.anl.gov >> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users >> > > > > -- > Pradeep C. 
Rao > Graduate Research Assistant for FT2L (http://www1.mengr.tamu.edu/FT2L/) > Department of Mechanical Engineering > Texas A&M University > College Station, TX 77843-3123 > > 428 Engineering Physics Building > (713) 210-9769 > From nek5000-users at lists.mcs.anl.gov Mon Aug 23 16:25:03 2010 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Mon, 23 Aug 2010 16:25:03 -0500 (CDT) Subject: [Nek5000-users] Temperature gradient at a point In-Reply-To: References: Message-ID: Hi Pradeep, Dependence of conductivity on time and space is not a problem once one uses non zero p30 in .rea that activates a call to uservp of .usr file If you also need a dependence of conductivity on temperature you may want to consider either using the values from the previous time step or doing extrapolation in time. Best, Aleks On Mon, 23 Aug 2010, nek5000-users at lists.mcs.anl.gov wrote: > Hi Paul, > > Thanks for the detailed reply. The reason I'm not solving it as a conjugate > heat transfer problem, is that thermal conductivity is a function of > temperature based on some curve fit equations, and I am not sure how to > implement that. > > Thanks, > Pradeep > > On Mon, Aug 23, 2010 at 4:04 PM, wrote: > >> >> Hi Pradeep, >> >> Why not just solve the conjugate heat transfer problem directly >> using fluid + solid elements in nek? >> >> Also, nek supports full Robin boundary conditions if you wish >> to do a Newton law of cooling: k*dT/dn . n_hat = h*(T-Tinf), where Tinf is >> the external temperature and h is the heat transfer coefficient, both of >> which can be functions of time and space. >> >> >> Regarding gradm1, you would call it from userchk, and store >> the output in arrays in a common block, e.g., as below. 
>> >> Paul >> >> subroutine userchk >> : >> common /mygrad/ tx(lx1,ly1,lz1,lelt) >> $ , ty(lx1,ly1,lz1,lelt) >> $ , tz(lx1,ly1,lz1,lelt) >> >> call gradm1(tx,ty,tz,t) >> >> : >> : >> >> subroutine userbc (ix,iy,iz,iside,eg) >> include 'SIZE' >> include 'TOTAL' >> include 'NEKUSE' >> >> common /mygrad/ tx(lx1,ly1,lz1,lelt) >> $ , ty(lx1,ly1,lz1,lelt) >> $ , tz(lx1,ly1,lz1,lelt) >> >> integer e,eg >> >> e = gllel(eg) ! global element number to processor-local el. # >> >> gtx=tx(ix,iy,iz,e) >> gty=ty(ix,iy,iz,e) >> gtz=tz(ix,iy,iz,e) >> >> >> >> >> On Mon, 23 Aug 2010, nek5000-users at lists.mcs.anl.gov wrote: >> >> Hi Paul, >>> >>> I am basically trying to solve a conjugate heat transfer problem in an >>> iterative manner, for flow over an infinitely long cylinder (2D). >>> >>> I need to use the heat transfer at the boundary, to calculate the new >>> temperature at the boundary for the next time step. The >>> temperature for the next time step is solved for using this heat flux, by >>> a >>> function in the usr file using an FEM algorithm for the solid part >>> (cylinder). The bc type I am using is Temperature - fortran function. >>> >>> Regards, >>> Pradeep >>> >>> On Mon, Aug 23, 2010 at 2:50 PM, wrote: >>> >>> >>>> Pradeep, >>>> >>>> if you give me some idea of the nature of your bc, I can >>>> perhaps help --- there are a large number of bc types already >>>> supported inside nek >>>> >>>> Paul >>>> >>>> >>>> >>>> On Mon, 23 Aug 2010, nek5000-users at lists.mcs.anl.gov wrote: >>>> >>>> Hi, >>>> >>>>> >>>>> I wanted to know if there was a way to find the temperature gradient at >>>>> a >>>>> point. I need that information in the userbc function. >>>>> >>>>> I tried using gradm1(), but I am not sure how to get the value at a >>>>> given >>>>> point. 
>>>>> >>>>> Thanks, >>>>> Pradeep >>>>> >>>>> _______________________________________________ >>>>> >>>> Nek5000-users mailing list >>>> Nek5000-users at lists.mcs.anl.gov >>>> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users >>>> >>>> >>> >>> >>> -- >>> Pradeep C. Rao >>> Graduate Research Assistant for FT2L (http://www1.mengr.tamu.edu/FT2L/) >>> Department of Mechanical Engineering >>> Texas A&M University >>> College Station, TX 77843-3123 >>> >>> 428 Engineering Physics Building >>> (713) 210-9769 >>> >>> uuuu >> c----------------------------------------------------------------------- >> C >> C USER SPECIFIED ROUTINES: >> C >> C - boundary conditions >> C - initial conditions >> C - variable properties >> C - local acceleration for fluid (a) >> C - forcing function for passive scalar (q) >> C - general purpose routine for checking errors etc. >> C >> c----------------------------------------------------------------------- >> subroutine uservp (ix,iy,iz,eg) >> include 'SIZE' >> include 'TOTAL' >> include 'NEKUSE' >> >> integer e,f,eg >> c e = gllel(eg) >> >> udiff =0. >> utrans=0. >> return >> end >> c----------------------------------------------------------------------- >> subroutine userf (ix,iy,iz,eg) >> include 'SIZE' >> include 'TOTAL' >> include 'NEKUSE' >> >> integer e,f,eg >> c e = gllel(eg) >> >> >> c Note: this is an acceleration term, NOT a force! >> c Thus, ffx will subsequently be multiplied by rho(x,t). 
>> >> >> ffx = 0.0 >> ffy = 0.0 >> ffz = 0.0 >> >> return >> end >> c----------------------------------------------------------------------- >> subroutine userq (ix,iy,iz,eg) >> include 'SIZE' >> include 'TOTAL' >> include 'NEKUSE' >> >> integer e,f,eg >> c e = gllel(eg) >> >> qvol = 0.0 >> >> return >> end >> c----------------------------------------------------------------------- >> subroutine userchk >> include 'SIZE' >> include 'TOTAL' >> return >> end >> c----------------------------------------------------------------------- >> subroutine userbc (ix,iy,iz,iside,ieg) >> include 'SIZE' >> include 'TOTAL' >> include 'NEKUSE' >> ux=0.0 >> uy=0.0 >> uz=0.0 >> temp=0.0 >> return >> end >> c----------------------------------------------------------------------- >> subroutine useric (ix,iy,iz,ieg) >> include 'SIZE' >> include 'TOTAL' >> include 'NEKUSE' >> ux=0.0 >> uy=0.0 >> uz=0.0 >> temp=0 >> return >> end >> c----------------------------------------------------------------------- >> subroutine usrdat >> include 'SIZE' >> include 'TOTAL' >> c >> return >> end >> c----------------------------------------------------------------------- >> subroutine usrdat2 >> include 'SIZE' >> include 'TOTAL' >> >> param(66) = 4. ! These give the std nek binary i/o and are >> param(67) = 4. ! good default values >> >> return >> end >> c----------------------------------------------------------------------- >> subroutine usrdat3 >> include 'SIZE' >> include 'TOTAL' >> c >> return >> end >> c----------------------------------------------------------------------- >> >> _______________________________________________ >> Nek5000-users mailing list >> Nek5000-users at lists.mcs.anl.gov >> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users >> > > > > -- > Pradeep C. 
Rao > Graduate Research Assistant for FT2L (http://www1.mengr.tamu.edu/FT2L/) > Department of Mechanical Engineering > Texas A&M University > College Station, TX 77843-3123 > > 428 Engineering Physics Building > (713) 210-9769 > From nek5000-users at lists.mcs.anl.gov Mon Aug 23 16:29:22 2010 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Mon, 23 Aug 2010 16:29:22 -0500 (CDT) Subject: [Nek5000-users] IFCHAR & Conj. HT? In-Reply-To: Message-ID: <126473798.87381282598962556.JavaMail.root@neo-mail-3.tamu.edu> Hi Paul, Thanks for the information! I know that a lot of the CHT stuff is getting some new attention, and I appreciate you looking into it. And yes, I have reduced DT, and it is running very nicely, but it would be great to take full advantage of the characteristic time stepping. I'll check in again in a few weeks or so. Thanks, Michael ----- Original Message ----- From: nek5000-users at lists.mcs.anl.gov To: nek5000-users at lists.mcs.anl.gov Sent: Monday, August 23, 2010 4:10:26 PM GMT -06:00 US/Canada Central Subject: Re: [Nek5000-users] IFCHAR & Conj. HT? Yes - it's not supported at the moment. There was a reason for it at the time -- but I confess I can't recall why just yet. I need to load that into cache but am a bit pre-occupied at the moment. I suggest setting ifchar to F and then reducing DT accordingly - that will allow you to move forward. Note that both of these features (particularly conj. heat transfer) are relatively new or newly resurrected, so we've simply not gotten to the point in the parameter space where both features are tested. If I can recall why it's not turned on, it shouldn't be hard to resolve the issue. Hopefully in about 2.5 weeks (I'm on travel over the next couple of weeks). Thanks for your patience! Paul On Mon, 23 Aug 2010, nek5000-users at lists.mcs.anl.gov wrote: > > > Paul, > > > > Could you shine some light on this? 
> > > > - Michael > > > ----- Original Message ----- > From: nek5000-users at lists.mcs.anl.gov > To: "Nekton User List" > Sent: Sunday, August 22, 2010 4:10:00 PM GMT -06:00 US/Canada Central > Subject: [Nek5000-users] IFCHAR & Conj. HT? > > > Hi Developers, > > I have just realized that IFCHAR is not compatible with conjugate heat transfer cases? > > Is there a reason/solution for this? > > Thanks, > Michael > > _______________________________________________ Nek5000-users mailing list Nek5000-users at lists.mcs.anl.gov https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users -------------- next part -------------- An HTML attachment was scrubbed... URL: From nek5000-users at lists.mcs.anl.gov Mon Aug 23 16:31:37 2010 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Mon, 23 Aug 2010 16:31:37 -0500 (CDT) Subject: [Nek5000-users] Temperature gradient at a point In-Reply-To: Message-ID: <573676220.87951282599097770.JavaMail.root@neo-mail-3.tamu.edu> Hi Pradeep, I can show you how to do this off-list; there is a way to use the uservp (variable properties) routine in the usr file. Although I am not sure how this affects using IFCHAR (if you are). I seem to recall Markus having an issue with PN/PN versus PN/PN-2 when using variable properties, and the PN/PN-2 routine seemed to correct the issue he was having. - Michael ----- Original Message ----- From: nek5000-users at lists.mcs.anl.gov To: nek5000-users at lists.mcs.anl.gov Sent: Monday, August 23, 2010 4:13:48 PM GMT -06:00 US/Canada Central Subject: Re: [Nek5000-users] Temperature gradient at a point Hi Paul, Thanks for the detailed reply. 
The reason I'm not solving it as a conjugate heat transfer problem, is that thermal conductivity is a function of temperature based on some curve fit equations, and I am not sure how to implement that. Thanks, Pradeep On Mon, Aug 23, 2010 at 4:04 PM, < nek5000-users at lists.mcs.anl.gov > wrote: Hi Pradeep, Why not just solve the conjugate heat transfer problem directly using fluid + solid elements in nek? Also, nek supports full Robin boundary conditions if you wish to do a Newton law of cooling: k*dT/dn . n_hat = h*(T-Tinf), where Tinf is the external temperature and h is the heat transfer coefficient, both of which can be functions of time and space. Regarding gradm1, you would call it from userchk, and store the output in arrays in a common block, e.g., as below. Paul

      subroutine userchk
      :
      common /mygrad/ tx(lx1,ly1,lz1,lelt)
     $              , ty(lx1,ly1,lz1,lelt)
     $              , tz(lx1,ly1,lz1,lelt)

      call gradm1(tx,ty,tz,t)

      :
      :

      subroutine userbc (ix,iy,iz,iside,eg)
      include 'SIZE'
      include 'TOTAL'
      include 'NEKUSE'

      common /mygrad/ tx(lx1,ly1,lz1,lelt)
     $              , ty(lx1,ly1,lz1,lelt)
     $              , tz(lx1,ly1,lz1,lelt)

      integer e,eg

      e = gllel(eg) ! global element number to processor-local el. #

      gtx=tx(ix,iy,iz,e)
      gty=ty(ix,iy,iz,e)
      gtz=tz(ix,iy,iz,e)

On Mon, 23 Aug 2010, nek5000-users at lists.mcs.anl.gov wrote: Hi Paul, I am basically trying to solve a conjugate heat transfer problem in an iterative manner, for flow over an infinitely long cylinder (2D). I need to use the heat transfer at the boundary, to calculate the new temperature at the boundary for the next time step. The temperature for the next time step is solved for using this heat flux, by a function in the usr file using an FEM algorithm for the solid part (cylinder). The bc type I am using is Temperature - fortran function. 
Regards, Pradeep On Mon, Aug 23, 2010 at 2:50 PM, < nek5000-users at lists.mcs.anl.gov > wrote: Pradeep, if you give me some idea of the nature of your bc, I can perhaps help --- there are a large number of bc types already supported inside nek Paul On Mon, 23 Aug 2010, nek5000-users at lists.mcs.anl.gov wrote: Hi, I wanted to know if there was a way to find the temperature gradient at a point. I need that information in the userbc function. I tried using gradm1(), but I am not sure how to get the value at a given point. Thanks, Pradeep _______________________________________________ Nek5000-users mailing list Nek5000-users at lists.mcs.anl.gov https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users -- Pradeep C. Rao Graduate Research Assistant for FT2L (http://www1.mengr.tamu.edu/FT2L/) Department of Mechanical Engineering Texas A&M University College Station, TX 77843-3123 428 Engineering Physics Building (713) 210-9769

c-----------------------------------------------------------------------
C
C  USER SPECIFIED ROUTINES:
C
C     - boundary conditions
C     - initial conditions
C     - variable properties
C     - local acceleration for fluid (a)
C     - forcing function for passive scalar (q)
C     - general purpose routine for checking errors etc.
C
c-----------------------------------------------------------------------
      subroutine uservp (ix,iy,iz,eg)
      include 'SIZE'
      include 'TOTAL'
      include 'NEKUSE'

      integer e,f,eg
c     e = gllel(eg)

      udiff =0.
      utrans=0.
      return
      end
c-----------------------------------------------------------------------
      subroutine userf  (ix,iy,iz,eg)
      include 'SIZE'
      include 'TOTAL'
      include 'NEKUSE'

      integer e,f,eg
c     e = gllel(eg)

c     Note: this is an acceleration term, NOT a force!
c     Thus, ffx will subsequently be multiplied by rho(x,t).

      ffx = 0.0
      ffy = 0.0
      ffz = 0.0

      return
      end
c-----------------------------------------------------------------------
      subroutine userq  (ix,iy,iz,eg)
      include 'SIZE'
      include 'TOTAL'
      include 'NEKUSE'

      integer e,f,eg
c     e = gllel(eg)

      qvol   = 0.0

      return
      end
c-----------------------------------------------------------------------
      subroutine userchk
      include 'SIZE'
      include 'TOTAL'
      return
      end
c-----------------------------------------------------------------------
      subroutine userbc (ix,iy,iz,iside,ieg)
      include 'SIZE'
      include 'TOTAL'
      include 'NEKUSE'
      ux=0.0
      uy=0.0
      uz=0.0
      temp=0.0
      return
      end
c-----------------------------------------------------------------------
      subroutine useric (ix,iy,iz,ieg)
      include 'SIZE'
      include 'TOTAL'
      include 'NEKUSE'
      ux=0.0
      uy=0.0
      uz=0.0
      temp=0
      return
      end
c-----------------------------------------------------------------------
      subroutine usrdat
      include 'SIZE'
      include 'TOTAL'
c
      return
      end
c-----------------------------------------------------------------------
      subroutine usrdat2
      include 'SIZE'
      include 'TOTAL'

      param(66) = 4.   ! These give the std nek binary i/o and are
      param(67) = 4.   ! good default values

      return
      end
c-----------------------------------------------------------------------
      subroutine usrdat3
      include 'SIZE'
      include 'TOTAL'
c
      return
      end
c-----------------------------------------------------------------------

_______________________________________________ Nek5000-users mailing list Nek5000-users at lists.mcs.anl.gov https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users -- Pradeep C. 
Rao Graduate Research Assistant for FT2L ( http://www1.mengr.tamu.edu/FT2L/ ) Department of Mechanical Engineering Texas A&M University College Station, TX 77843-3123 428 Engineering Physics Building (713) 210-9769 _______________________________________________ Nek5000-users mailing list Nek5000-users at lists.mcs.anl.gov https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users -------------- next part -------------- An HTML attachment was scrubbed... URL: From nek5000-users at lists.mcs.anl.gov Mon Aug 23 16:40:04 2010 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Mon, 23 Aug 2010 16:40:04 -0500 Subject: [Nek5000-users] Temperature gradient at a point In-Reply-To: References: Message-ID: Thanks Aleks, Will give that a try. On Mon, Aug 23, 2010 at 4:25 PM, wrote: > Hi Pradeep, > > Dependence of conductivity on time and space is not a problem once one uses > non zero p30 in .rea that activates a call to uservp of .usr file > > If you also need a dependence of conductivity on temperature you may want > to consider either using the values from the previous time step or doing > extrapolation in time. > > Best, > Aleks > > > > On Mon, 23 Aug 2010, nek5000-users at lists.mcs.anl.gov wrote: > > Hi Paul, >> >> Thanks for the detailed reply. The reason I'm not solving it as a >> conjugate >> heat transfer problem, is that thermal conductivity is a function of >> temperature based on some curve fit equations, and I am not sure how to >> implement that. >> >> Thanks, >> Pradeep >> >> On Mon, Aug 23, 2010 at 4:04 PM, wrote: >> >> >>> Hi Pradeep, >>> >>> Why not just solve the conjugate heat transfer problem directly >>> using fluid + solid elements in nek? >>> >>> Also, nek supports full Robin boundary conditions if you wish >>> to do a Newton law of cooling: k*dT/dn . n_hat = h*(T-Tinf), where Tinf >>> is >>> the external temperature and h is the heat transfer coefficient, both of >>> which can be functions of time and space. 
>>> >>> >>> Regarding gradm1, you would call it from userchk, and store >>> the output in arrays in a common block, e.g., as below. >>> >>> Paul >>> >>> subroutine userchk >>> : >>> common /mygrad/ tx(lx1,ly1,lz1,lelt) >>> $ , ty(lx1,ly1,lz1,lelt) >>> $ , tz(lx1,ly1,lz1,lelt) >>> >>> call gradm1(tx,ty,tz,t) >>> >>> : >>> : >>> >>> subroutine userbc (ix,iy,iz,iside,eg) >>> include 'SIZE' >>> include 'TOTAL' >>> include 'NEKUSE' >>> >>> common /mygrad/ tx(lx1,ly1,lz1,lelt) >>> $ , ty(lx1,ly1,lz1,lelt) >>> $ , tz(lx1,ly1,lz1,lelt) >>> >>> integer e,eg >>> >>> e = gllel(eg) ! global element number to processor-local el. # >>> >>> gtx=tx(ix,iy,iz,e) >>> gty=ty(ix,iy,iz,e) >>> gtz=tz(ix,iy,iz,e) >>> >>> >>> >>> >>> On Mon, 23 Aug 2010, nek5000-users at lists.mcs.anl.gov wrote: >>> >>> Hi Paul, >>> >>>> >>>> I am basically trying to solve a conjugate heat transfer problem in an >>>> iterative manner, for flow over an infinitely long cylinder (2D). >>>> >>>> I need to use the heat transfer at the boundary, to calculate the new >>>> temperature at the boundary for the next time step. The >>>> temperature for the next time step is solved for using this heat flux, >>>> by >>>> a >>>> function in the usr file using an FEM algorithm for the solid part >>>> (cylinder). The bc type I am using is Temperature - fortran function. >>>> >>>> Regards, >>>> Pradeep >>>> >>>> On Mon, Aug 23, 2010 at 2:50 PM, >>>> wrote: >>>> >>>> >>>> Pradeep, >>>>> >>>>> if you give me some idea of the nature of your bc, I can >>>>> perhaps help --- there are a large number of bc types already >>>>> supported inside nek >>>>> >>>>> Paul >>>>> >>>>> >>>>> >>>>> On Mon, 23 Aug 2010, nek5000-users at lists.mcs.anl.gov wrote: >>>>> >>>>> Hi, >>>>> >>>>> >>>>>> I wanted to know if there was a way to find the temperature gradient >>>>>> at >>>>>> a >>>>>> point. I need that information in the userbc function. 
>>>>>> >>>>>> I tried using gradm1(), but I am not sure how to get the value at a >>>>>> given >>>>>> point. >>>>>> >>>>>> Thanks, >>>>>> Pradeep >>>>>> >>>>>> _______________________________________________ >>>>>> >>>>>> Nek5000-users mailing list >>>>> Nek5000-users at lists.mcs.anl.gov >>>>> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users >>>>> >>>>> >>>>> >>>> >>>> -- >>>> Pradeep C. Rao >>>> Graduate Research Assistant for FT2L (http://www1.mengr.tamu.edu/FT2L/) >>>> Department of Mechanical Engineering >>>> Texas A&M University >>>> College Station, TX 77843-3123 >>>> >>>> 428 Engineering Physics Building >>>> (713) 210-9769 >>>> >>>> uuuu >>>> >>> c----------------------------------------------------------------------- >>> C >>> C USER SPECIFIED ROUTINES: >>> C >>> C - boundary conditions >>> C - initial conditions >>> C - variable properties >>> C - local acceleration for fluid (a) >>> C - forcing function for passive scalar (q) >>> C - general purpose routine for checking errors etc. >>> C >>> c----------------------------------------------------------------------- >>> subroutine uservp (ix,iy,iz,eg) >>> include 'SIZE' >>> include 'TOTAL' >>> include 'NEKUSE' >>> >>> integer e,f,eg >>> c e = gllel(eg) >>> >>> udiff =0. >>> utrans=0. >>> return >>> end >>> c----------------------------------------------------------------------- >>> subroutine userf (ix,iy,iz,eg) >>> include 'SIZE' >>> include 'TOTAL' >>> include 'NEKUSE' >>> >>> integer e,f,eg >>> c e = gllel(eg) >>> >>> >>> c Note: this is an acceleration term, NOT a force! >>> c Thus, ffx will subsequently be multiplied by rho(x,t). 
>>> >>> >>> ffx = 0.0 >>> ffy = 0.0 >>> ffz = 0.0 >>> >>> return >>> end >>> c----------------------------------------------------------------------- >>> subroutine userq (ix,iy,iz,eg) >>> include 'SIZE' >>> include 'TOTAL' >>> include 'NEKUSE' >>> >>> integer e,f,eg >>> c e = gllel(eg) >>> >>> qvol = 0.0 >>> >>> return >>> end >>> c----------------------------------------------------------------------- >>> subroutine userchk >>> include 'SIZE' >>> include 'TOTAL' >>> return >>> end >>> c----------------------------------------------------------------------- >>> subroutine userbc (ix,iy,iz,iside,ieg) >>> include 'SIZE' >>> include 'TOTAL' >>> include 'NEKUSE' >>> ux=0.0 >>> uy=0.0 >>> uz=0.0 >>> temp=0.0 >>> return >>> end >>> c----------------------------------------------------------------------- >>> subroutine useric (ix,iy,iz,ieg) >>> include 'SIZE' >>> include 'TOTAL' >>> include 'NEKUSE' >>> ux=0.0 >>> uy=0.0 >>> uz=0.0 >>> temp=0 >>> return >>> end >>> c----------------------------------------------------------------------- >>> subroutine usrdat >>> include 'SIZE' >>> include 'TOTAL' >>> c >>> return >>> end >>> c----------------------------------------------------------------------- >>> subroutine usrdat2 >>> include 'SIZE' >>> include 'TOTAL' >>> >>> param(66) = 4. ! These give the std nek binary i/o and are >>> param(67) = 4. ! good default values >>> >>> return >>> end >>> c----------------------------------------------------------------------- >>> subroutine usrdat3 >>> include 'SIZE' >>> include 'TOTAL' >>> c >>> return >>> end >>> c----------------------------------------------------------------------- >>> >>> _______________________________________________ >>> Nek5000-users mailing list >>> Nek5000-users at lists.mcs.anl.gov >>> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users >>> >>> >> >> >> -- >> Pradeep C. 
Rao >> Graduate Research Assistant for FT2L (http://www1.mengr.tamu.edu/FT2L/) >> Department of Mechanical Engineering >> Texas A&M University >> College Station, TX 77843-3123 >> >> 428 Engineering Physics Building >> (713) 210-9769 >> >> _______________________________________________ > Nek5000-users mailing list > Nek5000-users at lists.mcs.anl.gov > https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users > -- Pradeep C. Rao Graduate Research Assistant for FT2L (http://www1.mengr.tamu.edu/FT2L/) Department of Mechanical Engineering Texas A&M University College Station, TX 77843-3123 428 Engineering Physics Building (713) 210-9769 -------------- next part -------------- An HTML attachment was scrubbed... URL: From nek5000-users at lists.mcs.anl.gov Mon Aug 23 20:31:40 2010 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Mon, 23 Aug 2010 20:31:40 -0500 Subject: [Nek5000-users] Grid to grid interpolation Message-ID: Hi, Recently there was an updated routine in nek called grid-to-grid interpolation (g2gi). I tried to understand it by grepping the source code, but couldn't make much sense out of it. Could anyone briefly explain what it is for? I have a nek simulation in which I have got lots of data, but for some comparisons, would like to do a grid refinement (quad/oct from prenek) and restart from the existing field files if that is possible. I know that for h-type refinement, usually the simulation has to be started afresh. But I was wondering if g2gi would make it possible to restart from a previous simulation. Thanks Shriram -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nek5000-users at lists.mcs.anl.gov Tue Aug 24 01:45:24 2010 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Tue, 24 Aug 2010 08:45:24 +0200 Subject: [Nek5000-users] Grid to grid interpolation In-Reply-To: References: Message-ID: Shriram: The g2gi() routine can be used to interpolate an existing field file onto a new mesh - so it will do exactly what you want to do. For more details: http://nek5000.mcs.anl.gov/index.php/Data_processing_example Stefan On Tue, Aug 24, 2010 at 3:31 AM, wrote: > Hi, > > Recently there was an updated routine in nek called grid-to-grid > interpolation (g2gi). I tried to understand it by grepping to source code, > but couldn't make much sense out of it. Could anyone brief what it is it for > ? > > I have a nek simulation in which I have got lots of data, but for some > comparisons, would like to do a grid refinement (quad/oct from prenek) and > restart from the existing field files if that is possible. I know that for > h-type refinement, usually the simulation has to be started afresh. But I > was wondering if g2gi would make it possible to restart from a previous > simulation. > > Thanks > Shriram > > _______________________________________________ > Nek5000-users mailing list > Nek5000-users at lists.mcs.anl.gov > https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nek5000-users at lists.mcs.anl.gov Thu Aug 26 03:38:12 2010 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Thu, 26 Aug 2010 10:38:12 +0200 Subject: [Nek5000-users] viz-gallery Message-ID: <0401D1F9-7FC7-4F2C-95F6-971B883181D7@lav.mavt.ethz.ch> Dear Nek Users, we're updating our viz-gallery (https://nek5000.mcs.anl.gov/index.php/Visualization_Gallery). We want to encourage you to contribute to our gallery. 
Just send us (stefanke(at)mcs.anl.gov) a small paragraph about your setup including an appealing image. Cheers, Stefan From nek5000-users at lists.mcs.anl.gov Thu Aug 26 14:18:39 2010 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Thu, 26 Aug 2010 14:18:39 -0500 Subject: [Nek5000-users] Temperature gradient at a point In-Reply-To: References: Message-ID: Hi, Is there any reason why a conjugate heat transfer case should not work with IFLOMACH turned on? I did modify the uservp to set utrans = 1./temp. Thanks, Pradeep

c-----------------------------------------------------------------------
      subroutine uservp (ix,iy,iz,ieg)
      include 'SIZE'
      include 'TOTAL'
      include 'NEKUSE'

      if (ifield.eq.1) then
         utrans = 1./temp
c        utrans = param(1)
         udiff  = param(2)
      else
         utrans = 1./temp            ! thermal properties
c        utrans = param(7)
         udiff  = param(8)
         if (ieg .gt. nelgv) then    ! properties in the solid
            udiff  = 0.1*param(8)    ! conductivity
            utrans = 1.0
         endif
      endif
      return
      end
c-----------------------------------------------------------------------

On Mon, Aug 23, 2010 at 4:40 PM, Pradeep Rao wrote: > Thanks Aleks, > > Will give that a try. > > > On Mon, Aug 23, 2010 at 4:25 PM, wrote: > >> Hi Pradeep, >> >> Dependence of conductivity on time and space is not a problem once one >> uses non zero p30 in .rea that activates a call to uservp of .usr file >> >> If you also need a dependence of conductivity on temperature you may want >> to consider either using the values from the previous time step or doing >> extrapolation in time. >> >> Best, >> Aleks >> >> >> >> On Mon, 23 Aug 2010, nek5000-users at lists.mcs.anl.gov wrote: >> >> Hi Paul, >>> >>> Thanks for the detailed reply. The reason I'm not solving it as a >>> conjugate >>> heat transfer problem, is that thermal conductivity is a function of >>> temperature based on some curve fit equations, and I am not sure how to >>> implement that. 
>>>
>>> Thanks,
>>> Pradeep

--
Pradeep C. Rao
Graduate Research Assistant for FT2L (http://www1.mengr.tamu.edu/FT2L/)
Department of Mechanical Engineering
Texas A&M University
College Station, TX 77843-3123

428 Engineering Physics Building
(713) 210-9769

_______________________________________________
Nek5000-users mailing list
Nek5000-users at lists.mcs.anl.gov
https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From nek5000-users at lists.mcs.anl.gov Thu Aug 26 15:30:06 2010
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Thu, 26 Aug 2010 15:30:06 -0500 (CDT)
Subject: [Nek5000-users] Temperature gradient at a point
In-Reply-To: <549413914.180281282854521816.JavaMail.root@neo-mail-3.tamu.edu>
Message-ID: <897953068.180591282854606056.JavaMail.root@neo-mail-3.tamu.edu>

What's the error message you are getting?
It may be the case; I know I have had issues with these settings and
conjugate heat transfer (CHT) before, since the developers are (mainly at
our group's request) revisiting CHT and fixing the issues we report. The
latest such issue, like I mentioned, was the new time-stepping scheme with
CHT.

Did the error show up during compiling, or during the run in the output
log?

- Mike

----- Original Message -----
From: nek5000-users at lists.mcs.anl.gov
To: nek5000-users at lists.mcs.anl.gov
Sent: Thursday, August 26, 2010 2:18:39 PM GMT -06:00 US/Canada Central
Subject: Re: [Nek5000-users] Temperature gradient at a point

Hi,

Is there any reason why a conjugate heat transfer case should not work
with IFLOMACH turned on? I did modify uservp to set utrans = 1./temp.

Thanks,
Pradeep

c-----------------------------------------------------------------------
      subroutine uservp (ix,iy,iz,ieg)

      include 'SIZE'
      include 'TOTAL'
      include 'NEKUSE'

      if (ifield.eq.1) then
         utrans = 1./temp
c        utrans = param(1)
         udiff  = param(2)
      else
         utrans = 1./temp           ! thermal properties
c        utrans = param(7)
         udiff  = param(8)
         if (ieg .gt. nelgv) then   ! properties in the solid
            udiff  = 0.1*param(8)   ! conductivity
            utrans = 1.0
         endif
      endif

      return
      end
c-----------------------------------------------------------------------

On Mon, Aug 23, 2010 at 4:40 PM, Pradeep Rao <stringsofdurga at gmail.com> wrote:

Thanks Aleks,

Will give that a try.

On Mon, Aug 23, 2010 at 4:25 PM, <nek5000-users at lists.mcs.anl.gov> wrote:

Hi Pradeep,

Dependence of conductivity on time and space is not a problem once one
uses a non-zero p30 in the .rea file, which activates a call to uservp in
the .usr file.

If you also need a dependence of conductivity on temperature, you may want
to consider either using the values from the previous time step or doing
extrapolation in time.

Best,
Aleks

On Mon, 23 Aug 2010, nek5000-users at lists.mcs.anl.gov wrote:

Hi Paul,

Thanks for the detailed reply.
The reason I'm not solving it as a conjugate heat transfer problem is that
the thermal conductivity is a function of temperature based on some
curve-fit equations, and I am not sure how to implement that.

Thanks,
Pradeep

On Mon, Aug 23, 2010 at 4:04 PM, <nek5000-users at lists.mcs.anl.gov> wrote:

Hi Pradeep,

Why not just solve the conjugate heat transfer problem directly using
fluid + solid elements in nek?

Also, nek supports full Robin boundary conditions if you wish to do a
Newton law of cooling: k*dT/dn = h*(T-Tinf), where Tinf is the external
temperature and h is the heat transfer coefficient, both of which can be
functions of time and space.

Regarding gradm1, you would call it from userchk and store the output in
arrays in a common block, e.g., as below.

Paul

      subroutine userchk
      :
      common /mygrad/ tx(lx1,ly1,lz1,lelt)
     $              , ty(lx1,ly1,lz1,lelt)
     $              , tz(lx1,ly1,lz1,lelt)

      call gradm1(tx,ty,tz,t)
      :
      :

      subroutine userbc (ix,iy,iz,iside,eg)
      include 'SIZE'
      include 'TOTAL'
      include 'NEKUSE'

      common /mygrad/ tx(lx1,ly1,lz1,lelt)
     $              , ty(lx1,ly1,lz1,lelt)
     $              , tz(lx1,ly1,lz1,lelt)

      integer e,eg

      e = gllel(eg)   ! global element number to processor-local el. #

      gtx=tx(ix,iy,iz,e)
      gty=ty(ix,iy,iz,e)
      gtz=tz(ix,iy,iz,e)

On Mon, 23 Aug 2010, nek5000-users at lists.mcs.anl.gov wrote:

Hi Paul,

I am basically trying to solve a conjugate heat transfer problem in an
iterative manner, for flow over an infinitely long cylinder (2D).

I need to use the heat transfer at the boundary to calculate the new
temperature at the boundary for the next time step. The temperature for
the next time step is solved for using this heat flux, by a function in
the usr file using an FEM algorithm for the solid part (cylinder). The bc
type I am using is Temperature - fortran function.
Regards,
Pradeep

On Mon, Aug 23, 2010 at 2:50 PM, <nek5000-users at lists.mcs.anl.gov> wrote:

Pradeep,

if you give me some idea of the nature of your bc, I can perhaps help ---
there are a large number of bc types already supported inside nek

Paul

On Mon, 23 Aug 2010, nek5000-users at lists.mcs.anl.gov wrote:

Hi,

I wanted to know if there was a way to find the temperature gradient at a
point. I need that information in the userbc function.

I tried using gradm1(), but I am not sure how to get the value at a given
point.

Thanks,
Pradeep

_______________________________________________
Nek5000-users mailing list
Nek5000-users at lists.mcs.anl.gov
https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users

--
Pradeep C. Rao
Graduate Research Assistant for FT2L (http://www1.mengr.tamu.edu/FT2L/)
Department of Mechanical Engineering
Texas A&M University
College Station, TX 77843-3123

428 Engineering Physics Building
(713) 210-9769

c-----------------------------------------------------------------------
C
C  USER SPECIFIED ROUTINES:
C
C     - boundary conditions
C     - initial conditions
C     - variable properties
C     - local acceleration for fluid (a)
C     - forcing function for passive scalar (q)
C     - general purpose routine for checking errors etc.
C
c-----------------------------------------------------------------------
      subroutine uservp (ix,iy,iz,eg)
      include 'SIZE'
      include 'TOTAL'
      include 'NEKUSE'

      integer e,f,eg
c     e = gllel(eg)

      udiff =0.
      utrans=0.
      return
      end
c-----------------------------------------------------------------------
      subroutine userf (ix,iy,iz,eg)
      include 'SIZE'
      include 'TOTAL'
      include 'NEKUSE'

      integer e,f,eg
c     e = gllel(eg)

c     Note: this is an acceleration term, NOT a force!
c     Thus, ffx will subsequently be multiplied by rho(x,t).

      ffx = 0.0
      ffy = 0.0
      ffz = 0.0

      return
      end
c-----------------------------------------------------------------------
      subroutine userq (ix,iy,iz,eg)
      include 'SIZE'
      include 'TOTAL'
      include 'NEKUSE'

      integer e,f,eg
c     e = gllel(eg)

      qvol = 0.0

      return
      end
c-----------------------------------------------------------------------
      subroutine userchk
      include 'SIZE'
      include 'TOTAL'
      return
      end
c-----------------------------------------------------------------------
      subroutine userbc (ix,iy,iz,iside,ieg)
      include 'SIZE'
      include 'TOTAL'
      include 'NEKUSE'
      ux=0.0
      uy=0.0
      uz=0.0
      temp=0.0
      return
      end
c-----------------------------------------------------------------------
      subroutine useric (ix,iy,iz,ieg)
      include 'SIZE'
      include 'TOTAL'
      include 'NEKUSE'
      ux=0.0
      uy=0.0
      uz=0.0
      temp=0
      return
      end
c-----------------------------------------------------------------------
      subroutine usrdat
      include 'SIZE'
      include 'TOTAL'
c
      return
      end
c-----------------------------------------------------------------------
      subroutine usrdat2
      include 'SIZE'
      include 'TOTAL'

      param(66) = 4.   ! These give the std nek binary i/o and are
      param(67) = 4.   ! good default values

      return
      end
c-----------------------------------------------------------------------
      subroutine usrdat3
      include 'SIZE'
      include 'TOTAL'
c
      return
      end
c-----------------------------------------------------------------------

_______________________________________________
Nek5000-users mailing list
Nek5000-users at lists.mcs.anl.gov
https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users

-- Pradeep C.
Rao
Graduate Research Assistant for FT2L (http://www1.mengr.tamu.edu/FT2L/)
Department of Mechanical Engineering
Texas A&M University
College Station, TX 77843-3123

428 Engineering Physics Building
(713) 210-9769

_______________________________________________
Nek5000-users mailing list
Nek5000-users at lists.mcs.anl.gov
https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From nek5000-users at lists.mcs.anl.gov Thu Aug 26 15:50:08 2010
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Thu, 26 Aug 2010 22:50:08 +0200
Subject: [Nek5000-users] Temperature gradient at a point
In-Reply-To:
References:
Message-ID:

This combination should work!
Stefan

On Thu, Aug 26, 2010 at 9:18 PM, wrote:

> Hi,
>
> Is there any reason why a conjugate heat transfer case should not work
> with IFLOMACH turned on?
> I did modify the uservp to set utrans = 1./temp.
>
> Thanks,
> Pradeep
>
> c-----------------------------------------------------------------------
>       subroutine uservp (ix,iy,iz,ieg)
>
>       include 'SIZE'
>       include 'TOTAL'
>       include 'NEKUSE'
>
>       if (ifield.eq.1) then
>          utrans = 1./temp
> c        utrans = param(1)
>          udiff  = param(2)
>       else
>          utrans = 1./temp           ! thermal properties
> c        utrans = param(7)
>          udiff  = param(8)
>          if (ieg .gt. nelgv) then   ! properties in the solid
>             udiff  = 0.1*param(8)   ! conductivity
>             utrans = 1.0
>          endif
>       endif
>
>       return
>       end
> c-----------------------------------------------------------------------
>
> On Mon, Aug 23, 2010 at 4:40 PM, Pradeep Rao wrote:
>
>> Thanks Aleks,
>>
>> Will give that a try.
>>
>> On Mon, Aug 23, 2010 at 4:25 PM, wrote:
>>
>>> Hi Pradeep,
>>>
>>> Dependence of conductivity on time and space is not a problem once one
>>> uses non zero p30 in .rea that activates a call to uservp of .usr file
>>>
>>> If you also need a dependence of conductivity on temperature you may
>>> want to consider either using the values from the previous time step
>>> or doing extrapolation in time.
>>>
>>> Best,
>>> Aleks

> _______________________________________________
> Nek5000-users mailing list
> Nek5000-users at lists.mcs.anl.gov
> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From nek5000-users at lists.mcs.anl.gov Thu Aug 26 15:55:54 2010
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Thu, 26 Aug 2010 15:55:54 -0500
Subject: [Nek5000-users] Temperature gradient at a point
In-Reply-To: <897953068.180591282854606056.JavaMail.root@neo-mail-3.tamu.edu>
References: <549413914.180281282854521816.JavaMail.root@neo-mail-3.tamu.edu>
 <897953068.180591282854606056.JavaMail.root@neo-mail-3.tamu.edu>
Message-ID:

Hi Mike,

My solution blows up very quickly. I get NaNs with IFLOMACH turned on
along with the conjugate heat transfer. With IFLOMACH turned off, and only
conjugate heat transfer, I get a solution (it doesn't diverge).

Regards,
Pradeep

Initialization successfully completed    0.48088E-01 sec

Starting time loop ...

DT/DTCFL/DTFS/DTINIT  0.500E-01  0.138E+00  0.000E+00  0.500E-01
Step 1, t= 5.0000000E-02, DT= 5.0000000E-02, C= 0.181 0.0000E+00 0.0000E+00
Solving for heat
Temperature/Passive scalar solution
1.00000000000000002E-008 p22 1 2
New CG1-tolerance (RINIT*epsm) = 3.99554968277585449E-013 4.18439146179870359E-026
1   1 Helmholtz TEMP F: 3.9955E+00 1.0000E-08 5.0000E-01 1.3014E+21
1   2 Helmholtz TEMP F: 1.6451E+00 1.0000E-08 5.0000E-01 1.3014E+21
1   3 Helmholtz TEMP F: 6.9986E-01 1.0000E-08 5.0000E-01 1.3014E+21
1   4 Helmholtz TEMP F: 2.0955E-01 1.0000E-08 5.0000E-01 1.3014E+21
1   5 Helmholtz TEMP F: 5.1302E-02 1.0000E-08 5.0000E-01 1.3014E+21
1   6 Helmholtz TEMP F: 8.2185E-03 1.0000E-08 5.0000E-01 1.3014E+21
1   7 Helmholtz TEMP F: 3.9404E-03 1.0000E-08 5.0000E-01 1.3014E+21
1   8 Helmholtz TEMP F: 8.6209E-04 1.0000E-08 5.0000E-01 1.3014E+21
1   9 Helmholtz TEMP F: 1.7760E-03 1.0000E-08 5.0000E-01 1.3014E+21
1  10 Helmholtz TEMP F: 2.1193E-04 1.0000E-08 5.0000E-01 1.3014E+21
1  11 Helmholtz TEMP F: 7.9794E-05 1.0000E-08 5.0000E-01 1.3014E+21
1  12 Helmholtz TEMP F: 9.6693E-06 1.0000E-08 5.0000E-01 1.3014E+21
1  13 Helmholtz TEMP F: 1.3215E-06 1.0000E-08 5.0000E-01 1.3014E+21
1  14 Helmholtz TEMP F: 6.8863E-11 1.0000E-08
5.0000E-01 1.3014E+21
1 Hmholtz TEMP:   13  6.8863E-11  3.9955E+00  1.0000E-08
1  5.0000E-02  3.0169E-03 Heat done

Solving for fluid
1.00000000000000002E-008 p22 1 1
 1 1.00000E-06 NaN NaN NaN 1 Divergence
 2 1.00000E-06 NaN NaN NaN 1 Divergence
 3 1.00000E-06 NaN NaN NaN 1 Divergence
 4 1.00000E-06 NaN NaN NaN 1 Divergence
 5 1.00000E-06 NaN NaN NaN 1 Divergence
 6 1.00000E-06 NaN NaN NaN 1 Divergence
 7 1.00000E-06 NaN NaN NaN 1 Divergence
 8 1.00000E-06 NaN NaN NaN 1 Divergence
 9 1.00000E-06 NaN NaN NaN 1 Divergence
10 1.00000E-06 NaN NaN NaN 1 Divergence
11 1.00000E-06 NaN NaN NaN 1 Divergence
12 1.00000E-06 NaN NaN NaN 1 Divergence
13 1.00000E-06 NaN NaN NaN 1 Divergence
14 1.00000E-06 NaN NaN NaN 1 Divergence
15 1.00000E-06 NaN NaN NaN 1 Divergence
16 1.00000E-06 NaN NaN NaN 1 Divergence
17 1.00000E-06 NaN NaN NaN 1 Divergence
18 1.00000E-06 NaN NaN NaN 1 Divergence
19 1.00000E-06 NaN NaN NaN 1 Divergence
20 1.00000E-06 NaN NaN NaN 1 Divergence
21 1.00000E-06 NaN NaN NaN 1 Divergence
22 1.00000E-06 NaN NaN NaN 1 Divergence
23 1.00000E-06 NaN NaN NaN 1 Divergence
24 1.00000E-06 NaN NaN NaN 1 Divergence
25 1.00000E-06 NaN NaN NaN 1 Divergence
26 1.00000E-06 NaN NaN NaN 1 Divergence
27 1.00000E-06 NaN NaN NaN 1 Divergence
28 1.00000E-06 NaN NaN NaN 1 Divergence
29 1.00000E-06 NaN NaN NaN 1 Divergence
30 1.00000E-06 NaN NaN NaN 1 Divergence
31 1.00000E-06 NaN NaN NaN 1 Divergence
32 1.00000E-06 NaN NaN NaN 1 Divergence
33 1.00000E-06 NaN NaN NaN 1 Divergence
34 1.00000E-06 NaN NaN NaN 1 Divergence
35 1.00000E-06 NaN NaN NaN 1 Divergence
36 1.00000E-06 NaN NaN NaN 1 Divergence
37 1.00000E-06 NaN NaN NaN 1 Divergence
38 1.00000E-06 NaN NaN NaN 1 Divergence
39 1.00000E-06 NaN NaN NaN 1 Divergence
40 1.00000E-06 NaN NaN NaN 1 Divergence
41 1.00000E-06 NaN NaN NaN 1 Divergence
42 1.00000E-06 NaN NaN NaN 1 Divergence
43 1.00000E-06 NaN NaN NaN 1 Divergence
44 1.00000E-06 NaN NaN NaN 1 Divergence
45 1.00000E-06 NaN NaN NaN 1 Divergence
46 1.00000E-06 NaN NaN NaN 1 Divergence
47 1.00000E-06 NaN NaN NaN 1 Divergence
48 1.00000E-06 NaN NaN NaN 1 Divergence
49 1.00000E-06 NaN NaN NaN 1 Divergence
50 1.00000E-06 NaN NaN NaN 1 Divergence
51 1.00000E-06 NaN NaN NaN 1 Divergence
52 1.00000E-06 NaN NaN NaN 1 Divergence
53 1.00000E-06 NaN NaN NaN 1 Divergence
54 1.00000E-06 NaN NaN NaN 1 Divergence

On Thu, Aug 26, 2010 at 3:30 PM, wrote:

> What's the error message you are getting?
>
> It may be the case, I know that I have had issues with settings and conj
> HT before since they are (mainly because of our group) revisiting CHT and
> fixing issues that we request. The latest issue, like I mentioned, was
> the new time stepping scheme and CHT.
>
> Did the error show up during compiling, or during the run in the output
> log?
>
> - Mike
>
> ----- Original Message -----
> From: nek5000-users at lists.mcs.anl.gov
> To: nek5000-users at lists.mcs.anl.gov
> Sent: Thursday, August 26, 2010 2:18:39 PM GMT -06:00 US/Canada Central
> Subject: Re: [Nek5000-users] Temperature gradient at a point
>
> Hi,
>
> Is there any reason why a conjugate heat transfer case should not work
> with IFLOMACH turned on?
> I did modify the uservp to set utrans = 1./temp.
>
> Thanks,
> Pradeep
>
> c-----------------------------------------------------------------------
>       subroutine uservp (ix,iy,iz,ieg)
>       include 'SIZE'
>       include 'TOTAL'
>       include 'NEKUSE'
>
>       if (ifield.eq.1) then
>          utrans = 1./temp
> c        utrans = param(1)
>          udiff  = param(2)
>       else
>          utrans = 1./temp           ! thermal properties
> c        utrans = param(7)
>          udiff  = param(8)
>          if (ieg .gt. nelgv) then   ! properties in the solid
>             udiff  = 0.1*param(8)   ! conductivity
>             utrans = 1.0
>          endif
>       endif
>
>       return
>       end
> c-----------------------------------------------------------------------
>
> On Mon, Aug 23, 2010 at 4:40 PM, Pradeep Rao wrote:
>
>> Thanks Aleks,
>>
>> Will give that a try.
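[An editorial sketch, not from the thread: one plausible cause of the NaNs reported above is that the template useric and userbc in this thread initialize temp = 0, so with IFLOMACH a variable-density uservp that evaluates utrans = 1./temp divides by zero on its first call, and the NaN propagates into the fluid solve. The guard below is hypothetical; the floor value tfloor and the reference temperature of 1.0 are assumptions, not Nek5000 defaults.]

```fortran
c     Hypothetical guard (editorial sketch): keep the 1/T density
c     finite even if temp has not yet been set to a positive value.
      subroutine uservp (ix,iy,iz,ieg)
      include 'SIZE'
      include 'TOTAL'
      include 'NEKUSE'

      tfloor = 1.e-3                       ! assumed floor; problem-dependent
      if (ifield.eq.1) then
         utrans = 1./max(temp,tfloor)      ! density ~ 1/T, guarded
         udiff  = param(2)
      else
         utrans = 1./max(temp,tfloor)      ! thermal properties, guarded
         udiff  = param(8)
         if (ieg .gt. nelgv) then          ! properties in the solid
            udiff  = 0.1*param(8)          ! conductivity
            utrans = 1.0
         endif
      endif
      return
      end
c-----------------------------------------------------------------------
c     The more physical fix: give temperature a nonzero initial value
c     in useric (assumed nondimensional reference temperature of 1.0).
      subroutine useric (ix,iy,iz,ieg)
      include 'SIZE'
      include 'TOTAL'
      include 'NEKUSE'
      ux  =0.0
      uy  =0.0
      uz  =0.0
      temp=1.0
      return
      end
```

[Whether the floor or the initial condition is the right fix depends on the nondimensionalization of the case; the point is only that utrans = 1./temp must never be evaluated at temp = 0.]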
>>
>> On Mon, Aug 23, 2010 at 4:25 PM, wrote:
>>
>>> Hi Pradeep,
>>>
>>> Dependence of conductivity on time and space is not a problem once one
>>> uses non zero p30 in .rea that activates a call to uservp of .usr file
>>>
>>> If you also need a dependence of conductivity on temperature you may
>>> want to consider either using the values from the previous time step
>>> or doing extrapolation in time.
>>>
>>> Best,
>>> Aleks

> _______________________________________________
> Nek5000-users mailing list
> Nek5000-users at lists.mcs.anl.gov
> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users

_______________________________________________
Nek5000-users mailing list
Nek5000-users at lists.mcs.anl.gov
https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users

-- Pradeep C.
Rao Graduate Research Assistant for FT2L (http://www1.mengr.tamu.edu/FT2L/) Department of Mechanical Engineering Texas A&M University College Station, TX 77843-3123 428 Engineering Physics Building (713) 210-9769 -------------- next part -------------- An HTML attachment was scrubbed... URL: From nek5000-users at lists.mcs.anl.gov Fri Aug 27 04:58:46 2010 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Fri, 27 Aug 2010 11:58:46 +0200 Subject: [Nek5000-users] lelx vs nelx Message-ID: <4C778C56.7060303@lav.mavt.ethz.ch> Hi, I am not exactly understanding what is the LELX parameter and how to use it. ( LELX/LELY/LELZ: Maximum number of elements per processor for global fdm solver ) and why sometime in the code there is this check (for example in the turbchannel's userchk or in genbox.f) : if(nelx.gt.lelx .or. nely.gt.lely .or. nelz.gt.lelz) then if(nid.eq.0) write(6,*) 'ABORT: nel_xyz > lel_xyz!' call exitt endif can anyone help me on that ? thank you a lot francesco -------------- next part -------------- An HTML attachment was scrubbed... URL: From nek5000-users at lists.mcs.anl.gov Fri Aug 27 05:50:55 2010 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Fri, 27 Aug 2010 05:50:55 -0500 (CDT) Subject: [Nek5000-users] lelx vs nelx In-Reply-To: <4C778C56.7060303@lav.mavt.ethz.ch> References: <4C778C56.7060303@lav.mavt.ethz.ch> Message-ID: You typically need this only if you are using the fast tensor-product solver with the Pn-Pn-2 method. You can safely set it to 1, and set params 116-118 to 0 in the .rea or .usr file. Paul On Fri, 27 Aug 2010, nek5000-users at lists.mcs.anl.gov wrote: > Hi, > > I am not exactly understanding what is the LELX parameter and how to use it. > ( LELX/LELY/LELZ: Maximum number of elements per processor for global fdm > solver ) > > and why sometime in the code there is this check > (for example in the turbchannel's userchk or in genbox.f) : > > if(nelx.gt.lelx .or. 
nely.gt.lely .or. nelz.gt.lelz) then > if(nid.eq.0) write(6,*) 'ABORT: nel_xyz > lel_xyz!' > call exitt > endif > > can anyone help me on that ? > > > thank you a lot > francesco > From nek5000-users at lists.mcs.anl.gov Fri Aug 27 06:04:38 2010 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Fri, 27 Aug 2010 13:04:38 +0200 Subject: [Nek5000-users] lelx vs nelx In-Reply-To: References: <4C778C56.7060303@lav.mavt.ethz.ch> Message-ID: <4C779BC6.3050403@lav.mavt.ethz.ch> Thank you francesco On 08/27/2010 12:50 PM, nek5000-users at lists.mcs.anl.gov wrote: > > You typically need this only if you are using the fast > tensor-product solver with the Pn-Pn-2 method. > > You can safely set it to 1, and set params 116-118 to 0 > in the .rea or .usr file. > > Paul > > > On Fri, 27 Aug 2010, nek5000-users at lists.mcs.anl.gov wrote: > >> Hi, >> >> I am not exactly understanding what is the LELX parameter and how to >> use it. >> ( LELX/LELY/LELZ: Maximum number of elements per processor for global >> fdm solver ) >> >> and why sometime in the code there is this check >> (for example in the turbchannel's userchk or in genbox.f) : >> >> if(nelx.gt.lelx .or. nely.gt.lely .or. nelz.gt.lelz) then >> if(nid.eq.0) write(6,*) 'ABORT: nel_xyz > lel_xyz!' >> call exitt >> endif >> >> can anyone help me on that ? >> >> >> thank you a lot >> francesco >> > _______________________________________________ > Nek5000-users mailing list > Nek5000-users at lists.mcs.anl.gov > https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users
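
[Editor's note: to make Paul's advice in the lelx-vs-nelx thread concrete, here is a sketch of the corresponding SIZE-file lines. The surrounding layout of a SIZE file varies by case, so treat this as an assumed fragment, not a drop-in; the parameter names lelx/lely/lelz themselves are the ones discussed above.]

```fortran
c     SIZE-file sketch (assumed layout; your SIZE file may differ).
c     lelx/lely/lelz bound nelx/nely/nelz for the global fdm solver.
c     If you do not use the fast tensor-product (Pn-Pn-2) solver,
c     1 is a safe minimal value, per Paul's reply above.
      parameter (lelx=1,lely=1,lelz=1)
```

Remember to also set params 116-118 to 0 in the .rea or .usr file, as Paul notes, so the fast fdm solver is never invoked.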
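
[Editor's note: Paul's `gtx=tx(ix,iy,iz,e)` fragment in the temperature-gradient thread earlier in this archive can be fleshed out as below. This is only a sketch: the common-block name /tgrad/ and the arrays tgx/tgy/tgz are illustrative user names, not part of Nek5000, and recomputing the gradient once per call of userchk is an assumption about where the update belongs. gradm1 and gllel are the routines named in the thread.]

```fortran
c-----------------------------------------------------------------------
c     Compute grad(T) on mesh 1 each step and stash it in a user
c     common block so userbc can read it pointwise.
      subroutine userchk
      include 'SIZE'
      include 'TOTAL'
      common /tgrad/ tgx(lx1,ly1,lz1,lelt)
     $             , tgy(lx1,ly1,lz1,lelt)
     $             , tgz(lx1,ly1,lz1,lelt)

      call gradm1(tgx,tgy,tgz,t)     ! gradient of temperature field t

      return
      end
c-----------------------------------------------------------------------
      subroutine userbc (ix,iy,iz,iside,ieg)
      include 'SIZE'
      include 'TOTAL'
      include 'NEKUSE'
      common /tgrad/ tgx(lx1,ly1,lz1,lelt)
     $             , tgy(lx1,ly1,lz1,lelt)
     $             , tgz(lx1,ly1,lz1,lelt)
      integer e

      e = gllel(ieg)                 ! global elem # -> processor-local
      dtdx = tgx(ix,iy,iz,e)         ! dT/dx at this boundary node
      dtdy = tgy(ix,iy,iz,e)
      dtdz = tgz(ix,iy,iz,e)

      temp = 0.0                     ! ... set temp from dtdx,dtdy,dtdz

      return
      end
c-----------------------------------------------------------------------
```

The key point, as Paul's fragment shows, is that userbc receives the global element number ieg, while arrays filled by gradm1 are indexed by the processor-local element number, hence the gllel mapping.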