From nek5000-users at lists.mcs.anl.gov Sat Feb 1 04:32:02 2014 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Sat, 01 Feb 2014 11:32:02 +0100 Subject: [Nek5000-users] mesh problems (i think) In-Reply-To: References: Message-ID: Hi Josh, Thank you for the suggestion. I was using the format indicated for the reatore2 tool, as suggested on the web pages for curved elements and boundary conditions. Now I have changed it to be exactly the same as that produced by the n2to3 tool, which I used to create base3.rea (and which is different), and made a step forward: now nek reads more information from the .rea file and gives me the following error when I try to run the case: WARNING1 Element mesh mismatch at: i,j,k,ie: 2 2 6 integer Near X = float float float, d: float float float for 6 different integers (2, 5, 8, 11). The subroutine that produces the error is 'vrdsmsh' but I do not understand what the error means.... Best regards, Jacopo On 31/01/2014 20:10, nek5000-users at lists.mcs.anl.gov wrote: > Hi Jacopo, > > I'm not able to look at your files at this moment, but I do have a quick > suggestion. Nek is normally pretty flexible when it comes to the .rea > formatting except for boundary conditions and curve sides. I would > suggest carefully studying the format of the base3.rea file and make > sure the formatting you are using to create pipe.rea is the same. > > - Josh > > > On Fri, Jan 31, 2014 at 11:34 AM, > wrote: > > Dear Neks, > I am trying to set up a case from scratch. > The goal is to create a complex 3d cylindrical mesh; to start I am > trying to build a pipe. > > 1st attempt: > I created a simple 2d 'base.rea' file (by hand) and extruded it with > n2to3 to create 'base3.rea'. This case works in 3d, the outer edges are > curved and the boundary conditions are correct. > > 2nd attempt: > I wrote a piece of code to create a 3d 'pipe.rea'. I tried to reproduce > the case that I had done by hand, but this time without success. > nek runs the case and produces the usual output without giving any > errors, but... > Problems: > - the outer edges are not curved > - the boundary conditions are not imposed correctly > > In both cases I used the same .usr and SIZE files. > > It seemed to me that I had a problem with face numbering, so I tried > changing the number of the curved faces in the pipe.rea file. I tried > with all 6 numbers but still no curved faces. > > I have attached both the 'base3.rea' (which works) and the 'pipe.rea' > (which doesn't). I also attached a picture of the output generated when > running the 'pipe.rea' case and the SIZE and .usr files. > All the attached files can be downloaded from my dropbox folder at this > link: > https://dl.dropboxusercontent.com/u/18689429/attachment.zip > > > Can someone help me understand why 'pipe.rea' does not work?
> > Best regards, > Jacopo > > _______________________________________________ > Nek5000-users mailing list > Nek5000-users at lists.mcs.anl.gov > https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users > > > > > -- > Josh Camp > > "All that is necessary for the triumph of evil is that good men do > nothing" -- Edmund Burke > > > _______________________________________________ > Nek5000-users mailing list > Nek5000-users at lists.mcs.anl.gov > https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users > From nek5000-users at lists.mcs.anl.gov Sat Feb 1 08:44:39 2014 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Sat, 01 Feb 2014 15:44:39 +0100 Subject: [Nek5000-users] mesh problems (i think) In-Reply-To: References: Message-ID: So, I solved one of the two problems: I was really stupidly mistaking faces and edges... The other problem, with boundary conditions, still remains: if I create a pipe with 0 < z < 1 and assign boundary conditions vz = 1 at z = 0 it works fine. If I create the same pipe, simply translated to have -1 < z < 0 and assign boundary conditions vz = 1 at z = -1 it does not work... This is not a problem for me, I simply translate my geometry, but could this be a bug? Best regards, Jacopo On 01/02/2014 11:32, nek5000-users at lists.mcs.anl.gov wrote: > Hi Josh, > > Thank you for the suggestion. > I was using the format indicated to be used for the reatore2 tool, as > sugegsted on the webpages for curved elements and boundary conditions. > Now I have changed it to be exactly the same of that produced by the > n2to3 tool which I have used to create base3.rea (which is different) > and made a step forward: now nek reads some more informations from the > .rea file and gives me the following error when I try to run the case: > > WARNING1 Element mesh mismatch at: > i,j,k,ie: 2 2 6 integer > Near X = float float float, d: float float float > > for 6 differente integers (2, 5, 8, 11). > > The subroutine that produces the error is 'vrdsmsh' but I do not > understand what the error means.... > > Best regards, > Jacopo > > > > On 31/01/2014 20:10, nek5000-users at lists.mcs.anl.gov wrote: >> Hi Jacopo, >> >> I'm not able to look at your files at this moment, but I do have a quick >> suggestion. Nek is normally pretty flexible when it comes to the .rea >> formatting except for boundary conditions and curve sides. I would >> suggest carefully studying the format of the base3.rea file and make >> sure the formatting you are using to create pipe.rea is the same. >> >> - Josh >> >> >> On Fri, Jan 31, 2014 at 11:34 AM, > > wrote: >> >> Dear Neks, >> I am trying to set up a case from scratch. >> The goal is to create a complex 3d cylindrical mesh, to start I am >> trying to build a pipe. >> >> 1st attempt: >> I created a simple 2d 'base.rea' file (by hand) and extruded it with >> n2to3 to create 'base3.rea'. This case works in 3d, the outer >> edges are >> curved and the boundary conditions are correct. >> >> 2nd attempt: >> I wrote a piece of code to create a 3d 'pipe.rea'. I tried to >> reproduce >> the case that I had done by hand but this time without success. >> nek runs the case and produces the usual output without giving any >> errors but... >> Problems: >> - the outer edges are not curved >> - the boundary conditions are not imposed correctly >> >> In both cases I used the same .usr and SIZE files. >> >> It seemed to me that I had a problem with face numbering, so I tried >> changing the number of the curved faces in the pipe.rea file. 
I tried >> with all 6 numbers but still no curved faces. >> >> I have attached both the 'base3.rea' (which works) and the 'pipe.rea' >> (which doesn't). I also attached a picture of the output generated >> when >> running the 'pipe.rea' case and the SIZE and .usr files. >> All the attached files can be downloaded from my dropbox folder at >> this >> link: >> https://dl.dropboxusercontent.com/u/18689429/attachment.zip >> >> >> Can someone help me understand why 'pipe.rea' does not work? >> >> Best regards, >> Jacopo >> >> _______________________________________________ >> Nek5000-users mailing list >> Nek5000-users at lists.mcs.anl.gov >> >> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users >> >> >> >> >> -- >> Josh Camp >> >> "All that is necessary for the triumph of evil is that good men do >> nothing" -- Edmund Burke >> >> >> _______________________________________________ >> Nek5000-users mailing list >> Nek5000-users at lists.mcs.anl.gov >> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users >> > _______________________________________________ > Nek5000-users mailing list > Nek5000-users at lists.mcs.anl.gov > https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users From nek5000-users at lists.mcs.anl.gov Sun Feb 2 11:09:38 2014 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Sun, 2 Feb 2014 11:09:38 -0600 (CST) Subject: [Nek5000-users] mesh problems (i think) In-Reply-To: References: Message-ID: Hi Jacopo, Did you resolve your translation question? Nek5000 certainly supports the type of mesh you are describing. Paul On Sat, 1 Feb 2014, nek5000-users at lists.mcs.anl.gov wrote: > So, I solved one of the two problems: I was really stupidly mistaking faces > and edges... > > The other problem, with boundary conditions, still remains: if I create a > pipe with 0 < z < 1 and assign boundary conditions vz = 1 at z = 0 it works > fine. > If I create the same pipe, simply translated to have -1 < z < 0 and assign > boundary conditions vz = 1 at z = -1 it does not work... > This is not a problem for me, I simply translate my geometry, but could this > be a bug? > > > Best regards, > Jacopo > > > > On 01/02/2014 11:32, nek5000-users at lists.mcs.anl.gov wrote: >> Hi Josh, >> >> Thank you for the suggestion. >> I was using the format indicated to be used for the reatore2 tool, as >> sugegsted on the webpages for curved elements and boundary conditions. >> Now I have changed it to be exactly the same of that produced by the >> n2to3 tool which I have used to create base3.rea (which is different) >> and made a step forward: now nek reads some more informations from the >> .rea file and gives me the following error when I try to run the case: >> >> WARNING1 Element mesh mismatch at: >> i,j,k,ie: 2 2 6 integer >> Near X = float float float, d: float float float >> >> for 6 differente integers (2, 5, 8, 11). >> >> The subroutine that produces the error is 'vrdsmsh' but I do not >> understand what the error means.... >> >> Best regards, >> Jacopo >> >> >> >> On 31/01/2014 20:10, nek5000-users at lists.mcs.anl.gov wrote: >>> Hi Jacopo, >>> >>> I'm not able to look at your files at this moment, but I do have a quick >>> suggestion. Nek is normally pretty flexible when it comes to the .rea >>> formatting except for boundary conditions and curve sides. I would >>> suggest carefully studying the format of the base3.rea file and make >>> sure the formatting you are using to create pipe.rea is the same. 
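
A note on the translation question above: one way to make the vz = 1 inflow assignment insensitive to where the pipe sits along z is to tag the inlet face in userbc by its z coordinate with a small tolerance rather than an exact comparison. The sketch below only illustrates that idea and is not a confirmed explanation of the behaviour reported above; zmin and tol are assumed values, and it presumes the inlet face carries a user ('v') boundary condition in the .rea file so that userbc is actually called there.

      subroutine userbc (ix,iy,iz,iside,ieg)
      include 'SIZE'
      include 'TSTEP'
      include 'INPUT'
      include 'NEKUSE'
      real zmin,tol
      parameter (zmin=-1.0, tol=1.e-8)  ! assumed inlet plane and tolerance
      ux = 0.0
      uy = 0.0
      uz = 0.0
      if (abs(z-zmin).lt.tol) uz = 1.0  ! vz = 1 on the z = -1 inlet face
      return
      end
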
>>> >>> - Josh >>> >>> >>> On Fri, Jan 31, 2014 at 11:34 AM, >> > wrote: >>> >>> Dear Neks, >>> I am trying to set up a case from scratch. >>> The goal is to create a complex 3d cylindrical mesh, to start I am >>> trying to build a pipe. >>> >>> 1st attempt: >>> I created a simple 2d 'base.rea' file (by hand) and extruded it with >>> n2to3 to create 'base3.rea'. This case works in 3d, the outer >>> edges are >>> curved and the boundary conditions are correct. >>> >>> 2nd attempt: >>> I wrote a piece of code to create a 3d 'pipe.rea'. I tried to >>> reproduce >>> the case that I had done by hand but this time without success. >>> nek runs the case and produces the usual output without giving any >>> errors but... >>> Problems: >>> - the outer edges are not curved >>> - the boundary conditions are not imposed correctly >>> >>> In both cases I used the same .usr and SIZE files. >>> >>> It seemed to me that I had a problem with face numbering, so I tried >>> changing the number of the curved faces in the pipe.rea file. I tried >>> with all 6 numbers but still no curved faces. >>> >>> I have attached both the 'base3.rea' (which works) and the 'pipe.rea' >>> (which doesn't). I also attached a picture of the output generated >>> when >>> running the 'pipe.rea' case and the SIZE and .usr files. >>> All the attached files can be downloaded from my dropbox folder at >>> this >>> link: >>> https://dl.dropboxusercontent.com/u/18689429/attachment.zip >>> >>> >>> Can someone help me understand why 'pipe.rea' does not work? >>> >>> Best regards, >>> Jacopo >>> >>> _______________________________________________ >>> Nek5000-users mailing list >>> Nek5000-users at lists.mcs.anl.gov >>> >>> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users >>> >>> >>> >>> >>> -- >>> Josh Camp >>> >>> "All that is necessary for the triumph of evil is that good men do >>> nothing" -- Edmund Burke >>> >>> >>> _______________________________________________ >>> Nek5000-users mailing list >>> Nek5000-users at lists.mcs.anl.gov >>> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users >>> >> _______________________________________________ >> Nek5000-users mailing list >> Nek5000-users at lists.mcs.anl.gov >> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users > _______________________________________________ > Nek5000-users mailing list > Nek5000-users at lists.mcs.anl.gov > https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users > From nek5000-users at lists.mcs.anl.gov Mon Feb 3 10:34:27 2014 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Mon, 03 Feb 2014 17:34:27 +0100 Subject: [Nek5000-users] Torq_calc() position of X0 Message-ID: Hello Neks, I worked around the set_obj() and torq_calc() to figure it out. I set my sey_obj() routine to find the wall. If I am not wrong, the output of the torque_calc() is Total torque, viscous force torque and pressure force torque taken about a point X0 If I don't define X0, what point does nek5000 automatically take ? Thank you, Kamal From nek5000-users at lists.mcs.anl.gov Mon Feb 3 15:17:42 2014 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Mon, 3 Feb 2014 15:17:42 -0600 (CST) Subject: [Nek5000-users] Torq_calc() position of X0 In-Reply-To: References: Message-ID: Hi Kamal, You definitely want to give an array for x0, else you don't know what you'll be referencing in memory. You can turn off the torque output and just get the drag, which I think is what you want. 
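
For later readers of this thread, a minimal userchk sketch of what passing an explicit x0 looks like. It assumes the torq_calc(scale,x0,ifdout,iftout) interface used in the distributed examples and that set_obj() has already defined the wall object, as described above; the scale factor 1.0 and the choice of the origin for x0 are arbitrary.

      subroutine userchk
      include 'SIZE'
      include 'TOTAL'
      real x0(3)
      save x0
      data x0 /3*0.0/   ! reference point for the moments
c     ifdout=.true.  prints the (viscous + pressure) drag,
c     iftout=.false. suppresses the torque output
      call torq_calc(1.0,x0,.true.,.false.)
      return
      end

With iftout set to .false., only the force (drag) lines are written, which matches the suggestion above of turning off the torque output.
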
Paul On Mon, 3 Feb 2014, nek5000-users at lists.mcs.anl.gov wrote: > Hello Neks, > > I worked around the set_obj() and torq_calc() to figure it out. > > I set my sey_obj() routine to find the wall. > > If I am not wrong, the output of the torque_calc() is Total torque, viscous > force torque and pressure force torque taken about a point X0 > > If I don't define X0, what point does nek5000 automatically take ? > > Thank you, > Kamal > _______________________________________________ > Nek5000-users mailing list > Nek5000-users at lists.mcs.anl.gov > https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users > From nek5000-users at lists.mcs.anl.gov Mon Feb 3 15:21:36 2014 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Mon, 3 Feb 2014 22:21:36 +0100 Subject: [Nek5000-users] Torq_calc() position of X0 In-Reply-To: References: Message-ID: Hi Paul, Thanks for the reply. Yes I am looking for the axial viscous force in the whole domain which I guess is nothing but the viscous drag along the z direction. Thanks, Kamal On Feb 3, 2014, at 10:17 PM, nek5000-users at lists.mcs.anl.gov wrote: > > Hi Kamal, > > You definitely want to give an array for x0, else > you don't know what you'll be referencing in memory. > > You can turn off the torque output and just get the > drag, which I think is what you want. > > Paul > > > On Mon, 3 Feb 2014, nek5000-users at lists.mcs.anl.gov wrote: > >> Hello Neks, >> >> I worked around the set_obj() and torq_calc() to figure it out. >> >> I set my sey_obj() routine to find the wall. >> >> If I am not wrong, the output of the torque_calc() is Total torque, viscous force torque and pressure force torque taken about a point X0 >> >> If I don't define X0, what point does nek5000 automatically take ? >> >> Thank you, >> Kamal >> _______________________________________________ >> Nek5000-users mailing list >> Nek5000-users at lists.mcs.anl.gov >> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users >> > _______________________________________________ > Nek5000-users mailing list > Nek5000-users at lists.mcs.anl.gov > https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users From nek5000-users at lists.mcs.anl.gov Tue Feb 4 17:14:39 2014 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Wed, 05 Feb 2014 00:14:39 +0100 Subject: [Nek5000-users] relocation errors on small example? Message-ID: I am trying to set up an example problem with Nek5000. Unfortunately, I'm hitting the notorious: relocation truncated to fit: R_X86_64_PC32 against symbol error with it. I gather from past posts that this is an issue with grid/problem sizes. However, I'm just using eddy_uv at the moment, with minor modifications to bump the grid size up. Diff appended. By removing some (seemingly) unused common blocks for this problem I managed to get it to link with a 192x192x1 grid, but it aborts quickly, complaining that the problem size is too large. I feel like I must be doing something, because 192x192 of 32bit floats is tiny: 144K. Could anyone point me towards what I've messed up, here? Thanks, -tom -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: text/x-diff Size: 4440 bytes Desc: ed-diff URL: From nek5000-users at lists.mcs.anl.gov Tue Feb 4 17:51:18 2014 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Tue, 4 Feb 2014 17:51:18 -0600 Subject: [Nek5000-users] relocation errors on small example? 
In-Reply-To: References: Message-ID: Hi Tom, The variable lx1 in the SIZE file essentially specifies the number of grid points used for _each_ element in the "x" direction (ly1 for "y", lz1 for "z". Note here that by "x", "y", "z", we are really talking about the coordinates defined on the master element, not in physical space). Thus, the total number of grid points you are reserving for each variable (u,v,w, t, etc) is lx1*ly1*lz1*lelt. So for your specification, you are actually attempting to allocate space for a very very large problem. Even if you could allocate that amount of space (as you were able to by removing some variables while compiling), I believe NEK by default is only setup to handle lx1 ~ 30 or below. I'm assuming you are wanting to have 192 points in each direction for your problem? You can either change the number of elements (contained in the .rea file) or vary lx1 in the SIZE file so that lx1*(number of element in each direction) ~= 192. Hopefully this helps! Josh On Tue, Feb 4, 2014 at 5:14 PM, wrote: > I am trying to set up an example problem with Nek5000. Unfortunately, > I'm hitting the notorious: > > relocation truncated to fit: R_X86_64_PC32 against symbol > > error with it. I gather from past posts that this is an issue with > grid/problem sizes. However, I'm just using eddy_uv at the moment, with > minor modifications to bump the grid size up. Diff appended. > > By removing some (seemingly) unused common blocks for this problem I > managed to get it to link with a 192x192x1 grid, but it aborts quickly, > complaining that the problem size is too large. I feel like I must be > doing something, because 192x192 of 32bit floats is tiny: 144K. > > Could anyone point me towards what I've messed up, here? > > Thanks, > > -tom > > > _______________________________________________ > Nek5000-users mailing list > Nek5000-users at lists.mcs.anl.gov > https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users > > -- Josh Camp "All that is necessary for the triumph of evil is that good men do nothing" -- Edmund Burke -------------- next part -------------- An HTML attachment was scrubbed... URL: From nek5000-users at lists.mcs.anl.gov Tue Feb 4 19:50:21 2014 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Wed, 5 Feb 2014 01:50:21 +0000 Subject: [Nek5000-users] relocation errors on small example? In-Reply-To: References: , Message-ID: Hi Tom, Josh is correct --- one other thing to keep in mind is that the memory layout, controlled by the product lx1*ly1*lz1*lelt is the _per_processor layout. So, if you need more memory for your problem, simply adding more processors (or even processes, in some cases) will suffice to meet the memory demands. Suppose for example that you wanted to run a 1 million point problem (which is of modest size, btw). You could do this with lx1=11,ly1=11,lz1=11, lelt=35 and use 32 processors. Net result is an upper bound of the number of elements being 32 x 35, each element of order N=10. (The lx1=11 comes from the fact that you need N+1 points to prescribe a polynomial of degree N.) There is more explanation of the memory issues in the online manual --- I cut and paste some of it below. Paul Per-processor memory requirements for Nek5000 scale roughly as 400 8-byte words per allocated gridpoint. The number of allo- cated gridpoints per processor is nmax=lx1*ly1*lz1*lelt. (For 3D, lz1=ly1=lx1; for 2D, lz1=1, ly1=lx1.) 
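
As a purely illustrative aside on the two explanations above (the numbers below are assumptions for a 2D case, not the eddy_uv defaults, and only the size-determining lines of SIZE are shown): with 16 x 16 elements of order N = 12 one gets about 16*(lx1-1)+1 = 193 points per direction, i.e. roughly the 192x192 resolution being discussed, while the per-field storage stays modest.

      parameter (ldim=2)
      parameter (lx1=13,ly1=lx1,lz1=1)  ! N+1 points per element; lz1=1 in 2D
      parameter (lelt=256,lelv=lelt)    ! elements resident on one processor
      parameter (lelg=256)              ! total elements in the mesh
c     nmax = lx1*ly1*lz1*lelt = 13*13*1*256 = 43,264 gridpoints per field
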
If required for a particular simulation, more memory may be made available by using additional processors. For example, suppose one needed to run a simulation with 6000 elements of order N = 9. To leading order, the total memory requirements would be approximately E*(N+1)^3 points * 400 (wds/pt) * 8 bytes/wd = 6000 * 10^3 * 400 * 8 = 19.2 GB. Assuming there is 400 MB of memory per core available to the user (after accounting for OS requirements), then one could run this simulation with P >= 19,200 MB/(400 MB/proc) = 48 processors. To do so, it would be necessary to set lelt >= 6000/48 = 125. We note two other parameters of interest in the parallel context: lp, the maximum number of processors that can be used, and lelg, an upper bound on the number of elements in the simulation. There is a slight memory penalty associated with these variables, so one generally does not want to have them excessively large. It is common, however, to have lp be as large as anticipated for a given case so that the executable can be run without recompiling on any admissible number of processors (Pmem <= P <= E, where Pmem is the value computed above). ________________________________ From: nek5000-users-bounces at lists.mcs.anl.gov [nek5000-users-bounces at lists.mcs.anl.gov] on behalf of nek5000-users at lists.mcs.anl.gov [nek5000-users at lists.mcs.anl.gov] Sent: Tuesday, February 04, 2014 5:51 PM To: nek5000-users Subject: Re: [Nek5000-users] relocation errors on small example? Hi Tom, The variable lx1 in the SIZE file essentially specifies the number of grid points used for _each_ element in the "x" direction (ly1 for "y", lz1 for "z". Note here that by "x", "y", "z", we are really talking about the coordinates defined on the master element, not in physical space). Thus, the total number of grid points you are reserving for each variable (u,v,w, t, etc) is lx1*ly1*lz1*lelt. So for your specification, you are actually attempting to allocate space for a very very large problem. Even if you could allocate that amount of space (as you were able to by removing some variables while compiling), I believe NEK by default is only setup to handle lx1 ~ 30 or below. I'm assuming you are wanting to have 192 points in each direction for your problem? You can either change the number of elements (contained in the .rea file) or vary lx1 in the SIZE file so that lx1*(number of element in each direction) ~= 192. Hopefully this helps! Josh On Tue, Feb 4, 2014 at 5:14 PM, > wrote: I am trying to set up an example problem with Nek5000. Unfortunately, I'm hitting the notorious: relocation truncated to fit: R_X86_64_PC32 against symbol error with it. I gather from past posts that this is an issue with grid/problem sizes. However, I'm just using eddy_uv at the moment, with minor modifications to bump the grid size up. Diff appended. By removing some (seemingly) unused common blocks for this problem I managed to get it to link with a 192x192x1 grid, but it aborts quickly, complaining that the problem size is too large. I feel like I must be doing something, because 192x192 of 32bit floats is tiny: 144K. Could anyone point me towards what I've messed up, here? Thanks, -tom _______________________________________________ Nek5000-users mailing list Nek5000-users at lists.mcs.anl.gov https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users -- Josh Camp "All that is necessary for the triumph of evil is that good men do nothing" -- Edmund Burke -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nek5000-users at lists.mcs.anl.gov Wed Feb 5 05:59:17 2014 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Wed, 05 Feb 2014 12:59:17 +0100 Subject: [Nek5000-users] velocity projection (p94) for linear simulations Message-ID: Hi Neks, I am running a linear simulation with fixed dt, p93, 94, 95 > 0, filtering and dealiasing. However, it seems velocity projection is not affecting number of Helmholtz iterations which is fixed after a while. I tested for p94=0 and p94>0 and practically got the same number of iterations (and even a slight increase in time for time step when p95>0). However in case of p95=0 and p95>0 I can see the improvement clearly. So, pressure projection is working fine. I need tight tolerances, div=1.e-10, hmh=1.e-11, and reducing Hmholtz iterations can help a lot. Would you please clarify the situation on velocity projection in linear cases? Here is a sample time step : Step1122816, t= 2.8264667E+02, DT= 2.5173018E-04, C= 0.390 6.0349E+04 5.1942E-02 1122816 2.8264667E+02 Perturbation Solve: 1 1122816 Hmholtz VELX: 18 8.3081E-12 1.0343E-01 1.0000E-11 1122816 Hmholtz VELY: 17 8.0312E-12 2.3905E-01 1.0000E-11 ***** 11 alpha: 1.5898E-06 2.6447E-09 1.6940E-09 1.5196E-09 3.6127E-10 8.4097E-10 6.6903E-10 -2.2649E-10 -1.5484E-09 -3.4627E-10 ****** 11 4.0897E-09 1.8192E-10 2.2480E+01 alph12 1122816 U-PRES gmres: 7 9.4727E-11 1.0000E-10 1.8192E-10 6.8312E-03 1.5541E-02 Best, Nima -- Nima Shahriari Linn? Flow Center, Mechanics KTH SE-100 44, Stockholm, Sweden Phone: +46 8 7906876 E-mail: nima at mech.kth.se -------------- next part -------------- An HTML attachment was scrubbed... URL: From nek5000-users at lists.mcs.anl.gov Wed Feb 5 06:09:53 2014 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Wed, 5 Feb 2014 06:09:53 -0600 (CST) Subject: [Nek5000-users] velocity projection (p94) for linear simulations In-Reply-To: References: Message-ID: Hi Nima, At present, projection is not turned on for the perturbation velocity. When we originally set up the perturbation scheme we had envisioned (and occasionally used) as many as 50 perturbation fields at one time -- the thinking was that it would take too much memory to save projection fields for all of those. Right now there is a general sense that we need to restructure the projection code so that it can readily handle the variety of use cases that we've been encountering of late. I'll talk with the developers here and hopefully we can come up with a fix soon. Best, Paul On Wed, 5 Feb 2014, nek5000-users at lists.mcs.anl.gov wrote: > Hi Neks, > > I am running a linear simulation with fixed dt, p93, 94, 95 > 0, > filtering and dealiasing. > However, it seems velocity projection is not affecting number of > Helmholtz iterations which is fixed after a while. > > I tested for p94=0 and p94>0 and practically got the same number of > iterations (and even a slight increase in time for time step when p95>0). > However in case of p95=0 and p95>0 I can see the improvement clearly. > So, pressure projection is working fine. > I need tight tolerances, div=1.e-10, hmh=1.e-11, and reducing Hmholtz > iterations can help a lot. > > Would you please clarify the situation on velocity projection in linear > cases? 
> > Here is a sample time step : > Step1122816, t= 2.8264667E+02, DT= 2.5173018E-04, C= 0.390 6.0349E+04 > 5.1942E-02 > 1122816 2.8264667E+02 Perturbation Solve: 1 > 1122816 Hmholtz VELX: 18 8.3081E-12 1.0343E-01 1.0000E-11 > 1122816 Hmholtz VELY: 17 8.0312E-12 2.3905E-01 1.0000E-11 > ***** 11 alpha: 1.5898E-06 2.6447E-09 1.6940E-09 1.5196E-09 > 3.6127E-10 8.4097E-10 6.6903E-10 -2.2649E-10 -1.5484E-09 -3.4627E-10 > ****** 11 4.0897E-09 1.8192E-10 2.2480E+01 alph12 > 1122816 U-PRES gmres: 7 9.4727E-11 1.0000E-10 1.8192E-10 > 6.8312E-03 1.5541E-02 > > Best, > Nima > > -- > Nima Shahriari > Linn? Flow Center, Mechanics KTH > SE-100 44, Stockholm, Sweden > Phone: +46 8 7906876 > E-mail: nima at mech.kth.se > > From nek5000-users at lists.mcs.anl.gov Wed Feb 5 17:10:10 2014 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Thu, 6 Feb 2014 00:10:10 +0100 Subject: [Nek5000-users] Linearised Navier-Stokes: 2D cylinder flow Message-ID: Hi Nek's, I am trying to run a linearised DNS of the flow over a two-dimensional cylinder at Re=60 for illustration purposes using the ext_cyl example. I am however encountering some troubles... I have been able to compute the unstable steady equilibrium using a selective frequency damping approach without any trouble. However, it is quite a different story when it comes to the linearised DNS. Indeed, the perturbation velocity field blows up quite rapidly and I am not sure why. I have tried other flows (lid-driven cavity, boundary layer, ...) just to make sure I had no problem with my nek install and everything was running fine. I have also tried on three different computers to make sure it was not platform-dependent and it actually blew up every time. I have been using the linearised version of nek quite extensively for the past three years and but never had such problem. Is there something I have missed there? Thanks a lot, JC -- Jean-Christophe Loiseau Homepage -------------- next part -------------- An HTML attachment was scrubbed... URL: From nek5000-users at lists.mcs.anl.gov Wed Feb 5 22:11:15 2014 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Wed, 5 Feb 2014 22:11:15 -0600 (CST) Subject: [Nek5000-users] Linearised Navier-Stokes: 2D cylinder flow In-Reply-To: References: Message-ID: Hi JC, Are you trying to look at the perturbation from the steady cylinder base flow ? Am curious about your boundary conditions (though I recognize that your bc set for the external cylinder flow shouldn't be too different from the boundary layer case)... Just inquiring to get a better idea of what's going on. Paul On Wed, 5 Feb 2014, nek5000-users at lists.mcs.anl.gov wrote: > Hi Nek's, > > I am trying to run a linearised DNS of the flow over a two-dimensional > cylinder at Re=60 for illustration purposes using the ext_cyl example. I am > however encountering some troubles... I have been able to compute the > unstable steady equilibrium using a selective frequency damping approach > without any trouble. However, it is quite a different story when it comes > to the linearised DNS. Indeed, the perturbation velocity field blows up > quite rapidly and I am not sure why. I have tried other flows (lid-driven > cavity, boundary layer, ...) just to make sure I had no problem with my > nek install and everything was running fine. I have also tried on three > different computers to make sure it was not platform-dependent and it > actually blew up every time. 
I have been using the linearised version of > nek quite extensively for the past three years and but never had such > problem. > > Is there something I have missed there? > > Thanks a lot, > JC > > -- > Jean-Christophe Loiseau > Homepage > From nek5000-users at lists.mcs.anl.gov Thu Feb 6 11:19:57 2014 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Thu, 6 Feb 2014 18:19:57 +0100 Subject: [Nek5000-users] Linearised Navier-Stokes: 2D cylinder flow In-Reply-To: References: Message-ID: Hi Paul, I have simply used the ext_cyl.rea and .map files so I guess it is dirichlet boundary condition at the inlet (which I have set to zero in usrbc since I am looking at perturbations), periodic boundary conditions on the sides, no-slip condition on the wall of the cylinder and classical outflow condition at the outlet. I initially thought this could come from the fact I had been using a call to random_number to initialise vxp and vyp. I made sure to use dsavg to ensure that the velocity on each side of the interface between elements has the same value. It did not change anything. I also tried with a hand-made divergence-free perturbation. Did not changed anything either. I am kind of clueless, particularly because it works fine with all of my other cases. Attached are the files I am using if you want to peak a glance at them. Cheers, JC 2014-02-06 5:11 GMT+01:00 : > > Hi JC, > > Are you trying to look at the perturbation from the > steady cylinder base flow ? > > Am curious about your boundary conditions (though I > recognize that your bc set for the external cylinder > flow shouldn't be too different from the boundary > layer case)... > > Just inquiring to get a better idea of what's going on. > > Paul > > > > On Wed, 5 Feb 2014, nek5000-users at lists.mcs.anl.gov wrote: > > Hi Nek's, >> >> I am trying to run a linearised DNS of the flow over a two-dimensional >> cylinder at Re=60 for illustration purposes using the ext_cyl example. I >> am >> however encountering some troubles... I have been able to compute the >> unstable steady equilibrium using a selective frequency damping approach >> without any trouble. However, it is quite a different story when it comes >> to the linearised DNS. Indeed, the perturbation velocity field blows up >> quite rapidly and I am not sure why. I have tried other flows (lid-driven >> cavity, boundary layer, ...) just to make sure I had no problem with my >> nek install and everything was running fine. I have also tried on three >> different computers to make sure it was not platform-dependent and it >> actually blew up every time. I have been using the linearised version of >> nek quite extensively for the past three years and but never had such >> problem. >> >> Is there something I have missed there? >> >> Thanks a lot, >> JC >> >> -- >> Jean-Christophe Loiseau >> Homepage >> >> _______________________________________________ > Nek5000-users mailing list > Nek5000-users at lists.mcs.anl.gov > https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users > -- Jean-Christophe Loiseau Homepage -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: 2DCYLINDER_LDNS.tar.gz Type: application/x-gzip Size: 570328 bytes Desc: not available URL: From nek5000-users at lists.mcs.anl.gov Fri Feb 7 07:00:05 2014 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Fri, 07 Feb 2014 15:00:05 +0200 Subject: [Nek5000-users] using pretex In-Reply-To: References: Message-ID: Dear NEK's, I am trying to modify a mesh of a given example (turJet) using pretex. After the stage of choosing a name for the session, I enter "1" and then "jet" for the name of previous session. At this stage the program crashes with a message: "Error reading parameters from file". Anyone can help? Thanks in advance, Barak From nek5000-users at lists.mcs.anl.gov Mon Feb 10 15:09:15 2014 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Mon, 10 Feb 2014 23:09:15 +0200 Subject: [Nek5000-users] pretex problems In-Reply-To: References: Message-ID: Dear Paul, I am trying to modify a mesh of a given example (turJet) using pretex. After the stage of choosing a name for the session, I enter "1" and then "jet" for the name of previous session. At this stage the program crashes with a message: "Error reading parameters from file". Thanks in advance, Barak From nek5000-users at lists.mcs.anl.gov Wed Feb 12 05:08:18 2014 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Wed, 12 Feb 2014 12:08:18 +0100 Subject: [Nek5000-users] Vorticity as initial condition Message-ID: Hi, I have a vorticity field expression and I would like to use it as an initial condition in nek5000 (It a 2D case). Is there a way to do that directly in nek5000, or do I have to compute the inverse laplacian manually before with some other program? Thanks in advance! -- Ismael From nek5000-users at lists.mcs.anl.gov Wed Feb 12 06:47:46 2014 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Wed, 12 Feb 2014 06:47:46 -0600 (CST) Subject: [Nek5000-users] Vorticity as initial condition In-Reply-To: References: Message-ID: Hi Ismael, There is no automatic way to do this in Nek. However, a few years ago I coded up a little streamfunction-vorticity solver inside userchk and tested it on a (one, only) 2D benchmark. I've just posted this example to the repo as a demo on how to do this. It might shed some light on what you need to do. Be aware, however, that my demo had periodic boundary conditions, thus there were no issues related to setting the boundary values, etc. That is a slightly deeper topic, but perhaps this will get you started. Paul PS - the new demo directory is examples/eddy_psi_omega On Wed, 12 Feb 2014, nek5000-users at lists.mcs.anl.gov wrote: > Hi, > I have a vorticity field expression and I would like to use it as an > initial condition in nek5000 (It a 2D case). Is there a way to do that > directly in nek5000, or do I have to compute the inverse laplacian manually > before with some other program? > > Thanks in advance! 
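
For reference, the streamfunction-vorticity route mentioned above amounts to: given the prescribed 2D vorticity omega, solve the Poisson problem

    del^2 psi = -omega

for the streamfunction psi (with boundary conditions appropriate to the case -- trivial in a fully periodic demo), and then recover a divergence-free velocity from

    u = d(psi)/dy ,   v = -d(psi)/dx .

These are just the standard 2D relations; for how this is actually wired into userchk, the eddy_psi_omega example referenced above is the place to look.
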
> -- > Ismael > _______________________________________________ > Nek5000-users mailing list > Nek5000-users at lists.mcs.anl.gov > https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users > From nek5000-users at lists.mcs.anl.gov Wed Feb 12 06:56:26 2014 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Wed, 12 Feb 2014 13:56:26 +0100 Subject: [Nek5000-users] Vorticity as initial condition In-Reply-To: References: Message-ID: (Wed, Feb 12, 2014 at 06:47:46AM -0600) nek5000-users at lists.mcs.anl.gov : > I've just posted this example to the repo as a demo on how to do > this. It might shed some light on what you need to do. Be aware, > however, that my demo had periodic boundary conditions, thus there > were no issues related to setting the boundary values, etc. That > is a slightly deeper topic, but perhaps this will get you started. > > Paul > > PS - the new demo directory is examples/eddy_psi_omega Hi Paul, Thanks a lot for your help, I'll try to start from that! Best regards, -- Ismael -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 198 bytes Desc: Digital signature URL: From nek5000-users at lists.mcs.anl.gov Wed Feb 12 07:33:58 2014 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Wed, 12 Feb 2014 19:03:58 +0530 Subject: [Nek5000-users] Rayleigh-Benard convection (RBC) with free slip boundary conditions Message-ID: Hi Neks, I am new to nek5000. I am trying to use nek for studying RBC problem with free slip boundary conditions. But in the so far tried runs I got only zero values of total energy as well as Nusselt number which is not expected. Can anyone please help me in this regard. Following are the important files which I used for the test: ***********.box file*********** input.rea 3 2 Box -9 -9 3 0 9 1 0 9 1 0 0.2 0.8 1.0 P ,P ,P ,P ,SYM,SYM P ,P ,P ,P ,SYM,SYM ***************.usr file****************** subroutine rayleigh_const include 'SIZE' include 'INPUT' common /rayleigh_r/ rapr,ta2pr Pr = param(2) eps = param(75) Rc = param(76) Ta2 = param(77) Ra = Rc*(1.0+eps) rapr = ra*pr ta2pr = ta2*pr return end c----------------------------------------------------------------------- subroutine uservp (ix,iy,iz,ieg) include 'SIZE' include 'TOTAL' include 'NEKUSE' udiff = 0 utrans = 0 return end c----------------------------------------------------------------------- subroutine userf (ix,iy,iz,ieg) include 'SIZE' include 'TOTAL' include 'NEKUSE' common /rayleigh_r/ rapr,ta2pr ffx = 0.0 ! 
uy*Ta2Pr ffy = 0.0 !- ux*Ta2Pr ffz = temp*rapr c write(6,*) ffy,temp,rapr,'ray',ieg return end c----------------------------------------------------------------------- subroutine userq (ix,iy,iz,ieg) include 'SIZE' include 'TOTAL' include 'NEKUSE' qvol = 0.0 source = 0.0 return end c----------------------------------------------------------------------- subroutine userbc (ix,iy,iz,iside,ieg) include 'SIZE' include 'TSTEP' include 'INPUT' include 'NEKUSE' common /rayleigh_r/ rapr,ta2pr ux = 0.0 !sin(2.8284*x)*cos(3.1416*z) uy = 0.0 !sin(2.8284*x)*cos(3.1416*z) uz = 0.0 !sin(2.8284*x)*sin(3.1416*z) temp = 1.0 - z return end c----------------------------------------------------------------------- subroutine useric (ix,iy,iz,ieg) include 'SIZE' include 'TOTAL' include 'NEKUSE' integer idum save idum data idum /99/ c ran = ran1(idum) c The totally ad-hoc random number generator below is preferable c to the above for the simple reason that it gives the same i.c. c independent of the number of processors, which is important for c code verification. ran = 2.e4*(ieg+x*sin(y)) + 1.e4*ix*iy + 1.e5*ix*iz ran = 1.e3*sin(ran) ran = 1.e3*sin(ran) ran = cos(ran) amp = 0.1 temp = 1-z + ran*amp*(1-z)*z*x*(9-x)*y*(9-y) ux=sin(2.8284*x)*cos(3.1416*z) uy=sin(2.8284*y)*cos(3.1416*z) uz=-2.8284/3.1416*cos(2.8284*x)*sin(3.1416*z) &-2.8284/3.1416*cos(2.8284*y)*sin(3.1416*z) return end c----------------------------------------------------------------------- subroutine usrdat return end c----------------------------------------------------------------------- subroutine usrdat3 return end c----------------------------------------------------------------------- subroutine usrdat2 include 'SIZE' include 'TOTAL' common /rayleigh_r/ rapr,ta2pr call rayleigh_const param(66) = 4 param(67) = 4 return end c----------------------------------------------------------------------- subroutine userchk include 'SIZE' include 'TOTAL' parameter(lt=lx1*ly1*lz1*lelv) common /scrns/ tz(lx1*ly1*lz1*lelt) common /rayleigh_r/ rapr,ta2pr common /rayleigh_c/ Ra,Ra1,Ra2,Ra3,Prnum,Ta2,Ek1,Ek2,Ek3,ck real Ek0,Ek,t12,t23 real Ey,Ex,Ez,Et,Ewt,tx,ty,tz save Ek0,Ek,t12,t23 ra=param(76) prnum=param(2) if (nid.eq.0.and.istep.eq.0) open(unit = 79, file = 'glob.dat', $ status = 'new') n = nx1*ny1*nz1*nelv Ek0 = Ek Ewt = (glsc3(vz,t,bm1,n)/volvm1) Ey = (glsc3(vy,vy,bm1,n)/volvm1) Ex = (glsc3(vx,vx,bm1,n)/volvm1) Ez = (glsc3(vz,vz,bm1,n)/volvm1) Et = (glsc3(t,t,bm1,n)/volvm1) Ek = Ex+Ey+Ez sigma = 1.e-4 de = abs(Ek-Ek0)/dt c if (nid.eq.0) write(79,6) istep,time,ra,prnum,Ek,Ex,Ey,Ez, c $ Et,Ewt c 6 format(i7,1p9e13.5) c n = nx1*ny1*nz1*nelv umax = glmax(vx,n) vmax = glmax(vy,n) wmax = glmax(vz,n) if (istep.eq.0) then ifxyo = .true. ! For VisIt do i=1,n tz(i) = t(i,1,1,1,1) + zm1(i,1,1,1) - 1.0 enddo call outpost(vx,vy,vz,pr,tz,' ') else ifxyo = .false. endif c if (nid.eq.0) write(79,6) if (nid.eq.0) write(79,1)istep,time,ra,prnum,umax,vmax,wmax, $ Ex,Ey,Ez,Ek,Et,Ewt 1 format(i9,1p12e14.6) return end ****************.rea file ****************************** ****** PARAMETERS ***** 2.60000 NEKTON VERSION 3 DIMENSIONAL RUN 103 PARAMETERS FOLLOW 1.00000 p1 DENSITY 0.20000 p2 VISCOS 0. 0. 0. 0. 1.00000 p7 RHOCP 1.00000 p8 CONDUCT 0. 0. p10 FINTIME 75000.00 p11 NSTEPS -.0050000 p12 DT 0. p13 IOCOMM 0. p14 IOTIME 1000.000 p15 IOSTEP 0. p16 PSSOLVER 0. 
0.250000E-01 p18 GRID -1.00000 p19 INTYPE 4.0000 p20 NORDER 0.000000E-06 p21 DIVERGENCE 0.000000E-08 p22 HELMHOLTZ 0 p23 NPSCAL 1.000000E-03 p24 TOLREL 1.000000E-03 p25 TOLABS 2.01000 p26 COURANT/NTAU 2.00000 p27 TORDER 0. p28 TORDER: mesh velocity (0: p28=p27) 0. p29 magnetic visc if > 0, = -1/Rm if < 0 0. p30 > 0 ==> properties set in uservp() 0. p31 NPERT: #perturbation modes 0. p32 #BCs in re2 file, if > 0 0. 0. 0. 0. 0. 0. 0. 0. 0. p41 1-->multiplicative SEMG 0. p42 0=gmres/1=pcg 0. p43 0=semg/1=schwarz 0. p44 0=E-based/1=A-based prec. 0. p45 Relaxation factor for DTFS 0. p46 reserved 0. p47 vnu: mesh matieral prop 0. 0. 0. 0. 0. p52 IOHIS 0. 0. p54 1,2,3-->fixed flow rate dir=x,y,z 0. p55 vol.flow rate (p54>0) or Ubar (p54<0) 0. 0. 0. 0. p59 !=0 --> full Jac. eval. for each el. 0. p60 !=0 --> init. velocity to small nonzero 0. 0. p62 >0 --> force byte_swap for output 0. p63 =8 --> force 8-byte output 0. p64 =1 --> perturbation restart 0. p65 #iofiles (eg, 0 or 64); <0 --> sep. dirs 4.00000 p66 output : <0=ascii, else binary 4.00000 p67 restart: <0=ascii, else binary 0. p68 iastep: freq for avg_all 0. 0. 0. 0. 0. 0. p74 verbose Helmholtz 0. p75 epsilon for RB criticality (in .usr) 1000. p76 Rayleigh number (in .usr) 0. 0. 0. 0. 0. 0. 0. 0. p84 !=0 --> sets initial timestep if p12>0 0. p85 dt ratio if p84 !=0, for timesteps>0 0. p86 reserved 0. 0. 0. 0. 0. 0. 40.0000 p93 Number of previous pressure solns saved 5.00000 p94 start projecting velocity after p94 step 5.00000 p95 start projecting pressure after p95 step 0. 0. 0. 3.00000 p99 dealiasing: <0--> off/3--> old/4--> new 0. 0. p101 No. of additional filter modes 1.00000 p102 Dump out divergence at each time step .000 p103 weight of stabilizing filter (.01) 4 Lines of passive scalar data follows2 CONDUCT; 2RHOCP 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 13 LOGICAL SWITCHES FOLLOW T IFFLOW T IFHEAT T IFTRAN T T F F F F F F F F F IFNAV & IFADVC (convection in P.S. fields) F F T T T T T T T T T T IFTMSH (IF mesh for this field is T mesh) F IFAXIS F IFSTRS F IFSPLIT F IFMGRID F IFMODEL F IFKEPS F IFMVBD F IFCHAR 8.00000 8.00000 -0.500000 -4.00000 XFAC,YFAC,XZERO,YZERO *** MESH DATA *** 243 3 243 NEL,NDIM,NELV *****************SIZE ************************************* C Dimension file to be included C C HCUBE array dimensions C parameter (ldim=3) parameter (lx1=12,ly1=lx1,lz1=lx1,lelt=31,lelv=lelt) parameter (lxd=18,lyd=lxd,lzd=lxd) parameter (lelx=1,lely=1,lelz=1) parameter (lzl=3 + 2*(ldim-3)) parameter (lx2=lx1-2) parameter (ly2=ly1-2) parameter (lz2=lz1-2) parameter (lx3=lx1) parameter (ly3=ly1) parameter (lz3=lz1) parameter (lp = 8) parameter (lelg = 243) c c parameter (lpelv=lelv,lpelt=lelt,lpert=3) ! perturbation c parameter (lpx1=lx1,lpy1=ly1,lpz1=lz1) ! array sizes c parameter (lpx2=lx2,lpy2=ly2,lpz2=lz2) c parameter (lpelv=1,lpelt=1,lpert=1) ! perturbation parameter (lpx1=1,lpy1=1,lpz1=1) ! array sizes parameter (lpx2=1,lpy2=1,lpz2=1) c c parameter (lbelv=lelv,lbelt=lelt) ! MHD c parameter (lbx1=lx1,lby1=ly1,lbz1=lz1) ! array sizes c parameter (lbx2=lx2,lby2=ly2,lbz2=lz2) c parameter (lbelv=1,lbelt=1) ! MHD parameter (lbx1=1,lby1=1,lbz1=1) ! array sizes parameter (lbx2=1,lby2=1,lbz2=1) C LX1M=LX1 when there are moving meshes; =1 otherwise parameter (lx1m=1,ly1m=1,lz1m=1) parameter (ldimt= 3) ! 
3 passive scalars + T parameter (ldimt1=ldimt+1) parameter (ldimt3=ldimt+3) c c Note: In the new code, LELGEC should be about sqrt(LELG) c PARAMETER (LELGEC = 1) PARAMETER (LXYZ2 = 1) PARAMETER (LXZ21 = 1) PARAMETER (LMAXV=LX1*LY1*LZ1*LELV) PARAMETER (LMAXT=LX1*LY1*LZ1*LELT) PARAMETER (LMAXP=LX2*LY2*LZ2*LELV) PARAMETER (LXZ=LX1*LZ1) PARAMETER (LORDER=3) PARAMETER (MAXOBJ=4,MAXMBR=LELT*6) PARAMETER (lhis=100) ! # of pts a proc reads from hpts.in ! Note: lhis*np > npoints in hpts.in C C Common Block Dimensions C PARAMETER (LCTMP0 =2*LX1*LY1*LZ1*LELT) PARAMETER (LCTMP1 =4*LX1*LY1*LZ1*LELT) C C The parameter LVEC controls whether an additional 42 field arrays C are required for Steady State Solutions. If you are not using C Steady State, it is recommended that LVEC=1. C PARAMETER (LVEC=1) C C Uzawa projection array dimensions C parameter (mxprev = 20) parameter (lgmres = 20) C C Split projection array dimensions C parameter(lmvec = 1) parameter(lsvec = 1) parameter(lstore=lmvec*lsvec) c c NONCONFORMING STUFF c parameter (maxmor = lelt) C C Array dimensions C COMMON/DIMN/NELV,NELT,NX1,NY1,NZ1,NX2,NY2,NZ2 $,NX3,NY3,NZ3,NDIM,NFIELD,NPERT,NID $,NXD,NYD,NZD c automatically added by makenek parameter(lxo = lx1) ! max output grid size (lxo>=lx1) c automatically added by makenek parameter(lpart = 1 ) ! max number of particles c automatically added by makenek parameter (lfdm=0) ! == 1 for fast diagonalization method c automatically added by makenek integer ax1,ay1,az1,ax2,ay2,az2 parameter (ax1=lx1,ay1=ly1,az1=lz1,ax2=lx2,ay2=ly2,az2=lz2) ! running averages c automatically added by makenek parameter (lxs=1,lys=lxs,lzs=(lxs-1)*(ldim-2)+1) !New Pressure Preconditioner With thanks and regards Prosenjit -------------- next part -------------- An HTML attachment was scrubbed... URL: From nek5000-users at lists.mcs.anl.gov Wed Feb 12 08:05:23 2014 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Wed, 12 Feb 2014 08:05:23 -0600 (CST) Subject: [Nek5000-users] Rayleigh-Benard convection (RBC) with free slip boundary conditions In-Reply-To: References: Message-ID: Hi Prosenjit, There are several 2D RB examples that would be a good starting point. For these, there is also a companion document here: http://www.mcs.anl.gov/~fischer/nek5000/examples.pdf (See section 3). Paul On Wed, 12 Feb 2014, nek5000-users at lists.mcs.anl.gov wrote: > Hi Neks, > I am new to nek5000. I am trying to use nek for studying RBC problem > with free slip boundary conditions. > But in the so far tried runs I got only zero values of total energy as well > as Nusselt number which is not expected. Can anyone please help me in this > regard. 
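
A point of comparison with the 2D RB examples mentioned above (written from memory, so please verify against the ray cases and section 3 of that document): in a genbox .box file the boundary-condition lines are given per field, the first for velocity and the second for temperature. A free-slip Rayleigh-Benard box is therefore often sketched along these lines, with 'SYM' giving the free-slip walls for the velocity and lower-case 't' letting userbc impose the wall temperature:

input.rea
3
2
Box
-9 -9  3
0  9  1
0  9  1
0  0.2 0.8 1.0
P  ,P  ,P  ,P  ,SYM,SYM
P  ,P  ,P  ,P  ,t  ,t

If the temperature line is left as SYM,SYM on the top and bottom walls, the temp = 1 - z condition in userbc would not be applied there, which is one (unverified) candidate for why the energies come out at zero -- but the distributed examples, not this sketch, are the thing to check against.
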
> > Following are the important files which I used for the test: > > ***********.box file*********** > input.rea > 3 > 2 > Box > -9 -9 3 > 0 9 1 > 0 9 1 > 0 0.2 0.8 1.0 > P ,P ,P ,P ,SYM,SYM > P ,P ,P ,P ,SYM,SYM > > ***************.usr file****************** > > subroutine rayleigh_const > > include 'SIZE' > include 'INPUT' > > common /rayleigh_r/ rapr,ta2pr > > Pr = param(2) > eps = param(75) > Rc = param(76) > Ta2 = param(77) > Ra = Rc*(1.0+eps) > > rapr = ra*pr > ta2pr = ta2*pr > > return > end > c----------------------------------------------------------------------- > subroutine uservp (ix,iy,iz,ieg) > include 'SIZE' > include 'TOTAL' > include 'NEKUSE' > > udiff = 0 > utrans = 0 > > return > end > c----------------------------------------------------------------------- > subroutine userf (ix,iy,iz,ieg) > include 'SIZE' > include 'TOTAL' > include 'NEKUSE' > > common /rayleigh_r/ rapr,ta2pr > > ffx = 0.0 ! uy*Ta2Pr > ffy = 0.0 !- ux*Ta2Pr > ffz = temp*rapr > c write(6,*) ffy,temp,rapr,'ray',ieg > > return > end > c----------------------------------------------------------------------- > subroutine userq (ix,iy,iz,ieg) > include 'SIZE' > include 'TOTAL' > include 'NEKUSE' > > qvol = 0.0 > source = 0.0 > return > end > c----------------------------------------------------------------------- > subroutine userbc (ix,iy,iz,iside,ieg) > include 'SIZE' > include 'TSTEP' > include 'INPUT' > include 'NEKUSE' > common /rayleigh_r/ rapr,ta2pr > > ux = 0.0 !sin(2.8284*x)*cos(3.1416*z) > uy = 0.0 !sin(2.8284*x)*cos(3.1416*z) > uz = 0.0 !sin(2.8284*x)*sin(3.1416*z) > temp = 1.0 - z > > return > end > c----------------------------------------------------------------------- > subroutine useric (ix,iy,iz,ieg) > include 'SIZE' > include 'TOTAL' > include 'NEKUSE' > integer idum > save idum > data idum /99/ > > c ran = ran1(idum) > > c The totally ad-hoc random number generator below is preferable > c to the above for the simple reason that it gives the same i.c. > c independent of the number of processors, which is important for > c code verification. 
> > ran = 2.e4*(ieg+x*sin(y)) + 1.e4*ix*iy + 1.e5*ix*iz > ran = 1.e3*sin(ran) > ran = 1.e3*sin(ran) > ran = cos(ran) > amp = 0.1 > > temp = 1-z + ran*amp*(1-z)*z*x*(9-x)*y*(9-y) > > ux=sin(2.8284*x)*cos(3.1416*z) > uy=sin(2.8284*y)*cos(3.1416*z) > uz=-2.8284/3.1416*cos(2.8284*x)*sin(3.1416*z) > &-2.8284/3.1416*cos(2.8284*y)*sin(3.1416*z) > > return > end > c----------------------------------------------------------------------- > subroutine usrdat > return > end > c----------------------------------------------------------------------- > subroutine usrdat3 > return > end > c----------------------------------------------------------------------- > subroutine usrdat2 > include 'SIZE' > include 'TOTAL' > > common /rayleigh_r/ rapr,ta2pr > > call rayleigh_const > > > param(66) = 4 > param(67) = 4 > > return > end > c----------------------------------------------------------------------- > subroutine userchk > include 'SIZE' > include 'TOTAL' > parameter(lt=lx1*ly1*lz1*lelv) > > common /scrns/ tz(lx1*ly1*lz1*lelt) > common /rayleigh_r/ rapr,ta2pr > common /rayleigh_c/ Ra,Ra1,Ra2,Ra3,Prnum,Ta2,Ek1,Ek2,Ek3,ck > > > real Ek0,Ek,t12,t23 > real Ey,Ex,Ez,Et,Ewt,tx,ty,tz > > save Ek0,Ek,t12,t23 > ra=param(76) > prnum=param(2) > if (nid.eq.0.and.istep.eq.0) open(unit = 79, file = 'glob.dat', > $ status = 'new') > n = nx1*ny1*nz1*nelv > Ek0 = Ek > > Ewt = (glsc3(vz,t,bm1,n)/volvm1) > Ey = (glsc3(vy,vy,bm1,n)/volvm1) > Ex = (glsc3(vx,vx,bm1,n)/volvm1) > Ez = (glsc3(vz,vz,bm1,n)/volvm1) > Et = (glsc3(t,t,bm1,n)/volvm1) > Ek = Ex+Ey+Ez > > sigma = 1.e-4 > de = abs(Ek-Ek0)/dt > > c if (nid.eq.0) write(79,6) istep,time,ra,prnum,Ek,Ex,Ey,Ez, > c $ Et,Ewt > c 6 format(i7,1p9e13.5) > > c n = nx1*ny1*nz1*nelv > umax = glmax(vx,n) > vmax = glmax(vy,n) > wmax = glmax(vz,n) > > if (istep.eq.0) then > ifxyo = .true. ! For VisIt > do i=1,n > tz(i) = t(i,1,1,1,1) + zm1(i,1,1,1) - 1.0 > enddo > call outpost(vx,vy,vz,pr,tz,' ') > else > ifxyo = .false. > endif > c if (nid.eq.0) write(79,6) > if (nid.eq.0) write(79,1)istep,time,ra,prnum,umax,vmax,wmax, > $ Ex,Ey,Ez,Ek,Et,Ewt > 1 format(i9,1p12e14.6) > > return > end > > ****************.rea file ****************************** > ****** PARAMETERS ***** > 2.60000 NEKTON VERSION > 3 DIMENSIONAL RUN > 103 PARAMETERS FOLLOW > 1.00000 p1 DENSITY > 0.20000 p2 VISCOS > 0. > 0. > 0. > 0. > 1.00000 p7 RHOCP > 1.00000 p8 CONDUCT > 0. > 0. p10 FINTIME > 75000.00 p11 NSTEPS > -.0050000 p12 DT > 0. p13 IOCOMM > 0. p14 IOTIME > 1000.000 p15 IOSTEP > 0. p16 PSSOLVER > 0. > 0.250000E-01 p18 GRID > -1.00000 p19 INTYPE > 4.0000 p20 NORDER > 0.000000E-06 p21 DIVERGENCE > 0.000000E-08 p22 HELMHOLTZ > 0 p23 NPSCAL > 1.000000E-03 p24 TOLREL > 1.000000E-03 p25 TOLABS > 2.01000 p26 COURANT/NTAU > 2.00000 p27 TORDER > 0. p28 TORDER: mesh velocity (0: p28=p27) > 0. p29 magnetic visc if > 0, = -1/Rm if < 0 > 0. p30 > 0 ==> properties set in uservp() > 0. p31 NPERT: #perturbation modes > 0. p32 #BCs in re2 file, if > 0 > 0. > 0. > 0. > 0. > 0. > 0. > 0. > 0. > 0. p41 1-->multiplicative SEMG > 0. p42 0=gmres/1=pcg > 0. p43 0=semg/1=schwarz > 0. p44 0=E-based/1=A-based prec. > 0. p45 Relaxation factor for DTFS > 0. p46 reserved > 0. p47 vnu: mesh matieral prop > 0. > 0. > 0. > 0. > 0. p52 IOHIS > 0. > 0. p54 1,2,3-->fixed flow rate dir=x,y,z > 0. p55 vol.flow rate (p54>0) or Ubar (p54<0) > 0. > 0. > 0. > 0. p59 !=0 --> full Jac. eval. for each el. > 0. p60 !=0 --> init. velocity to small nonzero > 0. > 0. p62 >0 --> force byte_swap for output > 0. p63 =8 --> force 8-byte output > 0. 
p64 =1 --> perturbation restart > 0. p65 #iofiles (eg, 0 or 64); <0 --> sep. dirs > 4.00000 p66 output : <0=ascii, else binary > 4.00000 p67 restart: <0=ascii, else binary > 0. p68 iastep: freq for avg_all > 0. > 0. > 0. > 0. > 0. > 0. p74 verbose Helmholtz > 0. p75 epsilon for RB criticality (in .usr) > 1000. p76 Rayleigh number (in .usr) > 0. > 0. > 0. > 0. > 0. > 0. > 0. > 0. p84 !=0 --> sets initial timestep if p12>0 > 0. p85 dt ratio if p84 !=0, for timesteps>0 > 0. p86 reserved > 0. > 0. > 0. > 0. > 0. > 0. > 40.0000 p93 Number of previous pressure solns saved > 5.00000 p94 start projecting velocity after p94 step > 5.00000 p95 start projecting pressure after p95 step > 0. > 0. > 0. > 3.00000 p99 dealiasing: <0--> off/3--> old/4--> new > 0. > 0. p101 No. of additional filter modes > 1.00000 p102 Dump out divergence at each time step > .000 p103 weight of stabilizing filter (.01) > 4 Lines of passive scalar data follows2 CONDUCT; 2RHOCP > 1.00000 1.00000 1.00000 1.00000 1.00000 > 1.00000 1.00000 1.00000 1.00000 > 1.00000 1.00000 1.00000 1.00000 1.00000 > 1.00000 1.00000 1.00000 1.00000 > 13 LOGICAL SWITCHES FOLLOW > T IFFLOW > T IFHEAT > T IFTRAN > T T F F F F F F F F F IFNAV & IFADVC (convection in P.S. > fields) > F F T T T T T T T T T T IFTMSH (IF mesh for this field is T > mesh) > F IFAXIS > F IFSTRS > F IFSPLIT > F IFMGRID > F IFMODEL > F IFKEPS > F IFMVBD > F IFCHAR > 8.00000 8.00000 -0.500000 -4.00000 > XFAC,YFAC,XZERO,YZERO > *** MESH DATA *** > 243 3 243 NEL,NDIM,NELV > > *****************SIZE ************************************* > C Dimension file to be included > C > C HCUBE array dimensions > C > parameter (ldim=3) > parameter (lx1=12,ly1=lx1,lz1=lx1,lelt=31,lelv=lelt) > parameter (lxd=18,lyd=lxd,lzd=lxd) > parameter (lelx=1,lely=1,lelz=1) > > parameter (lzl=3 + 2*(ldim-3)) > > parameter (lx2=lx1-2) > parameter (ly2=ly1-2) > parameter (lz2=lz1-2) > parameter (lx3=lx1) > parameter (ly3=ly1) > parameter (lz3=lz1) > > parameter (lp = 8) > parameter (lelg = 243) > c > c parameter (lpelv=lelv,lpelt=lelt,lpert=3) ! perturbation > c parameter (lpx1=lx1,lpy1=ly1,lpz1=lz1) ! array sizes > c parameter (lpx2=lx2,lpy2=ly2,lpz2=lz2) > c > parameter (lpelv=1,lpelt=1,lpert=1) ! perturbation > parameter (lpx1=1,lpy1=1,lpz1=1) ! array sizes > parameter (lpx2=1,lpy2=1,lpz2=1) > c > c parameter (lbelv=lelv,lbelt=lelt) ! MHD > c parameter (lbx1=lx1,lby1=ly1,lbz1=lz1) ! array sizes > c parameter (lbx2=lx2,lby2=ly2,lbz2=lz2) > c > parameter (lbelv=1,lbelt=1) ! MHD > parameter (lbx1=1,lby1=1,lbz1=1) ! array sizes > parameter (lbx2=1,lby2=1,lbz2=1) > > C LX1M=LX1 when there are moving meshes; =1 otherwise > parameter (lx1m=1,ly1m=1,lz1m=1) > parameter (ldimt= 3) ! 3 passive scalars + T > parameter (ldimt1=ldimt+1) > parameter (ldimt3=ldimt+3) > c > c Note: In the new code, LELGEC should be about sqrt(LELG) > c > PARAMETER (LELGEC = 1) > PARAMETER (LXYZ2 = 1) > PARAMETER (LXZ21 = 1) > > PARAMETER (LMAXV=LX1*LY1*LZ1*LELV) > PARAMETER (LMAXT=LX1*LY1*LZ1*LELT) > PARAMETER (LMAXP=LX2*LY2*LZ2*LELV) > PARAMETER (LXZ=LX1*LZ1) > PARAMETER (LORDER=3) > PARAMETER (MAXOBJ=4,MAXMBR=LELT*6) > PARAMETER (lhis=100) ! # of pts a proc reads from hpts.in > ! Note: lhis*np > npoints in hpts.in > C > C Common Block Dimensions > C > PARAMETER (LCTMP0 =2*LX1*LY1*LZ1*LELT) > PARAMETER (LCTMP1 =4*LX1*LY1*LZ1*LELT) > C > C The parameter LVEC controls whether an additional 42 field arrays > C are required for Steady State Solutions. If you are not using > C Steady State, it is recommended that LVEC=1. 
> C > PARAMETER (LVEC=1) > C > C Uzawa projection array dimensions > C > parameter (mxprev = 20) > parameter (lgmres = 20) > C > C Split projection array dimensions > C > parameter(lmvec = 1) > parameter(lsvec = 1) > parameter(lstore=lmvec*lsvec) > c > c NONCONFORMING STUFF > c > parameter (maxmor = lelt) > C > C Array dimensions > C > COMMON/DIMN/NELV,NELT,NX1,NY1,NZ1,NX2,NY2,NZ2 > $,NX3,NY3,NZ3,NDIM,NFIELD,NPERT,NID > $,NXD,NYD,NZD > > c automatically added by makenek > parameter(lxo = lx1) ! max output grid size (lxo>=lx1) > > c automatically added by makenek > parameter(lpart = 1 ) ! max number of particles > > c automatically added by makenek > parameter (lfdm=0) ! == 1 for fast diagonalization method > > c automatically added by makenek > integer ax1,ay1,az1,ax2,ay2,az2 > parameter (ax1=lx1,ay1=ly1,az1=lz1,ax2=lx2,ay2=ly2,az2=lz2) ! running > averages > > c automatically added by makenek > parameter (lxs=1,lys=lxs,lzs=(lxs-1)*(ldim-2)+1) !New Pressure > Preconditioner > > > > With thanks and regards > Prosenjit > From nek5000-users at lists.mcs.anl.gov Mon Feb 10 15:44:21 2014 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Mon, 10 Feb 2014 15:44:21 -0600 (CST) Subject: [Nek5000-users] pretex problems In-Reply-To: References: Message-ID: Hi Barak, Prenek doesn't support the .re2 format. Unfortunately, that case was set up outside of ANL so I don't have the full history or starting point. I've managed to convert it to a format that can be read by both prex and postx and attach that file here. Paul On Mon, 10 Feb 2014, nek5000-users at lists.mcs.anl.gov wrote: > Dear Paul, > > I am trying to modify a mesh of a given example (turJet) using pretex. > After the stage of choosing a name for the session, I enter "1" and then > "jet" for the name of previous session. At this stage the program crashes > with a message: "Error reading parameters from file". > > Thanks in advance, > Barak > > > > _______________________________________________ > Nek5000-users mailing list > Nek5000-users at lists.mcs.anl.gov > https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users > -------------- next part -------------- A non-text attachment was scrubbed... Name: z.rea.gz Type: application/octet-stream Size: 323401 bytes Desc: URL: From nek5000-users at lists.mcs.anl.gov Wed Feb 12 07:09:58 2014 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Wed, 12 Feb 2014 18:39:58 +0530 Subject: [Nek5000-users] Vorticity as initial condition In-Reply-To: References: Message-ID: In a 2-d case involving a vortex dipole, I have used Biot-Savart law to calculate the velocity. This is very inefficient of course, and I could only run it in serial. If you want to see, it is here https://code.google.com/p/cfdlab/source/browse/trunk/nek5000/vortex_dipole/ See initcond.usr and the function usrchk. praveen On Wed, Feb 12, 2014 at 6:26 PM, wrote: > (Wed, Feb 12, 2014 at 06:47:46AM -0600) nek5000-users at lists.mcs.anl.gov : > > I've just posted this example to the repo as a demo on how to do > > this. It might shed some light on what you need to do. Be aware, > > however, that my demo had periodic boundary conditions, thus there > > were no issues related to setting the boundary values, etc. That > > is a slightly deeper topic, but perhaps this will get you started. > > > > Paul > > > > PS - the new demo directory is examples/eddy_psi_omega > > Hi Paul, > Thanks a lot for your help, I'll try to start from that! 
> > Best regards, > -- > Ismael > > _______________________________________________ > Nek5000-users mailing list > Nek5000-users at lists.mcs.anl.gov > https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nek5000-users at lists.mcs.anl.gov Wed Feb 12 11:23:13 2014 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Wed, 12 Feb 2014 18:23:13 +0100 Subject: [Nek5000-users] Vorticity as initial condition In-Reply-To: References: Message-ID: (Wed, Feb 12, 2014 at 06:39:58PM +0530) nek5000-users at lists.mcs.anl.gov : > In a 2-d case involving a vortex dipole, I have used Biot-Savart law to > calculate the velocity. This is very inefficient of course, and I could > only run it in serial. If you want to see, it is here > > https://code.google.com/p/cfdlab/source/browse/trunk/nek5000/vortex_dipole/ > > See initcond.usr and the function usrchk. Hi, I had a try with your code, and it looks very strange with your example case (A lot of noise). However it looks perfect with the one that I want, which is only radial vorticity, so thank you very much for your help! Best regards, -- Ismael -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 198 bytes Desc: Digital signature URL: From nek5000-users at lists.mcs.anl.gov Wed Feb 12 11:34:34 2014 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Wed, 12 Feb 2014 18:34:34 +0100 Subject: [Nek5000-users] Vorticity as initial condition In-Reply-To: References: Message-ID: (Wed, Feb 12, 2014 at 06:23:13PM +0100) nek5000-users at lists.mcs.anl.gov : > (Wed, Feb 12, 2014 at 06:39:58PM +0530) nek5000-users at lists.mcs.anl.gov : > > In a 2-d case involving a vortex dipole, I have used Biot-Savart law to > > calculate the velocity. This is very inefficient of course, and I could > > only run it in serial. If you want to see, it is here > > > > https://code.google.com/p/cfdlab/source/browse/trunk/nek5000/vortex_dipole/ > > > > See initcond.usr and the function usrchk. > > Hi, > > I had a try with your code, and it looks very strange with your example > case (A lot of noise). However it looks perfect with the one that I want, > which is only radial vorticity, so thank you very much for your help! Sorry, I spoke too fast: I was looking twice at the same image... I have strange behavior of the initcond routine: you can see here the initial vorticity and the vorticity -> velocity -> vorticity. http://www.normalesup.org/~bouya/temp/vorticity.png They don't look the same at all, even if one can more or less "guess" the calculated value in the middle. Did you have the same kind of issues? Did I miss something? (I tried to follow the steps in the README file) Thanks! Best regards, -- Ismael -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 198 bytes Desc: Digital signature URL: From nek5000-users at lists.mcs.anl.gov Thu Feb 13 01:32:38 2014 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Thu, 13 Feb 2014 13:02:38 +0530 Subject: [Nek5000-users] Vorticity as initial condition In-Reply-To: References: Message-ID: Hello Ishmael The code uses a kernel regularization technique to avoid the singularity in the integrand. This has a parameter delta; try to change it and see what happens. 
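For reference, the regularization being described here is the usual vortex-blob smoothing of the 2-D Biot-Savart kernel; the exact kernel coded in initcond.usr may differ in detail, but the generic form is

    u(x,y) = 1/(2*pi) * integral of  (-(y-y'), (x-x')) / ( (x-x')^2 + (y-y')^2 + delta^2 )  * omega(x',y') dx' dy'

so that delta -> 0 recovers the singular Biot-Savart kernel, while a larger delta smooths the reconstructed velocity near strong vorticity at some cost in accuracy.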
If you use the P(n)-P(n-2) scheme, then you could integrate on the pressure mesh. This should avoid the singularity problem and you dont need the regularization. This may be a better approach. Best praveen On Wed, Feb 12, 2014 at 11:04 PM, wrote: > (Wed, Feb 12, 2014 at 06:23:13PM +0100) nek5000-users at lists.mcs.anl.gov : > > (Wed, Feb 12, 2014 at 06:39:58PM +0530) nek5000-users at lists.mcs.anl.gov: > > > In a 2-d case involving a vortex dipole, I have used Biot-Savart law to > > > calculate the velocity. This is very inefficient of course, and I could > > > only run it in serial. If you want to see, it is here > > > > > > > https://code.google.com/p/cfdlab/source/browse/trunk/nek5000/vortex_dipole/ > > > > > > See initcond.usr and the function usrchk. > > > > Hi, > > > > I had a try with your code, and it looks very strange with your example > > case (A lot of noise). However it looks perfect with the one that I want, > > which is only radial vorticity, so thank you very much for your help! > > Sorry, I spoke too fast: I was looking twice at the same image... > I have strange behavior of the initcond routine: you can see here the > initial vorticity and the vorticity -> velocity -> vorticity. > http://www.normalesup.org/~bouya/temp/vorticity.png > > They don't look the same at all, even if one can more or less "guess" the > calculated value in the middle. Did you have the same kind of issues? > Did I miss something? (I tried to follow the steps in the README file) > > Thanks! > > Best regards, > -- > Ismael > > _______________________________________________ > Nek5000-users mailing list > Nek5000-users at lists.mcs.anl.gov > https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nek5000-users at lists.mcs.anl.gov Thu Feb 13 04:40:44 2014 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Thu, 13 Feb 2014 11:40:44 +0100 Subject: [Nek5000-users] Vorticity as initial condition In-Reply-To: References: Message-ID: Hi Praveen (Thu, Feb 13, 2014 at 01:02:38PM +0530) nek5000-users at lists.mcs.anl.gov : > This is very inefficient of course, and I could > only run it in serial. It turns out finally that I missed that in your first message, and now it works perfectly, thanks a lot! Best regards, -- Ismael -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 198 bytes Desc: Digital signature URL: From nek5000-users at lists.mcs.anl.gov Thu Feb 13 05:55:52 2014 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Thu, 13 Feb 2014 17:25:52 +0530 Subject: [Nek5000-users] Vorticity as initial condition In-Reply-To: References: Message-ID: On Thu, Feb 13, 2014 at 4:10 PM, wrote: > Hi Praveen > > (Thu, Feb 13, 2014 at 01:02:38PM +0530) nek5000-users at lists.mcs.anl.gov : > > This is very inefficient of course, and I could > > only run it in serial. > > It turns out finally that I missed that in your first message, and now it > works perfectly, thanks a lot! > > Best regards, > -- > Ismael > Ok. But it is good to still vary delta and see how the vorticity changes. You can compute error in recomputed vorticity and try to find a good value of delta. Also due to roundoff errors, it may be good to enforce continuity of computed velocity. 
Looking at the eddy_psi_omega example, I think this is done by call dsavg(vx) call dsavg(vy) Best praveen -------------- next part -------------- An HTML attachment was scrubbed... URL: From nek5000-users at lists.mcs.anl.gov Thu Feb 13 06:10:31 2014 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Thu, 13 Feb 2014 12:10:31 +0000 Subject: [Nek5000-users] Building mass and stiffness matrices Message-ID: Dear users and developers, I am using NEK5000 for a 3d unsteady simulation with mixed Dirichlet and periodic bc. For the post-processing (a POD-based dynamics) I need the global mass and stiffness matrices as built by NEK5000 on my mesh. I would also need the matrix having as entries (phi_i, grad(phi_j)) where phi are the basis polynomials. Is it possible to have NEK build these matrices and then save them on file? The ultimate goal is to import them on a PETSc program to perform some algebraic manipulations. I already found a way to import the simulations' results. Surfing the code, I have found some 1-d routines, but I don't know how to extend them to my needs. Thank you in advance for any help or hint. Best regards, Giuseppe From nek5000-users at lists.mcs.anl.gov Thu Feb 13 06:34:46 2014 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Thu, 13 Feb 2014 06:34:46 -0600 (CST) Subject: [Nek5000-users] Building mass and stiffness matrices In-Reply-To: References: Message-ID: Hi Giuseppe, Nek uses a completely matrix-free approach. The standard way to generate a matrix in this case is to start passing in unit column vectors, e_j (the jth column of the Identity matrix) and saving the result a_j, the jth column of A. For sparse matrices there is a fair amount of bookeeping required so that you store only the nonzeros. (I normally use compressed sparse row format to do so.) The mass matrix is diagonal and therefore readily retrieved. How large is your problem (i.e., how many elements and what is the order, lxl) ? Note that the number of nonzeros scales like E*(lx1^6). Paul On Thu, 13 Feb 2014, nek5000-users at lists.mcs.anl.gov wrote: > Dear users and developers, > I am using NEK5000 for a 3d unsteady simulation with mixed Dirichlet and > periodic bc. For the post-processing (a POD-based dynamics) I need the global > mass and stiffness matrices as built by NEK5000 on my mesh. I would also need > the matrix having as entries (phi_i, grad(phi_j)) where phi are the basis > polynomials. Is it possible to have NEK build these matrices and then save > them on file? The ultimate goal is to import them on a PETSc program to > perform some algebraic manipulations. I already found a way to import the > simulations' results. Surfing the code, I have found some 1-d routines, but > I don't know how to extend them to my needs. Thank you in advance for any > help or hint. > Best regards, > Giuseppe > > _______________________________________________ > Nek5000-users mailing list > Nek5000-users at lists.mcs.anl.gov > https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users > From nek5000-users at lists.mcs.anl.gov Thu Feb 13 07:04:19 2014 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Thu, 13 Feb 2014 13:04:19 +0000 Subject: [Nek5000-users] Building mass and stiffness matrices In-Reply-To: References: Message-ID: Hi Paul, thank you for your answer, now I understand why I couldn't find the matrices in the code. 
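To put a number on the storage estimate above: for a mesh of E = 3000 elements at lx1 = 10 (the size mentioned below), E*lx1^6 = 3000 * 10^6 = 3.e9 nonzeros, i.e. about 24 GB of 8-byte reals per matrix before the CSR index arrays are added. A minimal sketch of the column-probing idea, with the operator application left as a placeholder (in practice it would be whatever matrix-free product is already available for the operator of interest):

      subroutine get_col(aj,ej,j,n)
c     Sketch only: recover column j of an operator A by applying it to
c     the j-th unit vector, as described above.  apply_A is a
c     placeholder name, not an actual Nek5000 routine.
      real aj(n),ej(n)
      call rzero(ej,n)          ! ej = 0  (rzero is a standard Nek utility)
      ej(j) = 1.0
      call apply_A(aj,ej)       ! aj = A*ej = j-th column of A
c     ...scan aj for nonzeros and append them to the CSR arrays...
      return
      end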
For the mass matrix I think it is clear how to proceed, but how could I do the same for the stiffness matrix and the convective matrix? My problem consists in about 3000 elements, with lx1=10, but I plan to move to 6000 or 9000 elements. I will now work on a subroutine for this task, then I will come back if I find other issues. Giuseppe Il giorno 13/feb/2014, alle ore 13:34, ha scritto: > > Hi Giuseppe, > > Nek uses a completely matrix-free approach. The standard way to generate > a matrix in this case is to start passing in unit column vectors, e_j > (the jth column of the Identity matrix) and saving the result a_j, the > jth column of A. For sparse matrices there is a fair amount of bookeeping required so that you store only the nonzeros. (I normally > use compressed sparse row format to do so.) > > The mass matrix is diagonal and therefore readily retrieved. > > How large is your problem (i.e., how many elements and what is the > order, lxl) ? > > Note that the number of nonzeros scales like E*(lx1^6). > > Paul > > On Thu, 13 Feb 2014, nek5000-users at lists.mcs.anl.gov wrote: > >> Dear users and developers, >> I am using NEK5000 for a 3d unsteady simulation with mixed Dirichlet and >> periodic bc. For the post-processing (a POD-based dynamics) I need the global >> mass and stiffness matrices as built by NEK5000 on my mesh. I would also need >> the matrix having as entries (phi_i, grad(phi_j)) where phi are the basis >> polynomials. Is it possible to have NEK build these matrices and then save >> them on file? The ultimate goal is to import them on a PETSc program to >> perform some algebraic manipulations. I already found a way to import the >> simulations' results. Surfing the code, I have found some 1-d routines, but >> I don't know how to extend them to my needs. Thank you in advance for any >> help or hint. >> Best regards, >> Giuseppe >> >> _______________________________________________ >> Nek5000-users mailing list >> Nek5000-users at lists.mcs.anl.gov >> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users >> > _______________________________________________ > Nek5000-users mailing list > Nek5000-users at lists.mcs.anl.gov > https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users From nek5000-users at lists.mcs.anl.gov Thu Feb 13 07:35:02 2014 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Thu, 13 Feb 2014 07:35:02 -0600 (CST) Subject: [Nek5000-users] Building mass and stiffness matrices In-Reply-To: References: Message-ID: Hi Giuseppe, Just so you're aware, 3000 elements with lx1=10 in 3D would entail 3 billion nonzeros for the stiffness and convective matrices, each, so 24 GB files for each matrix (actually, 48 GB, because you need the CSR pointers). Presumably you would need to do something clever about parallel i/o. How many processors are you using? Paul On Thu, 13 Feb 2014, nek5000-users at lists.mcs.anl.gov wrote: > Hi Paul, > thank you for your answer, now I understand why I couldn't find the matrices in the code. > For the mass matrix I think it is clear how to proceed, but how could I do the same for the stiffness matrix and the convective matrix? > My problem consists in about 3000 elements, with lx1=10, but I plan to move to 6000 or 9000 elements. > I will now work on a subroutine for this task, then I will come back if I find other issues. > Giuseppe > > > > > Il giorno 13/feb/2014, alle ore 13:34, > ha scritto: > >> >> Hi Giuseppe, >> >> Nek uses a completely matrix-free approach. 
The standard way to generate >> a matrix in this case is to start passing in unit column vectors, e_j >> (the jth column of the Identity matrix) and saving the result a_j, the >> jth column of A. For sparse matrices there is a fair amount of bookeeping required so that you store only the nonzeros. (I normally >> use compressed sparse row format to do so.) >> >> The mass matrix is diagonal and therefore readily retrieved. >> >> How large is your problem (i.e., how many elements and what is the >> order, lxl) ? >> >> Note that the number of nonzeros scales like E*(lx1^6). >> >> Paul >> >> On Thu, 13 Feb 2014, nek5000-users at lists.mcs.anl.gov wrote: >> >>> Dear users and developers, >>> I am using NEK5000 for a 3d unsteady simulation with mixed Dirichlet and >>> periodic bc. For the post-processing (a POD-based dynamics) I need the global >>> mass and stiffness matrices as built by NEK5000 on my mesh. I would also need >>> the matrix having as entries (phi_i, grad(phi_j)) where phi are the basis >>> polynomials. Is it possible to have NEK build these matrices and then save >>> them on file? The ultimate goal is to import them on a PETSc program to >>> perform some algebraic manipulations. I already found a way to import the >>> simulations' results. Surfing the code, I have found some 1-d routines, but >>> I don't know how to extend them to my needs. Thank you in advance for any >>> help or hint. >>> Best regards, >>> Giuseppe >>> >>> _______________________________________________ >>> Nek5000-users mailing list >>> Nek5000-users at lists.mcs.anl.gov >>> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users >>> >> _______________________________________________ >> Nek5000-users mailing list >> Nek5000-users at lists.mcs.anl.gov >> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users > > _______________________________________________ > Nek5000-users mailing list > Nek5000-users at lists.mcs.anl.gov > https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users > From nek5000-users at lists.mcs.anl.gov Thu Feb 13 09:25:47 2014 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Thu, 13 Feb 2014 15:25:47 +0000 Subject: [Nek5000-users] Building mass and stiffness matrices In-Reply-To: References: Message-ID: Hi Paul, I am aware of the size of the stiffness and convective matrices. For the moment 48 Gb wouldn't be an issue, and I think parallel i/o can be avoided, since I need to compute and save such matrices only once, then they would be loaded and manipulated in parallel by the PETSc routines. Anyway I know that probably is a hard task, and I'm exploring alternative paths. I am running on 120 cores. Thanks again. Giuseppe Il giorno 13/feb/2014, alle ore 14:35, ha scritto: > > Hi Giuseppe, > > Just so you're aware, 3000 elements with lx1=10 in 3D > would entail 3 billion nonzeros for the stiffness and > convective matrices, each, so 24 GB files for each matrix > (actually, 48 GB, because you need the CSR pointers). > > Presumably you would need to do something clever about > parallel i/o. > > How many processors are you using? > > Paul > > > > > On Thu, 13 Feb 2014, nek5000-users at lists.mcs.anl.gov wrote: > >> Hi Paul, >> thank you for your answer, now I understand why I couldn't find the matrices in the code. >> For the mass matrix I think it is clear how to proceed, but how could I do the same for the stiffness matrix and the convective matrix? >> My problem consists in about 3000 elements, with lx1=10, but I plan to move to 6000 or 9000 elements. 
>> I will now work on a subroutine for this task, then I will come back if I find other issues. >> Giuseppe >> >> >> >> >> Il giorno 13/feb/2014, alle ore 13:34, >> ha scritto: >> >>> >>> Hi Giuseppe, >>> >>> Nek uses a completely matrix-free approach. The standard way to generate >>> a matrix in this case is to start passing in unit column vectors, e_j >>> (the jth column of the Identity matrix) and saving the result a_j, the >>> jth column of A. For sparse matrices there is a fair amount of bookeeping required so that you store only the nonzeros. (I normally >>> use compressed sparse row format to do so.) >>> >>> The mass matrix is diagonal and therefore readily retrieved. >>> >>> How large is your problem (i.e., how many elements and what is the >>> order, lxl) ? >>> >>> Note that the number of nonzeros scales like E*(lx1^6). >>> >>> Paul >>> >>> On Thu, 13 Feb 2014, nek5000-users at lists.mcs.anl.gov wrote: >>> >>>> Dear users and developers, >>>> I am using NEK5000 for a 3d unsteady simulation with mixed Dirichlet and >>>> periodic bc. For the post-processing (a POD-based dynamics) I need the global >>>> mass and stiffness matrices as built by NEK5000 on my mesh. I would also need >>>> the matrix having as entries (phi_i, grad(phi_j)) where phi are the basis >>>> polynomials. Is it possible to have NEK build these matrices and then save >>>> them on file? The ultimate goal is to import them on a PETSc program to >>>> perform some algebraic manipulations. I already found a way to import the >>>> simulations' results. Surfing the code, I have found some 1-d routines, but >>>> I don't know how to extend them to my needs. Thank you in advance for any >>>> help or hint. >>>> Best regards, >>>> Giuseppe >>>> >>>> _______________________________________________ >>>> Nek5000-users mailing list >>>> Nek5000-users at lists.mcs.anl.gov >>>> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users >>>> >>> _______________________________________________ >>> Nek5000-users mailing list >>> Nek5000-users at lists.mcs.anl.gov >>> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users >> >> _______________________________________________ >> Nek5000-users mailing list >> Nek5000-users at lists.mcs.anl.gov >> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users >> > _______________________________________________ > Nek5000-users mailing list > Nek5000-users at lists.mcs.anl.gov > https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users From nek5000-users at lists.mcs.anl.gov Mon Feb 17 23:36:17 2014 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Mon, 17 Feb 2014 23:36:17 -0600 (CST) Subject: [Nek5000-users] Building mass and stiffness matrices In-Reply-To: References: Message-ID: Hi Giuseppe, It occurs to me that the right way to address your problem is to dump out the unassembled matrices, which are all block diagonal, with full blocks. Then, in addition, it's easy to write out the matrix that assembles the submatrices into the full sparse matrix. That part should be relatively easy to handle in a framework that is designed to work with the global index set. (Nek doesn't deal with the global indices and for the size matrix you want, in parallel, it wouldn't be easy to generate them.) So, basically, Nek would produce A_L = block_diag {A^e} _{e=1}^E and Boolean assembly matrix Q such that A = Q^T A_L Q The elemental matrices, A^e, are completely full. 
Thus, for N=9, corresponding to 10 x 10 x 10 = 1000 points in a given element, you would have 1 million nonzeros in each matrix. (For undeformed geometries, some of the matrices are sparser.) The matrix Q^T is rectangular and consists of columns of the identity matrix. (See, e.g., Deville, F., & Mund, 2002). It probably wouldn't take much effort to code up the output routines for this plus some matlab code to demo how to assemble the stiffness matrix. Paul On Thu, 13 Feb 2014, nek5000-users at lists.mcs.anl.gov wrote: > Dear users and developers, > I am using NEK5000 for a 3d unsteady simulation with mixed Dirichlet and > periodic bc. For the post-processing (a POD-based dynamics) I need the global > mass and stiffness matrices as built by NEK5000 on my mesh. I would also need > the matrix having as entries (phi_i, grad(phi_j)) where phi are the basis > polynomials. Is it possible to have NEK build these matrices and then save > them on file? The ultimate goal is to import them on a PETSc program to > perform some algebraic manipulations. I already found a way to import the > simulations' results. Surfing the code, I have found some 1-d routines, but > I don't know how to extend them to my needs. Thank you in advance for any > help or hint. > Best regards, > Giuseppe > > _______________________________________________ > Nek5000-users mailing list > Nek5000-users at lists.mcs.anl.gov > https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users > From nek5000-users at lists.mcs.anl.gov Tue Feb 18 04:24:08 2014 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Tue, 18 Feb 2014 11:24:08 +0100 Subject: [Nek5000-users] Visco-elastic Models Message-ID: Hi Neks, 1) Is there a subroutine which allows me to find the position of maximum velocity along a cross-section (pipe) ? 2) Is it possible to simulate a visco-elastic fluid using NEK5000 ? for example the Oldroyd-B model. Thanks, Kamal From nek5000-users at lists.mcs.anl.gov Tue Feb 18 01:51:15 2014 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Tue, 18 Feb 2014 08:51:15 +0100 Subject: [Nek5000-users] Building mass and stiffness matrices In-Reply-To: References: Message-ID: Hi Paul, It is possible to perform the online phase of rom approach in nek5000? After the computation of pod basis, for example, The reduced stiffness matrix will be of form Z^T A Z, Where A is the stiffness matrix of full discretization and parameter ind., Z is the matrix obtain columnwise by pod basis. That implies to do not extract the stiffness matrix, but to perform a new directed computation of reduced system inside nek5000 and select the parameter by datafile to obtain the online snapshot. In this case the reduced stiffness matrix will be , of course dense, but cheaper. However , it is needed to work around in order to have ah matrix component and perform pointwise the tensor product z^t a z ( maybe with routine mxm). Maybe with new rb-hybrid approch it is possible to extend on the A constructed by block refering each one to a physical domain. Best regard Davide Il marted? 18 febbraio 2014, ha scritto: > > Hi Giuseppe, > > It occurs to me that the right way to address your > problem is to dump out the unassembled matrices, which > are all block diagonal, with full blocks. Then, in addition, > it's easy to write out the matrix that assembles the submatrices > into the full sparse matrix. That part should be relatively > easy to handle in a framework that is designed to work with > the global index set. 
(Nek doesn't deal with the global > indices and for the size matrix you want, in parallel, it > wouldn't be easy to generate them.) > > So, basically, Nek would produce > > A_L = block_diag {A^e} _{e=1}^E > > and Boolean assembly matrix Q such that > > A = Q^T A_L Q > > The elemental matrices, A^e, are completely full. Thus, for > N=9, corresponding to 10 x 10 x 10 = 1000 points in a given > element, you would have 1 million nonzeros in each matrix. > (For undeformed geometries, some of the matrices are sparser.) > The matrix Q^T is rectangular and consists of columns of the > identity matrix. (See, e.g., Deville, F., & Mund, 2002). > It probably wouldn't take much effort to code up the output > routines for this plus some matlab code to demo how to > assemble the stiffness matrix. > > Paul > > > On Thu, 13 Feb 2014, nek5000-users at lists.mcs.anl.gov wrote: > > Dear users and developers, >> I am using NEK5000 for a 3d unsteady simulation with mixed Dirichlet and >> periodic bc. For the post-processing (a POD-based dynamics) I need the >> global >> mass and stiffness matrices as built by NEK5000 on my mesh. I would also >> need >> the matrix having as entries (phi_i, grad(phi_j)) where phi are the basis >> polynomials. Is it possible to have NEK build these matrices and then >> save >> them on file? The ultimate goal is to import them on a PETSc program to >> perform some algebraic manipulations. I already found a way to import the >> simulations' results. Surfing the code, I have found some 1-d routines, >> but >> I don't know how to extend them to my needs. Thank you in advance for any >> help or hint. >> Best regards, >> Giuseppe >> >> _______________________________________________ >> Nek5000-users mailing list >> Nek5000-users at lists.mcs.anl.gov >> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users >> >> _______________________________________________ > Nek5000-users mailing list > Nek5000-users at lists.mcs.anl.gov > https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users > -- Davide Baroli, PhD student MOX - Modeling and Scientific Computing Mathematics Dept. Politecnico di Milano Via Bonardi 9, 20133 Milano, Italy -------------- next part -------------- An HTML attachment was scrubbed... URL: From nek5000-users at lists.mcs.anl.gov Tue Feb 18 13:35:43 2014 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Tue, 18 Feb 2014 13:35:43 -0600 (CST) Subject: [Nek5000-users] Building mass and stiffness matrices In-Reply-To: References: Message-ID: Hi Davide, Yes - this is also possible. In fact, it is really quite easy -- nek is very good at computing weighted inner products. Paul On Tue, 18 Feb 2014, nek5000-users at lists.mcs.anl.gov wrote: > Hi Paul, > It is possible to perform the online phase of rom approach in nek5000? > After the computation of pod basis, for example, > The reduced stiffness matrix will be of form Z^T A Z, Where A is the > stiffness matrix of full discretization and parameter ind., Z is the matrix > obtain columnwise by pod basis. That implies to do not extract > the stiffness matrix, but to perform a new directed computation of reduced > system inside nek5000 and select the parameter by datafile to obtain the > online snapshot. > In this case the reduced stiffness matrix will be , of course dense, but > cheaper. However , it is needed to work around in order to have ah matrix > component and perform pointwise the tensor product z^t a z ( maybe with > routine mxm). 
Maybe with new rb-hybrid approch it is possible to extend on > the A constructed by block refering each one to a physical domain. > Best regard > Davide > > > Il marted? 18 febbraio 2014, ha scritto: > >> >> Hi Giuseppe, >> >> It occurs to me that the right way to address your >> problem is to dump out the unassembled matrices, which >> are all block diagonal, with full blocks. Then, in addition, >> it's easy to write out the matrix that assembles the submatrices >> into the full sparse matrix. That part should be relatively >> easy to handle in a framework that is designed to work with >> the global index set. (Nek doesn't deal with the global >> indices and for the size matrix you want, in parallel, it >> wouldn't be easy to generate them.) >> >> So, basically, Nek would produce >> >> A_L = block_diag {A^e} _{e=1}^E >> >> and Boolean assembly matrix Q such that >> >> A = Q^T A_L Q >> >> The elemental matrices, A^e, are completely full. Thus, for >> N=9, corresponding to 10 x 10 x 10 = 1000 points in a given >> element, you would have 1 million nonzeros in each matrix. >> (For undeformed geometries, some of the matrices are sparser.) >> The matrix Q^T is rectangular and consists of columns of the >> identity matrix. (See, e.g., Deville, F., & Mund, 2002). >> It probably wouldn't take much effort to code up the output >> routines for this plus some matlab code to demo how to >> assemble the stiffness matrix. >> >> Paul >> >> >> On Thu, 13 Feb 2014, nek5000-users at lists.mcs.anl.gov wrote: >> >> Dear users and developers, >>> I am using NEK5000 for a 3d unsteady simulation with mixed Dirichlet and >>> periodic bc. For the post-processing (a POD-based dynamics) I need the >>> global >>> mass and stiffness matrices as built by NEK5000 on my mesh. I would also >>> need >>> the matrix having as entries (phi_i, grad(phi_j)) where phi are the basis >>> polynomials. Is it possible to have NEK build these matrices and then >>> save >>> them on file? The ultimate goal is to import them on a PETSc program to >>> perform some algebraic manipulations. I already found a way to import the >>> simulations' results. Surfing the code, I have found some 1-d routines, >>> but >>> I don't know how to extend them to my needs. Thank you in advance for any >>> help or hint. >>> Best regards, >>> Giuseppe >>> >>> _______________________________________________ >>> Nek5000-users mailing list >>> Nek5000-users at lists.mcs.anl.gov >>> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users >>> >>> _______________________________________________ >> Nek5000-users mailing list >> Nek5000-users at lists.mcs.anl.gov >> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users >> > > > -- > Davide Baroli, PhD student > MOX - Modeling and Scientific Computing > Mathematics Dept. > Politecnico di Milano > Via Bonardi 9, 20133 Milano, Italy > From nek5000-users at lists.mcs.anl.gov Wed Feb 19 07:56:40 2014 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Wed, 19 Feb 2014 13:56:40 +0000 Subject: [Nek5000-users] NEK with MOAB using MPI Message-ID: Dear All, I am trying to run NEK with Moab and have done so successfully when launching the pipe example using one processor (i.e. running './nekmpi pipe 1' ). To then run the example with more processors I have partitioned the input.h5m mesh using mbpart (with Zoltan flags) and tested that HDF5 was compiled well to work with mpi. 
When I try to run the example on 4 ('./nekmpi pipe 4') processors I get the following error: ---------------------------------------------------------------------------------- read .rea file ASSERT ERROR: 21 in moab.f line 477 ASSERT ERROR: 21 in moab.f line 477 ASSERT ERROR: 21 in moab.f line 477 ASSERT ERROR: 21 in moab.f line 477 call exitt: dying ... backtrace(): obtained 10 stack frames. ./nek5000(print_stack_+0x1a) [0x58497a] ./nek5000(exitt_+0x24b) [0x64bd8b] ./nek5000(imesh_err_+0x170) [0x64b640] ./nek5000(nekmoab_load_+0xec) [0x649c1c] ./nek5000(nekmoab_import_+0x95) [0x64a425] ./nek5000(readat_+0x53f) [0x4cbdef] ./nek5000(nek_init_+0x57) [0x49c327] ./nek5000(main+0x24) [0x499184] /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf5) [0x7f56b4fad995] ./nek5000() [0x49bb87] ---------------------------------------------------------------------------------- Any idea on what I am doing wrong please? Regards, JP -------------- next part -------------- An HTML attachment was scrubbed... URL: From nek5000-users at lists.mcs.anl.gov Wed Feb 19 11:10:46 2014 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Wed, 19 Feb 2014 11:10:46 -0600 Subject: [Nek5000-users] NEK with MOAB using MPI In-Reply-To: References: Message-ID: Before us debugging the Nek side of this, can you try running the following (from the MOAB FAQ at http://trac.mcs.anl.gov/projects/ITAPS/wiki/ParallelRead): mpiexec -np 4 mbconvert -O PARALLEL=READ_PART -O PARTITION=PARALLEL_PARTITION -O PARALLEL_RESOLVE_SHARED_ENTS -O PARALLEL_GHOSTS=3.0.1 -o PARALLEL=WRITE_PART dummy.h5m mbconvert is built as part of the MOAB build and should be in either bin/ (if you ran make install) or tools/. This will test the MOAB side of parallel reading and ghost exchange. - tim On 02/19/2014 07:56 AM, nek5000-users at lists.mcs.anl.gov wrote: > Dear All, > > I am trying to run NEK with Moab and have done so successfully when launching the pipe example using one processor (i.e. > running './nekmpi pipe 1' ). > > To then run the example with more processors I have partitioned the input.h5m mesh using mbpart (with Zoltan flags) and > tested that HDF5 was compiled well to work with mpi. > > When I try to run the example on 4 ('./nekmpi pipe 4') processors I get the following error: > > ---------------------------------------------------------------------------------- > read .rea file > ASSERT ERROR: 21 in moab.f line 477 > > ASSERT ERROR: 21 in moab.f line 477 > > ASSERT ERROR: 21 in moab.f line 477 > > ASSERT ERROR: 21 in moab.f line 477 > > > call exitt: dying ... > > backtrace(): obtained 10 stack frames. > ./nek5000(print_stack_+0x1a) [0x58497a] > ./nek5000(exitt_+0x24b) [0x64bd8b] > ./nek5000(imesh_err_+0x170) [0x64b640] > ./nek5000(nekmoab_load_+0xec) [0x649c1c] > ./nek5000(nekmoab_import_+0x95) [0x64a425] > ./nek5000(readat_+0x53f) [0x4cbdef] > ./nek5000(nek_init_+0x57) [0x49c327] > ./nek5000(main+0x24) [0x499184] > /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf5) [0x7f56b4fad995] > ./nek5000() [0x49bb87] > ---------------------------------------------------------------------------------- > > Any idea on what I am doing wrong please? > > Regards, > JP > > > _______________________________________________ > Nek5000-users mailing list > Nek5000-users at lists.mcs.anl.gov > https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users > -- ================================================================ "You will keep in perfect peace him whose mind is steadfast, because he trusts in you." 
Isaiah 26:3 Tim Tautges Argonne National Laboratory (tautges at mcs.anl.gov) (telecommuting from UW-Madison) phone (gvoice): (608) 354-1459 1500 Engineering Dr. fax: (608) 263-4499 Madison, WI 53706 From nek5000-users at lists.mcs.anl.gov Wed Feb 19 15:07:07 2014 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Wed, 19 Feb 2014 23:07:07 +0200 Subject: [Nek5000-users] help in recreating a mesh and other modifications In-Reply-To: References: Message-ID: Dear Paul, I would like to reproduce and then modify a mesh of one of NEK examples - turbJet. Is there available a step-by-step tutorial to do this task? Many thanks, Barak From nek5000-users at lists.mcs.anl.gov Fri Feb 21 12:12:00 2014 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Fri, 21 Feb 2014 19:12:00 +0100 Subject: [Nek5000-users] Segmentation Error Message-ID: Hi Neks, I have a setobj() subroutine which defines my wall surface of integral, This subroutine worked well on my local machine for a smaller mesh. Recently I launched a simulation with a big mesh, I found that using the setobj() subroutine gives a error which something looks like this, I have attached the complete error file with this mail along with logfile and .usr file. could some tell me what is going wrong ? *************************************************************** [lomc-itp-43:15165] *** Process received signal *** [lomc-itp-43:15159] *** Process received signal *** [lomc-itp-43:15159] Signal: Segmentation fault (11) [lomc-itp-43:15159] Signal code: Address not mapped (1) [lomc-itp-43:15159] Failing at address: (nil) [lomc-itp-43:15164] *** Process received signal *** [lomc-itp-43:15160] *** Process received signal *** [lomc-itp-43:15160] Signal: Segmentation fault (11) [lomc-itp-43:15160] Signal code: Address not mapped (1) [lomc-itp-43:15160] Failing at address: (nil) [lomc-itp-43:15162] *** Process received signal *** [lomc-itp-43:15162] Signal: Segmentation fault (11) [lomc-itp-43:15162] Signal code: Address not mapped (1) [lomc-itp-43:15162] Failing at address: (nil) [lomc-itp-43:15163] *** Process received signal *** [lomc-itp-43:15163] Signal: Segmentation fault (11) [lomc-itp-43:15163] Signal code: Address not mapped (1) [lomc-itp-43:15163] Failing at address: (nil) [lomc-itp-43:15164] Signal: Segmentation fault (11) [lomc-itp-43:15164] Signal code: Address not mapped (1) [lomc-itp-43:15164] Failing at address: (nil) ******************************************************************** Thanks, Kamal -------------- next part -------------- c----------------------------------------------------------------------- C C USER SPECIFIED ROUTINES: C C - boundary conditions C - initial conditions C - variable properties C - local acceleration for fluid (a) C - forcing function for passive scalar (q) C - general purpose routine for checking errors etc. C c----------------------------------------------------------------------- subroutine uservp (ix,iy,iz,eg) include 'SIZE' include 'TOTAL' include 'NEKUSE' integer e,f,eg c e = gllel(eg) udiff =0. utrans=0. return end c----------------------------------------------------------------------- subroutine userf (ix,iy,iz,eg) include 'SIZE' include 'TOTAL' include 'NEKUSE' integer e,f,eg c e = gllel(eg) c Note: this is an acceleration term, NOT a force! c Thus, ffx will subsequently be multiplied by rho(x,t). 
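c     (Illustration of the convention above: to impose a constant body
c      force f0 per unit volume in z, one would set ffz = f0/param(1),
c      assuming constant density rho = param(1), i.e. p1 in the .rea.)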
ffx = 0.0 ffy = 0.0 ffz = 0.0 return end c----------------------------------------------------------------------- subroutine userq (ix,iy,iz,eg) include 'SIZE' include 'TOTAL' include 'NEKUSE' integer e,f,eg c e = gllel(eg) qvol = 0.0 source = 0.0 return end c----------------------------------------------------------------------- subroutine userchk include 'SIZE' include 'TOTAL' include 'NEKUSE' parameter (lt=lx1*ly1*lz1*lelt) common /myjunk/ vort(lt,3),w1(lt),w2(lt) common /Residual_arrays/ uo(lt), vo(lt), wo(lt) real h1, semi, l2, linf, residu if (mod(istep,iostep).eq.0) then call comp_vort3(vort,w1,w2,vx,vy,vz) call outpost(vort(1,1),vort(1,2),vort(1,3),pr,t,'vrt') end if if(istep.eq.0) then call set_obj call rzero(x0,3) end if if (mod(istep,2).eq.0) call torque_calc(1.0,x0,.true.,.true.) call opsub2(uo,vo,wo,vx,vy,vz) call normvc(h1,semi,l2,linf,uo,vo,wo) residu = l2/dt call opcopy(uo,vo,wo,vx,vy,vz) if(istep.eq.0) then if(nid.eq.0) open(unit=10,file='residu.dat') endif if(nid.eq.0) write(10,*) istep, residu if(istep.eq.nsteps) then if(nid.eq.0) close(10) end if if((istep.gt.10).AND.(residu.lt.1e-8)) then call outpost(vx,vy,vz,pr,t,'BF_') if(nid.eq.0) close(10) call exitt endif c ifxyo = .true. if (iostep.gt.0.and.istep.gt.iostep) ifxyo = .false. return end c----------------------------------------------------------------------- subroutine userbc (ix,iy,iz,iside,ieg) include 'SIZE' include 'TOTAL' include 'NEKUSE' integer e,eg common /mygeom/ xmin,xmax,ymin,ymax delta = 1 if (z.le.0) delta = 0.5 xd=x/delta yd=y/delta rr=xd*xd+yd*yd scale = 2*(0.5/delta)**2 ! Ubar = 1 in inlet pipe (r=0.5) ux=0.0 uy=0.0 uz=scale*(1-rr) temp=0.0 return end c----------------------------------------------------------------------- subroutine useric (ix,iy,iz,ieg) include 'SIZE' include 'TOTAL' include 'NEKUSE' integer e,eg delta = 1 if (z.le.0) delta = 0.5 xd=x/delta yd=y/delta rr=xd*xd+yd*yd scale = 2*(0.5/delta)**2 ! Ubar = 1 in inlet pipe (r=0.5) ux=0.0 uy=0.0 uz=scale*(1-rr) temp=0.0 return end c----------------------------------------------------------------------- subroutine usrdat include 'SIZE' include 'TOTAL' c return end c----------------------------------------------------------------------- subroutine usrdat3 include 'SIZE' include 'TOTAL' c return end c----------------------------------------------------------------------- subroutine usrdat2 ! Modify geometry include 'SIZE' include 'TOTAL' common /mygeom/ xmin,xmax,ymin,ymax n = nx1*ny1*nz1*nelt scal = 0.5 ! Rescale radius from 1.0 to 0.5 call cmult(xm1,scal,n) call cmult(ym1,scal,n) xmin = glmin(xm1,n) ymin = glmin(ym1,n) xmax = glmax(xm1,n) ymax = glmax(ym1,n) smax = 2.0 z0 = 0. z1 = 1. do i=1,n x = xm1(i,1,1,1) y = ym1(i,1,1,1) z = zm1(i,1,1,1) scale = 1. if (z.gt.z0) scale = 1 + (smax-1)*(z-z0)/(z1-z0) if (z.gt.z1) scale = smax xm1(i,1,1,1) = scale*x ym1(i,1,1,1) = scale*y enddo param(59) = 1 ! Force ifdfrm=.true. ( 8/26/03 ) ifxyo = .true. c call outpost(xm1,ym1,zm1,pr,t,' ') c call exitt return end c----------------------------------------------------------------------- subroutine set_obj ! define objects for surface integrals c include 'SIZE' include 'TOTAL' c integer e,f c c Define new objects c nobj = 1 ! for Periodic iobj = 0 do ii=nhis+1,nhis+nobj iobj = iobj+1 hcode(10,ii) = 'I' hcode( 1,ii) = 'F' ! 'F' hcode( 2,ii) = 'F' ! 'F' hcode( 3,ii) = 'F' ! 'F' lochis(1,ii) = iobj enddo nhis = nhis + nobj c if (maxobj.lt.nobj) write(6,*) 'increase maxobj in SIZEu. 
rm *.o' if (maxobj.lt.nobj) call exitt c nxyz = nx1*ny1*nz1 do e=1,nelv do f=1,2*ndim if (cbc(f,e,1).eq.'W ') then iobj = 1 if (iobj.gt.0) then nmember(iobj) = nmember(iobj) + 1 mem = nmember(iobj) ieg = lglel(e) object(iobj,mem,1) = ieg object(iobj,mem,2) = f c write(6,1) iobj,mem,f,ieg,e,nid,' OBJ' 1 format(6i9,a4) endif c endif enddo enddo write(6,*) 'number',(nmember(k),k=1,4) c return end -------------- next part -------------- /----------------------------------------------------------\\ | _ __ ______ __ __ ______ ____ ____ ____ | | / | / // ____// //_/ / ____/ / __ \\ / __ \\ / __ \\ | | / |/ // __/ / ,< /___ \\ / / / // / / // / / / | | / /| // /___ / /| | ____/ / / /_/ // /_/ // /_/ / | | /_/ |_//_____//_/ |_|/_____/ \\____/ \\____/ \\____/ | | | |----------------------------------------------------------| | | | NEK5000: Open Source Spectral Element Solver | | COPYRIGHT (c) 2008-2010 UCHICAGO ARGONNE, LLC | | Version: 1.0rc1 / SVN r996 | | Web: http://nek5000.mcs.anl.gov | | | \\----------------------------------------------------------/ Number of processors: 8 REAL wdsize : 8 INTEGER wdsize : 4 Beginning session: /home/kamal/neksamples/pipe2000/divpipe.rea timer accuracy: 0.0000000E+00 sec read .rea file nelgt/nelgv/lelt: 25200 25200 3200 lx1 /lx2 /lx3 : 5 5 5 mapping elements to processors 7 3150 3150 25200 25200 NELV 0 3150 3150 25200 25200 NELV 2 3150 3150 25200 25200 NELV 4 3150 3150 25200 25200 NELV 6 3150 3150 25200 25200 NELV 1 3150 3150 25200 25200 NELV 3 3150 3150 25200 25200 NELV 5 3150 3150 25200 25200 NELV RANK 0 IEG 22005 22006 22007 22008 22010 22012 22014 22016 22025 22026 22033 22034 22036 22037 22045 22046 22047 22048 22051 22052 22054 22056 22061 22066 22067 22068 22071 22072 22074 22076 22081 22082 22083 22084 22085 22086 22087 22088 22089 22090 22091 22092 22093 22094 22095 22096 22097 22098 22099 22100 22101 22102 22103 22104 22105 22106 22107 22108 22109 22110 22111 22112 22113 22114 22115 22116 22117 22118 22119 22120 22121 22122 22123 22124 22125 22126 22127 22128 22129 22130 22131 22132 22133 22134 22135 22136 22137 22138 22139 22140 22141 22142 22143 22144 22145 22146 22147 22148 22149 22150 22151 22152 22153 22154 22155 22156 22157 22158 22159 22160 22161 22162 22163 22164 22165 22166 22167 22168 22169 22170 22171 22172 22173 22174 22175 22176 22177 22178 22179 22180 22181 22182 22183 22184 22185 22186 22187 22188 22189 22190 22191 22192 22193 22194 22195 22196 22197 22198 22199 22200 22201 22202 22203 22204 22205 22206 22207 22208 22209 22210 22211 22212 22213 22214 22215 22216 22217 22218 22219 22220 22221 22222 22223 22224 22225 22226 22227 22228 22229 22230 22231 22232 22233 22234 22235 22236 22237 22238 22239 22240 22241 22242 22243 22244 22245 22246 22247 22248 22249 22250 22251 22252 22253 22254 22255 22256 22257 22258 22259 22260 22261 22262 22263 22264 22265 22266 22267 22268 22269 22270 22271 22272 22273 22274 22275 22276 22277 22278 22279 22280 22281 22282 22283 22284 22285 22286 22287 22288 22289 22290 22291 22292 22293 22294 22295 22296 22297 22298 22299 22300 22301 22302 22303 22304 22305 22306 22307 22308 22309 22310 22311 22312 22313 22314 22315 22316 22317 22318 22319 22320 22321 22322 22323 22324 22325 22326 22327 22328 22329 22330 22331 22332 22333 22334 22335 22336 22337 22338 22339 22340 22341 22342 22343 22344 22345 22346 22347 22348 22349 22350 22351 22352 22353 22354 22355 22356 22357 22358 22359 22360 22361 22362 22363 22364 22365 22366 22367 22368 22369 22370 22371 22372 22373 22374 22375 22376 22377 22378 22379 22380 22381 
22382 22383 22384 22385 22386 22387 22388 22389 22390 22391 22392 22393 22394 22395 22396 22397 22398 22399 22400 22401 22402 22403 22404 22405 22406 22407 22408 22409 22410 22411 22412 22413 22414 22415 22416 22417 22418 22419 22420 22421 22422 22423 22424 22425 22426 22427 22428 22429 22430 22431 22432 22433 22434 22435 22436 22437 22438 22439 22440 22441 22442 22443 22444 22445 22446 22447 22448 22449 22450 22451 22452 22453 22454 22455 22456 22457 22458 22459 22460 22461 22462 22463 22464 22465 22466 22467 22468 22469 22470 22471 22472 22473 22474 22475 22476 22477 22478 22479 22480 22481 22482 22483 22484 22485 22486 22487 22488 22489 22490 22491 22492 22493 22494 22495 22496 22497 22498 22499 22500 22501 22502 22503 22504 22505 22506 22507 22508 22509 22510 22511 22512 22513 22514 22515 22516 22517 22518 22519 22520 22521 22522 22523 22524 22525 22526 22527 22528 22529 22530 22531 22532 22533 22534 22535 22536 22537 22538 22539 22540 22541 22542 22543 22544 22545 22546 22547 22548 22549 22550 22551 22552 22553 22554 22555 22556 22557 22558 22559 22560 22561 22562 22563 22564 22565 22566 22567 22568 22569 22570 22571 22572 22573 22574 22575 22576 22577 22578 22579 22580 22581 22582 22583 22584 22585 22586 22587 22588 22589 22590 22591 22592 22593 22594 22595 22596 22597 22598 22599 22600 22601 22602 22603 22604 22605 22606 22607 22608 22609 22610 22611 22612 22613 22614 22615 22616 22617 22618 22619 22620 22621 22622 22623 22624 22625 22626 22627 22628 22629 22630 22631 22632 22633 22634 22635 22636 22637 22638 22639 22640 22641 22642 22643 22644 22645 22646 22647 22648 22649 22650 22651 22652 22653 22654 22655 22656 22657 22658 22659 22660 22661 22662 22663 22664 22665 22666 22667 22668 22669 22670 22671 22672 22673 22674 22675 22676 22677 22678 22679 22680 22681 22682 22683 22684 22685 22686 22687 22688 22689 22690 22691 22692 22693 22694 22695 22696 22697 22698 22699 22700 22701 22702 22703 22704 22705 22706 22707 22708 22709 22710 22711 22712 22713 22714 22715 22716 22717 22718 22719 22720 22721 22722 22723 22724 22725 22726 22727 22728 22729 22730 22731 22732 22733 22734 22735 22736 22737 22738 22739 22740 22741 22742 22743 22744 22745 22746 22747 22748 22749 22750 22751 22752 22753 22754 22755 22756 22757 22758 22759 22760 22761 22762 22763 22764 22765 22766 22767 22768 22769 22770 22771 22772 22773 22774 22775 22776 22777 22778 22779 22780 22781 22782 22783 22784 22785 22786 22787 22788 22789 22790 22791 22792 22793 22794 22795 22796 22797 22798 22799 22800 22801 22802 22803 22804 22805 22806 22807 22808 22809 22810 22811 22812 22813 22814 22815 22816 22817 22818 22819 22820 22821 22822 22823 22824 22825 22826 22827 22828 22829 22830 22831 22832 22833 22834 22835 22836 22837 22838 22839 22840 22841 22842 22843 22844 22845 22846 22847 22848 22849 22850 22851 22852 22853 22854 22855 22856 22857 22858 22859 22860 22861 22862 22863 22864 22865 22866 22867 22868 22869 22870 22871 22872 22873 22874 22875 22876 22877 22878 22879 22880 22881 22882 22883 22884 22885 22886 22887 22888 22889 22890 22891 22892 22893 22894 22895 22896 22897 22898 22899 22900 22901 22902 22903 22904 22905 22906 22907 22908 22909 22910 22911 22912 22913 22914 22915 22916 22917 22918 22919 22920 22921 22922 22923 22924 22925 22926 22927 22928 22929 22930 22931 22932 22933 22934 22935 22936 22937 22938 22939 22940 22941 22942 22943 22944 22945 22946 22947 22948 22949 22950 22951 22952 22953 22954 22955 22956 22957 22958 22959 22960 22961 22962 22963 22964 22965 22966 22967 22968 22969 22970 22971 22972 22973 
[long run of sequential element numbers (22974-25200) elided]
element load imbalance: 0 3150 3150 done :: mapping elements to processors 0 objects found done :: read .rea file 0.70946 sec Reset the target Courant number to .5 setup mesh topology Right-handed check complete for 25200 elements. OK. 
setvert3d: 5 975293 1655693 975293 975293 call usrsetvert done :: usrsetvert gs_setup: 11763 unique labels shared pairwise times (avg, min, max): 1.16765e-05 1.11103e-05 1.24931e-05 crystal router : 3.6639e-05 3.60012e-05 3.76225e-05 all reduce : 0.00013693 0.00013659 0.000137186 used all_to_all method: pairwise handle bytes (avg, min, max): 3.4036e+06 3375444 3417612 buffer bytes (avg, min, max): 47052 25616 59104 setupds time 3.2100E-01 seconds 0 5 975293 25200 8 max multiplicity done :: setup mesh topology call usrdat done :: usrdat generate geometry data vol_t,vol_v: 486.94699737572313 486.94699737572313 done :: generate geometry data call usrdat2 done :: usrdat2 regenerate geometry data 1 vol_t,vol_v: 473.85702446339303 473.85702446339303 NOTE: All elements deformed , param(59) ^=0 done :: regenerate geometry data 1 verify mesh topology -1.0000000000000000 1.0000000000000000 Xrange -1.0000000000000000 1.0000000000000000 Yrange -5.0000000000000000 150.00000000000000 Zrange done :: verify mesh topology 118 Parameters from file:/home/kamal/neksamples/pipe2000/divpipe.rea 1 1.00000 p001 DENSITY 2 -2000.000 p002 VISCOS 7 1.00000 p007 RHOCP 8 1.00000 p008 CONDUCT 11 1000000.000 p011 NSTEPS 12 -0.100000E-02 p012 DT 15 1000.0000 p015 IOSTEP 18 -20.0000 p018 GRID < 0 --> # cells on screen 19 -1.00000 p019 INTYPE 20 10.0000 p020 NORDER 21 0.100000E-05 p021 DIVERGENCE 22 0.100000E-06 p022 HELMHOLTZ 24 0.100000E-01 p024 TOLREL 25 0.100000E-01 p025 TOLABS 26 1.00000 p026 COURANT/NTAU 27 2.00000 p027 TORDER 28 0.00000 p028 TORDER: mesh velocity (0: p28=p27) 59 0.00000 p059 !=0 --> full Jac. eval. for each el. 65 1.00000 p065 #iofiles (eg, 0 or 64); <0 --> sep. dirs 66 6.00000 p066 output : <0=ascii, else binary 67 6.00000 p067 restart: <0=ascii, else binary 93 20.0000 p093 Number of previous pressure solns saved 94 3.00000 p094 start projecting velocity after p94 step 95 5.00000 p095 start projecting pressure after p95 step 99 0.00000 p099 dealiasing: <0--> off/3--> old/4--> new 102 1.00000 p102 Dump out divergence at each time step 103 0.100000E-01 p103 weight of stabilizing filter (.01) IFTRAN = T IFFLOW = T IFHEAT = F IFSPLIT = T IFLOMACH = F IFUSERVP = F IFUSERMV = F IFSTRS = F IFCHAR = F IFCYCLIC = F IFAXIS = F IFMVBD = F IFMELT = F IFMODEL = F IFKEPS = F IFMOAB = F IFNEKNEK = F IFSYNC = T IFVCOR = F IFINTQ = F IFCWUZ = F IFSWALL = F IFGEOM = F IFSURT = F IFWCNO = F IFTMSH for field 1 = F IFADVC for field 1 = T IFNONL for field 1 = F Dealiasing enabled, lxd= 8 Estimated eigenvalues EIGAA = 1.6450710020462991 EIGGA = 283918.58986179990 EIGAE = 4.10805594218079453E-004 EIGAS = 2.08099221708910814E-005 EIGGE = 283918.58986179990 EIGGS = 2.0000000000000000 verify mesh topology -1.0000000000000000 1.0000000000000000 Xrange -1.0000000000000000 1.0000000000000000 Yrange -5.0000000000000000 150.00000000000000 Zrange done :: verify mesh topology E-solver strategy: 0 itr mg_nx: 1 2 4 mg_ny: 1 2 4 mg_nz: 1 2 4 call usrsetvert done :: usrsetvert gs_setup: 786 unique labels shared pairwise times (avg, min, max): 1.89245e-06 1.69277e-06 2.00272e-06 crystal router : 5.65052e-06 5.60284e-06 5.6982e-06 all reduce : 1.60843e-05 1.59979e-05 1.62125e-05 used all_to_all method: pairwise handle bytes (avg, min, max): 235250 233148 236388 buffer bytes (avg, min, max): 3144 1712 3952 setupds time 5.6469E-03 seconds 1 2 28124 25200 setvert3d: 4 503170 704770 503170 503170 call usrsetvert done :: usrsetvert gs_setup: 6664 unique labels shared pairwise times (avg, min, max): 9.01222e-06 8.4877e-06 9.20296e-06 crystal 
router : 2.15054e-05 2.13861e-05 2.16961e-05 all reduce : 8.18253e-05 8.1706e-05 8.18968e-05 used all_to_all method: pairwise handle bytes (avg, min, max): 1.90516e+06 1888956 1913300 buffer bytes (avg, min, max): 26656 14512 33488 setupds time 1.8922E-01 seconds 2 4 503170 25200 setvert3d: 3 187447 212647 187447 187447 call usrsetvert done :: usrsetvert gs_setup: 3005 unique labels shared pairwise times (avg, min, max): 4.90844e-06 4.60148e-06 5.31673e-06 crystal router : 1.1456e-05 1.13964e-05 1.14918e-05 all reduce : 3.45439e-05 3.42846e-05 3.46899e-05 used all_to_all method: pairwise handle bytes (avg, min, max): 849046 841524 852892 buffer bytes (avg, min, max): 12020 6544 15104 setupds time 1.0921E-01 seconds 3 3 187447 25200 setvert3d: 5 975293 1655693 975293 975293 call usrsetvert done :: usrsetvert gs_setup: 11763 unique labels shared pairwise times (avg, min, max): 1.13785e-05 1.0705e-05 1.19925e-05 crystal router : 3.50952e-05 3.48091e-05 3.54052e-05 all reduce : 0.000132442 0.000132108 0.000132585 used all_to_all method: pairwise handle bytes (avg, min, max): 3.4036e+06 3375444 3417612 buffer bytes (avg, min, max): 47052 25616 59104 setupds time 3.1718E-01 seconds 4 5 975293 25200 regenerate geometry data 1 vol_t,vol_v: 473.85702446339303 473.85702446339303 NOTE: All elements deformed , param(59) ^=0 done :: regenerate geometry data 1 h1_mg_nx: 1 2 4 h1_mg_ny: 1 2 4 h1_mg_nz: 1 2 4 call usrsetvert done :: usrsetvert gs_setup: 786 unique labels shared pairwise times (avg, min, max): 1.85966e-06 1.81198e-06 1.90735e-06 crystal router : 5.47469e-06 5.38826e-06 5.50747e-06 all reduce : 1.67131e-05 1.66178e-05 1.68085e-05 used all_to_all method: pairwise handle bytes (avg, min, max): 235250 233148 236388 buffer bytes (avg, min, max): 3144 1712 3952 setupds time 9.0699E-03 seconds 5 2 28124 25200 setvert3d: 4 503170 704770 503170 503170 call usrsetvert done :: usrsetvert gs_setup: 6664 unique labels shared pairwise times (avg, min, max): 7.52509e-06 7.00951e-06 7.82013e-06 crystal router : 2.11596e-05 2.10047e-05 2.13146e-05 all reduce : 8.20041e-05 8.18014e-05 8.20875e-05 used all_to_all method: pairwise handle bytes (avg, min, max): 1.90516e+06 1888956 1913300 buffer bytes (avg, min, max): 26656 14512 33488 setupds time 1.9658E-01 seconds 6 4 503170 25200 setvert3d: 3 187447 212647 187447 187447 call usrsetvert done :: usrsetvert gs_setup: 3005 unique labels shared pairwise times (avg, min, max): 4.78625e-06 4.3869e-06 5.19753e-06 crystal router : 1.0848e-05 1.06812e-05 1.10149e-05 all reduce : 3.41892e-05 3.40939e-05 3.42846e-05 used all_to_all method: pairwise handle bytes (avg, min, max): 849046 841524 852892 buffer bytes (avg, min, max): 12020 6544 15104 setupds time 1.0998E-01 seconds 7 3 187447 25200 setvert3d: 5 975293 1655693 975293 975293 call usrsetvert done :: usrsetvert gs_setup: 11763 unique labels shared pairwise times (avg, min, max): 1.11341e-05 1.05143e-05 1.1611e-05 crystal router : 3.53426e-05 3.50237e-05 3.56197e-05 all reduce : 0.000133416 0.000133109 0.000133705 used all_to_all method: pairwise handle bytes (avg, min, max): 3.4036e+06 3375444 3417612 buffer bytes (avg, min, max): 47052 25616 59104 setupds time 3.2016E-01 seconds 8 5 975293 25200 setvert3d: 5 975293 1655693 975293 975293 call usrsetvert done :: usrsetvert gs_setup: 11763 unique labels shared pairwise times (avg, min, max): 1.10209e-05 1.02997e-05 1.13964e-05 crystal router : 3.50356e-05 3.47853e-05 3.52859e-05 all reduce : 0.000133777 0.000133705 0.000133896 used all_to_all method: pairwise 
handle bytes (avg, min, max): 3.4036e+06 3375444 3417612 buffer bytes (avg, min, max): 47052 25616 59104 setupds time 3.1833E-01 seconds 9 5 975293 25200 setvert3d: 7 2388739 5538739 2388739 2388739 call usrsetvert done :: usrsetvert gs_setup: 26281 unique labels shared pairwise times (avg, min, max): 2.32637e-05 2.19822e-05 2.39849e-05 crystal router : 7.87288e-05 7.81059e-05 7.94172e-05 all reduce : 0.00026685 0.000265884 0.000267482 used all_to_all method: pairwise handle bytes (avg, min, max): 7.72743e+06 7665588 7757948 buffer bytes (avg, min, max): 105124 57232 132032 setupds time 6.9914E-01 seconds 10 7 2388739 25200 setup h1 coarse grid, nx_crs= 2 call usrsetvert done :: usrsetvert gs_setup: 786 unique labels shared pairwise times (avg, min, max): 1.97589e-06 1.78814e-06 2.19345e-06 crystal router : 4.18425e-06 4.1008e-06 4.29153e-06 all reduce : 1.62125e-05 1.62125e-05 1.62125e-05 used all_to_all method: pairwise handle bytes (avg, min, max): 235250 233148 236388 buffer bytes (avg, min, max): 3144 1712 3952 done :: setup h1 coarse grid 3.1613740921020508 sec call usrdat3 done :: usrdat3 set initial conditions nekuic (1) for ifld 1 call nekuic for vel xyz min -1.0000 -1.0000 -5.0000 uvwpt min -0.10000E-19 -0.10000E-19 -0.12378E-05 -0.10000E-19 -0.10000E-19 PS min 0.0000 0.0000 0.99000E+22 xyz max 1.0000 1.0000 150.00 uvwpt max 0.80000E-19 0.80000E-19 2.0000 0.80000E-19 0.80000E-19 PS max 0.0000 0.0000 -0.99000E+22 done :: set initial conditions call userchk schfile:/home/kamal/neksamples/pipe2000/divpipe.sch call outfld: ifpsco: F 0 0.0000E+00 Write checkpoint: 0 0 OPEN: vrtdivpipe0.f00001 0 0.0000E+00 done :: Write checkpoint file size = 86. MB avg data-throughput = 264.1MB/s io-nodes = 1 number 640 0 0 0 number 624 0 0 0 number 624 0 0 0 number 630 0 0 0 number 624 0 0 0 number 640 0 0 0 number 624 0 0 0 number 634 0 0 0 0 0.00000000000E+00 1.24088603539E-10 1.12552189325E-25 1.24088603539E-10 1dragx 0 0.00000000000E+00 -1.75487705534E-10 -6.86560965255E-26 -1.75487705534E-10 1dragy 0 0.00000000000E+00 5.33411814488E-01 -2.85097622410E-20 5.33411814488E-01 1dragz 0 0.00000000000E+00 9.42869304453E-08 7.45679573773E-24 9.42869304453E-08 1torqx 0 0.00000000000E+00 6.66710102897E-08 1.20869464013E-23 6.66710102897E-08 1torqy 0 0.00000000000E+00 7.42311196653E-17 -1.56051540126E-32 7.42311196653E-17 1torqz -------------- next part -------------- [lomc-itp-43:15191] *** Process received signal *** [lomc-itp-43:15191] Signal: Segmentation fault (11) [lomc-itp-43:15191] Signal code: Address not mapped (1) [lomc-itp-43:15191] Failing at address: (nil) [lomc-itp-43:15185] *** Process received signal *** [lomc-itp-43:15185] Signal: Segmentation fault (11) [lomc-itp-43:15185] Signal code: Address not mapped (1) [lomc-itp-43:15185] Failing at address: (nil) [lomc-itp-43:15186] *** Process received signal *** [lomc-itp-43:15186] Signal: Segmentation fault (11) [lomc-itp-43:15186] Signal code: Address not mapped (1) [lomc-itp-43:15186] Failing at address: (nil) [lomc-itp-43:15188] *** Process received signal *** [lomc-itp-43:15188] Signal: Segmentation fault (11) [lomc-itp-43:15188] Signal code: Address not mapped (1) [lomc-itp-43:15188] Failing at address: (nil) [lomc-itp-43:15189] *** Process received signal *** [lomc-itp-43:15189] Signal: Segmentation fault (11) [lomc-itp-43:15189] Signal code: Address not mapped (1) [lomc-itp-43:15189] Failing at address: (nil) [lomc-itp-43:15184] *** Process received signal *** [lomc-itp-43:15184] Signal: Segmentation fault (11) [lomc-itp-43:15184] Signal 
code: Address not mapped (1) [lomc-itp-43:15184] Failing at address: (nil) more error [lomc-itp-43:15186] [ 0] /lib/x86_64-linux-gnu/libc.so.6(+0x364a0) [0x7ff4bdcdf4a0] [lomc-itp-43:15186] *** End of error message *** [lomc-itp-43:15188] [ 0] /lib/x86_64-linux-gnu/libc.so.6(+0x364a0) [0x7fee28d0f4a0] [lomc-itp-43:15188] *** End of error message *** [lomc-itp-43:15190] Signal: Segmentation fault (11) [lomc-itp-43:15190] Signal code: Address not mapped (1) [lomc-itp-43:15190] Failing at address: (nil) [lomc-itp-43:15184] [ 0] /lib/x86_64-linux-gnu/libc.so.6(+0x364a0) [0x7f029087f4a0] [lomc-itp-43:15184] *** End of error message *** [lomc-itp-43:15189] [ 0] /lib/x86_64-linux-gnu/libc.so.6(+0x364a0) [0x7f172c1664a0] [lomc-itp-43:15189] *** End of error message *** [lomc-itp-43:15191] [ 0] /lib/x86_64-linux-gnu/libc.so.6(+0x364a0) [0x7f8c6bc984a0] [lomc-itp-43:15191] *** End of error message *** [lomc-itp-43:15185] [ 0] /lib/x86_64-linux-gnu/libc.so.6(+0x364a0) [0x7fa56d27a4a0] [lomc-itp-43:15185] *** End of error message *** [lomc-itp-43:15187] *** Process received signal *** [lomc-itp-43:15187] Signal: Segmentation fault (11) [lomc-itp-43:15187] Signal code: Address not mapped (1) [lomc-itp-43:15187] Failing at address: (nil) [lomc-itp-43:15187] [ 0] /lib/x86_64-linux-gnu/libc.so.6(+0x364a0) [0x7f1140dbf4a0] [lomc-itp-43:15190] [ 0] /lib/x86_64-linux-gnu/libc.so.6(+0x364a0) [0x7f5eee3594a0] [lomc-itp-43:15190] *** End of error message *** [lomc-itp-43:15187] *** End of error message ***

From nek5000-users at lists.mcs.anl.gov Mon Feb 24 15:56:43 2014
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Mon, 24 Feb 2014 15:56:43 -0600
Subject: [Nek5000-users] Lid driven cavity using nek5000
Message-ID: 

Hello,

I'm trying to solve a simple lid-driven cavity problem with nek5000. I set up a coarse 3d mesh and prescribed the velocity for the lid according to Bouffanais et al. 2008 PoF (to ensure that there is no discontinuity between the lid and the walls), but I'm getting a vanishing Jacobian error. I'm not sure how nek solves this, but for the moving wall do I have to prescribe the mesh velocity equal to the lid velocity, or is there a way to specify that the mesh doesn't really move in time?

Thanks for your help!

--
Regards
Shriram

From nek5000-users at lists.mcs.anl.gov Mon Feb 24 16:27:29 2014
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Mon, 24 Feb 2014 16:27:29 -0600 (CST)
Subject: [Nek5000-users] Lid driven cavity using nek5000
In-Reply-To: 
References: 
Message-ID: 

Hi Shriram,

Just use the 'v ' boundary condition on the top lid and that should suffice. You can generate this mesh from genbox.

Best,
Paul

On Mon, 24 Feb 2014, nek5000-users at lists.mcs.anl.gov wrote:

> Hello,
>
> I'm trying to solve a simple lid-driven cavity problem with nek5000. I set up
> a coarse 3d mesh and prescribed the velocity for the lid according to
> Bouffanais et al. 2008 PoF (to ensure that there is no discontinuity between
> the lid and the walls), but I'm getting a vanishing Jacobian error. I'm not
> sure how nek solves this, but for the moving wall do I have to prescribe the
> mesh velocity equal to the lid velocity, or is there a way to specify that
> the mesh doesn't really move in time?
>
> Thanks for your help!
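For what it's worth, with the 'v  ' condition the prescribed lid velocity is returned point-by-point from userbc in the case's .usr file, and the mesh itself never moves, so no mesh velocity has to be set. The routine below is only an illustrative sketch of that mechanism: it assumes a unit cube with the moving lid at y = 1, and the polynomial smoothing is an assumed stand-in for the regularized profile of Bouffanais et al., not Shriram's actual expression.

      subroutine userbc (ix,iy,iz,iside,ieg)
c     Sketch only: called for boundary faces flagged 'v  ' in the .rea
c     file; the remaining cavity faces would carry 'W  ' (no-slip wall).
      include 'SIZE'
      include 'TOTAL'
      include 'NEKUSE'   ! x,y,z and ux,uy,uz for this boundary point
c     Assumed regularized lid profile: it vanishes at the lid edges so
c     the lid velocity matches the stationary side walls (no jump).
      ux = (1.0 - (2.0*x - 1.0)**18)**2 * (1.0 - (2.0*z - 1.0)**18)**2
      uy = 0.0
      uz = 0.0
      return
      end

Declaring the lid faces 'v  ' (and the other faces 'W  ') in the genbox input or the .rea file is then all that is needed, with no moving-mesh settings involved.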
>
> --
> Regards
> Shriram
>
> _______________________________________________
> Nek5000-users mailing list
> Nek5000-users at lists.mcs.anl.gov
> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users
>

From nek5000-users at lists.mcs.anl.gov Mon Feb 24 16:36:28 2014
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Mon, 24 Feb 2014 23:36:28 +0100
Subject: [Nek5000-users] subroutine for drag
Message-ID: 

Hi neks,

I have three pipes attached together, and I compute the drag using the subroutine. I want to compute the drag only for two of the pipes. Could someone tell me how to change the routine so that it takes only those two pipes into account? Is nobj the control variable for it?

Thanks,
Kamal.

From nek5000-users at lists.mcs.anl.gov Tue Feb 25 05:10:47 2014
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Tue, 25 Feb 2014 12:10:47 +0100
Subject: [Nek5000-users] Weakly non linear Navier-Stokes
In-Reply-To: 
References: 
Message-ID: 

Hi Neks!

We are interested in studying the weakly nonlinear dynamics of a perturbation. We are therefore using Nek in perturbation mode, and we have implemented the usually omitted second-order nonlinear perturbation convection terms in advabp() in perturb.f as follows:

      subroutine advabp
C
C     Eulerian scheme, add convection term to forcing function
C     at current time step.
C
      include 'SIZE'
      include 'INPUT'
      include 'SOLN'
      include 'MASS'
      include 'TSTEP'
C
      COMMON /SCRNS/ TA1 (LX1*LY1*LZ1*LELV)
     $             , TA2 (LX1*LY1*LZ1*LELV)
     $             , TA3 (LX1*LY1*LZ1*LELV)
     $             , TB1 (LX1*LY1*LZ1*LELV)
     $             , TB2 (LX1*LY1*LZ1*LELV)
     $             , TB3 (LX1*LY1*LZ1*LELV)
C
      ntot1 = nx1*ny1*nz1*nelv
      ntot2 = nx2*ny2*nz2*nelv
c
      if (if3d) then
         call opcopy (tb1,tb2,tb3,vx,vy,vz) ! Save velocity
         call opcopy (vx,vy,vz,vxp(1,jp),vyp(1,jp),vzp(1,jp)) ! U <-- dU
         call convop (ta1,tb1) ! du.grad U
         call convop (ta2,tb2)
         call convop (ta3,tb3)
         call opcopy (vx,vy,vz,tb1,tb2,tb3) ! Restore velocity
c
         do i=1,ntot1
            tmp = bm1(i,1,1,1)*vtrans(i,1,1,1,ifield)
            bfxp(i,jp) = bfxp(i,jp)-tmp*ta1(i)
            bfyp(i,jp) = bfyp(i,jp)-tmp*ta2(i)
            bfzp(i,jp) = bfzp(i,jp)-tmp*ta3(i)
         enddo
c
         call convop (ta1,vxp(1,jp)) ! U.grad dU
         call convop (ta2,vyp(1,jp))
         call convop (ta3,vzp(1,jp))
c
         do i=1,ntot1
            tmp = bm1(i,1,1,1)*vtrans(i,1,1,1,ifield)
            bfxp(i,jp) = bfxp(i,jp)-tmp*ta1(i)
            bfyp(i,jp) = bfyp(i,jp)-tmp*ta2(i)
            bfzp(i,jp) = bfzp(i,jp)-tmp*ta3(i)
         enddo
!        ADD NON LINEAR TERM
         call opcopy (tb1,tb2,tb3,vx,vy,vz)
         call opcopy (vx,vy,vz,vxp(1,jp),vyp(1,jp),vzp(1,jp))
         call convop (ta1,vxp(1,jp))
         call convop (ta2,vyp(1,jp))
         call convop (ta3,vzp(1,jp))
         call opcopy (vx,vy,vz,tb1,tb2,tb3)
         do i=1,ntot1
            tmp = bm1(i,1,1,1)*vtrans(i,1,1,1,ifield)
            bfxp(i,jp) = bfxp(i,jp)-tmp*ta1(i)
            bfyp(i,jp) = bfyp(i,jp)-tmp*ta2(i)
            bfzp(i,jp) = bfzp(i,jp)-tmp*ta3(i)
         enddo
c
      else ! 2D
c
         call opcopy (tb1,tb2,tb3,vx,vy,vz) ! Save velocity
         call opcopy (vx,vy,vz,vxp(1,jp),vyp(1,jp),vzp(1,jp)) ! U <-- dU
         call convop (ta1,tb1) ! du.grad U
         call convop (ta2,tb2)
         call opcopy (vx,vy,vz,tb1,tb2,tb3) ! Restore velocity
c
         do i=1,ntot1
            tmp = bm1(i,1,1,1)*vtrans(i,1,1,1,ifield)
            bfxp(i,jp) = bfxp(i,jp)-tmp*ta1(i)
            bfyp(i,jp) = bfyp(i,jp)-tmp*ta2(i)
         enddo
c
         call convop (ta1,vxp(1,jp)) ! U.grad dU
         call convop (ta2,vyp(1,jp))
c
         do i=1,ntot1
            tmp = bm1(i,1,1,1)*vtrans(i,1,1,1,ifield)
            bfxp(i,jp) = bfxp(i,jp)-tmp*ta1(i)
            bfyp(i,jp) = bfyp(i,jp)-tmp*ta2(i)
         enddo
!        ADD NON LINEAR TERM
         call opcopy (tb1,tb2,tb3,vx,vy,vz)
         call opcopy (vx,vy,vz,vxp(1,jp),vyp(1,jp),vzp(1,jp))
         call convop (ta1,vxp(1,jp))
         call convop (ta2,vyp(1,jp))
         call opcopy (vx,vy,vz,tb1,tb2,tb3)
         do i=1,ntot1
            tmp = bm1(i,1,1,1)*vtrans(i,1,1,1,ifield)
            bfxp(i,jp) = bfxp(i,jp)-tmp*ta1(i)
            bfyp(i,jp) = bfyp(i,jp)-tmp*ta2(i)
         enddo
c
      endif
c
      return
      end

Is this the correct implementation? Are there any other modifications to be made? An alternative would obviously be to add the nonlinear perturbation convection term in USERF; would that be a safer solution?

Any help would be greatly appreciated!

Best,
Holly
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From nek5000-users at lists.mcs.anl.gov Tue Feb 25 12:01:26 2014
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Tue, 25 Feb 2014 13:01:26 -0500
Subject: [Nek5000-users] set boundary condition on spectral element
Message-ID: 

Hello,

I have a question about setting a boundary condition on a spectral element mesh when I am not sure whether there is a spectral mesh at the boundary-condition location or not, for example a pipe with some extension at the inlet/outlet, where the boundary condition is located at the beginning of the pipe but the mesh starts from the extension. What I thought was to use the surface normal to specify the spectral boundary condition there. Is that right? My question is how to set the boundary condition on a spectral element.

Thank you,
Ami
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From nek5000-users at lists.mcs.anl.gov Fri Feb 28 10:15:30 2014
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Fri, 28 Feb 2014 17:15:30 +0100
Subject: [Nek5000-users] MPI problems
Message-ID: 

Dear Nek's,

I have a problem when running the code on more than one processor. The problem appeared just after a recent update of the system (Debian) on our cluster. The code was working perfectly before the update, but now I cannot run jobs in parallel anymore. MPI works with other software, but not with Nek. In particular I have the following problem:

When I execute a parallel run using the script nekmpi eddy_uv 4

the command executes 4 different jobs, each running on a single processor, rather than a single job running on 4 processors. I attach the log to this mail (log.out). After a few seconds three of the jobs are killed and only one remains active. A similar problem was also found on a new machine with a fresh installation of Debian. It seems that the script is not able to set the correct value of the variable np (or_np).

Has anyone found a similar problem? Any explanation for such behavior? Any advice to solve the problem?
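For reference, one quick way to confirm that a launcher is starting N independent single-rank jobs (rather than one N-rank job) is to run a minimal MPI test program with the same command used for nek5000. The program below is not part of Nek5000; it is a sketch that only assumes a working MPI Fortran compiler wrapper (e.g. mpif77) from the same MPI installation used to build the code.

      program mpicheck
c     Minimal MPI sanity check.  Launch it exactly as nek is launched,
c     e.g. "mpiexec -np 4 ./mpicheck".  A healthy launch prints
c     ranks 0..3 of 4; if every process reports "rank 0 of 1", the
c     launcher is starting independent singleton jobs, which is the
c     symptom described above (and usually means mpiexec belongs to a
c     different MPI installation than the one the code was linked with).
      include 'mpif.h'
      integer ierr, myrank, nprocs
      call mpi_init(ierr)
      call mpi_comm_rank(MPI_COMM_WORLD, myrank, ierr)
      call mpi_comm_size(MPI_COMM_WORLD, nprocs, ierr)
      write(6,*) 'rank', myrank, 'of', nprocs
      call mpi_finalize(ierr)
      end

If every rank reports a communicator size of 1, pointing the launch script at the mpiexec that matches the MPI used for compilation (see the mpiexec.mpich2 suggestion later in this thread) is the usual fix.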
Thanks in advance Flavio Platform uname -a Linux cfd 2.6.32-5-amd64 #1 SMP Mon Sep 23 22:14:43 UTC 2013 x86_64 GNU/Linux ompi_info Package: Open MPI manuel at ce170155 Distribution Open MPI: 1.4.2 Open MPI SVN revision: r23093 Open MPI release date: May 04, 2010 Open RTE: 1.4.2 Open RTE SVN revision: r23093 Open RTE release date: May 04, 2010 OPAL: 1.4.2 OPAL SVN revision: r23093 OPAL release date: May 04, 2010 Ident string: 1.4.2 Prefix: /usr Configured architecture: x86_64-pc-linux-gnu Configure host: ce170155 Configured by: manuel Configured on: Wed Sep 1 15:58:32 UTC 2010 Configure host: ce170155 Built by: root Built on: Wed Sep 1 16:01:42 UTC 2010 Built host: ce170155 C bindings: yes C++ bindings: yes Fortran77 bindings: yes (all) Fortran90 bindings: yes Fortran90 bindings size: small C compiler: gcc C compiler absolute: /usr/lib/ccache/gcc C++ compiler: g++ C++ compiler absolute: /usr/lib/ccache/g++ Fortran77 compiler: gfortran Fortran77 compiler abs: /usr/bin/gfortran Fortran90 compiler: gfortran Fortran90 compiler abs: /usr/bin/gfortran C profiling: yes C++ profiling: yes Fortran77 profiling: yes Fortran90 profiling: yes C++ exceptions: no Thread support: posix (mpi: no, progress: no) Sparse Groups: no Internal debug support: no MPI parameter check: runtime Memory profiling support: no Memory debugging support: no libltdl support: yes Heterogeneous support: yes mpirun default --prefix: no MPI I/O support: yes MPI_WTIME support: gettimeofday Symbol visibility support: yes FT Checkpoint support: yes (checkpoint thread: no) MCA backtrace: execinfo (MCA v2.0, API v2.0, Component v1.4.2) MCA memory: ptmalloc2 (MCA v2.0, API v2.0, Component v1.4.2) MCA paffinity: linux (MCA v2.0, API v2.0, Component v1.4.2) MCA carto: auto_detect (MCA v2.0, API v2.0, Component v1.4.2) MCA carto: file (MCA v2.0, API v2.0, Component v1.4.2) MCA maffinity: first_use (MCA v2.0, API v2.0, Component v1.4.2) MCA maffinity: libnuma (MCA v2.0, API v2.0, Component v1.4.2) MCA timer: linux (MCA v2.0, API v2.0, Component v1.4.2) MCA installdirs: env (MCA v2.0, API v2.0, Component v1.4.2) MCA installdirs: config (MCA v2.0, API v2.0, Component v1.4.2) MCA crs: none (MCA v2.0, API v2.0, Component v1.4.2) MCA dpm: orte (MCA v2.0, API v2.0, Component v1.4.2) MCA pubsub: orte (MCA v2.0, API v2.0, Component v1.4.2) MCA allocator: basic (MCA v2.0, API v2.0, Component v1.4.2) MCA allocator: bucket (MCA v2.0, API v2.0, Component v1.4.2) MCA coll: basic (MCA v2.0, API v2.0, Component v1.4.2) MCA coll: hierarch (MCA v2.0, API v2.0, Component v1.4.2) MCA coll: inter (MCA v2.0, API v2.0, Component v1.4.2) MCA coll: self (MCA v2.0, API v2.0, Component v1.4.2) MCA coll: sm (MCA v2.0, API v2.0, Component v1.4.2) MCA coll: sync (MCA v2.0, API v2.0, Component v1.4.2) MCA coll: tuned (MCA v2.0, API v2.0, Component v1.4.2) MCA io: romio (MCA v2.0, API v2.0, Component v1.4.2) MCA mpool: fake (MCA v2.0, API v2.0, Component v1.4.2) MCA mpool: rdma (MCA v2.0, API v2.0, Component v1.4.2) MCA mpool: sm (MCA v2.0, API v2.0, Component v1.4.2) MCA pml: cm (MCA v2.0, API v2.0, Component v1.4.2) MCA pml: crcpw (MCA v2.0, API v2.0, Component v1.4.2) MCA pml: csum (MCA v2.0, API v2.0, Component v1.4.2) MCA pml: ob1 (MCA v2.0, API v2.0, Component v1.4.2) MCA pml: v (MCA v2.0, API v2.0, Component v1.4.2) MCA bml: r2 (MCA v2.0, API v2.0, Component v1.4.2) MCA rcache: vma (MCA v2.0, API v2.0, Component v1.4.2) MCA btl: ofud (MCA v2.0, API v2.0, Component v1.4.2) MCA btl: openib (MCA v2.0, API v2.0, Component v1.4.2) MCA btl: self (MCA v2.0, API v2.0, 
Component v1.4.2) MCA btl: sm (MCA v2.0, API v2.0, Component v1.4.2) MCA btl: tcp (MCA v2.0, API v2.0, Component v1.4.2) MCA topo: unity (MCA v2.0, API v2.0, Component v1.4.2) MCA osc: pt2pt (MCA v2.0, API v2.0, Component v1.4.2) MCA osc: rdma (MCA v2.0, API v2.0, Component v1.4.2) MCA crcp: bkmrk (MCA v2.0, API v2.0, Component v1.4.2) MCA iof: hnp (MCA v2.0, API v2.0, Component v1.4.2) MCA iof: orted (MCA v2.0, API v2.0, Component v1.4.2) MCA iof: tool (MCA v2.0, API v2.0, Component v1.4.2) MCA oob: tcp (MCA v2.0, API v2.0, Component v1.4.2) MCA odls: default (MCA v2.0, API v2.0, Component v1.4.2) MCA ras: slurm (MCA v2.0, API v2.0, Component v1.4.2) MCA ras: tm (MCA v2.0, API v2.0, Component v1.4.2) MCA rmaps: load_balance (MCA v2.0, API v2.0, Component v1.4.2) MCA rmaps: rank_file (MCA v2.0, API v2.0, Component v1.4.2) MCA rmaps: round_robin (MCA v2.0, API v2.0, Component v1.4.2) MCA rmaps: seq (MCA v2.0, API v2.0, Component v1.4.2) MCA rml: ftrm (MCA v2.0, API v2.0, Component v1.4.2) MCA rml: oob (MCA v2.0, API v2.0, Component v1.4.2) MCA routed: binomial (MCA v2.0, API v2.0, Component v1.4.2) MCA routed: direct (MCA v2.0, API v2.0, Component v1.4.2) MCA routed: linear (MCA v2.0, API v2.0, Component v1.4.2) MCA plm: rsh (MCA v2.0, API v2.0, Component v1.4.2) MCA plm: slurm (MCA v2.0, API v2.0, Component v1.4.2) MCA plm: tm (MCA v2.0, API v2.0, Component v1.4.2) MCA snapc: full (MCA v2.0, API v2.0, Component v1.4.2) MCA filem: rsh (MCA v2.0, API v2.0, Component v1.4.2) MCA errmgr: default (MCA v2.0, API v2.0, Component v1.4.2) MCA ess: env (MCA v2.0, API v2.0, Component v1.4.2) MCA ess: hnp (MCA v2.0, API v2.0, Component v1.4.2) MCA ess: singleton (MCA v2.0, API v2.0, Component v1.4.2) MCA ess: slurm (MCA v2.0, API v2.0, Component v1.4.2) MCA ess: tool (MCA v2.0, API v2.0, Component v1.4.2) MCA grpcomm: bad (MCA v2.0, API v2.0, Component v1.4.2) MCA grpcomm: basic (MCA v2.0, API v2.0, Component v1.4.2) -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: eddy_uv.rea Type: application/octet-stream Size: 131687 bytes Desc: not available URL: -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: log.out Type: application/octet-stream Size: 97595 bytes Desc: not available URL: -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: SIZE Type: application/octet-stream Size: 3262 bytes Desc: not available URL: -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: compiler.out Type: application/octet-stream Size: 92365 bytes Desc: not available URL: -------------- next part -------------- An HTML attachment was scrubbed... URL: From nek5000-users at lists.mcs.anl.gov Fri Feb 28 13:59:49 2014 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Fri, 28 Feb 2014 20:59:49 +0100 Subject: [Nek5000-users] MPI problems In-Reply-To: References: Message-ID: Hi, I had a similar problem once. Changing mpiexec to mpiexec.mpich2 in the nekbmpi script solved it as far as I was concerned. Might be a similar problem? Cheers, JC 2014-02-28 17:15 GMT+01:00 : > Dear Nek's > > I have a problem when running the code on more than one processor. 
The > problem just appeared after a recent update of the system (debian) on our > cluster. [remainder of the quoted message, platform details, and ompi_info output elided] > > _______________________________________________ > Nek5000-users mailing list > Nek5000-users at lists.mcs.anl.gov > https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users > > -- Jean-Christophe Loiseau Homepage -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nek5000-users at lists.mcs.anl.gov Fri Feb 28 18:01:46 2014 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Sat, 1 Mar 2014 01:01:46 +0100 Subject: [Nek5000-users] MPI problems In-Reply-To: References: Message-ID: Yes you were right ! Changing mpiexec to mpiexec.mpich2 solved the problem ! Thanks a lot Flavio On 28/feb/2014, at 20:59, nek5000-users at lists.mcs.anl.gov wrote: > Hi, > > I had a similar problem once. Changing mpiexec to mpiexec.mpich2 in the nekbmpi script solved it as far as I was concerned. Might be a similar problem? > > Cheers, > JC > > > 2014-02-28 17:15 GMT+01:00 : > [quoted original message, platform details, and ompi_info output elided] > > _______________________________________________ > Nek5000-users mailing list > Nek5000-users at lists.mcs.anl.gov > https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users > > > > -- > Jean-Christophe Loiseau > Homepage > _______________________________________________ > Nek5000-users mailing list > Nek5000-users at lists.mcs.anl.gov > https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users -------------- next part -------------- An HTML attachment was scrubbed... URL: