From nek5000-users at lists.mcs.anl.gov Thu Dec 1 11:31:45 2016
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Thu, 1 Dec 2016 18:31:45 +0100
Subject: [Nek5000-users] uservp MHD error
Message-ID: 

Hello All,

I want to run an MHD case with different magnetic diffusivities in
different regions of the domain. Can Nek5000 run MHD with different
magnetic diffusivities? Assuming that udiff = magnetic diffusivity (eta),
I set in the .rea file

    1.0000000E+00  p30 > 0 ==> properties set in uservp()

and added the following in uservp:

    subroutine uservp (ix,iy,iz,ieg)
    include 'SIZE'
    include 'TOTAL'
    include 'NEKUSE'

    if (ifield.eq.1) then              ! velocity
       utrans = param(1)
       udiff  = param(2)
    elseif (ifield.eq.ifldmhd) then    ! B-field
       utrans = 1.0
       R1 = sqrt(x*x+y*y)
       if (R1 .le. 2.0) then
          udiff = param(29)
       else
          udiff = 1000.0D0*param(29)
       endif
    endif
    return
    end

I get an error:

    done :: set initial conditions

    ERROR: Non-positive diffusivity (  0.00 ) specified for field 2, group 0 element
    ERROR: Non-positive diffusivity (  0.00 ) specified for field 2, group 0 element

Attached are the .usr, .rea and the logfile.

When I set p30=0.000 and just utrans=0.0, udiff=0.0 in uservp, the
simulation runs fine. Thank you in advance.

Cheers,
Sandeep

-------------- next part --------------
Attachments: gpf.usr (12369 bytes), gpf.rea (14433846 bytes), gpf.log.16 (18825 bytes)

From nek5000-users at lists.mcs.anl.gov Thu Dec 1 15:28:21 2016
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Thu, 1 Dec 2016 21:28:21 +0000
Subject: [Nek5000-users] uservp MHD error

Hi Sandeep,

The MHD implementation in Nek5000 works only for constant properties so
far, but it is possible to generalize it to a case with ifuservp=.true.
Let me know if you want to discuss this offline: obabko at mcs.anl.gov

Aleks

From nek5000-users at lists.mcs.anl.gov Thu Dec 1 21:21:28 2016
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Thu, 1 Dec 2016 20:21:28 -0700
Subject: [Nek5000-users] Questions on turbChannel

Hi Marco,

In my experience it helped to restructure the eddy_visc code so that the
element loop is inside the subroutine as well, i.e. all the calculations
of the stresses etc. are done as follows. With the previous snippet my
code froze whenever the number of processors did not divide nelv, i.e.
whenever nelv/np had a non-zero remainder. The following code removes
that shortcoming:

    subroutine eddy_visc(ediff,cs)
    include 'SIZE'
    include 'TOTAL'
    integer e
    real zn0,zn01,zn02
    integer ntot
    real cs,kappa,z0
    real ediff(lx1,ly1,lz1,lelv)
    common /dynsmg/ sij (lx1,ly1,lz1,ldim,ldim)
   $              , dg2 (lx1,ly1,lz1,lelv)
   $              , snrm(lx1,ly1,lz1,lelv)

    cstat = 0.16
    ntot  = nx1*ny1*nz1
    call set_grid_spacing(dg2)
    if (nid.eq.0) write(6,*) 'Calculating eddy viscosity',session
    do e=1,nelv
       call comp_gije(sij,vx(1,1,1,e),vy(1,1,1,e),vz(1,1,1,e),e)
       call comp_sije(sij)
       call mag_tensor_e(snrm(1,1,1,e),sij)
       call cmult(snrm(1,1,1,e),2.0,ntot)
       do k = 1,nz1
       do j = 1,ny1
       do i = 1,nx1
          ediff(i,j,k,e) = param(2)
   &                     + (cstat**2)*snrm(i,j,k,e)
       enddo
       enddo
       enddo
    enddo
    return
    end

2. The ifsplit flag selects the formulation: ifsplit = .true. --> PnPn,
and ifsplit = .false. --> PnPn-2. The ifstrs formulation works only with
the PnPn-2 formulation (at least in the older versions). If you use the
PnPn-2 formulation, ifexplvis = .true. has no effect.

3. set_ds_filt() is the routine required for the second level of
filtering in dynamic Smagorinsky, i.e. it is required for the calculation
of L_ij and M_ij. The first level of filtering is assumed to come from
the coarse grid.
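For intuition, the per-point update in the loop above is a constant-coefficient Smagorinsky closure: the effective diffusivity is the molecular viscosity plus the squared constant times the strain-rate magnitude. The standard form also carries the squared grid spacing dg2, which the snippet above computes via set_grid_spacing but folds into the constant. A minimal NumPy sketch of the pointwise formula follows; the array names and sample values are illustrative, not Nek5000 variables:

```python
import numpy as np

def eddy_viscosity(snrm, dg2, nu, cs=0.16):
    """Pointwise Smagorinsky eddy viscosity:
    nu_t = nu + (cs**2) * dg2 * snrm,
    where snrm holds 2*|S| (the doubled strain-rate magnitude)
    and dg2 holds the squared local grid spacing."""
    return nu + (cs**2) * dg2 * snrm

# toy field: strain-rate magnitude on a few points
snrm  = np.array([0.0, 1.0, 2.0])   # illustrative values
dg2   = np.full(3, 0.01)            # assumed squared grid spacing
ediff = eddy_viscosity(snrm, dg2, nu=1e-3)
```

Where the strain rate vanishes, ediff reduces to the molecular viscosity, which is why the model never drives the diffusivity below param(2).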
I would recommend reading the mathematics of the dynamic Smagorinsky
model to better understand the procedure. (See
http://www.scholarpedia.org/article/Turbulence:_Subgrid-Scale_Modeling)

Best Regards,
Tanmoy

On Wed, Nov 30, 2016 at 7:56 AM, wrote:
> Hello,
>
> I have a few questions on the turbChannel example.
>
> 1) The "eddy viscosity" is computed in the function eddy_visc, which is
> called in a loop over the elements of mesh 1. Inside eddy_visc there is
> the following piece of code:
>
>     if (e.eq.nelv) then   ! planar avg and define nu_tau
>        ...
>        ntot = nx1*ny1*nz1*nelv
>        do i=1,ntot
>           cdyn = 0
>           if (den(i,1).gt.0) cdyn = 0.5*num(i,1)/den(i,1)
>           cdyn = 0.16*0.16  ! max(cdyn,0.)  ! AS ALTERNATIVE, could clip ediff
>           ediff(i,1) = param(2)+cdyn*dg2(i,1)*snrm(i,1)
>        enddo
>     endif
>
> ediff, which stores the eddy viscosity, is updated only when the last
> element of mesh 1 is reached. Is there any reason why it should be done
> in this manner?
>
> 2) I am still confused about the differences between "ifsplit",
> "ifexplvis" and "ifstrs". From the archive and the documentation I
> understand the following (please correct me if I am wrong):
>
> - "ifexplvis" is set to true when the user specifies the viscosity
>   coefficient in uservp. It does not require, however, the stress
>   formulation, meaning that the diffusive term in the NS equations
>   looks like \mu(x,t) \nabla^2 \vec{u}.
> - When "ifsplit" is set to true, the viscosity is split into an
>   explicit part and an implicit part. If this is correct, how and where
>   are the explicit and implicit parts defined?
> - "ifstrs": when set to true, "ifexplvis" and "ifsplit" do not have any
>   effect.
>
> 3) Could someone shed some light on the subroutine "set_ds_filt" in
> turbChannel.usr, please? It seems to compute some sort of filter that
> is then used to compute M_{i,j} and L_{i,j}.
>
> Thanks,
> Marco

_______________________________________________
Nek5000-users mailing list
Nek5000-users at lists.mcs.anl.gov
https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users

From nek5000-users at lists.mcs.anl.gov Fri Dec 2 01:01:20 2016
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Fri, 2 Dec 2016 15:01:20 +0800
Subject: [Nek5000-users] Nek5000-users Digest, Vol 94, Issue 1

Hi Sandeep,

Is your self-defined udiff a non-dimensional one? If not, maybe you can
try to set dimensionless properties in the uservp subroutine; then the
non-positive error may disappear.

Cheers,
Bolun

----------------------------------------------------------------
Bolun Xu
University of Science and Technology of China
Department of Modern Mechanics
Hefei, Anhui, China
From nek5000-users at lists.mcs.anl.gov Fri Dec 2 08:32:09 2016
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Fri, 2 Dec 2016 14:32:09 +0000
Subject: [Nek5000-users] Questions on turbChannel

Hi Tanmoy,

Thank you for taking the time to answer my questions and for sharing
your piece of code. It is very helpful.

Marco
_______________________________________________
Nek5000-users mailing list
Nek5000-users at lists.mcs.anl.gov
https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users

From nek5000-users at lists.mcs.anl.gov Sun Dec 4 17:15:54 2016
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Mon, 5 Dec 2016 00:15:54 +0100
Subject: [Nek5000-users] Genbox periodic simple case

Dear Users,

I have used genbox for ages, but now I need to split a channel in the x
direction and cannot make genmap recognize the periodic boundary
conditions. Can a periodic boundary condition be separated by a box? I
need periodic boundary conditions in x and z. This is my .box:

    -------------------------
    parameters.rea
    -3
    1
    # Left box
    Box1
    -10 -10 -10
    0.000 2.000 1.00
    0.000 2.000 1.00
    0.000 2.000 1.00
    P  ,E  ,W  ,W  ,P  ,P     bc's
    # Right box
    Box2
    -10 -10 -10
    2.000 4.000 1.00
    0.000 2.000 1.00
    0.000 2.000 1.00
    E  ,P  ,W  ,W  ,P  ,P     bc's
    --------------------------

I thought it should be really simple, but I cannot get it to work...
any help/insight?

Regards,
JP
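As a sanity check on a multi-box periodic setup like the .box above, the two boundary planes to be paired periodically must at least have identical transverse extents. Here is a hypothetical helper for that check; it is not a genbox or genmap feature, just an illustration of the geometric requirement:

```python
def faces_match(box_a, box_b, axis, tol=1e-12):
    """box = (xmin, xmax, ymin, ymax, zmin, zmax).
    Check that the faces of box_a and box_b normal to `axis` have
    identical extents in the two transverse directions, so that a
    periodic pairing across them is geometrically possible."""
    for ax in range(3):
        if ax == axis:
            continue
        lo_a, hi_a = box_a[2*ax], box_a[2*ax + 1]
        lo_b, hi_b = box_b[2*ax], box_b[2*ax + 1]
        if abs(lo_a - lo_b) > tol or abs(hi_a - hi_b) > tol:
            return False
    return True

left  = (0.0, 2.0, 0.0, 2.0, 0.0, 2.0)   # Box1 from the .box above
right = (2.0, 4.0, 0.0, 2.0, 0.0, 2.0)   # Box2 from the .box above
ok = faces_match(left, right, axis=0)     # x-periodic pair is geometrically valid
```

Even when this check passes, the pairing still has to be made between elements, which is the part genbox handles only within a single sub-box.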
From nek5000-users at lists.mcs.anl.gov Sun Dec 4 22:37:52 2016
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Mon, 5 Dec 2016 04:37:52 +0000
Subject: [Nek5000-users] Genbox periodic simple case

Hi JP,

Periodicity can only be induced on a single "sub box", so you'll need to
rethink your approach a bit. (I tried to look at your example, but am
guessing that it is simpler than your true target.) I can see that there
are some configurations that definitely wouldn't work in genbox.

Here is one idea: set the "P  " bcs to "   " (3 blanks). Then just run
the output through pretex (text-based prenek) and, when you get to the
BC menu, SET ENTIRE LEVEL AUTO-PERIODIC. This should just work...
(Beware, there are some tricky issues with periodicity when using
millions of elements -- it's a workflow issue that we're trying to clean
up.)

Paul
From nek5000-users at lists.mcs.anl.gov Sun Dec 4 23:12:09 2016
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Mon, 5 Dec 2016 13:12:09 +0800
Subject: [Nek5000-users] Parallel speedup on supercomputer Tianhe-2

Dear Neks,

I'm using Nek5000 to simulate turbulent Rayleigh-Benard convection,
which is governed by the coupled Navier-Stokes equations and the
convective heat equation. I'm running the code on a supercomputer,
Tianhe-2, located in Guangzhou, China. Each compute node in Tianhe-2 has
24 cores (2 Xeon E5 12-core CPUs) and 64GB of memory. I find that the
speedup curve is not linear on a single node. For example, a 24-task job
is only 8 times faster than the serial one. However, the performance
with an increasing number of nodes is quite good. Is there any parameter
in Nek5000 that I can change in order to improve the speedup on
individual nodes?

Thanks in advance!

Best regards,
Wei XU
----------
Wei XU
Ph.D. Candidate
Nano Science and Technology Program
The Hong Kong University of Science and Technology

From nek5000-users at lists.mcs.anl.gov Mon Dec 5 04:23:39 2016
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Mon, 05 Dec 2016 11:23:39 +0100
Subject: [Nek5000-users] Parallel speedup on supercomputer Tianhe-2

What's your problem size (number of elements and polynomial order)?

Let's assume t_MPI << t (this holds if your problem size is reasonably
large). Even in this limit you don't get a linear intra-node speedup,
simply because Nek5000 is not purely compute bound and the cumulative
memory bandwidth is already saturated with N cores (N < total number of
cores).
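A toy model makes this concrete: split the serial runtime into a compute-bound fraction that scales with core count and a bandwidth-bound fraction that stops scaling once a few cores saturate the node's DDR bandwidth. The fractions and saturation point below are assumptions for illustration, not measurements of Nek5000:

```python
def intranode_speedup(p, bw_fraction=0.7, sat_cores=8):
    """Toy speedup model for p cores on one node.

    bw_fraction: share of serial runtime that is memory-bandwidth bound
                 (assumed, not measured).
    sat_cores:   number of cores that saturates the node's DDR bandwidth
                 (assumed, not measured).
    """
    t_compute = (1.0 - bw_fraction) / p           # keeps scaling with cores
    t_memory  = bw_fraction / min(p, sat_cores)   # stops scaling at saturation
    return 1.0 / (t_compute + t_memory)
```

With these assumed numbers, intranode_speedup(24) gives 10.0, the same ballpark as the roughly 8x reported above; beyond the saturation point, extra cores only accelerate the compute-bound share.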
Cheers,
Stefan

From nek5000-users at lists.mcs.anl.gov Mon Dec 5 09:52:40 2016
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Mon, 5 Dec 2016 15:52:40 +0000
Subject: [Nek5000-users] Maintain flow rate in angled geometry

Dear Neks,

Recently I have been running test simulations of an angled square duct.
The shapes of the inlet and outlet are identical, and the actual domain
is longer than in my sketch. [ASCII sketch of the angled duct omitted]

Simulations with a velocity inlet and outflow conditions worked well.
Now I want to set up periodic boundary conditions at the inlet and
outlet. I'm wondering how I can maintain a constant flow rate in this
simulation. In a normal periodic channel flow simulation, setting
parameters p54 and p55 ensures a constant mean velocity. But in my case
the flow changes direction. Does anyone know how to set a constant flow
rate / mean velocity in this type of simulation?

Thank you very much in advance.

Kind regards,
Tony

From nek5000-users at lists.mcs.anl.gov Tue Dec 6 02:25:22 2016
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Tue, 6 Dec 2016 16:25:22 +0800
Subject: [Nek5000-users] Parallel speedup on supercomputer Tianhe-2

Dear Stefan,

Thank you for your reply. There are 139056 elements and the polynomial
order is 7 (lx1=8). I measure the solver time to compute the speedup.
For example, the serial job takes 2564.59s and the same job with 24
tasks takes 302.18s. The speedup is about 8.5. This is on a single
Tianhe-2 node. The speedup between nodes is quite good.

I also tested the code on my 36-core computer (dual Xeon E5 18-core). I
can only get about a 12x speedup when I use 36 tasks. That is also about
1/3.

Best regards,
Wei XU
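For scale, the figures Wei reports can be tallied directly; this is a back-of-the-envelope check, taking lx1=8 so each element carries 8^3 gridpoints:

```python
nelv, lx1 = 139056, 8           # element count and points per direction
npts = nelv * lx1**3            # total gridpoints in the mesh
per_rank = npts // 24           # points per core on one 24-core node
speedup = 2564.59 / 302.18      # measured serial vs 24-task solver time
efficiency = speedup / 24       # parallel efficiency on a full node
```

About three million gridpoints per core is far above the strong-scaling limit, which supports the view that the flat intra-node curve comes from shared-resource contention rather than from too little work per rank.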
From nek5000-users at lists.mcs.anl.gov Tue Dec 6 01:50:09 2016
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Tue, 6 Dec 2016 08:50:09 +0100
Subject: [Nek5000-users] Genbox periodic simple case

Hi JP,

I had a similar issue with genbox when I wanted to create a simple mesh
where periodic BCs are shared between different boxes. I haven't tested
Paul's suggestion yet, but I wrote a genbox-inspired Python script that
did the job in my case.

You can find it here: https://github.com/Steffen1989/code-snippets
(genbox.py)

The input is very similar to genbox, but you can specify a periodic BC
shared between two boxes with P01, a second one with P02, ..., and the
same with internal BCs, i.e. E01, E02, ... This creates a 2D mesh, which
is afterwards extruded with n2to3 in the third direction.

Note that it has not been tested thoroughly, but you are welcome to give
it a try and contact me if you run into trouble.

\Steffen

From nek5000-users at lists.mcs.anl.gov Tue Dec 6 09:34:04 2016
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Tue, 6 Dec 2016 16:34:04 +0100
Subject: [Nek5000-users] Parallel speedup on supercomputer Tianhe-2

That's reasonable. The cores compete for shared resources (L3 and DDR).
Each computer node in Tianhe-2 has 24 cores (2 Xeon E5 12-core CPUs) and 64GB memory. I find the speedup curve is not linear on a single node. For example, a 24-task job is only 8 times faster than the serial one. However, the performance with an increasing number of nodes is quite good. I don't know whether there is any parameter in nek500 that I can change in order to improve the speedup performance of the individual nodes. > > Thanks in advance! > > Best regards, > > Wei XU > > _______________________________________________ > Nek5000-users mailing list > Nek5000-users at lists.mcs.anl.gov > https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users -------------- next part -------------- An HTML attachment was scrubbed... URL: From nek5000-users at lists.mcs.anl.gov Wed Dec 7 01:36:05 2016 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Wed, 7 Dec 2016 08:36:05 +0100 Subject: [Nek5000-users] Genbox periodic simple case Message-ID: Thank you both Paul and Steffen for your quick response and help. I will try them out and let you know if I run into problems. Thanks again, JP > > ------------------------------ > > Message: 2 > Date: Tue, 6 Dec 2016 08:50:09 +0100 > From: nek5000-users at lists.mcs.anl.gov > To: > Subject: [Nek5000-users] Genbox periodic simple case > Message-ID: > > Content-Type: text/plain; charset="utf-8"; format=flowed > > Hi JP, > > I had a similar issue with genbox when I wanted to create a simple mesh > where periodic BCs are shared between different boxes. I haven't tested > Paul's suggestion yet but I wrote a genbox inspired python script that > did the job in my case. > > You can find it here: https://github.com/Steffen1989/code-snippets > (genbox.py) > The Input is very similar to genbox but you can specify a periodic BC > shared between two boxes with P01, a second one with P02, ... and the > same with internal BCs, i.e. E01, E02, ... 
> > This can create a 2d mesh, which is afterwards extruded with n2to3 in > the third direction. > > Note that it has not been tested thoroughly but you are welcome to give > it a try and contact me if you run into troubles. > > > \Steffen > > > > _______________________________________________ > Nek5000-users mailing list > Nek5000-users at lists.mcs.anl.gov > https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users > > > End of Nek5000-users Digest, Vol 94, Issue 5 > ******************************************** > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nek5000-users at lists.mcs.anl.gov Wed Dec 7 03:39:32 2016 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Wed, 7 Dec 2016 17:39:32 +0800 Subject: [Nek5000-users] Parallel speedup on supercomputer Tianhe-2 Message-ID: Dear Stefan, Thank you very much! Can I do anything on the Nek5000 side to reduce the problem? Best regards Wei XU Date: Tue, 6 Dec 2016 16:34:04 +0100 From: nek5000-users at lists.mcs.anl.gov To: nek5000-users at lists.mcs.anl.gov Subject: Re: [Nek5000-users] Parallel speedup on supercomputer Tianhe-2 Message-ID: Content-Type: text/plain; charset="us-ascii" That'a reasonable. The core compete for shared resources (L3 and DDR). -------------- next part -------------- An HTML attachment was scrubbed... URL: From nek5000-users at lists.mcs.anl.gov Wed Dec 7 04:58:07 2016 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Wed, 7 Dec 2016 11:58:07 +0100 Subject: [Nek5000-users] uservp MHD Message-ID: Hello All, Thank you Aleks for your reply. I removed ? if(ifield.eq.1) and ? elseif(ifield.eq.ifldmhd) from uservp and did some more testing by setting p30>0 in the rea file Now my uservp contains one of the below cases Case1: utrans = param(1) R1 = sqrt(x*x+y*y); if(R1 .le. 2.0) then udiff= param(29) else udiff= 5.0D0*param(29) endif OR Case 2: alpha=10.0D0 one = 1. 
pi = 4.*atan(one) Rc = 6.7D0 utrans = param(1) R1 = sqrt(x*x+y*y); if(R1 .le. 2.0) then udiff= param(29) elseif(R1.gt.2.0 .and. R1.le.6.7) then sf=sin((R1-2.0D0)/(Rc-2.0D0)*pi/2.0D0) udiff= param(29)+(alpha-1.0D0)*param(29)*sf else udiff= alpha*param(29) endif When I run the code I DONOT get an error for both the above cases and the evolution of magnetic field is different from the case of uniform properties. I am solving a kinematic dynamo so I impose velocity field in the userchk, and velocity field is not affected by the change in viscosity. My questions are whether am I getting correct solution or not for uservp MHD? 1) If I do the above in the uservp, is the code changing the magnetic diffusivity at the desired location? 2) When the magnetic diffusivity is varying with space then there is an additional term in the B eqs, i.e., (curl B) x grad(eta), does nek5000 take into account this additional term. In short, what I'm looking for is, our fluid domain (eta1) is surrounded by an external domain (eta2), where the magnetic diffusivity eta2=10 or 100*eta1. The change in viscosity doesn't matter because the velocity field is forced to zero in the outer domain. Thanks in advance. Yours sincerely, Sandeep > Hi Sandeep, > > The MHD implementation in Nek5000 works only for constant properties so > far but it is possible to generalize it to a case with ifuservp=.true. Let > me know if you want to discuss this offline: obabko at mcs.anl.gov > > Aleks > ________________________________ > From: nek5000-users-bounces at lists.mcs.anl.gov [ > nek5000-users-bounces at lists.mcs.anl.gov] on behalf of > nek5000-users at lists.mcs.anl.gov [nek5000-users at lists.mcs.anl.gov] > Sent: Thursday, December 01, 2016 11:31 AM > To: nek5000-users at lists.mcs.anl.gov > Subject: [Nek5000-users] uservp MHD error > > Hello All, > > I want to run MHD case with different magnetic diffusivities in different > region of the domain. > Can we run MHD run with different magnetic diffusivities? 
> Assuming that udiff=magnetic diffusivity (eta), in the uservp file I added > the following > > 1.0000000E+00 p30 > 0 ==> properties set in uservp() > > subroutine uservp (ix,iy,iz,ieg) > include 'SIZE' > include 'TOTAL' > include 'NEKUSE' > > if (ifield.eq.1) then ! velocity > utrans = param(1) > udiff = param(2) > elseif (ifield.eq.ifldmhd) then ! B-field > utrans = 1.0 > R1 = sqrt(x*x+y*y); > if(R1 .le. 2.0) then > udiff= param(29) > else > udiff= 1000.0D0*param(29) > endif > endif > return > end > > I get an error > > done :: set initial conditions > > ERROR: Non-positive diffusivity ( 0.00 ) specified for field 2, > group 0 element > ERROR: Non-positive diffusivity ( 0.00 ) specified for field 2, > group 0 element > > Attached are the .usr, .rea and the logfile > > When I set p30=0.000 and just utrans=0.0 udiff=0.0 in uservp, the > simulation runs fine. > Thank you in advance. > > Cheers, > Sandeep > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: gpf.usr Type: application/octet-stream Size: 12199 bytes Desc: not available URL: From nek5000-users at lists.mcs.anl.gov Wed Dec 7 05:01:42 2016 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Wed, 07 Dec 2016 12:01:42 +0100 Subject: [Nek5000-users] Parallel speedup on supercomputer Tianhe-2 In-Reply-To: References: Message-ID: Why do you think something is wrong, i.e. you have a problem? From: on behalf of Reply-To: Date: Wednesday, December 7, 2016 at 10:39 AM To: Subject: Re: [Nek5000-users] Parallel speedup on supercomputer Tianhe-2 Dear Stefan, Thank you very much! Can I do anything on the Nek5000 side to reduce the problem?
Best regards Wei XU Date: Tue, 6 Dec 2016 16:34:04 +0100 From: nek5000-users at lists.mcs.anl.gov To: nek5000-users at lists.mcs.anl.gov Subject: Re: [Nek5000-users] Parallel speedup on supercomputer Tianhe-2 Message-ID: Content-Type: text/plain; charset="us-ascii" That's reasonable. The cores compete for shared resources (L3 and DDR). _______________________________________________ Nek5000-users mailing list Nek5000-users at lists.mcs.anl.gov https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users -------------- next part -------------- An HTML attachment was scrubbed... URL: From nek5000-users at lists.mcs.anl.gov Wed Dec 7 02:31:10 2016 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Wed, 07 Dec 2016 11:31:10 +0300 Subject: [Nek5000-users] file2 In-Reply-To: References: Message-ID: A non-text attachment was scrubbed... Name: chan_395_LES_inflow.000008 Type: application/octet-stream Size: 399360 bytes Desc: not available URL: From nek5000-users at lists.mcs.anl.gov Wed Dec 7 03:34:18 2016 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Wed, 7 Dec 2016 09:34:18 +0000 Subject: [Nek5000-users] How to create mesh? Message-ID: Hi Stefan, Would you please tell me how to display the mesh generated by genbox or prenek? Thanks, Jian -------------- next part -------------- An HTML attachment was scrubbed... URL: From nek5000-users at lists.mcs.anl.gov Wed Dec 7 06:32:50 2016 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Wed, 7 Dec 2016 20:32:50 +0800 Subject: [Nek5000-users] Parallel speedup on supercomputer Tianhe-2 Message-ID: Dear Stefan, OK. I see. Thank you very much! Best, Wei XU Date: Wed, 07 Dec 2016 12:01:42 +0100 From: nek5000-users at lists.mcs.anl.gov To: Subject: Re: [Nek5000-users] Parallel speedup on supercomputer Tianhe-2 Message-ID: Content-Type: text/plain; charset="utf-8" Why do you think something is wrong, i.e.
you have a problem? -------------- next part -------------- An HTML attachment was scrubbed... URL: From nek5000-users at lists.mcs.anl.gov Wed Dec 7 06:39:46 2016 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Wed, 07 Dec 2016 12:39:46 +0000 Subject: [Nek5000-users] Error reading bigger domains in postnek. Message-ID: Hi Neks, I have been having trouble reading my domain with 3 scalars and 3072 elements in postnek. The same domain can, however, be read without the scalars. The same domain with fewer elements, e.g. 768, can also be read fine with or without scalars. I am getting this kind of error in the terminal. postx window width and/or height > MAX_WINDOWW/H! -*-Helvetica-Medium-R-Normal--*-*-*-*-*-*-*-* NEKTON Version 2.6 Enter Session Name --Default= g40cond_as Beginning Session g40cond_as renaming SESSION.NAME SESSION.NAME ~ Session name: g40cond_as renaming g40cond_as.plt01 g40cond_as.plt01~ LTRUNC: string: 28 Tue Oct 11 15:13:33 PDT 2016 Reading geometry from g40cond_as .rea XYZ Min,Max: -40.0000 40.0000 -40.0000 40.0000 0.00000 1.00000 3072 3017 1 TRYING TO READ BC 3072 3072 2 TRYING TO READ BC 3072 3072 3 TRYING TO READ BC 3072 3072 4 TRYING TO READ BC 3072 3072 5 TRYING TO READ BC LTRUNC: string: 80 E E I E E I E E I E E E E I E E E E E E E E E E E E E LTRUNC: string: 80 E E I E E I E E I E E E E I E E E E E E E E E E E E E LTRUNC: string: 80 E E I E E I E E I E E E E I E E E E E E E E E E E E E this is iffmat: F 6.0000000 E E I E E I E E I E E E E I E E E E E E E E E E E E E E E I E E I E E I E E E E I E E E E E E E E E E E E E opening file: 80 E E I E E I E E I E E E E I E E E E E E E E E E E E E this is param(66): 6.0000000 LTRUNC: string: 80 E E I E E I E E I E E E E I E E E E E E E E E E E E E WARNING: Could not open file. E E I E E I E E I E E E E I E E E E E E E E E E E E E ** ERROR ** Can't open file g40cond_as.his Any help is appreciated. Thanks, Saikat.
-------------- next part -------------- An HTML attachment was scrubbed... URL: From nek5000-users at lists.mcs.anl.gov Thu Dec 8 13:08:28 2016 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Thu, 8 Dec 2016 19:08:28 +0000 Subject: [Nek5000-users] How to create mesh? Message-ID: Hi Jian, There are multiple ways to visualize a mesh (let's say your mesh is in a1.rea): 1. VisIt - You could run your case with Nek with lx1 set to 3 (or even 2 for a 2D case) in the SIZE file, and in usrdat2 in a1.usr copy the following two lines: call outpost(vx,vy,vz,pr,t,'   ') call exitt This will outpost a fld file and exit Nek. You can then run VisIt as usual to visualize the mesh (please note that if you use lx1 = 3, the mesh you see will be twice as dense as the actual mesh). Also, to my knowledge, lx1=2 will work only for a 2D mesh if you take this route. 2. Prenek - go to the folder that has a1.rea and run prenek: a. When it asks for the name of the session, type in a dummy case name (for example: zz) and hit enter. b. Click on "READ PREVIOUS PARAMETERS" and write "a1" c. Click on "BUILD FROM FILE" and write "a1" d. If the mesh is not shown on the screen in prenek, hit "REDRAW MESH" e. You can then use the zoom option to set up your view You can also use postnek to get vector plots of your mesh if you are trying to get a high-quality picture of your mesh for an article. Let us know if you need help with that.
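The "twice as dense" remark above follows from counting GLL points: each spectral element carries lx1 points per direction, i.e. lx1-1 plotted sub-cells. A minimal sketch of that arithmetic (the element count below is hypothetical, chosen only for illustration):

```python
# Each spectral element has lx1 GLL points per direction,
# i.e. (lx1 - 1) sub-cells, which is what the plotted grid shows.
def plotted_cells_per_direction(n_elements, lx1):
    return n_elements * (lx1 - 1)

n_elem = 10  # hypothetical number of elements along one direction
print(plotted_cells_per_direction(n_elem, 2))  # 10: matches the element mesh
print(plotted_cells_per_direction(n_elem, 3))  # 20: looks twice as dense
```

So with lx1=2 the plotted grid coincides with the element mesh, while lx1=3 doubles the apparent resolution in each direction.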
Hope this helps, Ketan From nek5000-users at lists.mcs.anl.gov Mon Dec 12 08:58:25 2016 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Mon, 12 Dec 2016 15:58:25 +0100 Subject: [Nek5000-users] Periodic and non-periodic BC, mesh consistency check In-Reply-To: <247C7BC9-3BAE-4F70-AB62-9867A5C0D6A8@gmail.com> References: <247C7BC9-3BAE-4F70-AB62-9867A5C0D6A8@gmail.com> Message-ID: Dear Neks I am trying to simulate a channel flow with a passive scalar that is introduced into the channel as a point source near the entrance. Clearly, the scalar is not periodic. I do not seem to be able to set periodic boundary conditions for the velocity in the streamwise direction together with non-periodic boundary conditions for the scalar (I tried pretty much every combination of non-periodic boundary conditions). I get the error: Mesh consistency check failed. EXITING in VRDSMSH. Does Nek5000 not support periodic and non-periodic conditions for velocity and scalar, respectively? Thanks for your help, Agnese Agnese Seminara -------------------------------- CNRS Laboratoire de physique de la matière condensée Parc Valrose avenue J Vallot 06108 Nice, France +33 (0) 492 076 775 http://sites.unice.fr/site/aseminara/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From nek5000-users at lists.mcs.anl.gov Tue Dec 13 02:47:31 2016 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Tue, 13 Dec 2016 16:47:31 +0800 Subject: [Nek5000-users] Parallel speedup on supercomputer Tianhe-2 In-Reply-To: References: Message-ID: Dear Stefan, Sorry, I didn't make myself clear. What I meant is whether we can further improve the parallel computing performance, as I am only using about 1/3 of the computing power of each node. Thank you very much!
Best regards, Wei XU Date: Wed, 07 Dec 2016 12:01:42 +0100 From: nek5000-users at lists.mcs.anl.gov To: Subject: Re: [Nek5000-users] Parallel speedup on supercomputer Tianhe-2 Message-ID: Content-Type: text/plain; charset="utf-8" Why do you think something is wrong, i.e. you have a problem? From: on behalf of < nek5000-users at lists.mcs.anl.gov> Reply-To: Date: Wednesday, December 7, 2016 at 10:39 AM To: Subject: Re: [Nek5000-users] Parallel speedup on supercomputer Tianhe-2 Dear Stefan, Thank you very much! Can I do anything on the Nek5000 side to reduce the problem? Best regards Wei XU Date: Tue, 6 Dec 2016 16:34:04 +0100 From: nek5000-users at lists.mcs.anl.gov To: nek5000-users at lists.mcs.anl.gov Subject: Re: [Nek5000-users] Parallel speedup on supercomputer Tianhe-2 Message-ID: Content-Type: text/plain; charset="us-ascii" That's reasonable. The cores compete for shared resources (L3 and DDR). _______________________________________________ Nek5000-users mailing list Nek5000-users at lists.mcs.anl.gov https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users -------------- next part -------------- An HTML attachment was scrubbed... URL: ------------------------------ Date: Tue, 6 Dec 2016 16:25:22 +0800 From: nek5000-users at lists.mcs.anl.gov To: nek5000-users at lists.mcs.anl.gov Subject: Re: [Nek5000-users] Parallel speedup on supercomputer Tianhe-2 Message-ID: Content-Type: text/plain; charset="utf-8" Dear Stefan, Thank you for your reply. There are 139056 elements and the polynomial order is 7 (lx1=8). I measure the solver time to compute the speedup. For example, the serial job takes 2564.59s and the same job with 24 tasks takes 302.18s. The speedup is about 8.5. This is on a single Tianhe-2 node. The speedup between nodes is quite good. I also tested the code on my 36-core computer (dual Xeon E5 18-core). I can only get about a 12 times speedup when I use 36 tasks. That is also about 1/3.
Best regards, Wei XU -------------- next part -------------- An HTML attachment was scrubbed... URL: From nek5000-users at lists.mcs.anl.gov Tue Dec 13 05:30:50 2016 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Tue, 13 Dec 2016 12:30:50 +0100 Subject: [Nek5000-users] Parallel speedup on supercomputer Tianhe-2 Message-ID: Typically people look at inter-node scaling when they talk about parallel performance. Intra-node scaling (the case you look at) is a different story. Let's consider two extremes: 1. The STREAM triad benchmark is purely memory bound. On an E5-2570v3 (Intel Haswell) you'll get ~40 GB/sec using 2 cores (1 of the 18 cores per socket). If this scaled linearly, we should see ~720 GB/sec using all 36 cores. However, in reality you'll get an aggregate peak bandwidth of ~90 GB/sec. So it does _not_ scale linearly. In fact, it levels off after 12 cores (6 cores per socket). 2. The DGEMM benchmark is purely compute bound. On an E5-2570v3 (Intel Haswell) you'll get ~80% of total peak floating-point performance using all 36 cores. It scales more or less linearly with the number of cores. Nek5000 (like _all_ other PDE solvers) is somewhere in between. On most systems we'll get 5-20% of peak depending on the hardware architecture and polynomial order. Compare this with other PDE-based solvers. Cheers, Stefan -----Original message----- > From:nek5000-users at lists.mcs.anl.gov > Sent: Tuesday 13th December 2016 11:48 > To: nek5000-users at lists.mcs.anl.gov > Subject: Re: [Nek5000-users] Parallel speedup on supercomputer Tianhe-2 > > Dear Stefan, > > Sorry, I didn't make myself clear. What I meant is whether we can further improve the parallel computing performance, as I am only using about 1/3 of the computing power of each node. Thank you very much! > > Best regards, > Wei XU > > Date: Wed, 07 Dec 2016 12:01:42 +0100
From: nek5000-users at lists.mcs.anl.gov
To: >
Subject: Re: [Nek5000-users] Parallel speedup on supercomputer
        Tianhe-2
Message-ID:
        >
Content-Type: text/plain; charset="utf-8"

Why do you think something is wrong, i.e. you have a problem?

From: > on behalf of >
Reply-To: >
Date: Wednesday, December 7, 2016 at 10:39 AM
To: >
Subject: Re: [Nek5000-users] Parallel speedup on supercomputer Tianhe-2

Dear Stefan,

Thank you very much! Can I do anything on the Nek5000 side to reduce the problem?

Best regards
Wei XU

Date: Tue, 6 Dec 2016 16:34:04 +0100
From: nek5000-users at lists.mcs.anl.gov
To: nek5000-users at lists.mcs.anl.gov
Subject: Re: [Nek5000-users] Parallel speedup on supercomputer
        Tianhe-2
Message-ID:
        >
Content-Type: text/plain; charset="us-ascii"

That's reasonable. The cores compete for shared resources (L3 and DDR).

_______________________________________________ Nek5000-users mailing list Nek5000-users at lists.mcs.anl.gov https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users

-------------- next part --------------
An HTML attachment was scrubbed...
URL: >

------------------------------ > > Date: Tue, 6 Dec 2016 16:25:22 +0800
From: nek5000-users at lists.mcs.anl.gov
To: nek5000-users at lists.mcs.anl.gov
Subject: Re: [Nek5000-users] Parallel speedup on supercomputer
        Tianhe-2
Message-ID:
        >
Content-Type: text/plain; charset="utf-8"

Dear Stefan,

Thank you for your reply. There are 139056 elements and the polynomial
order is 7 (lx1=8). I measure the solver time to compute the speedup. For
example, the serial job takes 2564.59s and the same job with 24 tasks takes
302.18s. The speedup is about 8.5. This is on a single Tianhe-2 node. The
speedup between nodes is quite good.

I also test the code on my 36-core computer (Dual Xeon E5 18-Core). I can
only get about 12 times speedup when I use 36 tasks. It is also about 1/3.
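The "about 1/3" figure quoted above follows directly from the reported timings. A quick check, using only the numbers given in this thread (2564.59 s serial vs. 302.18 s on 24 tasks):

```python
# Intra-node speedup and parallel efficiency from the reported solver times.
def speedup(t_serial, t_parallel):
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, n_tasks):
    # Fraction of ideal linear speedup actually achieved.
    return speedup(t_serial, t_parallel) / n_tasks

# Tianhe-2, single node: serial 2564.59 s, 24 tasks 302.18 s
s = speedup(2564.59, 302.18)
e = efficiency(2564.59, 302.18, 24)
print(f"speedup ~{s:.1f}, efficiency ~{e:.0%}")  # speedup ~8.5, efficiency ~35%
```

An efficiency of roughly 35% on 24 tasks matches the "about 1/3 of the computing power" observation, consistent with the memory-bandwidth saturation described earlier in the thread.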

Best regards,
Wei XU > > _______________________________________________ > Nek5000-users mailing list > Nek5000-users at lists.mcs.anl.gov > https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users From nek5000-users at lists.mcs.anl.gov Mon Dec 12 11:00:26 2016 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Mon, 12 Dec 2016 17:00:26 +0000 Subject: [Nek5000-users] moving mesh and outlet BC Message-ID: Dear Neks, Here is a video from a contracting pipe that has an outlet BC on the non-moving side boundary. The code morphs the mesh, but the solution is not converged (message: Unconverged Helmholtz3/Mesh). The velocity direction correctly follows the moving boundary, but the pressure switches only at a full cycle, and is thus not compliant with the velocity direction every other half cycle. My guess is that the outlet boundary may not be suitable for the incoming velocity resulting from the expansion stroke. Has anyone solved a similar problem, and what is the best approach to take? Any advice is appreciated. Thanks, -emilian -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 2cycle.mpg Type: video/mpeg Size: 907264 bytes Desc: 2cycle.mpg URL: From nek5000-users at lists.mcs.anl.gov Wed Dec 14 08:31:17 2016 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Wed, 14 Dec 2016 15:31:17 +0100 Subject: [Nek5000-users] How to call mpi_barrier in usrcheck Message-ID: Hi Neks, I am currently facing a problem where I need to synchronize all my processors when I'm reading and writing files. Otherwise I get an error of type 'end of file'. Can someone explain how to use MPI_BARRIER, with an example of the syntax that I would integrate into userchk.
Thank you in advance, Best regards, Arnold *Arnold Wakim* Doctorant, Département d'Aérodynamique Fondamentale et Expérimentale Tél: +33 1 46 23 51 83 ONERA - The French Aerospace Lab - Centre de Meudon 8, rue des Vertugadins - 92190 MEUDON Nous suivre sur : www.onera.fr | Twitter | LinkedIn | Facebook -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image1 Type: image/jpeg Size: 2151 bytes Desc: not available URL: From nek5000-users at lists.mcs.anl.gov Wed Dec 14 10:34:09 2016 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Wed, 14 Dec 2016 17:34:09 +0100 Subject: [Nek5000-users] How to call mpi_barrier in usrcheck In-Reply-To: References: Message-ID: Just call nekgsync() Cheers, Stefan -----Original message----- > From:nek5000-users at lists.mcs.anl.gov > Sent: Wednesday 14th December 2016 17:32 > To: nek5000-users at lists.mcs.anl.gov > Subject: [Nek5000-users] How to call mpi_barrier in usrcheck > > Hi Neks, > > I am currently facing a problem where I need to synchronize all my > processors when I'm reading and writing files. Otherwise I would get > an error of type 'end of file'. > > Can someone explain to me how to use the command call MPI_BARRIER > with an example of syntax that I would integrate to the usr check. > > Thank you in advance, > > Best regards, > > Arnold > Arnold Wakim > Doctorant, Département d'Aérodynamique Fondamentale et Expérimentale > Tél: +33 1 46 23 51 83 > > ONERA - The French Aerospace Lab - Centre de Meudon > 8, rue des Vertugadins - 92190 MEUDON > > Nous suivre sur : www.onera.fr | Twitter |
> LinkedIn | Facebook _______________________________________________ > Nek5000-users mailing list > Nek5000-users at lists.mcs.anl.gov > https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users From nek5000-users at lists.mcs.anl.gov Wed Dec 14 08:44:33 2016 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Wed, 14 Dec 2016 14:44:33 +0000 Subject: [Nek5000-users] maketools all fails at postnek on RHEL6.4 Message-ID: Using the tarball at https://github.com/Nek5000/nek5000/archive/master.tar.gz to compile locally on my RHEL6.4 server. During the maketools run, only the genmap and n2to3 tools are compiled, and make fails as shown below. Is this attributable to BIGMEM="true" in maketools, or something else? Can anyone help? Error and gcc/gfortran information below... Thanks Richard P [user at rhel64 tools]$ gedit maketools [user at rhel64 tools]$ ./maketools all /nfs/home/user/Nek5000 ---------------------- Make genmap... ---------------------- make[1]: Entering directory `/home/user/Nek5000-master/tools/genmap' gfortran -mcmodel=medium -fdefault-real-8 -o /nfs/home/user/Nek5000/genmap genmap.o byte.o make[1]: Leaving directory `/home/user/Nek5000-master/tools/genmap' ---------------------- Make n2to3... ---------------------- make[1]: Entering directory `/home/user/Nek5000-master/tools/n2to3' gfortran -mcmodel=medium -o /nfs/home/user/Nek5000/n2to3 n2to3.o byte.o make[1]: Leaving directory `/home/user/Nek5000-master/tools/n2to3' ---------------------- Make postnek...
---------------------- make[1]: Entering directory `/home/user/Nek5000-master/tools/postnek' gfortran -mcmodel=medium -c postnek.f postnek.f:1977.19: ifbswap = if_byte_swap_test(test) 1 Warning: Extension: Conversion from INTEGER(4) to LOGICAL(4) at (1) gfortran -mcmodel=medium -c postnek2.f gfortran -mcmodel=medium -c postnek3.f postnek3.f:1552.20: COMMON /CTMP4/ wk(LXYZ,3,3) 1 Warning: Named COMMON block 'ctmp4' at (1) shall be of the same size gfortran -mcmodel=medium -c postnek5.f postnek5.f:711.21: COMMON /ccdiag/ iee,iii 1 Warning: Named COMMON block 'ccdiag' at (1) shall be of the same size gfortran -mcmodel=medium -c postnek6.f gfortran -mcmodel=medium -c tsort.f gfortran -mcmodel=medium -c postnek8.f devices.inc:22.19: Included at basics.inc:168: Included at postnek8.f:96: COMMON/SCALE/XFAC ,YFAC ,XZERO ,YZERO 1 Warning: Named COMMON block 'scale' at (1) shall be of the same size gfortran -mcmodel=medium -c postnek9.f gfortran -mcmodel=medium -c plot.f gfortran -mcmodel=medium -c getfld.f gfortran -mcmodel=medium -c legend.f gfortran -mcmodel=medium -c userf.f userf.f:1282.21: common /rlobjs/ x_l_objs(mopts),y_l_objs(mopts),z_l_objs(mopts) 1 Warning: Named COMMON block 'rlobjs' at (1) shall be of the same size userf.f:1816.21: common /rlobjs/ avg(6,0:mopts-1,3) 1 Warning: Named COMMON block 'rlobjs' at (1) shall be of the same size userf.f:5261.20: common /ctmp1/ uavg(nxm*nelm),vavg(nxm*nelm),wavg(nxm*nelm) 1 Warning: Named COMMON block 'ctmp1' at (1) shall be of the same size gfortran -mcmodel=medium -c trap.f gfortran -mcmodel=medium -c animate.f gfortran -mcmodel=medium -c genxyz.f gfortran -mcmodel=medium -c screen.f gfortran -mcmodel=medium -c g3d.f gfortran -mcmodel=medium -c subs.f gfortran -mcmodel=medium -c xinterface.f gfortran -mcmodel=medium -c locglob.f gfortran -mcmodel=medium -c postnek5a.f gfortran -mcmodel=medium -c ../../nek/3rd_party/blas.f gfortran: ../../nek/3rd_party/blas.f: No such file or directory gfortran: no input files make[1]: 
*** [blas.o] Error 1 make[1]: Leaving directory `/home/user/Nek5000-master/tools/postnek' make: *** [all] Error 1 ###########gcc and gfortran details for server########## [user at rhel64 tools]$ rpm -qa | grep gcc gcc-4.4.7-3.el6.x86_64 compat-gcc-34-3.4.6-19.el6.x86_64 gcc-gfortran-4.4.7-3.el6.x86_64 libgcc-4.4.7-3.el6.i686 gcc-c++-4.4.7-3.el6.x86_64 libgcc-4.4.7-3.el6.x86_64 [user at rhel64 tools]$ rpm -qa | grep gfortran gcc-gfortran-4.4.7-3.el6.x86_64 libgfortran-4.4.7-3.el6.x86_64 compat-libgfortran-41-4.1.2-39.el6.x86_64 Richard B Powell Technical Architect II Operations/Server Engineering AREVA, Inc. 3315 Old Forest Road OF-60 Lynchburg, VA 24501 Phone: +1 434-832-3894 Fax: +1 434-382-3894 -------------- next part -------------- An HTML attachment was scrubbed... URL: From nek5000-users at lists.mcs.anl.gov Wed Dec 14 09:09:30 2016 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Wed, 14 Dec 2016 15:09:30 +0000 Subject: [Nek5000-users] maketools all fails at postnek on RHEL6.4 In-Reply-To: References: Message-ID: Hi Richard, I find on my mac that I have to set BIGMEM="false" to get maketools to complete. That being said, it's been a while since I've built maketools, so I'm not certain if there is another open issue at present. We're getting close to a new release where everything should soon be sorted out. Paul ________________________________ From: nek5000-users-bounces at lists.mcs.anl.gov [nek5000-users-bounces at lists.mcs.anl.gov] on behalf of nek5000-users at lists.mcs.anl.gov [nek5000-users at lists.mcs.anl.gov] Sent: Wednesday, December 14, 2016 8:44 AM To: nek5000-users at lists.mcs.anl.gov Subject: Re: [Nek5000-users] maketools all fails at postnek on RHEL6.4 Using the tarball at https://github.com/Nek5000/nek5000/archive/master.tar.gz to compile locally on my RHEL6.4 server. During the maketools run, only the genmap and n2to3 tools are compiled, and make fails as shown below.
Is this attributable to BIGMEM="true" in maketools, or something else? Can anyone help? Error and gcc/gfortran information below... Thanks Richard P Richard B Powell Technical Architect II Operations/Server Engineering AREVA, Inc. 3315 Old Forest Road OF-60 Lynchburg, VA 24501 Phone: +1 434-832-3894 Fax: +1 434-382-3894 -------------- next part -------------- An HTML attachment was scrubbed...
URL: From nek5000-users at lists.mcs.anl.gov Wed Dec 14 09:09:39 2016 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Wed, 14 Dec 2016 16:09:39 +0100 Subject: [Nek5000-users] How to call mpi_barrier in usrcheck In-Reply-To: References: Message-ID: Thank you Stefan, this works wonders. Best, Arnold Le 14/12/2016 à 17:34, nek5000-users at lists.mcs.anl.gov a écrit : > Just call nekgsync() > > Cheers, > Stefan > > > -----Original message----- >> From:nek5000-users at lists.mcs.anl.gov >> Sent: Wednesday 14th December 2016 17:32 >> To: nek5000-users at lists.mcs.anl.gov >> Subject: [Nek5000-users] How to call mpi_barrier in usrcheck >> >> Hi Neks, >> >> I am currently facing a problem where I need to synchronize all my >> processors when I'm reading and writing files. Otherwise I would get >> an error of type 'end of file'. >> >> Can someone explain to me how to use the command call MPI_BARRIER >> with an example of syntax that I would integrate to the usr check.
>> >> Thank you in advance, >> >> Best regards, >> >> Arnold >> Arnold Wakim >> Doctorant, Département d'Aérodynamique Fondamentale et Expérimentale >> Tél: +33 1 46 23 51 83 >> >> ONERA - The French Aerospace Lab - Centre de Meudon >> 8, rue des Vertugadins - 92190 MEUDON >> >> Nous suivre sur : www.onera.fr | Twitter | >> LinkedIn | Facebook _______________________________________________ >> Nek5000-users mailing list >> Nek5000-users at lists.mcs.anl.gov >> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users > _______________________________________________ > Nek5000-users mailing list > Nek5000-users at lists.mcs.anl.gov > https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users > -- *Arnold Wakim* Doctorant, Département d'Aérodynamique Fondamentale et Expérimentale Tél: +33 1 46 23 51 83 ONERA - The French Aerospace Lab - Centre de Meudon 8, rue des Vertugadins - 92190 MEUDON Nous suivre sur : www.onera.fr | Twitter | LinkedIn | Facebook -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image1 Type: image/jpeg Size: 2151 bytes Desc: not available URL: From nek5000-users at lists.mcs.anl.gov Wed Dec 14 11:10:23 2016 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Wed, 14 Dec 2016 18:10:23 +0100 Subject: [Nek5000-users] maketools all fails at postnek on RHEL6.4 In-Reply-To: References: Message-ID: Can you try again with release/v16.0.0 (https://github.com/Nek5000/Nek5000/archive/release/v16.0.0.zip) Cheers, Stefan -----Original message----- > From:nek5000-users at lists.mcs.anl.gov > Sent: Wednesday 14th December 2016 18:06 > To: nek5000-users at lists.mcs.anl.gov > Subject: Re: [Nek5000-users] maketools all fails at postnek on RHEL6.4 > > Using the tarball at https://github.com/Nek5000/nek5000/archive/master.tar.gz to compile locally on my RHEL6.4 server.
During the maketools run, only the genmap and n2to3 tools are compiled, and make fails as shown below. Is this attributable to BIGMEM="true" in maketools, or something else? Can anyone help? Error and gcc/gfortran information below
> Thanks
> Richard P
> Richard B Powell
> Technical Architect II
> Operations/Server Engineering
> AREVA, Inc.
> 3315 Old Forest Road OF-60
> Lynchburg, VA 24501
> Phone: +1 434-832-3894
> Fax: +1 434-382-3894
> _______________________________________________
> Nek5000-users mailing list
> Nek5000-users at lists.mcs.anl.gov
> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users
From nek5000-users at lists.mcs.anl.gov Wed Dec 14 11:18:00 2016 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Wed, 14 Dec 2016 18:18:00 +0100 Subject: [Nek5000-users] maketools all fails at postnek on RHEL6.4 In-Reply-To: References: Message-ID: Paul, this is not related to BIGMEM. He is using the master branch, which had a bug in the make system. Stefan -----Original message----- > From:nek5000-users at lists.mcs.anl.gov > Sent: Wednesday 14th December 2016 18:11 > To: nek5000-users at lists.mcs.anl.gov > Subject: Re: [Nek5000-users] maketools all fails at postnek on RHEL6.4 > > > Hi Richard, > > I find on my mac that I have to set BIGMEM="false" to get maketools to complete. > > That being said, it's been a while since I've built maketools, so I'm not certain if there > is another open issue at present. > > We're getting close to a new release where everything should soon be sorted out.
> > Paul > ----------- > From: nek5000-users-bounces at lists.mcs.anl.gov [nek5000-users-bounces at lists.mcs.anl.gov] on behalf of nek5000-users at lists.mcs.anl.gov [nek5000-users at lists.mcs.anl.gov] > Sent: Wednesday, December 14, 2016 8:44 AM > To: nek5000-users at lists.mcs.anl.gov > Subject: Re: [Nek5000-users] maketools all fails at postnek on RHEL6.4 > > [...] > _______________________________________________ > Nek5000-users mailing list > Nek5000-users at lists.mcs.anl.gov > https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users

From nek5000-users at lists.mcs.anl.gov Wed Dec 14 09:18:52 2016 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Wed, 14 Dec 2016 15:18:52 +0000 Subject: [Nek5000-users] (no subject) Message-ID: This issue seems to encompass the problem you're having. It also addresses how to fix it. https://github.com/Nek5000/Nek5000/issues/84 Hope this helps, Lane

From nek5000-users at lists.mcs.anl.gov Wed Dec 14 09:25:03 2016 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Wed, 14 Dec 2016 09:25:03 -0600 Subject: [Nek5000-users] maketools all fails at postnek on RHEL6.4 In-Reply-To: References: Message-ID: Good morning, I just tested using the master version of the v16.0.0 release using both BIGMEM = "true" and BIGMEM = "false" on the ALCF systems.
Output below (same for both): [lfick at cetuslac1 tools]$ ./maketools all ---------------------- Make genmap... ---------------------- make[1]: Entering directory `/gpfs/mira-home/lfick/Nek5000/tools/genmap' gfortran -fdefault-real-8 -o /home/lfick/bin/genmap genmap.o byte.o make[1]: Leaving directory `/gpfs/mira-home/lfick/Nek5000/tools/genmap' ---------------------- Make n2to3... ---------------------- make[1]: Entering directory `/gpfs/mira-home/lfick/Nek5000/tools/n2to3' gfortran -o /home/lfick/bin/n2to3 n2to3.o byte.o make[1]: Leaving directory `/gpfs/mira-home/lfick/Nek5000/tools/n2to3' ---------------------- Make postnek... ---------------------- make[1]: Entering directory `/gpfs/mira-home/lfick/Nek5000/tools/postnek' gfortran -c ../../nek/3rd_party/blas.f gfortran: ../../nek/3rd_party/blas.f: No such file or directory gfortran: no input files make[1]: *** [blas.o] Error 1 make[1]: Leaving directory `/gpfs/mira-home/lfick/Nek5000/tools/postnek' make: *** [all] Error 1 I've noticed this before, but when compiling specific tools (reatore2 in my case) it works fine. Lambert Fick Graduate Research Assistant | Department of Nuclear Engineering Texas A&M University +1 (979) 422-8317 | Lambert.Fick at tamu.edu > On Dec 14, 2016, at 11:10 AM, nek5000-users at lists.mcs.anl.gov wrote: > > Can you try again with release/v16.0.0 (https://github.com/Nek5000/Nek5000/archive/release/v16.0.0.zip) > > Cheers, > Stefan > > -----Original message----- >> From:nek5000-users at lists.mcs.anl.gov >> Sent: Wednesday 14th December 2016 18:06 >> To: nek5000-users at lists.mcs.anl.gov >> Subject: Re: [Nek5000-users] maketools all fails at postnek on RHEL6.4 >> >> Using the tarball at https://github.com/Nek5000/nek5000/archive/master.tar.gz to compile locally on my RHEL6.4 server. During maketools event, only the genmap and n2to3 tools are compiled and the make event fails out as shown below. Is >> this attributed to BIGMEM=true in maketools, or something else? Can anyone help? Error and gcc/gfortran information below >> Thanks >> Richard P >> [...] >> Richard B Powell >> Technical Architect II >> Operations/Server Engineering >> AREVA, Inc.
>> 3315 Old Forest Road OF-60 >> Lynchburg, VA 24501 >> Phone: 1 434-832-3894 >> Fax: 1 434-382-3894 >> _______________________________________________ >> Nek5000-users mailing list >> Nek5000-users at lists.mcs.anl.gov >> https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users > _______________________________________________ > Nek5000-users mailing list > Nek5000-users at lists.mcs.anl.gov > https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users

From nek5000-users at lists.mcs.anl.gov Wed Dec 14 09:57:03 2016 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Wed, 14 Dec 2016 15:57:03 +0000 Subject: [Nek5000-users] maketools all fails at postnek on RHEL6.4 In-Reply-To: References: Message-ID: Stefan, this zip provides the following and appears to generate more tools before one error shown below at the prenek stage. This may be a lead on this new error ... https://www.technovelty.org/c/relocation-truncated-to-fit-wtf.html [user at rhel64 tools]$ ./maketools all /nfs/home/user/Nek5000 ---------------------- Make genbox... ---------------------- make[1]: Entering directory `/home/user/Nek5000-release-v16.0.0/tools/genbox' gfortran -c -fdefault-real-8 genbox.f gcc -c -DUNDERSCORE byte.c gfortran -fdefault-real-8 -o /nfs/home/user/Nek5000/genbox genbox.o byte.o make[1]: Leaving directory `/home/user/Nek5000-release-v16.0.0/tools/genbox' ---------------------- Make int_tp... ---------------------- make[1]: Entering directory `/home/user/Nek5000-release-v16.0.0/tools/int_tp' gfortran -c int_tp.f gfortran -c ../../core/speclib.f gfortran -c mxm44f2.f gcc -c -DUNDERSCORE byte.c gfortran -c int_tp_gbox.f gfortran -o /nfs/home/user/Nek5000/int_tp int_tp.o speclib.o mxm44f2.o byte.o int_tp_gbox.o make[1]: Leaving directory `/home/user/Nek5000-release-v16.0.0/tools/int_tp' ---------------------- Make n2to3...
---------------------- make[1]: Entering directory `/home/user/Nek5000-release-v16.0.0/tools/n2to3' gfortran -c n2to3.f gcc -c -DUNDERSCORE byte.c gfortran -o /nfs/home/user/Nek5000/n2to3 n2to3.o byte.o make[1]: Leaving directory `/home/user/Nek5000-release-v16.0.0/tools/n2to3' ---------------------- Make postnek... ---------------------- make[1]: Entering directory `/home/user/Nek5000-release-v16.0.0/tools/postnek' gfortran -c postnek.f gfortran -c postnek2.f gfortran -c postnek3.f postnek3.f:1552.20: COMMON /CTMP4/ wk(LXYZ,3,3) 1 Warning: Named COMMON block 'ctmp4' at (1) shall be of the same size gfortran -c postnek5.f postnek5.f:711.21: COMMON /ccdiag/ iee,iii 1 Warning: Named COMMON block 'ccdiag' at (1) shall be of the same size gfortran -c postnek6.f gfortran -c tsort.f gfortran -c postnek8.f devices.inc:22.19: Included at basics.inc:168: Included at postnek8.f:96: COMMON/SCALE/XFAC ,YFAC ,XZERO ,YZERO 1 Warning: Named COMMON block 'scale' at (1) shall be of the same size gfortran -c postnek9.f gfortran -c plot.f gfortran -c getfld.f gfortran -c legend.f gfortran -c userf.f userf.f:1282.21: common /rlobjs/ x_l_objs(mopts),y_l_objs(mopts),z_l_objs(mopts) 1 Warning: Named COMMON block 'rlobjs' at (1) shall be of the same size userf.f:1816.21: common /rlobjs/ avg(6,0:mopts-1,3) 1 Warning: Named COMMON block 'rlobjs' at (1) shall be of the same size userf.f:5261.20: common /ctmp1/ uavg(nxm*nelm),vavg(nxm*nelm),wavg(nxm*nelm) 1 Warning: Named COMMON block 'ctmp1' at (1) shall be of the same size gcc -c -DUNDERSCORE revert.c gfortran -c trap.f gfortran -c animate.f gfortran -c genxyz.f gfortran -c screen.f gfortran -c g3d.f gfortran -c subs.f gfortran -c xinterface.f gfortran -c locglob.f gfortran -c postnek5a.f gfortran -c ../../core/3rd_party/blas.f ../../core/3rd_party/blas.f:9764.15: GO TO IGO,(120,150,180,210) 1 Warning: Deleted feature: Assigned GOTO statement at (1) ../../core/3rd_party/blas.f:9770.72: ASSIGN 120 TO IGO 1 Warning: Deleted feature: ASSIGN 
statement at (1) ../../core/3rd_party/blas.f:9782.72: ASSIGN 150 TO IGO 1 Warning: Deleted feature: ASSIGN statement at (1) ../../core/3rd_party/blas.f:9795.72: ASSIGN 180 TO IGO 1 Warning: Deleted feature: ASSIGN statement at (1) ../../core/3rd_party/blas.f:9806.72: ASSIGN 210 TO IGO 1 Warning: Deleted feature: ASSIGN statement at (1) ../../core/3rd_party/blas.f:17277.15: GO TO IGO,(120,150,180,210) 1 Warning: Deleted feature: Assigned GOTO statement at (1) ../../core/3rd_party/blas.f:17283.72: ASSIGN 120 TO IGO 1 Warning: Deleted feature: ASSIGN statement at (1) ../../core/3rd_party/blas.f:17295.72: ASSIGN 150 TO IGO 1 Warning: Deleted feature: ASSIGN statement at (1) ../../core/3rd_party/blas.f:17308.72: ASSIGN 180 TO IGO 1 Warning: Deleted feature: ASSIGN statement at (1) ../../core/3rd_party/blas.f:17319.72: ASSIGN 210 TO IGO 1 Warning: Deleted feature: ASSIGN statement at (1) gcc -c -O2 -DUNDERSCORE xdriver.c gfortran -c scrdmp.f gfortran -c coef.f gfortran -c postnek7.f gfortran -c speclib.f gfortran -c mxm.f gcc -c -DUNDERSCORE byte.c gfortran -c ssyev.f gfortran -c iolib.f gfortran -o /nfs/home/user/Nek5000/postx postnek.o postnek2.o postnek3.o postnek5.o postnek6.o tsort.o postnek8.o postnek9.o plot.o getfld.o legend.o userf.o revert.o trap.o animate.o genxyz.o screen.o g3d.o subs.o xinterface.o locglob.o postnek5a.o blas.o xdriver.o scrdmp.o coef.o postnek7.o speclib.o mxm.o byte.o ssyev.o iolib.o -L/usr/lib/X11 -lX11 -lm Linux 2.6.32-358.el6.x86_64 gfortran -c iolib_no_graph.f gfortran -o /nfs/home/user/Nek5000/postex postnek.o postnek2.o postnek3.o postnek5.o postnek6.o tsort.o postnek8.o postnek9.o plot.o getfld.o legend.o userf.o revert.o trap.o animate.o genxyz.o screen.o g3d.o subs.o xinterface.o locglob.o postnek5a.o blas.o xdriver.o scrdmp.o coef.o postnek7.o speclib.o mxm.o byte.o ssyev.o iolib_no_graph.o -L/usr/lib/X11 -lX11 -lm make[1]: Leaving directory `/home/user/Nek5000-release-v16.0.0/tools/postnek' ---------------------- Make reatore2... 
---------------------- make[1]: Entering directory `/home/user/Nek5000-release-v16.0.0/tools/reatore2' gfortran -c reatore2.f gcc -c -DUNDERSCORE byte.c gfortran -c strings.f gfortran -o /nfs/home/user/Nek5000/reatore2 reatore2.o byte.o strings.o gfortran -c re2torea.f gfortran -o /nfs/home/user/Nek5000/re2torea re2torea.o byte.o strings.o make[1]: Leaving directory `/home/user/Nek5000-release-v16.0.0/tools/reatore2' ---------------------- Make genmap... ---------------------- make[1]: Entering directory `/home/user/Nek5000-release-v16.0.0/tools/genmap' gfortran -c -fdefault-real-8 genmap.f genmap.f:3054.22: common /carrayw/ w1 (lpts) , w2 (lpts) 1 Warning: Named COMMON block 'carrayw' at (1) shall be of the same size genmap.f:4017.22: common /arrayi2/ jdual(32*lelm) , vdual(32*lelm) 1 Warning: Named COMMON block 'arrayi2' at (1) shall be of the same size gcc -c -DUNDERSCORE ../../core/byte.c gfortran -fdefault-real-8 -o /nfs/home/user/Nek5000/genmap genmap.o byte.o make[1]: Leaving directory `/home/user/Nek5000-release-v16.0.0/tools/genmap' ---------------------- Make nekmerge... ---------------------- make[1]: Entering directory `/home/user/Nek5000-release-v16.0.0/tools/nekmerge' gfortran -c nekmerge.f gfortran -c reader.f gcc -c -DUNDERSCORE byte.c gfortran -c strings.f gfortran -c tsort.f gfortran -o /nfs/home/user/Nek5000/nekmerge nekmerge.o reader.o byte.o strings.o tsort.o make[1]: Leaving directory `/home/user/Nek5000-release-v16.0.0/tools/nekmerge' ---------------------- Make prenek... ---------------------- make[1]: Entering directory `/home/user/Nek5000-release-v16.0.0/tools/prenek' gfortran -c prenek.f gfortran -c curve.f gfortran -c edit.f gfortran -c build.f gfortran -c build1.f gfortran -c build2.f build2.f:323.10: xcs(3,1) = 0. 1 Warning: Array reference at (1) is out of bounds (3 > 2) in dimension 1 build2.f:324.10: ycs(3,1) = radius 1 Warning: Array reference at (1) is out of bounds (3 > 2) in dimension 1 build2.f:325.10: zcs(3,1) = 0. 
1 Warning: Array reference at (1) is out of bounds (3 > 2) in dimension 1 build2.f:332.20: common /ctmp0/ sphctr(3),xcs(4,24),ycs(4,24),zcs(4,24) 1 Warning: Named COMMON block 'ctmp0' at (1) shall be of the same size gfortran -c bound.f gfortran -c plot.f gfortran -c xinterface.f gfortran -c glomod.f glomod.f:2120.9: ja(2) = w(2,1) 1 Warning: Array reference at (1) is out of bounds (2 > 1) in dimension 1 gfortran -c legend.f gfortran -c vprops.f gfortran -c iolib.f gfortran -c subs.f gfortran -c zipper2.f zipper2.f:1580.20: COMMON /CTMP2/ XP(NXM3),YP(NXM3),ZP(NXM3),RRL(3) 1 Warning: Named COMMON block 'ctmp2' at (1) shall be of the same size zipper2.f:2849.20: COMMON /CTMP2/ XP(NXM3),YP(NXM3),ZP(NXM3),RRL(3) 1 Warning: Named COMMON block 'ctmp2' at (1) shall be of the same size zipper2.f:4465.20: common /ctmp0/ jx(nxm*3),jyt(nym*3),jzt(nzm*3) 1 Warning: Named COMMON block 'ctmp0' at (1) shall be of the same size zipper2.f:4464.20: common /ctmp2/ xp(nxm3),yp(nxm3),zp(nxm3),wk(ldw) 1 Warning: Named COMMON block 'ctmp2' at (1) shall be of the same size zipper2.f:4612.20: common /ctmp0/ xcb(2,2,2),ycb(2,2,2),zcb(2,2,2),w(ldw) 1 Warning: Named COMMON block 'ctmp0' at (1) shall be of the same size gfortran -c postnek6.f gfortran -c screen.f gcc -c -DUNDERSCORE -Dr8 revert.c gfortran -c crs.f gfortran -c mxm.f gcc -c -DUNDERSCORE -Dr8 xdriver.c gfortran -o /nfs/home/user/Nek5000/prex prenek.o curve.o edit.o build.o build1.o build2.o bound.o plot.o xinterface.o glomod.o legend.o vprops.o iolib.o subs.o zipper2.o postnek6.o screen.o revert.o crs.o mxm.o xdriver.o -L/usr/lib/X11 -lX11 -lm curve.o: In function `hex_transition_3d_e_': curve.f:(.text+0x560a): relocation truncated to fit: R_X86_64_PC32 against symbol `ctmp2_' defined in COMMON section in zipper2.o curve.f:(.text+0x567f): relocation truncated to fit: R_X86_64_PC32 against symbol `ctmp2_' defined in COMMON section in zipper2.o curve.f:(.text+0x56ac): relocation truncated to fit: R_X86_64_PC32 against symbol 
`ctmp2_' defined in COMMON section in zipper2.o curve.f:(.text+0x56d9): relocation truncated to fit: R_X86_64_PC32 against symbol `ctmp2_' defined in COMMON section in zipper2.o curve.o: In function `hexagon_refine_e_': curve.f:(.text+0x5ca0): relocation truncated to fit: R_X86_64_PC32 against symbol `ctmp2_' defined in COMMON section in zipper2.o curve.f:(.text+0x5d3a): relocation truncated to fit: R_X86_64_PC32 against symbol `ctmp2_' defined in COMMON section in zipper2.o curve.f:(.text+0x5d8c): relocation truncated to fit: R_X86_64_PC32 against symbol `ctmp2_' defined in COMMON section in zipper2.o build.o: In function `new_out_': build.f:(.text+0xba7f): relocation truncated to fit: R_X86_64_32S against symbol `c_xyz_' defined in COMMON section in build.o build.f:(.text+0xbabc): relocation truncated to fit: R_X86_64_32S against symbol `c_xyz_' defined in COMMON section in build.o build.f:(.text+0xbaf9): relocation truncated to fit: R_X86_64_32S against symbol `c_xyz_' defined in COMMON section in build.o build.f:(.text+0xbc7c): additional relocation overflows omitted from the output glomod.o: In function `runlapsm2d_': glomod.f:(.text+0xbc53): undefined reference to `col2_' glomod.f:(.text+0xbc75): undefined reference to `col2_' glomod.f:(.text+0xbcf0): undefined reference to `col2_' glomod.f:(.text+0xbd0f): undefined reference to `col2_' glomod.o: In function `runoptsm2d_': glomod.f:(.text+0xc25e): undefined reference to `col2_' glomod.o:glomod.f:(.text+0xc28f): more undefined references to `col2_' follow collect2: ld returned 1 exit status make[1]: *** [prex] Error 1 make[1]: Leaving directory `/home/user/Nek5000-release-v16.0.0/tools/prenek' make: *** [all] Error 1 [user at rhel64 tools]$ ls -la /nfs/home/user/Nek5000 total 4312 drwxr-xr-x 2 user users 4096 Dec 14 10:37 . drwx------. 39 user ansys 4096 Dec 14 10:36 .. 
-rwxr-xr-x 1 user users 122691 Dec 14 10:37 genbox -rwxr-xr-x 1 user users 119786 Dec 14 10:37 genmap -rwxr-xr-x 1 user users 168611 Dec 14 10:37 int_tp -rwxr-xr-x 1 user users 74445 Dec 14 10:37 n2to3 -rwxr-xr-x 1 user users 54406 Dec 14 10:37 nekmerge -rwxr-xr-x 1 user users 1896156 Dec 14 10:37 postex -rwxr-xr-x 1 user users 1897203 Dec 14 10:37 postx -rwxr-xr-x 1 user users 28257 Dec 14 10:37 re2torea -rwxr-xr-x 1 user users 26682 Dec 14 10:37 reatore2 [user at rhel64 tools]$ -----Original Message----- From: nek5000-users-bounces at lists.mcs.anl.gov [mailto:nek5000-users-bounces at lists.mcs.anl.gov] On Behalf Of nek5000-users at lists.mcs.anl.gov Sent: Wednesday, December 14, 2016 12:10 PM To: nek5000-users at lists.mcs.anl.gov Subject: Re: [Nek5000-users] maketools all fails at postnek on RHEL6.4 Can you try again with release/v16.0.0 (https://github.com/Nek5000/Nek5000/archive/release/v16.0.0.zip) Cheers, Stefan -----Original message----- > From:nek5000-users at lists.mcs.anl.gov > Sent: Wednesday 14th December 2016 18:06 > To: nek5000-users at lists.mcs.anl.gov > Subject: Re: [Nek5000-users] maketools all fails at postnek on RHEL6.4 > > [...]
> Richard B Powell
> Technical Architect II
> Operations/Server Engineering
> AREVA, Inc.
> 3315 Old Forest Road OF-60
> Lynchburg, VA
24501 > Phone: 1 434-832-3894 > Fax: 1 434-382-3894 > _______________________________________________ > Nek5000-users mailing list > Nek5000-users at lists.mcs.anl.gov > https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users _______________________________________________ Nek5000-users mailing list Nek5000-users at lists.mcs.anl.gov https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users From nek5000-users at lists.mcs.anl.gov Wed Dec 14 13:33:12 2016 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Wed, 14 Dec 2016 20:33:12 +0100 Subject: [Nek5000-users] Project from the velocity grid to the pressure grid. Message-ID: Hi Neks, I am trying to extrude 2D .fld fields to 3D fields. To do so I followed what was said in http://lists.mcs.anl.gov/pipermail/nek5000-users/2013-October/002333.html . It worked fine for the velocity and temperature fields, but for the pressure I am having some trouble. I am using the PN/PN-2 approach, so my velocities and pressure are not defined on the same grid, but the pressure values that I can recover from the .fld are mapped onto the velocity grid. My question then would be: is there a (quick) way to have the pressure "projected back" onto its own grid? Has anyone had this problem? Thank you for your help, Pierre-Emmanuel des Boscs. From nek5000-users at lists.mcs.anl.gov Thu Dec 15 07:13:09 2016 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Thu, 15 Dec 2016 21:13:09 +0800 Subject: [Nek5000-users] Parallel speedup on supercomputer Tianhe-2 Message-ID: Dear Stefan, Thank you so much for your detailed explanation! Now I understand the scaling. Best regards, Wei XU -------------- next part -------------- An HTML attachment was scrubbed...
URL: From nek5000-users at lists.mcs.anl.gov Mon Dec 19 10:19:33 2016 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Mon, 19 Dec 2016 17:19:33 +0100 Subject: [Nek5000-users] Mixed boundary condition for fluid velocity Message-ID: Dear Nek users and developers, I am working on turbulent flow over porous walls. Instead of the no-slip condition for rigid walls, I need to implement the following boundary condition for the fluid velocity on the porous walls u = c1 Sigma . N + F(Sigma) where c1 : a scalar constant, u : fluid velocity, Sigma : fluid stress tensor, N : normal vector, F(Sigma) : a simple linear function of the fluid stresses. If F(Sigma)=0, then the implementation is straightforward (similar to the convective boundary condition implementation for the heat transfer problem). It is not clear to me how to incorporate F(Sigma). Could someone explain how I can add an unknown term (involving surface integration) into the boundary conditions? I have already looked into the code but could not understand where the surface integral coming out of the integration by parts (in the stress formulation) is added to the system matrix in the code. Could someone also point out where this is done? Thank you very much, Best regards, Sudhakar KTH-Mechanics. From nek5000-users at lists.mcs.anl.gov Wed Dec 21 15:21:37 2016 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Wed, 21 Dec 2016 22:21:37 +0100 Subject: [Nek5000-users] Problem using genmap with large numbers of elements Message-ID: Dear Nek Users, We are about to set up a new case with about 29692 elements in 2D, which we would like to extrude with about 150 elements in the third direction, giving a total of about 30k times 150 = 4.5M elements. n2to3 works fine, but genmap gives a "periodic mismatch error". Having a lower number of extrusion elements, say 30k times 70 = 2.1M elements, works fine. Changing the tolerance in genmap does not fix it.
Before diving into the code of n2to3 and genmap, we wanted to ask whether somebody has experienced that before, and whether you'd know a potential fix for the problem. We used both an older version of genmap and the latest github version. Thanks a lot! Ricardo From nek5000-users at lists.mcs.anl.gov Wed Dec 21 17:23:09 2016 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Wed, 21 Dec 2016 23:23:09 +0000 Subject: [Nek5000-users] Problem using genmap with large numbers of elements In-Reply-To: References: Message-ID: Dear Ricardo, I believe there is a fix to this... I'm not 100% certain of the status in the repo. Perhaps one of the other developers is aware... Will try to find out more. Best, Paul ________________________________________ From: nek5000-users-bounces at lists.mcs.anl.gov [nek5000-users-bounces at lists.mcs.anl.gov] on behalf of nek5000-users at lists.mcs.anl.gov [nek5000-users at lists.mcs.anl.gov] Sent: Wednesday, December 21, 2016 3:21 PM To: nek5000-users at lists.mcs.anl.gov Subject: [Nek5000-users] Problem using genmap with large numbers of elements
_______________________________________________ Nek5000-users mailing list Nek5000-users at lists.mcs.anl.gov https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users From nek5000-users at lists.mcs.anl.gov Thu Dec 22 04:38:32 2016 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Thu, 22 Dec 2016 11:38:32 +0100 Subject: [Nek5000-users] double mesh ? Message-ID: Hi i am quite new, so i'm sorry if i'm getting things wrong. I am trying to simulate a turbulent channel flow, with a passive scalar. in the streamwise direction, i would like to impose periodic boundary conditions for the flow, and non-periodic for the scalar. in fact the scalar has a point source near the inlet, and i want to see how that develops downstream. as far as i understand this is not supported by nek5000, because the last mesh point is identified with the first mesh point when periodic BC are imposed. correct? my question is: could i define two meshes, one for the scalar and one for the velocity? thanks a lot for your help !!! agnese Agnese Seminara -------------------------------- CNRS Laboratoire de physique de la matière condensée Parc Valrose avenue J Vallot 06108 Nice, France +33 (0) 492 076 775 http://sites.unice.fr/site/aseminara/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From nek5000-users at lists.mcs.anl.gov Thu Dec 22 07:52:12 2016 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Thu, 22 Dec 2016 14:52:12 +0100 Subject: [Nek5000-users] Problem using genmap with large numbers of elements In-Reply-To: References: Message-ID: Yes, this bug is reported here https://github.com/Nek5000/Nek5000/issues/125 but has not been resolved yet.
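For intuition on why the failure appears between 2.1M and 4.5M elements: an integer element id stored in an 8-byte real survives truncation to a single 4-byte word only while it fits in the sign/exponent word's 20 mantissa bits, i.e. up to 2^21 = 2,097,152. Below is a rough illustration in Python; it is only a sketch of the byte-level effect, not the actual genmap buffer layout, which may differ.

```python
import struct

def keep_high_word(x: float) -> float:
    """Zero the low 4 bytes of an IEEE-754 binary64 value, keeping only the
    word that holds the sign, the exponent and the top 20 mantissa bits.
    Illustrative only: genmap's real buffer layout is not reproduced here."""
    hi = struct.pack("<d", x)[4:]  # little-endian: high word is the last 4 bytes
    return struct.unpack("<d", b"\x00\x00\x00\x00" + hi)[0]

# Integer ids up to 2^21 fit entirely in the high word and survive:
print(keep_high_word(1_000_000.0))  # 1000000.0
print(keep_high_word(2_097_152.0))  # 2097152.0  (2^21, still exact)
# Beyond 2^21 the low word carries mantissa bits, so odd ids get corrupted:
print(keep_high_word(3_000_001.0))  # 3000000.0  (off by one)
```

Either way, the suggestion later in this thread (drop the integer*4 copy for wdsizi=8 and read the value as a real*8 directly) avoids the truncation altogether.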
Stefan -----Original message----- > From:nek5000-users at lists.mcs.anl.gov > Sent: Thursday 22nd December 2016 0:23 > To: nek5000-users at lists.mcs.anl.gov > Subject: Re: [Nek5000-users] Problem using genmap with large numbers of elements
From nek5000-users at lists.mcs.anl.gov Thu Dec 22 14:32:58 2016 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Thu, 22 Dec 2016 20:32:58 +0000 Subject: [Nek5000-users] Problem using genmap with large numbers of elements In-Reply-To: References: , Message-ID: I usually run genmap in 64-bit precision --- To me, it looks like maketools does not build it this way? (Or am I mis-reading the script?) I'm looking at tools/genmap/makefile ... genmap can be built with -r8 I'm not certain about the other tools. Certainly n2to3 could be built with -r8 Paul ________________________________________ From: nek5000-users-bounces at lists.mcs.anl.gov [nek5000-users-bounces at lists.mcs.anl.gov] on behalf of nek5000-users at lists.mcs.anl.gov [nek5000-users at lists.mcs.anl.gov] Sent: Thursday, December 22, 2016 7:52 AM To: nek5000-users at lists.mcs.anl.gov Subject: Re: [Nek5000-users] Problem using genmap with large numbers of elements
> > Best, Paul _______________________________________________ Nek5000-users mailing list Nek5000-users at lists.mcs.anl.gov https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users From nek5000-users at lists.mcs.anl.gov Thu Dec 22 14:36:40 2016 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Thu, 22 Dec 2016 20:36:40 +0000 Subject: [Nek5000-users] Problem using genmap with large numbers of elements In-Reply-To: References: , , Message-ID: OK... I see that genmap is indeed wdsize=8... never mind...
_______________________________________________ Nek5000-users mailing list Nek5000-users at lists.mcs.anl.gov https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users From nek5000-users at lists.mcs.anl.gov Thu Dec 22 19:10:06 2016 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Fri, 23 Dec 2016 02:10:06 +0100 Subject: [Nek5000-users] Problem using genmap with large numbers of elements In-Reply-To: References: Message-ID: I looked at the genmap issue, and I am quite sure that the fix is to simply comment out the lines c if(nelt.ge.1000000.and.cbl(f,e).eq.'P ') c $ call copyi4(bl(1,f,e),buf(5),1) !Integer ... as suggested also in the pdf on github, https://github.com/Nek5000/Nek5000/issues/125. The reason is that buf(5),buf(6) contain the element number as a real*8, so no conversion is needed. Therefore, everything went OK up to about 2.5M elements, because up to that point 4 bytes were sufficient to store the float. The corresponding lines for wdsizi=4 should remain, though. Philipp On 2016-12-22 21:36, nek5000-users at lists.mcs.anl.gov wrote: > > OK... I see that genmap is indeed wdsize=8... never mind...
_______________________________________________ Nek5000-users mailing list Nek5000-users at lists.mcs.anl.gov https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users From nek5000-users at lists.mcs.anl.gov Fri Dec 23 19:56:45 2016 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Sat, 24 Dec 2016 09:56:45 +0800 (GMT+08:00) Subject: [Nek5000-users] Compiling problem in Nek5000 tools Message-ID: Dear Nek5000 users, I'm a new user. When I was compiling the Nek5000 tools following the Nek5000 manual, I ran into the problem below. I searched the Internet for help and found that I should add "-mcmodel=medium" to the makefile, but I don't know how to do this. Can anyone help me to solve this problem? Thanks a lot! After ./maketools all I got this: Best wishes! Weicheng Hu December 24th, 2016 -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed...
Name: Screenshot from 2016-12-24 09-09-29.png Type: image/png Size: 191995 bytes Desc: not available URL: From nek5000-users at lists.mcs.anl.gov Mon Dec 26 16:50:55 2016 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Mon, 26 Dec 2016 22:50:55 +0000 Subject: [Nek5000-users] Compiling problem in Nek5000 tools In-Reply-To: References: Message-ID: Hi Weicheng, Did you add '-mcmodel=medium' to your makenek file and try again? You can simply do this by uncommenting (removing the '#' from) the line https://github.com/Nek5000/Nek5000/blob/develop/core/makenek#L21 in the default makenek and changing it to G="-mcmodel=medium" Regards, Thilina ________________________________ From: nek5000-users-bounces at lists.mcs.anl.gov [nek5000-users-bounces at lists.mcs.anl.gov] on behalf of nek5000-users at lists.mcs.anl.gov [nek5000-users at lists.mcs.anl.gov] Sent: Friday, December 23, 2016 7:56 PM To: nek5000-users at lists.mcs.anl.gov Subject: [Nek5000-users] Compiling problem in Nek5000 tools -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed...
Name: Screenshot from 2016-12-24 09-09-29.png Type: image/png Size: 191995 bytes Desc: Screenshot from 2016-12-24 09-09-29.png URL: From nek5000-users at lists.mcs.anl.gov Tue Dec 27 18:15:13 2016 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Tue, 27 Dec 2016 17:15:13 -0700 Subject: [Nek5000-users] Compiling problem in Nek5000 tools In-Reply-To: References: Message-ID: Hi Weicheng, Did you check whether your maketools script has the BIGMEM parameter set to true? If not, please change it. Best Regards, Tanmoy On Mon, Dec 26, 2016 at 3:50 PM, wrote: > Hi Weicheng, > > Did you add '-mcmodel=medium' in your makenek file and try again? -------------- next part -------------- An HTML attachment was scrubbed...
URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Screenshot from 2016-12-24 09-09-29.png Type: image/png Size: 191995 bytes Desc: not available URL: From nek5000-users at lists.mcs.anl.gov Tue Dec 27 23:28:21 2016 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Wed, 28 Dec 2016 13:28:21 +0800 (GMT+08:00) Subject: [Nek5000-users] Compiling problem in Nek5000 tools In-Reply-To: References: Message-ID: Hi Tanmoy, Thanks a lot! I solved the problem by doing what you said! Best Regards, Weicheng -----Original Message----- From: nek5000-users at lists.mcs.anl.gov Sent: Wednesday, 28 December 2016 To: nek5000-users at lists.mcs.anl.gov Subject: Re: [Nek5000-users] Compiling problem in Nek5000 tools
Weicheng Hu December 24th, 2016 _______________________________________________ Nek5000-users mailing list Nek5000-users at lists.mcs.anl.gov https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users -------------- next part -------------- An HTML attachment was scrubbed... URL: From nek5000-users at lists.mcs.anl.gov Tue Dec 27 21:12:55 2016 From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov) Date: Wed, 28 Dec 2016 11:12:55 +0800 (GMT+08:00) Subject: [Nek5000-users] Compiling problem in Nek5000 tools In-Reply-To: References: Message-ID: Dear Tanmoy, Thanks a lot! The compiling problem disappeared after I did what you said! Best Regards, Weicheng -----Original Message----- From: nek5000-users at lists.mcs.anl.gov Sent: Wednesday, 28 December 2016 To: nek5000-users at lists.mcs.anl.gov Subject: Re: [Nek5000-users] Compiling problem in Nek5000 tools
Weicheng Hu December 24th, 2016 _______________________________________________ Nek5000-users mailing list Nek5000-users at lists.mcs.anl.gov https://lists.mcs.anl.gov/mailman/listinfo/nek5000-users -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Screenshot from 2016-12-24 09-09-29.png Type: image/png Size: 191995 bytes Desc: not available URL: