From nek5000-users at lists.mcs.anl.gov  Wed Apr  1 12:21:47 2015
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Wed, 1 Apr 2015 18:21:47 +0100
Subject: [Nek5000-users] heat flux BC
Message-ID:

Hi Neks,

I am still trying to understand how the calculations work in Nek5000. Sorry for some basic questions, as there are not many examples using the flux BC:

1. If I set the boundary condition to flux, do I need to calculate the resulting heat transfer coefficient myself, or can I obtain it directly from the code? I have not found an HTC output anywhere in the code, but whenever I grab 'HC' it gives me some value. So what is the value of HC that I obtain there, if I cannot get the heat transfer coefficient directly from the code?

2. If I specify the flux value in userbc for the nondimensionalized equations, how is deltaT normalized? Is it normalized by the bulk temperature?

3. All values specified in the .usr file for the dimensional problem have to be in SI units, don't they? That is, the flux should be in W/m2?

Regards,
--
Muhsin Mohd Amin
Graduate Student
Department of Mechanical Engineering
The University of Sheffield
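For reference, a flux boundary condition (a face flagged 'f  ' in the temperature field) is imposed through the flux variable in the userbc routine of the .usr file; hc in NEKUSE is, as far as the code is concerned, an input for the Newton-cooling ('c') boundary condition rather than a computed output. A minimal sketch of the flux case, in which the value 1.0 is a placeholder and not a value taken from this thread:

      subroutine userbc (ix,iy,iz,iside,ieg)
      include 'SIZE'
      include 'TOTAL'
      include 'NEKUSE'
c     Called once per boundary point carrying a user-defined BC.
c     For a face flagged 'f  ' in the temperature field, the value
c     assigned to "flux" is the imposed heat flux: nondimensional
c     in a nondimensional setup, or W/m^2 in consistent SI units
c     in a dimensional one.
      flux = 1.0
      return
      end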
From nek5000-users at lists.mcs.anl.gov  Wed Apr  1 12:26:51 2015
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Wed, 1 Apr 2015 18:26:51 +0100
Subject: [Nek5000-users] VisIt error: The first time step in a Nek file must contain a mesh
In-Reply-To:
References:
Message-ID:

Hi Saleh,

Sorry for the late reply, but one workaround I found is to read the results on a Linux-based system. I tried ParaView, and it reads the results fine on Linux, but on Windows I get the same error as in VisIt. I am not sure what the underlying problem is.

Regards,
Muhsin

On 16 March 2015 at 21:23, wrote:
> Hi Muhsin,
>
> Could you solve your problem? It's odd, but I seem to have the same issue.
> I'm working on a heat-transfer example in a rectangle (a 2D problem), and
> with 2x2 or 4x4 elements it works fine. However, when I increase the number
> of elements to, say, 8x8 (to exploit more nodes, of course), the solver
> runs fine and generates all the corresponding fld files as expected, but
> VisIt reports "The first time step in a Nek file must contain a mesh". As I
> said, for a smaller number of elements in the very same problem I do not
> get the error. I would appreciate it if you or anyone could share a
> solution. Just to reiterate, I also checked the first line of the first
> time step, the metadata, etc., but found no issue.
>
> PS: Is there a way to export results as CSV without going through VisIt?
> If so, I would sometimes prefer to use it.
>
> Best,
> Saleh
>
> On 12/18/2014 7:54 AM, nek5000-users at lists.mcs.anl.gov wrote:
>
> Hi all,
>
> I get the error that the first time step in a Nek file must contain a mesh
> every time I open the .nek5000 file from the results of some examples
> (turbchannel, blasius and expansion). For the 3dbox example, however, I get
> no error when opening it. I've checked the first file of the first time
> step (.f00001) and the geometry (X) is in it. I also switched the logic for
> coordinate output to (T COORDINATES) and tried different values of param66,
> but I still get the same error. Even if I write the mesh output at every
> timestep, not just the first, I still get the same error. Note that I
> already selected the right plugin and file type when opening the .nek5000
> file. In addition, I've checked the same parameters for the 3dbox example,
> which match the other examples' rea files, and 3dbox still gives no error
> even if I do not switch the coordinate output to true (F COORDINATES).
>
> I really hope that anyone who has encountered and solved this problem can
> assist me. That would be really helpful.
>
> Regards,
> --
> Muhsin Mohd Amin
> Graduate Student
> Department of Mechanical Engineering
> The University of Sheffield
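On Saleh's side question about exporting results without VisIt: Nek5000 has a history-point facility, hpts(), that writes an ASCII time series of interpolated field values at user-specified probe points, which is straightforward to post-process into CSV. A minimal sketch follows (the probe coordinates are placeholders):

      subroutine userchk
      include 'SIZE'
      include 'TOTAL'
c     hpts() reads probe coordinates from the file hpts.in and
c     appends the interpolated solution at those points to the
c     <session>.his file each time it is called.
      call hpts()
      return
      end

with an hpts.in file listing the number of points and then one x y z triple per line:

      2
      0.50 0.50 0.00
      0.25 0.75 0.00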
From nek5000-users at lists.mcs.anl.gov  Thu Apr  2 00:44:00 2015
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Thu, 2 Apr 2015 00:44:00 -0500
Subject: [Nek5000-users] mesh merging
Message-ID:

Dear all,

I have designed a cylinder-box transition mesh (with a hole at the centre of the box), with another mesh inside the box. I tried to use prenek to merge them, but the curved-side values vanish and this error appears:

  2773 2 4 Matrix: SELF!!
     1 SELF!! 1189 1377 1189 1377
     2 SELF!! 1360 1206 1360 1206
  cont: SELF!! 2773
  ?? ABORT: SELF-CHK 1 5 2773 0
  Try to tighten the mesh tolerance!
  3024 8 10 Matrix T: slfchk 3024 0

Then I checked the meshes individually: I set up the BCs in prenek and output a new .rea file for each, and both work fine.

I also tried another route, nekmerge. It gives an error only when I input the cylinder mesh:

  At line 100 of file reader.f (unit = 10, file = 'box1.rea')
  Fortran runtime error: Bad value during floating point read

which was not encountered with the box mesh. Does anyone know a solution? Thank you very much.

Regards,
Simon

From nek5000-users at lists.mcs.anl.gov  Fri Apr  3 07:04:14 2015
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Fri, 3 Apr 2015 12:04:14 +0000
Subject: [Nek5000-users] MPI_TAG_UB too small
Message-ID:

Dear Neks,

I'm having a problem running my simulation on thousands of processors. In my case, I have 6090000 elements and I ran the simulation on 8520 processes. However, the simulation aborted with the error message 'MPI_TAG_UB too small'.

I checked the code and found that the simulation aborts if "nval.lt.(10000+max(lp,lelg))". But in the subroutine 'mpi_attr_get', ival is set to 9999999, in which case nval should be larger than 10000+max(lp,lelg). So why did the simulation not run?

I hope someone can help me with this. Thank you very much.

Best regards,
Tony
From nek5000-users at lists.mcs.anl.gov  Fri Apr  3 11:04:16 2015
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Fri, 03 Apr 2015 18:04:16 +0200
Subject: [Nek5000-users] MPI_TAG_UB too small
In-Reply-To:
References:
Message-ID:

Dear all,

Indeed, we have experienced the same problem with the recent Cray MPI library: the maximum tag value was reduced to 2**21 (the MPI standard only guarantees 32767, i.e. 2**15-1, but most libraries allow values up to about 2**31). The problem in Nek is that in the initial distribution of the mesh and velocities onto the processors, the global element number is used as a message tag, which obviously fails if you have more than 2**21 elements. The check you mention, nval.lt.(10000+max(lp,lelg)), tests exactly that. (mpi_dummf.f is only used in serial runs; in parallel runs mpi_attr_get is a routine provided by the MPI library and returns the maximum tag value of the implementation.)

Anyway, we have recently fixed this problem by changing the way the tag is formed (using the local element number instead of the global one). Maybe the easiest would be to merge these changes into the repo? Otherwise I can also send the changed files, but then of course one has to be careful to have matching revisions.

Best regards,
Philipp
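For reference, the tag limit that trips this check can be queried directly from the MPI library with the same call Nek uses; a minimal standalone sketch:

      program querytag
      include 'mpif.h'
      integer ierr, ival
      logical flag
      call mpi_init(ierr)
c     MPI_TAG_UB is the largest legal message tag. The standard
c     guarantees only 32767, so any scheme that encodes a global
c     element number into the tag can overflow it on large meshes.
      call mpi_attr_get(MPI_COMM_WORLD, MPI_TAG_UB, ival, flag, ierr)
      if (flag) write(6,*) 'maximum message tag:', ival
      call mpi_finalize(ierr)
      end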
From nek5000-users at lists.mcs.anl.gov  Fri Apr  3 13:22:24 2015
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Fri, 3 Apr 2015 18:22:24 +0000
Subject: [Nek5000-users] MPI_TAG_UB too small
In-Reply-To:
References:
Message-ID:

Dear Philipp,

I see. Thank you very much for the explanation. It would be great if you could make the relevant changes, or send me the files you mentioned if that is at all possible.

Best regards,
Tony
From nek5000-users at lists.mcs.anl.gov  Fri Apr  3 16:59:21 2015
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Fri, 03 Apr 2015 23:59:21 +0200
Subject: [Nek5000-users] MPI_TAG_UB too small
In-Reply-To:
References:
Message-ID:

Dear Tony,

We aim to commit the necessary changes to the repository by Monday. I hope that is fine.

Best regards,
Philipp
From nek5000-users at lists.mcs.anl.gov  Sat Apr  4 12:48:02 2015
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Sat, 4 Apr 2015 17:48:02 +0000
Subject: [Nek5000-users] MPI_TAG_UB too small
In-Reply-To:
References:
Message-ID:

Dear Philipp,

That would be great. Thanks again.

Best regards,
Tony
From nek5000-users at lists.mcs.anl.gov  Sun Apr  5 04:49:12 2015
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Sun, 5 Apr 2015 09:49:12 +0000
Subject: [Nek5000-users] turbflow_outflow() - emergency
Message-ID:

Dear all,

I am using Nek5000 to do DNS of a jet flow. As the turbulent flow reaches the end of the domain, the CFL number increases rapidly and the simulation crashes.

My understanding of this issue is that it is caused by turbulent backflow, so one should try to accelerate the flow a little in this region to prevent it from occurring. After trying to reduce dt and the divergence parameter, I included the subroutine accl_outflow() as described in the expansion example. This was unsuccessful, so I also tried to implement the turb_outflow() subroutine, which again did not solve my problem.

I am unsure how to proceed; the only way forward that I can see is a variable timestep.

I look forward to hearing your responses,
Friedrich
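A common stopgap for this kind of backflow-driven CFL blow-up, separate from the outflow routines named above, is a sponge (fringe) region that relaxes the velocity toward a target profile just upstream of the outflow via the userf forcing routine. A minimal sketch, in which the sponge extent, amplitude, and target velocity are all illustrative placeholders:

      subroutine userf (ix,iy,iz,ieg)
      include 'SIZE'
      include 'TOTAL'
      include 'NEKUSE'
      real xs, xe, amp, ubar, chi
c     Ramp the relaxation strength chi linearly from 0 at the
c     sponge start xs to amp at the domain exit xe; the forcing
c     then drags the local velocity toward (ubar,0,0), which
c     damps recirculation before it reaches the outflow face.
      xs   = 18.0    ! sponge start (placeholder)
      xe   = 20.0    ! domain exit  (placeholder)
      amp  = 5.0     ! maximum relaxation rate (placeholder)
      ubar = 1.0     ! target streamwise exit velocity (placeholder)
      chi = 0.0
      if (x.gt.xs) chi = amp*(x-xs)/(xe-xs)
      ffx = chi*(ubar-ux)
      ffy = -chi*uy
      ffz = -chi*uz
      return
      end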
From nek5000-users at lists.mcs.anl.gov  Sun Apr  5 10:46:08 2015
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Sun, 5 Apr 2015 15:46:08 +0000
Subject: [Nek5000-users] turbflow_outflow() - emergency
In-Reply-To:
References:
Message-ID:

Hi Friedrich,

There are some alternatives to the nozzle (turb_outflow) outflow boundary condition, alternatives we are working on right now:

a) a stabilized outflow boundary condition (Dong et al. 2014)
b) convective boundary conditions

These have been tested on simple 2D cases but not on full turbulence. Let us know if you are willing to try and test them for your case, and we can send the files.

Oana
From nek5000-users at lists.mcs.anl.gov  Mon Apr  6 10:42:24 2015
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Mon, 6 Apr 2015 15:42:24 +0000
Subject: [Nek5000-users] turbflow_outflow() - emergency
Message-ID:

Oana,

I am very much willing to try this for my case. I look forward to receiving the files,

Friedrich

From nek5000-users at lists.mcs.anl.gov  Tue Apr  7 12:02:05 2015
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Tue, 7 Apr 2015 21:32:05 +0430
Subject: [Nek5000-users] Non-Conforming Curved Elements
Message-ID:

Dear Neks,

Are non-conforming curved elements currently supported by Nek5000? Thank you in advance.

Kind regards,
Bijan
From nek5000-users at lists.mcs.anl.gov  Wed Apr  8 00:56:07 2015
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Wed, 8 Apr 2015 00:56:07 -0500
Subject: [Nek5000-users] MPI_TAG_UB too small
Message-ID:

Hi Tony,

It seems it may take a little longer to get the changes into the repo, but they were tested by Philipp's group, so they should be fine. I tried sending the patch on the mailing list, but the attachment has not gone through yet. Could you give me your email address, or write to me at oanam at anl.gov?

Regards,
Oana
From nek5000-users at lists.mcs.anl.gov  Wed Apr  8 01:00:06 2015
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Wed, 8 Apr 2015 01:00:06 -0500
Subject: [Nek5000-users] turbflow_outflow() - emergency
Message-ID:

Hi Friedrich,

It would be much easier if we communicate off-list. Could you please drop me a line at oanam at anl.gov, and I can send you the files with the appropriate explanations? It could also be that some boundary condition already in Nek, such as 'on', can be used, so it would be useful if you describe your case to me a bit via email.

Regards,
Oana
From nek5000-users at lists.mcs.anl.gov  Fri Apr 10 09:23:28 2015
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Fri, 10 Apr 2015 16:23:28 +0200
Subject: [Nek5000-users] Regarding Restart
Message-ID:

Hi Neks,

I am facing a strange problem with restarting simulations. I place the restart file blah.f0000* in the same folder from which I launch the simulation, but I still get this error:

  *************************************
  byte_read() :: fopen failure2!
  ERROR: Error reading restart header in mfi_prepare ierr= 1
  *************************************

To overcome this, I have been replacing the "name" variable in byte.c with the full path of the restart file, and that works, but I would like to get rid of this hack. Could someone tell me why I am getting this problem?

Thanks,
Kamal

PS: My SESSION.NAME file contains the session name and the path.
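For reference, nek resolves file names relative to the directory given in SESSION.NAME, so one thing worth checking is whether the path line ends with a trailing slash, since the path and the file name are concatenated directly. A typical layout, with placeholder names (session name on the first line, run directory on the second):

      mycase
      /home/user/runs/mycase/

and a restart request in the .rea file then takes roughly this form, with the file opened in that directory:

       1 PRESOLVE/RESTART OPTIONS  *****
      blah.f00001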
From nek5000-users at lists.mcs.anl.gov  Fri Apr 10 23:46:07 2015
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Fri, 10 Apr 2015 23:46:07 -0500
Subject: [Nek5000-users] Mode decomposition with parallelism
Message-ID:

Hi,

I am simulating Rayleigh-Benard convection in a cylindrical cell. Although I can save the data and write a separate script to analyze it on my laptop, it would be far better to do the analysis during the run itself.

Has anyone done this before? I would like an example script that I can then modify to suit my case. Or is there any other suggestion?

Thanks.

Regards,
Simon
From nek5000-users at lists.mcs.anl.gov  Tue Apr 14 09:42:22 2015
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Tue, 14 Apr 2015 15:42:22 +0100
Subject: [Nek5000-users] heat flux BC
Message-ID:

Hi again Neks,

Sorry to ask again about this problem, as there has been no answer yet. I have been trying to understand how the flux boundary condition works here, for both the nondimensional and the dimensional problem. However, when I impose a constant heat flux at the boundary in userbc, I get a constant temperature value after a development length downstream. That should not be the case: the flow temperature should keep increasing with the energy input, especially for the dimensional problem. Would anyone mind sharing their knowledge on this?

Thanks,
Muhsin

--
Muhsin Mohd Amin
Graduate Student
Department of Mechanical Engineering
The University of Sheffield
From nek5000-users at lists.mcs.anl.gov  Tue Apr 14 10:50:51 2015
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Tue, 14 Apr 2015 10:50:51 -0500
Subject: [Nek5000-users] Mode decomposition with parallelism
In-Reply-To:
References:
Message-ID:

Dear all,

Regarding the previous request, there are a few points to specify.

We would like to do POD of the velocity field as it varies in time, and to observe the strength of the different modes plotted against time. It needs to capture the details of the flow reversal at Ra ~ 1e8. The details of the POD method are not very important; I can modify the orthogonality function to suit my case, but I would like to have an example first.

Also, there will be around a few hundred thousand to a few million elements, so it is essential to do this efficiently on the cluster.

I hope this helps clarify.

Regards,
Simon
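One common way to do this during the run is to precompute the spatial POD modes offline from an initial set of snapshots and then, as the simulation advances, record only the modal time coefficients a_k(t) = (u, phi_k) as mass-weighted inner products, which parallelize trivially. A sketch for one velocity component follows; the number of modes, the mode storage, and how the modes are loaded are all assumptions for illustration:

      subroutine userchk
      include 'SIZE'
      include 'TOTAL'
      parameter (nmodes=4)      ! assumed number of retained modes
c     phix holds precomputed x-velocity POD modes; how they are
c     read in (e.g. from field files) is left open here.
      common /mypod/ phix(lx1*ly1*lz1*lelv,nmodes)
      real ak(nmodes), glsc3
      integer k, n
      n = nx1*ny1*nz1*nelv
c     a_k(t) = integral of u*phi_k dV, computed as a mass-matrix-
c     weighted inner product; glsc3 sums vx*phix*bm1 across all
c     MPI ranks, so every rank gets the same coefficient.
      do k = 1, nmodes
         ak(k) = glsc3(vx, phix(1,k), bm1, n)
      enddo
      if (nid.eq.0) write(6,'(1p10e14.6)') time, (ak(k),k=1,nmodes)
      return
      end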
From nek5000-users at lists.mcs.anl.gov  Tue Apr  7 10:34:15 2015
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Tue, 7 Apr 2015 15:34:15 +0000
Subject: [Nek5000-users] MPI_TAG_UB too small
In-Reply-To:
References:
Message-ID:

Hi Tony,

I am sending the files from Philipp off-list, since it seems we are not 100% ready to do the commit, but these files were already tested, so I think there should be no problem. In case you want to understand better what is going on, I recommend doing a diff before you merge, just to see the differences from the main code.

Oana
------------------------------

Message: 2
Date: Fri, 03 Apr 2015 23:59:21 +0200
From: nek5000-users at lists.mcs.anl.gov
Subject: Re: [Nek5000-users] MPI_TAG_UB too small

Dear Tony,

we aim to commit the necessary changes to the repository by Monday. Hope that this is fine.

Best regards,
Philipp

On 2015-04-03 20:22, nek5000-users at lists.mcs.anl.gov wrote:
> Dear Philipp,
>
> I see. Thank you very much for the explanation. It would be great if you could help me make the relevant changes, or send me the files you mentioned if that is at all possible.
>
> Best regards,
> Tony
------------------------------

End of Nek5000-users Digest, Vol 74, Issue 3
********************************************
-------------- next part --------------
A non-text attachment was scrubbed...
Name: tag.tgz
Type: application/x-compressed-tar
Size: 86522 bytes
Desc: tag.tgz
URL:
From nek5000-users at lists.mcs.anl.gov Mon Apr 20 09:17:27 2015
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Mon, 20 Apr 2015 16:17:27 +0200
Subject: [Nek5000-users] use n2to3 with symmetry boundary condition
In-Reply-To: References: Message-ID:

Hello, dear all,

I am currently using n2to3 to generate a 3d mesh from a 2d mesh. I am wondering if it is possible to specify a symmetry boundary condition in the z-direction, on one side of the domain. As far as I can tell from the n2to3 web page, https://nek5000.mcs.anl.gov/index.php/N2to3 , only 'P', 'v' and 'O' can be specified there. Is that indeed the case, so that a symmetry BC cannot be imposed this way?

Thanks in advance,
Lailai
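If n2to3 itself only accepts 'P', 'v' and 'O' on the z-faces, one common workaround is to let n2to3 write a placeholder condition on that side and then relabel the affected faces in usrdat2. The sketch below does this under stated assumptions: 'SYM' is the standard Nek5000 symmetry condition, and cbc, zm1, facind and glmax are standard, but the use of a placeholder, the tolerance, and relabelling the high-z side (rather than the low-z one) are assumptions of this sketch.

c-----------------------------------------------------------------------
c     Sketch: relabel the velocity BC on the high-z plane to 'SYM'.
c     Assumes the 2d mesh was extruded along z by n2to3.
      subroutine usrdat2
      include 'SIZE'
      include 'TOTAL'
      integer e,f,i,j,k,kx1,kx2,ky1,ky2,kz1,kz2,npts
      real zmax,zf,tol
      real glmax

      zmax = glmax(zm1,nx1*ny1*nz1*nelv)
      tol  = 1.e-6                      ! geometric tolerance (assumed)

      do e=1,nelv
      do f=1,2*ndim
         call facind(kx1,kx2,ky1,ky2,kz1,kz2,nx1,ny1,nz1,f)
         zf   = 0.
         npts = 0
         do k=kz1,kz2                   ! average z over the face points
         do j=ky1,ky2
         do i=kx1,kx2
            zf   = zf + zm1(i,j,k,e)
            npts = npts + 1
         enddo
         enddo
         enddo
         zf = zf/npts
         if (abs(zf-zmax).lt.tol) cbc(f,e,1) = 'SYM'  ! velocity field
      enddo
      enddo

      return
      end
c-----------------------------------------------------------------------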
From nek5000-users at lists.mcs.anl.gov Mon Apr 20 16:22:31 2015
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Mon, 20 Apr 2015 21:22:31 +0000 (UTC)
Subject: [Nek5000-users] Pressure gradient
Message-ID:

Hi,

Is there any problem with specifying a fixed pressure gradient rather than a fixed flow rate? I want to simulate a channel flow of non-Newtonian fluids, and it is easier to use a pressure gradient.

Hamid
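A fixed mean pressure gradient is usually imposed as a constant body force in userf, with periodic boundary conditions in the streamwise direction and the fixed-flow-rate machinery left switched off. Below is a minimal sketch: userf and the ffx/ffy/ffz variables from NEKUSE are standard Nek5000, while the value of dpdx is an assumption to be replaced by the target gradient. The flow rate then becomes an output of the simulation rather than an input, which is often what one wants for non-Newtonian rheology.

c-----------------------------------------------------------------------
c     Sketch: drive a periodic channel with a constant streamwise
c     pressure gradient, applied as a body force.  dpdx is assumed.
      subroutine userf(ix,iy,iz,eg)
      include 'SIZE'
      include 'TOTAL'
      include 'NEKUSE'
      integer ix,iy,iz,eg
      real dpdx
      parameter (dpdx = -1.0)  ! target mean pressure gradient (assumed)

      ffx = -dpdx              ! body force balancing dp/dx
      ffy = 0.0
      ffz = 0.0

      return
      end
c-----------------------------------------------------------------------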
From nek5000-users at lists.mcs.anl.gov Mon Apr 27 05:52:11 2015
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Mon, 27 Apr 2015 12:52:11 +0200 (CEST)
Subject: [Nek5000-users] distribute a 1d profile to 3d domain
Message-ID:

Dear Nek users,

I have a 1d profile u(y) computed with the "planar_average_s" subroutine. Now I want to redistribute this profile to the whole domain in order to obtain a 3d field u(x,y,z). How can I do this?

Thanks

--
Mr. Mirko Farano
Ph.D. student at Politecnico di Bari
Department of Mechanics, Mathematics and Management - Fluid Machinery and Energy Systems
Via Re David, 200 - 70125 Bari (Italy)
mirko.farano at poliba.it
In joint supervision with the Ecole Nationale Supérieure d'Arts et Métiers de Paris - Paristech
Dynfluid Laboratory
151 Boulevard de l'Hôpital - 75013 Paris (France)
mirko.farano at ensam.eu

From nek5000-users at lists.mcs.anl.gov Mon Apr 27 14:02:54 2015
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Mon, 27 Apr 2015 19:02:54 +0000
Subject: [Nek5000-users] distribute a 1d profile to 3d domain
In-Reply-To: References: Message-ID:

Hi Mirko,

I once did a similar operation, but for duplicating a z-profile (resulting from the planar_average_z() routine), with the following code.

Aleks

c-----------------------------------------------------------------------
      subroutine fill3d_1d_t(u,ua)
c
c     Fill 3d array with 1d t-values by replicating them in r-s planes
c
      include 'SIZE'
      include 'PARALLEL'
      include 'ZPER'    ! define nelx,nely,nelz here!
c
      real u(nx1,ny1,nz1,nelv),ua(nz1,nelz)
      integer e,eg,ex,ey,ez
c
      do e=1,nelv
c
         eg = lglel(e)
         call get_exyz(ex,ey,ez,eg,nelx,nely,nelz)
c
         do k=1,nz1
         do j=1,ny1
         do i=1,nx1
            u(i,j,k,e) = ua(k,ez)
         enddo
         enddo
         enddo
      enddo
c
      return
      end
c-----------------------------------------------------------------------

From nek5000-users at lists.mcs.anl.gov Tue Apr 28 11:20:40 2015
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Tue, 28 Apr 2015 11:20:40 -0500 (CDT)
Subject: Re: [Nek5000-users] distribute a 1d profile to 3d domain
In-Reply-To: References: Message-ID:

Dear Mirko,

It looks like

   call planar_fill_s

with the appropriate arguments should work...

Paul

On Mon, 27 Apr 2015, nek5000-users at lists.mcs.anl.gov wrote:
> Dear Nek users,
>
> I have a 1d profile u(y) computed with the "planar_average_s" subroutine. Now I want to redistribute this profile to the whole domain in order to obtain a 3d field u(x,y,z). How can I do this?
>
> Thanks
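For the s-direction, the usage Paul suggests would look roughly like the sketch below, called from userchk. The argument order is an assumption, made by analogy with how planar_average_s is used in the distributed examples (a profile array dimensioned (ly1,lely) plus two work arrays); check the actual routine in the source before relying on it.

c-----------------------------------------------------------------------
c     Sketch (argument order assumed): average vx in the s-direction,
c     then replicate the 1d profile over the whole domain and dump it.
      subroutine userchk
      include 'SIZE'
      include 'TOTAL'
      include 'ZPER'
      real ua(ly1,lely),w1(ly1,lely),w2(ly1,lely)
      real u3d(lx1,ly1,lz1,lelv)

      call planar_average_s(ua,vx,w1,w2)  ! 1d profile u(y)
      call planar_fill_s(u3d,ua)          ! replicate u(y) to 3d
      call outpost(u3d,vy,vz,pr,t,'fil')  ! write the result out

      return
      end
c-----------------------------------------------------------------------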
From nek5000-users at lists.mcs.anl.gov Tue Apr 28 14:28:28 2015
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Tue, 28 Apr 2015 21:28:28 +0200
Subject: [Nek5000-users] Open Postdoc Position at KTH Stockholm, Sweden with Nek5000
Message-ID:

Dear all,

Sorry to post it here, but it might be of interest to some of you: there is an open postdoc position at KTH Stockholm, Sweden, dealing with Nek5000:

https://www.kth.se/en/om/work-at-kth/lediga-jobb/what:job/jobID:64250/where:4/

From the announcement: In the recent FET-HPC call under Horizon 2020 a number of exascale computing projects were funded, and the successful candidate is expected to join the ExaFLOW project, a collaboration of major European CFD centres (from Sweden, UK, Germany and Switzerland) and HPC centres (from Sweden, Germany and UK) that aims at designing, implementing and testing novel CFD methods with the potential to reach the exascale. The successful candidate is expected to contribute to the efficient implementation of novel algorithms, e.g. in the area of adaptivity, and also to assume a managerial role in the project. The implementations will be based on the Nek5000 code in collaboration with the KTH Mechanics Department.

Best regards,
Philipp

From nek5000-users at lists.mcs.anl.gov Tue Apr 28 15:33:48 2015
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Tue, 28 Apr 2015 22:33:48 +0200 (CEST)
Subject: [Nek5000-users] R: Re: distribute a 1d profile to 3d domain
Message-ID:

Hi Paul and Aleks,

Thanks for the quick answers. I am a new Nek5000 user and not yet familiar with all the functions and subroutines. I have compared the two routines, "planar_fill_t(u,ua)" and "fill3d_1d_t(u,ua)", and both seem to do what I need. For now I have used "fill3d_1d_t(u,ua)" and it seems to work. I'll keep you posted!

--
Mr. Mirko Farano
Ph.D. student at Politecnico di Bari
mirko.farano at poliba.it / mirko.farano at ensam.eu

>----Original message----
>From: nek5000-users at lists.mcs.anl.gov
>Date: 28-apr-2015 18.20
>Subject: Re: [Nek5000-users] distribute a 1d profile to 3d domain
>
>Dear Mirko,
>
>It looks like
>
>   call planar_fill_s
>
>with the appropriate arguments should work...
>
>Paul

From nek5000-users at lists.mcs.anl.gov Wed Apr 29 11:47:22 2015
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Wed, 29 Apr 2015 16:47:22 +0000 (UTC)
Subject: [Nek5000-users] MOAB Cubit problem
Message-ID:

Hi,

I use Trelis (v15) to generate the mesh. However, when I try to use mbconvert to generate the h5m file, it says:

[0]MOAB ERROR: --------------------- Error Message ------------------------------------
[0]MOAB ERROR: This doesn't appear to be a .cub file!
[0]MOAB ERROR: load_file() line 310 in src/io/Tqdcfr.cpp
[0]MOAB ERROR: --------------------- Error Message ------------------------------------
[0]MOAB ERROR: Failed to load file after trying all possible readers!
[0]MOAB ERROR: serial_load_file() line 634 in src/Core.cpp
[0]MOAB ERROR: load_file() line 523 in src/Core.cpp
Failed to load "/home/hamid/nek5_svn/moab_conjht/cubit01.cub".
Error code: MB_FAILURE (16)
Error message: Failed to load file after trying all possible readers

I have downloaded other cub files from the repository and mbconvert works fine on them. However, opening and saving those files with Trelis causes the same problem. I think the file format in Trelis is different from the one in Cubit. Is that true? If yes, what is the solution?

Hamid
From nek5000-users at lists.mcs.anl.gov Wed Apr 29 13:23:23 2015
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Wed, 29 Apr 2015 13:23:23 -0500
Subject: Re: [Nek5000-users] MOAB Cubit problem
In-Reply-To: References: Message-ID:

Hamid,

It is possible that the file formats are quite different between Cubit and Trelis. We have not tested with the Trelis versions so far, and we do plan to acquire a license to make these conversions possible. Meanwhile, if you can use Exodus or any other format that MOAB understands directly, that would simplify your workflow. We will keep you updated about the progress, if you're interested.

Vijay

From nek5000-users at lists.mcs.anl.gov Wed Apr 29 13:59:04 2015
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Wed, 29 Apr 2015 14:59:04 -0400
Subject: Re: [Nek5000-users] MOAB Cubit problem
In-Reply-To: References: Message-ID:

Hamid,

You have to export your mesh (defined as a block with all the side-sets) from Trelis to an Exodus format (a .g file, for example), then convert that to h5m.

Mohsen

--
Mohsen Behzad, Ph.D.
Research Associate, Mechanical and Industrial Eng. Dept., University of Toronto

From nek5000-users at lists.mcs.anl.gov Thu Apr 30 10:00:07 2015
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Thu, 30 Apr 2015 16:00:07 +0100
Subject: [Nek5000-users] lxd from old versions of NekBone
In-Reply-To: <554242F6.1090606@gmail.com> References: <554242F6.1090606@gmail.com> Message-ID:

Hi Neks,

I would like to compare performance results from an earlier version of NekBone with NekBone 3.1. However, I see that lxd is no longer required in the SIZE file. What value should I use in the earlier versions to get exactly the same run as with version 3.1?
Thanks,
Manos

--
The University of Edinburgh is a charitable body, registered in Scotland, with registration number SC005336.

From nek5000-users at lists.mcs.anl.gov Thu Apr 30 13:29:47 2015
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Thu, 30 Apr 2015 18:29:47 +0000 (UTC)
Subject: [Nek5000-users] MOAB Cubit problem
Message-ID:

Thanks

From nek5000-users at lists.mcs.anl.gov Wed Apr 29 14:28:07 2015
From: nek5000-users at lists.mcs.anl.gov (nek5000-users at lists.mcs.anl.gov)
Date: Wed, 29 Apr 2015 19:28:07 +0000 (UTC)
Subject: Re: [Nek5000-users] [MOAB-dev] MOAB Cubit problem
In-Reply-To: <7fb783c7e7be4e7489da43a857310618@LUCKMAN.anl.gov> References: <7fb783c7e7be4e7489da43a857310618@LUCKMAN.anl.gov> Message-ID:

If the mesh file is not too large, please send us the mesh file that breaks mbconvert. Thanks.

Rajeev Jain
630-252-3176 / 630-252-5986 (fax)
jain at mcs.anl.gov