<div dir="ltr"><div>I'll give automatic workers a try.<br>That would make sense since the passive workers aren't per-client.<br><br></div><div>- Tim<br></div></div><div class="gmail_extra"><br><div class="gmail_quote">On Thu, Sep 11, 2014 at 1:58 PM, Mihael Hategan <span dir="ltr"><<a href="mailto:hategan@mcs.anl.gov" target="_blank">hategan@mcs.anl.gov</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Can you try automatic workers?<br>
<br>
Passive workers are something I didn't quite get to yet. I believe that<br>
they should only be allowed in shared mode. Thoughts?<br>
<span class="HOEnZb"><font color="#888888"><br>
Mihael<br>
</font></span><div class="HOEnZb"><div class="h5"><br>
On Thu, 2014-09-11 at 13:54 -0500, Tim Armstrong wrote:<br>
> Yeah, local workers, this was started with start-coaster-service with conf<br>
> file:<br>
><br>
> export WORKER_MODE=local<br>
> export IPADDR=127.0.0.1<br>
> export SERVICE_PORT=53363<br>
> export JOBSPERNODE=4<br>
> export LOGDIR=$(pwd)<br>
><br>
><br>
> - Tim<br>
><br>
> On Thu, Sep 11, 2014 at 1:44 PM, Mihael Hategan <<a href="mailto:hategan@mcs.anl.gov">hategan@mcs.anl.gov</a>> wrote:<br>
><br>
> > Passive workers?<br>
> ><br>
> > On Thu, 2014-09-11 at 13:26 -0500, Tim Armstrong wrote:<br>
> > > Oops, forgot about that<br>
> > ><br>
> > ><br>
> > ><br>
> > > On Thu, Sep 11, 2014 at 1:23 PM, Mihael Hategan <<a href="mailto:hategan@mcs.anl.gov">hategan@mcs.anl.gov</a>><br>
> > wrote:<br>
> > ><br>
> > > > The coaster logging was broken, and that brokenness caused it to print<br>
> > > > everything on stdout. That got fixed, so the actual log is now<br>
> > > > in ./cps*.log.<br>
> > > ><br>
> > > > So I probably need that log.<br>
> > > ><br>
> > > > Mihael<br>
> > > ><br>
> > > > On Thu, 2014-09-11 at 13:10 -0500, Tim Armstrong wrote:<br>
> > > > > I meant the github master, but it turns out that I had had the wrong<br>
> > > > Swift<br>
> > > > > on my path. Apologies for the confusion.<br>
> > > > ><br>
> > > > > I've rerun with the current one.<br>
> > > > ><br>
> > > > > I'm getting a null pointer exception on line 226 of<br>
> > > > > BlockQueueProcessor.java. Adding some printfs revealed that<br>
> > settings was<br>
> > > > > null.<br>
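The failure mode described here (a job reaching checkJob() before the processor's settings have been initialized) boils down to something like the sketch below. The names are hypothetical, not the actual BlockQueueProcessor code; failing fast with a clear message is one defensive option:

```java
import java.util.Objects;

// Minimal reproduction of the race described above: if a job is enqueued
// before the queue processor has been given its Settings, any field access
// in checkJob() throws a bare NullPointerException. An explicit null check
// turns that into a diagnosable error instead.
class QueueProcessorSketch {

    static class Settings {
        int maxWalltimeSeconds = 600;   // placeholder limit for the sketch
    }

    private Settings settings;          // null until the service configures it

    void setSettings(Settings s) {
        this.settings = s;
    }

    void checkJob(int walltimeSeconds) {
        // Fail fast with a message rather than an anonymous NPE deeper in.
        Objects.requireNonNull(settings,
                "job enqueued before settings were initialized");
        if (walltimeSeconds > settings.maxWalltimeSeconds) {
            throw new IllegalArgumentException("walltime exceeds limit");
        }
    }
}
```

With the check in place, a job submitted too early produces an exception that names the real problem instead of an opaque NPE at a field access.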
> > > > ><br>
> > > > > Log attached.<br>
> > > > ><br>
> > > > > - Tim<br>
> > > > ><br>
> > > > > Job: Job(id:0 600.000s)<br>
> > > > > Settings: null<br>
> > > > > java.lang.NullPointerException<br>
> > > > > at org.globus.cog.abstraction.coaster.service.job.manager.BlockQueueProcessor.checkJob(BlockQueueProcessor.java:228)<br>
> > > > > at org.globus.cog.abstraction.coaster.service.job.manager.BlockQueueProcessor.enqueue1(BlockQueueProcessor.java:210)<br>
> > > > > at org.globus.cog.abstraction.coaster.service.job.manager.BlockQueueProcessor.enqueue(BlockQueueProcessor.java:204)<br>
> > > > > at org.globus.cog.abstraction.coaster.service.job.manager.JobQueue.enqueue(JobQueue.java:103)<br>
> > > > > at org.globus.cog.abstraction.coaster.service.SubmitJobHandler.requestComplete(SubmitJobHandler.java:96)<br>
> > > > > at org.globus.cog.coaster.handlers.RequestHandler.receiveCompleted(RequestHandler.java:112)<br>
> > > > > at org.globus.cog.coaster.channels.AbstractCoasterChannel.handleRequest(AbstractCoasterChannel.java:589)<br>
> > > > > at org.globus.cog.coaster.channels.AbstractStreamCoasterChannel.step(AbstractStreamCoasterChannel.java:175)<br>
> > > > > at org.globus.cog.coaster.channels.Multiplexer.run(Multiplexer.java:90)<br>
> > > > ><br>
> > > > > On Thu, Sep 11, 2014 at 12:41 PM, Tim Armstrong <<br>
> > > > <a href="mailto:tim.g.armstrong@gmail.com">tim.g.armstrong@gmail.com</a>><br>
> > > > > wrote:<br>
> > > > ><br>
> > > > > > I thought I was running the latest trunk, I'll rebuild and see if<br>
> > I can<br>
> > > > > > reproduce the issue.<br>
> > > > > ><br>
> > > > > > - Tim<br>
> > > > > ><br>
> > > > > > On Thu, Sep 11, 2014 at 12:39 PM, Mihael Hategan <<br>
> > <a href="mailto:hategan@mcs.anl.gov">hategan@mcs.anl.gov</a>><br>
> > > > > > wrote:<br>
> > > > > ><br>
> > > > > >> The method "getMetaChannel()" has been removed. Where did you get<br>
> > the<br>
> > > > > >> code from?<br>
> > > > > >><br>
> > > > > >> Mihael<br>
> > > > > >><br>
> > > > > >> On Thu, 2014-09-11 at 12:16 -0500, Tim Armstrong wrote:<br>
> > > > > >> > I'm seeing failures when running Swift/T tests with<br>
> > > > > >> > start-coaster-service.sh.<br>
> > > > > >> ><br>
> > > > > >> > E.g. the turbine test coaster-exec-1. I can provide<br>
> > instructions<br>
> > > > for<br>
> > > > > >> > running the test if needed (roughly, you need to build Swift/T<br>
> > with<br>
> > > > > >> coaster<br>
> > > > > >> > support enabled, then make tests/coaster-exec-1.result in the<br>
> > > > turbine<br>
> > > > > >> > directory). The github swift-t release is up to date if you<br>
> > want<br>
> > > > to use<br>
> > > > > >> > that.<br>
> > > > > >> ><br>
> > > > > >> > Full log is attached, stack trace excerpt is below.<br>
> > > > > >> ><br>
> > > > > >> > - Tim<br>
> > > > > >> ><br>
> > > > > >> > 2014-09-11 12:11:13,708-0500 INFO BlockQueueProcessor Starting... id=0911-1112130<br>
> > > > > >> > Using threaded sender for TCPChannel [type: server, contact: <a href="http://127.0.0.1:48242" target="_blank">127.0.0.1:48242</a>]<br>
> > > > > >> > 2014-09-11 12:11:13,708-0500 INFO AbstractStreamCoasterChannel Using threaded sender for TCPChannel [type: server, contact: <a href="http://127.0.0.1:48242" target="_blank">127.0.0.1:48242</a>]<br>
> > > > > >> > org.globus.cog.coaster.channels.ChannelException: Invalid channel: null@id://null-nullS<br>
> > > > > >> > at org.globus.cog.coaster.channels.ChannelManager.getMetaChannel(ChannelManager.java:452)<br>
> > > > > >> > at org.globus.cog.coaster.channels.ChannelManager.reserveChannel(ChannelManager.java:226)<br>
> > > > > >> > at org.globus.cog.abstraction.coaster.service.job.manager.PassiveQueueProcessor.setClientChannelContext(PassiveQueueProcessor.java:41)<br>
> > > > > >> > at org.globus.cog.abstraction.coaster.service.job.manager.JobQueue.setClientChannelContext(JobQueue.java:135)<br>
> > > > > >> > at org.globus.cog.abstraction.coaster.service.SubmitJobHandler.requestComplete(SubmitJobHandler.java:77)<br>
> > > > > >> > at org.globus.cog.coaster.handlers.RequestHandler.receiveCompleted(RequestHandler.java:88)<br>
> > > > > >> > at org.globus.cog.coaster.channels.AbstractCoasterChannel.handleRequest(AbstractCoasterChannel.java:527)<br>
> > > > > >> > at org.globus.cog.coaster.channels.AbstractStreamCoasterChannel.step(AbstractStreamCoasterChannel.java:173)<br>
> > > > > >> > at org.globus.cog.coaster.channels.Multiplexer.run(Multiplexer.java:70)<br>
> > > > > >> > provider=local<br>
> > > > > >> > 2014-09-11 12:11:13,930-0500 INFO ExecutionTaskHandler provider=local<br>
> > > > > >> > org.globus.cog.coaster.channels.ChannelException: Invalid channel: null@id://null-nullS<br>
> > > > > >> > at org.globus.cog.coaster.channels.ChannelManager.getMetaChannel(ChannelManager.java:452)<br>
> > > > > >> > at org.globus.cog.coaster.channels.ChannelManager.getMetaChannel(ChannelManager.java:432)<br>
> > > > > >> > at org.globus.cog.coaster.channels.ChannelManager.reserveLongTerm(ChannelManager.java:472)<br>
> > > > > >> > at org.globus.cog.abstraction.coaster.service.SubmitJobHandler.requestComplete(SubmitJobHandler.java:80)<br>
> > > > > >> > at org.globus.cog.coaster.handlers.RequestHandler.receiveCompleted(RequestHandler.java:88)<br>
> > > > > >> > at org.globus.cog.coaster.channels.AbstractCoasterChannel.handleRequest(AbstractCoasterChannel.java:527)<br>
> > > > > >> > at org.globus.cog.coaster.channels.AbstractStreamCoasterChannel.step(AbstractStreamCoasterChannel.java:173)<br>
> > > > > >> > at org.globus.cog.coaster.channels.Multiplexer.run(Multiplexer.java:70)<br>
> > > > > >> > Handler(tag: 38907, SUBMITJOB) sending error: Could not deserialize job description<br>
> > > > > >> > org.globus.cog.coaster.ProtocolException: Could not deserialize job description<br>
> > > > > >> > at org.globus.cog.abstraction.coaster.service.SubmitJobHandler.requestComplete(SubmitJobHandler.java:84)<br>
> > > > > >> > at org.globus.cog.coaster.handlers.RequestHandler.receiveCompleted(RequestHandler.java:88)<br>
> > > > > >> > at org.globus.cog.coaster.channels.AbstractCoasterChannel.handleRequest(AbstractCoasterChannel.java:527)<br>
> > > > > >> > at org.globus.cog.coaster.channels.AbstractStreamCoasterChannel.step(AbstractStreamCoasterChannel.java:173)<br>
> > > > > >> > at org.globus.cog.coaster.channels.Multiplexer.run(Multiplexer.java:70)<br>
> > > > > >> > Caused by: org.globus.cog.coaster.channels.ChannelException: Invalid channel: null@id://null-nullS<br>
> > > > > >> > at org.globus.cog.coaster.channels.ChannelManager.getMetaChannel(ChannelManager.java:452)<br>
> > > > > >> > at org.globus.cog.coaster.channels.ChannelManager.getMetaChannel(ChannelManager.java:432)<br>
> > > > > >> > at org.globus.cog.coaster.channels.ChannelManager.reserveLongTerm(ChannelManager.java:472)<br>
> > > > > >> > at org.globus.cog.abstraction.coaster.service.SubmitJobHandler.requestComplete(SubmitJobHandler.java:80)<br>
> > > > > >> > ... 4 more<br>
> > > > > >> > 2014-09-11 12:11:13,937-0500 INFO RequestReply Handler(tag: 38907, SUBMITJOB) sending error: Could not deserialize job description<br>
> > > > > >> > org.globus.cog.coaster.ProtocolException: Could not deserialize job description<br>
> > > > > >> > at org.globus.cog.abstraction.coaster.service.SubmitJobHandler.requestComplete(SubmitJobHandler.java:84)<br>
> > > > > >> > at org.globus.cog.coaster.handlers.RequestHandler.receiveCompleted(RequestHandler.java:88)<br>
> > > > > >> > at org.globus.cog.coaster.channels.AbstractCoasterChannel.handleRequest(AbstractCoasterChannel.java:527)<br>
> > > > > >> > at org.globus.cog.coaster.channels.AbstractStreamCoasterChannel.step(AbstractStreamCoasterChannel.java:173)<br>
> > > > > >> > at org.globus.cog.coaster.channels.Multiplexer.run(Multiplexer.java:70)<br>
> > > > > >> > Caused by: org.globus.cog.coaster.channels.ChannelException: Invalid channel: null@id://null-nullS<br>
> > > > > >> > at org.globus.cog.coaster.channels.ChannelManager.getMetaChannel(ChannelManager.java:452)<br>
> > > > > >> > at org.globus.cog.coaster.channels.ChannelManager.getMetaChannel(ChannelManager.java:432)<br>
> > > > > >> > at org.globus.cog.coaster.channels.ChannelManager.reserveLongTerm(ChannelManager.java:472)<br>
> > > > > >> > at org.globus.cog.abstraction.coaster.service.SubmitJobHandler.requestComplete(SubmitJobHandler.java:80)<br>
> > > > > >> > ... 4 more<br>
> > > > > >> ><br>
> > > > > >> ><br>
> > > > > >> > On Thu, Sep 11, 2014 at 10:30 AM, Tim Armstrong <<br>
> > > > > >> <a href="mailto:tim.g.armstrong@gmail.com">tim.g.armstrong@gmail.com</a>><br>
> > > > > >> > wrote:<br>
> > > > > >> ><br>
> > > > > >> > > This all sounds great.<br>
> > > > > >> > ><br>
> > > > > >> > > Just to check that I've understood correctly, from the<br>
> > client's<br>
> > > > point<br>
> > > > > >> of<br>
> > > > > >> > > view:<br>
> > > > > >> > > * The per-client settings behave the same if -shared is not<br>
> > > > provided.<br>
> > > > > >> > > * Per-client settings are ignored if -shared is provided<br>
> > > > > >> > ><br>
> > > > > >> > > I had one question:<br>
> > > > > >> > > * Do automatically allocated workers work with per-client<br>
> > > > settings? I<br>
> > > > > >> > > understand there were some issues related to sharing workers<br>
> > > > between<br>
> > > > > >> > > clients. Was the solution to have separate worker pools, or<br>
> > is<br>
> > > > this<br>
> > > > > >> just<br>
> > > > > >> > > not supported?<br>
> > > > > >> > ><br>
> > > > > >> > > - Tim<br>
> > > > > >> > ><br>
> > > > > >> > > On Mon, Sep 8, 2014 at 2:38 PM, Mihael Hategan <<br>
> > > > <a href="mailto:hategan@mcs.anl.gov">hategan@mcs.anl.gov</a>><br>
> > > > > >> > > wrote:<br>
> > > > > >> > ><br>
> > > > > >> > >> So...<br>
> > > > > >> > >><br>
> > > > > >> > >> There were bugs. Lots of bugs.<br>
> > > > > >> > >> I did some work over the weekend to fix some of these and<br>
> > clean<br>
> > > > up<br>
> > > > > >> the<br>
> > > > > >> > >> coaster code. Here's a summary:<br>
> > > > > >> > >><br>
> > > > > >> > >> - there was some stuff in the low level coaster code to deal<br>
> > with<br>
> > > > > >> > >> persisting coaster channels over multiple connections with<br>
> > > > various<br>
> > > > > >> > >> options, like periodic connections, client or server<br>
> > initiated<br>
> > > > > >> > >> connections, buffering of commands, etc. None of this was<br>
> > used by<br>
> > > > > >> Swift,<br>
> > > > > >> > >> and the code was pretty messy. I removed that.<br>
> > > > > >> > >> - there were some issues with multiple clients:<br>
> > > > > >> > >> * improper shutdown of relevant workers when a client<br>
> > > > disconnected<br>
> > > > > >> > >> * the worker task dispatcher was a singleton and had a<br>
> > > > reference to<br>
> > > > > >> > >> one block allocator, whereas multiple clients involved<br>
> > multiple<br>
> > > > > >> > >> allocators.<br>
> > > > > >> > >> - there were a bunch of locking issues in the C client that<br>
> > > > valgrind<br>
> > > > > >> > >> caught<br>
> > > > > >> > >> - the idea of remote job ids was a bit hard to work with.<br>
> > This<br>
> > > > > >> remote id<br>
> > > > > >> > >> was the job id that the service assigned to a job. This is<br>
> > > > necessary<br>
> > > > > >> > >> because two different clients can submit jobs with the same<br>
> > id.<br>
> > > > The<br>
> > > > > >> > >> remote id would be communicated to the client as the reply<br>
> > to the<br>
> > > > > >> submit<br>
> > > > > >> > >> request. However, it was entirely possible for a notification<br>
> > > > about<br>
> > > > > >> job<br>
> > > > > >> > >> status to be sent to the client before the submit reply was.<br>
> > > > Since<br>
> > > > > >> > >> notifications were sent using the remote-id, the client would<br>
> > > > have no<br>
> > > > > >> > >> idea what job the notifications belonged to. Now, the server<br>
> > > > might<br>
> > > > > >> need<br>
> > > > > >> > >> a unique job id, but there is no reason why it cannot use the<br>
> > > > client<br>
> > > > > >> id<br>
> > > > > >> > >> when communicating the status to a client. So that's there<br>
> > now.<br>
> > > > > >> > >> - the way the C client was working, its jobs ended up not<br>
> > going<br>
> > > > to<br>
> > > > > >> the<br>
> > > > > >> > >> workers, but the local queue. The service settings now allow<br>
> > > > > >> specifying<br>
> > > > > >> > >> the provider/jobManager/url to be used to start blocks, and<br>
> > jobs<br>
> > > > are<br>
> > > > > >> > >> routed appropriately if they do not have the batch job flag<br>
> > set.<br>
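The remote-id fix described in the list above (the service keeps its own globally unique job id, since two clients can reuse the same client-side id, but always reports status back using the client's own id) can be sketched roughly like this; the class and method names are illustrative, not the actual coaster service code:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

// Sketch of the id-mapping idea: the service assigns a unique internal id
// on submit, but notifications are resolved back to the (client, clientJobId)
// pair, so the client never has to learn the internal id. This avoids the
// race where a status notification arrives before the submit reply that
// would have told the client its remote id.
class JobIdMapper {
    private final AtomicLong nextInternal = new AtomicLong();
    // internal id -> {clientId, clientJobId}
    private final Map<Long, String[]> internalToClient = new ConcurrentHashMap<>();

    // Called on submit: remember which client/job the internal id belongs to.
    long register(String clientId, String clientJobId) {
        long id = nextInternal.incrementAndGet();
        internalToClient.put(id, new String[] { clientId, clientJobId });
        return id;
    }

    // Called on a status change: resolve the internal id back to the
    // client's own job id for the outgoing notification.
    String[] notifyTarget(long internalId) {
        return internalToClient.get(internalId);
    }
}
```

Two clients registering jobs with the same client-side id get distinct internal ids, yet each notification still carries the id that client originally chose.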
> > > > > >> > >><br>
> > > > > >> > >> I also added a shared service mode. We discussed this before.<br>
> > > > > >> Basically<br>
> > > > > >> > >> you start the coaster service with "-shared<br>
> > > > <service.properties>" and<br>
> > > > > >> > >> all the settings are read from that file. In this case, all<br>
> > > > clients<br>
> > > > > >> > >> share the same worker pool, and client settings are ignored.<br>
> > > > > >> > >><br>
> > > > > >> > >> The C client now has a multi-job testing tool which can<br>
> > submit<br>
> > > > many<br>
> > > > > >> jobs<br>
> > > > > >> > >> with the desired level of concurrency.<br>
> > > > > >> > >><br>
> > > > > >> > >> I have tested the C client with both shared and non-shared<br>
> > mode,<br>
> > > > with<br>
> > > > > >> > >> various levels of jobs being sent, with either one or two<br>
> > > > concurrent<br>
> > > > > >> > >> clients.<br>
> > > > > >> > >><br>
> > > > > >> > >> I haven't tested manual workers.<br>
> > > > > >> > >><br>
> > > > > >> > >> I've also decided that during normal operation (i.e. client<br>
> > > > connects,<br>
> > > > > >> > >> submits jobs, shuts down gracefully), there should be no<br>
> > > > exceptions<br>
> > > > > >> in<br>
> > > > > >> > >> the coaster log. I think we should stick to that principle.<br>
> > This<br>
> > > > was<br>
> > > > > >> the<br>
> > > > > >> > >> case last I tested, and we should consider any deviation from<br>
> > > > that<br>
> > > > > >> to be<br>
> > > > > >> > >> a problem. Of course, there are some things for which there<br>
> > is no<br>
> > > > > >> > >> graceful shut down, such as ctrl+C-ing a manual worker.<br>
> > > > Exceptions<br>
> > > > > >> are<br>
> > > > > >> > >> fine in that case.<br>
> > > > > >> > >><br>
> > > > > >> > >> So anyway, let's start from here.<br>
> > > > > >> > >><br>
> > > > > >> > >> Mihael<br>
> > > > > >> > >><br>
> > > > > >> > >> On Fri, 2014-09-05 at 13:09 -0500, Tim Armstrong wrote:<br>
> > > > > >> > >> > Thanks, let me know if there's anything I can help do.<br>
> > > > > >> > >> ><br>
> > > > > >> > >> > - Tim<br>
> > > > > >> > >> ><br>
> > > > > >> > >> ><br>
> > > > > >> > >> > On Fri, Sep 5, 2014 at 12:57 PM, Mihael Hategan <<br>
> > > > > >> <a href="mailto:hategan@mcs.anl.gov">hategan@mcs.anl.gov</a>><br>
> > > > > >> > >> wrote:<br>
> > > > > >> > >> ><br>
> > > > > >> > >> > > Thanks. It also seems that there is an older bug in<br>
> > there in<br>
> > > > > >> which the<br>
> > > > > >> > >> > > client connection is not properly accounted for and<br>
> > things<br>
> > > > start<br>
> > > > > >> > >> failing<br>
> > > > > >> > >> > > two minutes after the client connects (which is also<br>
> > probably<br>
> > > > > >> why you<br>
> > > > > >> > >> > > didn't see this in runs with many short client<br>
> > connections).<br>
> > > > I'm<br>
> > > > > >> not<br>
> > > > > >> > >> > > sure why the fix for that bug isn't in the trunk code.<br>
> > > > > >> > >> > ><br>
> > > > > >> > >> > > In any event, I'll set up a client submission loop and<br>
> > fix<br>
> > > > all<br>
> > > > > >> these<br>
> > > > > >> > >> > > things.<br>
> > > > > >> > >> > ><br>
> > > > > >> > >> > > Mihael<br>
> > > > > >> > >> > ><br>
> > > > > >> > >> > > On Fri, 2014-09-05 at 12:13 -0500, Tim Armstrong wrote:<br>
> > > > > >> > >> > > > Ok, here it is with the additional debug messages.<br>
> > Source<br>
> > > > code<br>
> > > > > >> > >> change is<br>
> > > > > >> > >> > > > in commit 890c41f2ba701b10264553471590096d6f94c278.<br>
> > > > > >> > >> > > ><br>
> > > > > >> > >> > > > Warning: the tarball will expand to several gigabytes<br>
> > of<br>
> > > > logs.<br>
> > > > > >> > >> > > ><br>
> > > > > >> > >> > > > I had to do multiple client runs to trigger it. It<br>
> > seems<br>
> > > > like<br>
> > > > > >> the<br>
> > > > > >> > >> > > problem<br>
> > > > > >> > >> > > > might be triggered by abnormal termination of the<br>
> > client.<br>
> > > > > >> First 18<br>
> > > > > >> > >> runs<br>
> > > > > >> > >> > > > went fine, problem only started when I ctrl-c-ed the<br>
> > > > swift/t<br>
> > > > > >> run #19<br>
> > > > > >> > >> > > before<br>
> > > > > >> > >> > > > the run #20 that exhibited delays.<br>
> > > > > >> > >> > > ><br>
> > > > > >> > >> > > ><br>
> > > > <a href="http://people.cs.uchicago.edu/~tga/files/worker-logs3.tar.gz" target="_blank">http://people.cs.uchicago.edu/~tga/files/worker-logs3.tar.gz</a><br>
> > > > > >> > >> > > ><br>
> > > > > >> > >> > > > - Tim<br>
> > > > > >> > >> > > ><br>
> > > > > >> > >> > > ><br>
> > > > > >> > >> > > > On Fri, Sep 5, 2014 at 8:55 AM, Tim Armstrong <<br>
> > > > > >> > >> <a href="mailto:tim.g.armstrong@gmail.com">tim.g.armstrong@gmail.com</a><br>
> > > > > >> > >> > > ><br>
> > > > > >> > >> > > > wrote:<br>
> > > > > >> > >> > > ><br>
> > > > > >> > >> > > > > It's here:<br>
> > > > > >> > >> > > > ><br>
> > > > > >> > >><br>
> > > > <a href="http://people.cs.uchicago.edu/~tga/files/coaster-service.out.full.gz" target="_blank">http://people.cs.uchicago.edu/~tga/files/coaster-service.out.full.gz</a><br>
> > > > > >> .<br>
> > > > > >> > >> > > > ><br>
> > > > > >> > >> > > > > I'll add some extra debug messages in the coaster C++<br>
> > > > client<br>
> > > > > >> and<br>
> > > > > >> > >> see<br>
> > > > > >> > >> > > if I<br>
> > > > > >> > >> > > > > can recreate the scenario.<br>
> > > > > >> > >> > > > ><br>
> > > > > >> > >> > > > > - Tim<br>
> > > > > >> > >> > > > ><br>
> > > > > >> > >> > > > ><br>
> > > > > >> > >> > > > > On Thu, Sep 4, 2014 at 7:27 PM, Mihael Hategan <<br>
> > > > > >> > >> <a href="mailto:hategan@mcs.anl.gov">hategan@mcs.anl.gov</a>><br>
> > > > > >> > >> > > > > wrote:<br>
> > > > > >> > >> > > > ><br>
> > > > > >> > >> > > > >> Ok, so that's legit.<br>
> > > > > >> > >> > > > >><br>
> > > > > >> > >> > > > >> It does look like shut down workers are not properly<br>
> > > > > >> accounted<br>
> > > > > >> > >> for in<br>
> > > > > >> > >> > > > >> some places (and I believe Yadu submitted a bug for<br>
> > > > this).<br>
> > > > > >> > >> However, I<br>
> > > > > >> > >> > > do<br>
> > > > > >> > >> > > > >> not see the dead time you mention in either of the<br>
> > last<br>
> > > > two<br>
> > > > > >> sets<br>
> > > > > >> > >> of<br>
> > > > > >> > >> > > > >> logs. It looks like each client instance submits a<br>
> > > > continous<br>
> > > > > >> > >> stream of<br>
> > > > > >> > >> > > > >> jobs.<br>
> > > > > >> > >> > > > >><br>
> > > > > >> > >> > > > >> So let's get back to the initial log. Can I have the<br>
> > > > full<br>
> > > > > >> > >> service log?<br>
> > > > > >> > >> > > > >> I'm trying to track what happened with the jobs<br>
> > > > submitted<br>
> > > > > >> before<br>
> > > > > >> > >> the<br>
> > > > > >> > >> > > > >> first big pause.<br>
> > > > > >> > >> > > > >><br>
> > > > > >> > >> > > > >> Also, a log message in<br>
> > CoasterClient::updateJobStatus()<br>
> > > > (or<br>
> > > > > >> > >> friends)<br>
> > > > > >> > >> > > > >> would probably help a lot here.<br>
> > > > > >> > >> > > > >><br>
> > > > > >> > >> > > > >> Mihael<br>
> > > > > >> > >> > > > >><br>
> > > > > >> > >> > > > >> On Thu, 2014-09-04 at 15:34 -0500, Tim Armstrong<br>
> > wrote:<br>
> > > > > >> > >> > > > >> > Should be here:<br>
> > > > > >> > >> > > > >> ><br>
> > > > > >> > >> > > > >> ><br>
> > > > <a href="http://people.cs.uchicago.edu/~tga/worker-logs2.tar.gz" target="_blank">http://people.cs.uchicago.edu/~tga/worker-logs2.tar.gz</a><br>
> > > > > >> > >> > > > >> ><br>
> > > > > >> > >> > > > >> ><br>
> > > > > >> > >> > > > >> ><br>
> > > > > >> > >> > > > >> ><br>
> > > > > >> > >> > > > >> > On Thu, Sep 4, 2014 at 3:03 PM, Mihael Hategan <<br>
> > > > > >> > >> <a href="mailto:hategan@mcs.anl.gov">hategan@mcs.anl.gov</a><br>
> > > > > >> > >> > > ><br>
> > > > > >> > >> > > > >> wrote:<br>
> > > > > >> > >> > > > >> ><br>
> > > > > >> > >> > > > >> > > The first worker "failing" is 0904-20022331.<br>
> > The log<br>
> > > > > >> looks<br>
> > > > > >> > >> funny<br>
> > > > > >> > >> > > at<br>
> > > > > >> > >> > > > >> the<br>
> > > > > >> > >> > > > >> > > end.<br>
> > > > > >> > >> > > > >> > ><br>
> > > > > >> > >> > > > >> > > Can you git pull and re-run? The worker is<br>
> > getting<br>
> > > > some<br>
> > > > > >> > >> command<br>
> > > > > >> > >> > > at the<br>
> > > > > >> > >> > > > >> > > end there and doing nothing about it and I<br>
> > wonder<br>
> > > > why.<br>
> > > > > >> > >> > > > >> > ><br>
> > > > > >> > >> > > > >> > > Mihael<br>
> > > > > >> > >> > > > >> > ><br>
> > > > > >> > >> > > > >> > > On Thu, 2014-09-04 at 14:35 -0500, Tim Armstrong<br>
> > > > wrote:<br>
> > > > > >> > >> > > > >> > > > Ok, now I have some worker logs:<br>
> > > > > >> > >> > > > >> > > ><br>
> > > > > >> > >> > > > >> > > ><br>
> > > > > >> > >><br>
> > <a href="http://people.cs.uchicago.edu/~tga/2014-9-4-worker-logs.tar.gz" target="_blank">http://people.cs.uchicago.edu/~tga/2014-9-4-worker-logs.tar.gz</a><br>
> > > > > >> > >> > > > >> > > ><br>
> > > > > >> > >> > > > >> > > > There's nothing obvious I see in the worker<br>
> > logs<br>
> > > > that<br>
> > > > > >> would<br>
> > > > > >> > >> > > > >> indicate why<br>
> > > > > >> > >> > > > >> > > > the connection was broken.<br>
> > > > > >> > >> > > > >> > > ><br>
> > > > > >> > >> > > > >> > > > - Tim<br>
> > > > > >> > >> > > > >> > > ><br>
> > > > > >> > >> > > > >> > > ><br>
> > > > > >> > >> > > > >> > > > On Thu, Sep 4, 2014 at 1:11 PM, Tim Armstrong<br>
> > <<br>
> > > > > >> > >> > > > >> <a href="mailto:tim.g.armstrong@gmail.com">tim.g.armstrong@gmail.com</a><br>
> > > > > >> > >> > > > >> > > ><br>
> > > > > >> > >> > > > >> > > > wrote:<br>
> > > > > >> > >> > > > >> > > ><br>
> > > > > >> > >> > > > >> > > > > This is all running locally on my laptop,<br>
> > so I<br>
> > > > > >> think we<br>
> > > > > >> > >> can<br>
> > > > > >> > >> > > rule<br>
> > > > > >> > >> > > > >> out<br>
> > > > > >> > >> > > > >> > > 1).<br>
> > > > > >> > >> > > > >> > > > ><br>
> > > > > >> > >> > > > >> > > > > It also seems like it's a state the coaster<br>
> > > > service<br>
> > > > > >> gets<br>
> > > > > >> > >> into<br>
> > > > > >> > >> > > > >> after a<br>
> > > > > >> > >> > > > >> > > few<br>
> > > > > >> > >> > > > >> > > > > client sessions: generally the first<br>
> > coaster run<br>
> > > > > >> works<br>
> > > > > >> > >> fine,<br>
> > > > > >> > >> > > then<br>
> > > > > >> > >> > > > >> > > after a<br>
> > > > > >> > >> > > > >> > > > > few runs the problem occurs more frequently.<br>
> > > > > >> > >> > > > >> > > > ><br>
> > > > > >> > >> > > > >> > > > > I'm going to try and get worker logs, in the<br>
> > > > > >> meantime<br>
> > > > > >> > >> i've got<br>
> > > > > >> > >> > > > >> some<br>
> > > > > >> > >> > > > >> > > > > jstacks (attached).<br>
> > > > > >> > >> > > > >> > > > ><br>
> > > > > >> > >> > > > >> > > > > Matching service logs (largish) are here if<br>
> > > > needed:<br>
> > > > > >> > >> > > > >> > > > ><br>
> > > > <a href="http://people.cs.uchicago.edu/~tga/service.out.gz" target="_blank">http://people.cs.uchicago.edu/~tga/service.out.gz</a><br>
> > > > > >> > >> > > > >> > > > ><br>
> > > > > >> > >> > > > >> > > > ><br>
> > > > > >> > >> > > > >> > > > > On Wed, Sep 3, 2014 at 10:35 PM, Mihael<br>
> > Hategan<br>
> > > > <<br>
> > > > > >> > >> > > > >> <a href="mailto:hategan@mcs.anl.gov">hategan@mcs.anl.gov</a>><br>
> > > > > >> > >> > > > >> > > > > wrote:<br>
> > > > > >> > >> > > > >> > > > ><br>
> > > > > >> > >> > > > >> > > > >> Ah, makes sense.<br>
> > > > > >> > >> > > > >> > > > >><br>
> > > > > >> > >> > > > >> > > > >> 2 minutes is the channel timeout. Each live<br>
> > > > > >> connection<br>
> > > > > >> > >> is<br>
> > > > > >> > >> > > > >> guaranteed<br>
> > > > > >> > >> > > > >> > > to<br>
> > > > > >> > >> > > > >> > > > >> have some communication for any 2 minute<br>
> > time<br>
> > > > > >> window,<br>
> > > > > >> > >> > > partially<br>
> > > > > >> > >> > > > >> due to<br>
> > > > > >> > >> > > > >> > > > >> periodic heartbeats (sent every 1 minute).<br>
> > If<br>
> > > > no<br>
> > > > > >> > >> packets flow<br>
> > > > > >> > >> > > > >> for the<br>
> > > > > >> > >> > > > >> > > > >> duration of 2 minutes, the connection is<br>
> > > > assumed<br>
> > > > > >> broken<br>
> > > > > >> > >> and<br>
> > > > > >> > >> > > all<br>
> > > > > >> > >> > > > >> jobs<br>
> > > > > >> > >> > > > >> > > > >> that were submitted to the respective<br>
> > workers<br>
> > > > are<br>
> > > > > >> > >> considered<br>
> > > > > >> > >> > > > >> failed.<br>
> > > > > >> > >> > > > >> > > So<br>
> > > > > >> > >> > > > >> > > > >> there seems to be an issue with the<br>
> > > > connections to<br>
> > > > > >> some<br>
> > > > > >> > >> of<br>
> > > > > >> > >> > > the<br>
> > > > > >> > >> > > > >> > > workers,<br>
> > > > > >> > >> > > > >> > > > >> and it takes 2 minutes to detect them.<br>
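The timeout rule described above (heartbeats every minute, so any live connection sees traffic within any 2-minute window; otherwise it is presumed broken and its jobs fail) reduces to a small watchdog sketch. This is illustrative only; the real channel code also has to fail the affected jobs and tear the channel down:

```java
// Illustrative watchdog for the 2-minute rule: every inbound packet
// (including heartbeats, sent roughly every 60s) refreshes lastReceivedMs.
// If no packet arrives for more than TIMEOUT_MS, the connection is
// considered broken.
class ChannelWatchdog {
    static final long TIMEOUT_MS = 2 * 60 * 1000;  // 2-minute channel timeout

    private volatile long lastReceivedMs;

    ChannelWatchdog(long nowMs) {
        this.lastReceivedMs = nowMs;
    }

    // Called for every inbound packet, heartbeats included.
    void packetReceived(long nowMs) {
        lastReceivedMs = nowMs;
    }

    // Polled periodically; true means the channel should be declared
    // broken and jobs on the corresponding workers marked failed.
    boolean isBroken(long nowMs) {
        return nowMs - lastReceivedMs > TIMEOUT_MS;
    }
}
```

Note how this matches the symptom in the thread: a worker that dies without closing its TCP connection produces no packets, so nothing happens until the full 2-minute window elapses.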
> > > > > >> > >> > > > >> > > > >><br>
> > > > > >> > >> > > > >> > > > >> Since the service seems to be alive<br>
> > (although a<br>
> > > > > >> jstack<br>
> > > > > >> > >> on the<br>
> > > > > >> > >> > > > >> service<br>
> > > > > >> > >> > > > >> > > > >> when thing seem to hang might help), this<br>
> > > > leaves<br>
> > > > > >> two<br>
> > > > > >> > >> > > > >> possibilities:<br>
> > > > > >> > >> > > > >> > > > >> 1 - some genuine network problem<br>
> > > > > >> > >> > > > >> > > > >> 2 - the worker died without properly<br>
> > closing<br>
> > > > TCP<br>
> > > > > >> > >> connections<br>
> > > > > >> > >> > > > >> > > > >><br>
> > > > > >> > >> > > > >> > > > >> If (2), you could enable worker logging<br>
> > > > > >> > >> > > > >> > > > >> (Settings::Key::WORKER_LOGGING_LEVEL =<br>
> > > > "DEBUG") to<br>
> > > > > >> see<br>
> > > > > >> > >> if<br>
> > > > > >> > >> > > > >> anything<br>
> > > > > >> > >> > > > >> > > shows<br>
> > > > > >> > >> > > > >> > > > >> up.<br>
> > > > > >> > >> > > > >> > > > >><br>
> > > > > >> > >> > > > >> > > > >> Mihael<br>
> > > > > >> > >> > > > >> > > > >><br>
> > > > > >> > >> > > > >> > > > >> On Wed, 2014-09-03 at 20:26 -0500, Tim Armstrong wrote:<br>
> > > > > >> > >> > > > >> > > > >> > Here are client and service logs, with part of the service log edited<br>
> > > > > >> > >> > > > >> > > > >> > down to a reasonable size (I have the full thing if needed, but it was<br>
> > > > > >> > >> > > > >> > > > >> > over a gigabyte).<br>
> > > > > >> > >> > > > >> > > > >> ><br>
> > > > > >> > >> > > > >> > > > >> > One relevant section is from 19:49:35 onwards. The client submits 4<br>
> > > > > >> > >> > > > >> > > > >> > jobs (its limit), but they don't complete until 19:51:32 or so (I can<br>
> > > > > >> > >> > > > >> > > > >> > see that one task completed based on ncompleted=1 in the check_tasks<br>
> > > > > >> > >> > > > >> > > > >> > log message). It looks like something has happened with broken pipes<br>
> > > > > >> > >> > > > >> > > > >> > and workers being lost, but I'm not sure what the ultimate cause of<br>
> > > > > >> > >> > > > >> > > > >> > that is likely to be.<br>
> > > > > >> > >> > > > >> > > > >> ><br>
> > > > > >> > >> > > > >> > > > >> > - Tim<br>
> > > > > >> > >> > > > >> > > > >> ><br>
> > > > > >> > >> > > > >> > > > >> > On Wed, Sep 3, 2014 at 6:20 PM, Mihael Hategan <<a href="mailto:hategan@mcs.anl.gov">hategan@mcs.anl.gov</a>> wrote:<br>
> > > > > >> > >> > > > >> > > > >> ><br>
> > > > > >> > >> > > > >> > > > >> > > Hi Tim,<br>
> > > > > >> > >> > > > >> > > > >> > ><br>
> > > > > >> > >> > > > >> > > > >> > > I've never seen this before with pure Java.<br>
> > > > > >> > >> > > > >> > > > >> > ><br>
> > > > > >> > >> > > > >> > > > >> > > Do you have logs from these runs?<br>
> > > > > >> > >> > > > >> > > > >> > ><br>
> > > > > >> > >> > > > >> > > > >> > > Mihael<br>
> > > > > >> > >> > > > >> > > > >> > ><br>
> > > > > >> > >> > > > >> > > > >> > > On Wed, 2014-09-03 at 16:49 -0500, Tim Armstrong wrote:<br>
> > > > > >> > >> > > > >> > > > >> > > > I'm running a test Swift/T script that submits tasks to Coasters<br>
> > > > > >> > >> > > > >> > > > >> > > > through the C++ client, and I'm seeing some odd behaviour where<br>
> > > > > >> > >> > > > >> > > > >> > > > task submission/execution is stalling for ~2 minute periods. For<br>
> > > > > >> > >> > > > >> > > > >> > > > example, I'm seeing submit log messages like "submitting<br>
> > > > > >> > >> > > > >> > > > >> > > > urn:133-1409778135377-1409778135378: /bin/hostname" in bursts of<br>
> > > > > >> > >> > > > >> > > > >> > > > several seconds with a gap of roughly 2 minutes in between, e.g.<br>
> > > > > >> > >> > > > >> > > > >> > > > I'm seeing bursts with the following intervals in my logs:<br>
> > > > > >> > >> > > > >> > > > >> > > ><br>
> > > > > >> > >> > > > >> > > > >> > > > 16:07:04,603 to 16:07:10,391<br>
> > > > > >> > >> > > > >> > > > >> > > > 16:09:07,377 to 16:09:13,076<br>
> > > > > >> > >> > > > >> > > > >> > > > 16:11:10,005 to 16:11:16,770<br>
> > > > > >> > >> > > > >> > > > >> > > > 16:13:13,291 to 16:13:19,296<br>
> > > > > >> > >> > > > >> > > > >> > > > 16:15:16,000 to 16:15:21,602<br>
> > > > > >> > >> > > > >> > > > >> > > ><br>
> > > > > >> > >> > > > >> > > > >> > > > From what I can tell, the delay is on the coaster service side:<br>
> > > > > >> > >> > > > >> > > > >> > > > the C client is just waiting for a response.<br>
> > > > > >> > >> > > > >> > > > >> > > ><br>
> > > > > >> > >> > > > >> > > > >> > > > The jobs are just being submitted through the local job manager,<br>
> > > > > >> > >> > > > >> > > > >> > > > so I wouldn't expect any delays there. The tasks are also just<br>
> > > > > >> > >> > > > >> > > > >> > > > "/bin/hostname", so they should return immediately.<br>
> > > > > >> > >> > > > >> > > > >> > > ><br>
> > > > > >> > >> > > > >> > > > >> > > > I'm going to continue digging into this on my own, but the 2<br>
> > > > > >> > >> > > > >> > > > >> > > > minute delay seems like a big clue: does anyone have an idea what<br>
> > > > > >> > >> > > > >> > > > >> > > > could cause stalls in task submission of 2 minute duration?<br>
> > > > > >> > >> > > > >> > > > >> > > ><br>
> > > > > >> > >> > > > >> > > > >> > > > Cheers,<br>
> > > > > >> > >> > > > >> > > > >> > > > Tim<br>
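Putting Mihael's worker-logging suggestion into practice with the start-coaster-service conf file shown earlier in this thread might look like the sketch below. Note that the <code>WORKER_LOGGING_LEVEL</code> conf variable is an assumption made by analogy with the C++ client's <code>Settings::Key::WORKER_LOGGING_LEVEL</code>; the exact variable name should be verified against the Swift/coasters release in use.<br>

```shell
# Sketch of a start-coaster-service conf file with worker debug logging.
# NOTE: WORKER_LOGGING_LEVEL is an assumed variable name, mirroring the
# C++ client setting Settings::Key::WORKER_LOGGING_LEVEL -- verify it
# against your Swift/coasters version before relying on it.
export WORKER_MODE=local
export IPADDR=127.0.0.1
export SERVICE_PORT=53363
export JOBSPERNODE=4
export LOGDIR=$(pwd)
export WORKER_LOGGING_LEVEL=DEBUG
```

With worker logging at DEBUG, the worker logs in LOGDIR should show whether the workers are dying or merely losing their connections.<br>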
<br>
<br>
</div></div></blockquote></div><br></div>