You may want to consider disabling your hypervisor's time syncing and relying on ntpd instead. It won't jump the clock forwards/backwards in large increments, but rather makes micro adjustments (iirc the default threshold is 128ms) that slow down or speed up the clock without affecting time-sensitive apps such as LFS.
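To illustrate why a gradual slew is harmless while a step is not, here's a minimal Rust sketch (nothing to do with LFS's actual code, purely an illustration): the wall clock can be stepped under an app's feet, while the monotonic clock only ever has its rate nudged.

```rust
use std::thread::sleep;
use std::time::{Duration, Instant, SystemTime};

fn main() {
    // Wall clock: can be stepped forwards/backwards by the hypervisor or a
    // hard time correction, so a naive "end - start" can come out wrong.
    let wall_start = SystemTime::now();

    // Monotonic clock: never jumps; NTP slewing only nudges its rate slightly,
    // which is why gradual adjustments don't upset time-sensitive apps.
    let mono_start = Instant::now();

    sleep(Duration::from_millis(500));

    match SystemTime::now().duration_since(wall_start) {
        Ok(d) => println!("wall clock elapsed: {:?}", d),
        // Err means the wall clock went backwards in the meantime (a step).
        Err(e) => println!("wall clock went backwards by {:?}", e.duration()),
    }
    println!("monotonic elapsed: {:?}", mono_start.elapsed());
}
```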
I'll be honest, I'd be interested to know which hypervisor you're using though...
Normally this happens once every week or two, tops.
Now it's much more frequent. Maybe the additional checks are generating more TCP traffic than the buffers can handle? Massive connection loss also occurs when I try to send too many buttons at once. I tried to improve my TCP buffer handling a couple of months back, but I can't get it any better than the current situation. Normally it's stable anyway (despite some WouldBlock errors in the log).
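For reference, this is roughly what I mean by buffer handling: a WouldBlock on a non-blocking socket just means the OS send buffer is full at that instant, so unsent packets have to stay queued and be retried rather than treated as lost. A minimal Rust sketch of that idea (the function name and queue are made up for illustration, not my actual code):

```rust
use std::collections::VecDeque;
use std::io::{self, Write};
use std::net::TcpStream;

/// Tries to flush queued outgoing packets on a socket that has had
/// set_nonblocking(true) called on it. Anything the OS send buffer
/// can't take yet goes back on the queue instead of being dropped.
fn flush_outgoing(stream: &mut TcpStream, queue: &mut VecDeque<Vec<u8>>) -> io::Result<()> {
    while let Some(mut packet) = queue.pop_front() {
        match stream.write(&packet) {
            Ok(0) => return Err(io::Error::new(io::ErrorKind::WriteZero, "connection closed")),
            Ok(n) if n < packet.len() => {
                // Partial write: keep the unsent tail at the front of the queue
                // and stop until the socket is writable again.
                packet.drain(..n);
                queue.push_front(packet);
                return Ok(());
            }
            Ok(_) => {
                // Whole packet sent; carry on with the next one.
            }
            Err(e) if e.kind() == io::ErrorKind::WouldBlock => {
                // Send buffer is full right now; nothing was sent, so requeue
                // the packet and try again on the next tick.
                queue.push_front(packet);
                return Ok(());
            }
            Err(e) => return Err(e),
        }
    }
    Ok(())
}
```

The key point is only that bytes the socket won't take yet stay queued until it becomes writable again, so a burst of buttons degrades into delayed delivery rather than a lost connection.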
That's mysterious. Only a few more bytes are sent when someone joins, so it can't be an overload. Let's see what happens with that. I'd be surprised if LFS changes were the cause; maybe something funny with your connection or ours.
But what does concern me quite a lot is how your S1 and S2 hosts got into that state where anyone connecting would get a JOOS, so the hosts remained empty. It seems that happened after an OVERFLOW around the time when there were some TCP issues.
Have you (or anyone else) had that problem before? I find it hard to see how the test patch changes could have introduced that as a new bug. And where I look for the answer depends a bit on whether it's a new or old bug.
Yeah, I couldn't connect to the LFSCART server either at one point (server not found on the master server). That said, we had massive timeouts during the race; not sure if 500servers or the dedi was the cause. I'm sure someone will bring further info on this if it turns out to be a dedi issue.
Not yet, but at least part of the OVERFLOW problem is many years old; mass disconnects do take place and that is certainly nothing new. So I'm going to try to figure out exactly what can cause that, and under what circumstances the host cannot recover from it.
At this point I think it's the overflow that put the cargame hosts into the bad state. I'm guessing it has nothing to do with B3, that it's just a coincidence this unusual bug appeared on the first day of trying a new version, and that rare internet problems near our servers caused some master server and InSim relay disconnections.