People talked about it on Discord a while back, so here's a summary to the best of my recollection: Lazy works on U6 but not afterwards. The big differences for people are the recent FFB improvements and a bug that meant that serving a stop-go penalty in a custom pit stall would leave you stuck in it. If you can avoid getting a stop-go on a custom layout and you don't need the FFB tweaks, you can use U6 to get Lazy.
Scawen is looking to put the recent skin fix into an official version, so hopefully they can be back in sync soon.
A small update 'U13' for the dedicated host (DCon) is attached.
It allows /pps 12 in internet multiplayer mode.
I don't think it makes much difference, as it is only an upper limit on "position packets per second", but in some situations it could help. It will *not* cure the problems of latency and can make some problems resulting from latency even worse (namely CPU usage on your local computer during the 'catch up' from when the packet was sent on the remote computer to the current time).
The previous maximum was 6.
The command /pps X was found not to be working to change the pps while already running. It now works, and a message is shown on guest computers when the resulting packet is received.
This is a compatible update.
EDIT: removed attachment, it's now the official test patch.
There is more to the calculation of the actual resulting packets per second. The pps setting is only an upper limit. LFS already tries to estimate how soon a new packet is needed, based on how much the controls have moved. For minor steering / throttle / brake adjustments it will not go up to the rate set by pps. I think some of the worst inaccuracy problems come up when inputs do not change much but the car is in some state where slight inaccuracies at the start of the prediction give a quite different outcome at the end of the prediction. In this case the remote instances of a car may end up in quite a different place from the local instance but LFS doesn't send a packet quickly because the controller inputs haven't changed much.
I'm not sure if that's explained very well. There is also a maximum time between position packets of 1.28 seconds. This time is used as a starting value after each position packet is sent, then any changes in the analogue inputs over the coming frames reduce that time to a smaller value: the more that has changed, the sooner the next packet is sent. Of course any 'digital inputs' (like a gear change) cause the packet to be sent immediately.
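Roughly, as a sketch of that scheme (the countdown idea from the description above; the reduction rule, the GAIN constant, and all names here are made up for illustration, not the actual LFS code):

```python
MAX_GAP = 1.28      # max seconds between position packets (from the post)
GAIN = 50.0         # invented: how strongly input movement shortens the wait

class PacketTimer:
    def __init__(self, pps_limit=12):
        self.min_gap = 1.0 / pps_limit   # /pps is only an upper limit on rate
        self.time_left = MAX_GAP         # countdown, reset after each send
        self.since_last = 0.0

    def on_frame(self, dt, analogue_delta, digital_changed=False):
        """Return True if a position packet should go out this frame.

        analogue_delta: how much steering/throttle/brake moved this frame.
        digital_changed: e.g. a gear change -> send immediately.
        """
        self.since_last += dt
        if digital_changed:
            self.time_left = 0.0
        else:
            # the more the analogue inputs have moved, the sooner the send
            self.time_left -= dt * (1.0 + GAIN * abs(analogue_delta))
        if self.time_left <= 0.0 and (digital_changed
                                      or self.since_last >= self.min_gap):
            self.time_left = MAX_GAP
            self.since_last = 0.0
            return True
        return False
```

With no input changes this sends once every 1.28 seconds; with constant large input movement it settles at the pps cap.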
That explains what I was seeing on the straights on the test server, the same update frequency I saw before on 4/6pps servers. As the cars turned for a corner, it was noticeably more frequent. I'm sure others will have some feedback and observations, and we plan to test this en masse tomorrow after GT2C on the death rally server, which should give us a good test case for close racing, collisions etc.
That may also explain part of why fast cars (the BF1 in particular) can look like they're jumping all over the place in fast corners - due to the agility of the car and massive speed, a very small change in input can result in a huge change in position a short time later.
From our testing earlier, the general feeling was that it was better overall. Especially glancing car to car contact seemed less violent.
We didn't have many people in the server (though we did have quite a large range of latencies), so any CPU limitations wouldn't have shown up.
Will have to see if we can wrangle enough people for testing tomorrow.
I'm not sure what latency threshold it's thought would cause a problem, but for reference, the packet interval at 12 PPS (~83 ms) is still at least 2 round trip times for basically anywhere within Europe. It's nearly 5x the RTT from me to my server in Northern Europe.
Presumably, a higher PPS should also improve the perceived latency for many people? At 6pps, a large input made just after a packet was sent would wait for about 150ms before the next packet can be sent. 12pps would reduce that down to about 70ms. Those times are before taking into account Internet latency, which would increase the delay.
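As a back-of-envelope: the worst case is an input change landing just after a packet went out, so it waits roughly one full packet interval (1000/pps ms) before the next send, on top of network latency.

```python
def worst_case_wait_ms(pps):
    # input change lands just after a packet was sent:
    # it waits one full packet interval for the next one
    return 1000.0 / pps

for pps in (4, 6, 12):
    print(f"{pps} pps -> up to ~{worst_case_wait_ms(pps):.0f} ms before the next packet")
```

So doubling the rate halves that worst-case wait, which lines up roughly with the figures above.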
For reference, one-way Internet transmission time for a UDP position packet should typically be less (often significantly less) than 180ms between Australia and Europe.
A server with players up to 2/3 around the world away would typically see position transmission times of <100ms. That's a server based in Europe covering players between the whole of North America (and a lot of South America) to Singapore.
If it turns out that high latency is a significant cause of problems for high packet rates, perhaps it would be possible for LFS (or InSim, if given latency data) to dynamically adjust the pps to take into account the latency of players currently connected to the server?
That worst case situation of fast cars jumping about (e.g. BF1 as you mentioned) seems to me the most important thing to focus on at first. If I could do a better estimate of when it's important to send the next packet, that would help a lot. We can't just send full packet rate per second for all cars all the time as it would be a massive increase in bandwidth and CPU usage, so the solutions are really about the estimate and also making sure the remote car is initialised the best way it can be at the start of the prediction.
Do you know any simple and reproducible ways to create the worst looking effects, in these situations where high latency is not the cause of the problem? You mentioned the BF1 at high speed in corners, which I remember seeing.
Of course I also remember seeing cars all over the place due to users with a poor or distant internet connection but we can't really solve that. But if you can think of any simple ways to cause large displacement issues on good connections, that could help me as examples I can consider.
I can start DCon on a remote Linux computer using Wine and it is easy, at least on that server. I realise we are relying on external software so that's not ideal but it seems to work well. What are the problems with that?
BF1 at high speed corners seems to be the most obvious manifestation of it. From memory, it does seem to still happen on good (<40ms reported by LFS) connections.
We've used DCon on wine for years. The only real problem we have is that Linux doesn't close the listen port fully for 60s, which prevents us from quickly restarting the server.
There may be some workarounds for that, but it seems the issue is quite complex involving various sides not fully conforming to the TCP spec or something. I think it rears its head because of conflicting differences between wine/windows and the Linux kernel.
From experiments today I've found a few things that should be adjusted to send more position packets when needed. I think it should send packets more quickly for a small change in steering and this should probably be speed related, because a small steering change has more effect on your car's position at higher speeds. Maybe it should look at the resulting angle of the front wheels rather than the user input, to make it more consistent between cars. I think it may also be possible to reduce a slight jiggle on the steering each time a position packet is received which probably reduces the prediction accuracy.
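That kind of estimate might look something like this (all function names and constants here are invented for illustration): weight the steering change by speed, and compare front-wheel angles rather than raw inputs so cars with different steering locks behave consistently.

```python
import math

def wheel_angle_rad(steer_input, max_lock_deg):
    # steer_input in [-1, 1]; the same input gives different wheel angles
    # on cars with different locks, so compare angles, not inputs
    return steer_input * math.radians(max_lock_deg)

def send_urgency(prev_input, new_input, max_lock_deg, speed_ms):
    # a small steering change moves the car further at higher speed,
    # so scale the change by speed; higher urgency -> send sooner
    delta = abs(wheel_angle_rad(new_input, max_lock_deg)
                - wheel_angle_rad(prev_input, max_lock_deg))
    return delta * speed_ms
```

The same 1% steering change at BF1 straight-line speeds then counts for far more than it does at hairpin speeds.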
I have very limited experience with Wine, but if I understand correctly it runs entirely in user space. As such, any sockets that are opened by a Windows binary and not closed properly by it (or Wine) will presumably be freed up instantly if you simply kill Wine, no? (For sure there may be an even simpler answer of course.)
Apparently, running "wineserver -k" (which kills all wine processes) doesn't solve the issue.
From what I can tell from various bug reports, it's a difference in the socket implementation between Windows and Linux. After more digging, it seems to be possible to reproduce from a native Linux application, so isn't strictly speaking a wine issue, however it shows up a lot in wine because of the difference in behaviour.
It seems to be related to the socket getting stuck in TIME_WAIT. When an application that has an open TCP socket is killed, Windows will also clean up the socket, which immediately allows you to re-bind to it. In Linux, the kernel doesn't clean up the socket until the TIME_WAIT timeout (which is 60s) has elapsed.
The behaviour on Linux (sort of) makes sense when the program is still running, however once the program terminates it would make sense to also kill the socket. Linux seems to be unique in that it doesn't. The kernel devs have supposedly marked a bug report as WONTFIX, but I can't find the actual report to see what was discussed (if anything).
The problem might go away if all connections to the socket are gracefully closed before the program is terminated - I'm not sure if that's possible to do with DCon?
So I've read, and I didn't suggest changing the TIME_WAIT timeout.
However, that's not how Windows behaves and it's possible (and common) to close a socket and start a new session without this issue on Linux. You can restart a web server without having to wait 60 seconds for the kernel to clean it up.
It's possible that Windows does something that Linux does not: forcibly ending the session during winsock cleanup ("connection reset by peer" etc). Wine presumably either doesn't or can't replicate this behaviour.
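One mechanism that produces exactly that "connection reset by peer" is an abortive close: SO_LINGER with a zero timeout makes close() send an RST instead of the normal FIN handshake, and leaves no TIME_WAIT behind. A loopback sketch of the effect (Python; whether winsock cleanup actually works this way is speculation on my part):

```python
import socket, struct, time

srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
cli = socket.socket()
cli.connect(srv.getsockname())
conn, _ = srv.accept()

# l_onoff=1, l_linger=0: close() aborts the connection with an RST
# instead of the normal FIN handshake, and skips TIME_WAIT entirely
conn.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER, struct.pack("ii", 1, 0))
conn.close()
time.sleep(0.1)            # let the RST arrive over loopback

got_reset = False
try:
    cli.recv(1024)         # the peer sees "connection reset by peer"
except ConnectionResetError:
    got_reset = True
cli.close()
srv.close()
print("peer saw reset:", got_reset)
```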
Anywho, this is going off-topic somewhat. The only thing that matters here is whether gracefully closing the socket might solve the issue or not (though it's not that big of a deal anyway).
Is it confirmed (or are you able to confirm) that LFS doesn't close a socket gracefully when exiting? I would have thought that it closed all sockets when shutting down multiplayer, and shut down multiplayer before exiting. If not, can you tell me which socket is left open and which method of exiting LFS (or DCon) makes this happen?
I assume this is mostly the listen socket as LFS gives "TCP Socket : bind failed" when starting up again.
Not sure whether LFS tries to gracefully close the sockets in all cases, but a couple of things from memory:
The other day I closed LFS (a full game client in Windows that was running a server itself) using the Windows close (X) button, and the master server thought I was still online afterwards, until that timed out. I didn't test whether the listen socket was still active, but as I said it appears that Windows will clean up listen sockets regardless.
For the DCon in Linux, we use "kill -TERM", which is supposed to be a request to cleanly close. I'm not sure what the behaviour is via wine though.
I've just tested it manually on my local Linux server and it appears that the kernel releases the socket fine if there are no connected players, but does not if there are. Doing CTRL+C in the terminal has the same behaviour. That sounds to me like LFS isn't closing the client connections when asked to terminate (or at least isn't waiting for the connection close to complete). TERM is supposed to be "almost the same" as INT (i.e. CTRL+C).
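If so, a TERM handler that closes the client connections before exiting would be the application-side fix. A minimal sketch of the idea (Python; not how DCon actually handles signals, and the flag instead of an exit is just to keep the demo self-contained):

```python
import os, signal, socket

listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
clients = []               # accepted connection sockets
shutting_down = False

def handle_term(signum, frame):
    # Close each client connection cleanly, then the listener,
    # before the process exits
    global shutting_down
    for c in clients:
        try:
            c.shutdown(socket.SHUT_RDWR)   # push a FIN out to the peer
        except OSError:
            pass
        c.close()
    listener.close()
    shutting_down = True   # a real server would exit its main loop here

signal.signal(signal.SIGTERM, handle_term)

# Simulate one connected client, then the TERM that "kill -TERM" delivers
cli = socket.socket()
cli.connect(listener.getsockname())
conn, _ = listener.accept()
clients.append(conn)
os.kill(os.getpid(), signal.SIGTERM)

peer_got_fin = cli.recv(64) == b""   # empty read: the FIN arrived
cli.close()
print("clean shutdown:", shutting_down, "| peer got FIN:", peer_got_fin)
```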
netstat shows connections to both s11.lfs.net and the listen socket in TIME_WAIT when this happens.
s11.lfs.net probably won't affect restarting DCon, as outgoing connections will use a new local port.
If no clients were connected, only s11.lfs.net is showing in TIME_WAIT and not the listen socket.
My client can immediately join another server after killing DCon, so I guess there must have been some communication to the master server.
If the DCon crashes (or the server drops offline) clients usually can't connect to another server until the connections time out.
Hmm interesting. Mine was definitely the main listen socket (I didn't connect an InSim client).
Presumably that means that either of the listen sockets will get stuck in TIME_WAIT if something was connected.
I haven't had a chance to test yet, but it's possible that Windows will allow a process to take over a TIME_WAIT socket once it's no longer associated with a process, whereas Linux won't. (Assuming you're using Windows to test.)