I have very limited experience with Wine, but if I understand correctly it runs entirely in user space. As such, any sockets that are opened by a Windows binary and not closed properly by it (or Wine) will presumably be freed up instantly if you simply kill Wine, no? (Of course, there may be an even simpler answer.)
Apparently, running "wineserver -k" (which kills all wine processes) doesn't solve the issue.
From what I can tell from various bug reports, it's a difference in the socket implementation between Windows and Linux. After more digging, it seems to be possible to reproduce from a native Linux application, so it isn't strictly speaking a Wine issue; however, it shows up a lot in Wine because of the difference in behaviour.
It seems to be related to the socket getting stuck in TIME_WAIT. When an application that has an open TCP socket is killed, Windows will also clean up the socket, which immediately allows you to rebind to it. On Linux, the kernel doesn't release the socket until the TIME_WAIT timeout (60 seconds) has elapsed.
The behaviour on Linux (sort of) makes sense when the program is still running, however once the program terminates it would make sense to also kill the socket. Linux seems to be unique in that it doesn't. The kernel devs have supposedly marked a bug report as WONTFIX, but I can't find the actual report to see what was discussed (if anything).
The problem might go away if all connections to the socket are gracefully closed before the program is terminated - I'm not sure if that's possible to do with DCon?
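As an aside, the usual way a native Linux server sidesteps this restart problem is to set SO_REUSEADDR on the listen socket before bind(), which lets bind() succeed even while the previous instance's socket is still in TIME_WAIT. A minimal Python sketch of the option (illustrative only, not LFS code):

```python
import socket

def make_listen_socket(port: int) -> socket.socket:
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # Allow bind() to succeed even if a previous instance's socket
    # for the same port is still sitting in TIME_WAIT.
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind(("0.0.0.0", port))
    s.listen(5)
    return s
```

This is also why most web servers can be restarted immediately: they set the option themselves. Whether Wine could safely set it on a Windows application's behalf is another question.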
So I've read, and I didn't suggest changing the TIME_WAIT timeout.
However, that's not how Windows behaves and it's possible (and common) to close a socket and start a new session without this issue on Linux. You can restart a web server without having to wait 60 seconds for the kernel to clean it up.
It's possible that Windows does something that Linux does not to forcibly end the session during winsock cleanup ("connection reset by peer" etc). Wine presumably either doesn't or can't replicate this behaviour.
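For what it's worth, one way a TCP stack can produce that "connection reset by peer" on shutdown is an abortive close: SO_LINGER with a zero timeout makes close() send an RST instead of a FIN, and the local port then skips TIME_WAIT entirely. A hedged sketch of the mechanism (I don't know that this is what winsock cleanup actually does):

```python
import socket
import struct

def abortive_close(sock: socket.socket) -> None:
    # l_onoff=1, l_linger=0: close() sends RST instead of FIN, the
    # peer sees "connection reset by peer", and the local port does
    # not enter TIME_WAIT.
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER,
                    struct.pack("ii", 1, 0))
    sock.close()
```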
Anywho, this is going off-topic somewhat. The only thing that matters here is whether gracefully closing the socket might solve the issue or not (though it's not that big of a deal anyway).
Is it confirmed (or are you able to confirm) that LFS doesn't close a socket gracefully when exiting? I would have thought that it closed all sockets when shutting down multiplayer, and shut down multiplayer before exiting. If not, can you tell me which socket is left open and which method of exiting LFS (or DCon) makes this happen?
I assume this is mostly the listen socket as LFS gives "TCP Socket : bind failed" when starting up again.
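For reference, that "bind failed" corresponds to bind() returning EADDRINUSE, which the kernel reports both while another socket holds the port and while the old one sits in TIME_WAIT. A small demonstration of the error (illustrative only, not LFS's code):

```python
import errno
import socket

def try_rebind() -> int:
    """Bind a port twice; return the errno from the second attempt."""
    first = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    first.bind(("127.0.0.1", 0))      # let the kernel pick a free port
    first.listen(1)
    port = first.getsockname()[1]

    second = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        second.bind(("127.0.0.1", port))  # same port -> "bind failed"
        return 0
    except OSError as e:
        return e.errno
    finally:
        second.close()
        first.close()
```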
Not sure whether LFS tries to gracefully close the sockets in all cases, but a couple of things from memory:
The other day I closed LFS (full game client in Windows that was running a server itself) using the window's X button, and the master server thought I was still online afterwards, until that timed out. I didn't test whether the listen socket was still active, but as I said, it appears that Windows will clean up listen sockets regardless.
For the DCon in Linux, we use "kill -TERM", which is supposed to be a request to cleanly close. I'm not sure what the behaviour is via wine though.
I've just tested it manually on my local Linux server and it appears that the kernel releases the socket fine if there are no connected players, but does not if there are. Doing CTRL+C in the terminal has the same behaviour. That sounds to me like LFS isn't closing the client connections when asked to terminate (or at least isn't waiting for the connection close to complete). TERM is supposed to be "almost the same" as INT (ie CTRL+C).
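To illustrate what a clean TERM/INT path looks like, a handler would close each client connection (and the listen socket) before exiting, so the FIN exchange happens while the process is still alive. A sketch of the pattern (the `clients` list and handler names are hypothetical, not LFS internals):

```python
import signal
import socket
import sys

clients: list = []   # hypothetical: sockets of connected players
listener = None      # hypothetical: the TCP listen socket

def clean_shutdown(signum, frame):
    # Close each client connection first so the peers get a FIN,
    # then the listen socket, then exit.
    for c in clients:
        try:
            c.shutdown(socket.SHUT_RDWR)
        except OSError:
            pass
        c.close()
    if listener is not None:
        listener.close()
    sys.exit(0)

# kill -TERM and Ctrl+C take the same clean-up path, matching the
# observation that TERM is "almost the same" as INT.
signal.signal(signal.SIGTERM, clean_shutdown)
signal.signal(signal.SIGINT, clean_shutdown)
```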
netstat shows connections to both s11.lfs.net and the listen socket in TIME_WAIT when this happens.
s11.lfs.net probably won't affect restarting DCon, as outgoing connections will use a new local port.
If no clients were connected, only s11.lfs.net is showing in TIME_WAIT and not the listen socket.
My client can immediately join another server after killing DCon, so I guess there must have been some communication to the master server.
If the DCon crashes (or the server drops offline) clients usually can't connect to another server until the connections time out.
Hmm interesting. Mine was definitely the main listen socket (I didn't connect an InSim client).
Presumably that means that either of the listen sockets will get stuck in TIME_WAIT if something was connected.
I haven't had a chance to test yet, but it's possible that Windows will allow a process to take over a TIME_WAIT socket if it's no longer associated with a process, but Linux won't. (Assuming you're using Windows to test.)
It's intended as a moderate improvement. I have tried to increase packet frequency carefully where it is needed, while making sure it is never decreased. The idea is to reduce glitching but without causing the negative effects that could come from packet overload.
I'm not saying it's perfect or the best it can be, but I believe it is noticeably better, after 1.5 days' work.
Where I say "sender" below I mean the local computer where a car is being driven. When I say "receiver" I mean someone observing that car on a remote computer.
Reduced steering glitch each time a position packet is received
- this requires the sender and receiver to have the new version
Position packets are sent more frequently in response to steering
- packet frequency is further increased at higher speeds
- this requires only the sender to have the new version
Maximum packets per second (/pps) has been increased to 12
- this doesn't change much except in specific circumstances
- FIX: /pps command while in multiplayer was not sent to guests
- this requires only the server to have the new version
It is interesting to see the comparison between the live online recording (with high ping) and the MPR of that online session.
While watching this, I thought of an interesting point to make about higher packet rate.
That point is: Whatever you see in that video that is different between the live recording and the MPR, *cannot* be fixed by sending more packets.
Or to say that another way: glitches that *could* be improved by a higher packet rate are visible in an MPR.
EDIT: It's related to what I keep saying: the problems of latency (a.k.a. high ping) cannot be solved by a higher packet rate. There may be ways to improve predicted positions or the results of collisions, but those solutions are not as simple as increasing the packet rate.
EDIT 2: To be more precise, it's only the glitches in Wizard DK's MPR that can be corrected by sending more position packets. His MPR is a recording of all the packets sent. kagurazakayukari's MPR is almost identical but in a few places misses a few packets that seem to have got lost. The online recording is the same as kagurazakayukari's MPR but with latency added.
I don't think so. I could be wrong though as it's a complex system. But do you get this shifting problem reliably enough to know if it happens in single player or only in multiplayer?
As far as I know, that is not your problem. I'm guessing you have a good ping to the server, if I remember the server was started in Finland? But if kagurazakayukari is connecting from China, then it is his distance from the server that causes this problem.
My only point is that I expect kagurazakayukari would see this type of lag from anyone else who is connected to the server, not specifically Wizard DK.
Unfortunately it doesn't. From what I saw last night, other people don't have the same issue. Sometimes a long over-timed period does happen, but when that happens my ping always goes up above 500ms+ (while I can still see other cars driving normally), and then after 30 seconds I lose the connection; that should be my ISP's fault. DK's kind of lag... I don't know; perhaps I should upload a live recording so you can see what happened. At the time he disappeared in lap 2 there was some latency change, but when he started lap 3, I had a more serious connection issue (1000ms+) for a short period, yet his car was okay. So it's probably not my problem.
that sideways flick i do.
but online they can last far longer, so I get to see a car fly into a wall, hit it, spin around, spraying up dust or smoke, then suddenly disappear and reappear just in front of me. Sometimes that also results in me simply driving through them (during a lag spike), sort of like LFS decides to create an alternate future but then decides that's not going to work(ish).
Actually, that area on BLW has some attraction to this issue too. I VERY often see lag spikes in this particular area, near the asphalt pad in the grass on the right side; when lag is present, this is where I often see someone crash into the wall. If you let it run for the next lap, you'll see a glitch around this place in that lap too. I'm thinking there must be some spot around there that glitches more easily than other places; I think this also happens in a spot at Fern Bay, but maybe it's worth keeping in mind. From the Avon tyre arch/bridge to the motorway bridge, this spot is very pronounced. Sometimes it goes further, probably depending on how high their ping is: past the rallycross exit and into that turn too, when it's really bad. I've also seen lots of cars go into a wall there and later appear beneath the start line; I'm thinking it updates when crossing the start line, hence they reappear there. There's another spot just before the bridge on the straight at BLW, and T1 to the chicane has a spot too. I don't know if it's related, but I thought I'd mention it, as I quite often see it in these places.
I updated the server for GTI Thursday to the test version DCon. So we will see tomorrow; we get a lot of players from all around the world, and it's usually a full server, so hopefully we will get a lot of feedback.