The online racing simulator
Searching in All forums
(677 results)
Degats
S3 licensed
Quote from Kid222 :As far as i've seen, there's no warping like in LFS, the cars slide on the surface more smoothly and "continuously".

Most other games don't simulate the physics of other cars in multiplayer; they just smooth the cars' positions around. It may look consistent (though the body language of other cars doesn't look realistic at all), but I'd be amazed if the positions are actually that accurate. There are some games that *look* smooth, but when you compare different players' viewpoints, the visual positions of the cars can be wrong by several metres.

I'd be interested to know what these other games are, because as much as LFS seems to get complaints about netcode these days, it doesn't seem to be as bad as pretty much any other game I've seen when there's a bit of lag involved. LFS is just more obvious than some, because it doesn't try to fake things.


From what I've seen and heard so far, I think LFS would definitely benefit from 12pps (or higher). It seems to be better in nearly every way, even for contact at high ping.
I'm yet to have the opportunity to test it in a very full server, but if there are CPU issues in some cases, I think it should be the choice of the server admin as to what is acceptable.
Degats
S3 licensed
Quote from PeterN :I've had TIME_WAIT just now even when using /exit from the console.

It is the insim port that gets stuck if an insim client was connected at the time LFS was shut down.

Hmm interesting. Mine was definitely the main listen socket (I didn't connect an InSim client).
Presumably that means that either of the listen sockets will get stuck in TIME_WAIT if something was connected.

I haven't had a chance to test yet, but it's possible that Windows will allow a process to take over a TIME_WAIT socket once it's no longer associated with a process, whereas Linux won't. (Assuming you're using Windows to test.)
Degats
S3 licensed
Quote from Scawen :Is it confirmed (or are you able to confirm) that LFS doesn't close a socket gracefully when exiting? I would have thought that it closed all sockets when shutting down multiplayer, and shut down multiplayer before exiting. If not, can you tell me which socket is left open and which method of exiting LFS (or DCon) makes this happen?

I assume this is mostly the listen socket as LFS gives "TCP Socket : bind failed" when starting up again.

Not sure whether LFS tries to gracefully close the sockets in all cases, but a couple of things from memory:


The other day I closed LFS (a full game client on Windows that was itself running a server) using the window's X button, and the master server thought I was still online afterwards, until that timed out. I didn't test whether the listen socket was still active, but as I said, it appears that Windows will clean up listen sockets regardless.

For the DCon on Linux, we use "kill -TERM", which is supposed to be a request to close cleanly. I'm not sure what the behaviour is via wine though.

I've just tested it manually on my local Linux server and it appears that the kernel releases the socket fine if there are no connected players, but does not if there are. Doing CTRL+C in the terminal has the same behaviour. That sounds to me like LFS isn't closing the client connections when asked to terminate (or at least isn't waiting for the connection close to complete). TERM is supposed to be "almost the same" as INT (ie CTRL+C).
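
For reference, catching the signal itself is trivial - a minimal Python sketch of a process installing the same handler for TERM and INT, so that "kill -TERM" and CTRL+C both run the normal shutdown path (whether that shutdown path then closes the client connections properly is the separate question):

    import signal, sys

    def handle_term(signum, frame):
        # This is where the program's normal shutdown would go:
        # drop client connections, close the listen socket, save state.
        print("received signal", signum, "- shutting down")
        sys.exit(0)

    # "kill -TERM" delivers SIGTERM; CTRL+C in a terminal delivers SIGINT.
    signal.signal(signal.SIGTERM, handle_term)
    signal.signal(signal.SIGINT, handle_term)

    signal.pause()  # Unix-only; stands in for the program's main loop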

netstat shows connections to both s11.lfs.net and the listen socket in TIME_WAIT when this happens.
s11.lfs.net probably won't affect restarting DCon, as outgoing connections will use a new local port.
If no clients were connected, only s11.lfs.net is showing in TIME_WAIT and not the listen socket.

My client can immediately join another server after killing DCon, so I guess there must have been some communication to the master server.
If the DCon crashes (or the server drops offline) clients usually can't connect to another server until the connections time out.
Degats
S3 licensed
Quote from cargame.nl :This delay is a security measurement as packets from a previous session can still be alive.

I do not think it is good practice to just ignore this and reinstate a fresh session on the same port and accept any TCP packet which is still out there.

https://stackoverflow.com/questions/337115/setting-time-wait-tcp

So I've read and I didn't suggest changing the TIME_WAIT timeout.

However, that's not how Windows behaves and it's possible (and common) to close a socket and start a new session without this issue on Linux. You can restart a web server without having to wait 60 seconds for the kernel to clean it up.
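
For what it's worth, the usual reason native Linux daemons can restart instantly is the SO_REUSEADDR socket option, which lets a new process bind the listen port even while old connections on it are still sitting in TIME_WAIT. A minimal Python sketch of the generic option (nothing LFS-specific, and the port is just an example):

    import socket

    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # Allow binding even if the previous instance's connections on this port
    # are still in TIME_WAIT - this is what most web servers/daemons do.
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("0.0.0.0", 63392))  # example port
    sock.listen()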

It's possible that Windows does something that Linux does not to forcibly end the session during winsock cleanup ("connection reset by peer" etc). Wine presumably either doesn't or can't replicate this behaviour.


Anywho, this is going off-topic somewhat. The only thing that matters here is whether gracefully closing the socket might solve the issue or not (though it's not that big of a deal anyway).
Degats
S3 licensed
Quote from Neilser :I have very limited experience with Wine, but if I understand correctly it runs entirely in user space. As such, any sockets that are opened by a Windows binary and not closed properly by it (or Wine) will presumably be freed up instantly if you simply kill Wine, no? (For sure there may be an even simpler answer of course.)

Apparently, running "wineserver -k" (which kills all wine processes) doesn't solve the issue.


From what I can tell from various bug reports, it's a difference in the socket implementation between Windows and Linux. After more digging, it seems to be possible to reproduce from a native Linux application, so it isn't strictly speaking a wine issue; however, it shows up a lot in wine because of the difference in behaviour.

It seems to be related to the socket getting stuck in TIME_WAIT. When an application that has an open TCP socket is killed, Windows will also clean up the socket, which immediately allows you to re-bind to it. In Linux, the kernel doesn't clean up the socket until the TIME_WAIT timeout (which is 60s) has elapsed.

The behaviour on Linux (sort of) makes sense when the program is still running, however once the program terminates it would make sense to also kill the socket. Linux seems to be unique in that it doesn't. The kernel devs have supposedly marked a bug report as WONTFIX, but I can't find the actual report to see what was discussed (if anything).


The problem might go away if all connections to the socket are gracefully closed before the program is terminated - I'm not sure if that's possible to do with DCon?


See https://forum.winehq.org/viewtopic.php?t=22758 and https://bugs.winehq.org/show_bug.cgi?id=26031
Degats
S3 licensed
1) Sort of. We have a web tracker in the works, but we haven't nailed down a feature list for it yet.

2) There are parts of the InSim side that aren't yet set up to handle AI. There are some unique aspects of AI that complicate things. I intend to get them working at some point, but atm we have no need for it so it's a low priority for us.
I'll try to remember you when we get far enough to test, but it'll be a while Wink
Degats
S3 licensed
Quote from Scawen :There is only one way I can think of that this could be done, and that would be to watch a live replay, slightly behind time. I was thinking about this when watching a race on Sim Broadcasts. As mentioned, increasing pps may bring some small benefits, but cannot improve things anywhere near as much as some people seem to think. Packet frequency cannot solve latency. I was thinking if the commentator of sim broadcasts could have one instance of LFS saving a MPR, and another instance watching that MPR, a second or two behind time, this could in theory allow two things.

1) the smoothing you see in replays
2) allow rewind to replay something interesting before switching back to real time

I do know this isn't very easy. I'm not sure how far away it is from being possible but I'll have a look into it.

I've had some ideas for a while of how to implement instant (or not so instant) replays in the Sim Broadcasts software, though I haven't had a chance to start testing anything as yet.

Running the stream on a slightly delayed replay to smooth out some things is an interesting idea, I'll have to have a play at some point. It may cause some logistics issues with the commentary/director team though.
Is there a maximum time between file writes? I see there seems to be a 4k buffer, but if it can take longer than a second or two between writes, this may be problematic.
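
For what it's worth, the write cadence is easy to sanity-check from outside by polling the temp file's size - a quick Python sketch (the path is just a placeholder for wherever the live instance keeps its temp MPR):

    import os, time

    MPR_PATH = r"C:\LFS\data\mpr\temp_mpr.mpr"  # placeholder, not necessarily the real filename

    last_size = os.path.getsize(MPR_PATH)
    last_change = time.time()
    while True:
        size = os.path.getsize(MPR_PATH)
        if size != last_size:
            now = time.time()
            print(f"+{size - last_size} bytes after {now - last_change:.2f}s (total {size})")
            last_size, last_change = size, now
        time.sleep(0.1)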


On the subject of the temp replays, would it be possible to change the way LFS reads (or just saves) replays? Currently, if the temp file is open in another LFS instance, that replay will be lost when the session ends/restarts on the live LFS instance. The replay for the next session seems to record fine though... Creating a copy rather than renaming might be the easiest fix.

I also looked into a related weird bug the other day, where if LFS opens another program (using /exec) while a replay is being recorded, LFS then fails to record new replays from that point. I'm not sure what the cause is, but at least some of the time, Windows seems to think that the temp file is being locked by that spawned process and not LFS.
Procmon shows some weird errors, but no attempt by the spawned process to access the file at all (see attached). I can move this to a bug report thread if needed.
Degats
S3 licensed
Quote from Scawen :Do you know any simple and reproducible ways to create the worst looking effects, in these situations where high latency is not the cause of the problem? You mentioned the BF1 at high speed in corners, which I remember seeing.

The BF1 in high-speed corners seems to be the most obvious manifestation of it. From memory, it does still seem to happen on good (<40ms reported by LFS) connections.

Quote from Scawen :I can start DCon on a remote Linux computer using Wine and it is easy, at least on that server. I realise we are relying on external software so that's not ideal but it seems to work well. What are the problems with that?

We've used DCon on wine for years. The only real problem we have is that Linux doesn't close the listen port fully for 60s, which prevents us from quickly restarting the server.
There may be some workarounds for that, but it seems the issue is quite complex, involving various sides not fully conforming to the TCP spec or something. I think it rears its head because of behavioural differences between wine/Windows and the Linux kernel.
Degats
S3 licensed
Quote from Tristatron :I don't think we're on the same page here. Let's use the example of an XRT (45° steering lock, 720° steering wheel rotation), then compare it to an XRR with the suggested changes (45° steering lock, 540° steering wheel rotation). If I took the XRT and moved my mouse 2 inches to the left which results in the steering wheel turning 90°, then took the XRR and moved my mouse the same distance it would also turn the steering wheel 90°. However since the XRR only has 540° of steering wheel rotation to work with, the angle of the tyres on the XRR will be greater than the XRT despite the physical input being the same and the steering wheel being rotated the same amount. This is annoying for muscle memory and is already annoying when using cars like the FXO which are limited to 30° steering lock, but still have the full 720° steering wheel rotation which is why I added the 2nd suggestion at the end. Hopefully this was a bit more clear.

OK, so there are two things going on here.

You talk about "muscle memory" on mouse.
The way LFS is at the moment and using your example above:
XRT has 45° steering lock with 720° wheel turn, XRR also (hypothetically) has 45° steering lock but only 540° wheel turn.
If you move your mouse 2", the front wheels will turn the same amount on both cars, as they both have the same lock.
Your muscle memory moving the mouse the same distance will result in the cars moving in the same way (ignoring understeer). Your mouse doesn't have more "wheel turn" on cars that have it; it's just that the *visual representation* of the steering wheel on screen will be different.

LFS mouse control is based on where the pointer is relative to the edges of the game window. If you turn on the mouse pointer, you can see it clearly. This means that when your pointer is at the far right of the window, your car will be full lock to the right regardless of whether the limit is 30° or 45°.
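
Roughly, as I understand it (a sketch of the mapping, not LFS's actual code):

    # Mouse steering sketch: the pointer's position across the window maps onto
    # the car's steering lock; the on-screen wheel rotation is only a display scaling.
    def mouse_steering(pointer_x, window_width, max_lock_deg, wheel_rotation_deg):
        frac = (pointer_x / window_width) * 2.0 - 1.0      # -1.0 far left .. +1.0 far right
        frac = max(-1.0, min(1.0, frac))
        front_wheel_deg = frac * max_lock_deg                    # actual steering
        displayed_wheel_deg = frac * (wheel_rotation_deg / 2.0)  # visual only
        return front_wheel_deg, displayed_wheel_deg

    # Same pointer position on the hypothetical 45°/720° and 45°/540° cars:
    print(mouse_steering(1200, 1600, 45, 720))  # (22.5, 180.0) - same front wheel angle,
    print(mouse_steering(1200, 1600, 45, 540))  # (22.5, 135.0) - different on-screen wheel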

If it's the visual representation of the steering wheel being different that's throwing you off, then you may be better off turning off the steering wheel to stop it being a distraction, and rely on your muscle memory instead.


Regarding the different steering lock/wheel turn on the different cars, this is completely realistic.
If you drive two different cars IRL, you will get different amounts of lock and wheel turn. A drift car with a fast rack and 50° lock will move the front wheels very quickly, whereas an old road car with no power steering and 30° lock will take many turns of the steering wheel to go lock to lock. It's just something that drivers have to get used to.

---------

Quote from nexttime :1) Wouldn't streaming be much smoother already with higher PPS? People in streamed races are generally more hardcore, and have low latency internet. It's bit more serious than your casual Blackwood FBM.

And nowadays people have CRAZY pc's Scawen. They aren't even that expensive anymore. I have 8 core 16 threads Ryzen with 16 GB ram. And that's like average Ryzen, not even considered top end, I can open 10 LFS instances and run max AI on all of them. (Probably more actually) LFS is already way too friendly to old hardware. I can run LFS with a 20 year old PC. Literally. Maybe not with max graphics and max AI or 999 FPS, but it WILL run.Big grin And that's probably also because you don't cut corners and code cleanly.

2) I think this one would require a new broadcasting interface with specific controls. It wouldn't be nice to see the guys fiddling with the menu and trying to find where the incident happened, going back and forth in replay etc. Something like a "control room" would be amazing, where viewers don't see what's being done and when the specific replay is ready, just "send" it to the main stream, maybe even choose different views to play it from, and then it goes back to live automatically after replay ends. Hell maybe even a quick LFS logo animation before and at the end of the replay, not to jump straight into it and confuse people. Streamers would basically get the replay and angles, the playback speed ready, and send it in during a calm moment in the race.

1) Indeed, PCs are a *lot* more powerful now than they used to be. Multiple cores won't help LFS atm as it's single-threaded, but as an example, the single-core performance of Ryzen CPUs has increased by more than 50% since 1st gen Ryzen. Additionally, 1st gen Ryzen had about 2x the single-thread performance of the previous gen AMD CPU.
[Edit: for reference, my laptop is 8(!) times faster single-threaded than the desktop CPU I used for my first few years of LFS]

The additional calculations that might be necessary for higher PPS sound like a perfect candidate for multi-threading as presumably there's a lot of independence in the calculations. Something to think about for the multi-thread rewrite Scawen has suggested might happen eventually.
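
Just to illustrate the independence point (a toy Python sketch, nothing to do with how LFS is actually structured): if each remote car's prediction only depends on that car's own last-known state, the per-car work can be farmed out to workers trivially:

    from concurrent.futures import ProcessPoolExecutor

    def predict(car):
        # Toy per-car prediction: extrapolate position from the last known velocity.
        # The real calculation would be far more involved, but still per-car.
        x, y, vx, vy, dt = car
        return (x + vx * dt, y + vy * dt)

    if __name__ == "__main__":
        cars = [(i * 10.0, 0.0, 50.0, 0.5 * i, 0.083) for i in range(40)]
        with ProcessPoolExecutor() as pool:
            predicted = list(pool.map(predict, cars))
        print(predicted[:3])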


2) Indeed, having replays on-stream would require extra tools, but would be perfectly doable now with what LFS gives programmers access to. I've had several ideas of how to implement such things on my (very long) Sim Broadcasts todo list for some time Wink
Degats
S3 licensed
That may also explain part of why fast cars (the BF1 in particular) can look like they're jumping all over the place in fast corners - due to the agility of the car and the massive speed, a very small change in input can result in a huge change in position a short time later.


From our testing earlier, the general feeling was that it was better overall. Especially glancing car to car contact seemed less violent.
We didn't have many people in the server (though we did have quite a large range of latencies) so any CPU limitations wouldn't have reared up.

Will have to see if we can wrangle enough people for testing tomorrow.


Edit:
I'm not sure what latency threshold it's thought would cause a problem, but for reference, 12 PPS is still at least 2 Round Trip Times for basically anywhere within Europe. It's nearly 5x the RTT from me to my server in Northern Europe.

Presumably, a higher PPS should also improve the perceived latency for many people? At 6pps, a large input made just after a packet was sent could have to wait up to ~167ms before the next packet goes out; 12pps would reduce that to ~83ms. Those times are before taking into account Internet latency, which adds further delay.
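
To put rough numbers on that (ignoring any alignment to the physics tick):

    # Worst-case extra delay before an input change can go out = one full packet
    # interval; on average it's half that. Internet latency then comes on top.
    for pps in (6, 12):
        interval_ms = 1000.0 / pps
        print(f"{pps} pps: worst case ~{interval_ms:.0f}ms, average ~{interval_ms / 2:.0f}ms")
    # 6 pps  -> ~167ms worst case, ~83ms average
    # 12 pps -> ~83ms worst case,  ~42ms average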


For reference, one-way Internet transmission time for a UDP position packet should typically be less (often significantly less) than 180ms between Australia and Europe.
A server with players up to 2/3 of the way around the world away would typically see position transmission times of <100ms. That's a server based in Europe covering players from the whole of North America (and a lot of South America) to Singapore.


Edit2:
If it turns out that high latency is a significant cause of problems for high packet rates, perhaps it would be possible for LFS (or InSim, if given latency data) to dynamically adjust the pps to take into account the latency of players currently connected to the server?
Degats
S3 licensed
Quote from Tristatron :If you do end up implementing the changes to maximum lock could you also consider allowing the steering wheel to turn 720° for those cars as it does for the other road cars? The problem is that driving these cars with 45° steering lock but only 540° steering wheel rotation makes them feel extremely twitchy in comparison to the other cars since you're effectively increasing the steering sensitivity (this is from my personal experiences on mouse but I'd imagine it's a similar feeling on other devices as well).

Even better would be allowing us to tweak the max steering wheel rotation ourselves for individual car/setups (obviously keeping 720° as the limit) but I feel that might be too much to ask for at the moment.

There are already ways to use larger rotation on a steering wheel if needed (assuming your controller has enough rotation to do so). Sensitivity is only affected by the in-game rotation limit if your wheel is set up to be 1:1. This can already be changed to increase/reduce sensitivity in various ways, depending on how linear you want your steering.

Changing the in-car rotation limit won't affect mouse/joystick control at all, it's only a visual thing for those types of controllers.
Degats
S3 licensed
IMO, allowing more tyre types on more cars would open up some interesting possibilities.

Regarding it being potentially unrealistic - there's nothing really stopping people doing it in the real world as long as the tyres exist for the rim size.

All tyre types would work on the various GTRs.

At least the junior single-seaters (FBM, FOX, MRT) would make sense to have Road & Supers. Several real-world series run on all-weather tyres. Even F1 cars sometimes use them for show runs or some types of testing, so they'd probably make sense for *all* single-seaters. (Hybrid/knobbly makes no real-world sense, but might at least be fun).

If it turns out they're too soft, then the new tyre physics would be a good opportunity to tweak the compounds.


Related to this, and I'm not sure if it would count as "quick", but would be a nice visual for streams especially: Would it be feasible to allow different tyre wall textures for different compounds? (ala F1, BTCC, Indy Car etc)



I also agree that it would be worth testing higher pps. If it doesn't work, then at least we'll know - and if performance depends on certain factors, I think it should be available as an option for server admins to choose, with the caveats made clear.

I'm not sure what calculations are going on underneath to account for lag, but is it possible that performance could be improved for higher PPS by not bothering to run physics on cars that are further away (or not close enough to need collision detection) in some cases? More regular packets could reduce the need for so much physics-based prediction, as the motion with no prediction at all would be smoother. (probably wouldn't be a quick change though)
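
Just to sketch the idea (a hypothetical rule in Python - I have no idea how LFS actually decides this):

    import math

    # Hypothetical rule: only run full physics/prediction for remote cars close enough
    # to matter for collisions; distant cars just get interpolated between packets.
    PHYSICS_RADIUS_M = 150.0   # made-up threshold

    def needs_physics(my_pos, other_pos):
        dx = other_pos[0] - my_pos[0]
        dy = other_pos[1] - my_pos[1]
        return math.hypot(dx, dy) < PHYSICS_RADIUS_M

    print(needs_physics((0.0, 0.0), (120.0, 40.0)))   # True  -> simulate
    print(needs_physics((0.0, 0.0), (380.0, 120.0)))  # False -> interpolate only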



Again, not sure if this is in the scope of this thread; I can move it out into a separate thread if you want.
After playing around with the cameras a lot for Sim Broadcasts, I have a few (hopefully simple) InSim camera requests:

1. Can the minimum FOV on track cameras be (significantly) reduced? 10 degrees isn't much zoom by TV camera standards, which was particularly frustrating when setting up the Rockingham "BTCC" cameras. This would allow for many more realistic camera angles.
From what I read somewhere recently, a lot of "normal" TV cameras can go to 1 degree-ish. The Nikon P1000 goes to 0.8° and I've seen F1 cameras take similar or closer shots of the moon. (See the rough focal-length/FOV numbers sketched after this list.)

2. Is there a reason why InSim can't set the FOV for custom onboard cameras? The value is there, but the documentation says it can't be set. I currently have to do a hacky workaround with the /fov command, but it prevents smooth transitions and can cause a noticeable flicker.

3. Would it be possible to have an option for custom onboard views be relative to the "centre view" position (from options > view) rather than relative to whatever the client's saved custom view is? This would make InSim controlled views much more consistent for the general public (and less error prone for streams).
3 b. It would also be nice if it were possible to set the mirror & clocks mode via InSim for similar reasons.

4. The MCI packet rate appears to be limited to 25Hz (40ms). Would it be simple to change the limit to the physics rate of 100Hz? (I have plans...)
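
Regarding the zoom figures in point 1, the numbers come straight from the usual lens geometry - FOV = 2*atan(sensor width / (2 * focal length)). A quick Python check with assumed values (36mm-wide full-frame-equivalent sensor; the P1000 tops out at a 3000mm-equivalent lens):

    import math

    def fov_deg(sensor_mm, focal_mm):
        # Angle of view for one sensor dimension: 2 * atan(sensor / (2 * focal))
        return math.degrees(2 * math.atan(sensor_mm / (2 * focal_mm)))

    for focal in (200, 1000, 3000):
        print(f"{focal}mm equivalent -> {fov_deg(36, focal):.2f} deg horizontal")
    # 200mm  -> ~10 deg (roughly LFS's current minimum)
    # 1000mm -> ~2 deg
    # 3000mm -> ~0.7 deg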



That'll do for now I think, sorry Wink
Degats
S3 licensed
Quote from Eclipsed :Been using this tool for years now,sometimes as an option to post stats of events,sometimes just an option to copy/paste results in spreadsheets or check pitstop times.
But unfortunatelly latelly I bumped into LFSStats' limits - it just cannot deal with bigger grid than older standard of 32. Yesterday's RTFR had 39 starters,app throws some error and quits when trying to run the MPR...
Anyone willing and being able to fix this "small" problem?

I fixed this for someone else a few weeks ago.

In the zip are the two modified files. They're from the 2.01 source IIRC, so they just need dropping in place of the existing ones in that version (make a backup first in case I've misremembered the version).
Degats
S3 licensed
Simbroadcasts.tv #9
Degats
S3 licensed
In addition to this, it would be nice if there were better control over hiding chat, especially via InSim.

Currently, from our testing, it appears that the - key to hide chat is completely ignored when in Shift+U mode. On live broadcasts, we have to switch to another camera view (onboard) for LFS to accept the - key.


It would be useful to have more control/visibility over this via InSim:

1. Currently, there's no way for InSim to block/hide the chat from what I can tell. Faking a - press via SCH doesn't seem to work. There's also no way to know the current status of blocked messages, other than parsing MSO when - is pressed.

When pressing Shift+F, there seem to be 3 states in some cases, which cycle: hide -> hide more -> show. InSim seems to only be aware of hidden or not, with no differentiation between the two levels of hidden. There isn't even an STA sent on the second Shift+F press.
I don't know everything that the "hide more" actually hides, but if that includes *all* chat messages, then opening that up to InSim control could be a solution for some cases.
It appears that the "hide more" state only works for replays, is that correct? Is it just the equivalent of hide UI + block chat, or does it do more?
Degats
S3 licensed
TC-R BF1 #16 (new, 4096)
Degats
S3 licensed
TC-R #169 new
Degats
S3 licensed
TC-R XRT updated 8, 83; new 16
Degats
S3 licensed
From what I can tell, it is technically possible and valid to send/receive a packet with a zero byte payload*, though I have no idea why anyone would try to send a packet like that.

It's possible CloudFlare does some header only communication for some reason, which doesn't require a payload. They do weird things with headers sometimes...



*I was researching this while trying to debug some Sim Broadcasts stuff
Degats
S3 licensed
Something worth noting regarding the replay in my screenshot: that was a replay of a race I'd just been in. I don't think I'd restarted LFS in the meantime, so it may have been a download that was still stuck from the race itself.


Edit: Said race happened *before* Victor turned off CloudFlare, even if I couldn't load the replay sometime after. It's still possible that turning off CF has solved the issue.

Wild stab in the dark in case it is a CloudFlare issue - could it be that CF's attempt to load and cache a particular skin does something weird the first time someone requests the file? All of the events I've been in lately will likely have had multiple brand-new skins uploaded for them.
Degats
S3 licensed
Quote from Scawen :k_badam says he is using the latest test patch. I guess it's not related to the test patch, because it has been going fine since April without any problem. But to confirm that, are any of you getting the same fault with the official version?

FWIW, I'm also using the test patch, though looking at my downloads I only changed from U7 to U11 on October 3rd. I've only noticed this issue over the last few weeks, so that *may* be related, though could be a red herring. I'd previously been using U7 since ~July last year with no issues.

Quote from Scawen :The screenshot posted by Degats might be a clue. The negative number might indicate corruption (perhaps by a buffer overrun) or an unexpected way through the code.

Degats, would it be possible to share that replay? (Or anyone else who got that problem when starting a replay). If the same thing happens when I try to start the replay then I should be able to catch it. If this is reproducible by any means then I'm sure I'll be able to catch it.

When I reopened it after restarting LFS, it downloaded all the skins fine, so it seems to be somewhat random.

Here are some replays with a lot of skins, all of which at least one person had trouble with when on the server. It's possible you might be able to hit the issue purely based on the number of skins.

http://replays.newdimensionracing.com/mpr/LRL/2020/LRL2020_Round12_Race.mpr
http://replays.newdimensionracing.com/mpr/TBOC/2020/TBOC2020_Round2_SprintRace.mpr
https://tc-racing.co.uk/downloads/trr/BL2R_TRR2020_R40_R1b.mpr
Degats
S3 licensed
I just had this when loading a replay (-12/8 screenshot attached).

The skins don't download over several hours; it gets stuck permanently until you close LFS from what I can tell.
It seems to be a random skin every time; restarting LFS will usually successfully download the one that was stuck, but may or may not get stuck on another.
Degats
S3 licensed
Quote from Racon :If I understand it correctly, more packets would also mean more physics processing, as each packet causes some recalculations to account for ping.

AFAICT, when a new packet turns up, it just corrects the initial conditions for the next physics loop.

More pps should mean that the prediction error wouldn't get so large, but I don't see how it would cause more physics processing.

It could potentially even reduce the distance at which physics calculations for other cars can be turned off (assuming that's already a thing), as less prediction would be needed.
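
A toy sketch of that correction idea (generic dead reckoning, not LFS's actual implementation) - between packets the remote car is extrapolated from its last known state, and a new packet simply replaces that state, so more packets mainly means the extrapolation never has to run for very long:

    class RemoteCar:
        def __init__(self):
            self.pos = 0.0
            self.vel = 0.0

        def on_position_packet(self, pos, vel):
            # A new packet just resets the state the prediction runs from.
            self.pos = pos
            self.vel = vel

        def physics_step(self, dt):
            # Between packets, keep extrapolating from the last known state.
            self.pos += self.vel * dt

    car = RemoteCar()
    car.on_position_packet(100.0, 40.0)   # last known: 100m along the track at 40 m/s
    for _ in range(8):                    # 8 physics steps before the next packet arrives
        car.physics_step(0.01)
    print(car.pos)                        # 103.2 - the error window grows with packet spacing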



Edit:
Servers' connection speeds and bandwidth limits have increased a *lot* since the current pps settings were chosen. There's a lot of room for improvement.

I was talking to Pete about this a couple of months ago (along with other broadcasting stuff) and did some calculations - there should be plenty of bandwidth headroom to increase both pps *and* max number of cars on track a decent amount with no issues as far as we're concerned.
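
A rough version of that back-of-envelope calculation (all the sizes are illustrative assumptions, not measured LFS figures):

    # Very rough outgoing-bandwidth estimate for a full server.
    CARS = 40
    GUESTS = 40
    BYTES_PER_CAR_UPDATE = 64   # assumed on-the-wire cost per car position update

    for pps in (6, 12):
        bits_per_sec = CARS * pps * BYTES_PER_CAR_UPDATE * GUESTS * 8
        print(f"{pps} pps: ~{bits_per_sec / 1e6:.1f} Mbit/s outgoing")
    # 6 pps -> ~4.9 Mbit/s, 12 pps -> ~9.8 Mbit/s - well within a typical modern
    # dedicated server's 100 Mbit/s to 1 Gbit/s uplink.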