The online racing simulator
TEST PATCH 0.6E6 (NOW E7 - 3D support)
(132 posts, closed, started )
Scawen, I'd like to know something kind of related to the DX9 topic, though I don't want people to start a discussion about this, so let's hope people behave well hehe.

Some years ago you said that making LFS multi-threaded could be done in about a month (IIRC). Would it make sense to do this change just after/before the DX9 work? I know it's impossible to do at the same time because it would be hell to debug, but it could be done in the same test patch period to have a mega revamped LFS version once it's done.


I'll understand if you don't want to answer this right now, while you are still implementing the 3D support, and sorry for this off-topic post
Hi Scawen,

I played the demo of Live for Speed years ago at LANs with some friends. When I heard Live for Speed now has Rift support, I bought it instantly.

Thanks so much for this!

So cool to see the cars and tracks that I know from years ago, all in virtual reality now

I like the integration a lot so far. I play at 1920x1080 and I haven't noticed any eye strain.

LOL, just saw just2fast quoted something that I've posted on oculusvr (small world )
Moved the off topic (Windows and Direct3D versions) discussion to a separate thread.
The AI in LFS
Teaching the AI works to an extent, but the AI never take any evasive maneuvers to avoid slamming into something, the AI don't give you any room when they are being overtaken, and they sometimes slam into you when you're wheel to wheel with them, as if they were four-year-olds. I believe these things need to be addressed in a future patch.
And they will. But not now.
-
(BorislavB) DELETED by Scawen : spam
-
(sHiFt3R) DELETED by Flame CZE : question not relevant to this test patch
Quote from XGN Vendetta :Teaching the AI works to an extent, but the AI never take any evasive maneuvers to avoid slamming into something, the AI don't give you any room when they are being overtaken, and they sometimes slam into you when you're wheel to wheel with them, as if they were four-year-olds. I believe these things need to be addressed in a future patch.

Two things:

First, wrong thread as such a comment would belong in the Improvement Suggestions thread.

Second, the AI no longer learn. When Scawen last overhauled the AI, he removed the learning abilities (in the interest of speed). They will take some evasive action (I've tested it by parking a car in a corner at the apex and they do adjust their line) but don't have enough responses to be truly intelligent.
I just wanted to say well done! This is sterling work you have done here Scawen. As a long time user of LFS it is refreshing to see such awesome implementation of this new tech. I look forward to many nights of racing in VR.

Well done Sir!



ps lol my first post since joining in 2003.
Quote from Rank Outsider :I look forward to many nights of racing in VR.

Actually we don't have night in LFS, so it's always day in VR
Thanks!

Sorry about the delay getting the 3D (virtual monitor) version of the 2D interface finished. The changes to the interface were more complicated than expected and got me working a lot in the 3D engine library.

One thing leads to another as usual, and now I'm trying to convert the 3D engine library to DX9. I started yesterday and by the evening, LFS could compile and run... as far as the entry screen. The problem is that when it loads a track, it tries to create vertex shaders using the old code, and that doesn't work. I have plenty of reference material and Alex Evans has given me an explanation of how to go about translating the DX8 assembly vertex shaders into HLSL (High Level Shading Language for DirectX). That is new to me but important to understand.

Hopefully it's not too hard to do the translation and get LFS up and running in DX9 this week. First I'm going to install XP on my new SSD so I can use DX9 in the environment it was designed for. Don't expect any miracles - LFS will look the same as it did in DX8! But it allows more graphical development in the future.
With VC6? I don't get it any more; I thought that wasn't possible / desired.

Anyway, I wonder if the InSim buttons can / will be presented in a readable way in 3D.
Scawen, do you maybe plan to add Nvidia 3D Vision support in LFS too?

While it works absolutely great as is, maybe having in-game options for it wouldn't be a bad idea?
Or is it too complicated / are there Nvidia requirements?
Quote from cargame.nl :With VC6? I don't get it any more; I thought that wasn't possible / desired.

Yes, VC6 is pretty good and was used by a lot of people for a long time. There is no problem with DX9 in VC6.

I plan to move onto VS2010 or VS2013 when we are doing a physically incompatible patch. This is just because of the floating point operations that the optimiser rearranges in ways that are mathematically equivalent but don't produce the same result. E.g. compare a*b+a*c and a*(b+c) ... mathematically the same, but on a computer this will give different results with certain floating point values due to rounding error. And that would be enough to send 100% of hotlaps OOS within a second, and no-one could connect online either. Not a really big deal, but then there is no immediate need to move from VC6 either. When the physics changes, all hotlaps will be OOS and that's a good time to change compiler.

Quote from cargame.nl :Anyway, I wonder if the InSim buttons can / will be presented in a readable way in 3D.

They will just work the same as everything else. They will appear bigger in the Rift than they do now, so they should be readable if they are as big as the standard buttons.
Hmm, I've read back and it was the Rift SDK that was a problem with VC6. All these incompatibility stories got mixed up in my head a bit.

Trying DX9, I think that's excellent news. It doesn't give direct visual improvements yet, but the potential alone is exciting.
Quote from Scawen :
Sorry about the delay getting the 3D (virtual monitor) version of the 2D interface finished.

No problem, I'm busy playing LFS in VR anyway

Quote from Scawen :Don't expect any miracles - LFS will look the same as it did in DX8! But it allows more graphical development in the future.

DX9 is good news, because correction of chromatic aberration seems to be possible. And of course DX9 will allow other graphics improvements in the future. In the end better graphics are nice, but proper Rift support is by far more important (for me).
Quote from Scawen :
I plan to move onto VS2010 or VS2013...

Just don't ...
The UI is done in too fancy a way and not so useful IMO.
VS2010 is okay ..., but I like VS2008 best evaarr
Quote from just2fast :... because the correction of chromatic aberration ...

What do you mean by that? I can't figure it out. Chromatic aberration is an optical error of camera lenses, while the LFS 3D engine has no such errors, i.e. there's nothing to "correct". What am I missing?

edit: or do you want to have that error implemented in TV view? Why? Looks fugly. I prefer perfect lenses.
Quote from Ped7g :What do you mean by that? I can't figure it out. Chromatic aberration is an optical error of camera lenses, while the LFS 3D engine has no such errors, i.e. there's nothing to "correct". What am I missing?

edit: or do you want to have that error implemented in TV view? Why? Looks fugly. I prefer perfect lenses.

If I'm not terribly wrong, it's something related to the Oculus, and not the normal rendering on your screen.
Quote from Whiskey :If I'm not terribly wrong, it's something related to the Oculus, and not the normal rendering on your screen.

Yes, that's true. Nothing really to do with LFS, but with the Rift implementation.
Through the lenses the colors at the edge become a little distorted, but in software (here LFS) it can be compensated. It's not that annoying at the moment.
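For illustration, the usual software fix is to sample the red, green and blue channels at slightly different radial distances, so they line back up after the lens spreads them apart. A rough C sketch of the idea (the constants and names here are made up for illustration, not the Rift SDK's real values):

```c
typedef struct { float u, v; } UV;

static UV scale_uv(float u, float v, float s)
{
    UV r;
    r.u = u * s;
    r.v = v * s;
    return r;
}

/* Per-channel sample coordinates for one eye, relative to the lens centre.
   The base term is the barrel distortion; the extra red/blue factors pull
   those channels slightly inward/outward to cancel the colour fringing.
   All constants are illustrative placeholders. */
static void chroma_coords(float u, float v, UV *red, UV *green, UV *blue)
{
    float r2   = u * u + v * v;       /* squared distance from lens centre */
    float base = 1.0f + 0.22f * r2;   /* barrel distortion scale           */

    *green = scale_uv(u, v, base);
    *red   = scale_uv(u, v, base * (1.0f - 0.006f * r2));
    *blue  = scale_uv(u, v, base * (1.0f + 0.014f * r2));
}
```

At the centre of the lens all three channels sample the same spot; towards the edge they spread apart by design, countering the fringing the glass introduces.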
Chromatic aberration correction has a huge benefit in image quality with Minecrift (Minecraft VR mod) and it should definitely be added to LFS if possible.
Is anyone else having anti-aliasing problems? It doesn't seem to be working. I have tried disabling and enabling the Nvidia global setting; nothing changes. Also, I don't see an increase in quality (downsampling) when I change the resolution to 1920x1080.

Besides that, this is working great!
Unfortunately, DX8 (and I think it's the same in DX9) does not support antialiasing on render target textures. LFS renders to a texture before distorting that onto the screen.

I think other games do the same thing. Do they suffer from the antialiasing issue? Or do some of them somehow do it well, while others have the same problem as LFS?

It is theoretically possible to add antialiasing by adding another stage - render each eye's view to a massive square texture then reduce that by 50% onto the main render texture (using bilinear filtering to produce each pixel on the antialiased render target as an average of 4 pixels) before finally distorting that to the screen.

There is a problem with this: the massive square texture would be really big, in the region of 2048x2048, which is a big screen to render. This would probably affect the frame rate. It could produce good results near the centre of your vision, but there will still be artifacts introduced by the final distortion, especially as trilinear filtering is not available because of the lack of a mipmap of the antialiased render texture.

About downsampling, I have heard that figure of 1920x1080 a few times. But that is a 16:9 resolution whereas the Rift is 16:10 so I would think you should really use 1920x1200 if it is available. However, the render target size in LFS at the moment is chosen to suit a final screen output size of 1280x800, so I would expect that you would not get much benefit from choosing a higher screen size. I am adding something to my notes now, to attempt to choose a render target size appropriate to the actual final render screen size, which would mean you would get more benefit from downsampling.
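The 50% reduction step described above amounts to a box filter: each output pixel is the average of a 2x2 block of the oversized texture. A minimal CPU sketch of that step (single channel, illustrative only; in practice this would run on the GPU using bilinear filtering):

```c
/* Downsample a sw x sh image by 50% in each axis: every destination pixel
   is the rounded average of the corresponding 2x2 source block. */
void downsample_2x(const unsigned char *src, int sw, int sh,
                   unsigned char *dst)
{
    int dw = sw / 2, dh = sh / 2;
    int x, y;

    for (y = 0; y < dh; y++) {
        for (x = 0; x < dw; x++) {
            int sum = src[(2 * y)     * sw + 2 * x]
                    + src[(2 * y)     * sw + 2 * x + 1]
                    + src[(2 * y + 1) * sw + 2 * x]
                    + src[(2 * y + 1) * sw + 2 * x + 1];
            dst[y * dw + x] = (unsigned char)((sum + 2) / 4); /* rounded */
        }
    }
}
```

On the GPU the same effect falls out for free: drawing the big texture at half size with bilinear filtering samples exactly the centre of each 2x2 block.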
Dodgy AA is common in modern games with deferred rendering engines, hence stuff such as FXAA coming along.

But then, a modern GPU can render to large buffers with no real performance penalty. May be worth an option?
The final distortion is probably done with a 2D bilinearly interpolating table, and if I was writing that code I would make it do the resizing (50%) as well, so that the only extra workload needed is the big render. I actually think that would also reduce the artifacts, because there is more precise data to interpolate from.

But I do not know where the final distortion is done in this case; if it is inside an API, maybe this is not possible.
Some say rendering at twice the size and then reducing the image by 50% is the best way to do AA.
This thread is closed
