Racing Line, not quite there...
Racing Line, still not there...

So, my current algorithm is not working. One thing I have noted from these results is that the racing line is somehow pushing itself out of bounds of the track edges... Interestingly, I did put in code to collide against the edges and keep the nodes of the racing line within those bounds... Not working. Hmm.
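To be clearer about what that edge-collision code is supposed to do, here is roughly the idea as a quick sketch (illustrative names, not my actual code): whenever a node escapes, it gets projected back onto the segment between its left and right edge points for that cross-section.

```
// Minimal sketch (not the actual AIRS code): clamp a racing-line node back onto
// the segment between its left and right track-edge points. Vector2, leftEdge
// and rightEdge are illustration names.
#include <algorithm>

struct Vector2 { float x, y; };

Vector2 ClampToTrackSegment(const Vector2& node, const Vector2& leftEdge, const Vector2& rightEdge)
{
    const Vector2 edge{ rightEdge.x - leftEdge.x, rightEdge.y - leftEdge.y };
    const float lengthSq = edge.x * edge.x + edge.y * edge.y;
    if (lengthSq <= 0.0f) { return leftEdge; }

    // Project the node onto the left->right segment and clamp the parameter to [0, 1]
    // so the result can never lie outside the track edges.
    const float t = std::clamp(((node.x - leftEdge.x) * edge.x + (node.y - leftEdge.y) * edge.y) / lengthSq, 0.0f, 1.0f);
    return Vector2{ leftEdge.x + t * edge.x, leftEdge.y + t * edge.y };
}
```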

I need a bigger bag of magic.
Are those racing lines the result of various iterations of the algorithm?
Different iterations of the same algorithm yes.
The pth files have the racing line built into them. I'd love to see these items rebuilt to reflect the racing lines used by the WR holders for each track and car combination. You could then use this as a baseline for comparison as to how the algorithm is doing.
That is actually a brilliant idea. I will certainly add a quick loader just to see how this line compares with the racing line of FE1 once I get it computed. By the way, the second method is coming along amazingly well. It still has some strange bug where the line bends in on itself, but once I get that figured out, I think this has a bit of promise. Here are a handful of screenshots...

Racingline Computation Screenshot 1
Racingline Computation Screenshot 2
Racingline Computation Screenshot 3
Racingline Computation Screenshot 4
Racingline Computation Screenshot 5
Racingline Computation Screenshot 6

It was showing some serious promise, and then somehow it still managed to get through the track edges... I don't quite understand that, but perhaps fixing the folding problem will avoid the edge problem?

That chicane is a beauty though! The red line, representing the racing line, started from the center line that I computed earlier.


.... Later .....

So I decided to tune the springs to be much softer, but this takes hundreds of times longer to compute. While letting that run, I started writing a new part of the algorithm: I am going to constrain each node so it can only move left/right from where it originally started, which will be much easier and faster to keep within the track bounds. It is less physical, since a node won't move around quite like it did before. While writing this I let the softer springs continue, and that was quite promising. VERY SLOW though. These shots were taken over almost 30 minutes or more and nearly 30k iterations. The line did still cross the track, so I decided to go add the new part of the algorithm. Shots of that will come later...
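Roughly what I mean by the left/right constraint, as a sketch (illustrative names, not my actual implementation): each node remembers its cross-track axis and only slides along it, so it physically cannot leave the track.

```
// Sketch of the "left/right only" constraint idea: each node is parameterized
// by t in [0, 1], where 0 is the left edge and 1 is the right edge of its
// original cross-section. Names are illustrative only.
#include <algorithm>

struct Vector2 { float x, y; };

struct ConstrainedNode
{
    Vector2 leftEdge;   // left track edge for this cross-section
    Vector2 rightEdge;  // right track edge for this cross-section
    float   t;          // position along left->right, starts at 0.5 (the center line)

    Vector2 Position() const
    {
        return Vector2{ leftEdge.x + t * (rightEdge.x - leftEdge.x),
                        leftEdge.y + t * (rightEdge.y - leftEdge.y) };
    }

    // Apply a force by projecting it onto the cross-track axis; since t is
    // clamped, the node can never cross the track edges.
    void ApplyLateralForce(const Vector2& force, float stepSize)
    {
        const Vector2 axis{ rightEdge.x - leftEdge.x, rightEdge.y - leftEdge.y };
        const float lengthSq = axis.x * axis.x + axis.y * axis.y;
        if (lengthSq <= 0.0f) { return; }
        t = std::clamp(t + stepSize * (force.x * axis.x + force.y * axis.y) / lengthSq, 0.0f, 1.0f);
    }
};
```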

Racing line screenshot 1
Racing line screenshot 2
Racing line screenshot 3
Racing line screenshot 4
Racing line screenshot 5
Racing line screenshot 6 (off track)
Racing line screenshot 7
Racing line screenshot 8


.... Even Later .....

These are after 8k to 15k iterations, but took no more than 5 minutes, and the line was "decent" after only seconds. I will slow it down in a final video. You can see it hasn't quite reached the edge of the final turn; this might be from the number of iterations, or perhaps the algorithm has subtle weaknesses. I can say I am amazed by my results, which can (theoretically) be calculated for any track given the left and right edges. :-) Rock on!

The final results!!!!!!!!
Final Computer Racing Line Screenshot 1
Final Computer Racing Line Screenshot 2
Final Computer Racing Line Screenshot 3
Final Computer Racing Line Screenshot 4

Going to make a few adjustments and load up the racing line from the LFS PTH files as per Dygear's suggestion. That should be neat. ... It is neat: the gray line is the LFS line from the PTH file. It does point out that my line has weaknesses and/or is overusing some curbs. I do know that I was a little too cautious when I created some track edges, and may not have been cautious enough in other places. Fairly similar, but also different.

Racing Line Comparison Screenshot 1
Racing Line Comparison Screenshot 2
Racing Line Comparison Screenshot 3
Racing Line Comparison Screenshot 4
Awesome, I'm glad it worked out! Keep it up!
That looks very nice!

I can't really tell the dimensions of the car; is it possible that in "Racing line screenshot 6 (off track)" the vehicle would still have two wheels on the track? To me it looks like it, and I think it would be a valid path.

From the looks of it, your path looks faster than the file one (gray), and I'd have to point out that a.i. is supposed to be fun so maybe it was intentionally made not to ride the curbs that hard, so the humans would stand a chance. Just a guess.
I would expect the stored path to be geometrically(?) optimal, and each a.i. difficulty would have some tolerance in keeping close to it.

I know this isn't your aim (hotlapping ai), but I still want to ask the question: can you run a couple of tests using the default path vs your path and see what the lap time difference is?
Depending on how hard it is to change the racing line at that point, I may try to do just that. (Remember the driver won't know the world coordinates of places on the racing line; it only gets to judge/know where it is from the reference points.) My racing line does look like it would be faster in some parts, and yes, the car width should keep two wheels on the surface at that part, I believe, but there is a BIG curb on that corner that would kill the speed. As it is, the line I computed may have the right wheels taking too much curb on two of the corners.

Still a long way from getting the driver to drive again, but I am happy with the progress. My next plan is to clean up some of the code, slow down and prettify the visualizer for the racing line part, make a video of the process, and write up a small paper on how I managed to compute the racing line from only the left and right edges of a track.

Also, I will need to rewrite a lot of the code without the visualizer, to speed up the processing so I am not forced to wait for it as I move forward with the project, but that should be relatively easy compared to what I just accomplished.
So this has been out on YouTube for about a week without announcement. Here it is finally: a video of how I computed the racing line for Fern Bay Club. I still need to clean the visualizer code out and make it run in much less than visual time. But working with the visualizations provided a very neat way to actually SHOW how I've done this.

With that said, enjoy: (Recommend to watch in 720 to see the lines a bit better).

Artificial Intelligence in Rac ... Computing the Racing Line
Impressive improvement; you are getting really close to a "logical" racing line, and you are moving forward a lot faster than I expected at first.
The next step I think I am going to take is making the visual sensor that will eventually be what the artificial driver sees and judges distances from. A hurdle I have is that I would like the terrain of LFS (at the very least) to block the line of sight like it would for us.

I tried loading the SMX file for Fern Bay hoping it would provide a simple mesh for the terrain; however, it does not. Actually, I am quite confused about what it contains. I loaded the format as specified and do see a lot of "objects" (so many that the framerate drops without optimizations), but they are not very useful from what I can tell. I tried both CW and CCW winding, and I can see what I guess are the pit lane garages, some grandstands, and many other "boxes", many more than I'd expect, flat ones, etc. I am 99.8% sure I loaded the file format correctly and am displaying it as it should be, but the coloring was way off (too dark).

Does anyone have experience with the contents of the SMX files? And does anyone know a way I could get access to at least generalized terrain data for the tracks (specifically Fern Bay Club at the moment)? Is the following screenshot what I should expect (after changing the colors to a brighter 0xFFAAAAAA ARGB)?

Fern Bay Simple Meshes Loaded
Thanks Dawesdust, that actually looks pretty similar to what I have, which would be expected given the SMX is a specified format, so the loaders will be the same. You can see my screenshot (perhaps you haven't yet) of what my outcome with the SMX file was. Not quite pretty. I wonder if Dygear has seen them in action and can let me know if my results are expected.
Ah, I missed your screenshot. Another person you might want to query is Vic. He is always on IRC in the #liveforspeed channel on GameSurge. Best bet is to catch him during the day in North America, or really really late at night.

Looking at the screenshot, it looks like the values are strangely multiplied somehow. If you look at it and decrease the spacing... it almost seems to link up perfectly. I'm not certain that's the solution, but it seems plausible.
You're brilliant. I thought the same but for some reason it didn't click when I thought it... Reading what you said made it click! One moment...


And back with the fix. It seems I do indeed get the track terrain data, and MUCH MORE! Correctly loaded

Edit: It isn't quite ALL the terrain, but it might be enough to work magic. The giant cliff "jump" at the FE1 chicane doesn't appear for me. But I think enough appears to block visuals appropriately, hopefully. Awesome!

And colored... The lighting was "dark" before because of the ordering of the triangles in my rendering system. I win, check that out! The cliff jump is there well enough! AWESOME.
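For anyone who hits the same wall: the fix really was just a matter of scale on the point coordinates. A sketch of the idea (the struct layout and the 65536 factor here are my assumptions for illustration - double check them against the SMX format spec):

```
// Sketch of the scaling fix, assuming the per-object point coordinates are
// stored as fixed-point integers where 65536 units equal one metre; treating
// them as raw metres blows the spacing up by that factor.
#include <cstdint>

struct SmxPoint { std::int32_t x, y, z; std::uint32_t colour; }; // assumed layout

struct Vector3 { float x, y, z; };

Vector3 ToWorldMetres(const SmxPoint& p)
{
    constexpr float kFixedPointScale = 65536.0f; // assumption: 1 metre == 65536 units
    return Vector3{ p.x / kFixedPointScale, p.y / kFixedPointScale, p.z / kFixedPointScale };
}
```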

Just in case someone may have missed it with the discussion that just took place, go check out this post where I linked to a video on youtube of all the computations taking place visually: Artificial Intelligence in Rac ... Computing the Racing Line
After the red line reaches convergence, a white line is shown. Is that one the LFS AI line or something?
Hm, your optimisation does not take acceleration/braking or the race direction into account, as far as I can see - it just tries to make the line as smooth as possible, while, for example, "late apexing" comes from the fact that cars can shed speed under braking faster and more smoothly than they can gain it under acceleration.

You might try this optimisation (probably the best paper on this subject and most promising universal solution so far): http://game.itu.dk/cig2010/pro ... /papers/cig10_048_083.pdf

I wanted to use it in my AI project for LFS, but I've already run out of time for that.
I can't load any of your images, Tim, so I don't know for sure what went wrong. But it looks like you got the answer from Dustin anyway.

[Edit]Never mind, they are loading for me now. The last image you posted with color is as much data as you can get out of the SMX files unfortunately. There is some sort of relationship between that and the texture files as well, I'm sure of it but I never really got so far as to check it out.
Dygear; what I got from the SMX in color was actually more than I was expecting but very close to what I was hoping for. There should be enough terrain and features for proper line of sight testing.

Tommy; no, it doesn't take direction, braking or speed into account in the racing line at this time, and for now I think that is okay. It is just the geometrically smoothest curve. I do want to remove the constraint optimization/stabilization that I put on the point masses; I figure adding another, stronger spring from one point to the next will help keep them approximately the same distance apart, but for this to work I need to find out why the points were able to pop off the track and fix that.

I would also like to work on allowing the line to take long straights into account, so corner entry/exit could be compromised slightly from the geometrical line, allowing more gains. However, for current purposes I will consider this the line my artificial driver will start with. It does get a lot better than what is seen in the video; letting it run for 3000 iterations is much better than the 300 shown, though the movement becomes less pronounced as the springs become more equalized.
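For reference, the spring idea boils down to something like this sketch (illustrative, not the real AIRS code): a smoothing spring pulling each node toward the midpoint of its neighbours, plus the stiffer "spacing" spring I mentioned to keep neighbouring nodes roughly equidistant.

```
// Rough sketch of the spring relaxation: smooth the line and keep node spacing.
#include <cmath>
#include <vector>

struct Vector2 { float x, y; };

void RelaxRacingLine(std::vector<Vector2>& nodes, float smoothStrength, float spacingStrength, float restSpacing)
{
    for (std::size_t i = 1; i + 1 < nodes.size(); ++i)
    {
        const Vector2& prev = nodes[i - 1];
        const Vector2& next = nodes[i + 1];
        Vector2& node = nodes[i];

        // Smoothing spring: pull the node toward the midpoint of its neighbours.
        const Vector2 midpoint{ (prev.x + next.x) * 0.5f, (prev.y + next.y) * 0.5f };
        node.x += smoothStrength * (midpoint.x - node.x);
        node.y += smoothStrength * (midpoint.y - node.y);

        // Spacing spring: nudge the node so its distance to the previous node
        // stays near the rest spacing (keeps nodes from bunching up).
        const float dx = node.x - prev.x;
        const float dy = node.y - prev.y;
        const float dist = std::sqrt(dx * dx + dy * dy);
        if (dist > 0.0f)
        {
            const float correction = spacingStrength * (restSpacing - dist) / dist;
            node.x += dx * correction;
            node.y += dy * correction;
        }
        // After moving, the node still needs clamping back inside the track
        // edges (see the earlier left/right constraint sketch).
    }
}
```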

Keling; yes, the white line shown is the line the Live For Speed artificial drivers take. I do wonder whether Live For Speed reads the PTH files or not; if it does, I could see how my computed racing line works in comparison. I can say it probably wouldn't work very well, since the line hits a few big curbs that would upset the car. Still, something I may try at some point; it could be interesting.
Very interesting project that you're doing! I've read the whole thread and there has been a lot of intelligent discussions and progress (even though it has been running for quite some time).

It interests me because I obviously like LFS and my study is AI (not specifically game-related). Recently, before I ran into your post, I actually had the idea to try out how a Neural Network could drive the cars on a specific track. The input and output of the simulation would be similar to yours, although I would probably use exact track position initially, instead of reference points (the NN would sort of make the reference points by itself).

Saying this, I don't plan to start on that project now due to another project idea not related to racing, but I'll be sure to follow your project as you go forward.

As you're still somewhat undecided about controlling the AI, I'll share some views on it (don't take me for an expert) as an addition to what a few other posters have already said.

Neural Networks:
As I understand it, you have been reading a bit about these. There are multiple different types, but I haven't looked into most of the advanced ones (like Echo State networks, which can simulate temporal events).

A Multi-layered Perceptron network (MLP) is probably suitable for a racing simulator. As input you could have the track position, speed, gear, RPM and all the similar things that you're already using. Then you send this through the black box of 1 or 2 hidden layers to the output, which would contain the actions to control the car.

The nice thing about such a network is that you don't need to worry about programming the actions of the driver, as the network figures this out by itself by comparing to an example situation (a computed racing line for example). In the end it would be human-like, including unpredictability and a chance of errors, which seems to fit the project.
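To make the idea concrete, a forward pass could look something like this sketch (the layer sizes, input layout and tanh activation are assumptions for illustration, not a prescription for the project):

```
// Minimal MLP forward pass: sensor readings in, control outputs out.
#include <cmath>
#include <vector>

struct Layer
{
    std::vector<std::vector<float>> weights; // [neuron][input]
    std::vector<float> biases;               // [neuron]
};

std::vector<float> Forward(const Layer& layer, const std::vector<float>& inputs)
{
    std::vector<float> outputs(layer.biases.size());
    for (std::size_t n = 0; n < outputs.size(); ++n)
    {
        float sum = layer.biases[n];
        for (std::size_t i = 0; i < inputs.size(); ++i)
            sum += layer.weights[n][i] * inputs[i];
        outputs[n] = std::tanh(sum); // squash each neuron's output to [-1, 1]
    }
    return outputs;
}

// Usage idea: inputs such as { trackPosition, speed, rpm, gear, ... } pass
// through one or two hidden layers and come out as { steering, throttle, brake }.
std::vector<float> DriveCar(const std::vector<Layer>& network, std::vector<float> sensors)
{
    for (const Layer& layer : network)
        sensors = Forward(layer, sensors);
    return sensors; // e.g. steering in [-1, 1]; throttle/brake remapped to [0, 1]
}
```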

There are some disadvantages though.

You don't have a lot of control over the network other than your training set and some parameters. Next to this, you don't exactly know what is going on in the hidden layer (you could say the behaviour in there is "encoded"), so you have to rely on the network to fix its errors by telling it that it's doing something wrong.

Constructing the training set well could be tricky. An MLP would generally learn through backpropagation, which means that at a certain point in time (one or more frames) you need to exactly specify the network's output (steering angle/throttle etc.) for the situation at that point in time. This can be tricky to specify, and you'll want many diverse desired-output examples so that it can learn the whole track and deal with as many situations as possible.

When you create very many output examples for the network to learn from, it will only use the outputs you have shown it to compute its actions (if there are enough neurons). The network then has no need to generalise situations, meaning it doesn't have to find an action for a situation it hasn't seen before (you taught it appropriate outputs for each), which sort of defeats the purpose of a NN. This is called overfitting, but it is not a huge problem here, as you would likely train a whole track at a time, resulting in a different network for each track. So a well-trained network would be more consistent and less prone to error on a given track. The main problem is generating/showing output examples for enough situations (and there are a lot of different input combinations on a single track!).

Lastly, people have tested NNs on simulated racing cars, and while the results are impressive, they don't quite match up to the more standard pre-programmed behaviours for AI drivers as far as I've seen. There are some cool YouTube videos about this.

Evolutionary (Genetic) Algorithms
The great thing about these is that they are based on evolution, which means you let nature do all the work while you sit back and watch.

GAs tend to be pretty CPU-expensive, as they are purely trial-and-error based. You randomly create some individuals and let them randomly mutate, mate and die for a very long time. It uses a fitness function, describing how well each individual is doing, which comes from the Darwinian idea of survival of the fittest. So, individuals with a higher fitness survive longer and produce more offspring, slowly making the whole population better over time.

I described it in an abstract way, so I'll elaborate on how I would see it working for simulating race drivers.

In my view, it would best be combined with a multi-layered perceptron network, leaving out the training of it. The training is replaced by evolution to let the network "evolve" into a better racing driver. The structure of the network (input, output and hidden layers) stays the same, and then you let trial and error find the optimum weight values for the neurons.

The drawback is that this would probably take a long time (a very rough estimate might be 40 cars running for 3000 iterations), although progress would be gradually visible along the way (likely with some spikes up and down). This is an issue, as I don't see a way to speed up the LFS driving tests if that is the platform you're going with. At each iteration you would check the fitness of each car and apply evolutionary actions such as mutation (randomly changing a network's values), mating (combining parts of the networks of 2 cars), and survival (keeping the population equally sized by deleting the cars with the lowest fitness; children resulting from mating take up their place).

The tricky thing is always to compute the fitness of an individual. You would probably give a bonus for the distance driven and the average velocity. Penalties would consist of minus points for going off-track, hitting obstacles, or blowing up your tyres or engine.
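As a sketch, such a fitness function could look like the snippet below (the weights and the CarResult fields are made-up illustration values, not tuned numbers):

```
// Sketch of a fitness function rewarding progress and punishing mistakes.
struct CarResult
{
    float distanceDrivenMetres;
    float averageSpeedMps;
    int   offTrackCount;
    int   obstacleHits;
    bool  tyresOrEngineBlown;
};

float Fitness(const CarResult& result)
{
    float fitness = 0.0f;
    fitness += 1.0f   * result.distanceDrivenMetres;  // bonus for distance covered
    fitness += 10.0f  * result.averageSpeedMps;       // bonus for average velocity
    fitness -= 50.0f  * result.offTrackCount;         // penalty per off-track excursion
    fitness -= 100.0f * result.obstacleHits;          // penalty per obstacle hit
    if (result.tyresOrEngineBlown)
        fitness -= 500.0f;                            // heavy penalty for destroying the car
    return fitness;
}
```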

When I said 40 cars, I meant that you have 40 networks running for each iteration and at the end you take the one with the highest fitness.

It might not result in an extremely fast driver, but it is certainly fun to just let them go and see what happens, how they improve.

-----

I'm sorry for the long post, and I fully understand if you don't want to employ machine learning (it could certainly be a bigger challenge, with no guaranteed results), but I simply presented these as options to consider. If time allowed, I would work on it myself. But I am certainly open to discussion about anything I can help with if you continue this project.
Noodleguitar, sorry I never responded sooner; I meant to, and normally I don't ignore responses! I am probably going to avoid genetic/learning algorithms for this project due to complexity, and because I am attempting to make the AI think and drive how we would, in terms of reference points and visual/physical sensors.

To everyone following the thread: I've recently picked the project up again and am making progress toward actually developing the Visual Sensor that the AI driver will use to estimate the distance from the reference points (hopefully never using exact positions). My goal is to have the Visual Sensor estimate the velocity of the car, hopefully accurately-ish. Possibly even predict where the car will end up.

So the visual sensor will run and give the driver this information:

refPoint1 (-1, 2)
refPoint2 (2, 5)
refPoint3 (7, 5)
refPoint4 (4, 2)

A moment later the visual sensor will run and give the driver this information:

refPoint1 (NA) (no longer in field of view)
refPoint2 (0, 3)
refPoint3 (5, 3)
refPoint4 (2, 0)

The coordinates are not the position of the reference point, but the distance (or an estimate of the distance, as I talked about long ago in this thread) of the reference point from the driver's eye. So, taking the distances before and the distances after, we can average all the differences and we should get the average velocity of the car... In this case it is easy to see the eye moved (2, 2) closer to each of the reference points.
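The averaging itself is simple; roughly this, as a sketch (names are illustrative only, and note it ignores the rotation issue mentioned next):

```
// Average the per-point differences between two sensor snapshots and divide by
// the time between them to estimate velocity. Positive components mean the eye
// moved that much closer to the reference points per second.
#include <map>
#include <string>

struct Vector2 { float x, y; };

Vector2 EstimateVelocity(const std::map<std::string, Vector2>& before,
                         const std::map<std::string, Vector2>& after,
                         float deltaSeconds)
{
    Vector2 sum{ 0.0f, 0.0f };
    int count = 0;
    for (const auto& [name, offsetBefore] : before)
    {
        const auto it = after.find(name);
        if (it == after.end()) { continue; } // point left the field of view, skip it
        sum.x += offsetBefore.x - it->second.x;
        sum.y += offsetBefore.y - it->second.y;
        ++count;
    }
    if (count == 0 || deltaSeconds <= 0.0f) { return Vector2{ 0.0f, 0.0f }; }
    return Vector2{ sum.x / (count * deltaSeconds), sum.y / (count * deltaSeconds) };
}
```

With the example above, refPoint2/3/4 each shrink by (2, 2), so the average comes out to (2, 2) per sensor step.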

This will get trickier while the car is turning as it moves, since faraway reference points will tend to shift purely because of the rotation of the car, even if the car didn't move forward much. So, I will run with the theory and see how it works when I get there.

I plan to make another video with the visual sensor showing what it can and cannot see.

Thanks everyone for your support, and someday maybe my AI driver will be driving around a track!
I got the LineOfSight code working!!! It was a great reward to see the driver trying to watch a single cone as I moved the car around FE1, and have it rendered correctly as visible/not visible. Then I tried adding the rest of the cones, and the first performance issue of this project finally arose.

It made the InSim connection time out. So while it works, it is far too slow. I first eliminate any further checks if the cone is behind the driver. Then, since I have the terrain organized into a QuadTree, I eliminate all triangles from quads that are not along the line of sight, and test the rest until a triangle is hit, which means the driver can't see the cone.
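Roughly the shape of that test, as a sketch (the QuadTree query is stubbed out and the names are illustrative, not the actual AIRS code):

```
// Line-of-sight sketch: stop at the first terrain triangle between eye and cone.
#include <vector>

struct Vector3 { float x, y, z; };

static Vector3 Sub(const Vector3& a, const Vector3& b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static Vector3 Cross(const Vector3& a, const Vector3& b)
{
    return { a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x };
}
static float Dot(const Vector3& a, const Vector3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

struct Triangle { Vector3 a, b, c; };

// Moller-Trumbore test, limited to the eye->cone segment (t must be within it).
bool SegmentIntersectsTriangle(const Vector3& from, const Vector3& to, const Triangle& tri)
{
    const Vector3 dir = Sub(to, from);
    const Vector3 edge1 = Sub(tri.b, tri.a);
    const Vector3 edge2 = Sub(tri.c, tri.a);
    const Vector3 p = Cross(dir, edge2);
    const float det = Dot(edge1, p);
    if (det > -1e-6f && det < 1e-6f) { return false; } // segment parallel to triangle
    const float invDet = 1.0f / det;
    const Vector3 s = Sub(from, tri.a);
    const float u = Dot(s, p) * invDet;
    if (u < 0.0f || u > 1.0f) { return false; }
    const Vector3 q = Cross(s, edge1);
    const float v = Dot(dir, q) * invDet;
    if (v < 0.0f || u + v > 1.0f) { return false; }
    const float t = Dot(edge2, q) * invDet;
    return t > 0.0f && t < 1.0f; // hit lies between the eye and the cone
}

// candidateTriangles stands in for the QuadTree query: the real version only
// returns triangles from quads the eye->cone segment passes through, which is
// where most of the speed-up comes from.
bool CanSeeCone(const Vector3& driverEye, const Vector3& cone, const std::vector<Triangle>& candidateTriangles)
{
    for (const Triangle& tri : candidateTriangles)
    {
        if (SegmentIntersectsTriangle(driverEye, cone, tri))
            return false; // terrain blocks the view, no need to test further
    }
    return true; // nothing between the eye and the cone
}
```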

There are certainly some minor optimizations I can make, but I've made most of the obvious ones, and the QuadTree should eliminate most of the unnecessary tests. I could put it in a separate thread, but that will only help so much - and not nearly enough.

(Note: my machine is a bit older now. It has a 2.4 GHz Intel Core 2 Duo with 2 GB RAM, running Windows XP.) It also has to be running Live For Speed, so between the AIRS project and LFS both cores are already at max, and threading won't help.

I could just let the Visual Sensor 'see' all the cones within the field of view, but that goes against some of the thoughts behind the project by allowing the AI Driver to see more than a player could.
I have the visual sensor working to some degree, and a lot faster than it was a few days ago. I'm currently working on another optimization so that it can truly run in real-time with LFS in the background - essentially running two full games on the same machine!

Not sure if anyone is still interested in the project, I know it takes time between updates, but I think that is changing. At least, I've been making some good progress lately, and hope to get a few of the sensors accomplished. I've now got it so that it will capture the location of all cars on the server, and handle players joining/leaving/pitting/spec etc... Which is not a necessary step currently, but when the AI driver starts driving, that will get it one step closer for someone (you) to race it!

I can't wait to try estimating the speed from the visuals provided by the visual sensor. Also, I have some screenshots to share of this visual sensor. Enjoy!

Visual Sensor 1
Visual Sensor 2
Visual Sensor 3

In all of those shots, only a subset of the LEFT edge cones is being located visually, which means the AI can actually see more than these shots show. However, it proves the Visual Sensor is working as expected.

Green lines indicate the AI driver can see that cone; no terrain is in the way. A yellow line means the cone is not in the field of vision (behind the driver, as he currently has a 180° FOV). A red line is for cones that failed the visibility test because of the terrain; also shown is the triangle that blocked the view.

Once I get the optimized version I might try to record a short video of it running in 'real-time'; however, I am not sure my system can handle LFS, AIRS and video capture at once... Might be challenging.
Quote from blackbird04217 :
Not sure if anyone is still interested in the project, I know it takes time between updates, but I think that is changing. At least, I've been making some good progress lately.

Really interesting to follow something like this and watch it develop; it looks like a mammoth task to complete, but it's looking promising so far. Keep at it
You'll always have some followers like me that, despite not understanding everything you tell us, find what you do cool enough to wait for some news haha
