The online racing simulator
Not sure if you'd want to consider this next one as it may be very much outside what you had in mind, but this sounds like a prime candidate for a neural network driver. Years ago I wrote one of these that was trained by a particle swarm algorithm (back prop algos and stuff confuse me; particle swarm is 100 times easier). It was a path-following system, but the steering was controlled by the neural network. Pretty sure the throttle/brake was too, but I'm not 100% on that.

It was cool and actually worked pretty well. I'd run everything in super fast forward to move things along. Starting with a bunch of totally random brains, each one would have a few seconds to get as far as it could around the track. It'd score it, try the next one, wash, repeat. They'd bumble around for a while, but eventually one would zoom down the first straight and crash in turn one. Then another one would do it a little later, then more of them. Eventually one would get through the corner and zoom into the next. Every now and then one would just take off like a bat out of hell and run around the whole track. It didn't even need to learn all the corners; it was doing it just using the next few dots that were coming up, how they were spaced, etc. Once they were all making laps I think I had it start scoring them based on lap times, so they just kept getting faster and faster. I seem to remember them eventually getting very close to human best laps. Unfortunately I had to stop after three days of development on it and go work out of the country for a month, so I never touched it again. It was fascinating to watch though.
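The evaluate-score-repeat loop described above can be sketched with a basic particle swarm optimizer. Everything here is made up for illustration (the fitness function, gains, particle counts); the real fitness would be "distance made it around the track", and the weights would feed a neural network steering controller:

```python
import random

def evaluate(weights):
    # Hypothetical fitness: how far along the track this "brain" got.
    # Stand-in quadratic with a known optimum at 0.5 per weight, so the
    # sketch is testable; a real run would simulate a few seconds of driving.
    return -sum((w - 0.5) ** 2 for w in weights)

def particle_swarm(n_particles=20, n_weights=4, iterations=50):
    # Standard PSO update: each velocity blends inertia, a pull toward
    # the particle's own best score, and a pull toward the swarm's best.
    particles = [[random.uniform(-1, 1) for _ in range(n_weights)]
                 for _ in range(n_particles)]
    velocities = [[0.0] * n_weights for _ in range(n_particles)]
    personal_best = [p[:] for p in particles]
    personal_score = [evaluate(p) for p in particles]
    g = max(range(n_particles), key=lambda i: personal_score[i])
    global_best, global_score = personal_best[g][:], personal_score[g]

    for _ in range(iterations):
        for i, p in enumerate(particles):
            for d in range(n_weights):
                r1, r2 = random.random(), random.random()
                velocities[i][d] = (0.7 * velocities[i][d]
                                    + 1.5 * r1 * (personal_best[i][d] - p[d])
                                    + 1.5 * r2 * (global_best[d] - p[d]))
                p[d] += velocities[i][d]
            score = evaluate(p)
            if score > personal_score[i]:
                personal_best[i], personal_score[i] = p[:], score
                if score > global_score:
                    global_best, global_score = p[:], score
    return global_best, global_score
```

No gradients anywhere, which is the appeal over backprop: the fitness function can be anything you can score, including "how far did it get before crashing in turn one".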

This is somewhat similar to what you might have in mind, where the AI doesn't actually know what corner it's in, what direction it's facing, or what is coming next. It just sees a bunch of dots. Granted, you'd be dealing with a lot more information (I think mine was chasing two or three dots at the most), but I bet it'd work if you're feeding it a bunch of nearby reference points forward in Jared's field of view. It'd just take time to let it train, and you'd have to score their fitness differently. Off-road excursions might not work so well, because how do you train an AI to figure its way back to the track? But you could switch to something else there if you wanted.

I tell you what though, there is no way I could have tuned the steering controller that well by hand. In about five minutes the computer learned, through evolution, to do it better than I ever could. You could give them an impossible car to drive, something very loose, and they would actually learn to drive it and be drifting around the track. It was almost spooky to watch it sometimes...
What's the AIRS world coordinate system, or should I not be concerned with it?
I should probably make a diagram, but for now try drawing that in your head as you follow along.

AIRS is the world the driver lives in; it contains all the information the driver needs. It has several parts, the most important being the "Scanner", which is how the AIRS world gets information about the outside world.

LFS is obviously the physical simulation for the car the driver controls. The "LFSScanner" will create connections to the LFS InSim, OutSim, and OutGauge protocols, and read layout files, car info and mesh files, and even the provided terrain mesh files, to gather all this information and feed it (through the Scanner) to the AIRS world.

The driver then uses sensors (Visual Sensor, Physical Sensor, and Car Sensor) to know what is happening, and those sensors are fed data from the Scanner.

Ultimately, if I created my own racing simulator, I could make a Scanner to feed AIRS and the driver would then be able to control it. Now, I know I will make some assumptions based on 'reasonably realistic physics', and I'll probably unknowingly make assumptions based on the LFS physics, so I don't know how well Jared would do just changing simulations like that. In theory though, it is possible.


The Visual Sensor will take each ReferencePoint and test it against the driver's transform matrix. If it is within the field of view (85 degrees left/right of straight forward), it will then test whether the ReferencePoint is hidden. If it is still visible, it will bring the direction/distance into driverSpace using the driver's transform matrix. In the future it will also fluctuate the direction/distance based on estimation, so that the driver doesn't have 100% accuracy; however, I'm skipping this for quite some time. The visible points are stored with information about the direction (in driverSpace) and the distance from the driver's eye to the visible point.

The final process of the Visual Sensor will then take these visible points, the ones based in driverSpace and possibly not 100% accurate, and transform them back to perceive the driver's position in world space. This will include the error of estimation once I get that far in the project.

The MemoryUnit will then track 10 snapshots, with each snapshot containing up to 32 visible reference points, the perceived position, and some other information from the Physical Sensor: directionOfMotion and feltForces.

The PredictionUnit will use the snapshots in the MemoryUnit to predict the future course the car will be following. My thought process here is that instead of making Jared drive from point to point (like I did in my previous attempt halfway through the thread), he will instead attempt to drive and stay on track based on the PredictionUnit. So once the car is moving, the PredictionUnit will have a vector where it thinks the car will be in 1 second. If that vector falls off the track, the driver will (hopefully) be able to take appropriate actions to slow the car down, if needed, and to turn and stay on track.
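A minimal sketch of that idea, assuming the MemoryUnit stores timestamped perceived positions (the names and structure here are my guesses, not the AIRS code): average velocity comes from the oldest and newest snapshots, then gets extrapolated ahead.

```python
def predict_position(snapshots, dt_ahead=1.0):
    """Predict where the car will be dt_ahead seconds from now.

    snapshots: list of (time, (x, y)) perceived positions, oldest first,
    as a MemoryUnit might store them. A hypothetical illustration of the
    "where will the car be in 1 second" vector described above.
    """
    (t0, (x0, y0)) = snapshots[0]
    (t1, (x1, y1)) = snapshots[-1]
    span = t1 - t0
    if span <= 0:
        return (x1, y1)  # no motion information yet
    vx, vy = (x1 - x0) / span, (y1 - y0) / span  # average velocity
    return (x1 + vx * dt_ahead, y1 + vy * dt_ahead)
```

The driver's check would then be whether this predicted point falls off the track; note that identical snapshots make `span` zero, which is exactly the failure mode discussed later in the thread.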

Does this make sense as a picture of where the project currently stands?

As far as genetic learning goes, it is something that comes up a lot, but I think I will let it sit on the bench because I can't speed up the simulation, and I think it'd add more complexity trying to tune it. Can't wait until Jared does his first complete lap.

Anyone have any guesses as to a lap time? (XRG at FE1) I'm going to guess 1:45
Not sure why you guys are focusing on this. Geometric inverse doesn't sound like the challenging part of such a project.

Most TORCS robots seem to rely on pre-computed racing lines or maps in one way or another. If you really want to do it in real time, the easy way at the moment would be to project a prediction curve no shorter than the longest corner along the forward direction of the track, keep the present position fixed, and apply your smoothing algo to it. Then the speed limit of a corner would have to propagate backwards somehow, using a car performance model that the AI is aware of. When the estimated current speed intersects the backwards-propagated brake speed curve, the AI starts braking, and then steers to follow the smoothed line at corner entry.
Quote :Not sure why you guys are focusing on this. Geometric inverse doesn't sound like the challenging part of such a project.

That was also the reason behind my earlier question.
If your math is correct, then the position/velocity that your visual sensor returns will just be the same that InSim could have easily given you directly. Obtaining this information has, to me, little to do with AI; it is solely a geometric problem.
The AI part is, in my opinion, the part where the driver acts on this information or predicts things.

To me the visual sensor approach only makes sense in two scenarios:
1) If the input data is not perfectly accurate (as you now mentioned).
For example, if the driver's eyes misjudge the distance to a point but he still has to make the turn.
Of course you could introduce a random factor, but it would be a bit artificial?
After all, the information you have is perfectly accurate.

2) To later use it in a real-world car that "sees" its environment with a camera and image processing, and where InSim-style data is not available.

This is a bit of an old project but maybe of interest:
A 2D racing game where one had to write the AIs for the drivers (similar to what the more popular TORCS has too).
Found it very interesting to see how the different AIs did around different tracks, their passing algorithms, etc. Though my own AI never quite worked and I never sent it to the competitions..
Most of the AIs there are maybe more "state machine" than true AI.
For example this one:
Here some more AIs are described:

Here it is described a bit what the AIs "see" and what they can do:
( for full tutorial)

Sorry for being a bit off-topic; you asked if others still follow, and that is what came to mind when reading the thread.
Quote from Gutholz :
To me the visual sensor approach only makes sense in two scenarios:
1) If the input data is not perfectly accurate (as you now mentioned).
For example, if the driver's eyes misjudge the distance to a point but he still has to make the turn.
Of course you could introduce a random factor, but it would be a bit artificial?

It may be artificially added through the Visual Sensor, but it is (or will be) added for an important reason: our vision is an estimation. Now, it might be 98% accurate, or maybe 90% accurate, but in either case it was important to me that the driver works backwards, using the sensors (Visual, Physical, and Car), to work out where he is.

This will cause the driver to have variation, as a real driver would, by "messing up" his estimation.
I agree that different geometric inverse designs will introduce different error distributions, some more realistic than others. But do you have the data for modeling human perception error in racing conditions? Probably not. So how good is your artificially created error model compared to, say, a simple Gaussian acting on world coordinates directly? How do you justify it?

I understand the desire to build a well-defined and well-abstracted interface and "do things the right way" from the beginning. But you're prototyping at a very early stage, with the general architecture mostly still a vague concept. You're nowhere close to production code.
I'm not so sure that I am actually prototyping. I'm certainly trying something new, but the prototyping happened much earlier in this thread, when I proved that I could get all the data I needed from LFS, and that I could control a car, to some degree, in LFS using a Virtual Controller.

The geometric inverse is not what will introduce the error at all; actually, that would be a bad inverse, with the exception of error in the floating-point computations.

It is already done this way, and I'm not going to remove it simply to move the world coordinate for estimation. Actually, as it stands at this moment, I want the AI to have 100% accuracy with his Visual Sensor and in perceiving his world-space position, to eliminate any problems. This is exactly how it works right now, with a note of where error will be added in the future.

coneA, coneB, coneC, coneD ... coneN all have positions in worldSpace.
The Visual Sensor knows the worldSpace position and orientation of the car/driver.
The Visual Sensor tests each cone for field of view; some cones will fail here and not continue.
Then the cone is tested for visibility; cones hidden behind terrain will fail here and not continue.
Finally, the Visual Sensor will take the cone and bring it into driverSpace, computing the direction and distance to the cone from the driver's eye. (There will be an estimation here that doesn't exist yet.) The distance and direction in driverSpace are stored.

Only once all cones have gone through that process will the Visual Sensor perceive the world-space position, by applying the inverse steps to each distance and direction and averaging the results together to get the perceived position in worldSpace. It is done this way so that once the estimation randomness is added, the position is not simply given.
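That pipeline might look roughly like this in 2D, leaving out the occlusion test and the future estimation error; all names here are invented for the sketch, not taken from AIRS:

```python
import math

def to_driver_space(cone, car_pos, heading):
    """World-space cone -> (distance, bearing) relative to the driver."""
    dx, dy = cone[0] - car_pos[0], cone[1] - car_pos[1]
    dist = math.hypot(dx, dy)
    bearing = math.atan2(dy, dx) - heading  # angle off the car's nose
    # Normalize the bearing into [-pi, pi].
    return dist, math.atan2(math.sin(bearing), math.cos(bearing))

def perceive_position(cones, car_pos, heading, fov=math.radians(85)):
    """For each cone in the field of view, store (distance, bearing) in
    driverSpace, then invert each observation back to a world-space
    position and average them. With no estimation error the average
    recovers car_pos exactly; noisy observations would spread out."""
    estimates = []
    for cone in cones:
        dist, bearing = to_driver_space(cone, car_pos, heading)
        if abs(bearing) > fov:
            continue  # outside the 85-degree field of view
        # Inverse step: from (distance, bearing) back to where the
        # driver must be for this cone to appear there.
        world_angle = heading + bearing
        est = (cone[0] - dist * math.cos(world_angle),
               cone[1] - dist * math.sin(world_angle))
        estimates.append(est)
    if not estimates:
        return None
    n = len(estimates)
    return (sum(e[0] for e in estimates) / n,
            sum(e[1] for e in estimates) / n)
```

The point of structuring it this way is that a per-cone error injected at the `to_driver_space` step would flow naturally into the averaged perceived position, rather than being tacked onto the world coordinate directly.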

This is the way my brain thought to solve the problem while sticking to my own beliefs for the project. It sure would be easy to say:

perceivedPosition = worldPosition + estimationError;

But then it isn't based on the same estimation errors as the Visual Sensor and, I feel, is just fed to the AI. I want to make it clear that I'm currently working without any error algorithm in the Visual Sensor, and will continue doing so, as it will be a hard enough job without adding the errors.

As you've pointed out earlier, there are much more difficult, challenging, and interesting things about the project than perceiving the worldSpace position. I think it became a big deal because I sat down to attempt solving it without resources before deciding to use the transform matrix, which, oddly enough, I was using before Todd mentioned it, and I explained that right after he responded. I admit I should have made another post before that explaining the solution I came up with; I usually do, and in this case it came later, in post #322.

I am sorry if I seem stubborn on this point, but I see no reason to change something that is already working, especially when I feel the alternative is less true to the overall idea of the project. I may just be bad at explaining my overall ideas.

I do appreciate the thoughts, ideas and conversation this is sparking.

I have found a new problem, one I always had in the back of my mind, that will need to be solved before I can go much further. Early indications were that the sensors jumped around, and visually in AIRS the car would jump around. The cause is simply the delay in input from LFS, networking, etc. But it exists; I've gone on ignoring it, knowing it would add a little (unintentional) error to the AI, but figured it could work.

Anyway, I recently sped up the rate at which the AI driver senses the world and controls the car to 100hz (from something like 20hz), and the problem magnified itself. The Prediction Unit can no longer create curved paths of prediction, and sometimes even the straight paths get messed up. As you can see in the video of the Prediction Unit, it is very jumpy because of the input problem.

The reason for the problem is that the Memory Unit can have multiple states of memory that are identical. So when averageVelocity is computed from the information in the memory, it has a lot of error, enough to start confusing the Prediction Unit.

The best solution I know of will be somewhat difficult to add, but it would go on the LFS scanning side of the project. That is, to add client-side prediction to what the LFS Scanner reports to the AIRS project. I figure I'll track the last three OutSimPackets and the time between each packet, and then either interpolate between them (delaying the AI's knowledge further) or predict the future path of the car (which could be inaccurate when a new packet comes). It may need to be a combination of the two: interpolate until the known time is reached, then predict. Of note, I already have LFS sending me OutSimPackets as fast as it will send them.
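The interpolate-until-time-reached-then-predict idea can be sketched in one function, with the packet contents reduced to a timestamped position for illustration (the real OutSimPack carries much more):

```python
def smoothed_state(packets, now):
    """Interpolate between the last OutSim samples; extrapolate past them.

    packets: list of (time, position) tuples, oldest first. A stand-in
    for the OutSimPack stream, just to show the blending idea.
    """
    (t0, p0), (t1, p1) = packets[-2], packets[-1]
    span = t1 - t0
    if span <= 0:
        return p1
    # Fraction along the last packet interval; f <= 1 interpolates
    # between known samples, f > 1 means we have run out of data and
    # are predicting the future path until the next packet arrives.
    f = (now - t0) / span
    return tuple(a + (b - a) * f for a, b in zip(p0, p1))
```

Querying this at the AI's 100hz rate would give each memory snapshot a distinct position even when several AI ticks fall between two LFS packets, which is the duplicate-memory problem described above.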

I figure I'd try the prediction first. It will be built similarly to the Prediction Unit, but used well before the AI ever sees the data. Doing this should smooth out the results and help the Memory Unit keep different values for each memory.
Quote from Keling :the easy way at the moment would be to project a prediction curve no shorter than the longest corner along the forward direction of the track, keep the present position fixed, and apply your smoothing algo to it. Then the speed limit of a corner would have to propagate backwards somehow, using a car performance model that the AI is aware of. When the estimated current speed intersects the backwards-propagated brake speed curve, the AI starts braking, and then steers to follow the smoothed line at corner entry.

with all due respect, where is there room to respond to other drivers on the track around the driver?

relying on path points around the circuit then implies either having multiple paths to allow passing, which results in two ai cars side by side on rails, or having basically a switch statement and two driving modes, one for normal driving and one for passing.

imo treating the car beside the driver, the apex and the runout limit of the corner all as equivalent points in the environment and calculating a response from that would be more organic.
Quote from blackbird04217 :Now, I know I will make some assumptions based on 'reasonably realistic physics', and I'll probably unknowingly make assumptions based on the LFS physics, so I don't know how well Jared would do just changing simulations like that.

or in changing setups. talk about a challenging problem! i believe LFS ai just drives the same for all cars/setups, since e.g. during passing a TBO ai driver "changes lanes" so fast he often sends his car into a skid; it must be assumed that rate of lane change is necessary so passing works in the BF1...

it's so interesting to read this thread. sounds like a great approach and much fun to ponder so far.
Quote from CarlLefrancois :...

imo treating the car beside the driver, the apex and the runout limit of the corner all as equivalent points in the environment and calculating a response from that would be more organic.

Opponents move at approximately the same rate as the AI's vehicle. They have to be treated differently.

To drive around other cars, the AI would need to predict the opponent's line as well. So there will be multiple lines to smooth, and the speed model would be used in an iterative manner to decide what parts of the lines can overlap.
The AI system I wrote for VRC around 10-12 years ago used the lateral distance to the nearest car as a modifier in the steering controller. Basically, the steering controller would try to follow a path by adjusting steering based on how far away it was from the path, along with velocity (like a spring/damper). IIRC I added another term to deal with the nearest car, so it would turn toward the desired path, but if another car was close enough it would turn away. Once it got far enough away from the path it would stop trying to move over further; otherwise they would just run off the road when you came sailing through and not put up any fight.

Throttle/braking was done in much the same way. There was a target speed at each path node. It was basically just a simple P controller at the core, if I recall correctly, but then it'd modify the output if the car was behind another car so it wouldn't ram into the back of it (usually).
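The two controllers described can be sketched in a few lines; the gains and clamping here are illustrative, not VRC's actual values:

```python
def steering_command(cross_track_error, cross_track_rate,
                     k_spring=2.0, k_damper=0.8):
    """Spring/damper path follower: steer toward the path in proportion
    to how far off it the car is, damped by how fast it is already
    moving back toward the path (so it doesn't oscillate)."""
    return -k_spring * cross_track_error - k_damper * cross_track_rate

def throttle_command(target_speed, current_speed, k_p=0.5):
    """Simple P controller on speed, clamped to [-1, 1],
    where negative output means braking."""
    return max(-1.0, min(1.0, k_p * (target_speed - current_speed)))
```

A traffic term like the one described would simply add another offset into `steering_command`, pushing away from the nearest car until some lateral gap is reached.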

I was pretty happy with it and proud of it, but a lot of players hated it. I remember one guy insisting the black car was out to get him, that it was much more aggressive than the rest, and so on. It wasn't true at all; the physics engine didn't know or care which car was which. I think what happened is he probably got too close to it in the wrong spot on a track once or twice, which caused it to react to him differently than he was used to. From then on he behaved differently around that car, which then caused that AI to react differently to him. In essence, he gave it the bad personality he was ascribing to it. I got a kick out of that one.

Unfortunately, I think blackbird has a really long way to go here, even though what he's doing is pretty cool. It's an interesting experience and for sure educational, but the hardest part is still ahead: making a car drive competitively. Making one putter around a track at 25mph is easy. Making one that can get lap times as good as most humans is really hard, especially if you don't cheat a little bit by giving the AI more tire grip or something like that. I had to cheat a little in VRC to make it fast enough for the top guys.

Eventually I ditched the AI and do not plan on doing it again. I learned the hard way that life is just too short for that nightmare. I wish blackbird luck anyway.
As far as introducing error on purpose goes, I found that I couldn't get my own AI to be how I wanted it even without any error. I.e., my own error found its way into the AI all by itself and made it worse than I wanted it. Adding more error may be an interesting exercise because you're mimicking a human, but as someone else wisely pointed out, if the error is not modelled the way real human error works, it's kind of pointless. If you're worried about making AI that is too perfect, well... It's not likely to happen. Quite the opposite in all likelihood.

If the AI is running 100 cycles per second or something, what are you to do really? Randomize some error into the driverA/driverB positions? If so, how much error and how often do you randomize them? 100 times per second? 10? 1? Do you do it the same twice in a row or do you give the error some velocity? And is that how perceptual error in a human works? Modelling the human error could probably be categorized as an entire field unto itself.

I don't want to discourage blackbird if he's having fun, but at the same time, one thing I've learned is that there's sometimes a fine line between taking on a big challenge and biting off more than I can chew. Keep at it if you're having fun, but just realize that after 5 or so years of tinkering you haven't gotten to the really hard part yet: driving a race car at a human's pace. I'd get into that as soon as you can, so you know whether you even want to continue with AI after seeing what needs to be learned on the vehicle dynamics side to do it reasonably well.
The primary goal has never actually been to drive at competitive speeds, although it would obviously be a nice thing to achieve. As you stated, it is a project to have fun with, which is why it takes as long as it does. I actually suspect I'm running out of steam, but I am interested to see the artificial driver make a few laps using the prediction unit instead of the "drive to point".

Last weekend I started on a drive-to-point action, so I could get the AI back on track, and I figured constantly moving the point along the racing line would then make it reasonable for the driver to make a lap. Unfortunately, even at slow speeds, the AI gets too wobbly, the racing line gets too close to the track edge, and eventually he loses control by the chicane of FE1.

This is in good part due to the rate of input not being smooth enough, so I need to add a little client-side prediction of what the next LFS inputs will be.

I want to make it clear I'm not adding error yet, nor will I try until I get other things how I want them. The reason for the error is so the AI doesn't take the 'exact' line every time simply based on the sensors, not to make the AI worse, although admittedly it will not make him a better/faster driver.
I understand you're not adding error now, but are just thinking about it for later.

It's good that you've gotten it driving. You've no doubt seen what I mean about adding error making things that much harder to control. The AI will be error-prone right from the start, because it's tough to get them to do what you want anyway. I spent a lot of time just tuning the steering controller in my system in order to minimize path-following errors as much as possible. Making it drive sloppy is relatively easy. Being accurate is a lot tougher, as you're probably finding out.
Well, I am again running out of steam on this project, as slightly hinted at in my last post. I did get the client-side prediction implemented for OutSimPacks; however, to my surprise, that did not help. I've tried investigating, but for some reason the prediction unit gets extremely fragile and breaks as the time step decreases, while leaving the time step where it wants to be leaves the driver with invalid information. It made complete sense why this would happen without the OutSim prediction code, but I'm not sure why it continues.

The prediction unit is nowhere near as smooth as it was in the video, and unfortunately, because of this, I have been draining myself trying to get it smoother. Without the smoothness of the prediction unit I am unable to make the AI driver follow the racing line based on his prediction. Hell, just moving a "go to point" along the racing line makes the XRG too wobbly, and it eventually fails.

I was really looking forward to getting the driver to navigate around an LFS track using the prediction unit before reaching this point. However, I know the project is not over, and I will revisit it again in the future with more steam, so remain subscribed to the thread for future updates!
sounds like a problem that makes more sense when coming back to it spontaneously after a pause.

on a side note i believe this may be Scawen's approach to development and the reason why he achieves such elegant results. we can consider two major categories of development style, what my profs called "hacking" and proper analysis. hacking in that sense is making a rapid prototype and then attempting small modifications to it in order to move the behaviour forward in the direction desired. things like looking at a formula and saying "maybe if i put +1 at the end it will give me the right result". in comparison analysis is a deduction from the problem and solution as to what the correct function is and then implementation of that function.

hacking works well and has the illusion of moving forward quickly, but at later stages of a complex project you get diminishing returns sometimes equating to painting yourself into a corner. proper analysis is a nightmare when going into new terrain and development can appear stopped for ages since no temporary or intermediate versions of the function are developed or tried. once the puzzle all clicks into place, though, the result is smooth and worth its weight in gold.


reading this thread started a thought experiment in my head about the minimum mathematical description of the driver and environment needed for the ai to decide which inputs were best in a racing application. i made a list of variables that turned out so much longer than i thought at first haha but my intuition tells me there is a clear and elegant solution to this problem. let me sketch what i was thinking without any promises of rigor.

define a circuit as a series of corners
define a corner as these data points:
-a (mostly) straight lead-up vector to the turn-in point
-a vector describing the outside limit of the track near the turn-in point
-an apex point
-a vector for the edge of the track that defines the track-out limit

this part is what makes most sense, the idea being that optimum turn-in and corner-exit curves can be calculated based on the expected speed at the turn-in point and the length of track to the following corner. this allows the "mid-point" of the corner to be not the geometric middle but the race-optimal middle. example: for a corner you approach at high speed but that has only a short distance following it before the next corner, you want to brake later and apex later, so more than the geometric half of the corner is used for turn-in. this allows deduction of a vector starting at the apex and going toward the outside of the corner, say the "corner normal vector".

these points and vectors would be the static track data. adding some uncertainty or error about the exact locations of these points would give a realistic human-like performance to the ai since we describe near-perfect turn-in and corner exit arcs but have to struggle to see exactly where turn-in point and apex are as we bump along the circuit toward a corner.

simplified modifications to these static data would work to respond to "enemy" vehicles ( ;-) ). for example, if another car is on the inside when approaching a corner, the effective apex would be a point one car width along the "corner normal vector" from the static apex.

the only other data needed (aside from the steady-state cornering limit) is about the turn-in and corner exit capabilities of the car at the limit. my gut tells me this is as simple as retaining a factor that describes how rapidly the cornering radius can be tightened on turn-in and another factor describing how much the increase in cornering radius can be restrained under full acceleration on corner exit. this factor would have to be a learned dynamic value to respond to vehicle, setup, tire wear, damage, etc.

the idea is that for a race application you are only interested in the limit. that means that given the static corner data: effective turn-in wall, effective track-out wall, effective apex and corner normal, you can apply turn-in factor to calculate the correct speed at the turn-in point, the turn-in arc as a bezier and the corner-exit arc, and update the throttle steering and brake as progress along those arcs is observed.

if in the middle of following these arcs something unexpected happened, new arcs could be calculated and followed. a stopped car appearing at the corner exit could imply a new effective track-out wall.

by making this minimal model, the points and vectors (and thus arcs) can move in real time and reduce the inputs to the calculations needed to achieve a complex racing line down to a few parameters.

What I did in the original VRC was even simpler than that: Make a path which is a bunch of points with a direction vector (2D only) at each one to describe a hermite spline. Each point also has a target speed the car tries to maintain. Compute the distance the car is away from that path and use it as an input to the steering controller. Do the same for velocity in the cross track direction, i.e., how quickly you're moving left or right of the path. This way it's following a curve rather than racing from one point to the next and making sudden steering changes at each point.
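Evaluating one segment of such a point-plus-direction path as a cubic Hermite curve might look like this (a generic Hermite basis sketch, not VRC code; the tangent magnitudes are up to whoever records the path):

```python
def hermite(p0, m0, p1, m1, t):
    """Evaluate a cubic Hermite segment at t in [0, 1].

    p0, p1: 2D endpoints of the segment; m0, m1: direction (tangent)
    vectors at each point, matching the point+direction path nodes
    described above. The four basis polynomials guarantee the curve
    passes through both points with the given tangents, so consecutive
    segments join smoothly and the car follows a curve rather than
    making sudden steering changes at each node.
    """
    h00 = 2 * t**3 - 3 * t**2 + 1   # weight of the start point
    h10 = t**3 - 2 * t**2 + t       # weight of the start tangent
    h01 = -2 * t**3 + 3 * t**2      # weight of the end point
    h11 = t**3 - t**2               # weight of the end tangent
    return tuple(h00 * a + h10 * b + h01 * c + h11 * d
                 for a, b, c, d in zip(p0, m0, p1, m1))
```

The cross-track error fed to the steering controller is then the distance from the car to the nearest point on this curve, rather than to the nearest raw path node.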

The steering controller would then get modified further depending on nearby traffic by trying to maintain some distance to the left/right of the nearest car. Throttle/brake was a simpler controller and just determined by the target speed versus current speed for the most part. There was a bit more to it but that was basically it.

There were multiple paths recorded by me driving fast laps in slow motion which turned out to be a good way to get really good laps in our R/C car sim. Every lap the car would choose a new line at random so the cars weren't all doing the same thing every time.

I did cheat a little: the physics were simplified for the AI. The tire model was much simpler, and there was no drivetrain model, so the longitudinal tire forces could be controlled directly with the throttle. This meant no yaw moment changes due to the diffs or solid axles, so it was almost like a built-in stability control system. Add a bit of tire grip, lower the center of gravity, and voila, they could beat most drivers around the track.


Now you're perhaps getting a peek at some of the issues I was talking about. The hardest part of all this is making the cars drive where you actually want them to; hence, you'll probably find there's no point in trying to model the human error aspect of it. I spent a lot of time on that and never got it as good as I wanted, and that was with physics that cheated a little bit on the AI side. Doing the same thing on a full vehicle model is a substantial challenge. I haven't driven LFS in a long time, but from what I remember, Scawen did a really great job on his AI. Bet I could learn a thing or two from him in that department.
CarlLefrancois: That is close to how I develop, although there are other times when I'll go into prototyping mode and get incredible amounts of stuff done in very short time; often, though, that ends up falling to pieces because the code quality suffers from "trying to go too fast". I've certainly witnessed that the walk-before-you-run building of a project works much better for the long term, which is how I know I'll be able to come back to the AI project after a break and still get things done.

I also already have the racing line computed by the AI driver, giving him only the left/right edges. It comes out reasonably well.

jtw62074: I've fully grasped the challenges of getting the AI to behave exactly as I want, and I don't even have a notion of how I want the AI to behave, which makes that even more challenging. But a large portion of the project, probably the largest portion, has been dedicated to making the AI approach driving in a way similar to how we do. Obviously that approach differs even from person to person, but hey, I'm just playing with different things to see what results.

I've never expected the AI driver to be fast, or fun for that matter. Just seeing what happens.


As a side note, I did open up the project and got the prediction line curving properly, but only if the framerate of memories/predictions is turned down. I don't understand why that is required now that I've added the client-side prediction.
I'm waking this project up for at least today. I felt some motivation to try a few things, including adding a new track. That has actually been quite an uphill battle because of some of the assumptions I made previously. I did eventually get the AS1R data to load; however, it fails to compute the center-line of the track. I must have messed up the ordering of the layout cones in some way.

A bit frustrated, I decided to try again with AS2. There shouldn't be anything special about making the layout file beyond making sure NOT to modify or add a cone out of order; that is, each cone you add along the left edge of the track must be further along the track than the previous one. I have a feeling, though no way to tell, that I misclicked and added a cone behind the previous one, which puts the track out of order.
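An ordering check I should probably add would look something like this (a rough Python sketch; the function name and the dot-product test are just illustrative, not the actual loader code):

```python
def find_out_of_order_cones(cones):
    """Flag edge cones that step backwards along the track.

    cones: list of (x, y) positions in the order they were placed.
    The direction from the previous cone pair serves as the local
    'forward' vector; a non-positive projection of the next step onto
    it means the cone after it was placed behind, not ahead.
    """
    bad = []
    for i in range(1, len(cones) - 1):
        fx = cones[i][0] - cones[i - 1][0]
        fy = cones[i][1] - cones[i - 1][1]
        sx = cones[i + 1][0] - cones[i][0]
        sy = cones[i + 1][1] - cones[i][1]
        if fx * sx + fy * sy <= 0.0:  # next step reverses direction
            bad.append(i + 1)
    return bad
```

Reporting the index of the offending cone would at least tell me where in the layout editor to look, instead of the center-line computation just silently failing.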

For temporary purposes I've added a list of sub-actions for the driver, which allows the driver to shift and turn at the same time. I also have the driver shifting up to third, and to some degree 'braking' for a turn. This is not yet using the prediction unit, only the racing line. The AI is still very slow, but lap times at FE1 dropped from 1:30.82 down to 1:10.96. The attached replay is with the AI driving, following the racing line pretty well using three states: "Full Speed" (50% throttle), "Take it Easy" (30% throttle) and "Full Brakes" (0% throttle, 50% brakes).
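Picking between those three states can be sketched roughly like this (a simplified Python illustration; the braking-distance math, the deceleration figure and the thresholds are all illustrative guesses, not the actual AIRS decision logic):

```python
def choose_state(current_speed, corner_speed, distance_to_corner,
                 braking_decel=5.0):
    """Pick one of the three pedal states for an upcoming corner.

    Speeds in m/s, distance in m, braking_decel in m/s^2 (assumed
    constant). Brake hard when the distance needed to slow down to the
    corner speed exceeds the distance remaining; ease off when a corner
    is getting close; otherwise run at 'Full Speed'.
    """
    if current_speed > corner_speed:
        braking_distance = ((current_speed ** 2 - corner_speed ** 2)
                            / (2.0 * braking_decel))
        if braking_distance >= distance_to_corner:
            return "Full Brakes"      # 0% throttle, 50% brakes
        if braking_distance >= 0.5 * distance_to_corner:
            return "Take it Easy"     # 30% throttle
    return "Full Speed"               # 50% throttle
```

Even with fixed pedal fractions like these, getting the switch points right is most of what separates a 1:30 lap from a 1:10 one.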

When the AI is shifting in this replay he is not updating the thought process that controls turning or braking, which is why I've added the sub-actions. More replays may follow. Lap 5 is the fastest lap, and as you can see the racing line clips a tire; this is why I wanted to go to another track and attempt to build the track edges a bit better.
Attached files
airs_fe1_xrg_5_laps.spr - 136.2 KB - 372 views
That's pretty amazing that you have now managed to alter the way the AI performs inside the game! It is clearly following some parameters/commands. Just getting to this step has clearly been a long road, and there is still a long way to go, but you deserve a pat on the back for great work!
Actually he did not just "alter" the existing AI but made a new one.

The replay does not look too bad! The XRG is tricky too because it handles like a boat, and I think it requires finer throttle control than some other cars.
Maybe the carpark/autocross area is good for testing too? Fewer curbs or bumps in the road that the AI maybe cannot deal with.
Where are you getting positions from? It looks like IS_MCI (hence the low-frequency, low-resolution wheel movements).
Try OutSim (higher frequency, higher resolution), or at least do some smoothing between packets (update positions even when there is no packet; there is still the problem of the low-resolution heading).
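The smoothing I mean is basically dead reckoning between packets, something like this (a Python sketch under a constant-velocity assumption; names are just for the example):

```python
def estimate_position(last_pos, last_vel, last_packet_time, now):
    """Dead-reckon the car's position between position packets.

    last_pos/last_vel: (x, y) from the most recent packet, times in
    seconds. Between packets, assume the velocity is constant; each new
    packet then corrects the accumulated drift.
    """
    dt = now - last_packet_time
    return (last_pos[0] + last_vel[0] * dt,
            last_pos[1] + last_vel[1] * dt)
```

Even this simple extrapolation makes the car's motion look continuous instead of jumping once per packet, which should help any controller reacting to it.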
A while ago I did something similar, but it isn't exactly AI (intelligent); it just drives along a predefined path.
DANIEL-CRO - I am actually already getting the data from OutSim, but the Sensors, MemoryUnit and PredictionUnit update fairly slowly because of the problems I ran into above. It is essentially why I've lost some steam on the project. I did make a very good attempt to smooth the values coming in from OutSim even more, and it still fails when I turn up the speed of the Sensors and MemoryUnit/PredictionUnit. Fails in the sense that the PredictionUnit no longer has any curve, and it acts very bizarrely.

Another issue that you may be unaware of is that my machine is being pressed extremely hard. It is a bit outdated (Intel Core 2 Duo 2.4 GHz, 2 GB RAM). The graphics card should not be the slowdown; however, I am running LFS and the AIRS project on the same machine, and both take significant resources as each of them wants to run full throttle, and I've already taken a lot of steps to optimize things in AIRS. Ultimately a better machine could help, but the reason for the choppy steering is the delay I've set on the sensors, which was set to make the predictions more accurate.

I'm not sure why running the sensors faster causes the predictions to be less accurate. I designed and implemented it so that the time between memories shouldn't matter, but to my surprise running the sensors and memory/prediction units at the faster speed causes glitches. Also, even though I am getting the position of the car through OutSim, the AI driver estimates his position based on the visual reference points, and that is what he uses to drive. On top of that, I don't update the "drive to this point on the racing line" target in real time, which could help the driver be smoother, and smooth is fast, so it might be worth investigating. At the speed he is traveling I doubt the jerky steering input is causing traction issues.
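For reference, recovering a position from distances to known reference points can be done with standard trilateration, something like this (a Python sketch of the general technique only; this is not how AIRS actually does it, and the function name is made up):

```python
def trilaterate(p1, p2, p3, d1, d2, d3):
    """Estimate a 2-D position from distances to three known points.

    p1..p3: known (x, y) reference positions; d1..d3: measured
    distances to them. Subtracting the first circle equation from the
    other two cancels the quadratic terms and leaves a 2x2 linear
    system, solved here by Cramer's rule.
    """
    ax, ay = 2.0 * (p2[0] - p1[0]), 2.0 * (p2[1] - p1[1])
    bx, by = 2.0 * (p3[0] - p1[0]), 2.0 * (p3[1] - p1[1])
    c1 = (d1 * d1 - d2 * d2
          + p2[0] ** 2 - p1[0] ** 2 + p2[1] ** 2 - p1[1] ** 2)
    c2 = (d1 * d1 - d3 * d3
          + p3[0] ** 2 - p1[0] ** 2 + p3[1] ** 2 - p1[1] ** 2)
    det = ax * by - ay * bx  # zero if the reference points are collinear
    return ((c1 * by - c2 * ay) / det,
            (ax * c2 - bx * c1) / det)
```

With noisy distance estimates you would want more than three points and a least-squares fit, but the three-point case shows the idea.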

Gutholz - Thanks for the comments. It would be POSSIBLE to create a "track" on the autocross park to avoid the curbing and bumps, but I've made it so it loads track information based on the track reported by InSim, in the hope that one day the AI driver could drive around any track that has a layout made for it. That said, it is something I could hack in, "if the track is AU, then load the AU track", which would be easy enough, so maybe worth considering.

If you want to attempt to make an autocross layout, and can follow specific directions, I could give those to you and we could attempt that. I am also open to suggestions for other cars to start with instead of the XRG. I would prefer rear-wheel drive, but at this time the driver can get into any car with an H-shifter. Funny, the driver doesn't yet know how to use a sequential/paddle shifter, but that wouldn't be terribly hard to add. If you want to guess a lap time around FE1 with a particular car, I can easily set up a 5 lap race for the driver and see how close your guess comes. (It is possible that some cars would require more tweaking, which won't happen at this time. In some cars, if the driver gets going too fast on the front straight he doesn't brake early or hard enough before T1 and slides off.)

The Very End - As Gutholz said, I've actually created my own AI driver: a completely separate program that controls the player car in LFS through a virtual controller. Back in the thread, Dygear was interested in seeing how the PTH files the LFS AI uses (and the racing line displayed in LFS) compare with the racing line my algorithm generated. There was also an interesting comment about modifying the PTH files based on the WR replays. In my Computing the Racingline video I added those paths as the white line that appears near the end of each section.

I thank everyone who has followed this project, and I will return to it with more energy at a later point in time. I am happy to have the driver successfully driving around FE1. The driver is not yet using the Prediction Unit very much and that is something I'd like to improve upon, but I need to make it run faster as I was discussing with DANIEL-CRO.
How often per second does it receive info / update the controls?
For human drivers it is also more difficult to drive with low FPS.

By the way, I found it interesting that your "Computing the Racingline" algorithm, as far as I understood, does not actually use anything related to car physics or racing, but rather some general approach (not sure what to call it, really).
And yet it comes up with a solution that looks like a sensible racing line.
But maybe that is also a problem, if the actual car itself is not taken into account at all? Different cars (or even setups) require different lines.
Probably perfect pathfinding does not matter unless the path following is perfect too; just a thought that crossed my mind.

By the way, here is a graphic of your 5-lap replay:
Except for the spins it seems to always drive the same path pretty well.
But in T1 there is already a small difference, maybe from sliding under braking. I think the faster it goes, the more difficult it might be to drive so consistently, as it gets nearer to the limit of grip, which in the end is what racing is all about.