I'm getting the impression you might not quite be understanding what a genetic algorithm is. Forgive me if I'm wrong about that.

The idea generally is to change a value at the bit level. For instance if you had two parent values of 500 and 480, they are as follows in binary:

500 -> 0000 0001 1111 0100
480 -> 0000 0001 1110 0000

You then mate them by picking a point to splice them together which is essentially how DNA splicing works. If we picked the center in this case the child would become:

0000 0001 1110 0000

In this case the result is also 480, just like one of the parents, so whether the child turns out different from either parent really depends on the splice point. If instead the splice was done at the last 4 bits you'd get this:

0000 0001 1111 <splice point> 0000

Here you get 496, which is between both parents and unique to both. This works well if you're homing in on a number between the parents, but if the number you really need is 29000 or something, you're going to be stuck waiting for a random mutation to just happen to land you in a better spot. That could take a while. If the splice point is itself random, you've got 16 bits here to play with, so in order to flip the first bit and get this:

1000 0001 1111 0000 -> decimal = 33264

...you might have to wait on the order of 16 / chanceOfAMutation generations to even see a number anywhere near that 29000 number that you really want.
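As a sketch, the splice above can be written as a simple mask-and-combine on 16-bit values (the function name and the splice-from-the-low-end convention are my own choices):

```cpp
#include <cstdint>

// Single-point, bit-level crossover on 16-bit values, matching the example
// above. splicePoint counts bits from the low end: the child keeps parentA's
// high bits and parentB's low bits.
uint16_t SpliceBits(uint16_t parentA, uint16_t parentB, int splicePoint)
{
    const uint16_t lowMask = static_cast<uint16_t>((1u << splicePoint) - 1u);
    return static_cast<uint16_t>((parentA & ~lowMask) | (parentB & lowMask));
}
```

SpliceBits(500, 480, 4) reproduces the 496 example, while SpliceBits(500, 480, 8) gives the 480 child from the center splice.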

Might you instead be doing it at 32-bit boundaries, something like the following?

struct DNAStrand
{
    float floatValue1;
    float floatValue2;
    ...
};

And then create a child DNAStrand by taking floatValue1 from parent 1 and floatValue2 from parent 2? If so, even though you may get a unique child now and then, floatValue1 and floatValue2 themselves never change except by mutation. In that case you aren't getting as much variation as you could be.

Anyway, there's a lot of things like this that could be going on in your simulation. Without more specific information or code, there's not much more I can say.
Todd, well written, lots of good points in there. You had the concept of what I was doing with the genetic algorithm perfectly correct, sorry for the poor explanation before. The input was essentially the t, interpolating between the outputs for reference point X and X + 1, and the out puts was -1 to 1 steering and -1 to 1 throttle/brake.

While the 't' value is computed in an iterative process, it isn't using anything as smart as bisecting half the segment each time; the primary reason I do it is to detect the validity of laps. Sure, this can be improved - a lot - I do realize this. The real boost in speed would come from not calling the bezier interpolation while iterating; I will see about changing it to line segments and iterating to find the closest line segment, which should be much, much faster. Another option for optimizing this was to have each car contain a "tracker" that holds information about where it was previously located, then perform the iteration outward from the tracker in both directions until the distance grows. Doing either, or both, of these optimizations should eliminate the problem completely - I just haven't done it.



steeringOutput = leftRightDistance * k1 + leftRightVelocity * k2;

Maybe I'm being pessimistic, or silly, or thinking about something wrongly, but I feel that steeringOutput computed this way is still somewhat using logic. Although there is no if (leftRightDistance > k1) like I was writing before, the if statement is slightly hidden in the sign of leftRightDistance. I do like it though; it is pretty simple, and I will give it a try when I play with this project again.

Actually, it might be quick enough to throw in: modify the gene count to 2 (for k1 and k2) and see what happens with the throttle stuck at 50%. (100% would surely throw them off track at some point.)
I believe I do understand the Genetic Algorithm fairly well, but given it is my first experience with it, there may be some things I messed up. I did not choose to go the purely binary way, and instead used floats like you stated. It looks similar to this:


struct Chromosome
{
    float genes[kMaximumGenes];

    // For each gene, if chance is within kMutationChance, add a random amount max of kMutationAmount
    void Mutate(void);

    // If chance is within kCrossoverChance, choose a random number 1 to kMaximumGenes and swap the genes
    void Crossover(Chromosome& offspring1, Chromosome& offspring2);
};

For my particular situation I had, and this changed constantly as I attempted to configure the GA:


kMaximumGenes = numberOfReferencePoints * 2;
kCrossoverChance = 40%
kMutationChance = 10%
kMutationAmount = +/- 0.2 (at max, the actual mutation is random within this amount)
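A minimal sketch of what Mutate() could look like under those parameters (Random01() and the fixed gene count here are my own placeholders; the real kMaximumGenes is numberOfReferencePoints * 2):

```cpp
#include <cstdlib>

// Each gene independently has a kMutationChance of being nudged by a random
// offset of at most +/- kMutationAmount, as described above.
const int   kMaximumGenes   = 10;    // placeholder; really reference points * 2
const float kMutationChance = 0.10f; // 10%
const float kMutationAmount = 0.2f;  // maximum +/- nudge

float Random01() { return static_cast<float>(std::rand()) / RAND_MAX; }

struct Chromosome
{
    float genes[kMaximumGenes];

    void Mutate()
    {
        for (int i = 0; i < kMaximumGenes; ++i)
        {
            if (Random01() < kMutationChance)
            {
                // Uniform offset in [-kMutationAmount, +kMutationAmount].
                genes[i] += (Random01() * 2.0f - 1.0f) * kMutationAmount;
            }
        }
    }
};
```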

The population runs for a generation, and before spawning the next generation I compute the totalFitness by adding all the fitnesses up. This allows me to choose chromosomes randomly while giving the 'better' genes a better chance to spawn again. I do not remove chromosomes once selected, so ultimately one could be selected several times. I was unsure on this detail, but it makes sense to me that it would remain in the pool.

If we had a population of 4 chromosomes, with the following fitness:

A: 60
B: 32
C: 5
D: 3

Then total fitness is 100, for simplicity. A then has a 60% chance of being selected each time GetRandomChromosome() is called, and it will be called 4 times. B has a 32% chance, and C and D are fairly small. This means A and B are more likely to spawn again, and then possibly crossover, and possibly mutate....

This is how I understood the genetic algorithm / process to work. Let me try real quick the way you mentioned above using only 2 genes, to modify k values.
Quote from blackbird04217 :The input was essentially the t, interpolating between the outputs for reference point X and X + 1, and the out puts was -1 to 1 steering and -1 to 1 throttle/brake.

So you're determining throttle/brake and steering only from the t value and nothing else? Correct me if I'm misunderstanding, but I'm getting the impression that the t value is just how far along the curve you are. The point behind you is 0, the point ahead is 1, and if you're halfway between then t = 0.5. Is that correct?

If yes, how is "I am 50% of the way to the next point" used to determine whether to turn left or right, let alone tell it how much to steer or brake or whatever? I can't imagine how that would be enough information. This is what I mean by it wouldn't really matter if the real value was 51% and you came up with 50%. An AI can't drive a car by only asking if the gas cap is open or not.

Quote from blackbird04217 :
Maybe I'm being pessimistic, or silly, or thinking wrongly about something but I feel that steeringOutput computed this way is somewhat using logic to do so. Although there are no if (leftRightDistance > k1) like I was writing before, the if statement is slightly hidden when leftRightDistance has negative/positive values.

Call it what you want, but this is how all controllers work, more or less. You have input variables and a function that produces output variables. I would not call it "logic" until you have an "if" or other logical statement in there (and/or/xor/etc). Using an equation like this is different because the value starts at 0 when you're at 0 distance and velocity and increases (rather than jumping suddenly) as those values increase.

There is such a thing as "fuzzy logic" which might be more intuitive for you and worth a look, but those are more suited to "expert systems" and aren't particularly well suited to problems like AI car controllers (maybe good for parallel parking, but that's probably about it). You'll end up back in the same boat where you're hand tuning numbers, only a lot more of them this time. Walk before you run. These aren't systems that learn on their own normally anyway, but there's no reason they can't be trained by neural networks using genetic and particle swarm algo (or whatever else). Anyway, this would be getting ahead of where you are right now I think.

Quote from blackbird04217 :The population runs for a generation, and before spawning the next generation I compute the totalFitness by adding all the fitnesses up. This allows me to choose chromosomes randomly while giving the 'better' genes a better chance to spawn again. I do not remove chromosomes once selected, so ultimately one could be selected several times. I was unsure on this detail, but it makes sense to me that it would remain in the pool.

If we had a population of 4 chromosomes, with the following fitness:

A: 60
B: 32
C: 5
D: 3

Then total fitness is 100, for simplicity. A then has a 60% chance of being selected each time the GetRandomChromosome() is called, which it will be called 4 times. B has a 32% chance, and C and D are fairly small. This means A and B are more likely to spawn again, and then possibly crossover, and possibly mutate....

This is how I understood the genetic algorithm / process to work. Let me try real quick the way you mentioned above using only 2 genes, to modify k values

Total fitness of the population? You're trying to evolve one good car, not a whole population of them. Survival of the fittest. Let the bad ones die with no chance of reproducing. That's how real evolution works.

Imagine if you had a generation of 10 cars, 1 by sheer luck drives really well on the first generation and the other 9 are crap. What you want is for that really good car to survive and mate (even better if it mates with another really good car, possibly several times with different mates). You don't want that really good car to get lost just because the total fitness of some other generation was better, which really just means the average car in that group was better than the average car in the group before it. You don't want a good average car in a good average group, you want the best car that has ever lived, period. I think you might actually be breeding between generations and that might be part of why they're getting stuck. That stops evolution.

Following that, normally what you'd do is score each individual car. I wouldn't add up the fitness scores to get a total like you're doing. Instead in your example where:

A: 60
B: 32
C: 5
D: 3

This just means A and B have the highest chance of reproducing, or they mate more than the others or something similar. You could just take the best 50% of them (or whatever fills your population), mate them with each other, then throw the rest away. That's what this is modelling about evolution: A group of animals where only the fittest get to reproduce and the rest die off along with their particular gene combinations. The dead ones that didn't get a chance to reproduce don't get any further contribution to the gene pool. You just do it in a way where the population stays the same every generation, but it's important to let the old ones die. You don't want the bad creatures to reproduce.

With the 4 car example there, you want to produce 4 new cars. You could do that several ways, up to you. One way would be to mate the best 2 together to produce 1 or 2 or even 3 cars (high birthrate, these are horny cars). Maybe your percentage chance stuff is actually working that way, I'm not sure, but the idea is to lose C and D entirely and replace them with something better. Hopefully A and B make something even better than A.
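The "keep the best half, breed them, let the rest die" idea can be sketched like this. Car, Mate(), and NextGeneration() are my own illustrative names; Mate() here just averages a single stand-in gene so the block is self-contained:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

struct Car
{
    float gene;    // stand-in for the real genome
    float fitness; // score from the last generation's run
};

// Placeholder crossover: real code would splice genes, not average them.
Car Mate(const Car& a, const Car& b)
{
    return Car{ (a.gene + b.gene) * 0.5f, 0.0f }; // fitness is re-scored later
}

std::vector<Car> NextGeneration(std::vector<Car> population)
{
    // Sort best-first, then breed only among the top half.
    std::sort(population.begin(), population.end(),
              [](const Car& a, const Car& b) { return a.fitness > b.fitness; });

    const std::size_t survivors = population.size() / 2;
    std::vector<Car> next;
    for (std::size_t i = 0; next.size() < population.size(); ++i)
        next.push_back(Mate(population[i % survivors],
                            population[(i + 1) % survivors]));
    return next; // same population size; the bottom half never reproduces
}
```

With the A/B/C/D fitness example above, only A and B ever appear as parents; C and D contribute nothing to the next generation.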

Anyway, just sit and think about fixed-size animal populations for a while. That's how this idea of genetic algorithms came about.
Here's another issue with using the DNAStrand type of approach. Here's my example again but this time using k1 and k2 values:


struct Chromosome
{
    float k1; // first gene, steering controller input k1
    float k2; // second gene, steering controller input k2
};

So here we're doing it your way instead of at the bit level to show a potential problem. Suppose we mate these two guys together:

k1[0] = 1
k2[0] = 10

k1[1] = 2
k2[1] = 20

I can't tell for sure from your code, but you might want to check to see if all that is really happening here is that you're just swapping the k1 and k2 values back and forth. I.e., you could end up with these possible children:

child.k1[0] = 1
child.k2[0] = 10

child.k1[1] = 1
child.k2[1] = 20

child.k1[2] = 2
child.k2[2] = 10

child.k1[3] = 2
child.k2[3] = 20


And that's it. Do you see a problem here? The only possible values for k1 are 1 and 2 for the children. Same goes for k2 where the only possibilities are 10 and 20. Sure, you get a unique child, but the gene is at such a high level that the steering response to cross track distance is not changed by breeding. The only way k1 could become anything other than a value that already exists in the gene pool is for a mutation to happen and that can take a really long time. That's probably not really what you want. You want k1 to be determined at a deeper level than that so the children might have a value of 1.5 or 4 or something.

In other words, k1 might be determined by more than 1 gene. It's a combination of genes that determines the steering response to cross track distance rather than just one gene. That's where the bit level stuff comes in. In my neural network approach to this, that's indeed what would have happened had I used a genetic algo instead of particle swarm. The genes were the links themselves (lots of them) to determine the final k1 value. So two parents with a k1 value of 1 and 2 would not be very likely to actually produce an offspring with k1 of 1 or 2.
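One hedged sketch of what "determined at a deeper level" could look like: encode k1 as a fixed-point integer so a bit-level splice can produce values neither parent had. The 1/256 scale and the function names here are arbitrary choices of mine:

```cpp
#include <cstdint>

// Encode a controller gain as 8.8 fixed point so crossover works on bits
// rather than on whole floats. Scale and names are illustrative only.
uint16_t EncodeGene(float k)    { return static_cast<uint16_t>(k * 256.0f); }
float    DecodeGene(uint16_t g) { return g / 256.0f; }

// Keep the high bits of one parent's gene and the low bits of the other's.
uint16_t CrossoverGene(uint16_t a, uint16_t b, int splicePoint)
{
    const uint16_t lowMask = static_cast<uint16_t>((1u << splicePoint) - 1u);
    return static_cast<uint16_t>((a & ~lowMask) | (b & lowMask));
}
```

With parents of k1 = 1.0 and k1 = 1.75, a splice 7 bits up yields 1.25 - a child value that exists in neither parent, with no mutation needed.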

Granted, your way might still work, it just might take a lot longer. I'm not entirely sure. I wrote a few little virtual creature simulations in a manner similar to yours where the genes were things like attack speed, metabolism, eyesight distance, and so on. When I mated them I split the whole structure up, like taking the first 5 numbers (bytes in my case) from one parent and inserting them into the beginning of the other parent. This way the child was really half one parent and half the other while the basic sequence was preserved. There wasn't a hopscotch game going on with the gene swapping, where one parent gives genes 2, 5, and 7 on one turn while on another it gives genes 1, 2, and 3. I'm not sure if this really matters; it might just be in my head. Your way might be doing the same thing, or it may be equivalent in the end anyway, I'm not sure. Just wanted to throw that out there to spur some more thinking.
And following that thought, this is where particle swarm helps. The offspring will almost certainly not be what they were shown there. Instead all the k1 and k2 values will just deviate a little bit from what they were before. For instance, there's really no mating or crossover. Instead the amount that each agent's k1 or k2 value changes is controlled by the fitness score and a simple computation, basically. (Think of k1 and k2 as having "velocities" which are just the amount that those values will change. Each agent has its own velocities that are adjusting in a way to chase the best agent that has existed so far throughout history, basically).

So while you started with these two:

k1[0] = 1
k2[0] = 10

k1[1] = 2
k2[1] = 20

Let's just pretend for a second that the best k1 value was actually 1.5, and the best k2 value was 15, but the population hasn't figured that out yet. You could end up with these two as the values swarm around on their hunt for those optimums:

child.k1[0] = 1.1
child.k2[0] = 12

child.k1[1] = 1.8
child.k2[1] = 18

They don't get stuck waiting for a mutation, instead they have a good probability of progressing in the right direction immediately.
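The "velocity chasing the best" idea can be sketched with the standard particle swarm update. The inertia/pull coefficients and the names here are common textbook defaults, not anything from this project:

```cpp
#include <cstdlib>

// Minimal particle swarm update for the two controller gains. Each agent's
// k values drift with a velocity pulled toward the agent's own best and the
// best any agent has ever found. w, c1, c2 are typical PSO defaults.
struct Agent
{
    float k[2];     // current k1, k2
    float v[2];     // per-value "velocity"
    float bestK[2]; // best k1, k2 this agent has scored so far
};

float Rand01() { return static_cast<float>(std::rand()) / RAND_MAX; }

void UpdateAgent(Agent& agent, const float globalBestK[2])
{
    const float w = 0.7f, c1 = 1.4f, c2 = 1.4f;
    for (int i = 0; i < 2; ++i)
    {
        agent.v[i] = w * agent.v[i]
                   + c1 * Rand01() * (agent.bestK[i] - agent.k[i])   // own best
                   + c2 * Rand01() * (globalBestK[i] - agent.k[i]);  // global best
        agent.k[i] += agent.v[i];
    }
}
```

An agent sitting at k1 = 1 with the global best at 1.5 moves some random fraction of the way toward 1.5 every update, which is exactly the "good probability of progressing immediately" described above.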
I think you misunderstood how the totalFitness was being used. I implemented it the way I understood it from my readings and a few videos I watched beforehand. We could go ahead and do what you're suggesting; it actually makes the most sense for finding the best path the quickest way possible, and maybe I will modify the GA so it handles this case and just kills off the bottom X% of the population each generation so the better portion breeds more.

I was actually using the totalFitness for the current generation only, not keeping it around across generations. So, in my previous example, totalFitness = 100. Car A, with a fitness of 60, had a 60% chance of breeding because 60 / 100 = 0.6. The totalFitness does not need to be 100; it can be any number, and dividing an individual's fitness by the totalFitness for that generation gives that individual's chance of breeding. Again, I understand your point about removing the bottom of the barrel; it was just that this is how I understood the resources I had at the time.

I also understand where you are coming from with the difference between a gene being a bit and a gene being a float, although I am still not quite understanding how to take a bunch of bits and apply them to the algorithm; just messing with bits and reinterpreting them as a float seems less useful, so I will hold off on that for now. To be fair, I do need to, and will, reread what you said earlier and recently so that I can understand it better and absorb all the information you've given me. Thanks very much!

I did manage to toss in a new controller using the two genes to represent k1 and k2 in the function: steering = leftRightDistance * k1 + leftRightVelocity * k2; and it manages to get beyond the hairpin of turn 1 once the throttle input is lowered. I had to stick it down to 20% throttle. Obviously I will tie the throttle and brakes to some genes and functions as well, but considering I am working right now, I can't spend too much time playing around with it.

Thanks very much for the discussion and all the shared information!
I'm still not following you on totalFitness of the group as a whole. Generally what you do is have a fresh population within a generation derived from the previous one. A "total fitness" of the whole population in a generation is kind of meaningless because what you're really interested in is the fitness of each individual member of that generation. If Bob, Tom, and Bill are competing for the #1 spot in a mating game and score like this:

Bob = 10
Tom = 5
Bill = 1

Those are the fitness scores, one for each member. Who cares that the total score is 16? What you want to know is that Bob is the best, Tom is next best, followed by Bill. Maybe Bob gets to have 2 kids, Tom has 1, and Bill goes home alone. On the next generation, Bob, Tom and Bill are all gone and removed. Only their children survive and reproduce again. What does it matter that their children all added together score 12 or 1049? The best ones of that group are the ones that get the mating chances regardless of the group total score. So yeah, I don't really understand why the totalFitness of the group is even there.

Anyway, good to hear the k1/k2 thing worked a little better at least and got you through turn 1.
That is the idea, and why the totalFitness is important... Using your example:

Bob, at 10/16, has a 62.5% chance of being selected to reproduce each time a chromosome is grabbed.

Tom, at 5/16, has a 31.25% chance.
Bill, at 1/16, has a 6.25% chance.

Which makes Bob much more likely to become an influencing chromosome in the next generation. The totalFitness is only used to make the selection give the better chromosomes better odds.

Going back to what you said earlier, it would be better to remove the bottom portion of the barrel before this. But this is the way I understood it: giving the more fit chromosomes a better chance of being selected, possibly multiple times. Of course this does not guarantee a fit chromosome will be selected, which could be undesirable, but as it was described to me the randomness still tends to help in the same way.

Maybe this will help you understand what I've done and how it does what you said above.


Chromosome GetRandomChromosome(allChromosomes, totalFitness)
{
    randomValue = Random(0, 1);
    elapsed = 0;
    foreach chromosome in allChromosomes
    {
        elapsed += chromosome.fitness / totalFitness;
        if (elapsed >= randomValue)
            return chromosome;
    }
    return allChromosomes.back(); // should never actually be reached
}

EDIT: You could also achieve the same thing by removing the division and calling Random(0, totalFitness);
Ahha, I understand totalFitness as you're using it now. Thanks.

Getting rid of the population or stopping the mating isn't really necessary. That will happen just by chance anyway the way you're doing it. Nevermind what I said, carry on.
It has been a fair bit of time again since I've made forward progress, but I've uploaded two videos recently that you guys may be interested in. The first is of the artificial driver controlling the car around FE1 exactly as I had it back in June; I'm now just creating and uploading a video of it instead of just a replay.

Artificial Intelligence in Rac ... mulations: Driving Around

The other is of the small genetic algorithm that was previously discussed and that I've put some more effort into tweaking. It actually turned out pretty reasonable - but a pain to tune. I've realized some things I've done wrong, but I'm not sure how much more I will push the genetic algorithm technique. The original intention was to try out other techniques, learn them, and see if they would be good to apply to the AIRS project. With the tuning difficulty, and only 1 artificial driver in control at a time in LFS, this would be too extreme. Not to mention the simulation these bots run in is extremely simple, with no camber or grip changes, no weight transfer or loss of grip. So I think a genetic algorithm is out of play for AIRS, which means a neural network is also out.

Artificial Intelligence in Rac ... ations: Genetic Algorithm
(I read most posts but just respond to random fragments)
Did you consider that your "Genetic Algorithm" (the 2D one) maybe failed because the task was too hard? If all drivers never make it past the first turn then it is hard to select the best ones for further evolving.
Maybe one driver is "better" but still not "good enough" to show a better result than a "bad" driver. So both run off the track at the same point and it is impossible to judge which did better.
-> Did you try what happens with just a straight track?

(Like you have a bunch of outdoor survival experts and want to see who is the toughest. So you have a competition to jump into an active volcano, and everybody indiscriminately just dies in the lava.)
I see what you're saying. Actually, I did get them beyond the first turn in that last video I posted; it just took tuning the fitness function, and even then it wouldn't really work for the AIRS with LFS project. I've been thinking a lot about that portion of the project, making the LFS artificial driver faster, but it is tricky. Ultimately I need to figure out how to fix the prediction unit to be smoother and more accurate. It is often accurate, but as you can see in the driving around video, it bounces around too much, and that would give the driver bad information.

If the prediction unit was working as expected, it would give the driver the ability to read the future of the car and feel a bit better "am I on the limit" and react to being over the limit whether understeering or oversteering - which is essentially the next step needed.

I need to take a few steps backward in order to go forward. I've learned the curve implementation I added to "add other tracks easier" actually doesn't work in practice. I don't want to keep the AI doing only FE1 because I don't want to accidentally hardcode values/algorithms that work well on FE1 but not on other tracks. So I probably need to remove the curves. I also need to spend some time on optimizing, possibly even removing the terrain occlusion from the visual sensor (it is reasonably slow).

I've got the artificial driver to follow a line, now I need to teach the driver to do so at the limits of the car, on the verge of understeering or oversteering but not over the limit. Of course this means I'll need to teach the driver how to handle the car over the limits as well.
I had a breakthrough last night. Sometimes on this project all I need is a little bit of time and, voila, new ideas come to me. I've been pondering how to handle the many different variables at play and how to make the artificial driver faster without hardcoding information specific to FE1, the XRG, or the particular setup being used. I went back through some of my older posts about the reference points and how I originally wanted to use them (way back when I wanted braking, turn-in, and apex points instead of just left/right edges - though in my defense I just never got that far... yet).

Last night the light bulb went on: I can modify how I compute the racing line ever so slightly and add a brand new reference point for the driver to keep track of. If you recall from previous posts, the artificial driver is already driving towards a driveToPoint that moves in front of the car along the racing line. This new reference point will be added at the start of every corner on the racing line, or at least after some length of straight section, to be determined by the angle between the hinges used in the computation.

This new TurnPoint will go into the driver's memory unit and will hold some information that cannot be changed: where the turn begins, and the distance and position at the end. With this information I should be able to determine if the driveToPoint is within a turn, and/or if the car is within a turn. When it is within a turn, the driver will attempt to drive at the limits of the car, or try smoothly accelerating up to the limit.

The TurnPoint will also hold a few values that will change over time, like brakingPoint and entrySpeed. As the driver approaches a turn, he will begin braking once he reaches the brakingPoint and attempt to reach entrySpeed before the actual turn-in. I can then initialize these values to long braking areas and slow entry speeds, which should allow the driver to smoothly enter the corner, though obviously not at the limit. The driver can then check on the conditions of the car - how close to the limit it was at various points - and determine if more braking area is needed, extending the brakingPoint. If more or less speed is required to take the corner quickly but safely, that can also be modified in long-term memory.

This should allow the driver to tune his braking points and driving as I originally wanted, using pseudo-reference points. This "TurnPoint" won't be the same as the points on the left/right track edges; instead it will exist only in the driver's long-term memory, and will always be "seen", or at least thought of.
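As a rough sketch, the record might look like this. brakingPoint and entrySpeed come straight from the description above; the other fields, types, and the minimal Vector3 are my assumptions:

```cpp
// A sketch of the TurnPoint record described above.
struct Vector3 { float x, y, z; };

struct TurnPoint
{
    // Fixed once computed from the racing line:
    Vector3 turnStartPosition;
    Vector3 turnEndPosition;
    float   turnLength;

    // Tuned by the driver over time, seeded with safe/slow defaults:
    float brakingPoint; // distance before the turn to begin braking
    float entrySpeed;   // target speed at turn-in
};
```

The split mirrors the post: the geometry is immutable once the racing line computer produces it, while brakingPoint and entrySpeed live in long-term memory and get nudged after each lap.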

...

Now, this gets me back to a slightly minor problem, which last night's brilliance mostly solved, though a slight issue remains. This method of driving requires me to semi-accurately judge how "at the limit" the car is at a given point. This is the job of the prediction unit; however, instead of using the future predicted path (which, as we have seen, has been unreliable and may eventually need to be fixed or stripped out), it will need to predict several immediate states of the car:
  • isOverSteering() - (can be determined by comparing carHeading with velocityDirection)
  • isUnderSteering() - (possibly determined by comparing steeringAngle vs yaw rotation?)
  • isFrontLocked() / isRearLocked() - (can't be accurately determined that I am aware of, but would be great!)
  • isOverBraking() - (possibly determined by comparing longitudinal G forces with a value specific to the car)
  • isOverThrottling() - (can be determined by comparing carSpeed (speedometer) with velocityMagnitude (physical speed))
So the unknowns are how accurately I can determine the understeer and overbraking conditions. Both are needed for the driver to tune corner entry and determine if the car is at or over the limit. I also need to find a way, which I haven't figured out yet, to determine that the car is NEAR the limit. It is easy to determine over the limit (any one or more of the above conditions is true), and reasonably easy to determine under the limit (all of the above conditions are false), but the artificial driver should be able to tell when the car is nearing the limit so he doesn't push too far. The best thought here so far, and I don't really like it, is to assume the car is near the limit if one of the conditions was true within the last tenth or two of a second. Of course this requires driving over the limit to be near the limit...

Those functions won't be only black and white; for instance, isOverSteering() can and likely will use a different function, quantifyOverSteering(), to determine if the car is in an oversteer state... Something like:

bool isOverSteering() { return quantifyOverSteering() > limitThreshold; }

So it MAY be possible to determine if the car is near the limit if one or more of the states are less than the limitThreshold but greater than some grandmaThreshold.
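A sketch of how quantifyOverSteering() and the two thresholds might fit together, using the heading-vs-velocity-direction comparison from the list above. The threshold values are made-up illustrative numbers, and the functions take the two directions as parameters just to keep the sketch self-contained:

```cpp
#include <cmath>

const float kPi = 3.14159265f;
const float limitThreshold   = 0.12f; // ~7 degrees of slip: over the limit
const float grandmaThreshold = 0.05f; // ~3 degrees: comfortably under it

// How far the car's nose points away from its direction of travel, in radians.
float quantifyOverSteering(float carHeading, float velocityDirection)
{
    float slip = carHeading - velocityDirection;
    // Wrap into [-pi, pi] so angles near the 0 / 2*pi seam compare sanely.
    while (slip >  kPi) slip -= 2.0f * kPi;
    while (slip < -kPi) slip += 2.0f * kPi;
    return std::fabs(slip);
}

bool isOverSteering(float carHeading, float velocityDirection)
{
    return quantifyOverSteering(carHeading, velocityDirection) > limitThreshold;
}

// "Near the limit": sliding more than grandma would, but not over the limit.
bool isNearTheLimit(float carHeading, float velocityDirection)
{
    const float q = quantifyOverSteering(carHeading, velocityDirection);
    return q > grandmaThreshold && q <= limitThreshold;
}
```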

If I haven't lost you yet, great! If I have, and this is likely to be the case, please ask questions. Get involved! It makes me think more about the solutions, and keeps my motivation up.

....

So this means I need to do several things:
  1. I should seriously consider removing the curve path, and possibly even reducing visual sensor wasted time.
  2. Modify the Racing Line Computer to find/separate straights and turns.
  3. Create TurnPoint reference points and put in the long-term memory unit with default slow values.
  4. Modify prediction unit to accurately determine state of car, and if it is near limit.
  5. Create a driving state to get the artificial driver to drive near the limit and handle over limit cases.
  6. Make driver modify long term memory based on what happened at different points during the turn.
This is a pretty hefty list, but I think if I pull it off correctly the driver might be reasonably fast without hard-coded values specific to a car/track/setup configuration. It would be amazing to see the airs_artificial_driver with lap times closer to 107% of the WRs.
This thread really got me interested in artificial intelligence. It's a shame I still don't have the knowledge to even start working on one. Great job with this, it's nice to see it coming along, and honestly considering how it works (with all the data being transferred from LFS, analysed, use it for predictions, etc) I'd say it looks quite fast already. I wish you good luck with it and I hope it will only get better. And thanks for sharing your progress in this thread, it really makes for a nice read
@vladvlad, knowledge is not exactly a requirement to start working on anything you want to create, be it a movie, game, house or what have you. Time, energy, dedication and the desire to do so can push you to learn new things, try new things and persist at the task even in the darkest of times when you seem only to fail. The trick is finding what you truly want to work on and pursuing it completely, so don't be afraid to learn the knowledge you don't yet have. Actually, most of this thread is about my learning experience.

...

Regarding progress with the project, I have cleaned up a few sections and added minor features, with help from, and thanks to, amp88. I need to rework that area a little bit more. I also started, but didn't get far at all, trying to compute the corner points from the racing line computer. I also started, and also didn't get far, predicting the state of the car with isOverSteering(), isUnderSteering(), isOverBraking(), isOverThrottling(), but before I got much done I realized my isOverBraking() method will not produce good results, and I haven't found a good solution yet.

My 'breakthrough' idea was to use the longitudinal g-forces, thinking that if the tires are locked the force will be less than the ideal force... but so would braking below the threshold, so does this indicate braking below the limit, or above it? Unfortunately I have no great solution here. The only recovering thought I have, given the information from LFS / the sensors, is to set the brake balance in a manner that locks the drive wheels and then check the speedometer vs physical speed, like isOverThrottling() does. But this is not optimal and requires the artificial driver to use a specific setup - though I currently use a specified setup anyway, to more accurately judge whether my changes made the driver slower or faster.
Last night I started a major refactoring of some of the base code in hopes of adding some features and easier debugging information to the AIRS in LFS project. This went fairly well; there was a minor hitch in the first part of the refactoring, but I got that solved by essentially reading through each change from the previous commit. Thank you source control!

However, today I've encountered the strangest issue with the artificial driver. For some odd and unexplainable reason the driver drives slowly, about 5 to 10 meters to the left of the racing line, for almost a full lap; then something seems to kick in, he finds the racing line and starts driving faster and on the line as expected, turning laps in the ~1:02 to 1:03 range.

Mind you, during this refactoring nothing logical should have changed, and I was trying to remain meticulous about this. I then decided to build the previous version, which excludes any possibility of the refactoring being at fault, and the behavior was identical. I am extremely confused why the driver is behaving this way. The only other change I had made in the midst of this was attempting the dinput8.dll injection that MadCatX suggested; however, the dll is not being injected at the time of these issues.

I'm at a loss as to how this is possible. The artificial driver drives off the racing line even though all positions seem correct, INCLUDING the position it wants to drive towards. Then, about 80% through the first lap, it corrects itself at the back chicane and starts racing around properly.

My hope was to get things refactored a bit and improve the primary AIRS application and the InSim/OutSim/OutGauge connections: primarily to collect more state information, add the ability to send messages to LFS, add a way to debug the sensors / track information with a player-controlled car, and add some other support to make it easier to develop further, so I could continue along the idea of determining corners vs straights on the racing line and detecting the actual limits of the car.

So, other than the artificial driver suddenly becoming immensely stupid on the first lap, everything else has been going well.
So I've just modified my firewall and had a few users successfully connect to the server I use for the artificial driver to drive. He doesn't drive safely, so if you join the server please just spectate for now. You can join the server: SRS:TestingGrounds

lfs://join=SRS:TestingGrounds
Quote from blackbird04217 :So I've just modified my firewall and had a few users successfully connect to the server I use for the artificial driver to drive. He doesn't drive safely, so if you join the server please just spectate for now. You can join the server: SRS:TestingGrounds

lfs://join=SRS:TestingGrounds

Host not found?
It may not always be up, but there were a few visitors earlier today who came to check out the broken driver as described above. Want to know the hilarious thing? It isn't broken anymore ... at least the issue isn't reproducing currently.

There is some very weird gremlin somewhere. Gutholz was on the server as I rebuilt from a previous commit (thanks source control) and still ran into the same issue. So I went back even further and finally got a build that worked flawlessly. I stopped using the newest build because a few others had joined, and why not show the less idiotic driver...

Well, during the last hour I've been looking through every file and change in all the diffs from the previous version to the current one, and the only thing I think could remotely have the effect I'm seeing is changing the number of memories from 25 to 10. Even that sounds extremely odd as a cause, but it is the closest possibility.

All the other changes were involved with separating the virtual controller class so I could, eventually, run LFS and the VirtualController on a different PC than the AIRS Brain, for performance reasons and maybe even server stability. None of the other changes should cause the driver to remain almost exactly 10 meters to the left of the racing line he is trying to drive on. Actually, the driver logic didn't even change, beyond adding or moving the "WaitForGreenLights" state, and I can't see how that could relate.

So, I updated to the latest revision (the one just before my refactoring, the one that Gutholz watched fail) and it works flawlessly. I ran the version I have refactored... it works flawlessly. NOTHING CHANGED from the time I started seeing these issues until now, except perhaps several restarts of LFS and AIRS... What the hell. These are not fun gremlins or problems to deal with, and I'll be happy if the driver never again attempts to drive 10m to the left of the racing line.

Oh, and when he was driving 10m to the left, everything in my debug visuals showed that he knew he should be 10m to the right to get on line. It showed that the desired angle was greater than what he was actually turning. At one point that had me wondering whether hitting some buttons that reinitialized the controls could have caused the problem, but I even tried recalibrating them to rule out that possibility. Honestly, I'm at a loss as to what could possibly cause this behavior.

Now on to some more fun stuff: Forward Progress.
After a little bit of trouble (from running another copy of LFS and forgetting to turn off its OutSim and OutGauge messages), I think I fixed a few issues that I ran into with detecting the artificial driver's car.

First, I had previously been checking to see if the car was player controlled, which worked great while I was doing this solo, since the artificial driver controls the only "player" car. But when other people joined the server and jumped into the session, this confused the artificial driver code, a lot. That was easy to solve.

When that wasn't confusing the driver, there was a situation where the driver would constantly try to start the car, even though the car was already turned on, because he was getting the wrong OutGauge info. I'm not sure how that happened; maybe it was related to the previous issue, or because during my tests OutGauge was sending packets (from two copies of LFS) when I wasn't expecting it. This has also been addressed, so I think the AI driver will behave better with other players on the server now, but only time will tell.
LFS will send copies of ALL drivers' OutGauge data in a lot of circumstances. You have to filter by ID. There's also an option in the OG settings that allows you to control the circumstances in which OG data is sent.
Yea, I had already added a filter for the PLID of the artificial driver, so that issue won't pop up again.
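That filter amounts to something like the sketch below (the packet is shown as a plain dict with a hypothetical field layout; the real OutGauge packet format is defined in the LFS InSim documentation):

```python
TARGET_PLID = 12  # hypothetical player ID of airs_artificial_driver

def filter_outgauge(packet, target_plid=TARGET_PLID):
    """Return the OutGauge packet if it is for our car, else None.

    With multiple players connected, LFS can send OutGauge data for
    every car, so anything not matching our driver's PLID is dropped
    before the rest of the driver logic ever sees it.
    """
    if packet.get("PLID") != target_plid:
        return None  # some other player's gauges; ignore
    return packet
```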

Given the issues I saw with the prediction unit, the jumpiness in the AIRS display, and a general curiosity, I think my next plan of action will be to find a way to accurately determine the difference between the LFS world time and the AIRS world time*. I know there is a delay in the communications, and I assume this delay to be 50-100ms, but I will try determining it, and maybe find a way to make the display run smoothly even if it needs some form of client prediction**.

The refactoring is going pretty well; I just need to finish removing the old bits of code that have been replaced, and hope that the issue with the first lap 10 meters to the left of the racing line never appears again. It didn't appear to be caused by the refactoring though, perhaps by a change or two before it, but again, I went through each change in each file since the copy that worked and found nothing that would have changed that behavior, so it was a strange issue that has since disappeared.

I've also had some interest from someone who may want to try creating some more track layouts to get the driver working on different tracks, but to do that I'll need to modify some of that code again for improvements, probably something I should have done a long time ago anyway. This won't be that difficult; I just need to add an additional step that sorts the track edge reference points by nearest before processing them - although I may end up removing the curve code I wrote.***
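That sort-by-nearest step could be a greedy nearest-neighbour pass like the sketch below (function names are mine; a real layout with forks or very tight hairpins would need more care than this):

```python
import math

def order_by_nearest(points, start):
    """Order track edge reference points by repeatedly walking to the
    nearest remaining point, starting from `start`.

    points: list of (x, y) cone positions in arbitrary order
    start:  (x, y) position to begin from, e.g. the start/finish line
    """
    remaining = list(points)
    ordered = []
    current = start
    while remaining:
        # pick the unvisited point closest to where we currently are
        nearest = min(remaining, key=lambda p: math.dist(p, current))
        remaining.remove(nearest)
        ordered.append(nearest)
        current = nearest
    return ordered
```

This removes the need for layout creators to place cones in strict order, and a badly placed reference point can still be moved afterwards.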

So the overall list of things I want to do with the project:

1) Remove the old bits of code from the new refactoring job.
2) Get an accurate calculation of the LFS-AIRS time difference.
3) Improve the layout parsing to not be so strict about ordering.
4) Remove the parts of the "to curve" code in the layout parsing where it caused issues.
5) Process the track to create reference points where corners, straights and braking points are.
6) Detect if the car is accelerating or decelerating at the limit.
7) Detect if the car is understeering or oversteering.
8) Create a new driving state behavior that uses the braking point references and limit detection to modify the braking point.

Hopefully when I get these steps complete the driver will have a clearer sense of the information (hopefully not so jumpy?), and with any luck, modifying brake points and driving at the limit of the car's capabilities should make the driver faster. I would really like to see a time in the 57 to 58 second range on FE1 with the XRG once the driver understands the limits of the car. I would be shocked and amazed if the times were in the 53 to 55 second range. This is not world record pace by any means, but currently the PB of airs_artificial_driver is 1:01.70.


*I'm not entirely sure how to do this accurately, as there are actually many parts to the problem, including the network latency. I should be able to figure that out with the InSim ping packet, but that would be the InSim delay and not necessarily the delay of the OutSim packet, which contains vital information about the location of the car. Actually, I think OutSim/OutGauge are the required packets for the driver to work, with InSim being great for the other car locations and general state. Since I run the AIRS world at a fixed time step, my thought is to take the network latency, some InSim state information about LFS timing (need to look it up), and the time stamps in the OutSim/OutGauge packets, and from this information I should be able to determine how much time has passed since the information was updated.

**Ultimately I am a bit confused why this didn't work previously, since I have essentially already added this for the OutSim packets that are used for the artificial driver's car, so it is apparent to me that something must be wrong, but I've taken several looks into it and it seems correct. I'll keep digging, and if I can get a smooth display to work, that should, in theory, make the prediction unit also become smooth.
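The client prediction I have in mind is essentially dead reckoning: extrapolate the last known position along the last known velocity for however much time has passed. A constant-velocity sketch (names are mine; the OutSim acceleration could be folded in the same way for a better estimate):

```python
def predict_position(pos, vel, packet_time, now):
    """Extrapolate a car position forward from the last OutSim update.

    pos, vel:     (x, y, z) position (m) and velocity (m/s) from the packet
    packet_time:  timestamp of the packet, in seconds
    now:          current AIRS world time in seconds, on the same clock
    """
    dt = now - packet_time  # how stale the packet is
    return tuple(p + v * dt for p, v in zip(pos, vel))
```

With an accurate LFS-AIRS time difference, `dt` becomes meaningful and the displayed car should glide between packets instead of jumping.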

***The curve code was initially meant to make the placement of track edges easier by creating a curve that went through all the reference points. Initially this sounded like a great idea, because it means fewer cones need to be used to create the corners. In hindsight it is not working quite as expected in all scenarios, particularly when there is a long straight section. The problem is that the curve code can, in some situations, create a weird curve along a straight that sort of doubles back on itself. I ran into this issue in particular when I was trying to make a layout for the dragstrip (to get the artificial driver launching on green lights better); unless I placed a lot of cones all the way along the straight, these silly curves came into play. I don't think it should be too hard to remove.
I spent the night removing a bunch of files from the codebase and ran into a few troubles with the new timer, but I got it figured out, so everything is working as it was before. Just before I started removing the files, I had the artificial driver running laps and he was on a streak. I'm not entirely sure what was different (no logic was changed, at all), but he was running faster than I had seen before, beating his 1:01.70 PB down to a 1:01.44! It was pretty neat to watch. I always test with the same airs_driver setup and no wind; his fuel load changes from time to time, but I had not messed with it. He does randomly have better runs now and then; latency and framerate probably help a lot.

There is still plenty in the code base that does not need to be there; all the visualizer code is now useless and no longer functioning, but I have a hard time just deleting it, because it took time to write and was neat to see. I guess I half think I'll want it in the future, even though that would take effort to get working again. There are also other files, like my deformable tire, point mass system and some basic physics, that I added to the project back when I was swinging back and forth about writing my own physical world for the testing.

There is still a LOT of code that contributes to the racing line computation, all the communications/data collection with LFS and the many different states for the driver. But for those interested, the code base is currently at about 226 files and 38000 lines of code (including 7500 blank lines, 9000 lines of comments). I'd say at least 10% to 15% is not used in the way the project is working now.

With that out of the way, I also started working on tracking the time a bit better, and found the InSim packet TINY_GTH, which allows me to get the time from LFS; I can then compare that to the time OutSim and OutGauge are stating. From the printout I see the latency of a full round-trip message through InSim is about 30-80ms, with spikes up to 150ms.

Both OutSim and OutGauge are typically 30-50ms behind the time I grab, which leads me to believe they are actually delayed by 50-90ms, but I could be wrong. I don't have a great way to show this information, so for now it just spews through the debug console and isn't the easiest thing to actually watch, and I'm taking the accuracy with a grain of salt.
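One way to turn those round-trip measurements into an actual clock offset is the classic symmetric-delay estimate used by NTP-style sync (a sketch; variable names are mine, and the assumption that the delay is split evenly both ways is exactly the part to take with a grain of salt):

```python
def estimate_clock_offset(t_send, t_lfs, t_recv):
    """Estimate the LFS clock offset relative to the AIRS clock.

    t_send: AIRS time when the time request (e.g. TINY_GTH) went out
    t_recv: AIRS time when the reply arrived
    t_lfs:  the LFS timestamp carried in the reply
    Assumes the one-way delay is half the round trip, so
    offset = t_lfs - (t_send + round_trip / 2).
    """
    round_trip = t_recv - t_send
    offset = t_lfs - (t_send + round_trip / 2.0)
    return offset, round_trip

def packet_age(t_packet, t_now, offset):
    """How stale an OutSim/OutGauge timestamp is, in AIRS time.

    t_packet is on the LFS clock; subtracting the offset converts it
    to the AIRS clock before comparing with t_now.
    """
    return t_now - (t_packet - offset)
```

Averaging the offset over many exchanges, and discarding the spiky round trips, should smooth out the 30-80ms jitter.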

I've also added a way to guess at the order of the reference points without requiring the cones to be added to a layout in order. I've yet to test this; I need to make a new track to do so (any suggestions?), but if it works as expected, maybe someone could help make layouts so the driver knows how to drive at other locations. In any case it should allow a reference point to be moved afterwards if it is found to be problematic...

So tonight was a very productive night even though one of the items remains untested and not all the timing information has been displayed how I initially wanted.
The classic BL1 might be a fun track for the AI to learn.
