Already been asked, at the start of this page or the end of the first...

The only question that remains about how to do this is the reference points themselves, or rather the instructions/memories that are attached to them. Of course that is the most important part of the project, but it is also the part that has me scratching my head.

I am fiddling with the idea mentioned above, having a driver race around while recording the data - but that seems almost trickier than somehow telling the AI to just go. So yes, thanks for pointing out the obvious, but I have already been caught up in that knot, even though I have most of everything else planned out - in a nice clean way, so the AI is quite independent of the other systems involved. The car needs to talk to the PhysicalSensor every frame (or often), the car needs to be talked to by the AIController, and the GameWorld needs to talk to the AIWorld once at load-up of the track.

But none of this will be set in motion until I can find a way for the AI to be instructed, be it by pre-recording a lap or by learning from scratch. I like the sound of pre-recording, but I have a lot to look into. How do you mark an event as important enough to record? A driver changes throttle/brake input constantly, but I need to record only the large changes.
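One possible way to handle that (just a sketch - the class, threshold and field names are made up for illustration) is a dead-band recorder that only keeps a sample when an input has moved far enough from the last value it recorded:

```cpp
// Sketch: record a throttle/brake sample only when it has moved by more
// than a dead-band since the last recorded value. Threshold is illustrative.
#include <cmath>
#include <vector>

struct InputSample {
    float time;      // seconds since lap start
    float throttle;  // 0..1
    float brake;     // 0..1
};

class SignificantInputRecorder {
public:
    explicit SignificantInputRecorder(float deadBand = 0.15f)
        : mDeadBand(deadBand) {}

    // Call every frame with the driver's current inputs.
    void Update(float time, float throttle, float brake) {
        if (mSamples.empty() ||
            std::fabs(throttle - mSamples.back().throttle) > mDeadBand ||
            std::fabs(brake    - mSamples.back().brake)    > mDeadBand) {
            mSamples.push_back({time, throttle, brake});
        }
    }

    const std::vector<InputSample>& Samples() const { return mSamples; }

private:
    float mDeadBand;
    std::vector<InputSample> mSamples;
};
```

That still wouldn't separate a deliberate braking point from a small corrective tap, but it would at least cut the constant stream of tiny changes down to the big ones.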
#77 - col
Quote from blackbird04217 :
The only question that remains about how to do this is the reference points themselves, or rather the instructions/memories that are attached to them. Of course that is the most important part of the project, but it is also the part that has me scratching my head.

For a human, reference points change; they are just a guide.
If your AI is to drive by references, you need to develop some sort of system that allows it to update reference points, e.g. discard old ones and create new ones. For this, you need to give it vision!
IMO to create an AI that doesn't 'cheat' you'll need some sort of visual processing system.
Quote :

I am fiddling with the idea mentioned above, having a driver race around while recording the data - but that seems almost trickier than somehow telling the AI to just go.

No, an AI that can start from scratch will be MUCH more difficult than one that is given a close approximation as a starting point.
One non-trivial issue with starting from scratch is that searching a solution domain for an optimum can be very difficult.
When 'searching' for an optimum line, you can think of the solution domain as a landscape. Searching in a simple way, e.g. "is the new line 'better' (higher) than the old one?", will rarely find the optimum. This is because the landscape has many local 'hills', so naively following 'is it better?' just leaves you stuck at the summit of one of these rather than on the peak of Everest (the optimum line).

If you start somewhere in the foothills of Everest, you can use more basic methods to find a route to the summit.
Starting randomly somewhere on Earth is going to be a tougher challenge.
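To make the landscape picture concrete, here is a toy sketch (purely illustrative, not part of the project) of a greedy search stalling on a local hill unless it starts close to the true peak:

```cpp
// Toy illustration: greedy hill climbing on a bumpy 1-D "fitness landscape".
// Started from a random point it tends to stall on a local hill; started in
// "the foothills of Everest" (near x = 5) it reaches the global peak.
#include <cmath>

// Bumpy landscape: small ripples on top of one big peak at x = 5.
double Fitness(double x) {
    return -(x - 5.0) * (x - 5.0) + 0.5 * std::sin(10.0 * x);
}

double HillClimb(double x, double step = 0.01, int iterations = 10000) {
    for (int i = 0; i < iterations; ++i) {
        if      (Fitness(x + step) > Fitness(x)) x += step;
        else if (Fitness(x - step) > Fitness(x)) x -= step;
        else break;  // stuck on a local summit
    }
    return x;
}
```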


cheers

Col
Quote from col :For this, you need to give it vision!
IMO to create an AI that doesn't 'cheat' you'll need some sort of visual processing system.

Huh? Either I'm being massively stupid or you're using visual very loosely, but I don't see why it needs to be visual at all. If you can programmatically reference things by points, which is all your eyes and brain do together, why are you proposing making it rely on visual inputs? Why make it hard on yourself when you can already reference the location of other vehicles and your vehicle on the track programmatically?
#79 - col
Quote from the_angry_angel :Huh? Either I'm being massively stupid or you're using visual very loosely, but I don't see why it needs to be visual at all. If you can programmatically reference things by points, which is all your eyes and brain do together, why are you proposing making it rely on visual inputs? Why make it hard on yourself when you can already reference the location of other vehicles and your vehicle on the track programmatically?

blackbird04217 has stated that he wants to build an AI that has only the same inputs available to a human. Anything else would be cheating.
IMO for this rule to be followed, any system of synthetic markers placed by the developer would be a 'cheat'.

Personally, I would do as you suggest, using all the data on the track and other cars that is available from the game engine. However, that is not what this project is about!

And FWIW, your eyes don't reference things by points at all. One of the problems with this kind of AI project is that what people consciously think they are doing and considering, and what information is actually being processed subconsciously, are two completely different things.

Col
Quote from col :blackbird04217 has stated that he wants to build an AI that has only the same inputs available to a human. Anything else would be cheating.
IMO for this rule to be followed, any system of synthetic markers placed by the developer would be a 'cheat'.

Personally, I would do as you suggest, using all the data on the track and other cars that is available from the game engine. However, that is not what this project is about!

As someone said on the first page: you need to know the limits of the track. While I am trying to give the AI what the player has to work with, I also know there are *some* constraints to deal with. I have considered using each vertex in the terrain as a reference point, which would put far more reference points in the world for the AI to work with and thus give it more "player-like input". But it does not need to visually see. One of my examples above shows the bare minimum needed to go around the Blackwood turn: go under the bridge and start moving to the outside; look for the distance markers and brake; turn towards the tires, getting as close to them at the apex as you can. This turn could be taken with 3 to 5 reference points alone - in a completely black environment.

I'm not here to try reading back screen data; we all know computing power can't handle that, so it would be a massive disadvantage to the AI. Mainly I want to remove "HERE, DRIVE HERE" from the AI. I want to make them MORE AWARE of the environment, and to use this, combined with memories or instructions (or something yet undefined), to go around the track. This is where some issues are cropping up in the design, and I am still looking for ways to solve them. I've been looking for about three days, but then, I was looking for a challenge.

Quote :
And FWIW, your eyes don't reference things by points at all. One of the problems with this kind of AI project is that what people consciously think they are doing and considering, and what information is actually being processed subconsciously, are two completely different things.

Col

I kinda beg to differ. I completely agree that what everyone thinks they do is different from what they actually do on a subconscious level. But you can't tell me we don't work with points. Imagine you were placed in a completely white room, with the lighting set up *just* perfectly so every surface is the exact same white, with no bumps and no surface changes - 100% flat, everything the same color - and you didn't cast a shadow on any surface. This is a very hypothetical situation, but you know the result because it can be tested in the computer world: you would be completely confused. The only thing telling you that you are on the bottom of the room is the feeling of gravity. You could run full speed from one end and you would never know when you would SMACK into the other wall. Put just a small handful of irregularities in the room, and you can now tell where you are. These irregularities are reference POINTS.

I see why you might think we don't see in points, though: out in a massive field with no horizon, only grass below and empty blue sky above, you can still walk in a circle and stand where you once were - based on millions of these reference points, because each blade of grass is different and our mind is powerful enough to compute the differences and say "here is where we want to be". I hope this illustrates how we do actually use reference points; the only downside for the AI is that it has fewer points than the player.

So back to the first quote you made, where these 'synthetic points' are considered cheating in your mind. Why would making a single point with minimal information - a position in the world and a name/id - be cheating? You see a cone, identify it as a different cone from the others on a very subconscious level, and you can estimate where you are by estimating the direction of the cone and how far it is from you. I don't see how turning the cone into a point of information/interest is cheating, since that is roughly what it is. Surely the brain sees a single cone and can interpret multiple points immediately: at least four on the base and one at the top, and any dirty spots become good points as well. I just want to know how turning that into a point for the AI to detect means the AI is cheating, when it is not giving the AI a direct "GO TO HERE BY DOING THIS" line. Removing that line is one of the many goals of the project.


Quote from col :
No, an AI that can start from scratch will be MUCH more difficult than one that is given a close approximation as a starting point.

Well, take what I said there loosely, as the ending was semi-sarcastic but mostly pointing out that either route is difficult. I don't see why the AI starting from scratch with a learning algorithm would be any different from having no learning algorithm and starting from an example. I get your mountain example very clearly, but I don't see how having a starting point fixes that issue. Either way, it won't be easy to capture data from a human player driving a track, because I need to try separating the layers of driving: Attempt, Prevention and Correction. The only layer the AI records is Attempt, stored in little memories/instructions based on reference points. Prevention is second nature, and Correction tries to keep the AI on track when the car starts sliding out from under them. How do you define a braking point? First contact of the brakes? What if the brakes are only used to keep the car balanced over some form of bump? That would be the Prevention layer. That is where it becomes "as difficult as" teaching the AI to go from scratch. Again, that phrase is used loosely.
#81 - col
Quote from blackbird04217 :... One of my examples above shows the bare minimum needed to go around the Blackwood turn: go under the bridge and start moving to the outside; look for the distance markers and brake; turn towards the tires, getting as close to them at the apex as you can. This turn could be taken with 3 to 5 reference points alone - in a completely black environment.

If the reference points are used by the AI to triangulate its position and allow it to monitor its rotation and position with respect to an internal track map, then that would work. (But in that case, you are giving the AI a simplified form of vision - just one that needs to be constructed per track.)

On the other hand, if the reference points are just used for "start braking now", "start turning now", then the whole system will fall apart in all but the simplest of environments.
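A minimal 2-D sketch of what such triangulation could look like (the names and the maths are illustrative; it ignores the two-solution ambiguity and the noise handling a real system would need):

```cpp
// Sketch: estimate the car's 2-D position from estimated distances to two
// reference points with known map positions (circle-circle intersection).
// A real system would need a third point or a heading to pick between the
// two solutions, and would have to cope with noisy distance estimates.
#include <algorithm>
#include <cmath>
#include <optional>
#include <utility>

struct Vec2 { double x, y; };

std::optional<std::pair<Vec2, Vec2>> Triangulate(
        Vec2 a, double distA,     // reference point A and estimated distance
        Vec2 b, double distB) {   // reference point B and estimated distance
    const double dx = b.x - a.x, dy = b.y - a.y;
    const double d = std::sqrt(dx * dx + dy * dy);
    if (d < 1e-9 || d > distA + distB || d < std::fabs(distA - distB))
        return std::nullopt;  // circles don't intersect: estimates inconsistent
    const double along = (distA * distA - distB * distB + d * d) / (2.0 * d);
    const double h = std::sqrt(std::max(0.0, distA * distA - along * along));
    const Vec2 mid{a.x + along * dx / d, a.y + along * dy / d};
    return std::make_pair(Vec2{mid.x + h * dy / d, mid.y - h * dx / d},
                          Vec2{mid.x - h * dy / d, mid.y + h * dx / d});
}
```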
Quote :

I kinda beg to differ. I completely agree that what everyone thinks they do is different from what they actually do on a subconscious level. But you can't tell me we don't work with points.

You (i.e. your conscious mind) will focus on points of reference, but you don't control the car with your conscious mind - it's not fast or coordinated enough.
If I created a special hypothetical visor for you that only allowed you to see what your conscious mind was focused on, you would not be able to drive.
It has been discovered that skilled sportsmen cannot function at anything like their usual ability if they think about the details of what they are doing.
That's because your conscious mind (the part that thinks in points of reference) just can't do it.

Part of the reason that focusing your conscious mind on things like braking and turn-in points works is that it keeps the conscious mind busy while the subconscious part gets on with the real driving.

Quote :

Imagine you were placed in a completely white room, with the lighting set up *just* perfectly so every surface is the exact same white, with no bumps and no surface changes - 100% flat, everything the same color - and you didn't cast a shadow on any surface. This is a very hypothetical situation, but you know the result because it can be tested in the computer world: you would be completely confused. The only thing telling you that you are on the bottom of the room is the feeling of gravity. You could run full speed from one end and you would never know when you would SMACK into the other wall. Put just a small handful of irregularities in the room, and you can now tell where you are. These irregularities are reference POINTS.

There are specific physical parts of your brain dedicated to things like processing shadows caused by light from above. If you take away natural visual cues of this sort, people (and animals) become confused. A few single points won't be enough to fix this.
In your hypothetical room, even with the points, a human would soon become very disoriented.
Quote :

I see why you might think we don't see in points, though.

I know we don't see in points.

I just pulled my copy of 'Visual and Auditory Perception' by Gerald M. Murch, and searched in vain for the chapter about how we see in points.

We see simultaneously in different ways, e.g. different parts of the eye distinguish colour and contrast. We seem to have a very acute ability to perceive contours and edges.
At a higher level we are very good at separating a scene into distinct objects, and use a variety of processes to do this.
It's all very complicated - but points have NOTHING to do with it.

Quote :

out in a massive field with no horizon, only grass below and empty blue sky above.

That's just nonsense. How can there be grass below, blue above, and yet no horizon?!
Quote :

So back to the first quote you made, where these 'synthetic points' are considered cheating in your mind.

No, not in my mind. I would use every trick in the book to make the AI SEEM more human if it were my project.
But going by your earlier posts suggesting that any extra knowledge that's not available to the player is cheating, these points are cheating in your mind.
I guess you must have changed your mind?
Quote :

I just want to know how turning that into a point for the AI to detect means the AI is cheating, when it is not giving the AI a direct "GO TO HERE BY DOING THIS" line. Removing that line is one of the many goals of the project.


Going by what seemed to be your initial restrictions, a point that has been placed by the designer or programmer specifically to aid the AI would not fit with the goal of only using input available to a human driver, and would therefore be cheating.
Quote :

I get your mountain example very clearly, but I don't see how having a starting point fixes that issue.


Maybe you didn't get it so clearly then?
Quote :
Either way, it won't be easy to capture data from a human player driving a track, because I need to try separating the layers of driving: Attempt, Prevention and Correction. The only layer the AI records is Attempt, stored in little memories/instructions based on reference points. Prevention is second nature, and Correction tries to keep the AI on track when the car starts sliding out from under them. How do you define a braking point? First contact of the brakes? What if the brakes are only used to keep the car balanced over some form of bump? That would be the Prevention layer. That is where it becomes "as difficult as" teaching the AI to go from scratch. Again, that phrase is used loosely.

There is no such thing as 'second nature' in a computer program.

It seems to me that to get closer to something that might work, you have to stop thinking so much like a human and more like an AI.
You are a computer program - you have no sense of good or bad, success or failure, you have no motivation or desire.
For these to be simulated, your creator must provide you with various metrics by which to assess the validity of your output!
------------------------

You need some way for your AI to store and match (e.g. recognise) patterns.

These patterns might be the relationships between position, velocity, rotation, the relative positions of any reference points in the environment, throttle input, brake input, steering input, expected reaction, success, failure... etc.
In addition, you will still need your internal track map combined with an internal ideal racing line.
Also, using the same or similar pattern matching, you would want a separate sub-process to adapt to other cars in the environment.

This might be one way to build in the desired ability to adapt.
It would also make it fairly trivial to use existing human replays as training data.

Interestingly, there would be so much to process that you would have to split it all up over time, giving something like the delayed reactions of a human.
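A very rough sketch of what such a pattern store might look like (the fields, weights and names are purely illustrative; a real system would need much more care about feature scaling and generalization):

```cpp
// Sketch: a naive pattern memory. Each pattern pairs a situation snapshot
// with the control outputs used in it; recall is nearest-neighbor over a
// weighted distance. Fields and weights are purely illustrative.
#include <limits>
#include <vector>

struct Situation {
    float speed;          // m/s
    float yawRate;        // rad/s
    float distToTurnIn;   // estimated meters to the next turn-in reference
};

struct Controls { float steer, throttle, brake; };

struct Pattern { Situation when; Controls what; };

class PatternMemory {
public:
    void Store(const Situation& s, const Controls& c) { mPatterns.push_back({s, c}); }

    Controls Recall(const Situation& s) const {
        double best = std::numeric_limits<double>::max();
        Controls result{0.f, 0.f, 0.f};
        for (const Pattern& p : mPatterns) {
            const double d = Distance(p.when, s);
            if (d < best) { best = d; result = p.what; }
        }
        return result;
    }

private:
    static double Distance(const Situation& a, const Situation& b) {
        const double ds = (a.speed - b.speed) / 10.0;                // scale: 10 m/s
        const double dy = (a.yawRate - b.yawRate) / 0.5;             // scale: 0.5 rad/s
        const double dt = (a.distToTurnIn - b.distToTurnIn) / 20.0;  // scale: 20 m
        return ds * ds + dy * dy + dt * dt;
    }

    std::vector<Pattern> mPatterns;
};
```

Feeding human replays in would then just be a matter of calling Store() for each recorded frame.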

Neural nets don't get used much in game AI (maybe they do now, I'm a bit out of date), but for the sort of thing you're attempting, they would be a good place to start researching.

Fuzzy logic would be another good thing to research.

I really think it's a shame that you're not interested in pursuing visual processing. If I had the time, that's where the interesting stuff is.

Even if it takes lots of CPU, there's nothing to stop you using a separate PC just for the AI, with a webcam for input and a cobbled-together output to a virtual controller device.

It would be very cool if you could get it to drive different sims!


Col
Umm, what?

------

You just completely confused me on several levels. The thing bugging me most is the "no horizon" statement. What I meant was no visible horizon of mountains, trees, etc. - just grass forever.

About the "we don't actually see points" I wasn't going into the miracle's of how the eye works and how the brain interprets these signals. Though if you wanna turn it around, in the game/simulation you see points; they are called pixels. But besides that, I wasn't being 100% literal with the "we see points" but you can see how I broke it down into "we see points", if not ask again and I will try explaining both those situations above again.

Quote :
In your hypothetical room, even with the points, a human would soon become very disoriented.

That was exactly the point I was making: a human with no appropriate shadows would be very confused; add a few dots and they would at least know where to stop running before they slammed into the wall again, even if the dots are not on the wall they are running towards. My point there was that points are used for reference - no other knowledge needed.

Quote :On the other hand, if the reference points are just used for "start braking now", "start turning now", then the whole system will fall apart in all but the simplest of environments.

Well, that might turn out to be the case for this project, but who really knows the outcome.

Quote :If the reference points are used by the AI to triangulate its position and allow it to monitor its rotation and position with respect to an internal track map

What does position mean if it is not relative to something else? Nothing. I am trying to make it so the AI doesn't care about its absolute position but DOES care about its position relative to other objects: how close am I, and in what direction is it? I am not saying the AI won't triangulate its position based on the points - that is a very plausible thing to do - though it will still be an estimate.


Quote :No, not in my mind. I would use every trick in the book to make the AI SEEM more human if it were my project.
But going by your earlier posts suggesting that any extra knowledge that's not available to the player is cheating, these points are cheating in your mind.
I guess you must have changed your mind?

This didn't tell me how you consider it cheating. My earlier posts said I want to get rid of information such as a direct racing line to follow - not that a developer couldn't place a cone that holds reference point information. In either case, I don't see how knowing these reference points gives the AI more information than the player has. So how does it cause the AI to have more information than the player? Please don't dodge the question if you have an example of it cheating.

Quote :
Quote from me :I get your mountain example very clearly, but I don't see how having a starting point fixes that issue.

Maybe you didn't get it so clearly then?

No, I understand 100% what you mean: if you're on a peak and comparing something you tried, you will consider it worse than what you already have - even when you're not at the top of Everest yet. But I don't get how having a "starting point" helps at all in this situation, as it could be that the trainer/starting point is actually on that smaller peak - in which case the AI never goes anywhere else.

It is important to know that I haven't had major plans for the AI actually learning the track; I've toyed around with it here and there - and that's where that comment originally came from, me toying with the AI learning from nothing, or spectating a human trainer that gives it its info.

Quote :It seems to me that to get closer to something that might work, you have to stop thinking so much like a human and more like an AI.

Possibly, but currently I have been thinking on a higher level, and have only dipped into the AI level a few times. If you noticed, a few posts up I wrote that I have it all formed together except for how to achieve these instructions/memories - whichever it is. Either way, the AI brings up a reference point and goes "this is where I turn".

Quote :I really think it's a shame that you're not interested in pursuing visual processing. If I had the time, that's where the interesting stuff is.

This isn't remotely interesting to me. Actually, I despise graphics coding, and even though this isn't graphics coding, it is the reverse of it. My interest is in the AI part of this equation, though it was more of a passing thought.

To be honest, I am becoming more doubtful about the project reaching the development stage. If I can come up with a clear way to use these reference points and train the AI somehow, then I might be more inclined, but I still have a lot of thinking to do there and it is becoming the brick wall. I know there are ways of doing it, but I am still weighing the best way to achieve it. So far my ideas fail in some regards even where they work in others.
Any progress on the coding front?
Not exactly. I haven't started on the coding front, as I am still working on the technical design. I've been a bit busy the last couple of days with family issues. I pretty much stand at the same spot where I ran into the brick wall, but for an update, this is what I've got.

Most of the high-level, non-technical stuff has been decided.
I started working on the technical design, to prove how things interact with the other parts.
I am surprised that I was able to keep it to one point of entry: AIWorld.
- Once, during loading, all reference points are placed into the AIWorld by exact location. (Possibly a terrain mesh and car meshes too, for visual obstruction.)
- At the beginning of the frame the AIWorld is notified of the AIPhysicalSensor for each car, including tire traction, g-forces etc.
- At the end of the frame the AIWorld gives back the AICarController information.

The only other thing needed, which probably happens at load time, is to load a memory of the track into the AIDriver.
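Roughly, that single point of entry could look like the sketch below (class names from above; the member details are only guesses at what might be needed):

```cpp
// Sketch of the single point of entry described above. Class names follow
// the post (AIWorld, AIPhysicalSensor, AICarController); members are guesses.
#include <vector>

struct Vec3 { float x, y, z; };

struct ReferencePoint { int id; Vec3 position; };

struct AIPhysicalSensor {            // filled in by the game each frame
    Vec3  velocity;
    Vec3  gForces;
    float tireTraction[4];           // estimated grip per tire
};

struct AICarController {             // read back by the game each frame
    float steer, throttle, brake, clutch;
    int   gear;
};

class AIWorld {
public:
    // Once, at track load: exact reference point locations (and possibly
    // terrain / car meshes for visibility checks).
    void LoadTrack(const std::vector<ReferencePoint>& points) { mPoints = points; }

    // Start of frame: the game tells the AI what its car is feeling.
    void BeginFrame(int carId, const AIPhysicalSensor& sensor) {
        (void)carId;
        mLastSensor = sensor;   // a real version would keep per-car state and decide here
    }

    // End of frame: the AI hands its control outputs back to the game.
    AICarController EndFrame(int carId) {
        (void)carId;
        return AICarController{0.f, 0.f, 0.f, 0.f, 0};  // placeholder output
    }

private:
    std::vector<ReferencePoint> mPoints;
    AIPhysicalSensor mLastSensor{};
};
```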

--------

Which leads me to the large problem of how to make the memory - whether the memory is actually 'memories' or 'instructions'; I use both terms as I haven't decided how it is to be used. But instructions would mean the AI learns from an example given by a consistently driven developer-controlled car: some special mode to let the AIDriver spectate a developer and build its knowledge of what to do and where to do it...

The problem I am having is that by recording a developer's actions, I will likely get information that I don't want the AI to have: "correct this little oversteer here" or "tap the brake lightly to stabilize the car here". I don't know if this information is good for the AI or not, and there are so many changes in input that I can't just register every change.

I have thought of adding some sort of reference point that tells the AI to aim near it once it passes the one it was previously aiming at, so there would be one of these per corner - however, that drifts back towards what I wanted to remove: the "drive to here, then to here" line. But I've been trying to define this type of point only in terms of the other reference points around it, so the instruction is not "DRIVE HERE" but rather "aim to get these X reference points into this orientation".

So a few problems remain:

Getting the memory/instructions, and interpreting from them where to go. I will post later; I have to go deal with family issues and people are rushing me a lot right now.
Hi!

I read the first page and that's about it... so sorry if I'm behind, but I've noticed this argument going on a lot.

Quote :- I'm oversteering.
vs
- The visually perceived angular momentum of the car is greater than anticipated from prior experience given my current input, combined with a weakened/reversed force feedback that is a result of the suspension geometry and aligning torque of the front wheels when the car points in a different direction than the direction of travel, combined with the increasing loudness and pitch of the sound that the tyres generate due to micro-vibrations of the rubber sliding and hopping on the track surface.

In a real race scenario, you do use the second way of thought.
First, you steer into a corner.
Next, you expect a reaction.
If the reaction is not what you expected, you try to think why.
If the car is beginning to point more towards the turn than expected, you know from experience that you are oversteering.
This is confirmed when you begin to hear the tires squeal.
You apply counter-steering and wait for the car to stabilize so that you are holding your turn.
You then apply more counter-steer and pull out of the slide.


That's how I drive.

For a test, take your fastest setup for your favorite course and increase the rear sway bar rate by 20-50%.

Next, go and drive a DIFFERENT car around the track for a few laps.
Then drive your modified-setup car, but without thinking ahead that you will need to drive differently.

You will realize that your brain will take the above path of reasoning.

The thing that separates a human from the computer is that the computer perceives the "impending motion" of a slide, whereas a human perceives the effects of a slide and has to react.

For further proof, make a new setup for the FZ5:
Adjust the front springs and free length so that you have the softest, longest springs possible.
Set the front sway bars to 0.

Set the rear springs to max and the ride height the same as the front.
Set the rear sway bars to max.

Set the brake bias at whatever you would normally use and turn traction control off (ABS can be on, I guess).

If you set the AI to drive around the track and try to follow them in this setup, it will be impossible.
Even with the AI on easy mode, they will pass you very easily.
You are constantly reacting to slides, whereas they are sensing the slide before it has a chance to occur.

This is the problem with the AI in LFS.
No, my whole point was that your brain only "internally" uses that train of thought, but it is not what you consciously perceive. All the info that arrives at your mind is "oversteer, do something!" If you had to actively, consciously go through that process again and again, you'd fail at every oversteer situation because you'd be far too slow to react. All that really arrives is the concise, filtered-down information of what is happening, and once you have enough experience and repetition under your belt, even that is short-circuited by muscle memory that reacts correctly before you even know what happened. In cars it might not be as obvious, but driving motorcycles would be disastrous every single time were it not for that natural automation.

What I asked with that paragraph was just which one of these scenarios - which input level - blackbird meant with "only human input."
I'm going to have to agree with Android on this one: the only thing that matters is that you know you're oversteering. Any one thing, or several combined things, in that "second way of thought" is how a human can perceive that they are oversteering. The most important effects are physical feelings and knowledge of tire traction (via sound, feel through the steering wheel, etc.). That is where the AI's sensing will stop in this situation.

Your point about humans going through the process is slightly skewed. I am sure that in a surprise instance it takes a moment for "wtf happened and what do I do", but when you're driving at the limit you know which tires are likely to start sliding, and you are already countering that, subconsciously or not. Take my favorite example: Westhill Reversed with the LX6 (you'll see another post about this shortly - still writing it in the other window). After the chicane and before the final split, the car is traveling uphill, and it is very important to keep the rear tires at their limit without going over. It is an extremely balanced act of throttle and steering input, and for fast laps it's important not to go over the threshold of the tires, as that loses time. The sim racer interprets visuals as "physical feeling", but being in the car would give you far more accurate information than just that. The tires also produce sounds, and in this situation you are listening closely to the rear tires, to keep them at the limit without going past it.

Now of course there are situations, several times each lap in that section, where the driver goes above the traction limit and starts sliding - and chances are that even before the slide, actions have already been taken to correct it. If you read a bit of the second page you will see I have a "Prevention Layer" and a "Correction Layer" which separate these things. As a driver you know you want to stay at the limit and go up that hill. Your own prevention layer eases the foot off the throttle a bit so that traction stays, and the correction layer is used when you're slipping too much. Of course, in the human brain this happens so quickly that we can't really say it happens on multiple levels.

And Android, to answer your question if I haven't yet: I want to use a "Physical Sensor" for the AI to have a feeling of its motion and the estimated traction of the tires (each one, or front and rear only).

I will be back to post a few questions about my progress as soon as I figure out the best way to ask. Some things have been taken the wrong way in this thread, probably with my own confusion added to the mix, but I want to try explaining it correctly.
Okay, I am back and going to *try* explaining a question in the correct form so that it is understood. I find that the wording is likely to be taken the wrong way, since some people never figured out my actual intention with the project - really, it's likely I have been unclear, and also likely that the intention changes slightly over time... But besides that.

As a reminder, the main intention was to try doing AI in a different way, removing some of the "cheats" that are okay for game AI but not really okay for simulation AI, and in doing so I figured I would try using only the information a human gets.

With that in mind, and with the difficult task of using these reference points as instructions/memories, I have been putting some thought into how to achieve this. So I started driving some more laps in LFS, just for fun really, and I realized something pretty important - but I also have the feeling it goes against the original intent of this project.

As I was driving around Westhill Reversed, I noticed that the perception of the left and right edges of the track is just as important as the reference points. This is not a new discovery, as I pointed it out somewhere on page 1. The left/right edges are important when it comes to forming the 'best line' through a corner or around the track. Then I began wondering if it would be possible to form the driving line from knowing only the edges of the track. The basic line would of course just take every corner as wide as possible, which is not exactly the best possible line. This doesn't sound like a very easy task, but I believe it would be mathematically possible to smooth a curve based on the left/right edges of the track... And after thinking about that, I began asking a question that a few others did: why not give the AI knowledge of this line, either precomputed or supplied from an editor of some sort...

Giving the AI knowledge of the driving line goes against some things I have said/planned since the beginning of the project. But as I was thinking about this: a human knows the line through instinct or something like it, and I was thinking the AI would use it the same way. The most important thing to know is that I do NOT want the AI to drive from point A to point B, then from point B to point C... That is the way current AI systems generally work, and I feel the placement of the AI car is too exact; their actions are robotic and they rarely make mistakes in the same manner a human does. So I began thinking that using a driving line might work, as long as the AI doesn't drive TO a point but rather towards a point.

The Idea:
--------------------------
Use the general ideas of the normal approach, but with a twist. The driver does not get to know the exact destination point, though they do get to triangulate it using their reference points. Since the driver's reference points are estimated/approximate, the destination point will change slightly from lap to lap. (How slightly depends on the driving situation and the other factors that the reference point estimation uses.)

Unlike the normal approach, I will try using fewer "DriveTo" points, meaning instead of several per corner the AI will get at most three per corner: Entry, Apex and Exit. There will be none on the straights, because the next "Entry" marker works well enough as a target. I feel this is going against the initial goals, but I also wonder if it really is, since the driver has to estimate where to drive to based on their reference points. The more reference points they can use to triangulate the position, the more accurate their target point. Meaning, if the reference points close to the DriveTo point are blocked, the DriveTo point becomes less accurate and harder to hit.

In this situation I don't think the AI uses reference points as 'braking' points anymore, although perhaps it does and I am missing a step. Perhaps what I am describing happens on a layer above the "Attempt Layer" in my previous idea. Perhaps this is a "Direction Layer", and the "Attempt Layer" tries to follow the direction as fast as it can.
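As a sketch, the data behind the "at most three points per corner" idea might boil down to something like this (just a guess at one possible representation, not a final design): each target stored as an offset from a named reference point, never as an absolute coordinate.

```cpp
// Sketch: one possible representation of the Entry/Apex/Exit idea. Each
// target is stored relative to a named reference point rather than as an
// absolute world coordinate, so the AI always has to estimate it.
#include <string>
#include <vector>

struct RelativeTarget {
    std::string referencePointName;  // e.g. "bridge pillar", "tire stack"
    float rightOfPoint;              // meters to the right of that point
    float beyondPoint;               // meters past it, along the track direction
};

struct CornerPlan {
    RelativeTarget entry;   // where to aim while braking
    RelativeTarget apex;    // where to clip the inside
    RelativeTarget exit;    // where to let the car run out to
};

using TrackPlan = std::vector<CornerPlan>;  // straights need no points of their own
```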

----------

Now this presents a very interesting problem when it comes to a blind hill, like the one at AS1. In a real situation, a human would practice that corner a few times at a slower speed, then speed it up. It's a blind spot - unknowns. The memory builds, and the reference points let you know what's behind the hill and where to aim, even if you can't see it exactly. So I wonder if my new "DriveTo" point, although hidden behind the hill, could still be estimated based on the Tree-RP and, once visible, the TireStack-RP, which would allow the AI to judge better where it is aiming. Perhaps it fails altogether, but I still need to get this working as an idea before I try coding it!

This is about as much thought as I have been able to put into this lately. My main worry is that it goes against the initial ideas - which I am becoming less sure about, to be honest. I thought I had a clear idea, but apparently I don't, since I am starting to wonder whether this fits the initial goal or not.

For those who have followed the thread and *actually understand the question*: please, I want your input!
I understand the problem very well (or so I think).
Part of the reasoning behind my thought process could be that I do not use a wheel, but a gamepad.

From what little I know about the human brain, I can say that the human brain is a rather terrible processor.
It can process a lot of data, but it can only really recognize patterns and is most efficient at processing visual data...
So if you take the corner entry example, you first have to judge your speed in relative terms... am I going slow, fast, really fast, or OH SHIT fast?
Next:
Is anything different than it was last time?
It is at this point that you start recalling how much reaction you should be getting for your inputs etc.

Next, since you are almost guaranteed not to be on the exact same line as last time, you have to figure out where you are and how to get to where you want to be.
Identify your position based on what's around you, and look to the apex (or as much as you can see of it) to gather where you would like to be.

If you are going too fast (according to your experience and speed level) you must slow down, and the amount, timing and braking force are again determined by experience.
I think you are on the right track by separating the data into layers, and with the "drive to" points... you will need more than 3 points per turn, however - either that or a "by way of" line, which could be the "ideal" line through the turn that the AI strives to take.
---
When it comes to it, I think what you need to do is create an AI that drives based on vectors just as you have said, but that is also able to learn from mistakes... otherwise it will end up taking the same line each race. It may not be the same from lap to lap, but if left uninterrupted it will be the same from race to race unless a degree of random error is thrown in.

Good luck with the project.
Well, it would be much the same from race to race; my line on a track I have enough experience/practice with is pretty close to the same from race to race. It's the small differences that add up per lap: "I went slightly wide here because of some reason", boiling down to "I missed a reference point by just a little bit (too early or too late)".

I'm not ignoring your ideas, but it is important that I hear from some of the others who have been a large part of this thread about the above discussion. I am trying to find out whether adding these "DriveTo" points goes against my original goals badly enough that the idea should not be followed. It is hard for me to decide either way, so I want the opinions of those who have been following...
Quote from blackbird04217 :
In this situation I don't think the AI uses reference points as 'braking' points anymore, although perhaps it does and I am missing a step. Perhaps what I am describing happens on a layer above the "Attempt Layer" in my previous idea. Perhaps this is a "Direction Layer", and the "Attempt Layer" tries to follow the direction as fast as it can.

----------

Now this presents a very interesting problem when it comes to a blind hill, like the one at AS1. In a real situation, a human would practice that corner a few times at a slower speed, then speed it up. It's a blind spot - unknowns. The memory builds, and the reference points let you know what's behind the hill and where to aim, even if you can't see it exactly. So I wonder if my new "DriveTo" point, although hidden behind the hill, could still be estimated based on the Tree-RP and, once visible, the TireStack-RP, which would allow the AI to judge better where it is aiming. Perhaps it fails altogether, but I still need to get this working as an idea before I try coding it!

This is about as much thought as I have been able to put into this lately. My main worry is that it goes against the initial ideas - which I am becoming less sure about, to be honest. I thought I had a clear idea, but apparently I don't, since I am starting to wonder whether this fits the initial goal or not.

For those who have followed the thread and *actually understand the question*: please, I want your input!

I think you should use a field of recognition followed by a line of decision. From that line, and your actual place on it, you start your vector to the center of the next field of recognition.

On the first lap the field of recognition would be large; this simulates not knowing the track. Within this field the AI should figure out where to turn, and could and should correct its line, angle and speed. On the line of decision the AI would create a vector to the center of the next field of recognition.

The learning curve could be achieved by having the AI recreate the field size each time (in the ideal state, which couldn't actually be achieved, it would be a point; at the beginning it would be as wide as the track and very long) after examining its speed, angle and time at the next field of recognition. After a while the AI could analyse more info, like the delta of the tire temperatures. As for randomness, you should add a random deviation from the calculated ideal points (the more experienced the AI, the smaller the deviation from the ideal points).


Something like this:

                 vector
               \    \    \
                \    \    \
                 \    \    \
  |-------------------------------|  Line of decision
  |-------------------------------|
  |                               |
  |                               |
  |                               |  Field of recognition
  |                               |
  |                               |
  |-------------------------------|
  |                               |
  |                               |
  |                               |
                Road
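In code, the shrinking field could be sketched something like this (the names and the shrink rule are only illustrative):

```cpp
// Sketch of the shrinking "field of recognition": the target area starts as
// wide as the track and contracts as experience grows; the aim point is the
// field's center plus a random deviation that also shrinks with experience.
#include <random>

struct RecognitionField {
    float centerX, centerY;   // center of the field on the track
    float width, length;      // starts at full track width and very long
};

struct AimPoint { float x, y; };

AimPoint ChooseAimPoint(const RecognitionField& field,
                        float experience,          // 0 = first lap, 1 = fully learned
                        std::mt19937& rng) {
    const float shrink = 1.0f - experience;        // field collapses towards a point
    std::uniform_real_distribution<float> dx(-0.5f * field.width  * shrink,
                                              0.5f * field.width  * shrink);
    std::uniform_real_distribution<float> dy(-0.5f * field.length * shrink,
                                              0.5f * field.length * shrink);
    return {field.centerX + dx(rng), field.centerY + dy(rng)};
}
```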

#92 - VT-1
VT-1 - That was quite an interesting video, although the AI of that car uses GPS to know its position rather than using the cones as reference points, from what I gathered in the video. So basically that would be like attaching the AI technique that most games use to a real car - just compute the best line, then follow it. If all the cones were removed, a human driver could not drive the same course again, but that AI could nail it perfectly.

I've decided to put the project on hold due to my own lack of interest in continuing right now. I got a LOT of useful information from this thread and will likely continue the project sometime. Who knows, next week I might come back with some sort of update :P But for the time being I don't have the interest, because other things are more interesting to me at the moment, and I still need to find a j-o-b. I'm getting quite stressed because of the lack of one.
I only read the first page and then randomly some of the following posts, so this might have been said before.
IMO the whole idea of recreating line-of-sight vectors to reference points is a waste of effort and time. It might be interesting from a math point of view but helps very little with making a realistic/good AI.
For example, a human driver can just have a look and see he is 2 meters from the right side of the track. The AI should just be able to get all information about position, speed etc. by calling some functions; the geometric way just makes things a lot more complicated but adds nothing new.

http://rars.sourceforge.net/
RARS is a racing league for AIs, driving around in a rather simple (compared to LFS) game, but none of the AIs cheat. The AIs get some information like the radius of the next corner, direction/speed of travel, other cars nearby, and that is all. Some AIs use "learning", others just try to stay in the middle of the track, but most are able to drive quite well, even on randomly created tracks. Some manoeuvres look quite realistic, even though realistic driving is not the idea of RARS (it's about being fastest).
Very interesting to have a look at!
If you have some experience in programming you can have your first AI crashing off the track in half an hour or so.

http://www.berniw.org/trb/index.php
TORCS also supports programming your own racing "robot", but I have not tried that yet.
Quote from Gutholz :I only read the first page and then randomly some of the following posts, so this might have been said before.
IMO the whole idea of recreating line-of-sight vectors to reference points is a waste of effort and time. It might be interesting from a math point of view but helps very little with making a realistic/good AI.

Well, you are, like everyone else, entitled to your own opinion. But remember I wasn't trying for 'good' AI, or AI that drives in a realistic manner - just AI that uses information in a far different way than it has in previous attempts. I don't know if you read my post just before yours, where I put the project on hold for the time being, but currently that is the situation, as I need to focus on finding a job.

Quote : For example, a human driver can just have a look and see he is 2 meters from the right side of the track. The AI should just be able to get all information about position, speed etc. by calling some functions; the geometric way just makes things a lot more complicated but adds nothing new.

Well, again in my opinion it *could* add a lot of new things, as I am solving a problem in a different way: decoupling the AI from the physics system as much as possible, and originally attempting not to give the AI exact information - though a few posts up you can see that my idea is starting to conflict a bit.

Either way, it adds new knowledge for me, showing me why this would or would not work, and showing the benefits of such a system as well as what it lacks - and again, it is a new algorithm. Whether or not it is better, faster, or more realistic can't be known unless I go through with the idea and test it out. It is very likely that one disadvantage of this system is the cost of the algorithm, with the AI constantly checking its reference points - though it might not cost too much if done correctly.

About RARS and TORCS: I've heard of and seen them both, but not used either.

--------

I do propose a question though, if you understood all of the above and can see that this is an attempt at a _new_ way of trying things versus doing the same old proven routine.

If using reference points and line-of-sight somehow magically gave the AI a better understanding of driving, would that be worth the effort and time of this test? (Basically, would it be worth it if, after creating this algorithm, the AI drove just as it does with the existing - and in my opinion cheating/wrong - AI algorithms?)

As I stated in the paper I wrote, the AI routines that are common in racing games are great - for games. But when accuracy is required, I think the AI needs to behave more appropriately as well.
I am thinking about using LFS as a starting/testing point for my AI - if and only if I can be sure I can access everything I need.

If anyone wants to help, I would be very grateful, as it will take some time to get this working. Programming effort is not the only thing I need, although I would love to have the plug'n'play part just done for me.

This inspiration came while watching the 5hr AI Endurance Race last night, and I will be doing more of those; I want to add my own AI player into the mix and see if I can race Scawen's AI.

However, step by step:
- Make sure I have all the information needed.
- Get the information working and tested.
- Get the AI to drive the car around a track with normal 'driving line'.
- Add the reference points and the AI can only judge where the driving line is based on these points.
- Add a single AI in a head-to-head style race.
- Add multiple AI once things are going well.

------------------
So the first step is to make sure LFS can provide all the information needed. For that I need to know which information I actually do need, versus information that the AI driver should not be using. Then I need to determine the best way to extract this information from LFS, using InSim or other means. A track will also need to be picked as the testing grounds.

The new algorithm:
Well, it has been said over and over again that I didn't want to tell the AI where to go and what to do, i.e. no line to follow from point to point. After really thinking about this, the AI needs that line. Humans get it from intuition or feeling - they just know. The AI could calculate the best possible line with some algorithm, knowing the left and right edges of the track, but to eliminate that complication for now I have decided to give in and give the AI the driving line. After putting thought into it, I have decided that this is not cheating for the AI, IF the AI does not have GPS positioning on these driving points.

Basically, if the AI knew its exact world position and the exact world position of its DriveTo point, it would behave exactly as the LFS AI does now - which in my mind is cheating. The first stage will be to get the AI working using this cheating method; then the reference points will be added, and the only thing the AI can do after that is know that the DriveTo point is to the left of Cone A by X distance. It will scan for Cone A, which gives it an estimate based on some of the things discussed in this thread. Then the AI will 'triangulate', or know approximately, where it should drive to, but that point will change slightly each lap based on random variance and other things that affect the driver's judgement.
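A rough sketch of the two stages (the names are illustrative, and the normal-distribution noise is only a stand-in for whatever the real estimation error ends up being):

```cpp
// Sketch of the two stages described above: a "cheating" version that drives
// straight at the exact DriveTo point, and the intended version where the
// point is only known as an offset from a cone whose position the driver
// merely estimates, so the target drifts slightly from lap to lap.
#include <random>

struct Vec2 { float x, y; };

// Stage 1 (cheating): exact world-space target.
Vec2 ExactTarget(const Vec2& driveToPoint) { return driveToPoint; }

// Stage 2: the AI only knows "the target is leftOffset meters left of Cone A".
// leftDirection is assumed to be a unit vector pointing left of the track.
Vec2 EstimatedTarget(const Vec2& trueConePos, const Vec2& leftDirection,
                     float leftOffset, float estimationError, std::mt19937& rng) {
    std::normal_distribution<float> noise(0.0f, estimationError);
    const Vec2 perceivedCone{trueConePos.x + noise(rng), trueConePos.y + noise(rng)};
    return {perceivedCone.x + leftDirection.x * leftOffset,
            perceivedCone.y + leftDirection.y * leftOffset};
}
```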

Some of the more obvious things that this will need from LFS:

Controller I/O
- Get and Set Steering Position; an axis control is required.
- Get and Set Throttle, Brake and Clutch Position; axis control req.
- Get and Set Gear Position (for H-shift cars)
- Get and Set Up Shift Button (for sequential)
- Get and Set Down Shift Button (for sequential)
- Ignition control.
- Pit request controls; ability to change pit schedule.
- Pit Speed Limiter.
- Possibly need the handbrake? To apply when parking.
Car Information
- Detection of the accurate position of the car.
- Detection of the accurate velocity of the car.
- Detection of the accurate direction of the car.
- Detection of tire traction at each tire.
- Current Fuel Level
Track Information
- The left edge of the track all the way around.
- The right edge of the track all the way around.
- The optimum driving line - perhaps passing lines?
- The left edge of the pitlane from entrance to exit.
- The right edge of the pitlane from entrance to exit.
- The driving line of pitlane;
- The start of pit speed limit.
- The end of pit speed limit.

*Quite possibly a few things that I can't think of at this moment.

---------------------
I already know there are a few things I cannot get from LFS for my algorithm, like whether the AI driver can see another driver behind them in the mirror. But I'm not sure that would even matter; the attempt itself would be quite a fun challenge.

I believe all the track information can come from making a layout and setting yellow cones on the left track edge, red cones on the right track edge, blue and green cones for the pit lane, and so on... This assumes that the LYT format allows different IDs per object, or at least lets me tell the objects apart and read their exact positions. I need to look into that a little more, but I believe that will be sufficient for reference points and track lines. From what I've read, it should be possible using the Index of each object; this however brings up the problem of the driving line possibly not being stored in the order it was placed, which I would need in order to know which point to drive to.

This is not a statement that I have started - far from it. But it is a statement that I may actually start this if I find the motivation to do so, and it would likely be something I would share. I don't know about open-source sharing, as it's likely I will use the AI code in other projects in the future.
Does the command system have the ability to set the position of an axis?

Like the steering wheel?

I see an axis named 'steer'. And buttons can be pressed, though I think they are immediately released.
I am ready to run the test of controlling the car, but now I can't find a way to do so - which leads me to believe that doing this in LFS will not work. I will continue looking, but hopefully someone knows.

EDIT: Still nothing that works; /press in LFS simulates a key press and release at the same time, which means I can't even make the AI use the keyboard.
Well, AndroidXP, there has been 'progress', but also a wall on the coding front.

As you can read in my posts above, I decided to try using LFS as a test bed for the AI project. I made my InSim connection, which is ready to receive OutGauge and OutSim information if required. I also did the planning; however, I can't control a car from my code. I attempted the /press command, but that immediately releases the keystroke.

After that I decided I wasn't going to look further, because it was probably locked down for a reason. Though you've probably noticed my thread where I watched the AI drive for 5 hours - actually I've done this a lot lately; not only endurance but different cars and tracks, long races, short races, and all sorts of watching. I've learned a few interesting things, have a few interesting theories, and am still playing with different tests in LFS - which made me decide to look for some form of virtual controller so my AI can drive in LFS in some fashion.

I stumbled upon PPJoy, which someone mentioned on page one when virtual control was first brought up. I successfully turned the wheel in LFS using my program. However, it doesn't center the wheel properly. I made sure my values were correct, and they are, but for some reason LFS interprets ALL of the 'virtual' axes as off-center, not just the steering. I tried messing with some of the PPJoy configs but still couldn't fix it. I have decided that LFS is not the place to do this given the constraint of proper control, and there are probably more problems to come.

I did request that /hold [key] and /release [key] be added to the command system in LFS, as well as /setaxis [axis] [pos].

If those get added I may try implementing this in LFS again; until then I will look into other things. Also, with PPJoy it wouldn't be as shareable as you'd think - it's not the easiest thing for the general public to install and use.

----------------

On the other hand, I was thinking a bit more about driver response. We have already discussed, and I've put into my plans, giving the driver a response time: maximum speed of control, reaction times etc... But I began wondering, what about driver smoothness - error in control? Sure, there is error in guessing where the car is on track, but there is also error in control. Not just reaction time: think of driving a perfectly straight section. A less experienced player is likely fighting to keep the car smooth and straight, whereas the experienced player is fine. The only problem I see is adding TOO many places of error - not necessarily too much error, since that can be dialed in later, but too many sources.

I've also figured out that it will be impossible to use a purely random method for finding the distance to a reference point, because that would mean the AI never drives towards the same point twice. Sure, we want random variance in the guesstimating, but it needs to be based on the previous frame(s) in some form or another. Basically, something interesting will have to be developed to keep the value consistent with previous frames while still wandering randomly according to the driver's state, the actual distance, etc.
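One simple way to do that (just a sketch; the constants are placeholders) is to low-pass filter the raw noisy guess, so the error drifts smoothly from frame to frame instead of being re-rolled every frame:

```cpp
// Sketch: keep the estimated distance consistent between frames. Each frame
// the raw guess is the true distance plus fresh noise, but the value the
// driver acts on is an exponential moving average of those guesses, so the
// error wanders rather than jumping. Constants are illustrative.
#include <random>

class SmoothedDistanceEstimate {
public:
    SmoothedDistanceEstimate(float noiseStdDev, float smoothing)
        : mNoise(0.0f, noiseStdDev), mSmoothing(smoothing) {}

    float Update(float trueDistance, std::mt19937& rng) {
        const float rawGuess = trueDistance + mNoise(rng);
        if (mFirst) { mEstimate = rawGuess; mFirst = false; }
        // Mostly last frame's value, with a little of the new guess mixed in.
        mEstimate = mSmoothing * mEstimate + (1.0f - mSmoothing) * rawGuess;
        return mEstimate;
    }

private:
    std::normal_distribution<float> mNoise;
    float mSmoothing;         // e.g. 0.9 for heavy smoothing
    float mEstimate = 0.0f;
    bool  mFirst = true;
};
```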
I would love to be a beta tester... I can't code yet, so that's all I can really help with. I'm not sure what exactly you are doing either... are you making a video game, or just an AI program?
