Post? Don't you mean posts? There has been quite some discussion going on. By the way, one thought I had was for the AI to pick up hints from what the player does. I am thinking that during 'development' time I would race the track with a mode that records my input changes against the reference points, then the AI simply plays back that memory each time. Though it would turn out differently each run, obviously, due to the estimation error in the ref points. I have yet to decide on the technical part here.
Blackbird, do you have any experience or interest in artificial neural networks (ANNs)?
One thought I kicked around for a while a couple of years ago sounds a bit similar to what you're looking at doing and might well work with an ANN. The reference points would probably have to be in pairs evenly spaced around the whole track; i.e., on the left and right track edges every xx meters would be two points across from each other, at about 90 degrees to the track direction. If you fed the relative positions (compared to the car position) of the nearest __ pairs of points in front of the car (always the same number of points) into the input nodes of an ANN, along with other things like velocity, slip angles, yaw torque, steering angle, etc., the ANN might well learn how to spit out a throttle/brake and steering position or gain rate. If you leave training on, the cars can accidentally figure out how to get around the track faster periodically while running the sim.
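To make the ANN idea concrete, here is a minimal sketch of the kind of feedforward net being described, with reference-point offsets and car state going in and control values coming out. The layer sizes, the tanh activation, and the input layout are all my own assumptions, not a fixed design:

```cpp
#include <cmath>
#include <vector>

// Minimal feedforward net: inputs -> one hidden layer -> outputs.
// Hypothetical layout: N reference-point offsets plus a few car-state
// values in; steering and throttle/brake out.
struct TinyNet {
    int nIn, nHidden, nOut;
    std::vector<double> w1, w2; // weights, row-major, with a trailing bias row

    TinyNet(int in, int hidden, int out)
        : nIn(in), nHidden(hidden), nOut(out),
          w1((in + 1) * hidden, 0.0), w2((hidden + 1) * out, 0.0) {}

    static double act(double x) { return std::tanh(x); } // squashes to [-1, 1]

    std::vector<double> forward(const std::vector<double>& in) const {
        std::vector<double> h(nHidden), out(nOut);
        for (int j = 0; j < nHidden; ++j) {
            double s = w1[nIn * nHidden + j]; // bias term
            for (int i = 0; i < nIn; ++i) s += in[i] * w1[i * nHidden + j];
            h[j] = act(s);
        }
        for (int k = 0; k < nOut; ++k) {
            double s = w2[nHidden * nOut + k]; // bias term
            for (int j = 0; j < nHidden; ++j) s += h[j] * w2[j * nOut + k];
            out[k] = act(s); // e.g. out[0] = steering, out[1] = throttle/brake
        }
        return out;
    }
};
```

The outputs staying in [-1, 1] maps naturally onto full-left/full-right steering and full-brake/full-throttle; the weights themselves would come from whatever training scheme is used (supervised laps, or the swarm approach below).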
You could train the ANN by driving a few laps yourself until they can at least manage to get around on their own, then let them go and play around to get faster. Or if you have a lot of time, use a particle swarm algorithm and let the AI try to learn its way around the track all on its own. If I were to do it I'd start out on a small oval track or something simple.
I haven't tried that, but I did experiment a year or so ago with an ANN and a particle swarm algorithm to drive a basic steering controller for Virtual RC Racing. This was much simpler as it was only trying to follow a predetermined line, so it was basically using only one node at any given time. The ANN did indeed learn, most of the time, how to whiz around the track at a pretty good pace, but it was not quite competitive with a hand-written and tuned controller. However, I only played with this for a couple of days before being forced to drop it, so I'm pretty sure it could have been improved a lot. It's really just evolving neural network controllers via something similar to natural selection, but it's a simpler approach to implement than using genetic algorithms. I'm not sure if it would have been easier or harder to do with regular cars versus the little RC cars though. There was an F1 type of game a while back that did this so well that the cars were virtually unbeatable. They had to use "younger" generations in order to keep it fun for the masses.
During my brief attempts at this in VRC, one funny thing was there were some tracks where the car would usually turn around immediately at the start and learn the track backwards instead of forwards, requiring the addition of learning penalties in a desperate attempt to get the cars to go the right way around! Usually after 5-10 minutes or so (time accelerated so "simulated time" is much higher) the cars learned to get around the track fairly quickly. It's fun to watch them learn and get better without touching anything for sure.
You might find this is all a lot more difficult to do than you're anticipating. You don't know yet what troubles you're going to run into that will soon be eating up all of your time. So may I make a suggestion? Try using slip angles, rpm, position, and whatever else you can get directly at first. If you're happy with the results you could then expand this into using sound to determine rpm and understeer/oversteer and so forth to emulate human perception. My worry is that if you do this right away, you're adding a complex layer on top of everything right away which has a high chance of seriously clouding issues and slowing you down. For example, if the car is always turning the wrong way at the corner, is it because of a miscalculation in the reference point stuff or is it something funky in this "human perception" layer? When doing experimental stuff like this I usually start simple and go from there. With this approach I get a much greater insight into what the system is really doing (compared to what I thought it would do) and am more likely to see where things are wrong or can be improved just by watching what the car is doing. You start to see a lot of things other people won't.
Ok, please read all of the current paragraphs before making a judgment, as there are some big statements that I don't want to be taken the wrong way. First though, to answer your question: no, I do not have any experience with ANNs, and I am not sure there is much interest in them for this exact project. Whilst I have been considering having the AI learn, it is beyond the scope of this project in terms of both desire and knowledge. Though I do not consider knowledge a bound in any situation; I just need to read and learn, which doesn't scare me at all.
Ok, but this is the kicker, and I don't want people to run away thinking "Noob, run away!". I have very limited AI experience, though if you read my document and most of the thread you can see I have knowledge of the common techniques. I just don't have the experience in practice. I can only think of a few projects where I added AI, one being my RacerX project a few years back. RacerX had a track that could be created out of tiles with 90 degree turns. I wanted an AI driver that could traverse the track regardless of its shape. Notice that "quickly" was not part of my goal. So I took a conventional approach, baked information into the tiles, and had the AI following a track regardless of shape. Even though the AI tended to come to a stop before turning, I did not continue editing and adding; my goal was complete, and I probably learned a few things on the side.
Now the important part, the low-level stuff that will have a lot of non-programmers running if they haven't already. I may not have vast amounts of experience working with AI, but I do consider myself very knowledgeable about general code design. My favorite part of coding is designing the core from the foundation up, then building it and watching great things happen. Not always good things, but it is great when the reasons why something doesn't work finally click. My aim for this project has never been about fast AI. It is simply about running away from the normal AI approach to answer these questions: "Why are AI and physics almost always combined? Why does the AI need more accurate information than a human would get? How can the information be detached, and limited? Can the AI still traverse a track with limited knowledge?" I think you get the idea. So, being completely honest, I do not care if this fails, horribly horribly fails.
Assuming I get some of my questions answered, those above and many more in my head, I will call this project a great success. Designing the code structures is my favorite part, but knowing why a structure works, and more importantly how it does not work, is the reason it is my favorite part. If someone came and answered my question with "The AI needs to know about physics because. . ." I would be right back immediately with "Why?", just like my little niece in that toddler stage: why, why, why, how?
So the goal is to learn, more than to create AI that works. And even if the AI works, I don't see it being fast unless I spend much more time on the project.
Okay, now time to switch into another gear and talk about the project itself. I don't plan on adding sound interpretation for the tires; the physics system will interact with an AI sensor, but the AI will not know the numbers involved or know about the physics system itself. This will likely make sense later. As far as starting with the normal layer of information and then stripping it away, I feel it would be too much to do. As you can see, the project is experimental, and from the ground up it uses different information than a traditional system. You are correct in saying that approach would pinpoint issues, and if I were getting paid to do this, or if fast AI were a priority, then I would almost *need* to go that route to keep fixing the correct problems. However, from the above you know that this is about structure, the limitations of design, and simply learning about the ideas here. In that situation I think building the system as I see fit, and letting the incoming issues decide where the time gets spent, will work out. I am quite confident that I can pinpoint the reason something is misbehaving even in a multi-sectioned project. Though I will not deny the truth: it would be easier to do so one piece at a time.
I have started the designs on paper of how this will be achieved, if pursued into the development stage. The huge unknown at this moment is in the Reference Points. What do they contain, and how do they contain it? Direction and distance are the easy part, with distance being estimated using a random factor based on the actual distance and the angle from the view direction, making it harder to estimate distances to objects further away or in the peripheral vision. But that information alone does not tell the AI driver anything. So possibly the driver sees a reference point and looks into their memory, which has some sort of script or input knowledge. This area is fuzzy because it is at the limits of the current design. So the thread will certainly become more technical, and programmers can see what I am actually trying to do. Also, non-programmers, please don't hesitate to add anything you want about "how you drive". I am most interested in learning how the human drives, as I am trying to mimic the process.
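As a sketch of that estimation idea, assuming made-up error coefficients: the perceived distance is the true distance times a noise factor, where the noise fraction grows as the point moves toward the periphery, and, because it is a fraction, the absolute error automatically grows with distance too:

```cpp
#include <cmath>
#include <cstdlib>

const double kHalfPi = 1.5707963267948966; // 90 degrees in radians

// Random value in [-1, 1].
static double uniformNoise() {
    return (double)std::rand() / RAND_MAX * 2.0 - 1.0;
}

// Perceived distance to a reference point. Both error coefficients are
// invented tuning values, not anything from the project itself.
double estimateDistance(double trueDist, double angleFromViewDir /* radians */) {
    const double baseError  = 0.02; // 2% error even when looking straight at it
    const double angleError = 0.15; // extra error ramping up toward the periphery
    double frac = baseError + angleError * std::fabs(angleFromViewDir) / kHalfPi;
    return trueDist * (1.0 + frac * uniformNoise());
}
```

So a point 100 m away dead ahead comes back within a couple of metres of the truth, while the same point at the edge of vision could be off by 17 m either way, which is roughly the "harder to estimate in peripheral vision" behaviour described above.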
Learning from watching people drive isn't the most useful. Knowing what they were thinking and 'how' they tried something is what's largely useful. The problem is finding people that understand how to explain this, and being able to read the ones that can't. It is a known fact that reference points tell us where we are and what to do, but beyond that, not much more can be said from my end. I wouldn't mind watching a replay of someone driving a track for the first time, if they were to give a solid writeup answering several questions I could come up with, but like I said it is a problem of finding people who know how to explain, or can explain well enough that I can understand. Basic English is not what I mean by that statement either, though I would be willing to bet most programmers already knew that; I just wanted to clarify.
Are you assuming that humans aren't constantly interpreting physics data when they are driving or playing sport, etc? Because as humans we have very good equipment for interpreting our physical world with surprising accuracy when highly practised.
For example, accomplished machine operators are able to judge the size of small gaps within a few thousandths of an inch, a veteran grocer is able to estimate by feel within a few grams how much a bunch of bananas weighs, and an accomplished golfer is able to judge very accurately the amount of force required, along with an appropriate trajectory, to get a hole in one, allowing for wind, temperature, humidity and distance among other things. Likewise an experienced sniper can hit a target from a long distance allowing for similar factors. The highly tuned ear of a veteran sound technician can isolate one frequency out of thousands in a concert or recording situation and make the necessary adjustments to improve the sound quality. A prize-winning photographer is able to assess a scene and compose the shot of a lifetime. The human body and mind can be trained to extraordinary levels of accuracy.
Effectively the whole human body is a bunch of sensors constantly receiving all manner of data that our central, massively parallel computer is continually interpreting and then re-interpreting. So in my mind it is not a question of whether an AI should have physics data as input, but that the physics input should be scaled in accuracy depending on the knowledge and experience of the AI, as it would be for humans. Also, the other major difference humans have compared to AI is that our emotions can cloud or affect how we interpret the data we receive. Which is the very thing that makes racing what it is; if it wasn't for this one aspect, racing would be entirely frustrating and unrewarding!
So an AI's ability to interpret data accurately also needs to vary in real time depending on what's happening around them and also on their current confidence level, etc. I.e., in a race as a human you can start out with a quiet optimism which, if luck goes your way, might build to supreme confidence; then if something happens to rattle your cage your confidence might be shattered for a number of laps before building again to quiet optimism towards the end of the race. Using that as an example of a driver's emotional state at different stages in the race, you will also notice that it is closely linked to how accurately we are perceiving the physics of our world at each stage. In the quiet optimism stage we are most likely judging distances, and our reaction times are about average or slightly above our competitors'. When we are supremely confident is when we feel like the car and our body are one, everything has a natural flow, and it feels like we have ample time to read, interpret and react to any situation. On the other hand, when our cage has been rattled we find our "racing mind" being distracted by questioning thoughts of what we know: have my tyres gone off too much? Did my suspension get damaged? I don't think I can go as fast as the other guy and I don't know why! When this happens we feel like the signals our body is receiving are out of sync with how the car is reacting, and we find ourselves struggling to read, interpret and react to situations as they arise.
So in my mind it's not the data that the AI uses that needs to be scrutinised so much as the way that data is read, then interpreted, and finally acted upon, to make it more human-like.
I am not assuming that humans are incapable of knowing the physics at all; actually, what you do with the car depends on your knowledge and assumptions about the physical world around you. However, you do not know numbers, you know relations. You know gravity pulls down, and going over a crest will give you less traction. This is based on your experience of a car getting loose in this situation, but you can apply it any time in the future that you interpret a crest situation.
The act of removing the dependencies between the AI and physics systems is mostly because of two things: I feel they should not be related, and a lot of common AI techniques use this dependency. You are completely correct in saying we know the physical world around us, but I think you are wrong about our knowledge of physics. Take the golf example, although let's simplify it a bit.
A golfer hits a ball; depending on the angle of the swing the ball will aim towards the left or right. Depending on how hard they hit, the ball will go far or stay near. And finally, depending on the length of contact with the club, the ball will travel higher and stay airborne longer. (I don't actually play golf so I could be wrong; either way this still works for my example.) So a good golfer can take these three actions and use them to get a hole in one every time. Then we add some wind: a golfer that has experienced wind will know to aim away from the hole in order to account for it, whereas another golfer wouldn't have that experience yet.
I believe this is set up exactly the same as your situation, just slightly more detailed. Now let's move from Earth to, say, the Moon, or Mars, or some other place. . . The gravitational pull is different, and the air resistance is different or non-existent entirely. If we took the identical golf course in both places, then the golfer who had never played in the new environment would likely overshoot the hole even if aimed correctly. But as you stated, they would learn the new physical environment, probably faster than you or I even think possible. BUT here is the point that I am making:
Even with the different environment, the controls the golfer has available are still identical. Aiming the swing changes the direction the ball travels, hitting the ball harder increases distance, and keeping the ball in contact with the club longer increases height. So the physics completely changed, but the player still knows how to play the game; they just need to take a few shots to get settled in the new physical environment. Does this explain what I am trying to do a bit better?
As racers we know what to do in understeer/oversteer situations depending on what caused it in the first place, and really we could move to a slippery surface, a grippy surface or a low-gravity situation and our actions would still be similar; we still only have a steering wheel and pedals to act with. The physics is irrelevant to our knowledge, but *everything* to our experience.
I also wanted to mention my plans on your point about clouded vision. I 110% agree that situations need to make the driver estimate more or less clearly, and that is planned in this project. A situation where the driver lost control will give the vision sensor an overload and likely be very wrong. I plan for this area to be where the AI can become more or less difficult, simply by changing what the driver is affected by and by how much. A driver behind adding pressure can dull the senses as well, and my plan is for the AI to have things to pay attention to...
I believe it was said somewhere in the thread, but the driver will need to turn their 'head' to focus on different objects, so if they want to check the mirrors, that will make other parts of their vision temporarily less accurate. I am still searching for a way to pull this off like I want, because it is important to me to have this as accurate as I can. Meaning: us humans can shut our eyes for a split second and still know what the scene will look like when we open them; we pre-computed the positions based on our knowledge of the previous two (or more) interpretations. If you did this with the brakes on, or under hard acceleration, it is much harder to line up your vision with what you had expected upon opening your eyes. Of course a driver doesn't close their eyes, but I am mostly referring to the knowledge of our blind spots. Writing this post put a few things in perspective for me, as far as the prediction thing goes.
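That "eyes shut" prediction could be sketched as simple dead reckoning from the last two sightings. Under steady speed the prediction lines up with reality; under hard braking or acceleration the real position drifts away from it, which is exactly the mismatch described above. A minimal version, with my own names:

```cpp
#include <cmath>

struct Vec2 { double x, y; };

// Given the last two estimated positions of an object (sampled dt
// seconds apart), linearly extrapolate where it should be after a
// further lookAhead seconds. This assumes constant velocity, which is
// precisely why it breaks down under braking or acceleration.
Vec2 predictPosition(Vec2 prev, Vec2 curr, double dt, double lookAhead) {
    Vec2 vel { (curr.x - prev.x) / dt, (curr.y - prev.y) / dt }; // inferred velocity
    return { curr.x + vel.x * lookAhead, curr.y + vel.y * lookAhead };
}
```

The gap between `predictPosition(...)` and the actual position when the "eyes open" again could even serve as a measure of how surprised the AI driver should be.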
Yes, I do see where you're going with this and (from a non-programmer's point of view) don't see any reason why you wouldn't have some success with that route. I guess what I was trying to say is that, from a human player's point of view, I suspect AI based on a different method would most likely still end up lifeless and unrewarding to race against, as it is the fallible human aspect of racing that makes racing racing.
I guess how you implement the human fallibility aspect is what will make the difference to whether the AI is believable or not, which is why I was musing about reading data (accuracy of measurement), interpreting data (knowing what to do) and reacting to data (whether or not reacting in a timely manner is possible) in a general sense.
I would have thought it would be possible to make an AI system that could use multiple 'systems' for driving with. Perhaps allowing you to turn some off, introduce new ones, or run with several at the same time...
That way you can try several ways of coding the AI, compare each method (not just in AI ability, but also in CPU load), and perhaps learn what is best and why....
Off topic, sort of: Is that what they call OOP, with each 'method' being an 'object'?
While I would like to aim this question at everyone in general, I don't think that will be possible, so I turn to the programmers around to see if they have ideas.
Two cones on the outside of a corner show where the start and end of the corner are. One cone at the halfway point on the inside of the corner shows the middle. With nothing else, including no other traffic, how can I go about solving this as close to how a human would? (See the left side of the attached image; the right side is for those with a lack of imagination. It should be seen, at least mostly, as only dots, and it is not required that this be a 90 degree turn.)
You can assume that the AI Driver has been through here before, so maybe each reference point tells the driver to do something. But in technical terms, what? And how? This is where I am stumped now. I do have a solid structure, I think, on the rest of the project. The AI Sensors are the easy part, so you have these points (as estimations of course) and you may have memories/instructions attached to them but how:
Here are my thoughts, mostly from this thread.
Take Dot1, which is the closest outside point and also approximately where the driver should be anyway, just based on previous knowledge and car placement. That point will tell me to brake when I get to a certain distance from it, say 70 meters. So I am driving at X speed, closing in on the dot. Once I estimate I am closer than 70 meters, I begin braking as hard as I can. My next instruction tells me to turn, and likely the direction I want to aim, so when the distance drops below that threshold I begin turning. (Some of the turning will go into 'natural behavior', which keeps the car under control while at the limits.) The next point would be the apex point; I am trying to get as close to it as I can given the circumstances: speed, traction, lucky/unlucky corner estimations... But I know that once that point is 90 degrees from my driving perspective, it is time to start unwinding the car to set up for the next corner: get on the throttle again and behave as if I were on a straight.
Okay, so that was literally the first time I thought of it and wrote it out, and I don't really know how it sounds; I will re-read after I post. But you can see the idea. The driver has a list of instructions to follow. Each instruction is a major change in car behavior, attached to a reference point. Each instruction should also be attached to other ref points, in case one is blocked by another vehicle, so the instruction can still fire. It should be noted that distance is not always the best judge for an instruction; possibly even have 'ranges' for it to play out on, both distance and direction/angle. Theorizing here.
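A rough sketch of how that instruction list might look in code, with each instruction hanging off a reference point and firing once the driver's *estimated* distance or angle to it crosses a threshold. All the names and thresholds here are illustrative guesses, not a settled design:

```cpp
#include <cmath>
#include <string>
#include <vector>

enum class Trigger { DistanceBelow, AngleAbove };

struct Instruction {
    int refPointId;        // which reference point this hangs off
    Trigger trigger;
    double threshold;      // metres for DistanceBelow, radians for AngleAbove
    std::string action;    // e.g. "brake hard", "turn in", "unwind and throttle"
    bool fired = false;    // each instruction plays once per lap
};

// Called each frame with the driver's current estimates for the
// reference point in view; returns the action to start, or "" if
// nothing triggers.
std::string updateInstructions(std::vector<Instruction>& list,
                               double estDist, double estAngle,
                               int visibleRefPoint) {
    for (auto& ins : list) {
        if (ins.fired || ins.refPointId != visibleRefPoint) continue;
        bool hit = (ins.trigger == Trigger::DistanceBelow)
                       ? (estDist < ins.threshold)
                       : (std::fabs(estAngle) > ins.threshold);
        if (hit) { ins.fired = true; return ins.action; }
    }
    return "";
}
```

Because the distances fed in are estimates, the same plan naturally produces slightly different braking and turn-in points every lap, which seems to be exactly the behaviour being aimed for. The 'ranges' idea would just replace the single threshold with a min/max pair per trigger.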
Any ideas? Because the rest of this seems doable; this is my main challenge. And remember, I have no objections to training the AI by having a human make a few runs with the AI attached to the car, gathering information about the control changes at specific reference points. But a problem exists here in that I want a few broad instructions, not small changes in throttle control or braking.
I am trying to make a list of what is important to this project and how to go about solving it. I will make a post again shortly.
That was actually the exact thought that started this project originally. But I didn't find the coders that I was hoping for, other than that TORCS thing, and it gives me a wary feeling. My original idea had a few programmers working with each other to create a realistic and challenging AI driver; while creating this we would be competing our ideas against each other... But then I realized that I wanted to bring my ideas to be proven first. Like was said earlier, I may have publicly mentioned this project too soon.
You're close to the right idea with OOP. Put it this way;
Driver is an object. Player is a driver, and AIDriver is a driver, and they can all do the same things if designed that way. So when Driver::GetInput() is called, the G25/KB or whatever is polled in the Player's implementation, while the AI performs its own logic and returns the input. The method calling Driver::GetInput() doesn't care where the input comes from, as long as it gets it. In much the same way, in your idea above, OOP could be used to create AIAggressive, AICoward, etc. . .
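That polymorphism could be sketched roughly like this. The DriverInput struct and the stand-in return values are my own invention; only the Driver/Player/AIDriver split comes from the post:

```cpp
struct DriverInput { double steer, throttle, brake; };

class Driver {
public:
    virtual ~Driver() = default;
    virtual DriverInput GetInput() = 0; // every driver answers the same question
};

class Player : public Driver {
public:
    DriverInput GetInput() override {
        // A real implementation would poll the G25/keyboard here;
        // fixed values stand in for the hardware.
        return { 0.1, 1.0, 0.0 };
    }
};

class AIDriver : public Driver {
public:
    DriverInput GetInput() override {
        // A real implementation would run the sensors, instruction
        // list, and correction layers here.
        return { -0.05, 0.8, 0.0 };
    }
};

// The simulation only ever sees the base class, so swapping a human
// for an AI (or an AIAggressive for an AICoward) changes nothing here.
DriverInput step(Driver& d) { return d.GetInput(); }
```

This is also what makes the "record a human's runs" training idea cheap to do: attach a recorder between `step()` and the physics, and it works identically for both driver types.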
The cones thing. Once you get to the turn in point you steer the car. If you aren't turning enough then you either apply more lock or slow the car. If you are steering too much then you are using too much lock or going too slowly. A driver should be able to judge (based either on prior experience, studying the circuit, or just car knowledge combined with the use of reference vectors) roughly how fast he can go at the first attempt, and refines braking points, turning points, turning/apex speeds, throttle applications to get the lap time as low as possible.
I think when I get to a new track I tend to study the circuit map, which gives me an idea of the lines and braking distances based on prior experience (I know, roughly, how far it takes to brake from 130mph to, say, 50mph etc). As I approach a corner for the first time I'll choose an arbitrary braking point, try to turn on a classical constant radius line, and get on the power at the apex (unless studying the circuit map suggests that one or more of those are clearly very wrong). Subsequent laps are spent perfecting that - braking later, turning later, getting on the power earlier, until lap times suffer. But there are multiple ways to skin a cat, and this is where driving styles come from.
It might be just as quick to "brake super late with lots of rear brake bias and use the rotation of the car due to braking (and coming off the brakes) to get to the apex with a lot of speed, then get on the power and wrestle the car to the exit" than it is to "brake earlier, turn in a little later, get on the power way before the apex and let the car find its own exit point".
Do we actually do that? No! I watch Youtube videos. I play simulators of the circuit (not very much help this one, as computer games aren't accurate enough). I watch other cars. I walk the circuit. Perhaps one mentally sorts corners into types, and uses a basic 'style' through the corner prior to refining it for that specific corner. Or perhaps the human brain, on the first time, just guesses and hopes (tempering the guessing and hoping with experience?).
Driving a car isn't a 'simple' thing to do it seems. Yet the human brain makes it seem so, so easy most of the time.
@Tristan: what do you think about my "motions!" megapost in regards to accuracy? Is how I think someone drives actually happening for you too, or does that only work in a sim environment that is far less random than reality?
Despite the promise of a "wall of text", it didn't take long to read and was easily understood. If only most English speakers (myself included) could write that coherently...
But yes, that is how I drive - I think. The thing is, I just get in the car and drive it. I don't think HOW I'm achieving it in those terms. Of course I study my driving to an extent, but not in such elementary terms.
However, you are right. At no point whilst driving do you know the amount of grip you have or are about to have. And you don't even know your speed most of the time. So gear changes, or trail braking, often come down to a level of timing (i.e. making the circuit flow) as well as positional data (get off the brake pedal when I pass the head of that kerb).
Indeed, downforce is a strange thing. You can't learn how much grip the tyres have at low speed and use that information at high speed (like in a road car (ignoring lift)). In high speed corners especially, it takes quite a lot of bravery to trust the invisible (and potentially absent) grip of downforce. I've had a rear wing failure in a 110mph corner, and it wasn't fun. I've also been caught out by understeer and oversteer from following cars. But I've now amassed enough experience to cope with most conditions.
The phrase for downforce cars is "The faster you go, the faster you can go". Of course it's not actually accurate, but it gets the point across.
I like the trigger-based system you came up with. I think driving is like that. I now have a database of "problem, reaction" that aids me in catching slides from various causes, or avoiding accidents, or judging braking distances when miles off line on the first lap with cool brakes/tyres. And the more "automatic" the driving becomes, the quicker and less mistake-prone you become - in theory. What actually happens is that you get quicker and find yourself in a whole new set of circumstances where the existing database might not be valid. So I do a corner quicker than I've ever done it before (my corner entry triggers, say, are extra refined), and find that my oversteer->countersteer trigger is no longer sufficient. Whether I catch it is a matter of luck, reactions and 'feel' (whatever that is). Either way, an entry is logged in my database - either "don't do that again" or "I can do it, but I'll need to be twice as fast in catching a slide at this speed".
This is a fascinating topic, even without actually trying to program anything or understand programming.
Whilst sim environments are less random, the same principles apply. Real life isn't totally random either - you can predict the range of likely variables. You know if it's a windy day. You know how old your tyres are. You know if it's wet, and you can see how much spray or puddles there are. Oil spills will catch you out (see Oulton Park, 2008!), because they are effectively random, but the track itself changes, but within a relatively narrow zone. You don't start a new lap to find that instead of being a baking hot day it's now a freezing day with a thick layer of snow!
Wow! First I can't believe I missed the wall of text after looking for it two times before this. (Last post on the first page was missed while I responded to Dygear apparently and was skipped each time I searched for it)
AndroidXP - great input in that wall of text; it was actually where I was heading with my thoughts, except I went with a slightly different version of the same thing. I don't know if you saw my post above about the technical ways of doing the thing you were talking about, but regardless I will explain the idea I've had for a little while now, though it will sound quite like your wall of text.
I think there are several "layers" going on at the same time while driving. Perhaps 'layers' is not exactly what I mean, but it is a good way to think of it, and better yet might be useful during implementation. There may be more layers than what I describe, but for now I will keep it simple, I hope: Attempt, Prevention, Correction. I don't think these names fit, but I couldn't find anything better, as this is the first time I've written about the thought.
The Attempt layer is where memories and instructions are stored, processed and attempted. We've agreed that at X point we know to brake; that would be included on this layer. This is the driver's overall goal: to follow the ideal line. This layer does not care how much traction is really available, what condition the car is in, or if the tires are warmed up. It just knows, "this is what I want to attempt to do".
The Prevention layer is a bit different, as it knows what is being attempted. But this is the layer saying that the attempt is likely not the best-case scenario. Prevention is where the brain processes the actions before they are executed, trying to prevent the brakes locking up, or throttle oversteer. The prevention layer will use the grip levels of the tires and try to keep them below the limits - not by much, but not pushing too hard.
The Correction layer is what happens when it all goes wrong, which it will. Okay, now we are oversteering; let's figure out why and change something. I think Android's conditions -> errors -> resolve actions would work perfectly on this level. Also, this level of thinking may require reaction time / pre-thinking. You know near the limits that the correction layer may be required, and in certain situations you already know how to react before you need to, in small twitch movements - that would be preparing / pre-thinking. And then you have the moments where - oh my, that was an oil spot and my rear tires just lost grip completely. It will take a moment, not long but a moment, to react.
So far we have been focusing on driving around the track, but I would likely add another layer: Car Control. What I mean by this layer is: Do I need to shift? Will I need to shift soon, and therefore should I move my hand to the shifter to prepare? Is fuel okay, should I request a pitstop? Etc.
Car control might even be the layer where the finalized attempts are computed into actual actions. I did notice there is a difference between where I want to go and what I need to do to go there, and I think these layers combine to make what I want to do actually happen, even if the input to do so is different. Man, I need to learn how to say what I want without being overly confusing.
I think we are getting close to something that will be very usable and helpful to me. All the input in here has been great to read and very helpful in deciding what to do.
By coding in a manner that completes those specific goals. The technical side of this is still being researched and will then be developed. Currently the discussion is going great for the general ideas, and once I narrow things down it will transition into more technical ideas. Although, in all seriousness, a lot of technical things have been discussed already in a very non-technical way. This seems to make great sense for everyone.
Are you worried about how it is pulled off? I don't get where the question comes from.
I'm also interested - as someone who has programmed AI (rather than just thought about it).
It is very interesting to me, reading this thread, that you seem to be sure the computer has an advantage over a human (it cheats), in that it's possible for the computer to react more quickly.
Secondly, your insistence that all that is required is a small set of triggers or markers seems naive.
A human has a brain that has the innate ability to do the equivalent of complex calculus almost instantly, combining multiple conflicting inputs and processing massive amounts of data. Desktop computers can't compete with this in real time.
To create an AI that doesn't 'cheat', in the sense that it has only the same inputs as a human, you would need to process visual input from the screen memory. That in itself is a very complex and difficult task.
On the other hand, if it's valid to 'cheat' enough for this not to be needed, why not go all the way and include data from the track map, tyres etc., and allow the AI to react fast? (IMO, this is the only sensible way forward.)
I reckon that by using a few markers (like your cones) and a simple dynamic internal racing line, you might get to the stage where your AI can drive around the track.
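A "few markers and a simple dynamic internal racing line" could be sketched roughly as follows: take the midpoints of the left/right marker pairs as the line, and steer toward a midpoint a little ahead of the car. This is purely illustrative; the function names, the lookahead index, and the steering gain are all made up.

```python
import math

def midpoint(left, right):
    """The internal racing line is just the midpoint of each marker pair."""
    return ((left[0] + right[0]) / 2, (left[1] + right[1]) / 2)

def steer_toward(car_pos, car_heading, target):
    """Return a steering value in [-1, 1] that points the car at the target."""
    dx, dy = target[0] - car_pos[0], target[1] - car_pos[1]
    desired = math.atan2(dy, dx)
    # wrap the heading error into (-pi, pi]
    err = (desired - car_heading + math.pi) % (2 * math.pi) - math.pi
    # full lock when the error reaches 90 degrees (arbitrary gain)
    return max(-1.0, min(1.0, err / (math.pi / 2)))

def next_target(pairs, lookahead=2):
    """Pick a midpoint a couple of marker pairs ahead (simplified: by index)."""
    line = [midpoint(left, right) for left, right in pairs]
    return line[min(lookahead, len(line) - 1)]
```

Even something this crude will get a car around a gentle track; the "completely different level of complexity" mentioned next is making it do so at the limit of grip rather than merely pointing at midpoints.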
Getting to the stage where it can keep the car at or near the limit will be a completely different level of complexity.
If you are going to build in a 'poor reaction' or some sort of 'random' quality into the AI, you will most likely just end up with oscillation and loss of control, or very poor times with no ability to stay on the limit.
It has been mentioned in the thread that, while humans don't have instant reactions, we have the ability to predict what's going to happen. THIS IS EXTREMELY SIGNIFICANT.
If your AI is not going to cheat, and it is to have its reaction speed limited, you will have to write some AMAZING code to compete with the human brain's ability to predict the future by super snazzy human brain calculus power!
I really hope that you can do this, however, from reading the thread, it seems to me that before going any further, you need to do some simple experiments with a very basic 2d model (like a mariokart type of thing), just to get an idea of what the real problems are.
I'm worried that you might be seriously underestimating how difficult this type of problem really is!
Sorry if that sounded smug, that wasn't the intention - more sarcasm, though the last bit does sound smug even when I read it back. I was simply curious.
As for the implementation, I do not expect it to be an easy challenge, as I've said before. But I also don't expect the AI to be challenging to a human either. I originally had different ideas for the project, and I will keep those ideas open, but I have deviated from the original plan for a few reasons.
I agree the brain processes a crap load more than a computer ever can. But we process mostly from our previous experiences, searching for the one(s) that best match the situation and then learning from those mistakes - all in an extremely short time. As said, I don't plan to get anywhere near OMG AI. As I was saying a few posts up about the layering of decisions, the brain does all these layers at once. It's best described, and likely implemented, in layers because that abstracts things - in practice things can change.
It's meant as an exercise, practice, a learning experience, and most of all an interesting project. If the AI is miraculously amazing then I would be more than delighted, but I do not expect such results - actually I expect the AI will hardly get around the track correctly, let alone at the limit of the car!
I have started building a very, very simple world while I continue developing my information and ideas. The world will consist of a very simple, flat track and several cones, which indicate reference points. The track can be turned off and on and is mostly there for the human's perspective. I still have a few hurdles and problems to get over, but I will be that much closer if I can get a small environment with a car and *very* basic physics. These physics will not include collision for the time being, and will likely not deal with weight transfer. I do hope for some form of friction to test for oversteer / understeer characteristics, but beyond that I don't know how far it will get.
I may well be underestimating things to a degree, but don't overestimate the project here either. Even if I am underestimating, I am not worried about doing so; the worst case scenario for me is that it doesn't work. It's not like a multimillion dollar project depends on this - if it did, I would certainly take the safe route vs trying new things.
I understand why you think that, but I don't mean you watching the video; I mean the AI 'parsing' the replay and seeing how the faster drivers take the turns, use the accelerator, and things of that nature.
And drivers have to learn the car as much as they have to learn the track.
I see, yea, I've been thinking that the AI would be 'pre-trained' by watching a developer race the track for 5 laps or so - averaging where and what they do, with some magical method that I have no ideas about yet, and using that as their baseline "intent" layer. Meaning that is what the AI tries to achieve and the rest just happens. This could later be developed into recording the player's laps, so the AI takes on a form of the player, though that might be less desirable. The 5 laps I was talking about would be someone very consistent driving to set a good example.
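The "magical method" doesn't have to be magical at its simplest: key each lap's recorded inputs to the reference points, then average per point across laps to form the baseline intent. A hypothetical sketch (all names invented, inputs reduced to throttle/brake/steer tuples):

```python
from collections import defaultdict

def record_lap(log, lap_inputs):
    """lap_inputs: {ref_point_id: (throttle, brake, steer)} for one lap."""
    for ref, inputs in lap_inputs.items():
        log[ref].append(inputs)

def baseline_intent(log):
    """Average the recorded inputs per reference point across all laps."""
    intent = {}
    for ref, samples in log.items():
        n = len(samples)
        intent[ref] = tuple(sum(vals) / n for vals in zip(*samples))
    return intent

# two slightly different demonstration laps over two reference points
log = defaultdict(list)
record_lap(log, {1: (1.0, 0.0, 0.0), 2: (0.2, 0.8, -0.4)})
record_lap(log, {1: (0.9, 0.0, 0.1), 2: (0.4, 0.6, -0.2)})
base = baseline_intent(log)
```

Averaging also conveniently smooths out one-off mistakes in any single lap, which is part of why a few laps from a consistent driver make a better example than one lap from a fast one.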