blackbird04217
S3 licensed
Quote from col :blackbird04217 has stated that he wants to build an AI that has only the same inputs available as a human. anything else would be cheating.
IMO for this rule to be followed, any system of synthetic markers placed by the developer would be a 'cheat'.

Personally, I would do as you suggest, using all data on track and other cars that is available from the game engine. However, that is not what this project is about!

As someone said on the first page: you need to know the limits of the track. While I am trying to give the AI only what the player has to work with, I also know there are *some* constraints to deal with. I have considered using each vertex in the terrain as a reference point, which would give the AI far more reference points in the world and thus more "player-like input", but it does not need to visually see. One of my examples above shows the bare minimum needed to go around the Blackwood turn: go under the bridge and start moving to the outside, look for the distance markers and brake, then turn towards the tires, getting as close to them at the apex as you can. This turn could be taken with 3 to 5 reference points alone - in a completely black environment.

I'm not here to try reading back screen data; we all know computing power can't handle that, so it would be a massive disadvantage to the AI. Mainly I want to remove "HERE, DRIVE HERE" from the AI. I want to make them MORE AWARE of the environment, using that awareness combined with memories and instructions (or something as yet undefined) to go around the track. This is where some issues are complicating the design, and I am still looking for ways to solve them. I've been looking for about three days, but you know, I was looking for a challenge.

Quote :
And FWIW, your eyes don't reference things by points at all. One of the problems of dealing with this kind of AI project is that what people consciously think they are doing and considering, and what information is actually being subconsciously processed are two completely different things.

Col

I kinda beg to differ. I completely agree that what everyone consciously thinks they do is different from what they actually do at the subconscious level. But you can't tell me we don't work on points. Imagine you were placed in a completely white room, with the lighting set up *just* perfectly to make every surface the same exact white. No bumps, no surface changes, 100% flat, everything the same color, and you cast no shadow on any surface... It's a very hypothetical situation, but you know the results because it can be tested in the computer world: you would be completely confused. The only thing telling you that you are on the bottom of the room is the feeling of gravity. You could run full speed from one end, and you would never know when you would SMACK into the other wall. Put just a small handful of irregularities in the room, and you can now tell where you are. These irregularities are reference POINTS.

I see why you think we don't see in points, though. Go out in a massive field with no horizon, only grass below and empty blue sky above, and you can walk in a circle and stand where you once were - based on millions of these reference points, because each blade of grass is different and our mind is powerful enough to compute the differences and say here is where we want to be. I hope this illustrates how we do actually use reference points; the only disadvantage for the AI is that it has fewer points than the player.

So back to the first quote you made, where these 'synthetic points' are considered cheating in your mind. Why would making a single point with minimal information - a position in the world, and a name/id - be cheating? You see a cone and identify it as different from the other cones, on a very subconscious level, and you can estimate where you are by estimating the direction of the cone and how far away it is from you. I don't see how it is cheating to turn the cone into a point of information/interest, since that is more or less what it already is. Surely the brain sees a single cone and can interpret multiple points immediately: at least four on the base and one at the top, and any dirty spots become good points as well. I just want to know how turning that into a point for the AI to detect means the AI is cheating, when the whole point is not giving the AI a direct "GOTO HERE BY DOING THIS" line. That is one of many goals of the project.


Quote from col :
No. an AI that can start from scratch will be MUCH more difficult than one that is given a close approximation as a starting point.

Well, take what I said there loosely - the ending was semi-sarcastic, but mostly pointing out that either route is difficult. I don't see why a learning algorithm starting from scratch would be any different from no learning algorithm starting from an example. I get your mountain example very clearly, but I don't see how having a starting point fixes that issue. Either way, it won't be easy to capture data from a human player driving a track, because I need to try separating the layers of driving: Attempt, Prevention and Correction. The only layer that the AI records is Attempt, stored in little memories/instructions based on reference points. Prevention is second nature, and Correction tries to keep the AI on track when the car starts sliding out from under them. How do you define a braking point? First contact of the brakes? What if it's only used to keep the car balanced over some form of bump? That would be the Prevention layer. That is where it becomes "as difficult as" teaching the AI from scratch. Again, this is used loosely.
blackbird04217
S3 licensed
Already been asked, at the start of this page or the end of the first...

The only question left remaining of how to do this is the reference points themselves - well, the instructions/memories that are attached to them. Of course that is the most important part of the project, but it is also the part that has me scratching my head.

I am fiddling with the idea mentioned above, having a driver race around recording the data - but that seems almost trickier than somehow telling the AI to just go. So yeah, thanks for pointing out the obvious, but I have already been caught up in that knot, though I have most of everything else planned out - in a nice clean way, so the AI is quite independent of the other systems involved. The car needs to talk to the PhysicalSensor every frame (or often), the car needs to be talked to by the AIController, and the GameWorld needs to talk to the AIWorld once at load-up of the track.

But none of this will be set in motion until I can find a way for the AI to be instructed, be it pre-recording a lap or learning from scratch. I like the sound of pre-recording, but I have a lot to look into. How do you signify an event as important for recording? E.g. a driver changes throttle/braking input constantly, but I need to record only the large changes.
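A sketch of the kind of filter I mean, with a made-up dead-band threshold (0.15 of full pedal travel is an invented tuning value, not from any real design): only changes bigger than the threshold count as an "event" worth recording.

```cpp
#include <cmath>
#include <vector>

// One recorded "significant" input event during a training lap.
struct InputEvent {
    float time;
    float throttle;
    float brake;
};

class InputRecorder {
public:
    explicit InputRecorder(float threshold) : threshold_(threshold) {}

    // Called every frame; stores an event only when the input moved far
    // enough from the last recorded value to count as a real decision.
    void Sample(float time, float throttle, float brake) {
        if (events_.empty() ||
            std::fabs(throttle - events_.back().throttle) > threshold_ ||
            std::fabs(brake - events_.back().brake) > threshold_) {
            events_.push_back({time, throttle, brake});
        }
    }

    const std::vector<InputEvent>& Events() const { return events_; }

private:
    float threshold_;
    std::vector<InputEvent> events_;
};
```

With that in place, small pedal jitter is dropped and only the big braking/throttle decisions survive to become memories attached to reference points.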
blackbird04217
S3 licensed
I see. Yeah, I've been thinking that the AI would be 'pre-trained' by watching a developer race the track for 5 laps or so, averaging where and what they do with some magical method that I have no ideas about yet, and using that as their baseline "intent" layer. Meaning that is what the AI tries to achieve, and the rest just happens. This could later be developed into recording the player's laps so the AI takes on a form of the player, though that might be less desirable. The 5 laps I was talking about would be someone very consistent driving to set a good example.
blackbird04217
S3 licensed
Sorry if that sounded smug; it wasn't the intention - more sarcasm, though the last bit does sound smug even when I read it back. I was simply curious.

As for the implementation, I do not expect an easy challenge, as I've said before. But I also don't expect the AI to be challenging to a human either. I originally had different ideas for the project, and I will keep those ideas open, but I have deviated from the original plan for a few reasons.

I agree the brain processes a crapload more than a computer ever can. But we process more from our previous experiences, searching for the one(s) that most match the situation and then learning from those mistakes - all in an extremely short time. As said, I don't plan to end up with OMG AI. As I was saying a few posts up about the layering of decisions, the brain does these layers all at once. It is best described, and likely implemented, in layers because that abstracts things - in practice things can change.

It's meant as an exercise, practice, a learning experience, and most of all an interesting project. If the AI is miraculously amazing then I would be more than delighted, but I do not expect such results - actually I expect the AI will hardly go around the track correctly, let alone at the limit of the car!

I have started building a very, very simple world while I continue developing my information and ideas. The world will consist of a very simple, flat track and several cones, which indicate reference points. The track can be turned off and on, and is mostly there for the human's perspective. I still have a few hurdles and problems to get over, but I will be that much closer once I have a small environment with a car and *very* basic physics. These physics will not include collision for the time being, and will likely not deal with weight transfer. I do hope for some form of friction to test for oversteer/understeer characteristics, but beyond that I don't know how far it will get.

I may well be underestimating things to a degree, but don't overestimate the project here. Even if I am underestimating, I am not worried about doing so; the worst-case scenario for me is that it doesn't work. It's not like a multimillion-dollar project depends on this - if so, I would certainly take a safe route over trying new things.
blackbird04217
S3 licensed
Simple answer for the non-programmer:

Type a bunch of 0's and 1's in the proper order.


Less simple answer, still for the non-programmer:

By coding in a manner that completes the specific goals. The technical side of this is still being researched, and will then be developed. Currently the discussion is going great for the general ideas, and once I narrow things down it will transition into more technical territory. Although, in all seriousness, a lot of technical things have been discussed already in a very non-technical way. This seems to make great sense for everyone.

Are you worried about how it is pulled off? I don't get where the question comes from.
blackbird04217
S3 licensed
Wow! First, I can't believe I missed the wall of text after looking for it twice before this. (The last post on the first page apparently arrived while I was responding to Dygear, and it was skipped each time I searched for it.)

AndroidXP - great input in that wall of text. It was actually where I was heading with my thoughts, except I arrived at a slightly different version of the same thing. I don't know if you saw my post above about the technical ways of doing the sort of thing you were talking about, but regardless I will explain the idea I've had for a little while now, though it will sound quite like your wall of text.

I think that there are several "layers" going on at the same time while driving - perhaps "layers" is not exactly what is happening, but it is a good way to think of it, and better yet it might be useful during implementation. There may be more layers than what I describe, but for now I will keep it simple, I hope: Attempt, Prevention, Correction. I don't think these names quite fit, but I couldn't find anything better, as this is the first time I've written the thought down.

The Attempt layer is where memories and instructions are stored, processed and attempted. We've agreed that at point X we know to brake; that would be included on this layer. This is the driver's overall goal: to follow the ideal line. This layer does not care how much traction is really available, what condition the car is in, or if the tires are warmed up. It just knows, "this is what I want to attempt to do".

The Prevention layer is a bit different, as it knows what is being attempted. But this is the layer saying that the attempt is likely not the best-case scenario. Prevention is where the brain processes actions before they are executed, trying to prevent the brakes locking up or throttle oversteer. The Prevention layer will use the grip levels of the tires and try to keep them just below the limits - not by much, but without pushing too hard.

The Correction layer is what happens when it all goes wrong, which it will. Okay, now we are oversteering; let's figure out why and change something. I think AndroidXP's conditions -> errors -> resolve actions would work perfectly on this level. This level of thinking may also require reaction time / pre-thinking. Near the limits you know the Correction layer may be required, and in certain situations you already know how to react before you need to, in small twitch movements - that would be preparing / pre-thinking. And then you have the moments where, oh my, that was an oil spot and my rear tires just lost grip completely. It will take a moment - not long, but a moment - to react.

So far we have been focusing on driving around the track, but I would likely add another layer: Car Control. What I mean by this layer is: Do I need to shift? Will I need to shift soon, therefore moving my hand to the shifter to prepare? Is fuel okay; should I request a pitstop? Etc.

Car Control might even be the layer where the finalized attempts are computed into actual actions. I did notice there is a difference between where I want to go and what I need to do to get there, and I think that these layers combine to make what I want to do actually happen, even if the input to do so is different. Man, I need to learn how to say what I want without being overly confusing.
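A minimal sketch of how the Attempt / Prevention / Correction stacking might look in code - every field, threshold and action here is an invented example, not a worked-out design:

```cpp
#include <algorithm>

// Invented state the AI's sensors might report; all fields are assumptions.
struct CarState {
    float desiredBrake;  // what the Attempt layer wants, 0..1
    float gripUsage;     // fraction of available grip in use, 0..1
    bool  oversteering;  // has the rear stepped out?
};

struct Controls {
    float brake;
    float steerCorrection;  // countersteer amount
};

Controls Decide(const CarState& car) {
    // Attempt: follow the stored intent blindly.
    Controls out{car.desiredBrake, 0.0f};

    // Prevention: notice we are near the limit and back off before lockup.
    if (car.gripUsage > 0.9f)
        out.brake = std::min(out.brake, 0.7f);

    // Correction: it went wrong anyway - lift off and countersteer.
    if (car.oversteering) {
        out.brake = 0.0f;
        out.steerCorrection = 0.3f;
    }
    return out;
}
```

The point of the shape is that each later layer only overrides the earlier one's output; the Attempt layer never needs to know the grip numbers at all.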

I think we are getting close to something that will be very usable and helpful to me. All the input in here has been great to read and very helpful in deciding what to do.
blackbird04217
S3 licensed
Quote from tristancliffe :I would have thought it would be possible to make an AI system that could use multiple 'systems' for driving with. Perhaps allowing you to turn some off, introduce new ones, or run with several at the same time...

That way you can try several ways of coding the AI, compare each method (not just in AI ability, but also in CPU load), and perhaps learn what is best and why....

Off topic, sort of: Is that what they call OOP, with each 'method' being an 'object'?

That was actually the exact thought that started this project originally. But I didn't find the coders that I was hoping for, other than that TORCS thing, which gives me a wary feeling. My original idea had a few programmers working with each other to create a realistic and challenging AI driver, competing our ideas against each other as we went... But then I realized that I wanted to prove my ideas first. Like was said earlier, I may have publicly mentioned this project too soon.

You're close to the right idea with OOP. Put it this way:

Driver is an object. Player is a Driver, and AIDriver is a Driver, and they can all do the same things if designed that way. So when Driver::GetInput() is called, the G25/KB or whatever is polled in the Player's implementation, while the AIDriver performs its own logic and returns the input. The code calling Driver::GetInput() doesn't care where the input comes from, as long as it gets it. In much the same way, in your idea above, OOP could be used to create AIAggressive, AICoward, etc.
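A minimal sketch of that shape in C++ - only Driver::GetInput comes from the post; the input fields and returned values are invented placeholders:

```cpp
// The common interface: the simulation only ever sees a Driver.
struct DriverInput {
    float steer;
    float throttle;
};

class Driver {
public:
    virtual ~Driver() = default;
    virtual DriverInput GetInput() = 0;  // caller doesn't care who answers
};

class Player : public Driver {
public:
    DriverInput GetInput() override {
        // Real code would poll the G25/keyboard here.
        return {0.0f, 1.0f};
    }
};

class AIDriver : public Driver {
public:
    DriverInput GetInput() override {
        // Real code would run the AI's sensors and decision layers here.
        return {0.1f, 0.8f};
    }
};

// The simulation loop works through the base class only.
float ThrottleOf(Driver& d) { return d.GetInput().throttle; }
```

Swapping in an AIAggressive or AICoward is then just another subclass; nothing in the simulation loop changes.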
blackbird04217
S3 licensed
Whereas I would like to aim this question at everyone in general, I don't think that will be possible, so I turn to the programmers around to see if they have ideas.

Two cones on the outside of a corner show where the corner starts and ends. One cone at the halfway point on the inside of the corner shows the middle. With nothing else, including no other traffic, how can I go about solving this as close to the way a human would? (See the left side of the attached image; the right side is for those with a lack of imagination. It should be seen, at least mostly, as only dots - and it is not required that this is a 90-degree turn.)

You can assume that the AI driver has been through here before, so maybe each reference point tells the driver to do something. But in technical terms, what? And how? This is where I am stumped now. I do have a solid structure, I think, for the rest of the project. The AI sensors are the easy part. So you have these points (as estimations, of course), and you may have memories/instructions attached to them - but how?

Here are my thoughts, mostly from this thread.

Take Dot1, the closest outside point, which is also approximately where the driver should be anyway based on previous knowledge and car placement. That point will tell me to brake when I get within a certain distance of it, say 70 meters. So I am driving at X speed, closing in on the dot. Once I estimate I am closer than 70 meters, I begin braking as hard as I can. My next instruction tells me to turn, and likely the direction I want to aim, so when the distance drops below that threshold I begin turning. (Some of the turning will go into 'natural behavior', which keeps the car under control at the limits.) The next point would be the apex point; I try to get as close to it as I can given the circumstances - speed, traction, lucky/unlucky corner estimations... But once that point is 90 degrees from my driving perspective, I know it is time to start unwinding the car to set up for the next corner: get on the throttle again and behave as if I were on a straight.

Okay, so that was literally the first time I thought it through and wrote it out, and I don't really know how it sounds; I will re-read after I post. But you can see the idea. The driver has a list of instructions to follow. Each instruction is a major change in car behavior, attached to a reference point. Each instruction should also be attached to other reference points, in case one is blocked by another vehicle, so the instruction can still play. It should be noted that distance is not always the best trigger for an instruction; possibly even have 'ranges' for it to play out on, in both distance and direction/angle. Theorizing here.
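To make the idea concrete, here is a rough sketch of instructions attached to reference points, firing off estimated distances. The thresholds, the Action names, and the priority rule (later instructions in the list override earlier ones) are all my assumptions, not a settled design:

```cpp
#include <vector>

// One instruction tied to a reference point: "when my estimated distance
// to that point drops below the trigger, this is what I'm doing."
enum class Action { None, Brake, TurnIn, Unwind };

struct Instruction {
    int    refPoint;         // index of the attached reference point
    float  triggerDistance;  // fire below this estimated distance (meters)
    Action action;
};

// Walk the plan; instructions later in the list (tighter triggers)
// override earlier ones, so at 35 m "turn in" wins over "brake".
Action CurrentAction(const std::vector<Instruction>& plan,
                     const std::vector<float>& estimatedDistance) {
    Action current = Action::None;
    for (const Instruction& i : plan)
        if (estimatedDistance[i.refPoint] < i.triggerDistance)
            current = i.action;
    return current;
}
```

Because the AI feeds in *estimated* distances, the same plan naturally fires a little early or late on each lap, which is exactly the human-like imprecision being aimed for.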

Any ideas? The rest of this seems doable; this is my main challenge. And remember, I have no objections to training the AI by having a human make a few runs, with the AI attached to the car gathering information about the control changes at specific reference points. But a problem exists there too: I want a few big instructions, not every small change in throttle or braking.

I am trying to make a list of what is important to this project and how to go about solving it. I will make a post again shortly.
blackbird04217
S3 licensed
I also wanted to mention my plans for your clouded-vision idea. I 110% agree that situations need to make the driver estimate more or less accurately, and that is planned in this project. A situation where the driver has lost control will overload the vision sensor, and its estimates will likely be very wrong. I plan for this to be the area where the AI can be made more or less difficult, simply by changing what affects the driver and by how much. A driver behind adding pressure can decrease the senses as well, and my plan is for the AI to have things to pay attention to...

I believe it was said somewhere in the thread, but the driver will need to turn their 'head' to focus on different objects, so if they want to check the mirrors, other parts of their vision temporarily become less accurate. I am still searching for a way to pull this off like I want, because it is important to me to get this as accurate as I can. Meaning: we humans can shut our eyes for a split second and still know what the scene will look like when we open them; we pre-computed the positions based on our knowledge of the previous two (or more) interpretations. If you did this with the brakes on, or under hard acceleration, it is much harder to line up your vision with what you had expected upon opening your eyes. Of course a driver doesn't close their eyes; I am mostly referring to the knowledge of our blind spots. Writing this post put a few things in perspective for me, as far as the prediction thing goes.
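That prediction might look something like dead reckoning on a reference point's last two sightings - a sketch with invented types. Note that under braking the constant-velocity assumption fails, which is exactly the "harder to line up your vision" effect described above:

```cpp
// Relative position of a reference point as the driver last perceived it.
struct Vec2 {
    float x, y;
};

// Extrapolate where the point should be next, assuming it keeps moving
// (relative to the driver) exactly as it did between the last two looks.
Vec2 PredictNext(Vec2 previous, Vec2 current) {
    return {current.x + (current.x - previous.x),
            current.y + (current.y - previous.y)};
}
```

While the AI's 'head' is turned toward the mirrors, the world model could run on these predictions instead of fresh sensor readings, and the error between prediction and reality on the next real look becomes the "less accurate" penalty.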
blackbird04217
S3 licensed
I am not assuming that humans are incapable of knowing the physics at all; actually, what you do with the car depends on your knowledge and assumptions about the physical world around you. However, you do not know numbers, you know relations. You know gravity pulls down, and that going over a crest will give you less traction. This is based on your experience of a car getting loose in that situation, but you can apply it any time in the future that you interpret a crest situation.

The act of removing the dependencies between the AI and physics systems is mostly because of two things: I feel they should not be related, and a lot of common AI techniques rely on this dependency. You are completely correct in saying we know the physical world around us, but I think you are wrong about our knowledge of physics. Take the golf example, although let's simplify it a bit.

A golfer hits a ball. Depending on the angle of the swing, the ball will aim towards the left or right. Depending on how hard they hit the ball, it will go far or stay near. And finally, depending on the length of contact with the club, the ball will travel higher and stay airborne longer. (I don't actually play golf, so I could be wrong; either way this still works for my example.) So a good golfer can take these three actions and use them to get a hole in one every time. Then we add some wind: a golfer who has experienced wind will know to aim away from the hole to account for it, whereas another golfer wouldn't have that experience yet.

I believe this is set up exactly like your situation, just slightly more detailed. Now let's move from Earth to, say, the Moon, or Mars, or somewhere else... The gravitational pull is different; the air resistance is different or non-existent. If we took an identical golf course in both places, the golfer who had never played in the new environment would likely overshoot the hole even if aimed correctly. But as you stated, they would learn the new physical environment, probably faster than you or I think possible. BUT here is the point that I am making:

Even with the different environment, the controls the golfer has available are still identical. Aiming the swing changes the direction the ball travels, hitting the ball harder increases distance, and keeping the ball in contact with the club longer increases height. So the physics completely changed, but the player still knows how to play the game; they just need to take a few shots to get settled in the new physical environment. Does this explain what I am trying to do a bit better?

As a racer we know what to do in understeer/oversteer situations depending on what caused them in the first place, and really we could move to a slippery surface, a grippy surface or a low-gravity situation and our actions would still be similar; we still only have a steering wheel and pedals to act with. The physics is irrelevant to our knowledge, but *everything* to our experience.
blackbird04217
S3 licensed
Quote from Dygear :I can only offer my prospective, only hope that it helps.

Now I'm going to read AndroidXP's epic post, so expect some edits here.

[edit] Here's a crazy idea, what about learning from watching other drivers on the track? I do that!

Learning from watching people drive isn't the most useful. Knowing what they were thinking and 'how' they tried something is largely more useful. The problem is finding people who know how to explain this, and being able to read the ones who can't. It is a known fact that reference points tell us where we are and what to do, but beyond that, not much more can be said from my end. I wouldn't mind watching a replay of someone driving a track for the first time, if they were to give a solid writeup answering several questions I could come up with, but like I said, the problem is finding people who can explain, or can explain well enough that I can understand. Basic English is not what I mean by that statement either, though I would be willing to bet most programmers already knew that; I just wanted to clarify.
blackbird04217
S3 licensed
Quote from jtw62074 :Blackbird, do you have any experience or interest in artificial neural networks (ANNs)?

Ok, please read all of the following paragraphs before making a judgment, as there are some big statements that I don't want taken the wrong way. First, though, to answer your question: no, I do not have any experience with ANNs, and I am not sure there is much interest in them for this exact project. While I have been considering having the AI learn, it is beyond the scope of this project in terms of both desires and knowledge. Though I do not consider knowledge a hard limit in any situation; I just need to read and learn, which does not scare me at all.

Ok, but this is the kicker, and I don't want people to run away thinking "Noob, run away!". I have very limited AI experience, though if you read my document and most of the thread you can see I have knowledge of the common techniques; I just don't have the experience in practice. I can only think of a few projects where I added AI, one being my RacerX project a few years back. RacerX had a track that could be created out of tiles with 90-degree turns. I wanted an AI driver that could traverse the track regardless of its shape - notice 'quickly' was not part of my goal. So I took a conventional approach, baked information into the tiles, and had AI following a track regardless of shape. Even though the AI tended to come to a stop before turning, I did not continue editing and adding; my goal was complete, and I probably learned a few things on the side.

Now the important part, the low-level stuff that will have a lot of non-programmers running if they haven't already. I may not have vast amounts of experience working with AI, but I do consider myself very knowledgeable about general code design. My favorite part of coding is designing the core from the foundation up, then building it and watching great things happen. Not always good things, but it is great when the reasons why something doesn't work finally click. My aim for this project has never been fast AI. It is simply about running away from the normal AI approach to answer the questions: "Why are AI and physics almost always combined? Why does the AI need more accurate information than a human would get? How can the information be detached, and limited? Can the AI still traverse a track with limited knowledge?" I think you get the idea. So, being completely honest, I do not care if this fails, horribly horribly fails.

Assuming I get some of my questions answered - those above and many more in my head - I will call this project a great success. Designing the code structures is my favorite part, but knowing why a structure works, and more importantly how it does not work, is the reason it is my favorite part. If someone came and answered my question with "The AI needs to know about physics because...", I would be right back immediately with "Why?", just like my little niece in that toddler stage: why, why, why, how?

So the goal is to learn, more than to create AI that work. And even if the AI work I don't see them being fast unless I spend much more time on the project.

Quote from jtw62074 :
You might find this is all a lot more difficult to do than you're anticipating. You don't know yet what troubles you're going to run into that will soon be eating up all of your time. So may I make a suggestion? Try using slip angles, rpm, position, and whatever else you can get directly at first. If you're happy with the results you could then expand this into using sound to determine rpm and understeer/oversteer and so forth to emulate human perception. My worry is that if you do this right away, you're adding a complex layer on top of everything right away which has a high chance of seriously clouding issues and slowing you down. For example, if the car is always turning the wrong way at the corner, is it because of a miscalculation in the reference point stuff or is it something funky in this "human perception" layer? When doing experimental stuff like this I usually start simple and go from there. With this approach I get a much greater insight into what the system is really doing (compared to what I thought it would do) and am more likely to see where things are wrong or can be improved just by watching what the car is doing. You start to see a lot of things other people won't.

Okay, now time to switch into another gear and talk about the project itself. I don't plan on adding sound interpretation for the tires; the physics system will interact with an AI sensor, but the AI will not know the numbers involved or know about the physics system itself. This will likely make sense later. As far as starting with the normal layer of information and then stripping it away, I feel that would be too much to do. As you can see, the project is experimental, and from the ground up it uses different information than a traditional system. You are correct in saying your approach would pinpoint issues, and if I were getting paid to do this, or fast AI were a priority, then I would almost *need* to go that route to keep fixing the correct problems. However, from the above you know that this is about structure, the limitations of the design, and simply learning about the ideas here. In that situation I think building the system as I see fit, and letting incoming issues decide where the time gets spent, will be manageable. I am quite confident that I can pinpoint the reason something is misbehaving even in a multi-sectioned project, though I will not deny the truth: it would be easier to do so one piece at a time.

I have started the designs on paper for how this will be achieved, if pursued into the development stage. The huge unknown at this moment is the reference points. What do they contain, and how do they contain it? Direction and distance are the easy part, with distance being estimated using a random error based on the actual distance and the angle from the view direction, making it harder to estimate distances for objects further away or in peripheral vision. But that information alone does not tell the AI driver anything. So possibly the driver sees a reference point and looks into their memory, which has some sort of script or input knowledge. This area is fuzzy because it is at the limits of the current design. So the thread will certainly become more technical, and programmers can see what I am actually trying to do. Also, non-programmers, please don't hesitate to add anything you want about "how you drive". I am most interested in learning how the human drives, as I am trying to mimic the process.
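As a sketch of that estimation - the 5% base error and the "up to 3x worse at 90 degrees off-center" scaling are made-up tuning values, purely to show the shape:

```cpp
#include <cmath>
#include <cstdlib>

// Estimate a distance with noise that is proportional to the actual
// distance (so absolute error grows with range) and grows with the
// angle away from the view direction (worse in peripheral vision).
float EstimateDistance(float actualDistance, float angleFromViewDirDeg) {
    // Uniform noise in [-1, 1].
    float noise = 2.0f * (std::rand() / (float)RAND_MAX) - 1.0f;

    // 5% base error, scaling up to 15% at 90 degrees off-center.
    float errorScale =
        0.05f * (1.0f + 2.0f * std::fabs(angleFromViewDirDeg) / 90.0f);
    return actualDistance * (1.0f + noise * errorScale);
}
```

Difficulty tuning then becomes a matter of scaling the error term up or down, e.g. inflating it when the driver is under pressure or has just lost control.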
blackbird04217
S3 licensed
Quote from Dygear :Now I'm going to read AndroidXP's epic post, so expect some edits here.

Post? Don't you mean posts? There has been quite some discussion going on. By the way, one thought I had was for the AI to pick up hints from what the player does. I am thinking that during 'development' time I would race the track with some mode where the AI records my input changes relative to the reference points, and then the AI simply plays back that memory each time - though it would obviously play out differently due to the estimation error in the reference points. I have yet to decide on the technical part here.
blackbird04217
S3 licensed
My guess would be to take the interpretation as you want it, since it is a competition. I could be wrong though, since I am not the one running it. Props on the idea though; sounds interesting, and good luck with participants. This should create some interesting layouts.
blackbird04217
S3 licensed
Quote from Dygear :I'm basically asking, what input from the 'world' are you going to use. I might not know my actual X and Y in the track, but I know where I'm relativity speaking. If I was an AI, I would know where I am. So I would take that into consideration, as well as knowing what turns are coming up next so I can get the best line though all of the turns. Also, forget racing lines ... They don't count for me, and your AI might just find a better one, so don't use the one in game. I use breaking points only to fine tune where I press the break, I can get around a turn without one just fine, I might be slower, but I'm still getting around it.

Why does the AI need to know their exact X,Y on the track if you don't? I don't think they do, but I think, like you said, they need to know their position - relatively speaking - to each object around or near the track. I will place a large wager on the fact that you use reference points for more than just braking and turning. And part of that wager extends to the fact that you will not get around a corner just fine without these reference points.

I made an example above:
Quote from blackbird04217 :
Think of the two corners after the long stretch in BL1 (normal direction). Without starting LFS I can envision this perfectly, and I would bet you can too. You know you have a right hand turn coming up because you remember the bridge and the hill telling you a right comes after the straight. (Having never played LFS, with no mini-map, you would have no idea what comes next.) That, combined with knowledge of the wide corner approach, tells you to get to the left of the track. You see the small meter signs counting down, and start braking hard at the appropriate spot - which again is from those ref points. Then you see the tires, and turn in towards them.

Now, imagine the world being completely empty; no track, tires, distance markers or bridge. Drive the corner knowing your exact position, but not knowing anything else. You want to get from position <X,Y> to <X+10, Y+10> but you can't drive straight there. You can't do it; well, with math you can, but that isn't how you drive. Driving is all based on reference points and estimations.

The Math.Random() will be called in the pre-AI step. It will be in the vision sensor, where the AI is estimating the distances to each object. I am hoping these estimations will allow the AI to brake at a slightly different spot each time. Have you watched the super 'judgment' of the AI in LFS, or in 90% of games? They turn in, brake and accelerate at the same spots on the track, never making a mistake (until you enter traffic, which is not being talked about yet, so irrelevant here).

I don't think your intention was to say that reference points are unneeded, but I think that reference points are 'all' you work with. The only difference is the number of points your mind handles vs the number that the AI will handle.

Quote from Dygear :
Here's an experiment, what's your favorite track? BL1? Go to BL1 and just drive at a pedestrian pace and see if you can do a whole lap. How much far do you get, I think you might be surprised ... I bet you could do better in real life feeling the tilt and pitch of the car knowing where that is. Your AI is going to 'feel' more then the real user is going to 'feel'.


And you can emulate estimation of the mind in other ways. Build them to be perfect, you can add imperfection later.

I almost always drive at 'pedestrian pace' when learning a track for the first time. I don't know what I am supposed to be surprised about here though, so I would like to know what to look for and I will certainly be up for trying it!

As for emulating the estimation and building the AI to be perfect, adding imperfections later: I don't know how much you understand about the project at this point, or how much you have actually read of the thread or the document that I wrote, but I am trying to go against the normal way of thinking about AI. I haven't started the technical designs of this system - which might be useful for the programmer side. I don't want to add imperfection later; the imperfection in the estimation will happen based on some sort of algorithm or setting tied to the AI's skill and such.

Quote from Dygear :
That's a great point. I 'feel' the areo grip from the car by comparing it to the mechanical grip I feel from a car without wings, or by taking off some wing, going around a turn, putting on some wing going around a turn and seeing how much faster I can go. Being a driver is much more then just knowing where to break, it's also knowing how to setup your car. Knowing what you can do to go faster, but changing the setup of your car.

Some times I don't cope with the down force loss, I don't care, I know it does not matter some times and I go with it. So long as I don't feel that the down force loss is going to cause me to lose control or go off the track. But that's animation, your AI is going have to think also where it's going to be, not just where it's been and where it is.

I have other opinions on a few things here. One being that you can feel the aero on the car. I agree existing knowledge tells me that more speed = more downforce, therefore more traction in most situations, but that you can feel it is where I disagree. Sure, you can feel it in G forces or speed increases around a corner, but that you can read it, feel it change, etc. is something I don't think you can. (You can to a degree, by knowing that all of a sudden you are understeering because you read that your front wheels lost traction. You can associate the loss of traction with the aero by knowing you just got behind a car.)


Quote from Dygear :
I actually meant that in more then just the way you took it. I see the total grip of the tire at any given time as a resource, the total grip vs the amount of grip used to accelerate, break, turn. So, I can use the tires grip to turn, and accelerate but I can do either of that to it's maximum because I'm using the grip to do two things at once. Some times, it's for the best, some times it's not. Try thinking about the balance of the car as a resource, you can stabilize a car around a turn by adding break along with with turning. That's alot more abstract then I'd like it to be because I can't tell you why this works.

I see. I understand now what you mean, and the AI should take that into consideration, judging their car's traction limits at each tire and trying to keep the tire "AT LIMIT" as much as possible without pushing beyond it.
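The 'grip as a resource' idea is the classic friction-circle picture, which could be sketched like this (the 95% threshold below is an arbitrary placeholder, not a tuned value):

```cpp
#include <cmath>

// Fraction of a tyre's total grip currently in use, treating grip as a
// single shared resource between braking/accelerating (longitudinal)
// and turning (lateral) forces -- the friction-circle idea.
double gripUsed(double longForce, double latForce, double maxGrip)
{
    return std::sqrt(longForce * longForce + latForce * latForce) / maxGrip;
}

// The driver wants this near 1.0 without going over: below it the tyre
// has spare grip, above it the tyre is sliding.
bool atLimit(double longForce, double latForce, double maxGrip)
{
    double used = gripUsed(longForce, latForce, maxGrip);
    return used > 0.95 && used <= 1.0;
}
```

This also shows why braking while turning can work: the combined vector, not either input alone, is what has to stay inside the circle.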

Quote from Dygear :
To really make a good AI, you need to read some books about the theory of driving. Good drivers will make a good AI.

I have been reading several books on racing, as far as the driver and basic setup skills are concerned, for the last few years.

Thanks for the interest, hope you offer more
blackbird04217
S3 licensed
Quote from Dygear :So, here is my question to you, what reference points are you going to use, because the human mind uses much more then just breaking markers.

I would like to know more about what you mean by this statement. Sure, I understand it to a point, but I don't think the human uses anything more than reference points. I don't know if you read the part where we use different types of ref points in different ways, but reference points are the only information about the track that we use.

You are completely correct, the human has 'hundreds' of points along the left and right sides of the track. And these points are mostly used only to know how wide you can go without hitting grass, where to pass a car, and, as you said, the angle of attack around a specific corner. But all in all these are still reference points, and in the human mind they are only used by estimation. It is this estimation that causes mistakes, and also allows the car to regain control by detecting the mistake.

--------

You did add an entire element that I overlooked during my investigation into this though; resource management. You are completely correct that during a race, especially a long race, the driver should be aware of their tires, fuel and other resources that have wear.

I also overlooked, in a sense, the aerodynamics of some cars. I am trying to remove as much physics knowledge as possible from the AI. I would like to believe simply 'knowing' an estimate of how much traction each tire has would be enough for the AI driver. I will have to consider a way to keep the AI detached from the physics system while gaining a basic understanding of a car with downforce - that could be tricky.
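One hedged sketch of keeping the AI detached from the physics: the sensor could report a grip estimate already scaled by an aero term, so the driver only ever 'feels' more grip at speed without seeing any forces. The names, the speed-squared form and the draft factor are all assumptions of mine:

```cpp
// Let the AI "feel" downforce without knowing the physics: the sensor
// reports a grip estimate already scaled by an aero factor, so the
// driver only sees "more grip at speed", never the real forces.
// Constants and names are illustrative.
double perceivedGrip(double baseGrip, double speed, double aeroFactor)
{
    // Downforce grows roughly with the square of speed; the AI never
    // sees this formula, only the sensor's final estimate.
    return baseGrip * (1.0 + aeroFactor * speed * speed);
}

// Draft/slipstream: following another car cuts the aero contribution,
// so the perceived grip drops without any physics knowledge leaking in.
double perceivedGripInDraft(double baseGrip, double speed,
                            double aeroFactor, double draftLoss)
{
    return baseGrip * (1.0 + aeroFactor * speed * speed * (1.0 - draftLoss));
}
```

The sudden drop in the sensor's estimate when entering a draft is then something the AI can react to, the same way a human notices the understeer.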

How does a human mind read how much downforce is on the car at a given time? How do we automatically cope with coming behind a car and losing the downforce?

These are a few questions anyone can answer, though it will be hard because it is probably mostly subconscious.
blackbird04217
S3 licensed
You would essentially need to program sound transferring over the net. Each player that wanted to tune into the channel would download this separate application. It wouldn't need to do anything with InSim at all, actually. But transferring sound like that is not a trivial task, and it would take a lot of bandwidth when tons of players join in. Voice transferring works a little differently because it is fine to have low quality; missed bits here and there are generally not a terrible thing, and usually go unnoticed.

With music though quality matters more, I'm not sure why but listen to music over Ventrilo, Team Speak or Skype... You will understand what I mean.

----------

The other alternative is to write this application and have all the songs and voice files available for download. Watch for copyright infringement here, obviously. In this situation, all the people who want to listen to the 'radio' would have the files on their PC, and would only get a few networked messages when they connect to see what song is playing and the current spot in that song...
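Those networked messages could be as small as a song id plus a playback offset; a minimal sketch, with field names invented for illustration:

```cpp
#include <string>

// The sync message a listener would receive on connect: which song is
// playing and how far into it the "station" is. The client then seeks
// its local copy of the file to that offset. Field names are mine.
struct RadioSync {
    std::string songId;     // which local file to play
    double offsetSeconds;   // current position within the song
};

// Given when the song started (server clock) and the current time,
// work out the offset a newly connected listener should seek to.
double computeOffset(double songStartTime, double now)
{
    return now - songStartTime;
}
```

Everything heavy (the audio itself) stays local; only this tiny sync state crosses the network, which is why this version is so much cheaper than streaming.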

---------

I don't think this is worth it in the long run unless you're planning on making some form of radio station application where listening to it gives the player a solid form of entertainment. I am not trying to be negative, but consider whether people will get an entertainment boost from this when they can already listen to WMP or whatever.
blackbird04217
S3 licensed
I've seen TORCS but to me it didn't seem all that active - I could be proven wrong.

I haven't seen the other, which I am looking into because it seems to me a lot more active.
blackbird04217
S3 licensed
Heh, I understand. And I have put a lot of thought in over the few days that I've been thinking about the idea. I wouldn't mind using LFS if someone wrote all the stuff that dealt with it - i.e., if the information my sensors need were available to me, and the AI outputs automatically worked, then I would be fine. But I see too many hurdles with using LFS for it, hurdles that wouldn't be there with just a small 2D app or something similar.

As far as the language, sorry for my choice. It's just that I know C++ in and out. I know a little C# as well, but dealing with it sometimes annoys me. I like the freedom, I guess.

As far as the technical stuff, I have some of it in my head. The biggest hurdle is converting the reference points into usable data. I may need to add an 'ideal line' that is only used to tell the AI "this is what you want to be AIMING at", although I have a feeling that then the AI isn't really using the reference points and the idea is back to square one. I know there must be a solution using only the edges of the track; maybe the AI sensor will always find the 4 nearest track points, 2 on the left and 2 on the right, and use these to guess at where it should be going if the reference points are not giving any information...
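That nearest-track-points fallback might look something like this, simplified here to the single nearest point on each edge rather than two; all names are illustrative:

```cpp
#include <cmath>
#include <vector>

struct Vec2 { double x, y; };

static double dist(const Vec2& a, const Vec2& b)
{
    return std::hypot(a.x - b.x, a.y - b.y);
}

// Index of the point in `edge` nearest to `car`.
static std::size_t nearestIndex(const std::vector<Vec2>& edge, const Vec2& car)
{
    std::size_t best = 0;
    for (std::size_t i = 1; i < edge.size(); ++i)
        if (dist(edge[i], car) < dist(edge[best], car))
            best = i;
    return best;
}

// Fallback steering aid: when no detailed reference points are telling
// the driver anything, aim midway between the nearest left-edge and
// right-edge track points.
Vec2 fallbackAimPoint(const std::vector<Vec2>& leftEdge,
                      const std::vector<Vec2>& rightEdge, const Vec2& car)
{
    Vec2 l = leftEdge[nearestIndex(leftEdge, car)];
    Vec2 r = rightEdge[nearestIndex(rightEdge, car)];
    return Vec2{(l.x + r.x) / 2.0, (l.y + r.y) / 2.0};
}
```

This keeps the AI roughly on the tarmac between reference points without ever handing it an ideal line.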

The technical design of the reference point has not been done yet either. But I know there will be different types holding different things. The most basic holds a direction and length, separated out only for a more optimized system. The detailed points would have some sort of memory or instructions attached, which the AI could change when it feels something went better.
blackbird04217
S3 licensed
Quote from obsolum :Oh how I dread the holidays... I wish I could just sleep through the whole thing and wake up somewhere mid-january.

And here I am trying to find a way so I don't need to waste 6hrs a day sleeping . . . Grr, wonder if there is a way . . . "hmmm"
blackbird04217
S3 licensed
Be with family, watch the battles commence... Hopefully get a few peaceful programming moments in. Possibly go out for a hike? HA Did enough of that **** this year!
blackbird04217
S3 licensed
Ok guys, for those that are interested - which a few obviously are, but not as many as I was thinking; must be the programmer section? Should I ask to move this to Off-topic? As my first post says, I didn't know where I should put it.

Anyways... For those interested, I have attached the PDF document. It's a little lengthy, but it describes the thoughts I've been having. If you've read the thread you might find a few new things, but you'll likely know a lot of what is in it. This is not a technical read, and is actually meant for the general public. I am now starting on the second portion of the document, which will go into greater detail on the technical requirements and math of the project. I expect that this will answer my question of whether I should attempt this project or not.

Please post any thoughts you have on how you think AI gets advantages over the player, and anything related.
blackbird04217
S3 licensed
Quote from AndroidXP :How so? Set up a separate PC with the AI that joins you in a LAN game. Whoop, there is your "AI" driver.


You know, this might actually be where we differ. From that comment it seems you're thinking about this whole idea very abstractly, more from a conceptual point of view, whereas I tend to slip directly to concrete programmatic solutions (not saying that either is better, just that we apparently tackle this topic from a different point of view).

I never thought about someone using it as a LAN AI. Though that is kinda interesting, I still don't know if I wanna face all the challenges of integration with LFS; like I said way above, making it work with LFS would likely be as challenging as making it work in a simplistic environment.

Yes, I am trying to keep it conceptual still. Like I said, I wanna try something new, not the old proven techniques. And it's more natural to try something new from the human perspective, especially since I am trying to eliminate the cheats/tricks that racing AI has developed over the years.

My objective is not currently for an optimized algorithm for use in racing games.
blackbird04217
S3 licensed
Quote from tristancliffe :How this works in terms of programming I have no idea though. I can only discuss, in very vague terms, the human side of it.

It's the programming side that is gonna make me wonder about following through. But from a human standpoint I have a question for you.

On a small track - FE1, SO3, AS1 and such - how many reference points would you think you know of? I mean, in general, for the ideal situation, with no car around you to obstruct your view.

My guess on a small circuit is probably 75 to 80 points, but I could be wrong.

I have been thinking of how to do these reference points and I thought of something based on what you've written above somewhere.

My original thought had "Detailed Points" close to an action - something that you look at; "General Points", which are far away - you usually catch these with your peripheral vision, but you use them at moments when you can't see a Detailed Point; and then "Track Points", the left and right edges of the track, which are only meant as guides. I want to make the AI so that these points are only used to see if it will hit a wall, or to know if it can move left/right more. But I don't see the left/right points as detailed in any way. If a curb starts near a turn, that would be a detailed point, even if it is near a track point.
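The three kinds of points described above could be sketched as a simple type, purely as an illustration of the taxonomy, not a committed design:

```cpp
#include <string>

// The three kinds of reference point: track points are only guides for
// track width; detailed points are what actions hang off; general
// points cover the gaps where no detailed point is visible.
enum class RefPointType {
    Detailed,  // close to an action, looked at directly
    General,   // far away, typically caught in peripheral vision
    Track      // left/right track edge, width guide only
};

struct ReferencePoint {
    RefPointType type;
    double x, y;        // world position (the AI only ever receives an
                        // estimated direction and distance to it)
    std::string note;   // e.g. "curb at turn-in"; empty for track points
};
```

Splitting the types up front also keeps the cheap track-edge checks out of the more expensive detailed-point handling.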
blackbird04217
S3 licensed
Quote from AndroidXP :Oh of course you do. But you also plot a certain line on the track that you know (from experience) is the best line to follow (Quote: "[...] distance from my line [...]"). By that I don't mean you visually imagine the ideal line as if you pressed '4' in LFS, but you subconsciously know this line and you follow it lap after lap to the best of your capabilities. Since you can't just visually plot the line with colours indicating when to brake/accelerate, you use reference points on the track to give you hints when to do what (brake, turn in, etc.). Extensive experience also lets you drive the track when you're off the line (due to an obstacle or driving error) and you know from the learned car behaviour what the most efficient path to return to the line is.

So you do have an internal line you know to follow, why can't the AI have one? If you do give the AI this line (since a human has it in his mind) then all you use the reference points for is storing the data of when to engage what input to make the car follow that line. Storing that abstractly as "start braking as soon as the cone leaves my field of view" or "the data on my ideal line says start braking now" is just substituting one way to encode data for another.

BINGO!

But not quite there. I think this will help: the point is, maybe the AI gets this "best line" as reference points as well, just for the knowledge of "that is where I would be under ideal circumstances". However, the ideal line shouldn't contain information about braking/throttle and such - that is where I feel the AI would stop making "observation mistakes" and "estimations". Sure, we know the best line because our brain knows to take a corner wide to keep moving fast, or to take a late apex because of the long stretch after it. But that line is not our visual cue for braking, accelerating or turning - unless you press 4 in LFS and watch it.

------

Umm, I missed a post earlier, just before tristancliffe's first. And it wouldn't work: I don't have line of sight using the LFS data; there are hills and cars. The cars I could kinda work in. But it was the last comment, on reading the sound data of the tires, that got me. You write me some code that opens the mic, records LFS at that moment in time, processes the data and gives me a value stating UNDER LIMIT, NEAR LIMIT, AT LIMIT or OVER LIMIT : / Because that's scary to me, trying to get that info from LFS.
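For what it's worth, the four-state output itself is trivial once *some* sensor produces a continuous grip-usage number; a sketch with placeholder thresholds (the hard part, as said above, is getting that number out of LFS at all):

```cpp
enum class TyreState { UnderLimit, NearLimit, AtLimit, OverLimit };

// Collapse a continuous grip-usage figure (0 = coasting, 1 = the
// tyre's maximum, above 1 = sliding) into the four coarse states. The
// thresholds are arbitrary placeholders; the point is only that the AI
// receives a coarse state, never the underlying physics number.
TyreState classifyGripUsage(double usage)
{
    if (usage < 0.85) return TyreState::UnderLimit;
    if (usage < 0.97) return TyreState::NearLimit;
    if (usage <= 1.0) return TyreState::AtLimit;
    return TyreState::OverLimit;
}
```

Feeding the AI only this coarse state keeps it detached from the physics system while still giving it the same kind of "the tires are squealing" cue a human gets.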

I don't see how sharing it would be a benefit to anyone though, as it wouldn't control an AI driver.