It's simpler than that (I think; I'm just starting to learn Visual Basic, so not really sure).
Couldn't you make it so that if they hear the sound of the tires scrubbing, they push no harder, or if they hear the full-on slip they back off? (They are sound files, so when one is triggered you have them take appropriate action.)
There is no way to just detect when a sound file was played. All you get is the end result: the audio stream generated by LFS, which is a combination of various, potentially heavily distorted or completely artificial sound bits. Good luck matching that output to one specific sound file.
I don't have "sndskid" to check if it is enabled or not. I can't magically do things with sounds. Since you are new to programming, here's a quick lesson you will learn shortly: you can only work within X constraints, not X+1. Meaning that I can only do so much with LFS; I am limited to InSim, OutSim, and OutGauge for detecting race restarts, car position, etc. I am limited to a layout file for reading the information for my reference points, and also limited by PPJoy for the virtual controller.
While I am making my AI work within tight limits, using LFS tightens those limits a bit more; though it is worth continuing to try because of the physical accuracy in LFS and its potential. That said, sound processing as you are requesting would need to be done more like the following:
Get sound input from line-out or something that records the speaker output. Kind of like a microphone, but on a different channel.
Take this raw sound data and compare it, using some tricky algorithm, to a known skid sound, to detect which tire(s) are skidding and by how much.
Then hand the AI that value through its physical sensor; a value that a physics system could simply tell me. One that I can't currently get from LFS, although I am hoping to find a way to fake it well enough with the information LFS does give me.
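For what it's worth, the "tricky algorithm" step would, in its most naive form, be a cross-correlation of the recorded audio against a known skid sample. A toy Python sketch (function name invented; as noted above, matching LFS's mixed and distorted output this way is unlikely to work in practice):

```python
import numpy as np

def skid_similarity(stream, template):
    """Normalised cross-correlation peak between a chunk of recorded
    audio and a known skid sample. Both inputs are 1-D float arrays;
    the normalisation over the whole stream is crude but enough for
    a sketch."""
    stream = stream - np.mean(stream)
    template = template - np.mean(template)
    corr = np.correlate(stream, template, mode="valid")
    denom = np.linalg.norm(template) * max(np.linalg.norm(stream), 1e-12)
    return float(np.max(np.abs(corr)) / denom)
```

A score near 1.0 would mean "this chunk looks like the skid sample"; in reality, engine noise and pitch-shifted playback would swamp such a simple matcher.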
It's not about VB or C++, it's about knowing programming. The language just dictates the syntax - a good programmer can learn pretty much any programming language with little effort if he wants to. In the end it's all the same thing anyway, just with different comfort levels and gotchas.
VB is probably a good starting point for a newcomer, as it's pretty easy to read and doesn't put all that many pitfalls in your way. If you want to do programming as more than an occasional hobby, though, realise that the main goal should not be to learn language X, but to use language X as a tool to learn programming. Once you get a feel for how stuff works in the computer world, grasping the programming concepts gets much easier.

Your remark "i thought you were tearing into lfs itself" shows that there still might be quite a way ahead of you, though. You can't just "tear into" LFS and access variables like you imagine, since that would require the actual source code. Without the source code, the only way would be modifying the assembler code directly, but this is already very VERY low-level stuff, and doing anything more than changing some values in memory (like car horsepower) or skipping/redirecting function calls (most commonly used for skipping those nasty copy protection checks) would turn out rather complicated and daunting for even the simplest things.
Even in Visual Basic you wouldn't be able to do that. I understood the logic you were coming from, and that sndskid was used as an example. The only way this would work is if you had the source to the project and could access the sound directly, regardless of syntax/language issues.
So your teacher was able to do this because he had some sort of object named sndskid. At any time in the code with that object you could check: if sndskid.isEnabled() then doSomething(); But since I don't, and won't ever, have the LFS source code, I cannot detect their sound objects and test in that sense.
It was worth mentioning, but hopefully you now understand how it works and its limitations.
Back on topic: can anyone think of a way, using position, direction, heading and angular velocity, to detect _which_ tires are UNDER, NEAR, AT or OVER_LIMIT? I don't think this is even possible, though I am still trying to think outside the box. Perhaps there are other bits of information that LFS has. Android and I have discussed oversteer/understeer using these, and in those specific cases it would be as simple as detecting oversteer and setting both rear tires to OVER_LIMIT (and vice versa for understeer). However, there are situations, like hard braking/accelerating, where the car is not understeering/oversteering but the tires are OVER_LIMIT, and this (as of now) is what I am trying to detect using what LFS gives me; as pointed out by both Android and myself, it may not have a fool-proof solution.
EDIT2: If anyone does find a reasonable way I will be paying close attention here, and if it's shared I'd be extremely happy. Even if the code is in another language, as long as I can understand the math going on and the theory behind it, I would be willing to try it!
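As a thought experiment, here is a minimal Python sketch of the oversteer/understeer half of this, built only from heading and the velocity vector. The function names, the thresholds and the classification rules are all my own assumptions, not anything LFS provides:

```python
import math

def body_slip_angle(heading, vel_x, vel_y):
    """Angle between where the car points and where it actually
    travels, in degrees. heading is in radians; velocity is the
    world-coordinate velocity vector."""
    travel = math.atan2(vel_y, vel_x)
    # Wrap the difference into [-pi, pi] before converting to degrees
    diff = (travel - heading + math.pi) % (2 * math.pi) - math.pi
    return math.degrees(diff)

def classify(slip_deg, steer_deg, threshold=8.0):
    """Crude classification: large body slip suggests the rears are
    gone (oversteer); heavy steering input with little body slip
    suggests the fronts are gone (understeer). Thresholds are
    guesses that would need tuning."""
    if abs(slip_deg) > threshold:
        return "OVER_LIMIT_REAR"   # oversteer: rears past the limit
    if abs(steer_deg) > 2 * threshold and abs(slip_deg) < threshold / 2:
        return "OVER_LIMIT_FRONT"  # understeer: fronts past the limit
    return "UNDER_LIMIT"
```

As the post above says, this only catches the cornering cases; straight-line braking or wheelspin produces no body slip angle and stays invisible to it.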
The only thing that comes to mind (bear with me, I'm still a noob): make it act sort of like an ABS sensor, which senses sudden major decreases in wheel speed. Or it could relate wheel speed to velocity. If I make no sense I'll be quiet.
^ The problem is that we don't have "wheel speed" information. Please have a look at the OutSim/OutGauge documentation so you know what information we can actually access.
Now maybe that's a stupid question, but do you really need that information? I mean, is your AI model already that set in stone that it actually requires the per-wheel grip state (even if it's just three possible states per wheel)?
The only real problem I can see is the one of wheel lockup under braking, but other than that the information available from OutSim and OutGauge should be enough to get the AI moving around a track trying to follow a line, just from a "my car points here but I want it to go there" point of view. If you really need it you could infer the tyre grip limit state from the current slip angle/ratio of a tyre that you can calculate from the car heading vs. velocity vector, steering angle and speedo vs. actual speed ratio. You basically "know" how much traction a tyre can provide (if not you simply measure it - externally) and the LFS<>AI interface then has to "guess" the tyre state by sort-of doing a mini tyre simulation.
Or wait, why not simply take the acceleration? The acceleration lets you calculate how much work the tyres do in total (vs speed and mass of the car), and the current car heading / steering lets you split up this total to the tyres with a bit of math magic. Then you just need to compare the (estimated) work done by the tyre to what you think it can do at max and set the state accordingly (though of course you'll never go beyond max to give you a definite "over limit" state). Or so. Well, I dunno, it's late, I should go to bed.
@AndroidXP: I do believe that the AI needs to know that information about each of the tires, as that is the information that keeps the AI driving at the limit and handling what happens when the AI is over the limit. This, I think, is key to getting the AI to be fast without knowing the exact physical constraints; although I will likely be proven wrong.
As far as getting the AI to drive around the track at 10mph, I am pretty sure this information is not required. And for braking, like you mentioned earlier, just use cars with ABS systems for now, or even turn the brake aid on in LFS. However, that doesn't help when dealing with wheelspin, like my current LX6 issue, and again back to my previous point about cornering at the limit. As for only having four values (UNDER, NEAR, AT and OVER_LIMIT), I think it might need more, certainly not less; this may be debatable at another time, since it might be important to know the actual percentage of the limit that the tire is at. For example, on a 0-100% scale: 0-80% = Under Limit, 80-95% = Near Limit, 95-100% = At Limit, and >100% = Over Limit.
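The percentage-to-state mapping from that example is trivial to sketch. Boundaries are exactly the guessed ones above (Python, hypothetical function name):

```python
def limit_state(pct):
    """Map a tyre's estimated percentage of its grip limit to the
    four discrete states. The boundaries (80/95/100) are the example
    numbers from the post and would need tuning."""
    if pct > 100:
        return "OVER_LIMIT"
    if pct >= 95:
        return "AT_LIMIT"
    if pct >= 80:
        return "NEAR_LIMIT"
    return "UNDER_LIMIT"
```

Keeping the raw percentage around and deriving the states from it gets both worlds: the coarse states for decisions, the fine number for learning.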
@logitekg25: Yea, no wheel speed available; I am out of ideas at this moment but I am sure something will come up. As said to AndroidXP, if I wanted to have the car aim for a speed of ~10 to 15mph I believe I could get it to navigate a track, however that is obviously not very competitive...
On the other hand, if a set was made that limited a car to 15mph because of gearing, then it might be quite interesting, and thus competitive in a very _slow_ way. (Let the driver with the shortest overall line win.)
About the only thing that might work is OutSim. If you can detect the vibration that a rumblestrip or something would give, would that work somehow? Combine that with some type of skid detection from OutGauge (direction vs angular velocity, right?) and that's a start towards testing some kind of limit. Other than that, I dunno.
What about measuring how the car reacts to the control inputs?
If braking, the ideal braking point is at the largest negative delta speed, and the maximum acceleration point is at the maximum positive delta speed. If you lose grip, these would decrease dramatically.
I don't think you need the wheel lock information, only to know the problem itself (like understeer, oversteer, brake lock, wheelspin). With the car direction and the car's movement vector, plus the controller state, you could identify the problem, I think.
Could you post a link (or send it to my e-mail: kelemenlajos'snail'google.com) to the OutSim documentation for LFS? I didn't find any.
I think you're a bit pessimistic regarding the reachable speed. Obviously this isn't a one-size-fits-all solution for sims other than LFS, but we do know that the tyres' optimal slip angle is somewhere between 6 and 10° and the optimal slip ratio is about 5 to 10%. All the AI would need to do is keep the car within those slip angles (rear tyres' slip angle = car heading vs. velocity vector, front tyres' slip angle = rear tyres' slip angle + steering) and slip ratios (which we admittedly only know more or less accurately for the driven wheels) and go as fast as possible within those constraints.
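A rough Python sketch of that slip-angle bookkeeping, under the single-track assumption stated above that the front slip angle is simply the rear slip angle plus steering (all names and the 10° upper bound are assumptions):

```python
def slip_angles(heading_deg, velocity_dir_deg, steer_deg):
    """Rear slip angle = car heading vs. velocity vector; front slip
    angle adds the steering angle. A single-track approximation that
    ignores individual wheel geometry. All angles in degrees."""
    rear = ((velocity_dir_deg - heading_deg + 180) % 360) - 180
    front = rear + steer_deg
    return front, rear

def within_optimal(front, rear, hi=10.0):
    """True while both axles stay at or below the rough 10 degree
    upper end of the 6-10 degree optimal window quoted above."""
    return abs(front) <= hi and abs(rear) <= hi
```

The AI's speed controller would then push harder whenever `within_optimal` holds and ease off as either axle approaches the bound.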
Taking a corner could work like this roughly:
AI accelerates on straight towards "rough brakepoint" (marked by placed track node)
AI reaches "rough brakepoint," starts looking for brake reference points (placed by hand near visual markers)
AI finds brake reference points in satisfactory constellation (distance, angle, etc.)
AI's next target is now "turn in point"
AI starts braking
Brake error correction routines kick in as necessary
- This would normally have anti-lockup
- Tries to keep the car pointed at / going towards the target while braking
Turn in reference point constellation is reached
AI releases brakes
AI turns into corner, towards "apex point"
Over-/understeer correction routines kick in as necessary
- If the desired angular momentum (stored as metadata, or learned) cannot be reached within front tyre slip ratio constraints, perform anti-understeer actions depending on current car state.
- If the desired angular momentum is exceeded or the slip angle of tyres is exceeded, perform anti-oversteer actions
Apex reference point constellation is reached
AI's next target is now "corner exit"
AI starts accelerating
Oversteer / anti-slip correction routines keep the car at the limit / from going full throttle
Corner exit reference point constellation is reached
AI's next target is now "rough brakepoint" of next corner
Of course there might be intermediate navigation points put somewhere to guide the car around the track for slight bends that don't require braking.
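The cornering procedure above is essentially a small state machine. A toy Python sketch (the state and event names are invented to mirror the list; the real transitions would be driven by the reference-point constellations):

```python
# A toy state machine following the corner procedure outlined above.
# Sensor/actuator hookup is deliberately absent; events would come
# from detecting reference-point constellations via OutSim.

CORNER_STATES = [
    "APPROACH",  # accelerating towards the rough brakepoint
    "BRAKING",   # brake reference constellation found, braking
    "TURN_IN",   # brakes released, turning towards the apex
    "EXIT",      # apex reached, accelerating to corner exit
]

def next_state(state, event):
    """Advance through the corner on the events named in the list;
    unknown (state, event) pairs leave the state unchanged."""
    transitions = {
        ("APPROACH", "brake_refs_found"): "BRAKING",
        ("BRAKING", "turn_in_refs_reached"): "TURN_IN",
        ("TURN_IN", "apex_refs_reached"): "EXIT",
        ("EXIT", "exit_refs_reached"): "APPROACH",  # next corner
    }
    return transitions.get((state, event), state)
```

The correction routines (anti-lockup, anti-under/oversteer, anti-slip) would then run as continuous sub-behaviours inside the BRAKING, TURN_IN and EXIT states rather than as states of their own.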
However, nowhere do you really need the tyre state itself - keeping it at the limit would work like:
Apex reached, AI wants to go full throttle
Grip sensor reports 6° slip angle and 2% slip ratio, so the tyre can't really give much more, throttle reduced to 25%
time passes, AI wants to go full throttle
Grip sensor reports 4° slip angle and 2% slip ratio, throttle reduced to 33%
time passes, AI wants to go full throttle
Grip sensor reports 3° slip angle and 3% slip ratio, throttle reduced to 50%
time passes, AI wants to go full throttle
Grip sensor reports 3° slip angle and 5% slip ratio, throttle reduced to 75%
time passes, AI wants to go full throttle
Grip sensor reports 6° slip angle and 10% slip ratio, the last throttle input was too much, too soon, throttle reduced to 50%
The throttle reduction could then be restrained further by also looking at the past actions and restricting the maximum throttle delta so it can't go to 100% immediately, even if it wanted.
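A minimal Python sketch of this grip-sensor-driven throttle cap, including the restricted throttle delta. The slip-to-throttle mapping is invented; it only mimics the example numbers (e.g. around 6° of slip capping the throttle near 25%):

```python
def throttle_cap(slip_angle_deg, slip_ratio, prev_throttle,
                 max_delta=0.25):
    """Guess a throttle cap from grip sensor readings, then clamp how
    fast the throttle may rise. The mapping is a placeholder for the
    learned/tuned guess described above."""
    # The closer we are to the rough optimum (6 deg / 10% slip ratio),
    # the less headroom we assume is left.
    angle_load = min(abs(slip_angle_deg) / 6.0, 1.0)
    ratio_load = min(abs(slip_ratio) / 0.10, 1.0)
    headroom = 1.0 - max(angle_load, ratio_load)
    target = 0.25 + 0.75 * headroom  # floor of 25%, as in the example
    # Restrict the throttle delta so it can't jump straight to 100%
    return min(target, prev_throttle + max_delta)
```

The learning part would then adjust the 6.0 and 0.10 reference values up or down depending on how often the tyres are pushed past the threshold.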
So by knowing the current slip angles you can prevent the throttle from being unreasonably high (that's the "guess" you do on how much throttle can be applied on a certain slip angle/ratio without going overboard), while at the same time you can keep learning by increasing or reducing the guessed value depending on how often you overstepped your limit (which is passing a certain threshold value of slip angle/ratio that causes a little more aggressive fix-the-situation action to be taken).
Unfortunately, it doesn't. Have a look at the G-meter while braking on the limit and while locking your wheels. The difference will be minimal - far too unreliable to draw conclusions from. Same goes for wheelspin.
It's in your LFS directory in the "docs" folder, a file named InSim.txt (contains info about all LFS interfaces).
Well Android, you certainly brought up an area I hadn't been able to think of, though I don't know how one would go about implementing it. If it could be implemented it could _possibly_ give me some 'idea' of the tire states that I am looking for, although I see some issues with attempting to implement this:
First, we don't know wheel speed; although, like you stated, for driven wheels we can take a delayed guess, since the speedometer has a built-in delay. This could have an effect where the AI is driving along, hitting the throttle just fine, and then thinks "I'm spinning my drive wheels" even though the issue has already corrected itself or traction was regained.
Time 0: Car starts accelerating from 0 mph. (Throttle Input: 100%)
Time passes: Car hits some grass while accelerating, causing excessive wheelspin. (Throttle Input: 100%)
Time passes: Car gets back on tarmac and the wheelspin sorts itself out. (Throttle Input: 100%)
Time passes: A sensor detects that the wheel speed is greater than the car speed. The AI uses this to reduce throttle input. (Throttle Input: 70%)
Time passes: Eventually the sensor tells the AI everything matches great. (Throttle Input: 100%)
So in this example the AI behaves a little wonky when it hits a small patch of grass. Off the top of my head I can't think of many other places where this comes into play, although I have a feeling that, more often than not, the delayed speedometer reading would bite me. There would need to be information in the GripSensor that shouldn't be there: how much throttle/braking is going on. This would need to be there so the GripSensor can report accurately while slowing down/speeding up, since the speedometer is delayed.
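The delayed-speedometer comparison could be sketched like this (Python; the class name, the 5-sample delay and the 10% spin threshold are all assumptions that would need measuring against LFS):

```python
from collections import deque

class WheelspinSensor:
    """Compare the (delayed) speedo reading against ground speed to
    guess drive-wheel spin. The delay buffer models the speedometer
    lag discussed above by comparing the speedo against the ground
    speed from delay_samples ago."""

    def __init__(self, delay_samples=5, threshold=1.10):
        self.buffer = deque(maxlen=delay_samples)
        self.threshold = threshold

    def update(self, speedo_speed, ground_speed):
        """Feed one sample per tick; returns True on suspected spin."""
        self.buffer.append(ground_speed)
        if len(self.buffer) < self.buffer.maxlen:
            return False  # not enough history yet
        delayed_ground = self.buffer[0]
        if delayed_ground < 1.0:  # avoid divide-by-near-zero at rest
            return False
        return speedo_speed / delayed_ground > self.threshold
```

Note this still shows the grass-patch wonkiness described above: the spin flag only clears once the buffered history has flushed through.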
This said, I might be getting ahead of myself. The first thing required is to get the AI to go around the track. I wish I hadn't messed up my copy of LFS right now, as I want to run a test in the XFG to see what the 'optimum' speed is to drive Fernbay Club WITHOUT going over the limits. Which is to say, at what speed would you need only steering input and never have to worry about hitting the brakes? Once a car goes around a track successfully I think the project will stand in a much different position, because even then I can start adding my tests via reference points.
Ohhh, am I stupid x2, because I didn't get that OutSim is mainly for controlling motion simulators.
I didn't look in it because its name was InSim.
I think I'm not understanding something; I didn't write "G-meter". (Maybe it would be better to write in German.)
If you press the pedal, then the more you press it the more the car should accelerate. If you lose grip, the acceleration wouldn't grow at the same rate as before. But for this you should factor in the position and track data (uphill/downhill). The same goes for braking. But I think for these you should store the achieved data for different situations (like braking uphill being more efficient than downhill).
Hmm, you bring up a good point about the hills, though my AI (as of now) is developed with little to no current information about hills in LFS. Think of it as flattening the track and just knowing where you should be - this could bring up issues in the future; bridges and other things, but I think it may be negligible for now.
Forgive me, there have been some very thorough posts and it's quite a long thread, so I only skimmed the first few pages. Interesting you mention this, because I have been thinking, or rather 'day-dreaming', about making AI cars not only drive like a human but model top drivers, such as Fangio, Clark, Schumacher etc., to see how you compare against them.
Also, last semester I did an AI module which was very interesting. Two dominant programming methods are artificial Neural Networks and Evolutionary Algorithms. The former I created for coursework using the Java Neural Network Simulator (JNNS); not only are they extremely easy to create and train, but they produce scarily 'humanoid' output.
I would strongly recommend you create a base Neural Network that learns how to drive over very many training examples (epochs), rather than picking out a corner-by-corner algorithm, which is doomed to fail IMO. Using a NN, the drivers would be able to use their experience to race against you, which is the key to success in racing.
What's more, once you have a trained NN it is extremely fast to produce output, and each one will be unique. So, thinking aloud, you could even pick out those with more/less experience for pro/novice difficulties, or those which show driving traits similar to the pros. You may even find that by presenting them with certain driving conditions they learn to drive with a certain style. Not only that, but you could pit Schumacher against Fangio or whatever.
Another important reason, I would say, is that the NN must learn from your driving, so if you tend to cut them up or move around in the braking area, they will become aware of it and take the necessary action before the event occurs.
edit: Seems the idea has already been suggested and you dismissed it. All I can say is I wish you luck and that using the ANN route would provide vastly more fruitful results from your labour.
Hmm. I should probably do some reading on NNs then; I've heard of them, including one mention already in this thread, though my objective is not an AI that 'learns'. Teaching the AI is not the idea here, and I don't see it fitting very well; however, since you say very human-like output is achieved, I should at least give it a read. Hopefully, if it is useful, it fits into the design of my AI already, since I've spent some time on that.
Update: The AI now has true access to the gauge sensor. When waiting at the grid it will shift up into first, and step on the throttle. Also, it can now stay at a steady speed; which I have down to 10mph since I shouldn't exceed any limits there and not need to worry about 'driving at the limit'. So now the next step is to get the physical sensor and visual sensor to work for the driver to drive to a destination; and then it will be down to making a layout around FE1 so that I can try getting a car to drive around the track.
I can see that you're an open-minded guy, but I'm still sort of struggling with your goals. Myself, I would like to develop AI that can drive on the track as well as the real deal and/or without you knowing they aren't human. Just like the Turing test: if they can be perceived as human through the game, then in this context they perform as well as humans would. That would open the doors to many possibilities, from very clean racing all the time, every time, to being able to race against the legends. If we are on the same path then I'd like to continue this conversation.
I am certainly hopeful that it becomes 'human-like'. I assume the reason a lot of people have trouble understanding my intentions is that they may have changed (though I am not sure they did), and there are probably several layers of intentions that conflict with each other. I am guessing here, as it seems no one understands what my intention is. (This includes the possibility that perhaps I don't understand my intentions well enough to explain them.)
That said, I do know my intention is to remove some things that racing AI have done for years; in my mind, cheating. This is acceptable for games, but not for simulations. If I could make the AI behave believably, like a person, within the constraints I set out for myself, then that would be amazing. I would call 'competitiveness' a sub-goal though, especially when it comes to the car judging traffic and handling scenarios where a human can just change input and everything is okay; there are too many special cases.
That said, I am trying to get the AI to work within human constraints. We don't have super-fast reaction skills, while most AI algorithms (including that of LFS) do. We can't spin the steering wheel from 90° to 180° in 0.0001 seconds, whereas some AI algorithms can, and do. Things like this will come from an interface the AI needs to go through to input the controls:
Here is something like what my idea is...
Simulation -> Reference Points & Information -> AIWorld -> AI Sensors -> AIDriver -> Decision Layers -> Desired Control Output -> Reaction Time Checker -> Real Control Output -> Simulation -> loop
The decision layers and reaction time checker are the only things left untouched at this moment. Everything else has a structure within my simulation, even if it is not fully linked up to LFS yet. The reaction time checker will behave a bit like this (in case you missed that post somewhere here):
Scenario A: RPM 4500, driver wants to shift up; .25 seconds later the driver shifts up (simulating an H-shifter, where the driver takes a hand from the steering wheel to the shifter; the actual time would need to be tested against human limits). In this case the shift actually happened around ~4800 RPM, depending on acceleration.
Scenario B: RPM 3900, driver prepares to shift (signals that the arm is moving toward the shifter, which takes .25 seconds to complete). RPM hits 4500, driver shifts, which takes about .05 seconds since the hand is already on the shifter. (Again, times need to be worked out.)
This scenario is for H-shifters; cars with paddles obviously don't need the hand to move, but I wanted to point out the level of detail I want to put into this layer of the AI control/reaction-time system. Some sort of "prepare for twitch steering" could take place as well: when the AI knows it is on the verge of the limits it could use that to successfully catch a slide. This comes with experience for a human, knowing that the car may be upset during a particular corner and preparing for it. However, it takes concentration to do this, and I want the AI to have some form of emotion gauge to wear it down over time. This is also the area where AI difficulty could be changed: more or less ability to concentrate / judge distances... Like I said, I still need to work on the game plan here.
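The two shifting scenarios could be modelled like this (Python; the class name is invented, and the 0.25 s / 0.05 s figures are the placeholder times from the post):

```python
class ShiftArm:
    """Model of the H-shifter hand movement from the scenarios above:
    shifting cold costs the full hand-travel time (Scenario A), while
    preparing the hand in advance leaves only a short shift action
    (Scenario B). Times are placeholders to be tuned to human limits."""

    MOVE_TIME = 0.25   # hand travels from wheel to shifter
    SHIFT_TIME = 0.05  # shift with the hand already on the shifter

    def __init__(self):
        self.hand_ready_at = None  # time the hand reaches the shifter

    def prepare(self, now):
        """Scenario B: start moving the hand before the shift point."""
        if self.hand_ready_at is None:
            self.hand_ready_at = now + self.MOVE_TIME

    def shift(self, now):
        """Return the time the gear change actually completes."""
        if self.hand_ready_at is not None and now >= self.hand_ready_at:
            done = now + self.SHIFT_TIME  # Scenario B: hand is ready
        else:
            done = now + self.MOVE_TIME   # Scenario A: full delay
        self.hand_ready_at = None         # hand returns to the wheel
        return done
```

The "prepare for twitch steering" idea would follow the same pattern: a pre-arming call that shortens the latency of the subsequent correction.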