Dygear: It certainly would be useful (and should be shown visually) for the path to get less certain as the prediction runs into the future. That shouldn't be too hard to add, so I'm going to give it a whirl.

EDIT: Added screenshots.

Prediction Path with Uncertainty 1
Prediction Path with Uncertainty 2
Prediction Path with Uncertainty 3

The AI driver is not yet driving. I'm still using a replay of me driving a few laps around FE1, looping over and over, to pull all the information from InSim, OutSim and OutGauge. I have not yet remade the virtual controller interface and have not attempted to make a driver get into a car. I am delaying this for as long as I can so that the information gathered is useful, somewhat accurate, and ready to be used by the driver.
Does that mean that you're teaching it your driving style? What would happen if you put it on with the WR Replay?
No, it isn't actually learning anything. I'm going to avoid the complexity of AI learning whenever I can help it, just so I don't need to keep tweaking and hoping. So unfortunately nothing different would happen by using the WR replay.

Actually there is no concept of a driver in the AIRS project at this moment. I have the sensors, and a memory unit that the driver will use to make his decisions, but no driver at this time.

I'm currently trying to decide whether I should make the next video explanation of the project or attempt using the physics sensor to calculate the current, and predict future, grip/slip angles and oversteer/understeer situations. I would really like to use the current slip angle to determine (guess) the possible cause of oversteer/understeer situations; however, since I have no steering input available, I am not sure it is possible to guess the causes.

Understeer: Input overload (Steering too hard)
Understeer: Weight transfer (Braking too hard)
Oversteer: Input overload (Too much throttle, Turning too hard)
Oversteer: Weight transfer (Too much braking / Too little throttle)

There are probably more driver-caused reasons for these situations that I am not thinking of at this moment, but maybe I can eliminate all but steering, and fall back on steering if all others fail. (Ex: if the car is shown in an oversteer state and the throttle is not pressed (much), and the brake is not pressed (much), then the oversteer must be caused by turning too hard/sharply.)

Not sure how to determine under or over steer without knowing the steering angle though!
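For what it's worth, the elimination idea above could be sketched roughly like this. This is a hypothetical C++ fragment; the 20% pedal threshold and the function name are placeholders of mine, not anything that exists in AIRS:

```cpp
#include <string>

// Hypothetical elimination heuristic: if neither pedal is pressed much
// during an oversteer state, blame steering. Inputs are pedal travel in
// the range 0..1; the threshold value is an illustrative guess.
std::string guessOversteerCause(float throttle, float brake) {
    const float kPedalThreshold = 0.2f;   // 20% pedal travel, a guess
    if (throttle > kPedalThreshold) return "too much throttle";
    if (brake > kPedalThreshold)    return "weight transfer from braking";
    return "turning too hard";            // all other inputs eliminated
}
```

The same shape would work for understeer, with the brake branch mapping to the weight-transfer case listed above.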

//-------------------------------------------------------- A little later

Although I don't yet know how to use this to estimate the current oversteer/understeer (it may not be possible without knowing the steering input), I do have the prediction unit running an analysis of the slip angle to predict whether it will get worse or better, given the slip angle of the previous few memory states. By slip angle, I mean the angle between the direction the car points and the direction it is actually moving. So if the car were reversing, the angle should be 180°.

Prediction with slip angle

Red means a slip angle near or over 15° (which will need adjustment)
Green means no slip, or a slip angle nearing 0°
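The slip angle as defined here (the angle between where the car points and where it moves) falls out of the two direction vectors directly. A minimal C++ sketch, assuming a flat 2D ground plane and a normalized heading vector; the names are mine:

```cpp
#include <cmath>

// Slip angle: the angle between the direction the car points (heading)
// and the direction it is actually moving (velocity), in the 2D ground
// plane. Returns degrees; a reversing car gives 180.
float slipAngleDegrees(float headingX, float headingZ,
                       float velocityX, float velocityZ) {
    float speed = std::sqrt(velocityX * velocityX + velocityZ * velocityZ);
    if (speed < 0.001f) return 0.0f;      // stationary: no meaningful slip
    float dot = (headingX * velocityX + headingZ * velocityZ) / speed;
    // heading assumed normalized; clamp against floating point overshoot
    if (dot > 1.0f) dot = 1.0f;
    if (dot < -1.0f) dot = -1.0f;
    return std::acos(dot) * 180.0f / 3.14159265358979f;
}
```

A car pointing down -Z while moving along +Z (reversing) reports 180°, and one moving exactly where it points reports 0°, matching the red/green coloring above.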
I am sort of battling over how I want the artificial driver to 'see' the racing line. On one hand, I've been arguing to make him see it based completely on triangulating his position from what the visual sensor gives him. On the other hand, while that is the more 'human thought process' approach (something I've been aiming at), it also adds some extreme complexity and processing effort to get that information.

I have been considering GIVING the artificial driver the entire racing line, in world coordinates. Something I earlier argued against, probably more heavily than I should have. But this does not mean I will GIVE the artificial driver his position in world coordinates; that will still come via the visual sensor. So, basically, instead of finding some complex way of finding the racing line without world coordinates, I think I will allow the artificial driver to know the world coordinates of all the reference points.

By knowing the world coordinates of each reference point, the driver can then estimate his position in the world using the directions and distances provided by those reference points within view. This will still be an estimation, and with a sloppy visual sensor it should produce sloppy results. This should still achieve the effect I am going for, although a little differently than my initial thought. Again, I am not giving the artificial driver his exact position in the world, but making him guess based on what he sees.
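That estimation step could be sketched as follows, under the assumption that each sighting's direction is available in world space (step backwards from each reference point along the sighted direction by the estimated distance, then average the candidates). The struct and function names here are illustrative only:

```cpp
#include <cstddef>

struct Vec3 { float x, y, z; };

// One sighting: a reference point's known world position plus the (noisy)
// direction and distance the visual sensor reports. The direction is
// assumed to be a normalized world-space vector for this sketch.
struct Sighting { Vec3 worldPoint; Vec3 worldDirection; float distance; };

// Estimate the driver's world position by stepping backwards from each
// visible reference point along its sighting direction, then averaging.
// With a sloppy sensor each candidate differs, so the average wobbles.
Vec3 estimateWorldPosition(const Sighting* sightings, std::size_t count) {
    Vec3 sum = {0.0f, 0.0f, 0.0f};
    for (std::size_t i = 0; i < count; ++i) {
        const Sighting& s = sightings[i];
        sum.x += s.worldPoint.x - s.worldDirection.x * s.distance;
        sum.y += s.worldPoint.y - s.worldDirection.y * s.distance;
        sum.z += s.worldPoint.z - s.worldDirection.z * s.distance;
    }
    Vec3 avg = { sum.x / count, sum.y / count, sum.z / count };
    return avg;
}
```

A single perfect sighting of a cone at world(50, 0, 0), seen along world direction (1, 0, 0) at 20 meters, places the driver at world(30, 0, 0); injecting error into the distance or direction smears that estimate accordingly.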

I think after that the only thing remaining is to put the driver into the car and start making him drive, hopefully intelligently!
I've finally gathered all the information I need from LFS (cars, track environment, and current car/driver information) for all the sensors (as currently defined) for the artificial driver.

Currently the driver can detect/retrieve this information:
  • Distance and direction to reference points that are visible from the driver's view. (An estimation from the car's position; a better estimation could probably be made using the LFS camera position packet and forcing cockpit view, which I may do in the future.)
  • Forces of motion felt by the driver, relative to the driver.
  • Direction the car is physically pointing and moving.
I've gathered much more information than this, including car dimensions, torqueRPM, powerRPM, idleRPM, limiterRPM, numberOfForwardGears, speedometer reading, tachometer reading, etc. However, the artificial driver does not yet have access to any of that data.


I have just started hooking up a virtual controller using PPJoy. I suspect this will take the rest of this weekend to get working and calibrated, and maybe next weekend I can start working on the actual driver. I still have not allowed the driver to estimate his position in world space, but I suspect that is the direction I will take, as I can't find any way it goes against the ideas behind the project, and doing it any other way would be extremely computationally heavy for the same sort of result.

I should make a video of the current state of the sensors and prediction unit; it is pretty impressive how it uses the Visual and Physics sensors to predict the car's slip angle and path of motion for the next second.


I had some trouble setting up PPJoy, and it was a massive pain to get working the first time long ago. While looking through what I needed to do to get it working, I found vJoy. Much better; still not well documented, but I now have the ability to set the steering, throttle, brake and clutch input through vJoy to LiveForSpeed. Next up is a few essential buttons, ignition being the primary one. I might actually get to start on the driver today after all!

Took longer than expected on the buttons, but the Virtual Controller is now hooked up to shiftUp/shiftDown, ignition, and h-shifter. Car reset might get on the list as well.
I have finally started working on the driver. Currently there isn't much driving, however; the driver will "enter a car". Meaning, as soon as AIRS connects to LiveForSpeed (it currently assumes there is a player car to use), the driver is placed into an "EnterCarAction". This action will stop the car if it was already moving, which it almost always won't be. The artificial driver will then put the car into neutral, hold the brake at 25%, and let all other controls rest.

I don't have time this weekend to get the driver doing more than this, and at that I still want to design the action system a little better. I am hoping to allow the driver to multi-task with small, reusable actions. The current actions available are: "EnterCarAction", "ShiftToGearAction", "TurnOnCarAction".

Taking a break now to work on the video I've promised regarding each of the sensors and the prediction unit.
Can you hear me now?
Is anyone still reading? I will continue writing regardless, if only for my own benefit of tracking the progress. But knowing others are reading will make me feel less awful about making post after post as the only one talking.
Absolutely! As posted above, a lot of people will be reading but don't like messing up the thread and, like me, have nothing technical to add. It really is interesting to see this unfold and the way you are working things out. Keep at it!
I have some quiet optimism, emphasis on quiet.
I worry too much sometimes.

I do completely understand that I am dealing with something very technical, and making my best attempt at explaining it so it is understandable to a not-so-technical mind. There are times, as any following reader has witnessed, where the technical bits get to be more than I can handle, and usually I stumble along and it gets clear again.

That said, I try to answer any questions and enjoy doing so as it makes me think about the process, and sometimes simply explaining something differently can spark new ideas or improvements.
I'm still reading your progress reports, thoughts, worries and all.
Please keep them coming.
What you are trying to achieve is way beyond my skills and knowledge, but reading while things are happening is very interesting for me.
I check this topic out every time there is an update
But as most people say here, it's a field I'm not too familiar with, and therefore I'd rather sit and listen than try to come up with something clever

But please continue on, and maybe the rest of us will learn something every now and then
Reading with lots of interest every update. Keep going.
Maybe this will spark a new breed of AI
Quote from blackbird04217 :I do completely understand that I am dealing with something very technical, and making my best attempts at explaining it so it is understandable by a not-so-technical mind.

I love the technical details. I like the insight into a world that I'm not interacting with yet. Exposing myself to the nitty-gritty is massively useful from an intellectual point of view, as should you see it again, you now have a frame of reference to go back to.

Don't forget that quite a lot of us are programmers ourselves, so we have the technical experience to understand the scope of what you are dealing with. Don't be afraid of losing us.

Okay, thanks for the support guys! I'm going to take the chance to talk a little technical as I've found a great reason for writing in this thread is to think through the problems. Usually if I find a way to explain it to someone then I can make it happen. In this case I'm a little stuck on a pretty difficult problem that I know must be possible.

*Warning: this post might be a little technical. Hopefully it is explained well, but if you get lost, don't worry. I wrote this post while coming up with the algorithm for what I'm trying to do, meaning it has a lot of unfinished/unpolished raw thoughts as I worked out the problem.

The artificial driver can see two reference points, A world(50, 0, 0) and B world(100, 0, 0).
The driver knows the distance and direction to each of these reference points... *note 1* ...relative to the driver's point of view. A cone directly ahead would be in the direction (0, 0, -1) regardless of the world direction the driver is facing. In this example, I am going to place the driver at world(30, 0, 40), looking straight at reference point B.

Therefore the driver knows that point B is in the direction (0, 0, -1) at a distance of Magnitude(100-30, 0-0, 0-40) = 80.6 meters. Point A is a little more difficult, as subtracting the values like that won't give the direction relative to the driver, which is how I discovered the issue described in note 1. I had simply done driverPosition - conePosition to get direction/distance, but that direction remained in world space. Doing this for point B was simple because I defined the driver to be looking straight at that point, knowing straight ahead is (0, 0, -1).

I now need to figure out how to get the direction from the driver to point A relative to the driver. As a complete and utter guess, based on estimating distances from those points on graph paper, A seems to be just under 40 meters in front of the driver and about 25 meters to the right. I will continue this post with these estimated values, so don't be disappointed if the math doesn't come out exactly to the expected driver position, world(30, 0, 40).

The distance to point A (from my estimation) is about ~47.2 meters. The actual world space distance (to check my work) is ~44.7 meters, so my estimation is within 3 meters. Not bad for graph paper at 10m per square!

///////////////////////////////////// Therefore the driver knows the following:

Reference Point A is 47.2 meters away in the direction driver(0.53, 0.0, -0.85), which places it at driver(25, 0, -40)
Reference Point B is 80.6 meters away in the direction driver(0, 0, -1), which places it at driver(0, 0, -80.6)

The driver will be given the world space position of each reference point, as he is trying to compute his own world space position.

Reference Point A is world(50, 0, 0)
Reference Point B is world(100, 0, 0)


Given only that information the driver needs to figure out he is actually at world(30, 0, 40)....

If I take driverB - driverA, that gives me the vector from A to B in driver space: driver(-25, 0, -40.6). He is trying to line that up with the world space vector from A to B, which is world(50, 0, 0).

I will admit to being slightly lost and currently rambling about numbers and positions that are known, trying to find a way to use some other mathematical function to get from what the driver knows to where the driver IS.

My line of thought is something like this: we have a triangle. In world space we see it as reference point A, to reference point B, to driver: world(50, 0, 0) to world(100, 0, 0) to world(30, 0, 40). The driver also sees a triangle, but it exists in driver space as follows: driver(25, 0, -40) to driver(0, 0, -80.6) to driver(0, 0, 0). Since the driver has the vector from reference point A to reference point B in both world and driver space, he should be able to figure out his position in world space. Let's see, trigonometry....

Getting the angle between worldAtoB and driverAtoB is going to be easy and critical. The question then is what to do with that angle, and my brain isn't connecting the dots.

With just a moment's (20 minute) break I figured it out. So the angle between worldAtoB and driverAtoB is 122 degrees, which makes perfect sense given my diagram. If we then find the vector that is 122 degrees from the vector (driverB to driver) and subtract it from worldB, the result will be the driver's position in world space. I didn't quite follow what I just said, but...

driverB to driver = (0, 0, 80.6). It is not negative here because the direction is not driver to point B in driver space; it is from point B to the driver in driver space.

The vector which is 122 degrees from this is: (0 * cos(122) + 80.6 * sin(122), 0, -0 * sin(122) + 80.6 * cos(122))

cos(122) = -0.530, sin(122) = 0.848

which is: (68.3, 0, -42.7)
This added to worldB(0, 0, 100) should give us... (something failed in my logic...) (68.3, 0, 57.3), which is obviously not where we expect the driver. I suspect something is only slightly wrong with my rotation vector...

I've decided to try using driverA to driver instead of driverB, only because driverB has a special case above where x is 0. The following also only solves the position in 2D space, so I may need to just bite the bullet and multiply the vector from driverA to driver by the driver's local coordinate matrix... (grabbed directly from LFS, and something I was trying to avoid, but I currently see no way to avoid it and accurately compute this while including the dimension of height). Ignoring height...

driverA to driver = (-25, 0, -40) (Again negative of driverA since it points back to the driver)
(-25 * cos(122) + -40 * sin(122), 0, -(-25) * sin(122) + -40 * cos(122))
(13.25 + -33.92, 0, 21.2 + 21.2) = (-20.67, 0, 42.4)

(-20.67, 0, 42.4) + worldA(50, 0, 0) = (29.33, 0, 42.4) Which is within expectations given my estimation above of the driverToA vector in driver space...
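The successful A case above is just a 2D rotation in the XZ plane. As a sanity check, this small sketch (function name and signature are mine) applies the same rotation formula and reproduces the hand-computed numbers:

```cpp
#include <cmath>

// 2D rotation in the XZ plane, matching the hand calculation above:
// x' = x*cos(a) + z*sin(a),  z' = -x*sin(a) + z*cos(a).
// Angle in degrees, for consistency with the post.
void rotateXZ(float x, float z, float degrees, float& outX, float& outZ) {
    const float rad = degrees * 3.14159265358979f / 180.0f;
    outX =  x * std::cos(rad) + z * std::sin(rad);
    outZ = -x * std::sin(rad) + z * std::cos(rad);
}
```

Rotating (-25, -40) by 122 degrees yields roughly (-20.67, 42.4), the same offset that, added to worldA(50, 0, 0), lands near the expected driver position.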

But now I'm a little confused because that didn't work for driverB to driver. I believe it has something to do with the stupidity of -Z being the forward direction; however, negating the Z in driverA to driver worked flawlessly. If I instead take driver to driverB and run it through, then add it, I get (-68.3, 0, 42.7); adding that to worldB(100, 0, 0) gives (31.8, 0, 42.7), which is again within the acceptable range of error due to my estimations. I am a little unsure why the vector (driverA to driver) works but the vector (driverB to driver) needs to be negated; I can only assume I didn't negate a Z when I should have.

I may come back to solving this but, as said in the notes, I may just leave it in world space, still with estimated world space directions and distances, as it would be a severe pain to compute in the third dimension... Ultimately I'd like the driver to have all information in driver space, so I may give it an attempt at least.

Note 1: This is where I learned that although the driver currently only knows the distance and direction, he knows the direction in world space. That means finding his world position is just a matter of taking the reference point position and moving backwards along that direction by the estimated distance. Initially I planned that the driver would only have knowledge of the direction in driver space.

Meaning if the driver saw a cone 50 meters in front of him, regardless of the direction he is facing, the direction would be (0, 0, -1) and the distance 50m. And if he saw a cone 25 meters to the right and 25 meters forward, he would know the direction as (~0.707, 0, -0.707) and a distance of ~35m.

I'm not sure this matters in the big picture, but I am going to attempt to figure it out; if it breaks the prediction unit, I may leave it as it is. Back to how I would compute it if the direction were in the right space...

Note 2: I did manage to get these into driver space and, as I predicted, this breaks the prediction unit completely. Worse than I was expecting: it was obvious to me that the prediction unit would then visualize the car's movement always along (nearly) the world Z, since the visualization is rendered in world space, but I didn't expect it to break the reported speed. I may change this up a little so the prediction unit runs using the "estimated world space positions" that the visual sensor uses. That means storing this estimated position in the driver's memory unit, which is probably an overall optimization because it will be a bit computational...

Note 3: Sorry this one got really technical. Maybe someone sees the mistake I made with driverB to driver, but this post was written as I figured out, by hand and thought, how to compute the driver's world position given only driver-space information and two world points.
Also definitely keeping an eye on this... used to post, then realized I wasn't adding anything

but keep at it!
Wow, very very very impressive.
Keep up the good work and Scawen... keep an eye on this guy's work.
Awesome work, Tim! It's interesting to read about all those details, but to actually see them in action is very nice indeed!

I also like how you got a female voice for the video, great job.

Now to get that driver to drive...
Still at it after five years, I see.

I think what you're doing is pretty cool. I've been doing this professionally now for about 14 years including an AI system or two, and must say that it's pretty rare to see somebody doing what you're doing. I've written an AI system too, and you probably could not have picked a more difficult challenge. AI is hard and judging by your video you're doing a great job so far.

I haven't read the whole thread, but I seem to remember when you were starting out on this. I'm looking through post 316 from last Wednesday, and it looks like you're using trig to solve everything instead of using rotation matrices and vectors. Using those, you can solve everything in 3D, and a lot of the stuff it looks like you're trying to do may become a lot easier. Have you looked much into these yet?

There are a few basic operations that can help. Forgive me if you know this already, but with all the sines and cosines I see in post 316, I'm thinking maybe this might be new for you?

If you want to compute the angle between two 3D vectors, you can just compute the dot product like this:

dotProduct = Vector1.x * Vector2.x + Vector1.y * Vector2.y + Vector1.z * Vector2.z

This gives you the cosine of the angle between the two vectors (as long as both have length 1). For your purposes this may be enough if you're doing steering and so on based on the angles. The actual angle in degrees or radians may not be important to you. If it is, you can just do an arc cosine operation to get it in radians:

angleRadians = arcCos(dotProduct)

Not sure what language you're using, but it will probably have an arc-cosine function to do this somewhere if it's doing trig. If not, you can compute the arc-cosine using other trig functions. Will leave that up to you to Google.
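Put together, the two steps look something like this (a small C++ sketch; it assumes both vectors are already normalized to length 1, and the clamp is my own guard against floating point overshoot):

```cpp
#include <cmath>

// Angle between two 3D vectors via the dot product, as described above.
// Both vectors must be normalized (length 1) for the dot product to be
// the cosine of the angle; the clamp protects acos from values that
// drift just past +/-1 due to floating point error.
float angleRadians(float x1, float y1, float z1,
                   float x2, float y2, float z2) {
    float dot = x1 * x2 + y1 * y2 + z1 * z2;
    if (dot > 1.0f) dot = 1.0f;
    if (dot < -1.0f) dot = -1.0f;
    return std::acos(dot);
}
```

Two perpendicular unit vectors give pi/2, and for steering decisions the raw dot product alone (sign and magnitude) is often all that's needed.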

What I would probably end up doing in your case is build a 3x3 rotation matrix for the car if you aren't able to pull that matrix directly from LFS. A 3x3 rotation matrix is really just 3 vectors grouped together that are always perpendicular to each other. The driver's forward vector you describe is one of them and points forward. Another vector points up, and another vector points either to the right or left depending on your coordinate system.

You need a forward direction vector (normalized to length 1) to start with which it looks like you already have (driver or car forward vector, I'll just call it "car forward vector"). To build the rest of the matrix I bet you can cheat a little bit since the tracks are pretty flat and just assign an "up" vector that's always 0,1,0 (or whatever it is in your coordinate system).

Next, you construct a new vector that's perpendicular to that "up vector" and the car's forward vector. You do that with a cross product operation. A cross product of two 3D vectors works like this:

Result.x = Vector1.y * Vector2.z - Vector2.y * Vector1.z
Result.y = Vector1.z * Vector2.x - Vector2.z * Vector1.x
Result.z = Vector1.x * Vector2.y - Vector2.x * Vector1.y

"Result" is a 3D vector just like your car's forward vector, but this one is perpendicular to the other two. Since your other vectors were "forward" and "up," this one points to the right or left side. For now I'm going to call "Result" the "right" vector.

To keep things simple and see if this is at all interesting to you, I'm going to skip something here. Really what you'll want to do at this step is create another vector or two. This will allow your driver to see in actual 3D and remove the simplification I made where we assume "up" is the same in the car's space as it is in the world's space. It would replace that first (0, 1, 0) vector we made with a new "up" vector that points up relative to the car instead of the world.

With this, if the car is rolled at a 52 degree angle, going around a banked turn, or heading down a steep hill, the reference points you describe will move up and down and spiral all correctly from the driver's perspective. You won't scratch your head over the directions and relative orientations of any of this anymore, and you won't ever use another sine or cosine unless you really care how many degrees are in an angle for some reason. Maybe you use it as an input into your steering controller, but you could probably use the dot product instead.

Anyway, I'll save this for later in case it interests you; it's just a couple more quick steps that would go right here. The important part is that you create 2 more vectors in addition to the forward vector you're already using. For now we'll just move along to the next bit.

Now you can just stuff those into a 3x3 matrix by stacking them into a structure something like this:

carRotationMatrix.x.x = ForwardVector.x
carRotationMatrix.x.y = ForwardVector.y
carRotationMatrix.x.z = ForwardVector.z
carRotationMatrix.y.x = UpVector.x
carRotationMatrix.y.y = UpVector.y
carRotationMatrix.y.z = UpVector.z
carRotationMatrix.z.x = RightVector.x
carRotationMatrix.z.y = RightVector.y
carRotationMatrix.z.z = RightVector.z

This is cool because now you not only know what direction is forward, you also know just as accurately what direction is up and what direction is right! You could have your AI controller react to something above or below you, or even tell if the car crashed and flipped upside down or something. It also may come in handy if you want to make a flight simulator some day and have your aircraft do stuff in 3D.

When you want to rotate some point into the car's coordinate frame, you just multiply whatever point you want to rotate by the transpose of this carRotationMatrix. For example, if you knew reference point A in world coordinates, but you really wanted it in the car's coordinate frame since that's where the driver is seeing everything, you multiply point A by the transpose of carRotationMatrix. Probably what you'll want is a function something like this:

referencePointRelativeToCar = Transpose(carRotationMatrix) * referencePoint

I do it a little differently than this, but you could do it this way probably if you wanted to.

What's the Transpose()? I used to just write out the entire computation in my code over and over every time I used it so it would get burned into my brain. Now I use a function or a function macro that looks like this:


Result.x = InitialVector.x * Matrix.x.x +
           InitialVector.y * Matrix.x.y +
           InitialVector.z * Matrix.x.z

Result.y = InitialVector.x * Matrix.y.x +
           InitialVector.y * Matrix.y.y +
           InitialVector.z * Matrix.y.z

Result.z = InitialVector.x * Matrix.z.x +
           InitialVector.y * Matrix.z.y +
           InitialVector.z * Matrix.z.z

That's the math for it. This looks like a lot, but it's actually very fast. What is somewhat slow computationally are the sine and cosine operations, so I try to avoid those whenever I can. Note that right now we have an angle and have computed two other direction vectors that are all perpendicular to each other without using sine or cosine at all. So much easier. I still use sine/cosine in some places where there isn't really any other choice, so don't worry about it too much.

Anyway, at the end of this you have referencePointRelativeToCar. This is sweet because now a coordinate like (1, 0, 0) tells you that the reference point is 1 unit forward of the driver (given the row layout above), relative to the car, regardless of what direction the car is pointing in the world. It could have crashed into a wall and flipped up on its front bumper, rotated 87.254 degrees around the roll axis, and you would still know that point is 1 unit forward relative to the driver's point of view. Just imagine the possibilities...
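Pulling the pieces together, here is one compact sketch of the whole trick: build the right vector with a cross product, then dot the world-space offset against each basis vector, which is the transpose multiply written out component-by-component above. The flat-track shortcut of up = (0, 1, 0) is assumed, and the struct and function names are mine, not from either project:

```cpp
struct Vec3 { float x, y, z; };

// Cross product: returns a vector perpendicular to both inputs.
Vec3 cross(const Vec3& a, const Vec3& b) {
    return { a.y * b.z - b.y * a.z,
             a.z * b.x - b.z * a.x,
             a.x * b.y - b.x * a.y };
}

// Rotate a world-space offset (reference point minus car position) into
// the car's frame by dotting it against each basis vector. Assumes the
// flat-track shortcut: up is world (0, 1, 0), so the forward vector must
// lie in the ground plane and be normalized. Result layout matches the
// matrix rows above: x = forward component, y = up, z = right.
Vec3 worldOffsetToCar(const Vec3& forward, const Vec3& worldOffset) {
    const Vec3 up = {0.0f, 1.0f, 0.0f};
    const Vec3 right = cross(forward, up); // handedness choice: forward x up
    return { forward.x * worldOffset.x + forward.y * worldOffset.y + forward.z * worldOffset.z,
             up.x      * worldOffset.x + up.y      * worldOffset.y + up.z      * worldOffset.z,
             right.x   * worldOffset.x + right.y   * worldOffset.y + right.z   * worldOffset.z };
}
```

With forward = (0, 0, -1), a world offset of (0, 0, -10) (ten meters straight ahead) comes out as (10, 0, 0): all of it in the forward component, none up or to the side, no matter how the world axes are oriented.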

This matrix and vector stuff is incredibly powerful. I used to do everything with sines and cosines too, and once I started learning this stuff things got a lot easier and I never went back. Beginning 3D graphics books and web sites will cover the big important ones that you use all the time: Rotation matrices, cross products (to get perpendicular vectors), and dot products. Dot products are especially cool. They're used to get angles and do lots of other cool stuff like finding out how far something is from something else in ANY direction from ANY point of view. So it wouldn't even need to be something that's forward/up/right relative to the car anymore. You could have a gun pointed at a 30 degree angle off the hood, twisted to the right 10 degrees, and figure out how much further it would have to rotate to point to the bad guy.

Or if you wanted to get even fancier with your AI, you could have the AI compute things based on the orientation of the driver's head inside the car rather than basing it on the orientation of the car. Maybe he's looking to the left at the car next to him so he doesn't see that Schumacher just slammed his brakes in front of him, and POW! Fortunately the AI next to him was looking forward, so he swerves out of the way of both of them. Maybe the next guy sees that and pees himself, so he looks down at his shorts for a couple seconds and misses his next braking point.


These two operations (dot products and cross products) are used extensively in my simulation's suspension system in VRC Pro, which has 76 individually animated parts on every car. Everything has a rotation matrix just like the one we computed earlier. These are all used to compute the physics of the vehicle and the resulting motion, and because I hardly use sines or cosines anywhere at all, it runs really fast on the CPU. Those 76 matrices get passed to the renderer and presto, this happens:

You can really do a lot with this vector stuff once you get the basic math behind it. It's not really that hard, and when you're coding it all you really have to do is write a MatrixMultiply() function and maybe a Transpose() function, things like that. Then you just start multiplying matrices and doing all that heavy math with a single line of code here and there. Next thing you know you're talking about multiplying this vector by that dot product rotating it by the thingamabob in the doohickey's coordinate system and transforming it back into whazoo-land. It gets to be kind of fun.

Anyway, if you have any questions about any of this, I'd be happy to help guide you a little bit. I haven't read the whole thread so I'm not totally sure where your understanding is. I'm just going off what I read in parts of post 316. So if you know this all already I apologize in advance.

Great stuff. I'll read some more and perhaps post again if you haven't replied in the meantime. It's neat to see that you've stuck to it all these years, let's just see if we can help speed things up a bit.
Hello Todd,

How are you?

I do actually know about matrices and vectors. I'll admit to having trouble from time to time with different coordinate systems, but overall I would classify myself as above beginner. The thing about post #316 is that I was attempting to figure out where the car was without the car's transform matrix. I solved it on paper for 2D, without much need for resources, simply as a challenge to myself; hence the lost, rambling nature of that post.

I am not sure it is possible to solve in 3D using this approach, as the error from setting/assuming the up vector to (0, 1, 0) could be far too large.

Therefore I have abandoned that approach. The Visual Sensor will predict the position of the car using the following steps. (The Visual Sensor has the transform matrix readily available, as required for the field of vision tests, but the driver never gets that information.)

- The Visual Sensor takes the points within three sections of the track (current, next and previous) relative to where the racecar is actually located.
- It tests that the vector from the driver's eye to the reference point is within the field of vision, throwing the point away if it is not.
- It then tests that the vector does not pass through any terrain (and eventually other cars); if it does, the point gets thrown away.
- It then takes this vector (currently in world space) and transforms it into driver space via the racecar transform. It will (eventually) apply some estimation error to the direction and distance. (Currently 100% accurate for simplicity.)
- After all visible reference points have been detected, the Visual Sensor inverts the racecar transform to bring those directions back into world space, and reverses the process to get the perceived position.

Note: once the estimation error has been injected, the perceived position will no longer be 100% accurate; currently it is, as expected.
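For what it's worth, the steps above could be sketched roughly like this in 2D (all names are mine, not actual AIRS code; the terrain-occlusion test is omitted, and a single heading angle stands in for the full racecar transform):

```python
import math

def world_to_driver(point, car_pos, heading):
    """Transform a world-space reference point into driver space
    (2D ground plane: x is right, z is forward, heading in radians)."""
    dx = point[0] - car_pos[0]
    dz = point[1] - car_pos[1]
    c, s = math.cos(-heading), math.sin(-heading)
    return (dx * c + dz * s, -dx * s + dz * c)

def in_field_of_view(driver_point, half_fov):
    """Angle off the nose; 0 means straight ahead along +z."""
    return abs(math.atan2(driver_point[0], driver_point[1])) <= half_fov

def perceive_position(ref_points, car_pos, heading, half_fov):
    """Run visible reference points back through the inverse rotation
    and average the per-point position estimates. With no estimation
    error injected, this reproduces car_pos exactly."""
    c, s = math.cos(heading), math.sin(heading)
    estimates = []
    for wp in ref_points:
        dp = world_to_driver(wp, car_pos, heading)
        if not in_field_of_view(dp, half_fov):
            continue  # the terrain/occlusion test would also go here
        # Rotate the driver-space direction back into world space...
        wx = dp[0] * c + dp[1] * s
        wz = -dp[0] * s + dp[1] * c
        # ...and subtract it from the known reference point.
        estimates.append((wp[0] - wx, wp[1] - wz))
    if not estimates:
        return None  # no visible reference points
    n = len(estimates)
    return (sum(e[0] for e in estimates) / n,
            sum(e[1] for e in estimates) / n)
```

Once some estimation error is injected into each dp, the averaged estimate would drift away from the true position, which is the intended behavior.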

I do realize the undertaking of this project, and have been in the game industry for 5+ years. I love programming, and racing, and am just doing this project on the side for fun and to see where it goes.

EDIT: The primary reason for the slow development speed is me dropping the project for other things, then coming back to it for a bit.
It looks to me like you have 2 spaces: "world" and "driver." Is that right? So driverA and driverB are the positions of points A and B relative to the driver in his own coordinate system. worldA and worldB are the positions of those points in the world coordinate system. Am I following you?

In other words, if your driver was sitting still with 0 velocity somewhere and spinning around in circles, worldA and worldB would never change, and driverA and driverB would move in circles around the driver the same way a driver would see it out his window. Is that right?

So basically you are starting with worldA and worldB and are pretending to not know the car position because you're trying to get the AI to figure out where it is based solely on those two reference points, and you want it to do that regardless of what direction the car/driver is facing. Is that right?

You then want to compute driverA and driverB in the driver's coordinate system, meaning how it looks out his window. So if point A is 20 units directly forward of him, driverA = (0,0,20) (or whatever it is in your coordinate system) regardless of what direction he's pointing. Meanwhile worldA could be anything. You then want to feed that to your AI and have the AI figure out the car's world position knowing only worldA, worldB, driverA, and driverB. Is that right?

Does the driver have a compass? Does he know what direction he is facing? I read that you're reading the rotation matrix but I'm not sure what information you want the AI to have. Is a compass ok? Does he have the forward vector of the car perhaps?

If so, you don't need two reference points to get the car position, you only need one. All you should need to do is rotate driverA in the opposite direction of the car, then shift it. If you're using the rotation matrix of the car/driver then you just multiply driverA by the transpose of the rotation matrix like I showed before. If he has a compass and you're basically treating it as 2D, you can just rotate by the heading angle. So for example in world coordinates the point might be 20 units north of the car. The car is rotated 45 degrees to the right, so driverA = (-14.14, 0, 14.14) or something depending on your coordinate system (20 · sin 45° ≈ 14.14). Just rotate that point by the opposite of the heading angle to swing it back around to line up with the world coordinate system again. Do this in other variables so you don't mess up driverA, since you might need it later for something else.

I'll just do it in 2D with a compass heading assuming your up direction is "y" to illustrate the basic idea. In 3D you'd just multiply by the transpose of the rotation matrix to rotate the driverA and driverB points to unrotate it. Let's see if I get this right:
rotationAngle = -heading //Note the negative sign so it rotates the point in the opposite direction which will undo the heading of the car.
//To make it cleaner for this post I'll assign 2 temporary variables like this:
x = driverA.x
z = driverA.z

//Be sure to convert the angle to radians for these function calls.
s = sin(rotationAngle)
c = cos(rotationAngle)
xnew = x * c + z * s
znew = -x * s + z * c

xnew and znew are your original driverA point, but rotated back to what it would have been if the car was pointing north. If the car was at 0,0,0 this new point (call it newA) would just equal worldA. If your car is not at 0,0,0 then newA will be shifted off of worldA by some x/y/z distance.

In other words, if this new point landed exactly on top of worldA, you would know the car position is 0,0,0. If it landed 10 units to the left of that, you would know the car is 10 units to the right of worldA. All you should have to do then is a subtraction using this new point and the worldA point and presto, there's your car position.
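Putting those last few paragraphs together in code (2D, y up, so the ground plane is x/z; a hypothetical sketch with my own names, not anyone's actual implementation):

```python
import math

def car_position_from_one_point(worldA, driverA, heading):
    """One reference point plus a compass heading is enough:
    un-rotate driverA, then subtract it from worldA."""
    rotation_angle = -heading            # the opposite direction undoes the heading
    s = math.sin(rotation_angle)
    c = math.cos(rotation_angle)
    xnew = driverA[0] * c + driverA[1] * s   # driverA = (x, z)
    znew = -driverA[0] * s + driverA[1] * c
    # (xnew, znew) is where driverA would be if the car pointed north;
    # the car position is however far worldA is shifted from that point.
    return (worldA[0] - xnew, worldA[1] - znew)
```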
Quote :I am not sure it is possible to solve in 3D using this approach as the error of setting/assuming the up vector to 0,1,0 could be far too large.

This is what my italicized paragraph in there was talking about. All you need to do is a cross product on the forward and right vectors you created originally from the 0,1,0 vector. This gives you a fourth vector which is now "up" relative to the car, and you throw away the original 0,1,0. You now have forward/right/up vectors (a rotation matrix).
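That construction, sketched with plain tuples (assumed names; any vector library does the same thing):

```python
def normalize(v):
    m = (v[0] ** 2 + v[1] ** 2 + v[2] ** 2) ** 0.5
    return (v[0] / m, v[1] / m, v[2] / m)

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def basis_from_forward(forward):
    """Build right and true-up from forward plus a guessed world-up;
    the (0,1,0) guess is discarded after the second cross product."""
    f = normalize(forward)
    r = normalize(cross((0.0, 1.0, 0.0), f))  # world-up x forward
    u = cross(f, r)                           # true up, already unit length
    return r, u, f                            # three rows of a rotation matrix
```

As noted, this blows up when forward lines up with (0,1,0), and it bakes in an assumption of zero roll.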

If this needs to work when the car is pointing almost perfectly straight up or down, and you do not want to use the entire LFS matrix, you can do a dot product check beforehand to get the angle between the driver's eye direction and that 0,1,0 vector. If they're too close to lined up with each other, just pick another vector to use: if you can get the normal vector of a ground triangle under the car you could use that, or you could simply switch the 0,1,0 to 0,0,1 or 1,0,0. You might need to take care that "right" and "up" are not flipped too, but unless you're driving up a wall or around a loop you'll probably never need this check. You'll run into gimbal lock somewhere with this approach if you're computing your own matrices from a single forward vector, but AI-driven cars usually don't point straight up or down, so it probably won't be a problem.

Either way you need orientation represented somewhere. If building the matrix is too much hassle then you might as well use the LFS matrix which won't ever have a gimbal lock problem. LFS probably uses a 4x4 matrix like other games, so just grab the first 3x3 elements.

What you use in the end depends on what you are allowing the AI to use for information of course. Maybe you want to provide it with only minimal data like a compass, something that's found in the real world, and let the AI compute what it really needs even if it's already available in LFS. I can understand that.

The trick is to try not to use trig to solve everything like you're doing. I avoid sine/cosine and all that except to actually rotate something. When I go from one coordinate system to another, I think of everything as points that are just rotated and moved. They're usually either rotated first and then moved, or moved first and then rotated, or sometimes moved first, then rotated, then moved back at the end. So just think "move, rotate, move" when you want to rotate something. You can rotate something forward by multiplying by a rotation matrix as you probably already know, and you can reverse the rotation by multiplying by the transpose of that matrix. That's about all you'll ever need with matrices.
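The move/rotate/move idea with transpose-as-inverse, as a minimal sketch (plain 3x3 row lists; names are assumed, not from any particular codebase):

```python
def mat_vec(R, v):
    """Multiply a 3x3 matrix (list of rows) by a vector."""
    return tuple(sum(R[i][j] * v[j] for j in range(3)) for i in range(3))

def transpose(R):
    return [[R[j][i] for j in range(3)] for i in range(3)]

def world_to_driver(p, car_pos, R):
    """Move, then rotate: subtract the car position, apply R."""
    d = tuple(p[i] - car_pos[i] for i in range(3))
    return mat_vec(R, d)

def driver_to_world(p, car_pos, R):
    """Rotate back with the transpose, then move back."""
    d = mat_vec(transpose(R), p)
    return tuple(d[i] + car_pos[i] for i in range(3))
```

This only works because a rotation matrix is orthonormal, so its transpose really is its inverse; it does not hold for matrices that also scale or shear.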
Yea, that is pretty close to what I derived (for a 2D solution) in post 316, at least as I understood it, except you used only point A. There are technically three coordinate systems: LFS World, AIRS World and AIRS Driver. For most purposes (including this) the LFS coordinate system can be ignored.

Basically everything you said is exactly correct, except I don't give the driver a compass, which does make it difficult to continue. I did successfully compute the 'compass' direction by using two points, A and B: the angle between these points in driver space vs world space is the rotation of the driver. This was the reason I used two points.
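That two-point trick might look something like this in 2D (my own sketch, not the AIRS code; the sign convention is arbitrary, so flip it if your heading runs the other way):

```python
import math

def heading_from_two_points(worldA, worldB, driverA, driverB):
    """The bearing of the A->B direction differs between world space
    and driver space by exactly the car's rotation."""
    world_bearing = math.atan2(worldB[0] - worldA[0], worldB[1] - worldA[1])
    driver_bearing = math.atan2(driverB[0] - driverA[0], driverB[1] - driverA[1])
    diff = driver_bearing - world_bearing
    return math.atan2(math.sin(diff), math.cos(diff))  # wrap into (-pi, pi]
```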

Your solution would also work if the driver already knew the compass heading, because then he could undo that rotation. Long story short though, the driver is doing this now, but simply using the transform. It holds 75% true to the project since the driver doesn't get fed his position or rotation, and can only calculate his position (he currently knows nothing about rotation) based on the visual estimations to each reference point. The other 25% is taken away because it is using the transform directly to do so, which, as I state in the following paragraph, could mostly be computed anyway, minus roll.

I do fully understand matrices, and know how to build one using a forward direction, crossing it with an assumed up (0, 1, 0) to get the right vector, then crossing those to get the actual up vector. However, my head tells me that doing this in this situation allows more error than I can accept; it removes the roll from the car's transform by assumption. It is possible I am wrong and this error would be insignificant; in any case the Visual Sensor now has a way to perceive the position based on the visual input.

I actually don't use sin/cos much at all; honestly I use 4x4 matrices and vector3s so much more, as it is simply easier. Post 316 was completely my thought process as I looked through a problem without searching for a solution, just my raw thoughts as I worked through it, in two dimensions.


I'm currently working on getting the driver to drive back to the track from wherever the car is at a given point in time, at least as well as I can. There are going to be a lot of issues, primarily walls and objects that block the car from driving where the driver wants.

The driver, who my girlfriend named Jared, can currently get in the car. The car must be H-shifter only at the moment, using the XRG. LFS is not using auto clutch, so he can stall the car. In the StartFromStopAction, the driver can get the car moving from a stop: if the car is off he will first turn it on, put it in first gear, rev the engine a bit, and fade off the clutch and onto the throttle until moving. The prediction unit requires motion to project where the car might be in the future.

Once the car is moving at 25 mph, Jared will panic. An action called PanicStopAction will force him to put both feet down, clutch and brake at 100%, and release all other controls until the car is stopped, and then he will put the car in neutral.

Both of these actions use other mini actions to help shift, start the car, etc.

The new action will be GetOnTrackAction, which will attempt to pick a point on the racing line nearest the driver, move along the forward racing line some short distance, and drive toward that point. Once the car is detected as on the track, this action will end. This action is not yet developed.
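One plausible way to pick that target point, purely as a sketch (the names, the lookahead distance, and the 2D list-of-points racing-line representation are all my assumptions, not AIRS code):

```python
import math

def nearest_index(racing_line, car_pos):
    """Index of the racing-line point closest to the car (2D points)."""
    return min(range(len(racing_line)),
               key=lambda i: (racing_line[i][0] - car_pos[0]) ** 2 +
                             (racing_line[i][1] - car_pos[1]) ** 2)

def pick_target(racing_line, car_pos, lookahead=20.0):
    """Walk forward along the line from the nearest point until the
    accumulated distance reaches the lookahead, and aim there."""
    i = nearest_index(racing_line, car_pos)
    travelled = 0.0
    while travelled < lookahead and i + 1 < len(racing_line):
        a, b = racing_line[i], racing_line[i + 1]
        travelled += math.hypot(b[0] - a[0], b[1] - a[1])
        i += 1
    return racing_line[i]
```

GetOnTrackAction could then keep steering toward pick_target(...) each tick until the car is detected as on the track.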
Ok. You might need to bring me up to speed on how some of this works. I haven't read too much. Basically you have all the relevant information that LFS has: you've got the matrix and position and all that. What you're trying to do is make a basically blind AI that figures its way around the track without using that position directly, right? You have the position because LFS has the position, but you want the AI to figure out the position on its own using only your sensors. That sensor data is itself produced from the LFS-fed position/orientation. Am I following you correctly?

The visual sensor knows everything including the position and orientation so it can produce the visual information Jared sees that comes in as reference points in Jared's reference frame (driverA, driverB, etc..). Jared then needs to determine the car's world position on his own using only the information the visual sensor provides. Is that right? Ultimately in your best case scenario you wouldn't even want Jared to know the orientation of the car? He would basically see a bunch of dots and through the visual sensor only knows the distance/angle to them and makes his decisions that way?