Okay, thanks for the support guys! I'm going to take the chance to talk a little technical as I've found a great reason for writing in this thread is to think through the problems. Usually if I find a way to explain it to someone then I can make it happen. In this case I'm a little stuck on a pretty difficult problem that I know must be possible.

*Warning, this post might be a little technical, hopefully it is explained well, but if you get lost don't worry. I wrote this post while coming up with the algorithm to figure out how to do what I'm trying to do. Meaning this has a lot of unfinished/unpolished raw thoughts as I worked out the problem.

The artificial driver can see two reference points, A world(50, 0, 0) and B world(100, 0, 0).

The driver knows the distance and direction to each of these reference points... *note 1* ...relative to the driver's point of view. A cone directly ahead would be in the direction (0, 0, -1) regardless of the world direction the driver is facing. In this example, I am going to place the driver at world(30, 0, 40), looking straight at reference point B.

Therefore the driver knows that point B is in the direction of (0, 0, -1) at a distance of Magnitude(100-30, 0-0, 0-40) = 80.6 meters. Point A is a little more difficult, as subtracting the values like that won't give the direction in relation to the driver, which is how I figured out I had the issue described in note 1. I had simply done conePosition - driverPosition to get the direction and distance, but that direction remained in world space. Doing this for point B was simple because I defined the driver to be looking straight at that point, knowing straight ahead is (0, 0, -1).
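As a sanity check on those magnitudes, here's a quick sketch in Python (positions from the example; the function and variable names are mine, not anything from LFS):

```python
import math

# Positions from the example above (y is height and stays 0 throughout).
driver_world = (30.0, 0.0, 40.0)
point_a = (50.0, 0.0, 0.0)
point_b = (100.0, 0.0, 0.0)

def distance(p, q):
    """Euclidean distance between two 3D points."""
    return math.sqrt(sum((qi - pi) ** 2 for pi, qi in zip(p, q)))

print(distance(driver_world, point_b))  # ~80.62 meters
print(distance(driver_world, point_a))  # ~44.72 meters
```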

I now need to figure out how to get the direction from the driver to point A relative to the driver. As a complete and utter guess, based on estimating distances from those points on graph paper, A seems to be just under 40 meters in front of the driver and about 25 meters to the right. I will continue this post with these estimated values, so don't be disappointed if the math doesn't come out exactly to the expected driver position, world(30, 0, 40).

The distance to point A (from my estimation) is about 47.2 meters. The actual world space distance (to check my work) is ~44.7 meters, so my estimate is within 3 meters; not bad for graph paper at 10m per square!

/////////////////////////////////////

**Therefore the driver knows the following:**
Reference Point A is 47.2 meters away in the direction driver(0.53, 0.0, -0.85), which places it at driver(25, 0, -40)

Reference Point B is 80.6 meters away in the direction driver(0, 0, -1), which places it at driver(0, 0, -80.6)

The driver will be given the world space position of each reference point, as he is trying to compute his own world space position.

Reference Point A is world(50, 0, 0)

Reference Point B is world(100, 0, 0)

/////////////////////////////////////

Given only that information, the driver needs to figure out that he is actually at world(30, 0, 40)...

If I take driverB - driverA, that gives me the vector from A to B in driver space: driver(-25, 0, -40.6). He is trying to line that up with the world space vector from A to B, which is world(50, 0, 0).
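That subtraction, spelled out (the driver-space positions are my graph-paper estimates from above):

```python
# Estimated driver-space positions of the two reference points (forward is -z).
driver_a = (25.0, 0.0, -40.0)   # ~25 m right, ~40 m ahead
driver_b = (0.0, 0.0, -80.6)    # straight ahead

# Vector from A to B as seen from the driver.
a_to_b_driver = tuple(b - a for a, b in zip(driver_a, driver_b))
print(a_to_b_driver)  # ~(-25.0, 0.0, -40.6)
```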

*I will admit to being slightly lost and currently rambling about numbers and positions that are known, trying to find a way to use some other mathematical function to get from what the driver knows to where the driver IS.*
My line of thought is something along these lines: we have a triangle. In world space it runs from reference point A, to reference point B, to the driver: world(50, 0, 0) to world(100, 0, 0) to world(30, 0, 40). The driver also sees a triangle, but it currently exists in driver space: driver(25, 0, -40) to driver(0, 0, -80.6) to driver(0, 0, 0). Since the driver has the vector from reference point A to reference point B in both world and driver space, he should be able to figure out his position in world space. Let's see... trigonometry...

Getting the angle between worldAtoB and driverAtoB is going to be easy and critical. The question then is what to do with that angle, and my brain isn't connecting the dots.
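For the record, the dot product gives that angle directly; note it is an unsigned angle, so the direction of the turn has to come from somewhere else (e.g. the sign of the cross product's y component). A sketch, with names of my own choosing:

```python
import math

def angle_deg(u, v):
    """Unsigned angle between two 3D vectors, in degrees."""
    dot = sum(ui * vi for ui, vi in zip(u, v))
    mag_u = math.sqrt(sum(ui * ui for ui in u))
    mag_v = math.sqrt(sum(vi * vi for vi in v))
    return math.degrees(math.acos(dot / (mag_u * mag_v)))

world_a_to_b = (50.0, 0.0, 0.0)
driver_a_to_b = (-25.0, 0.0, -40.6)
print(angle_deg(world_a_to_b, driver_a_to_b))  # ~121.6, i.e. the ~122 degrees
```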

With just a short break (20 minutes) I figured it out. The angle between worldAtoB and driverAtoB is 122 degrees, which makes perfect sense given my diagram. If we then find the vector that is 122 degrees from the vector (driverB to driver) and add it to worldB, the result will be the driver's position in world space. I didn't quite follow what I just said, but...

driverB to driver = (0, 0, 80.6)

*It is not negative here because the direction is not driver-to-point-B in driver space; it is from point B back to the driver in driver space*
The vector which is 122 degrees from this is: (0 * cos(122) + 80.6 * sin(122), 0, -0 * sin(122) + 80.6 * cos(122))

cos(122) = -0.530, sin(122) = 0.848

which is: (68.3, 0, -42.7)
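That rotation step as code, using the same formula (a rotation about the y axis; `rotate_y` is just my name for it):

```python
import math

def rotate_y(v, theta_deg):
    """Rotate v about the y axis: x' = x*cos + z*sin, z' = -x*sin + z*cos."""
    c = math.cos(math.radians(theta_deg))
    s = math.sin(math.radians(theta_deg))
    x, y, z = v
    return (x * c + z * s, y, -x * s + z * c)

print(rotate_y((0.0, 0.0, 80.6), 122))  # ~(68.35, 0.0, -42.71)
```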

Adding (68.3, 0, -42.7) to worldB(100, 0, 0) should give us... (something failed in my logic...) (168.3, 0, -42.7)

*which is obviously not where we expect the driver* **I suspect something is only slightly wrong with my rotation vector...**
I've decided to try using driverA to driver instead of driverB to driver, only because driverB has the special case above where x is 0. The following also only solves the position in 2D space, so I may need to bite the bullet and multiply the vector from driverA to the driver by the driver's local coordinate matrix (grabbed directly from LFS, and something I was trying to avoid, but I currently see no way around it if I want to accurately include the dimension of height). Ignoring height...
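For reference, the full 3D route I'm trying to avoid would look roughly like this: build the driver's orientation as a 3x3 matrix (columns being the driver's right/up/back axes in world coordinates; that layout, and the handedness chosen to match my estimate of A at driver(25, 0, -40), are my assumptions, not how LFS actually hands the matrix over) and transform driver-space vectors with one multiply, height included:

```python
import math

def mat_vec(m, v):
    """Multiply a 3x3 matrix (tuple of rows) by a 3-vector."""
    return tuple(sum(m[i][j] * v[j] for j in range(3)) for i in range(3))

# Driver at world(30, 0, 40), looking at B world(100, 0, 0).
driver_pos = (30.0, 0.0, 40.0)
to_b = (70.0, 0.0, -40.0)
mag = math.sqrt(sum(c * c for c in to_b))
fwd = tuple(c / mag for c in to_b)       # world-space forward, ~(0.87, 0, -0.50)

# Orientation matrix columns: right, up, back (-forward). Handedness chosen so
# that A lands near the estimated driver(25, 0, -40).
right = (fwd[2], 0.0, -fwd[0])
m = ((right[0], 0.0, -fwd[0]),
     (right[1], 1.0, -fwd[1]),
     (right[2], 0.0, -fwd[2]))

# Estimated driver-space position of A, transformed back to world space.
a_world_est = tuple(r + p for r, p in zip(mat_vec(m, (25.0, 0.0, -40.0)), driver_pos))
print(a_world_est)  # ~(52.3, 0.0, -1.6): near the true world(50, 0, 0)
```

With the real matrix from LFS, the same multiply would also handle pitch and roll, which is the whole point of biting that bullet.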

driverA to driver = (-25, 0, -40)

*(Again negative of driverA since it points back to the driver)*
(-25 * cos(122) + -40 * sin(122), 0, -(-25) * sin(122) + -40 * cos(122))

13.25 + -33.92, 0, 21.2 + 21.2 = (-20.67, 0, 42.4)

(-20.67, 0, 42.4) + worldA(50, 0, 0) = (29.33, 0, 42.4), which is within expectations given my estimate above of the driverToA vector in driver space...
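That driverA-to-driver calculation, end to end (same numbers and the same rotation formula as above):

```python
import math

c = math.cos(math.radians(122))   # ~-0.530
s = math.sin(math.radians(122))   # ~0.848

a_to_driver = (-25.0, 0.0, -40.0)            # the vector used above
rotated = (a_to_driver[0] * c + a_to_driver[2] * s,
           0.0,
           -a_to_driver[0] * s + a_to_driver[2] * c)
world_a = (50.0, 0.0, 0.0)
estimate = tuple(r + w for r, w in zip(rotated, world_a))
print(estimate)  # ~(29.33, 0.0, 42.40), close to the true world(30, 0, 40)
```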

But now I'm a little confused, because that didn't work for driverB to driver?? I believe it has some reason to do with the stupidity of -Z being the forward direction. If I instead take driver to driverB and run it through the rotation, I get (-68.3, 0, 42.7); adding that to worldB(100, 0, 0) gives (31.6, 0, 42.7), which is again within the acceptable range of error from my estimations. I am a little unsure why the vector (driverA to driver) works but the vector (driverB to driver) needs to be negated; looking back, my driverA-to-driver vector (-25, 0, -40) only negates the x of driverA(25, 0, -40) and leaves the z alone, so I can only assume I didn't negate a Z when I should have.
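And the version that finally behaves for B: feed the driver-to-B vector (the negation of the one that failed) through the same rotation before adding worldB:

```python
import math

c = math.cos(math.radians(122))
s = math.sin(math.radians(122))

driver_to_b = (0.0, 0.0, -80.6)              # driver to B, i.e. -(B to driver)
rotated = (driver_to_b[0] * c + driver_to_b[2] * s,
           0.0,
           -driver_to_b[0] * s + driver_to_b[2] * c)
world_b = (100.0, 0.0, 0.0)
pos_estimate = tuple(r + w for r, w in zip(rotated, world_b))
print(pos_estimate)  # ~(31.65, 0.0, 42.71), again near world(30, 0, 40)
```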

I may come back to solving this, but as said in the notes, I may just leave it in world space, still with estimated world space directions and distances, as it would be a severe pain to compute in the third dimension... Ultimately I'd like the driver to have all information in driver space, so I may at least give it an attempt.

**Note 1:** This is where I learned that although the driver currently only knows the distance and direction, he knows the direction in world space. Which means finding his world position is a matter of taking the reference point position and moving backwards along that direction by the estimated distance. Initially I planned that the driver would only have knowledge of the direction in **driver** space.

Meaning if the driver saw a cone 50 meters in front of him, regardless of the direction he is facing, the direction would be (0, 0, -1) with a distance of 50m. And if he saw a cone 25 meters to the right and 25 meters forward, he would know the direction as (~0.707, 0, -0.707) and a distance of ~35m.
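That cone example in code (forward is -z throughout, matching the convention above):

```python
import math

# A cone 25 m to the right and 25 m ahead of the driver, in driver space.
cone_driver = (25.0, 0.0, -25.0)
dist = math.sqrt(sum(c * c for c in cone_driver))
direction = tuple(c / dist for c in cone_driver)
print(dist)       # ~35.36 m
print(direction)  # ~(0.707, 0.0, -0.707)
```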

I'm not sure this matters in the big picture of things, but I am going to attempt to figure it out; if it breaks the prediction unit, I may leave it as it is. Back to how I would compute it if the direction were in the right space...

**Note 2:** I did manage to get these into driver space and, as I predicted, this breaks the prediction unit completely. Worse than I expected: it was obvious to me that the prediction unit would then visualize the car's movement always along (nearly) the world z, since the visualization is rendered in world space, but I didn't expect it to break the reported speed. I may change this up a little so the prediction unit runs using the "estimated world space positions" that the visual sensor uses. That means storing this estimated position in the driver's memory unit, which is probably an overall optimization because computing it is a bit expensive...

**Note 3:** Sorry this one got really technical; maybe someone sees the mistake I made with driverB to driver, but this post was written as I figured out, by hand and thought, how to compute the driver's world position given only driver space information and two world points.