The online racing simulator
#151 - Dac
This is an area where ANNs can help you greatly. My interpretation of your goals is that you want the AI to operate on the same basis a human does, and for that ANNs are just about perfect. I can understand why you are wary of them since you haven't researched them, and from a few Google searches they may seem too abstract and not very practical, but bear with me here so you can see the same light I do.

Your method is a cure for the symptoms rather than a cure for the cause. You can go down this route, but it will never be as genuine or appropriate as an ANN. For instance, reaction times have a lot less to do with racing than you might think. Jeremy Clarkson, I believe, did a documentary on it and found that F1 drivers actually have the same reaction times as the rest of us. Where they differ is in their experience, just like a professional football (soccer) goalkeeper reacts to the smallest of cues in order to predict where the ball will go. They are using experience to judge the best course of action BEFORE the event happens, rather than working at the limit with superhuman reactions, or slightly before the limit as in your technique of trying to 'cover up' the superhuman effect. Because ANNs work on experience, there is no need to cover up their ability; they genuinely function as a human would, so the problem is eliminated. If you see what I mean.

Let me just run you through the basics of it. You have 3 layers: input nodes, hidden-layer nodes and output nodes, which are largely self-explanatory. The input nodes correspond to the inputs a human would have; the hidden layer is the 'brain', which encodes experience into a purely parallel processing unit; and the output nodes (hands, feet etc.) produce the action, or in this case control the car.

Here is what immediately strikes me as necessary:

Input:
Vision
Feeling - Steering Wheel, G-forces, Pedals, Seat
Sound - Engine, Tyres [FL, FR, RL, RR]

Output:
Steering Movement - Steering Angle
Pedal Movement - Accelerator Pressure, Brake Pressure
Head Movement - Left/Right
Gearbox - Shift Up, Shift Down

I have not included the hidden layer because in practice it is not an exact science; just as engine manufacturers find ignition timing maps by trial and error, so do ANNs find the best configuration for the hidden layer. Don't get disheartened: there are only a few variables to control, namely the number of nodes and the learning rate (how big you want the changes to be within the trial and error). You start off with the minimum number (i.e. 1) and then slowly increase it until the behaviour is acceptable.
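
To make that concrete, here is a minimal sketch of a 3-layer feedforward net in C++. This is my own illustration, not anything taken from JNNS or from the project; the node counts, the sigmoid activation and the random starting weights are purely illustrative, and the weights would still need to be trained:

#include <cmath>
#include <cstdlib>
#include <vector>

// Minimal 3-layer feedforward net: input -> hidden -> output.
// The hidden node count and the learning rate are the trial-and-error
// knobs mentioned above; Sigmoid() keeps every activation between 0 and 1.
struct SimpleNet
{
    int numIn, numHidden, numOut;
    float learningRate;                     // how big each training change is
    std::vector<std::vector<float>> wIH;    // weights: input -> hidden
    std::vector<std::vector<float>> wHO;    // weights: hidden -> output

    SimpleNet(int in, int hidden, int out, float rate)
        : numIn(in), numHidden(hidden), numOut(out), learningRate(rate),
          wIH(hidden, std::vector<float>(in)),
          wHO(out, std::vector<float>(hidden))
    {
        for (int h = 0; h < hidden; ++h)
            for (int i = 0; i < in; ++i)
                wIH[h][i] = RandomWeight();
        for (int o = 0; o < out; ++o)
            for (int h = 0; h < hidden; ++h)
                wHO[o][h] = RandomWeight();
    }

    static float RandomWeight() { return (rand() / (float)RAND_MAX) - 0.5f; }
    static float Sigmoid(float x) { return 1.0f / (1.0f + std::exp(-x)); }

    // One forward pass: the inputs cascade through the hidden layer to the outputs.
    std::vector<float> Run(const std::vector<float>& inputs) const
    {
        std::vector<float> hidden(numHidden), outputs(numOut);
        for (int h = 0; h < numHidden; ++h)
        {
            float sum = 0.0f;
            for (int i = 0; i < numIn; ++i)
                sum += wIH[h][i] * inputs[i];
            hidden[h] = Sigmoid(sum);
        }
        for (int o = 0; o < numOut; ++o)
        {
            float sum = 0.0f;
            for (int h = 0; h < numHidden; ++h)
                sum += wHO[o][h] * hidden[h];
            outputs[o] = Sigmoid(sum);
        }
        return outputs;
    }
};

Training would then nudge wIH and wHO in small steps scaled by learningRate, and you would grow numHidden from 1 upwards exactly as described above.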

The concept is very easy to understand; just as the gates in a CPU work extremely simply, so do ANNs. What is astonishing is that when you put them into action you can see how easily the human psyche can be modelled. In fact it gave me a lot of food for thought about my own existence, and whether or not a good ANN would indeed be conscious.

This area is highly worth pursuing if you can spend a few hours researching it.
Not that I want to waste your time, especially when I have not yet done any reading on it myself. That is something I promise I will do at some point, likely today if I can find the time, which I should be able to do given that I am currently keeping track of the AI thread and working on the AI project itself: attempting to get the car to drive towards a point. Interestingly enough, it starts to, and then goes in the opposite direction.

Sorry, I got side-tracked while trying to form a question. I get the input/output idea, but I don't understand very well what happens in the hidden layer. Obviously that is likely where the meat is, or at least I am thinking so... I am trying to think of an easier example than driving for you to use to explain that layer; and again, don't bother if it is something that I will really need to read up on to understand in the first place. I can likely follow any other simple example you have, even if it doesn't involve four tires, a couple of pedals and a steering wheel!

I am quite lost at fitting this into the project, though that is understandable since I need to do some reading. The brief mention of this topic in college (including the article I read and wrote about) has long escaped my memory... Going to dig out the books now.
#153 - Dac
Please don't worry about wasting my time; I am quite passionate about improving driving simulators, especially the AI.

A brief explanation for you. The hidden layer, yes, that is where the meat is. NNs work unlike any other system you have seen before; the only other example is biological brains. Every brain is built up of neurons. Each takes input signals, adds them together and decides whether or not to signal the nodes connected to it. Since they are all connected together you have a network of individual, extremely simple neurons. Hence a neural network.

The magic happens when you join all these little building blocks together. You have input nodes which you activate; for example, you could have a neuron for 'CarInFront'. When there is a car in front, this node would be switched 'ON'. This node is connected to the neurons in the hidden layer, which are in turn connected to the output nodes. When the CarInFront node is activated it cascades through all the other neurons in the network, affecting their activations, which in turn affect the activations of the output nodes, such as 'SteeringAngle'.

So you can vaguely see from this example how the inputs influence the hidden layer nodes, which in turn affect the driving of the car.
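
As a toy illustration of that cascade (every weight here is a number I made up purely for the example), flipping the single CarInFront input on or off changes the hidden activations, which in turn shifts the SteeringAngle output:

#include <cmath>
#include <cstdio>

float Sigmoid(float x) { return 1.0f / (1.0f + std::exp(-x)); }

int main()
{
    // One input node, two hidden nodes, one output node; weights are invented.
    float carInFront = 1.0f;                    // 'ON' when a car is detected ahead
    float h0 = Sigmoid( 2.0f * carInFront);     // hidden node excited by the input
    float h1 = Sigmoid(-1.5f * carInFront);     // hidden node inhibited by the input
    float steeringAngle = Sigmoid(1.2f * h0 - 0.8f * h1);
    std::printf("SteeringAngle activation: %f\n", steeringAngle);
    // With carInFront = 0.0f the same wiring settles on a different steering value,
    // so switching one input cascades all the way through to the output.
    return 0;
}
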
OK, well, from my 5 minutes spent reading (not my college book but another AI book I bought) I have learned quite a bit about my own brain... Some of it, as you said, was astonishing. I looked at the individual letters on the page for a moment and thought: damn, this reading thing is perfect for NNs because of the patterns being presented.

However, although I am kind of excited about continuing this read, I am currently starting to feel more negative about NNs for the AI here. I am not so sure that a racing simulation has patterns in the way NNs deal with them. That said, I do need to keep reading, and yes, racing has a pattern: go around the track. But as for a moment-to-moment pattern, I am less sure. I've got to get deeper into how it would be implemented before I can really decide whether it is useful or not.
#155 - Dac
It is most certainly useful, I can assure you; every racing driver who ever lived had a brain that worked functionally as ANNs do. There is always a pattern, and it's the NN's job to find it, otherwise they are of no use. All of the inputs I mentioned above are extracts from the environment that the car is travelling through, hence that is the pattern.

The only concern I would have is with the implementation, as you said. The only experience I have is with JNNS, which allows you to create, run and test neural networks very quickly. How you would go about mapping the input and output nodes to the simulation environment is, I am hoping, where your expertise comes in.

If you'd like I can send you the program and quick tutorial I was given to implement and run a simple NN.

Edit: A few Google searches yields this http://www.ra.cs.uni-tuebingen ... tware/snns/welcome_e.html - it outputs C programs?
Sure. I don't know Java syntax directly but I think I would be able to pick up the ideas. I finished reading the NN chapter in that book, all except the source code implementation. I like some of the ideas, but I would certainly need to test them outside the realm of the racing simulation first, on something very simple and then one step further than simple, without over-complicating things. I now see that the implementation of a NN is considered the easy part of the process, and that the hard, time-consuming and challenging part is getting the required results.

I do understand that X, Y, Z inputs are patterns which, if you answer with A, will always lead to B; but when it comes to handling traffic and other things, I think this will lead to less than desired results. In other words, I can't think of any way to accomplish training the NN, which is the hard part of NNs!
#157 - Dac
Quote from blackbird04217 :Sure. I don't know Java syntax directly but I think I would be able to pick up the ideas. I finished reading the NN chapter in that book, all except the source code implementation. I like some of the ideas, but I would certainly need to test them outside the realm of the racing simulation first, on something very simple and then one step further than simple, without over-complicating things. I now see that the implementation of a NN is considered the easy part of the process, and that the hard, time-consuming and challenging part is getting the required results.

I do understand that X, Y, Z inputs are patterns which, if you answer with A, will always lead to B; but when it comes to handling traffic and other things, I think this will lead to less than desired results. In other words, I can't think of any way to accomplish training the NN, which is the hard part of NNs!

Of course; as I said, I can give you a simple practical tutorial where you create a small ANN to recognise 10 letters of the alphabet.

As long as you train the ANN to deal with traffic, it would be capable of interacting with other cars on the circuit no more or less than if we raced each other.

At the moment I am speaking in pipe-dream terms, and I think a good starting point would be to get a simple 2D car to navigate around an oval circuit with the two constraint parameters of crashes and lap time.
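
As a rough sketch of how those two constraint parameters might be folded into a single score for whatever training scheme ends up being used (the names and weightings here are invented, not from any existing tool):

// Hypothetical fitness score for one attempt around the oval:
// lower is better, and crashes are penalised far more heavily than a slow lap.
float LapFitness(float lapTimeSeconds, int crashes, bool finishedLap)
{
    const float crashPenalty = 60.0f;    // each crash "costs" a minute
    const float dnfPenalty   = 300.0f;   // not finishing is worse than any slow lap
    float score = lapTimeSeconds + crashes * crashPenalty;
    if (!finishedLap)
        score += dnfPenalty;
    return score;
}

Whichever network scores lowest over a batch of attempts would be the one to keep and refine.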

After a few google searches it appears this technique has been used before - http://www.ectri.org/YRS07/Papiers/Session-2/Booth.pdf

http://togelius.blogspot.com/2 ... ry-car-racing-videos.html
One thing - are you trying to get the AI to simulate real drivers or sim-racers?

If the former, then I think the g-forces on the car would definitely be an important thing to monitor (as opposed to sim-racers, who can't feel the g-forces, amongst other things), and they would also give you feedback about things like locking wheels, where the car suddenly isn't slowing down as fast as the AI estimates it should. It also has the added bonus that you can introduce a random judgement factor, based on driver experience, into finding the limit. An experienced driver would be able to feel the g-forces and know that in that car, at around that speed, at that point in the corner, they were approaching the limit, whereas an inexperienced driver could miss it and mess up the corner.
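
A rough sketch of what I mean by that judgement factor (the names and numbers are invented, just to show the shape of the idea):

#include <cstdlib>

// experience in [0..1]: 1.0 = seasoned pro, 0.0 = complete novice.
// Returns the lateral grip the driver *believes* the car has right now.
float PerceivedGripLimit(float actualGripLimit, float experience)
{
    float maxError = 0.30f * (1.0f - experience);  // up to +/-30% misjudgement for a novice
    float error = ((rand() / (float)RAND_MAX) * 2.0f - 1.0f) * maxError;
    return actualGripLimit * (1.0f + error);
}
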
I don't know; I would likely be aiming more towards 'real drivers', since I have hopes that eventually sim-racing will get closer to that. Motion simulators do exist; it will only take time before a version small enough for the home is built. And you don't need to feel the full forces; even partial forces would give you the level of feedback needed. Of course, it will never be quite the same...

Anyway, I came back because I am having some issues with my world coordinates now. This is one of the things I could foresee, although it should be something that can be overcome with a little effort, so I am asking for some help before I beat my head against the keyboard:


In my world the following apply:
X-Axis: 1, 0, 0 (RIGHT)
Y-Axis: 0, 1, 0 (UP)
Z-Axis: 0, 0, 1 (FORWARD)

Quote from Insim.txt :
// If ISF_MCI flag is set, a set of IS_MCI packets is sent...

struct CompCar // Car info in 28 bytes - there is an array of these in the MCI (below)
{
    word  Node;       // current path node
    word  Lap;        // current lap
    byte  PLID;       // player's unique id
    byte  Position;   // current race position : 0 = unknown, 1 = leader, etc...
    byte  Sp2;
    byte  Sp3;
    int   X;          // X map (65536 = 1 metre)
    int   Y;          // Y map (65536 = 1 metre)
    int   Z;          // Z alt (65536 = 1 metre)
    word  Speed;      // speed (32768 = 100 m/s)
    word  Direction;  // direction of car's motion : 0 = world y direction, 32768 = 180 deg
    word  Heading;    // direction of forward axis : 0 = world y direction, 32768 = 180 deg

    short AngVel;     // signed, rate of change of heading : (16384 = 360 deg/s)
};

// NOTE 1) Heading : 0 = world y axis direction, 32768 = 180 degrees, anticlockwise from above
// NOTE 2) AngVel  : 0 = no change in heading, 8192 = 180 degrees per second anticlockwise

struct IS_MCI // Multi Car Info - if more than 8 in race then more than one of these is sent
{
    byte Size;  // 4 + NumP * 28
    byte Type;  // ISP_MCI
    byte ReqI;  // 0 unless this is a reply to an TINY_MCI request
    byte NumC;  // number of valid CompCar structs in this packet

    CompCar Info[8]; // car info for each player, 1 to 8 of these (NumC)
};


namespace LFS_to_AIRS
{

void ConvertToOrientation(ice::iceVector3 *vDir, const unsigned short usAngle)
{
    // MCI heading/direction: 0 = LFS world Y axis, 32768 = 180 degrees.
    float val = ice::MathConverters::DegreesToRadians((usAngle / 32768.0f) * 180.0f);

    // Build a unit direction vector in the AIRS convention (Y up, Z forward).
    vDir->m_fX = sin(val);
    vDir->m_fY = 0.0f;
    vDir->m_fZ = cos(val);

    ice::Math::Vector3Normalize(vDir, vDir);
}
}

The "ConvertOrientation" function should takes in the orientation values from the MCI packet, so 32768 = 180 degrees. All my Math:: stuff has been proven to work on several projects so I already know that code is not suspect; though I am trying to make sure this is working. (The car is not going where I thought it would be and the rudimentary display I made wasn't quite what I expected. Leading me to believe this is the problem; however I am still checking my display code to make sure the units are setup correctly there as that is new code as well.) I am hoping someone could check that code out.

I would want:
X = 0, Y = 0, Z = 1 when the car is oriented with the world Z (0,0,1), which should be an orientation of 0, I assume; it's not listed, but other rotational values in InSim.txt say it rotates anticlockwise. Of course, in LFS Z is the up axis and Y is the forward axis, though this should be converting that just fine. Help?

EDIT: It is listed, and not an assumption; I just never read the 'notes' before, apparently! Either way it still isn't quite working for me at this time, though I will keep checking to see if there is something wrong with my world coordinates in the AI project.
The coordinates in LFS work this way:
X = left/right (or rather, West/East)
Y = forward/back (or rather, North/South)
Z = up/down

I think this is actually the more logical variant, and maybe you should change your AI logic to work like this too. The reason is that your AI will mostly work on a 2D plane, as far as I understood, basically as if you're looking at the car/track from a top-down view. This means that by far the most interactions / coordinates will be on the "track surface plane", which uses X,Y coordinates in the LFS / traditional system. In your system most stuff would happen on the X,Z plane, which "feels" kind of weird in my opinion. Of course, in the end it doesn't really matter, so use whatever you're most used to.

In regard to your conversion function, are you sure you're getting the heading as a ushort? It's a float for me... wait, are you actually using OutSim (the OutSimPack struct) or the InSim CompCar struct? You should be using the former, but it seems you're applying the maths required for the latter. In the OutSim packet the angles already come in radians, ranging from -180° to +180° (-Pi to +Pi).
In this current example I am using the MCI packet, which contains the CompCar structs. This is the way that gets me the data available on all the cars, not just the one being driven; of course the OutSim packet has other information that may be usable for the _actual_ driver. But at this time I am using the MCI packet, so yes, I am sure that I am getting this value as an unsigned short, as described in the InSim quote above.

As far as my world coordinates go, I am fully used to doing it that way; it allows me to change from 2D to 3D without changing the coordinates. In my 3D worlds Z is always forward, X to the right and Y up and down, hence why I used that convention here. Since the AI project will have a life outside of LFS I have used my convention. It seems I may have some issues in the LFS -> AI World conversion interface that I have.

If I have not stated it already, I have named the project Artificial Intelligence for Racing Simulators, or A.I.R.S. for short. I will need to look further into the rotational thing. For some reason, when I was reading the InSim document, I understood it as Y running from back to forward; maybe this explains some of my issues. I will get back to this when I figure out the problem - though I am still willing to accept help if my math above is converting something the wrong way...
Woooh!! Whooo! (Yes, I am every bit excited!) AMAZING!! Whoo!

Proof attached to this post!

I have successfully driven my AI to a target! Okay, when you watch the replay you will likely die of laughter, as I nearly did the first time I saw this behavior the other day... However, the AI driver successfully hits the blue cone, which is where I set him to go! Now, to fix some behavioral issues! ENJOY!

EDIT: Ignore the 30-second penalty. That is _my_ fault, as the AI is dependent on me to tell it when the lights turn green. LFS has no accurate way to detect the light change, so I substituted an input key for the time being.
Attached files
AIRS_LFS 1-28-10.spr - 14.4 KB - 593 views
Ah of course, the heading is anticlockwise. You have to take X = -sin(val).

By the way, why are you normalizing the vector afterwards? As far as I understand it already comes out normalized (length = 1) due to the use of sin() and cos()?

E: :ices_rofl @ the driving skill
Because I am paranoid. :P

And actually;

vDir->m_fX = sin(val);
vDir->m_fY = 0.0f;
vDir->m_fZ = cos(val);

Is the code that works; likely it could be that LFS's forward-to-backward axis and my backward-to-forward axis switch this around, though whatever is happening, I know from my debug display that things are working. I am still a bit excited; such a small thing, yet really such a big step.
Sorry, but that doesn't seem right. I'll draw a diagram of how the LFS coordinate system works.

E: Attached the diagram below (just imagine the Y = your Z). Your calculation would put the X positive and therefore on the right side, having the angle mirrored along the Y (Z) axis. I just want to make sure you use the correct values for calculation, so that the AI reaching its goal wasn't just a fluke.
Attached images
LFSheading.png
Quote from blackbird04217 :Woooh!! Whooo! (Yes, I am every bit excited!) AMAZING!! Whoo!

Proof attached to this post!

I have successfully driven my AI to a target! Okay, when you watch the replay you will likely die of laughter, as I nearly did the first time I saw this behavior the other day... However, the AI driver successfully hits the blue cone, which is where I set him to go! Now, to fix some behavioral issues! ENJOY!

EDIT: Ignore the 30-second penalty. That is _my_ fault, as the AI is dependent on me to tell it when the lights turn green. LFS has no accurate way to detect the light change, so I substituted an input key for the time being.

I would try to get a pixel's color from the location of the virtual start light's green part, so that the AI could "see" when it changes to green.
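
For example, on Windows that could be as simple as sampling one screen pixel with GDI; the coordinates would have to be found by hand and would differ per track, resolution and window position (rough sketch only):

#include <windows.h>

// Sample one on-screen pixel where the green start light appears.
// lightX/lightY are placeholders that would need to be located manually.
bool StartLightIsGreen(int lightX, int lightY)
{
    HDC screen = GetDC(NULL);
    COLORREF c = GetPixel(screen, lightX, lightY);
    ReleaseDC(NULL, screen);
    return GetGValue(c) > 200 && GetRValue(c) < 100;   // "mostly green" threshold
}
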
Quote from AndroidXP :Sorry but that doesn't seem right. I'll draw a diagram as to how LFS coordinate system works.

E: Attached the diagram below (just imagine the Y = your Z). Your calculation would put the X positive and therefore on the right side, having the angle mirrored along the Y (Z) axis. I just want to make sure you use the correct values for calculation, so that the AI reaching its goal wasn't just a fluke.

See, you can't just imagine that LFS Y = my Z. In reality it is LFS Y = my -Z; remember, my +Z points forward and LFS +Y points backward, which is, I believe, what causes the phenomenon where it comes out corrected. However, I will be testing more; my AI display showed everything exactly as I expected. I will take a screenshot of it shortly and post. EDIT - posted


Quote from _--NZ--_[HUN] :I would try to get a pixel's color from the location of the virtual start light's green part, that way the AI could "see" when it changes to green.

As stated before, I won't be doing any image processing. The pixel location would be different for different tracks, resolutions, window positions etc., so I won't be handling that sort of thing; the button press is the best way to detect it for now.
Attached images
aicoords.PNG
Well, what do you see as "forward"? The Y in LFS is also positive for "forward", but more accurately you'd have to call it "North". Remember, the coordinates are aligned to the track, not to the car. Unless you align the velocity vector with the car's heading, the velocity vector will actually point wherever the car is travelling in relation to the track, instead of Y being the forward axis of the car.

Note that on the autocross area you start facing exactly 180° backwards, so driving forwards will decrease the Y coordinate LFS reports.

E: Attached the image, hope you can read the values.
(E2: In my debug output the velocity is already aligned to the car's heading, so Y is always forward.)
Attached images
LFScoordinates.jpg
There is a difference between local and world space, yes; but in world space +Z (in my system) is still forward, and local to the car +Z is forward. It is important to know which system your coordinates are in, but it should be possible to change from one to the other; I haven't had to do that yet, though.

About the "North" comment this is not really a good way of thinking about it, at least for me. Because it can change; at least I believe so- depending on how it is set out. I still have some things to work out - obviously, and who knows maybe you are right and something is still wrong and it works by a fluke.
But world space has no back/forward/left/right. These terms are all relative to an object, relative to your car. The world space is fixed, so North/South/East/West are more logical in my opinion. Your local space might have +Z always pointing forward, but relative to the track that might be anywhere. The aforementioned 75° would be lots of East (-X) and a little bit of North (Y/Z).

Maybe we just have a different meaning of world and local space though. I can't wrap my head around how +Z (+Y) is always forward in your world space. Or why your +Z would be LFS' -Y
World space definitely has a right, up and forward axis, and in my world space they are as described above:

Right 1,0,0
Up 0,1,0
Forward 0,0,1

When you put a car in the world it initially starts with that same orientation, meaning the car's right is the same as the world's right, and so on. When you rotate the car 90 degrees clockwise, the car's forward (still (0,0,1) in local space) is now (1,0,0) in world space. That is where things can differ between world and local space, but each still has right, up and forward vectors.
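
To put numbers on that, here is a small sketch of rotating a local direction into world space in my convention (this is only an illustration, not code from AIRS; the yaw is measured clockwise from above purely so the 90-degrees-clockwise example works out):

#include <cmath>

struct Vec3 { float x, y, z; };

// Rotate a local-space direction into world space for a car yawed by
// 'yawRadians', measured clockwise when seen from above.
Vec3 LocalToWorld(const Vec3& local, float yawRadians)
{
    float c = std::cos(yawRadians);
    float s = std::sin(yawRadians);
    Vec3 world;
    world.x =  c * local.x + s * local.z;
    world.y =  local.y;
    world.z = -s * local.x + c * local.z;
    return world;
}

// Example: local forward (0,0,1) with yaw = +pi/2 (90 degrees clockwise)
// comes out as (1,0,0), the world right axis, exactly as described above.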

I still have yet to wrap my head around all of this stuff sometimes, though once I am set up I am good to go from there. I do think something is a bit off though; first of all, the AI should have been trying to turn after it passed the cone, so I am confused why it went straight after reaching the destination when the only thing the car can do is head towards that point in space and keep driving at 10mph.

I am quite tired, so there are probably a few things wrong and likely I am not making sense somewhere down the line.

EDIT: OK, after some extensive testing, when I finally got the idea to use the 'S' key to place starting points at different angles and make sure my debug display matches, I finally have the correct orientation from the MCI packets. It seems X = -sin() and Z = -cos(). It also seems that another mistake was leading me to believe the prior way was correct, as I used wrong values when testing other situations and never changed them back. I am 90% sure now that it is corrected, and the car drives a LOT more smoothly to that cone, which is what my algorithm should have been doing. Actually it now shows that it drives "too smoothly" and misses the cone, only to turn sharper later. So, as expected, the AI continues to chase the cone forever, circling it at 10mph. This was the expected result. Thank you, AndroidXP, for pointing out that you thought it was a fluke; if you still think it's a fluke let me know and I will continue investigating, although I am quite sure that it is corrected now, since I started the car in several different orientations to be sure the forward direction pointed the right way in comparison to my cones.
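
For reference, here is what the conversion body reads like with that change (keeping the normalize call purely out of the paranoia mentioned earlier; whether the minus signs ultimately come from my axis convention or something else, this is the version that matches my debug display):

void ConvertToOrientation(ice::iceVector3 *vDir, const unsigned short usAngle)
{
    // MCI heading: 0 = LFS world +Y axis, 32768 = 180 degrees, anticlockwise from above.
    float val = ice::MathConverters::DegreesToRadians((usAngle / 32768.0f) * 180.0f);

    vDir->m_fX = -sin(val);
    vDir->m_fY = 0.0f;
    vDir->m_fZ = -cos(val);

    ice::Math::Vector3Normalize(vDir, vDir);
}
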
Okay, your world and local space work exactly like LFS' world (the track) and local (the car) space, the only difference being that Z and Y are switched. Also, if you're spawned on the track, you're not guaranteed that world and local space are aligned / equal. In fact, on the autocross track the car spawns facing "backwards" in world space.

Of course "forward" etc. are just labels for the axes so it doesn't really matter what you call them, but the "forward" in world space makes no physical/logical sense to me, since a world has no front or back. "Forward" just happens to be the Y (in your case Z) axis that is the one that points to the top of your screen if you use the top-down view of the map editor.

That said, Z being the "forward" axis is not that foreign to me. When rendering, for example, the screen coordinates are also X,Y (0,0 is top left there, though) whereas Z is the depth (hence the term Z-buffer), pointing "forward", away from you sitting in front of the monitor. Just sayin'.
Android, that is pretty much what I mean. I guess in a sense the world doesn't have forward/right etc., but it certainly has up, and in the industry it is common to call the axes right/forward and whatnot; the more important thing is knowing whether you are talking about world or local space - or, as you put it, track/car space.

Anyway, here is a quick update on tonight's progress. I do have the car working as I said in the post above. I added a multiplier so that even when 'close to the right direction' the car will still turn a little bit, for better overall accuracy. The car hits the cone, and then continues circling the location where the cone was, because that is all it knows how to do.
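
Roughly, the multiplier amounts to something like this (the gain and clamp values here are invented for illustration, not the exact numbers in AIRS):

#include <algorithm>

// angleToTarget: signed angle (radians) between the car's forward axis and the
// direction to the next cone. Returns a steering input clamped to [-1, 1].
float SteerTowards(float angleToTarget)
{
    const float gain = 2.5f;   // keep correcting even when nearly lined up
    float steer = gain * angleToTarget;
    return std::max(-1.0f, std::min(1.0f, steer));
}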

My next step is to make it follow a track. I will edit a small track in the layout editor and make the appropriate changes in code, though I am looking for a volunteer who can listen well to instructions and has about two hours, give or take, to make a layout around an actual track. After that I will need to figure something out to get the car's speed higher than 10mph! PM me if you are interested in making a layout; the first person I feel comfortable giving the task to will get it, along with instructions on how to do it. I don't want more than one person doing it since that would be unneeded work; however, when I get the proof of concept done I may tell others how to make more layouts!

EDIT: Added a quick clip of me giving the AI more speed - it goes 40mph instead of 10, still hits the mark, does a little understeering then oversteering, and remarkably hits the mark again!
Attached files
AIRS_LFS Smoothed Turning.spr - 12.9 KB - 566 views
AIRS_LFS Higher Speed.spr - 11.7 KB - 561 views
New updates for the very few people following the progress here. I have successfully completed a lap with my AI driving!! OK, so this was just following cone after cone while keeping the car's speed to 40mph, and the car doesn't even shift up yet, so that is a real problem as well... But in any case, I still have the AI driving a lap!

The only problem is that this is actually the easy part. I am trying to decide whether I should work on getting the AI to go faster through this lap, or whether I should start doing my experimental stuff with the reference points.

Currently the AI doesn't shift up (except while waiting on the grid), shift down or do any sort of car control besides 'turn towards the next cone'. So it is a pretty non-intelligent driver, but that is beside my point...

The driver went around Fern Bay Club!!!!!!!!!!!!!!!
Attached files
AIRS_LFS_FirstFinishedLap-FE1.spr - 39.1 KB - 605 views
wow!
nice one!

I think you should take small steps:
- get the AI to shift (couldn't be so hard; see the sketch below)
- create and use the stability data
With these you can build a basic AI, and you can fine-tune these building blocks to eliminate all the errors in them, so you can take the next step.
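
Here is roughly what I mean by the shifting step (the RPM thresholds are invented and would differ per car):

// Returns +1 to shift up, -1 to shift down, 0 to hold the current gear.
int DecideShift(float rpm, int currentGear, int numGears)
{
    const float shiftUpRpm   = 7000.0f;   // invented thresholds; tune per car
    const float shiftDownRpm = 3000.0f;
    if (rpm > shiftUpRpm && currentGear < numGears)
        return +1;
    if (rpm < shiftDownRpm && currentGear > 1)
        return -1;
    return 0;
}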

Quote from blackbird04217 :New updates for the very few people following the progress here. I have successfully completed a lap with my AI driving!! OK, so this was just following cone after cone while keeping the car's speed to 40mph, and the car doesn't even shift up yet, so that is a real problem as well... But in any case, I still have the AI driving a lap!

The only problem is that this is actually the easy part. I am trying to decide whether I should work on getting the AI to go faster through this lap, or whether I should start doing my experimental stuff with the reference points.

Currently the AI doesn't shift up (except while waiting on the grid), shift down or do any sort of car control besides 'turn towards the next cone'. So it is a pretty non-intelligent driver, but that is beside my point...

The driver went around Fern Bay Club!!!!!!!!!!!!!!!

