The online racing simulator

Poll : Multithreading as option in LFS?

Yes
115
No
30
Quote from 91mason91 :wow, almost perfect life then (theoretically)

haha, um. Let's not get too into all my baggage and emo - but no.

I'm trying to live the life I want, but I'm not terribly successful at it.
Quote from george_tsiros :i'm having difficulty not thinking you are being unnecessarily pedantic.

the CPUs out now = a cpu you can now go buy. like a phenom II x2. which is quite cheap. in fact it is one of the cheapest current CPUs.

i get 50% cpu usage, of one core only, with 12 cars on the grid.

if that is not 'light', then... what would you consider 'light'? minesweeper?

Well, I am probably pedantic. It's hard not to be when you manage a team of people, give them instructions or orders, and do a performance/mistake analysis of their work. When I make a mistake and they follow it, it might cost a lot of money. It's just life, and my job, that taught me this.

Back to topic.

Try a grid of 20 AI or full grid at online racing and you could see how having support for multiple CPU cores could help.
Quote from Becky Rose :As for my own work I resist the urge to use messaging between threads and duplication of objects

I tend to use messaging. The overhead involved in copying the data is usually small compared to the overhead of thread context switching. I already mentioned the alternative of using two objects referenced by pointers, and simply swapping the pointers as an atomic operation (a mutex may be required) to switch between the current version and the in-process version of the objects. Here the size of the shared (between threads) object and the time it takes to update the object determine which method is best.
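The pointer-swap idea can be sketched in a few lines; the names here (`State`, `publish`, the two-buffer layout) are illustrative, not from any particular engine. The writer fills the idle copy off to the side, then publishes it with a single atomic exchange, so readers never see a half-updated object:

```cpp
#include <atomic>

// Two copies of the shared state; the writer alternates between them.
struct State {
    double position;
    double velocity;
};

State buffers[2];
std::atomic<State*> current;   // readers load this pointer

// Writer: build the next version in the spare buffer, then swap it in.
// Returns the previously published buffer (now free for the next update).
State* publish(State next) {
    State* old = current.load();
    State* spare = (old == &buffers[0]) ? &buffers[1] : &buffers[0];
    *spare = next;                   // update happens away from readers
    return current.exchange(spare);  // atomic swap makes it visible
}
```

A reader just does `State snapshot = *current.load();` and works on a consistent copy; whether this beats message passing depends, as noted above, on the object's size and update time.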

Quote :I dislike event driven approaches anyway as the code is messy and less optimiseable so I try to avoid it, especially in games, where consistent framerate is important and event queues can run awry

I prefer the event driven approach. In the case of the fixed rate stuff, this is normally dealt with by having a timer based event that triggers at some fixed rate. If a thread, such as a physics engine, runs at 500hz, then a timer triggers an event every 2 ms. During development, there's usually some type of overrun condition check (a step instance taking longer than 2 ms for a 500hz engine). After checking Visual Studio help, it appears that the resolution offered by "multimedia timers" is limited to 1ms increments (1ms, 2ms, 3ms, ... ). timeSetEvent() can either call a user supplied function or set a user supplied event (handle) at the specified frequency.

Normally this real-time thread needs the ability to send messages to pending threads, and may reduce the overhead by reducing the message rate: perhaps that 500hz thread is a racing game physics engine that only sends a graphics update message once every 5 events, reducing the graphics engine rate to 100hz.
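The rate-division part can be shown without any real timers by just counting ticks. The `Engine` type and its members below are illustrative (a real engine would post to a thread-safe queue, not a plain vector):

```cpp
#include <vector>

// Sketch: a 500hz physics tick that posts a message to the graphics
// side on every 5th step, giving an effective 100hz graphics rate.
struct Engine {
    int tick_count = 0;
    std::vector<int> graphics_messages;  // stand-in for an inter-thread queue

    void physics_step() {                // called by the 2 ms timer event
        ++tick_count;
        // ... integrate physics here ...
        if (tick_count % 5 == 0)         // 500hz / 5 = 100hz
            graphics_messages.push_back(tick_count);
    }
};
```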

Quote :I'm still learning and descovering new techniques in the field of multi-threading.

In a corporate environment, generally you hire someone or a team with the appropriate knowledge as part of the team, or at least as an assistant to the team. At this level, there's no point in wasting time rediscovering solutions to problems solved long ago by others outside the current team.

For an individual or small group, forums are probably the best place to get solutions and suggestions for dealing with the type of issues that programmers run into. http://www.physicsforums.com is a good source for math, physics, and programming related issues.

I've witnessed the harm that can come from poor design. In one case, the team leader was opposed to a proper OS, and forced the evolving team (lots of turnover in this group) to use a round-robin "scheduler", where thread switching only occurred when each thread called an OS function to go to sleep and let the next thread run, until the OS cycled back to the first thread. Instead of messaging, the threads polled each other's global state variables to determine what to do next. It ends up as a long series of conditional statements: do I have an I/O pending? If yes, is that pending I/O now complete? If yes, handle the I/O completion... If not, do I have an outstanding I/O request? If yes, do I have a buffer needed for that I/O request? If not, do nothing; if yes, start an I/O request and set all the needed state flags... finally, call the OS to switch to the next thread.

The thing was a nightmare to support. Even a minor change in one thread could impact the entire drive. One issue is that events occur out of sync with round robin scheduling, so performance is always compromised.

The other projects at this company used a proper OS, with prioritization of threads, combined with event and message interfacing. The team would define the threads and messages in a design document, then each thread would be implemented by one or two team members. The messages could be logged, which simplified debugging system-level issues. Since the threads were isolated, it was very easy to track down and solve problems. For new projects involving a new CPU, the company would hire an expert to write the OS, or they would buy an off-the-shelf OS for that particular CPU. One of my jobs was to implement an inter-thread messaging interface layer so that the calls the threads made to send and receive messages would be independent of the actual OS being used.

I know this is a bit off topic, but it's interesting to me, and we might as well chat about something while waiting for the next release of LFS.

Regarding the poll, I'd obviously be in favor of multi-threading. I find it hard to believe that there isn't some form of multi-threading in LFS, so is the original poster sure that LFS doesn't implement multi-threading?
Quote :I already mentioned the alternative of using two objects referenced by pointers

Actually this double-buffering approach did interest me. I'd not considered using an approach like that in this context before (hence, knowing the commands != knowing the technique). It is currently being digested cerebrally.

Quote :In the case of the fixed rate stuff

Here we have a massive difference of approach. I appreciate that in a sim like LFS having a fixed rate is something of a necessity - and if I look hard enough I know I've timer-based elements in places - but on the whole I prefer to use a delta wherever possible.

For anyone actually reading who isn't familiar with the jargon being used, I'll quickly explain what a delta is: it's a unit of time.

Imagine a piece of code that operates 100 times a second and simply adds 1 to a counter; at the end of a second it will reach 100. Well, computers all run at different speeds, so we can achieve the same result by executing the code once and adding 100. This is the basic premise of delta timing.

X += (1 * delta);

There are some drawbacks to the delta method, particularly relating to divisions and multiplications; in these cases I use a timer for a fixed rate.

myTimer += delta;
if (myTimer > 100){ myTimer -= 100; X = X*2; }

This approach means that the majority of my code needs to run less frequently, providing massive optimisations in the area of physics, and it also totally negates slowdowns.

On top of this I code buoyancy into the delta value so that small system spikes from background tasks are smoothed out. The final result is a program where code is executed only when it absolutely has to be executed, and background operations are accounted for.

Now it can be argued that a delta adds overhead, because instead of adding 1 to X I am adding 1*delta. However, computers' internal clocks all run at slightly different speeds - and as my work is almost exclusively online these days - I need to do a multiplication anyway to keep things in sync. So I factor this into the delta as well, so there is zero overhead for taking this approach.
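Pulling the fragments above together, a delta-timed frame function might look like this sketch. The member names and the 100ms fixed step are arbitrary illustration choices; `delta_ms` would come from whatever clock the program trusts:

```cpp
// One frame of a delta-timed loop: scaled work uses the elapsed time
// directly, while fixed-rate work runs from an accumulator.
struct DeltaLoop {
    double X = 0.0;        // scaled by delta: same result at any frame rate
    double myTimer = 0.0;  // accumulator for the fixed-rate part
    int fixedSteps = 0;    // incremented exactly once per 100 ms

    void frame(double delta_ms) {
        X += 1.0 * delta_ms;        // the "X += 1 * delta" idea
        myTimer += delta_ms;
        while (myTimer >= 100.0) {  // catch up if a frame ran long
            myTimer -= 100.0;
            ++fixedSteps;           // multiplications/divisions go here
        }
    }
};
```

The `while` (rather than `if`) is one way to handle a long frame: the fixed-rate work runs twice to catch up instead of silently losing a step.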

Quote :In a corporate environment, generally you hire someone or a team with the appropriate knowledge as part of the team or at least as an assistant to the team. At this level, there's no point in wasting time to rediscover solutions to problems solved long ago by others outside the currrent team.

Whilst programming is my job, it is also one of my passions. I realise I am very lucky to be able to do one of my hobbies for a living, but it also means I frequently code in the evenings, improve my skillset and discover new things. I like to figure things out for myself. I'm so unorthodox anyway that on programming boards I tend to gravitate toward the general discussion and the showcases, and ignore the rest except when searching for a specific solution to general things like "what minimum version of X does this dependency require". People don't listen to my programming solutions on dev boards because my fixes are either too brilliant or too stupid, and I've long since given up stating that X, Y and Z are possible because nobody ever believes me, even when I post an example...

Quote :I've witnessed the harm that can come from poor design.

Mmm. I can't claim to always get it right myself, and I don't think anyone can. I think one of the hardest parts of design is designing something that your team can achieve, rather than designing something that I would write and then expecting my team to be able to do it. I can't talk about my current job because of contractual limitations, but I can say I relate to what you are saying.

For me, working in a team is very difficult because of my unorthodox methods. I'm sure I would not be able to hold a job down if I didn't deliver the impossible things, because the simple stuff I screw up every time.

Quote :Regarding the poll, I'd obviously be in favor of multi-threading. I find it hard to believe that there isn't some form of multi-threading in LFS, so is the original poster sure that LFS doesn't implement multi-threading?

Gaming really is a long way behind applications in this department, indie gaming particularly. Top-end games make use of the latest hardware, so they've had multithreading for a while. For indie developers, the market penetration of suitable hardware has only been at sufficient numbers for about a year to warrant supporting it. For many years dual-core (or rather dual-processor) systems were the domain of NT servers.

LFS is indeed single-core at the current time; it started its life long before multi-processor systems were mainstream enough for an indie title to factor it in.

A lot of indie software is still single-core, on the grounds of not needing to use multiple cores, or the developers lacking the skills for it (as I did a year or two ago). Of course, I'm sure with enough digging you could find an old example that had it.

It's simply a matter of assessing the hardware installs of your target audience and supporting those hardware features in common abundance amongst that audience.
Quote from Becky Rose :LFS having a fixed rate ... use a delta
X += (1 * delta);

myTimer += delta;
if (myTimer > 100){ myTimer -= 100; X = X*2; }

I don't understand what you're getting at here. Is "delta" a constant, or a value set by some hardware?

How often are you executing this code fragment: if (myTimer > 100)...? Is it part of some large processing loop where the code periodically checks some hardware timer for a timer event?

Instead of doing periodic checks for elapsed time in the main code, Windows allows the equivalent of an interrupt to call a function or set an event at some fixed frequency via timeSetEvent(). The fixed-frequency code resides in the function that is being called, or in a higher-priority thread that waits for the event to be set (signaled), running independently of the mainline code. In either case, the current (lower-priority) thread is interrupted and control is given to the fixed-rate code - via a call in the function case, or via an event in the thread case. That fixed-rate function or thread can then communicate with the other threads via messages or events as needed. Once the fixed-rate code has completed (returned, in the called-function case, or gone back to waiting on the event, in the event-driven thread case), control returns to the previously running thread, eliminating any need for that thread to "monitor" a timer.

As mentioned before, the fixed rate process could either call or set event flags for other fixed rate processes, such as the graphics thread, which was triggered once every 5 cycles in my previous example to reduce the frequency from 500hz down to 100hz.

Here is where the multi-threading approach helps. The physics engine runs at 500hz and needs to complete in less than 2ms, leaving enough cpu bandwidth left over for the rest of the game to function. The graphics engine runs at 100hz, and may take more than 2ms per step, so it couldn't be called directly from the physics engine thread because it would cause the physics engine to overrun its timer. As a separate thread running at 100hz, the graphics thread has up to 10ms to complete without causing any problems to the game, and no special coding is required to split the graphics work into sub-2ms steps, since while the graphics thread is running, the physics thread will interrupt it as needed so that the physics thread operates at its 500hz. Each thread operates independently, has separate criteria for meeting performance goals, and requires no knowledge of the state of the other thread.

Quote :Top end games make the use of the latest hardware, so they've had multithreading for a while.

A problem with our terminology here. I'm using the term "multi-threading" to mean any form of multi-tasking, which until relatively recently was implemented on single-core CPUs. It just so happens that Windows will automatically use multiple cores for multi-threaded programs, but there has always been a significant advantage to writing multi-threaded code, even on a single-core system, depending on the application.

The example code I created in "mtcopy.c" uses two threads so that reading and writing of data can occur concurrently, without having to use special forms of I/O calls that return immediately after starting the I/O and require the program to poll for completion, or to specify a callback routine to handle the completion step (similar to an interrupt routine). Instead, each thread just starts a normal I/O and waits for completion, and the OS automatically switches between the threads while they wait for I/O completion, or wait for messages or events (event, mutex, semaphore) from each other.
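A rough sketch of that two-thread structure, with std::string standing in for the file I/O so it stays self-contained (the real mtcopy.c works on actual reads and writes; the function name and block size here are illustrative). The reader queues filled buffers, the writer drains them, so the two "I/O" sides overlap:

```cpp
#include <condition_variable>
#include <mutex>
#include <queue>
#include <string>
#include <thread>

// Copy src to the returned string using a reader thread and a writer
// thread that hand buffers over via a mutex-protected queue.
std::string mtcopy(const std::string& src, size_t block = 4) {
    std::queue<std::string> q;
    std::mutex m;
    std::condition_variable cv;
    bool done = false;
    std::string dst;

    std::thread reader([&] {
        for (size_t off = 0; off < src.size(); off += block) {
            std::string buf = src.substr(off, block);  // "read" one block
            { std::lock_guard<std::mutex> lk(m); q.push(std::move(buf)); }
            cv.notify_one();
        }
        { std::lock_guard<std::mutex> lk(m); done = true; }
        cv.notify_one();
    });

    std::thread writer([&] {
        for (;;) {
            std::unique_lock<std::mutex> lk(m);
            cv.wait(lk, [&] { return !q.empty() || done; });
            if (q.empty() && done) break;              // reader finished
            std::string buf = std::move(q.front()); q.pop();
            lk.unlock();
            dst += buf;                                // "write" the block
        }
    });

    reader.join();
    writer.join();
    return dst;
}
```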

Quote :However, computers internal clocks all run at a slightly different speed

But generally they have timers that run independently of the CPU. PCs have always had an actual or emulated 8253 timer running at 1.19318 MHz, classically used to drive the DRAM refresh rate, and divided down by 65536 in the BIOS and in MSDOS to get a ~55 ms => 18.2hz "ticker" interrupt.

http://www.dcc.unicamp.br/~celio/mc404s2-03/8253timer.html

Windows default timer functions run at 15.625ms => 64 hz, and the "multi-media" timer runs at 1ms => 1000hz, regardless of the PC speed.

Quote :A lot of indi software is still single core

As mentioned before, Windows 95 supports multi-threaded applications, and many games have been multi-threaded since then (1995). Some of those older multi-threaded games are having issues with multiple cores, typically due to an issue with DirectX. Using an OpenGL driver, if possible, usually eliminates the problem, or setting the affinity via task manager to a single cpu may be required.

Synetic is a small developer with about 8 people, and their games (World Racing, Cobra Alarm) are multi-threaded.

http://en.wikipedia.org/wiki/Synetic

Again I ask, is LFS really a non-multi-threaded game?
Hi Jeff,

Even though I am not a programmer, it's indeed interesting reading. Maybe LFS is multithreaded, but my point with this thread was rather that LFS can't get a performance profit when the CPU has the same architecture but more than 1 core. So the question is: if it could, would it be useful when you look at it from all the possible perspectives? I see only benefit, and reading your discussion with Becky there seems to be no drawback, even when someone would still be using a single-core CPU. Or am I wrong?
Quote from DEVIL 007 : Try a grid of 20 AI or full grid at online racing and you could see how having support for multiple CPU cores could help.

cpu usage would go to 80-90%, possibly less if the cars aren't AI.

still not convinced LFS needs optimization now.

if, in the future, the engine gets upgraded, then that's the future and there will be more powerful CPUs.
I still see it as better done now than later, when the LFS code will be more and more complex.
Can't you just check how many threads are associated with a process in Windows?
Quote from JeffR :I'm don't understand what you're getting at here. Is "delta" a constant, or a value set by some hardware?

Quote :Windows default timer functions run at 15.625ms => 64 hz, and the "multi-media" timer runs at 1ms => 1000hz, regardless of the PC speed.

You're almost right, but the problem is that PC hardware is knocked out cheap and these timers have minor variances of +/- 5%. In normal use this makes no difference whatsoever, but in a multiplayer game environment like LFS you need to modify your timer routines to resynchronise the internal timers so that players don't get an unfair advantage. In many games this discrepancy is irrelevant; for anything that has a time factor, like LFS, this inaccuracy would destroy online play.

To compensate for this I use delta timing; it's an established technique. One of the issues with PCs is that some players have hardware incapable of running your game at full speed. It's one thing to say "the physics needs to complete in 2ms", but what if the player's computer is completely incapable of doing that? Do they just not play the game? How should the game react?

My solution is simply to run the code less often and increase the delta. To explain this in the simplest way I can think of: go back to the '80s and say you are writing a game on the ZX Spectrum. You press the key to move your guy right along the screen (hehe, usually the P key :P) and the variable is incremented by 1.

Later in development more stuff is happening, and now when you increment by X it is too slow. So you increase X by the time elapsed since the last time the program executed its program flow, and voila, it no longer matters how slow your game runs.

As I mentioned, multiplication and division by delta is not possible, so these things are usually done on timers, sometimes nested within code, sometimes using an IRQ in separate loops.

Delta timing allows a program to run at its maximum possible rate and provide a comparable experience on all systems. Now take this technique and mix it with modern multi-threaded programming: what you get is a physics thread that likes to run at 400hz, but if it can only cope with 230hz because it's running on outdated hardware, then that's what it'll do and the player will not be disadvantaged in a multi-player environment; all that'll happen is some loss of fidelity in the car's responsiveness.

Quote :Instead of doing perodic checks for elapsed time in the main code, Windows allows the equivalent of an interrupt to call a function or set an event at some fixed frequency via timeSetEvent().

Aye, I've used interrupts since the Amiga era, though invariably I'll have my own software timers; the reason I use my own timers is so I can modify them to compensate for the inaccuracies of the internal clock in a multiplayer environment.

Quote :A problem with our terminology here. I'm using the term "multi-threading" to mean any form of multi-tasking, which until relatively recently was implemented on single core cpu's.

As a developer it's the same: either your software has more than 1 internal thread or it doesn't. The OS handles what core it goes onto, and as a programmer I don't give a damn how many cores are there.

Quote :Again I ask, is LFS really a non-multi-threaded game?

From what I understand, LFS has separate physics and graphics processes. I used to know the physics rate but at some point over the years I've forgotten it; I believe 200hz for complex and 50hz for simple, or was it 400 and 100? I don't recall. However these "threads" are a simple internal timer system of some kind and not actual thread processes.

Run LFS in a window with task manager open and you'll see it has a single-processor affinity. As a programmer, if you use threads, the OS moves those threads about between processors automatically, so anything that runs on 1 CPU (and more games do than you might think) is not multi-threaded.

As I mentioned before, it's simply a case of hardware penetration. Not everyone works in the same way, or uses the "advantages" of threads because of the "disadvantages" of threads - as discussed earlier.

Now that the hardware has permeated the market however, the market has taken a very direct swing toward using it.
Quote from DEVIL 007 :I see only benefit and reading your discussion with Becky there seem no any drawback even when someone would be still using single core CPU. Or I am wrong?

There are some drawbacks, depending upon the programmer's IDE (the software tools they use to write the game), which in Scawen's case is based around Visual Studio; debugging tools in a threaded environment can be tricky. I haven't done anything so complex in Visual Studio and I've no idea what its debugging is like, so I can't answer that. I find myself installing it, getting frustrated, and throwing the disk as far away from me as I can.

Other debugging-related problems can occur too: with truly independent threads running it can be difficult to understand the program flow of your game, so when something doesn't happen, or crashes out, it can be hard to figure out why. Especially with a non-OO approach, or a badly considered OO construct where the threads inter-relate to each other in complex ways. I can't tell you the number of times I've thought to myself "I wish I had done it like X/Y/Z instead"...

In a matured project like LFS, which was not originally written with multi-threading, it could add a level of complexity to working on the code of the game that makes further development even more time-consuming.

This is all down to specifics which none of us know of course, in the case of LFS, the only person who knows just what effect multi-threading will have on further development is Scawen, but it is possible that there may be downsides.

As far as I can tell, LFS currently has a time-based, event-driven main loop with several inter-related processes hanging off it. I'm pretty sure Scawen is doing his own time control in some way at the moment to compensate for RTC (clock) inaccuracies, and the issues of moving that into a threaded model will depend upon his implementation. It may be that that stuff needs rewriting, which ultimately would need extensive public testing to get enough machines with RTC variance to know the new implementation is good (hence I'm pretty sure he's not working on it atm).
Quote from Bob Smith :Can't you just check how many threads are associated with a process in Windows?

What is your point Bob?
Quote from DEVIL 007 :What is your point Bob?

Well if task manager is telling me that LFS has 5 threads, then at least some small part of it must be multi-threaded already? Which was a question being asked.
Quote from Bob Smith :Well if task manager is telling me that LFS has 5 threads, then at least some small part of it must be multi-threaded already? Which was a question being asked.

That's interesting; I have never looked at LFS's threads, only at processor affinity, and I doubt my install is up to date. Are all the threads on the same CPU affinity?

I wonder if those threads are packaged resources as opposed to the main application. I can't say I've paid that much attention under Windows - but I know when I was coding a Mac application a couple of years ago I had several threads from bolt-ons that were nothing to do with my application.
Quote from DEVIL 007 :I have different view on this. The time he would spend on this now could bring only benefit. Later when the code would be more complex it would be simply more job to do. What could he save now woudl simply pay him back later.

And you're welcome to have one. I agree it would be a good thing in preparation for greater updates to come, but for the time being LFS is quite light.

I'd like to see it raise the bar a bit, some people are really racing with prehistoric machines for which expectations should be cast aside: it's time to stop complaining and move on. And before someone jumps to my throat, let's put this in perspective: my sim computer is basically 4 years old.
Quote from NightShift :And you're welcome to having one I agree it would be a good thing in preparation for greater updates to come, but for the time being LFS is quite light.

I'd like to see it raise the bar a bit, some people are really racing with prehistoric machines for which expectations should be cast aside: it's time to stop complaining and move on. And before someone jumps to my throat, let's put this in perspective: my sim computer is basically 4 years old.

If I were Scawen, I would long ago have been detecting the hardware configs of demo players. These are the potential buyers, and I don't see it as an invasion of privacy to know the CPU/GFX/RAM etc. of your demo players. If anything, such information can only lead to benefit.

Something tells me that Scawen isn't doing this (or maybe he is via the unlock system), because as I've said before he's a bit of a communist when it comes to commercial exploitation of his userbase. But the point is that it's the hardware in use by the demo users which should have the greatest influence over what hardware aspects are implemented - because, put simply, that's real data and not hypothesis.

I wonder how other people feel about the collecting of that kind of information. To me I don't see it as an issue, but I know how upset some people can get over 'information'.
Quote from Becky Rose :If I were Scawen, I would long ago have been detecting the hardware configs of demo players. These are the potential buyers, and I don't see it as an invasion of privacy to know the CPU/GFX/RAM etc. of your demo players. If anything, such information can only lead to benefit.

Something tells me that Scawen isn't doing this (or maybe he is via the unlock system), because as I've said before he's a bit of a communist when it comes to commercial exploitation of his userbase. But the point is that it's the hardware in use by the demo users which should have the greatest influence over what hardware aspects are implemented - because, put simply, that's real data and not hypothesis.

I wonder how other people feel about the collecting of that kind of information. To me I don't see it as an issue, but I know how upset some people can get over 'information'.

Unless I'm specifically asked if I want to give out that information, I'm strictly against such things. If at all, then as a free option.
Lol Becky. You made me laugh a lot with that "communist" statement.

I wouldn't have anything against collecting such a simple piece of information about my computer. It would be exactly the kind of benefit you have mentioned above.
Quote from three_jump :unless I'm specifically asked if I want to give out that information I'm strictly against such things. if it's an option to send this data, ok, but everything else leaves a bad taste in my mouth.

You would be surprised how much data is already collected for development purposes; for instance, just about every major website is already tracking your browser usage. They're not interested in harvesting information about you, they're interested in making sure their product works on the equipment people are using.

Is this not the same?
With programs they call it "phoning home", you know, and basically only evil corporations like MS and Apple do that without asking for permission. If I was Scawen I wouldn't want to be put on the same level as those two.

TBH I'd just collect information for each and every user regardless of the licensing status.
Quote from Becky Rose :You're almost right, but the problem is PC hardware is knocked out cheap and these timers have minor variances +/- 5%.

Virtually all PCs use some equivalent of the crystal that runs the virtual 8253, specified to run at 1.19318 MHz (1.193175 MHz -> 1.193185 MHz); a variation of less than .00042% is allowed. Some motherboards do violate the spec if run in an overclocked mode.

Quote :To compensate for this I use delta-timing ... unable to run physics engine at 500hz

If the physics engine is run at 1/2 its normal rate, then the time deltas used for numerical integration are doubled. This would decrease the accuracy and create a slight difference in performance. Still, it's better than not being able to run the game at all.
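The accuracy point can be demonstrated with simple explicit Euler integration under constant acceleration, where the exact answer x = a*t*t/2 is known: doubling the step roughly doubles the position error. The function and its parameters are an illustration, not any game's actual integrator:

```cpp
// Integrate x'' = a from rest with explicit Euler steps of size dt,
// returning the position after total_time seconds.
double euler_position(double a, double total_time, double dt) {
    double x = 0.0, v = 0.0;
    for (double t = 0.0; t + dt / 2 < total_time; t += dt) {
        x += v * dt;  // position from current velocity...
        v += a * dt;  // ...then velocity from acceleration
    }
    return x;
}
```

For explicit Euler the global error is proportional to dt, which is why a physics engine forced to run at half rate (double dt) behaves measurably, if only slightly, differently.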

I've seen what appears to be similar behavior in other games when using fraps to record post-race replays. On some games, if I wait until after a race is completed and then use fraps to record the replay, the replay runs in slow motion. The usual fix is to run fraps for at least a short while at the start of the race, apparently while the game is adjusting its internal engine rates; afterwards, I'm able to capture the replays via fraps. If the physics engine rate is being altered, then this is affecting the game play. Usually it's just the graphics engine that is being dynamically altered.

If anyone here is interested, the physics of a real or gaming car involves differential equations that are too complex to be solved analytically, so numerical integration is used to convert calculated accelerations into velocities, and then into positions. Runge-Kutta is a commonly used method:

http://en.wikipedia.org/wiki/Runge%E2%80%93Kutta_methods

Quote :my own software timers

I don't know how this could be done with Windows NT, 2000, XP, Vista, or later systems, as direct I/O and interrupt intercepting is locked out.

Quote from Becky Rose :Visual Studio - multi-threading

Visual Studio has good support for multi-thread debugging (only the current thread is affected by a breakpoint; the other threads continue to run). Visual Studio Pro (the one you buy) supports remote debugging between two computers: the debug screen is on the "host" computer, while the application and its screen run on the "target" computer. In addition to Visual Studio, Microsoft has a good set of additional debugging tools, probably beyond what is needed for debugging a game:

http://www.microsoft.com/whdc/ ... ls/Debugging/default.mspx
Quote from Bob Smith :Can't you just check how many threads are associated with a process in Windows?

I tried that on my Phenom II. Basically what it does is that if one core starts to get a high load, it moves LFS to another core with less load, which basically means LFS is bouncing around like a pinball.
Quote from JeffR :Virtually all PC's use some equivalent of the crystal that runs the virtual 8253, specified to run at 1.19318 mhz (1.193175mhz -> 1.193185mhz, a variation of less than .00042% is allowed. Some motherboards do violate the spec if run in an overclocked mode.

I think you will be sorely disappointed to find out just how inaccurate the RTC is in practice. Forgetting what is down on paper as a published specification, the facts are the variance is as much as 5% each way, with the majority being within 2%. All substantially higher than the .00042% of whatever spec you've got there that the motherboard manufacturers clearly haven't read.

Quote :If the physics engine is run at 1/2 its normal rate, then the time deltas used for numerical integration are doubled. This would decrease the accuracy and create a slight difference in performance. Still, it's better than not being able to run the game at all.

That is it in a nutshell, but the same delta can then be used to address the other problems I mentioned: Windows background task spikes (adding buoyancy to the delta is a useful trick when a process runs flat out without any delay or timer-based processes, such as when running on substandard hardware), and the inaccuracies of the RTC between PCs.

It's these tertiary benefits which have kept me as an advocate of the delta timing method.

Quote :If anyone here is interested, the physics of a real or gaming car involves differential equations that are too complex to be solved analytically, so numerical integration is used to convert calculated accelerations into velocities

*scratches head* I get this problem a lot at work: sometimes some of the senior programmers talk about things using maths jargon and such, and it can often take them half an hour of long words to explain that what they want is to calculate a trajectory. Me, I just calculate a trajectory, I know how to do that.

I've never much gotten on with conventional mathematics. I remember my maths teacher getting me to stand in front of the class to do a complex sum and explain how I got to the answer, because she was annoyed that I was almost always right but she never understood my workings-out. I went through the process; I forget what the sum was, it was something weird like multiply by 47. Now I find some numbers easier to work with than others, and 4 and 7 aren't amongst them. So I multiply by 3, multiply the original number by 2 and double it, and add them together (to get x7). Take the x4 part of the result, add a nought (to get x40) and add the two results together. My maths teacher demoted me down to the second set within a few weeks, every time I got put back up again...

This is true of physics too. I know how to make stuff recreate the real world. I have never bothered with off-the-shelf physics engines; I just can't read the formulas posted on academia sites.

I tend to simplify everything so that I can understand it, rather than educating myself to understand complex things. As I said before, I'm not terribly bright, I just have a different way of looking at things.
Quote from Luke.S :I tried that on my Phenom 2. Basically what it does it that if one core starts to get a high load it moves lfs to another core with less load which basically means LFS is bouncing round like a pinball.

This sounds to me like the other threads are .dll calls or something of that nature. What tends to happen when using these libraries is that you open a .dll (Dynamic Link Library - a collection of functions, e.g. system32.dll has lots of system-related functions) and you leave it open to save time when you need it next; it then runs on its own thread but does absolutely nothing until your code makes a call to one of the functions contained within it.

When you make a call to the dll, your application then waits for a response from the dll before continuing, so although there are multiple threads open you're never actually using them concurrently.

This seems consistent with the behaviour of LFS' CPU affinity and its number of threads.
Haven't read the entire thread so apologies if this has been asked/answered already.

Am I just being dumb here, but I thought that dual/quad cores were kind of seen as "logical" devices by the OS. As in, the OS acts as if it is dealing with a single CPU, but the internal structure of the CPU decides how to task out the instructions etc. between the cores. Isn't multi-threading more about working on machines that actually have two physically discrete CPUs, where the OS has to make the decisions about how to divide up an application's instructions between the two CPUs? Or has Windows been "adapted" to work as if it has two physical CPUs when actually it's a dual/quad-core single CPU?
