The online racing simulator
LFS Benchmark
(139 posts, started )
Quite some work, but got it done
Thanks Victor, was a bit worried that any update could fake the data or something.
#78 - troy
I'm getting:

Error

Invalid form data: memory size.


Pretty sure I've got my graphics card and system memory correct, any clues on what's going on?

edit: fixed, thanks vic
This benchmark got me wondering...
Isn't there a way to let your GPU help the CPU with calculations? Like how it handles physics nowadays.

I came across such an option in a game or program once, but forgot which. There has to be someone who wrote a program to do this, no?

EDIT:
RCUDA seems to be doing this in some way.
http://www.hightechnewstoday.c ... 2011-high-tech-news.shtml
http://en.wikipedia.org/wiki/RCUDA

Can any PC-Wiz tell me how I go about this?
Seems rCUDA is made specifically for use over a network.
There is OpenCL, which can be used across a wide variety of hardware (GPU and CPU). The problem here is that LFS still needs to retain support for non-OpenCL-capable devices (older hardware), and those would have a hard time keeping up if the standard were GPGPU-accelerated car physics.
However, GPGPU could be used for additional (physics-based) eye candy, for example: something that only matters on a person's own computer and doesn't have to be reproduced on another person's computer.
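To make the trade-off above concrete, here is a minimal sketch of the "optional GPGPU with a CPU fallback" pattern, in Python. pyopencl is the real Python binding for OpenCL; the `pick_backend` function itself is hypothetical, just illustrating how software keeps a non-OpenCL path for older hardware:

```python
def pick_backend():
    """Return 'opencl' if a usable OpenCL device is found, else 'cpu'."""
    try:
        import pyopencl as cl  # the standard Python OpenCL binding
    except ImportError:
        return "cpu"  # no OpenCL runtime installed at all
    for platform in cl.get_platforms():
        try:
            if platform.get_devices():
                return "opencl"
        except cl.LogicError:
            # Platform is installed but reports no devices; keep looking.
            continue
    return "cpu"

print(pick_backend())
```

A game built this way could route optional effects through the OpenCL path while the core simulation stays on the guaranteed CPU path, which is exactly why GPGPU works for eye candy but not for the shared car physics.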
I see, thanks for the answer. Especially fun to see that it seems like you already considered this.

But eye candy is not what I care about. I was thinking more along the lines that most old mainboards now have PCI or even PCI-e.
Since PC upgrades cost a lot because of different CPU sockets, you end up having to upgrade most of your hardware.

Let's say non-CUDA graphics cards could also share their free capacity.
Then people would be able to upgrade only their graphics card to one that fits their mainboard, and a lot of people's systems would then be able to run higher FPS in 12+ player grids.
It seems a lot of people here have an old CPU, so they drive around at 10-24 FPS not knowing what they are causing. This would've been an easy fix for that.

EDIT: Sorry for the big off-topic
Well, my GeForce 9600 was in the first generation that supported OpenCL, same with my Intel Core 2 Duo E8400. It only takes one generation earlier than that and the hardware is 'incompatible'. I don't think that can be seen as too-old hardware.
But give it time. At some point you'll start seeing gpgpu applied in more and more software (we're not the only ones that concluded it's too early to depend on it).
Indeed, not a lot of software uses it, and when it does, it's in a very recent version. For example, 3D design software.

On the benchmark results I was baffled, as my GPU easily beats yours and other people's. But because I have a quad core, my max result is "way" below some people's.
Another benchmark that shows quad cores are no good for gaming.
Maybe a 3DMark benchmark result should be added? So we know how the system scores as a whole too. (Although this might just be personal interest.)
Quote from lolzol :Indeed, not a lot of software uses it, and when it does, it's in a very recent version. For example, 3D design software.

On the benchmark results I was baffled, as my GPU easily beats yours and other people's. But because I have a quad core, my max result is "way" below some people's.
Another benchmark that shows quad cores are no good for gaming.
Maybe a 3DMark benchmark result should be added? So we know how the system scores as a whole too. (Although this might just be personal interest.)

The problem isn't your quad core per se; the problem lies in the age of your quad core, plus Xeons are aimed at being endurance CPUs rather than gaming ones. Newer quads with lesser GPUs beat your result; even a Q8400 beats your score with a low-end GPU. Saying that quad cores aren't good for gaming was maybe true several years ago, when quads were mainly produced for servers.
It's not that multi-core CPUs aren't good for games. If a program is not written to take advantage of a particular number of CPUs/cores, then you'll see no benefit from having more. With older CPUs, you often paid for those extra cores with a reduced clock speed to deal with the increased thermal load. Thus, given an equivalently priced pair of CPUs, the one clocked higher but with fewer cores often performed better in games that did not take advantage of the extra cores (i.e. most games for quite some time). This is what led people to conclude that multi-core CPUs were not worth it for gaming. Nowadays, with Intel's latest CPUs, the clock speed actually varies slightly based on how many cores you're using, and more games are being written to take advantage of the extra cores.
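As an aside for anyone curious, the core-count point can be sketched in a few lines of Python (a hypothetical toy workload, nothing to do with LFS's actual code): a program only gains from extra cores when its work is explicitly split across them, which is why a higher-clocked dual core could beat a quad in a single-threaded game.

```python
from concurrent.futures import ProcessPoolExecutor

def partial_sum(bounds):
    """CPU-bound toy workload: sum of squares over [lo, hi)."""
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

def single_core(n):
    # One thread does everything; any extra cores sit idle.
    return partial_sum((0, n))

def multi_core(n, workers=4):
    # The same work split into chunks and farmed out to worker processes.
    step = n // workers
    chunks = [(k * step, n if k == workers - 1 else (k + 1) * step)
              for k in range(workers)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    # Both variants compute the same answer; only the core usage differs.
    assert single_core(200_000) == multi_core(200_000)
</antml_docta_placeholder>```

On a quad core the `multi_core` variant finishes up to roughly four times faster, while a program written like `single_core` sees no difference at all from the extra cores.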
Yes, for example my i7-720QM, which is quite 'old' already (2009), works in the so-called Turbo mode, as Intel named this technology, directing more power to one core. That's why I'm measuring a 2.3-2.5 GHz clock speed (it varies constantly) with CPU-Z, while originally it is only 1.6 GHz per core.

* Actually I didn't realize I had this until today

I've submitted my results. I found it interesting that I beat Flame on min settings by 19 fps, and then lose on max by 4 fps.

Damn you hahaha

PS: Is there any way to remove some systems? I failed my first 2 attempts... genius xD
For those who are actually willing to give it a try, it should be noted that it's not possible to use Fraps as a benchmarking tool with WINE. There is a less accurate LFS-specific workaround available, but it's not "production-ready" at the moment...
Ran both max and min benches for 0.6B, for science!
This benchmark method requires a bit of work. If LFS had this type of benchmarking built in (along with voluntary reporting of results and hardware to LFSW), the developers could have a bigger data pool of the actual hardware being used to run LFS. It might make it easier to decide on directions to take with the more technical things, like GPGPU support, etc.
Quote from Skagen :This benchmark method requires a bit of work. If LFS had this type of benchmarking built in (along with voluntary reporting of results and hardware to LFSW), the developers could have a bigger data pool of the actual hardware being used to run LFS. It might make it easier to decide on directions to take with the more technical things, like GPGPU support, etc.

+1

I remember reading somewhere that LFS does not support CrossFire/SLI; is that true?
It supports SLI at least, but using it might cause micro-stuttering. There isn't really any benefit from it either, unless you use two of some REALLY old cards. LFS is all about the CPU.

BTW Taavi, I'm curious how you pulled those framerates off with a 2500K?
Quote from Matrixi :It supports SLI at least, but using it might cause micro-stuttering. There isn't really any benefit from it either, unless you use two of some REALLY old cards. LFS is all about the CPU.

BTW Taavi, I'm curious how you pulled those framerates off with a 2500K?

I actually have no clue; nothing's overclocked (it's too hot here to overclock anything). I just overwrote the cfg.txt files (min/max) and started the replay. I'm actually quite baffled myself that I'm beating the LGA2011 CPUs. Ivy Bridge should at least give some edge over the Sandy.

Or maybe it has to do with the 2500K being a quad, rather than the hexa-core that your 3930K is; single-core performance may be compromised. I've also heard that Ivy isn't very efficient.

Bottom line....I have no clue.
#95 - troy
Pretty sure in my case it has to do with the GTX 260 bottlenecking. Your Sandy Bridge is also turboing one core up to 3.7 GHz, I would imagine. Not sure why Matrixi doesn't have more fps though; it looks like quite a monster setup.
That's the problem of LFS using only a single thread: most of the available CPU power isn't being utilized at all.

Edit: screenshot added
Attached images
lfsthread.jpg
I uploaded mine too
Quote from Matrixi :That's the problem of LFS using only a single thread: most of the available CPU power isn't being utilized at all.

Edit: screenshot added

Oh my god you have 12 cores?!
Quote from Sobis :Oh my god you have 12 cores?!

He has 6 cores, Windows sees 12 logical CPUs because of HyperThreading.

Actually, I wonder if Taavi's high benchmark score can be (partly) attributed to the Windows scheduler not working well in this complex environment, where some CPUs share caches and some CPUs aren't really CPUs.
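For anyone wondering how to tell the two counts apart, here is a small Python sketch. `os.cpu_count()` reports logical CPUs (so a 6-core chip with HyperThreading shows 12, like Task Manager does); getting the physical count portably needs a third-party library, and psutil is assumed here with a graceful fallback:

```python
import os

# Logical CPUs: what Windows and Task Manager show, HyperThreading included.
logical = os.cpu_count()
print("logical CPUs:", logical)

# Physical cores: needs psutil (assumed installed); fall back if it isn't.
try:
    import psutil
    physical = psutil.cpu_count(logical=False)
except ImportError:
    physical = None
print("physical cores:", physical if physical else "unknown")
```

On a 3930K this would print 12 logical CPUs but 6 physical cores, matching Broken's explanation above.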
