*edit* Skip to the 'The point' bit if you can't be arsed to read all of my babble. There is a point. I promise
Excellent stuff, Vic, although the 'format' parameter seems to deviate from your (up until now) 'I'll give you the data, you do what you like with it' mantra.
I only have one 'gripe', if you like, with the LFSWorld data now.
PBs and PSTs.
Anyone who has written a script to fetch these on a restricted server (especially if you want both of them, because they share a tarpit) will tell you that the current method is a pain in the arse.
Consider a page with a team listing.
Each driver has basic information, car number, name, plate, plus a couple of stats and their best PB.
Let's assume that this team has 15 members.
Now, how to get the data?
The easy way around would be to alter the max execution time and make the requests in a loop, sleeping for 5 seconds at the end of each iteration.
(15 * 5) * 2 = 150 seconds.
15 drivers, 5 second delays, once for PBs, again for PSTs.
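In PHP terms that's a `set_time_limit()` call plus a `sleep(5)` in the loop body. A language-neutral sketch of the same idea, with a stubbed-out fetch standing in for the real LFSWorld request (all names here are hypothetical, not the actual API):

```python
import time

def fetch_stats(username, stat_type):
    # Stand-in for the real HTTP request to LFSWorld (hypothetical)
    return f"{stat_type} data for {username}"

def fetch_all(drivers, delay=5):
    """Fetch PB and PST for every driver, waiting out the tarpit each time."""
    results = {}
    for stat in ("pb", "pst"):
        for name in drivers:
            results[(name, stat)] = fetch_stats(name, stat)
            time.sleep(delay)  # respect the shared tarpit between requests
    return results

# 15 drivers, two passes, 5 s each: (15 * 5) * 2 = 150 s spent sleeping
```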
But not everyone has permission to alter the max execution time (IIRC safe_mode restricts this). Some hosts even limit it to less than the default.
So how do you get the data if you can't alter the execution time? With a cyclic (round-robin) cache mechanism, or alternatively a caching request chain.
The first involves picking a name from the top of the driver list, updating it, and putting it at the bottom, except each driver needs to appear twice: once for PB, once for PST.
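A minimal sketch of that rotation, assuming a hypothetical `fetch()` stub in place of the real HTTP request; each page hit (or cron tick) refreshes the stalest entry:

```python
from collections import deque

def fetch(name, stat):
    # Stand-in for the real tarpitted request (hypothetical)
    return f"{stat}:{name}"

def build_queue(drivers):
    # Each driver appears twice in the rotation: once for PB, once for PST
    return deque((name, stat) for name in drivers for stat in ("pb", "pst"))

def refresh_next(queue, cache):
    # Take the entry at the top, refresh it, put it back at the bottom
    entry = queue.popleft()
    cache[entry] = fetch(*entry)
    queue.append(entry)
```

Stale data for everyone, but only one request per visit, so the 5-second tarpit never blocks a page load.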
The second is a bit of a hack, but it gets the job done as quickly as a for loop while only needing a 5-second execution time.
Basically, a root request is made that takes the same list a cyclic cache would use as a URL parameter. The script takes the first name off the list, requests the data, sleeps for 5 seconds and then recurses by making an HTTP request to itself with the shortened list. This carries on until the list is empty, at which point the job is done.
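The chain can be sketched like this; in the real thing each recursion would be a fresh HTTP request to the script's own URL (so no single invocation runs longer than one tarpit), but here a direct recursive call stands in for that, and the fetch is a hypothetical stub:

```python
import time
from urllib.parse import quote, unquote

def handle_chain_request(pending_param, cache, delay=5):
    """One invocation of the chain script. pending_param is the URL-encoded,
    comma-separated work list, e.g. "name:pb,name:pst,..." (hypothetical)."""
    pending = unquote(pending_param).split(",") if pending_param else []
    if not pending:
        return  # list exhausted: job done
    name, stat = pending.pop(0).split(":")
    cache[(name, stat)] = f"{stat} data for {name}"  # stand-in fetch
    time.sleep(delay)  # wait out the tarpit before the next link
    next_param = quote(",".join(pending))
    # The real script would HTTP GET its own URL with ?list=<next_param>;
    # here we just recurse directly to show the control flow.
    handle_chain_request(next_param, cache, delay)
```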
IMO that is a very long way around, and incredibly wasteful.
I think separate lists of usernames for PB and PST, with separate tarpits of at least a minute, would make a lot more sense, especially considering that the majority of users of the LFSWorld data seem to be teams.
I don't know the request numbers, and I don't know the database schema, but taking our example above, two queries have to be better than 30, especially if there are any joins involved.
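Client-side, the suggestion would collapse the whole job into two requests, something like this (the host and URL format here are purely hypothetical, just to illustrate the shape of a batched endpoint):

```python
def team_stats_url(drivers, stat):
    # Purely hypothetical URL format: one batched request per stat type
    return f"http://example.invalid/stats?action={stat}&racers={','.join(drivers)}"

team = [f"driver{i}" for i in range(1, 16)]  # the 15-member team from above
urls = [team_stats_url(team, stat) for stat in ("pb", "pst")]
# two requests for the whole team, instead of 30 tarpitted ones
```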
Of course that's just my opinion and could be total twaddle, but it makes sense to me.
Otherwise, keep up the good work