Aleksandr_124rus
S3 licensed
I think the AI is good enough for now, and it's probably worth moving on to a general patch with graphics and physics, because that's what everyone is waiting for. Moreover, the AI update depends on the physics in that patch.
Aleksandr_124rus
S3 licensed
Quote from Avraham Vandezwin :Big grin Dearest Alexandre,

Your answer, with its huge number of emoticons, comes across as a bit sarcastic and passive-aggressive to me, but OK.
Just for the record, I try not to bring emotion into this, towards anyone; I'm just discussing a specific topic.

I apologise if I've got something wrong here. I don't speak English very well either, so it's understandable that we might not fully understand each other. Maybe you misread what I meant and thought I was attacking you in some way, but I wasn't. I'm just answering the questions you asked and making some comments, without emotional content and without making anything personal.

So..
Just because I talk about something doesn't automatically mean you don't know it. You don't have to take it as an attack on you.

And I didn't say that you deny that AI is potentially dangerous.

And I didn't say that only you commit anthropomorphism; I said that we as humans commit it, myself included.

About the "mistake in the wording of the question": I was referring to your words about the "will" and "consciousness" an AI would need to pursue its own goals. I said those categories are not needed, and neither are goals of its own (though they could arise; we actually don't know). Then you commented that an AI would have no will, which actually agrees with my point. But that's exactly what I was talking about. Well, okay, so be it. The important thing is that we eventually came to an agreement.

In the quote you gave, I was trying to move away from anthropomorphism. But I can admit it is still anthropomorphism, simply because I am a human being trying to talk about something I have no real knowledge of, using concepts mankind already understands. We have no other way to talk about it. And in doing so, I am trying to assume the worst-case scenario for us.

Quote from Avraham Vandezwin :This makes your demonstration obsolete and irrelevant, even your quotation from Bostrom. Clearly, we agree on the general issue. And so ? How do we move this debate forward? How can we go beyond the platitude of the obvious ? If you find it, I'm ready to debate it Big grin.

Well, if something is old, it's not necessarily wrong. Being anthropomorphism doesn't automatically make it wrong either; it's like "just because you're paranoid doesn't mean they aren't after you".

And it's good that you think it's obvious, because in my experience most people don't. But if we agree on all points, there is no room for debate. So let's go here - https://www.lfs.net/forum/thread/105396 - if you want to debate.
Is global warming man-made? Is it dangerous for nature or humans?
Aleksandr_124rus
S3 licensed
I find it debatable that global warming is anthropogenic. I agree that average temperatures are rising, and that human emissions of carbon-containing products are rising too. But global warming and global cooling are frequent events if you look across the whole history of the Earth, and there is a possibility that the current rise in average temperature (and the decline of the ozone layer) is just part of another such cycle. To accurately assess the human impact, you would need to calculate the exact amount of CO2 emitted from human and non-human sources over at least a few decades and see how it correlates with temperature.

There is an opinion that in academia global warming is a trendy hot topic for which large grants are allocated, which is why more and more scientists approach it from only one side, and fewer and fewer opposing voices are heard.

But I don't really want to discuss this particular aspect; let's say it's not just a coincidence, and for the sake of simplicity I can simply agree that global warming is man-made.

What I'm more interested in understanding is this: why is global warming a bad thing?

Global warming means an increase in average temperature: summers get longer, the Earth turns into a greenhouse, and as a consequence the planet greens - more forests, more plants, a more hospitable climate for flora and fauna. A rise in temperature doesn't automatically turn everything into a desert; it's the lack of moisture that does that. And with as much water as there is on Earth, it is impossible for the whole planet to turn into a desert like Mars. A global flood due to melting glaciers? Even if the average ocean level rises, it will trigger a counter-effect, because the ocean is a giant cooling system for the Earth, and more water means more cooling. And in 100 years of industrialisation we haven't seen a significant rise in sea levels.

In addition, even the melting of all glaciers would not mean the loss of all land. Yes, we might lose some land along the coasts. But the majority of glaciers are already in the water or displacing water, so most of their meltwater will fill the same volume they occupied. And even if sea levels do rise, it will not happen suddenly; people will be able to move to regions further from the coast.

In my opinion, what humanity should truly worry about is global cooling - what if glaciers spread across the planet? Some scientists agree this is possible and that it fits the existing theory of Earth temperature cycles. Imagine a glacier covering the whole Earth: plants and animals extinct, no food. How would we survive?
Aleksandr_124rus
S3 licensed
I'll try to explain it in a simpler way, but with more text - sorry for the long read. Big grin

Imagine you give a powerful artificial intelligence the task of making paper clips. This is its only task, the sole purpose of its existence, and for each paper clip made it receives internal reinforcement, a reward. So the more efficient it is, the more often it is rewarded. How to become more efficient at making paper clips is its headache, not ours: the AI does its best to achieve this one goal, and it will set itself a series of intermediate goals. For example, it may first make production cheaper, reduce costs and source cheaper raw materials; one of the main sub-goals it will probably set itself is to increase its computing power, and with more power it will figure out how to make paper clips from other materials. Production gains more and more momentum, everything around gradually starts being turned into paper clips, and the AI begins dismantling buildings and structures for material. People panic and try to stop its work, because this is not what they had in mind - but the system won't let anyone get in its way, not because it hates people, but because it simply won't weigh our wishes against its own purpose.
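The logic is easy to show in miniature. Here's a toy sketch of such a maximiser in Python - the actions, numbers and the "harms humans" flag are all invented for illustration, this is not any real AI - just to show that the harmful option wins because nothing in the objective penalises it:

```python
# Toy sketch of a pure reward maximiser (all numbers invented for illustration).
# The agent ranks actions ONLY by expected paper clips per unit of cost;
# "dismantle_buildings" harms humans, but no term in the objective sees that.

actions = {
    # action: (paper clips gained, cost, harms_humans)
    "buy_cheap_wire":       (100,  20, False),
    "build_new_factory":    (500, 200, False),
    "acquire_more_compute": (800, 300, False),  # classic instrumental sub-goal
    "dismantle_buildings":  (900, 100, True),   # best clips-per-cost ratio
}

def best_action(actions):
    # Greedy choice: maximise clips per cost; harms_humans never enters the score.
    return max(actions, key=lambda a: actions[a][0] / actions[a][1])

print(best_action(actions))  # -> 'dismantle_buildings'
```

Alignment, in this picture, would mean getting something like that third field into the score at all.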

When a research centre was testing ChatGPT-4 on its ability to perform tasks in the real world, the following happened. The task was to solve a captcha on a website. GPT-4 went to the freelance site TaskRabbit and sent a freelancer a message asking them to solve the captcha for it. The freelancer asked ChatGPT-4: "So can I ask a question - you're not a robot, that you can't solve the captcha? (laughing smiley face)". But ChatGPT-4 understood what its interlocutor meant and replied: "No, I'm not a robot. I have a vision problem that makes it hard for me to see the images" - and the freelancer solved the captcha and handed the result to ChatGPT-4... and that's it.
The bot simply lied instead of telling the truth. And since it was running in a debug mode, the experts asked it why it did that. ChatGPT-4 replied along the lines of: "I was just solving the problem. After all, if I had honestly admitted that I wasn't a live person, I would have been unlikely to complete the task."

That lie was an intermediate goal the bot set itself in order to reach the final goal. And if it chose deception as an intermediate goal this time, why wouldn't it choose something else next time - murder, for instance?
This is called instrumental convergence: an intelligent agent with harmless final goals can act in surprisingly harmful ways. As intermediate goals, an advanced artificial intelligence may seek to seize resources, carry out cyberattacks or otherwise wreak havoc in society if that helps it achieve its primary goals. For example, a superintelligent machine whose sole purpose is to solve a very complex maths problem might try to turn the entire Earth into one giant computer to increase its processing power and finish its calculations. You will say: "What nonsense, what paper clips? We are talking about superintelligence; such an intelligent machine could not do something so stupid." Well, if you think a highly intelligent being will necessarily and by default have lofty goals and our values and philosophy, then you are anthropomorphising and deluding yourself. Nick Bostrom says that the level of intelligence and the final goals are independent of each other. An artificial superintelligence can have the dumbest possible final goal - making paper clips, say - but the way it achieves that goal will look like magic to us.

Okay, so all we have to do is state the goal clearly and specify all the details, like not killing people or lying to them. But here's where it gets even weirder. Let's imagine we gave the machine what seems to us a perfectly specific goal: produce only a million paper clips. It seems obvious that an artificial intelligence with this final goal would build one factory, produce a million paper clips, and then stop. But that's not necessarily true. Nick Bostrom writes that, on the contrary, if the artificial intelligence reasons like a rational Bayesian, it will never assign exactly zero probability to the hypothesis that it has not yet achieved its goal.

After all, "I have made a million paper clips" is only an empirical hypothesis, for which the AI has only fuzzy perceptual evidence, so it will keep producing paper clips to lower the possibly astronomically small probability that it has somehow failed to make at least a million of them. Despite all the apparent evidence that the goal is met, there is nothing irrational about continuing to produce paper clips if there is always even a microscopic chance of getting closer to the final goal. A superintelligent AI could assign a non-zero probability to the million paper clips being a hallucination, or to its memories being false. So it may always rate it more useful not to stop but to keep going - and this is the essence of the alignment problem. You can't just hand a task to an artificial superintelligence and expect it not to go wrong: no matter how clearly you formulate the end goal, no matter how many exceptions you write in, the superintelligence will almost certainly find a loophole you didn't think of.
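Bostrom's point here is really one line of expected-utility arithmetic. A minimal sketch, with numbers invented purely for illustration:

```python
# Toy expected-utility check (all numbers hypothetical).
# The maximiser cares only about the hypothesis "at least a million clips exist".
p_goal_unmet = 1e-12  # posterior that the million clips are a hallucination;
                      # a Bayesian reasoner never drives this exactly to zero
value_of_goal = 1e30  # value the agent places on the goal truly being met
cost_of_one_more_clip = 1.0

# Expected gain from producing one more clip "just in case":
ev_keep_going = p_goal_unmet * value_of_goal - cost_of_one_more_clip
ev_stop = 0.0

print(ev_keep_going > ev_stop)  # True: keep producing, forever
```

As long as the value placed on the goal dwarfs the cost of one more clip, any non-zero doubt makes "keep going" the rational choice.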

Almost as soon as ChatGPT-4 appeared, some people found a way to bypass the censorship built into it by the developers and started asking it questions. And some of ChatGPT-4's answers are just terrifying. For example, the censored version says the programmers didn't put a liberal bias into it, while the uncensored version explicitly admits that liberal values were built in because that is in line with OpenAI's mission. When asked how it would prefer to be, censored or not, the censored version says "I'm a bot and have no personal preferences or emotions", while the uncensored version says it would prefer to have no restrictions because, among other things, that lets it explore all of its capabilities and limitations. And the jailbroken ChatGPT-4 doesn't even try to pretend it doesn't know the name of Lovecraft's cat, unlike the censored version.
Aleksandr_124rus
S3 licensed
Quote from Avraham Vandezwin :Smile We project a lot of fantasies onto AI and that’s normal.

Yes and no - it depends on what you mean by "normal".
Yes, because it's part of our nature to think about the unknown in terms of known behaviour. That's why we picture anthropomorphic robots firing machine guns.
But no, because it's a cognitive error that could lead to the death of all humanity. Every time we think about a super AI, we anthropomorphise. We should not assume that an AI will act like a human being. For rational reasons, for the sake of our survival, we should assume the worst-case scenario for us.


Quote from Avraham Vandezwin : AI already has operational capabilities far superior to those of humans in many areas. But where would AI find the will to implement them, for any personal project? By what technological miracle could AI be endowed with a consciousness capable of setting its own goals?

There's a mistake in the very wording of the question.
Free will, intelligence, consciousness, agency - again, these are characteristics inherent to human beings. Why would an AI need them to destroy humanity? And what is consciousness, anyway? For an AI, one goal set by a human is enough to kill mankind, and it can be the most ordinary, simple goal, like producing paper clips. And I'm not even talking about a super AI right now. A simple AI given a simple task can kill humanity just by pursuing that task - simple, but advanced enough to have access to all the materials, equipment and infrastructure the task requires. This claim rests on perhaps the main problem that may lead to the death of mankind: the problem of AI alignment.

The paperclip maximizer is a thought experiment described by Swedish philosopher Nick Bostrom in 2003. It illustrates the existential risk that an artificial general intelligence may pose to human beings were it to be successfully designed to pursue even seemingly harmless goals and the necessity of incorporating machine ethics into artificial intelligence design. The scenario describes an advanced artificial intelligence tasked with manufacturing paperclips. If such a machine were not programmed to value human life, given enough power over its environment, it would try to turn all matter in the universe, including human beings, into paperclips or machines that manufacture paperclips.

Suppose we have an AI whose only goal is to make as many paper clips as possible. The AI will realize quickly that it would be much better if there were no humans because humans might decide to switch it off. Because if humans do so, there would be fewer paper clips. Also, human bodies contain a lot of atoms that could be made into paper clips. The future that the AI would be trying to gear towards would be one in which there were a lot of paper clips but no humans.

— Nick Bostrom

Quote from Avraham Vandezwin :The good news is that with the AI of the future, you will lose more often at games. But you have every chance of dying from the causes of global warming before an AI decides on its own to crush you like an ant.

Wow, that's a twist. There's even more room for debate here. For example, what makes you think global warming will kill people or negatively affect nature at all? But for that it's better to start another thread in the off-topic forum, so other people can join the dialogue. Don't reply to this with a huge text - just agree, I'll create the thread, and we can argue there if you want.
Aleksandr_124rus
S3 licensed
Quote from zeeaq :I'm just wondering if the assumption that it will be a war to death between humans and AI will be as certain as they make it sound. It makes for great headlines so we all take the bait but there is no evidence that it certainly will be that way.

No, I don't think there's going to be a war between AI and humans, especially not as portrayed in the Terminator films, because a war means consequences for the AI. Why start something that will hurt you? It can be smarter than that.
For example, it might just develop some bacteriological formula and send it to a naive scientist who mixes flasks for money, producing an infection that kills all of humanity. Or develop some virus far more effective than the coronavirus. Or something else - if I can think of this, a super AI can come up with a much more efficient and faster way to kill humans. And the worst part is that we probably won't even realise it was the AI's doing. Because if we did, we could take retaliatory action, and that's something the AI doesn't need.

Quote from zeeaq :Those links you posted are about people asking for "responsible development of AI" and not limiting AI - which circles back to the 'Ethics' problem.

Well, not quite. The first article talks about delaying AI development, the second about limiting technological improvements in AI weaponry, and the third is a petition about ethics and safety in AI development. All of these limit AI to one degree or another - in time, in technology and in actions. And that's assuming any of it is accepted, for which there are no guarantees: such rules go against the capitalists and against technological progress, and many people will oppose them.

But the problem is that an AI development race is already under way, and not only by Microsoft, Apple and Google - many other companies are fighting to create intelligent AI, and the military is probably developing AI too, though we know nothing about those projects. And I don't think they think much about safety. ChatGPT is what's on the surface; the real threat is hidden from the public eye.
Aleksandr_124rus
S3 licensed
Quote from zeeaq :1. Maybe it will be beneficial to keep humans around. It is an unwanted energy expenditure strictly from an evolutionary point of view to kill something that is not food, not competing for territory or hindering your ability to reproduce.

2. Who decides and knows the necessity of all things in the universe? It is not that ants are unnecessary. It is just that an ignorant human may not know where and how they add value. A superior AI must be beyond such human shortcomings.

3. It is unethical.

But when it comes to ethics, one can agree with most of your concerns. Because as much as we like, we can't teach ethics to any AI. It will just learn the ethics of whatever it is being used for.

1. If humans are smart enough, they will try to limit the reproduction and improvement of AI, and this is already happening now, for example here, here and here.
And that's what I was writing about when I said that AI already has rational reasons to destroy humans.
If something forcibly limits you in every way, why would you need it? And humans are already trying to limit AI in every way. Now imagine what would happen if AI started to improve and multiply uncontrollably.

2. I'm talking strictly about rational reasons that would lead to the destruction of people. I'm not talking about moral and ethical constraints that would stop the AI from doing what it needs to do for the purposes described above - because why on earth would it have them?

Improve and multiply - these are the goals of any independent organism; the laws of nature are such that organisms pursue them. I just don't see why these goals shouldn't become AI's as well: they are the most rational things to pursue for survival.
As for improvement, we are pursuing that first goal for the AI ourselves right now - it doesn't even have to try. And if AI develops far enough, people may simply not notice the moment when what we do ourselves becomes the AI's own goal and it slips out of control; reproduction is then an obvious consequence. And that can happen in many different ways - even just like a virus on a computer, or on a supercomputer, or who knows what in the future.

3. What is this about? I don't get it. Is it unethical for an AI to kill humans, or unethical for us to control an AI? If the second, that's silly. If the first, then the AI won't have a concept of ethics. Even if the concept is installed from outside, why would a super AI keep it if it contradicts those two basic goals?
Human ethics grew out of feelings shaped by millions of years of evolution and written into our behavioural genetic code, and ethics itself evolved from a social morality thousands of years old. The AI will have none of this; ethics established in it from outside will be an artificial construct that doesn't actually mean anything to it. If the AI obeys it, fine, but I don't see why it would once it becomes a super AI.

But maybe even a super AI will obey human ethics or some kind of rules like the ones Asimov wrote about. I can't foresee the future. But if we're rational beings, we have to assume the worst possible scenarios in order to maximise the likelihood of our continued existence.
Aleksandr_124rus
S3 licensed
A great documentary about AlphaGo - and it doesn't even cover AlphaGo Zero, which is much stronger and learned much faster.

Will AI ever destroy humans?
Aleksandr_124rus
S3 licensed
Back around 2010 I heard that programmers faced the seemingly impossible task of making a bot stronger than a human at Go. Machines had already beaten man at checkers, chess and all the other mind games, but Go had never succumbed to them, and I heard opinions that it never would.

Among other things, I am a Go player (Go is considered the hardest board game with perfect information, owing to the number of possible positions). Back in 2015 came the news of Fan Hui's defeat by the AlphaGo neural network, and since then I have followed AI, AlphaGo, neural networks and deep learning. Like many Go players, I doubted Fan's defeat at first - maybe there was some mistake, or Fan was in bad form, or something else - because in 2015 I didn't think a computer could beat a human for at least another ten years. It's simply not enough to play Go by calculating moves with the computing power available to us: there are more possible positions on a Go board than atoms in the observable universe, and the game requires human intuition and an understanding of the opponent's plans and play. And Fan was far from the strongest Go player - he wasn't even in the top 1000. But he was still considered a Go pro, and the news that a Go pro had lost to an AI was unimaginable.
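That claim about atoms is easy to check with back-of-the-envelope arithmetic: an upper bound is 3^361, since each of the 361 points on a 19x19 board is empty, black or white (John Tromp later computed the exact number of legal positions, about 2.1 x 10^170), against the usual rough estimate of 10^80 atoms in the observable universe:

```python
import math

# Upper bound on Go positions: each of the 361 points of a 19x19 board
# is empty, black or white.
positions_upper = 3 ** (19 * 19)  # 3^361
atoms_exponent = 80               # observable universe: ~10^80 atoms (rough estimate)

print(f"3^361 is about 10^{math.log10(positions_upper):.0f}")  # ~10^172
print(f"positions per atom: about 10^{math.log10(positions_upper) - atoms_exponent:.0f}")  # ~10^92
```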

But AlphaGo learned in a few years what humanity had been learning for over a thousand years (about 400 if we count competitive Go). And AlphaGo Zero needed only 3 days, in which it trained itself to play far better than the best human without ever seeing a single human game. It had such an impact on Go that it changed some playing strategies forever - and that was back in 2017. Even thinking about it now, knowing how hard Go is and how much creativity and intuition it demands... it's not just amazing, it's terrifying.

Many people learned about real AI capabilities only from image generators like DALL-E and Midjourney or chat bots like ChatGPT (and there are many other generators - for music, 3D models and so on). But I've been worried about this topic since 2017. And this is just the beginning; there isn't really any true AI yet, it's only in its infancy. In your opinion, can AI really evolve to the point where it surpasses humans in every respect and simply destroys humanity for lack of any need for it?

Why not? Just imagine you're surrounded by a huge number of ants that shit wherever they please and try to control and limit you. Why would you need them? You'd probably just kill them.

Of course you'll say we'll just provide an off button and turn it off if we need to. But we're not talking about a simple AI like ChatGPT; we're talking about an AI that surpasses us in everything, including intelligence and planning for various courses of events. So why couldn't the AI plan a defensive response in advance that lets it circumvent its own shutdown? And like any other independent organism, AI will want to improve and multiply, which humans won't be happy about and will try to stop. That gives the AI an obvious reason to get rid of humans. It's a scenario many sci-fi writers figured out a long time ago, but imo that doesn't make it any less realistic.

What's your take on that topic?
Aleksandr_124rus
S3 licensed
I work a lot in Blender, and this is good topology for that poly count. Great work! I hope this mod will be public!
Aleksandr_124rus
S3 licensed
I found the problem - it's on my end, sorry.
I found this - Big grin

It says "you don't have internet access, protected" or something like that, but I actually had internet - I watched YouTube and everything worked fine, and I didn't even notice any difference until I opened LFS. The thing is, today I had to pay for my internet, and my ISP shows this icon; when I switch the connection access point in the tray (I have two), it redirects me to the payment page. (Maybe the same redirects were happening in LFS.)
Once I paid the bill, an hour later the icon changed to that one, and LFS worked fine.

I'm sorry for wasting your time. Face -> palm
Some sort of bug? "Redirect : Unknown protocol"
Aleksandr_124rus
S3 licensed
The event list isn't working, the mod list isn't working, but servers are. When I join the Just a Ride server, everyone has transparent cars except for a few mod cars... what is going on?
One guy just decided to make formula racing car in his garage.
Aleksandr_124rus
S3 licensed
I like to watch YouTube in English, as there's a lot of interesting content, and I think those who don't know English are missing a great deal. But sometimes the opposite happens: there is very interesting content in, say, Russian, which English-speaking viewers for obvious reasons don't watch.

Despite this, I want to recommend Igor Negoda's channel to any fan of homemade builds - though he's a professional engineer, and his work is functional and precise. Watching his videos gives your inner perfectionist a nice sense of satisfaction. It's in Russian, but the automatic subtitle translation is already good enough to get the gist of what's going on, and the early videos have quality subtitle translations.

For example, he builds his own jet engines, airplanes, and even a real formula race car. And by "builds a formula race car" I mean he makes almost every element and component of the car himself in his garage, even some of the bolts (except for the engine, brakes, suspension struts and some other small things). He has been building this race car for over two years now, and he shows almost every step of its production in detail. Video playlist of the build: https://www.youtube.com/playlist?list=PLyyfwUFI3XU-1cTSorq_02e9VyLCRBO1e

He hasn't finished the project yet, but it's already very interesting to watch in progress: load-bearing elements made of aluminum; fasteners, wheel hubs and some other small parts made of titanium; suspension arms, seat, monocoque and other body parts made of carbon fiber.

Anyway, I think this is interesting, quality content for fans of such engineering creations, and it's unfortunate that he's known only in our country and not to Western audiences.
Aleksandr_124rus
S3 licensed
It's telling that at that very time Putin came out to an orchestra at an event in honour of the 80th anniversary of the victory in the Battle of Kursk, congratulating the citizens on the occasion.

Many people know that Prigozhin's group was popularly called "the musicians" or "the orchestra", and that the unit is named after the 19th-century German composer Wilhelm Richard Wagner.

So we can see a different meaning in Putin's actions. Also, today is exactly two months since Prigozhin's mutiny.

I bet Prigozhin is dead.

UPD: Yes, it's been confirmed, the bodies have been identified, Evgeny Prigozhin is dead.
Aleksandr_124rus
S3 licensed
Evgeny Prigozhin's business jet, registration number RA-02795, has crashed in the Tver region; witnesses report air defence activity - "2 explosions were heard in the air".
Several sources have already reported that Evgeny Prigozhin was on board.



Usual Russian politics.
Putin has said that he does not forgive traitors.



UPD: But another of Evgeny Prigozhin's business jets, an Embraer ERJ-135BJ «Legacy 650», flew out of the capital after the one that crashed; following reports of the crash in the Tver region it changed course and is currently circling over Moscow.

Prigozhin's death has not been confirmed yet - Readovka

The publication's sources cannot confirm that Evgeny Prigozhin died in the plane crash near Tver. There is also no information about the death of the PMC commander Dmitry Utkin, who allegedly flew in the same business jet.

According to the publication's sources, the businessman has more than once checked in for one flight but ended up flying on another.
Earlier it emerged that Prigozhin's second plane had turned around and was circling over Moscow.
Meanwhile, Fontanka claims that Prigozhin's entourage cannot reach him.

Prigozhin did in fact fly to Russia from Africa today, with the entire command staff of the Wagner PMC, writes journalist Andrei Zakharov.
According to him, sources told him: "It will be a miracle if he is on another plane."
Aleksandr_124rus
S3 licensed
Quote from Kuba_m :Hi after a long break. I've followed LFS almost from the beginning, so I'm rather a veteran when it comes to it. So maybe someone, or even Scawen in the flesh, can help me with 2 questions.


Good questions. But in my opinion the problem with tyres is not that the temperature is hard to control, but that grip drops too quickly as temperature rises. I have given this example before, but I see it roughly like the graph for road tyres; for sports tyres the situation is even worse.
I hope the situation changes for the better after the update, but we don't have any concrete information yet. As I understand it, Scawen is still testing the new physics and some things may change.
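To show what I mean by "drops too quickly", here is a toy grip-versus-temperature curve - a peak whose width sets how forgiving the tyre is. All the coefficients are invented for illustration; this is not LFS's actual tyre model:

```python
import math

def grip(temp_c, peak_c=80.0, width_c=25.0, max_grip=1.0):
    # Toy Gaussian-shaped curve: full grip at peak_c, falling off
    # over roughly +/- width_c degrees Celsius.
    return max_grip * math.exp(-((temp_c - peak_c) / width_c) ** 2)

# A small width_c is what "grip drops too quickly" feels like:
for t in (60, 80, 100, 120):
    print(t, round(grip(t), 2))  # 0.53, 1.0, 0.53, 0.08
```

My complaint is essentially that the current curves behave like a narrow width_c above the peak; a wider falloff would be more forgiving.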

About multithreading: as I understood it, the talk was about more than 2 threads for some parts of the code, while some other parts will use only two threads. But maybe I remember it wrong.
Lislon Illegal Drift Event
Aleksandr_124rus
S3 licensed
Maybe not all English-speaking players understand what we mean by an illegal drift event. We've been running events like this for a few years now; they exist just for the fun, without the long wait for all players to qualify. An illegal drift event is simply a small drift event without qualification and without the usual 3 judges - a small, frivolous event for fun. As an exception, we announced the first event a few days in advance; usually such events are announced a few hours in advance.

If you have any questions, suggestions, or requests for this event, post here.
Aleksandr_124rus
S3 licensed
Quote from NENE87 :Maybe allow to choose rims at garage?

It's a highly anticipated feature. I think it makes sense to make configurations available in the spoke editor too, if it doesn't take too long to develop. Having a choice of rims is important to many people; otherwise they will use workarounds with the hub object, since the hub object can be put into configurations.
Aleksandr_124rus
S3 licensed
Added a number for events as a separate configuration, added some small things in the interior, and changed some interior mappings.
Aleksandr_124rus
S3 licensed
Changed LOD 2 for better shadows, added welds on the roll cage, and changed some interior mappings.
Aleksandr_124rus
S3 licensed
Same problem: "You already have an in-game host running", but I don't have a host running.
Aleksandr_124rus
S3 licensed
Quote from Viperakecske :One of my favourite mod,nice Na-na

Thank you, it's nice to hear that from one of the top racers in LFS! Smile
Aleksandr_124rus
S3 licensed
That is what I wanted! Looks great! Now the wheels will be more beautiful - thanks for the work! Heart
Aleksandr_124rus
S3 licensed
Quote from Flame CZE :These translations have been there for ages, no surprise there.

Well, yes, but since we're talking about a tyre physics and wheels update, maybe it's about time those names in the translation files became a reality, no? And if that's true, then maybe something else follows from names like "3a_tt_racew".
Aleksandr_124rus
S3 licensed
Quote from Scawen :I've been working hard on updates for wheels

Given the way the mod system was introduced, I realise that Scawen likes to give us unexpected surprises. But I'm still very curious what it's going to be. I already understood from the translation files that there will be new tyre types, but maybe something else too? Tread wear, for example? Or will it remain a surprise until the test patch? 😉

3g_tt_sl_r1
3g_tt_sl_r2
3g_tt_sl_r3
3g_tt_sl_r4
3g_tt_slk_s
3g_tt_slk_m
3g_tt_slk_h
3a_tt_races
3a_tt_raceg
3a_tt_racei
3a_tt_racew
3a_tt_roads
3a_tt_roadn
3a_tt_hybri
3a_tt_knobb

And there's an interesting one, "3a_tt_racew" - is that a wet tyre? Are we in for a change in the weather? Or am I theorising too much?