Debian is kind of too big to fail. Maybe NixOS if you want something that will almost certainly gain popularity in the future.
Don't think, though, that distros are the layer you want to look at. A lot happens at the level of DEs, drivers and individual apps, which is certainly preconditioned by the distro you choose, but not that strictly. You can get anything working provided you have the time.
X11 seems to be in its last round before retirement; using Wayland is going to future-proof what you've got in a major way.
I would honestly love to share my stuff but I'm afraid that what I have to share contains too much identifying information with screenshots and all.
I probably spent 24 hours in total on my current install, and at least three times as much on a previous Cinnamon install that I screwed up by trying to make the experimental Wayland session look and act a certain way. Back when I studied architecture we considered all the failed attempts a natural and indeed deeply necessary part of design, so I count the previous install as the starting point of my current one. I hope that made sense.
All of this is to say that I spent an inordinate amount of my recent life on this. I would love to share, but my paranoia is really keeping me from doing that. It took me 24 hours alone to create a decent repository of thematically and aesthetically matched backgrounds with all the selecting and editing of images. There is just so much that you have to do.
All distributions are easy as long as forum posts and well-sorted documentation exist.
And any distribution can be hard when it's nerfed. Try modding Debian to behave like a crazy Arch setup. Program X requires KDE Plasma 6.5.x, but you can only use 6.3.6 because Debian. So you go to a GitHub page and compile an earlier version of program X yourself. Along the way you hit errors and mess your install up. You clean up. Rinse and repeat until you have something working. By then you'll have spent an hour, if not two. On Arch I would only have had to type a little thing into the terminal. And this is one of the less complex examples, fyi.
In the end it's gotta work out. A nice evening to you too
Google Translate feels more natural, even if it's not as "precise" as DeepL. I wouldn't rely on either for communication, or on any machine translation for that matter.
As someone who speaks more than two languages, I am often dumbfounded by the sheer acceptance of these, for lack of a better word, tools.
Use of this stuff always leads to misunderstandings and inefficiencies down the line, because you actually need to comprehend the meaning of a sequence of words in order to translate it. But ANNs for translation do not understand anything. They map a source to a target purely by way of statistics. That is basically rolling dice with weights and patterns of distribution, where how you shake the dice is your input/source and the pips on top are the output.
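The weighted-dice picture can be made concrete with a toy sketch: a "model" that maps a source word to a target by weighted chance alone, with no comprehension anywhere in the loop. All words and probabilities below are made up for illustration.

```python
import random

# Hypothetical lookup: each source word just reweights the dice.
# The German candidates and their weights are invented for this example.
WEIGHTS = {
    "bank": {"Ufer": 0.55, "Bank": 0.40, "Sitzbank": 0.05},
}

def translate(word, rng=None):
    # Pick a target purely by weighted chance -- no meaning involved,
    # so "river bank" vs. "financial bank" is decided by the dice.
    rng = rng or random.Random()
    targets = WEIGHTS[word]
    return rng.choices(list(targets), weights=list(targets.values()), k=1)[0]

rng = random.Random(0)
print([translate("bank", rng) for _ in range(5)])
```

Run it a few times and the same input yields different outputs depending on how the dice land, which is the point: the statistics capture distribution, not meaning.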
Now for a short lesson in biology. While it is true that synapses are indeed badly approximated by most ANNs, this is the only thing that ANNs really derive from biology with interesting reproducible properties that can be marketed to people who need to offload responsibilities. There is a complete disregard for internal dynamics of cells and dynamics that happen at a scale larger than the synaptic makeup of an organism. We do not really have the means to regard the interactions between organism and environment as objects that shape perception. We still don't know how a thought forms and how meaning is generated from a perspective that is not purely philosophical, meaning we definitely do not know how this happens at a biological level. Anyone who tells you otherwise is either lying or misinformed. As long as the biological bases aren't crystal clear, we will never translate effectively.
A great man of history once said that all science would be superfluous if the outward appearance and the essence of things directly coincided. Of the tens of millions of strings of words I've heard in my lifetime, this easily ranks as one of the most elegant. Let's apply this to neuro-"science" in its computerized application.
We know very little about the brain. Do you think that whatever devices we make with our current state of knowledge can even come close to what we do as aware beings?
Again, translation is an involved process that uses every function of the nervous system. Using statistical methods to very badly approximate our process of reading > contextualizing > imagining > [any step that could be necessary] > output, where reading is followed by vibes and then nothing before outputting, will inevitably degrade information. A short paragraph can be handled when everyone is conscious that Google Translate etc. is being used. But a book, something that lives in a very specific and exact environment like a README file or a manual, or god forbid political philosophy, leads to unforeseeable consequences when put through DeepL. I think of all the times I had trouble reading item descriptions on AliExpress because of the site's use of machine translation. This is not a productivity gain; this is a degradation of quality that will have to be fixed one day, eating up precious time.
But now that I’ve said this, does it make you feel unsettled about it?
Considering that rice cookers are expensive where you are from, I suppose it is a sensible choice. I can't exactly say I'm unsettled. Suppose these devices were much cheaper relative to local salaries and there were plenty of space for a rice cooker, then I'd be unsettled. But only if you were a regular consumer of rice, as I am ;)
Probably a cultural thing.
True. Some simply prefer plain-jane cooking, or steaming without the use of a dedicated cooker. A rice cooker, though, is "fire and forget": consistent and simple. Most people want that sort of clarity, and in that sense I do believe in the rice cooker.
"AIs" can't even operate vending machines, let alone recognize handwriting reliably or translate text. I know a few people that work in archives with (pre-)medieval manuscripts and I myself have bitten my teeth out on Google Translate(TM) and DeepL(TM). That's how I know. There was also a study done on that vending machine thing. Come to think of it, you could make a simple vending machine that collects usage statistics and sends reports via radio that just works using a few scripts. Emphasis on "works".
😹 How are we concerned with statistical systems being vulnerable (which is shitty, sure) when they don't even lead to productivity increases, that is they cannot even do the jobs they're made to do? Get real. What a clownshow
I mean it's Python. This is what we get for having been overly reliant on it.
All kidding aside, I am more than a bit confused by this.