Posts: 0 · Comments: 100 · Joined: 5 days ago

  • I'm not familiar with shadowbanning because it didn't happen to me or anyone else I wrote with. We all got permabanned officially. But even the permabans were weird: in one case I was still able to edit old comments, in another I wasn't, and in a third I was able to back up my saved links. So technically, a lot was broken. I don't think anyone at Reddit really knows what's going on or has control over anything.

  • It can't be kept up anyway; it's not sustainable the way the Chinese model is. The tech bros and the Christian fundamentalists are not compatible in the slightest. Both also want absolute power, but for different reasons. That leaves the question of which regime would be worse...

    I don't think the Dems will be in charge anytime soon, though. Not with all the creative gerrymandering, not with the exclusion of PoC, not with all the people who gave up on voting because it doesn't make a difference anyway. This is also a serious problem here in Europe nowadays.

  • One of my accounts got banned because I wrote that racism is bad. Another for writing that sexism is bad. A third one because I condemned a terrorist group in a private chat (a group, of course, that is officially designated as terrorist by many countries). I used simple language in all those cases, like literally writing "Racism is bad". Is there a pattern? Is it random? Who knows. I wouldn't be surprised if there is no reasonable explanation.

  • And you still ignore what I wrote. Because you can't process how wrong you and your AI are.

  • Additive colors -> active light emitters. Which should be obvious. But yeah, you simply lack the ability to think beyond what AI tells you. You understand nothing. You're nothing more than a stochastic parrot yourself. Enjoy your daily rock.

  • No. Tell me. I'm genuinely interested.

  • All LLMs still claim that green is the complementary color to red...

  • And you showed that you don't understand complementary colors, just like AI. Because the above color circle is wrong. Why? Because of tests like the afterimage test (example: https://i.pinimg.com/originals/da/7c/fb/da7cfba87ffdc8f426953397162329b4.gif), which prove that purple (as pictured above) can never be the complementary color to yellow; it always has to be a deep blue. It doesn't matter whether you're using additive or subtractive colors (afterimage tests work both passively and actively), because in the end it's all only about light hitting our L/M/S cones and how our brains interpret the signals from those cones (https://en.m.wikipedia.org/wiki/Metamerism_(color)). Metamerism explains why engineers chose perceptually equidistant cyan/magenta/yellow for (simple) printing ("subtractive colors") and perceptually equidistant red/green/blue for actively emitting devices like cameras and displays ("additive colors").

    And if you now say "But bro, I see a green shifting towards blue in the afterimage test" - didn't your wonderful AI tell you about the Abney effect? Weird. It's all well known and documented on the web that has been used to train your wonderful AI. But yeah - without understanding all of that, there is no way your wonderful AI can tell you which of all those color circles is the correct one (and there is only one: the one that does not violate the CIE 1931 color space). It's up to you to either learn and understand - or to blindly follow an LLM that sticks to green being the complementary color to red. Because all an LLM can do is repeat the garbage it has been trained on. Because it's nothing more than a stochastic parrot. Your choice.
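
    A minimal sketch of the channel-inversion arithmetic behind those pairs, assuming 8-bit sRGB values (the function name is mine, not from any of the comments):

    ```python
    def complement_rgb(r, g, b, depth=255):
        """Additive (RGB) complement: per-channel inversion, so the two colors sum to white."""
        return (depth - r, depth - g, depth - b)

    print(complement_rgb(255, 0, 0))    # red    -> (0, 255, 255): cyan, not green
    print(complement_rgb(255, 255, 0))  # yellow -> (0, 0, 255):   blue, not purple
    ```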

  • And why didn't you include the name of the model in your test? Looks like you don't want me to try it myself. It would be interesting to do so - of course with values that don't fit neatly into 8 bits. What if I define the range from 0 to 47204 for each color channel instead? What if I used CMY(K) instead of RGB? A truly "great" AI must be able to handle all of that. And of course correctly explain what complementary colors are (which you didn't include either). So yeah - what you provided does not go beyond the output of htmlcolorcodes.com, a very simple website with very simple code. I doubt it requires much computing power either.
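
    For what it's worth, the same inversion generalizes to any channel range; a sketch assuming the hypothetical 0-47204 range from this comment (all names are mine):

    ```python
    MAX = 47204  # arbitrary channel maximum from the comment, deliberately not 8-bit

    def complement(channels, max_value=MAX):
        # Per-channel inversion works for any linear channel range, not just 0-255
        return tuple(max_value - c for c in channels)

    def rgb_to_cmy(rgb, max_value=MAX):
        # CMY is the per-channel inverse of RGB, so a color's RGB complement
        # and its CMY representation are the same triple
        return tuple(max_value - c for c in rgb)

    red = (MAX, 0, 0)
    print(complement(red))   # -> (0, 47204, 47204): cyan, independent of bit depth
    print(rgb_to_cmy(red))   # -> (0, 47204, 47204): red expressed in CMY
    ```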

  • But is it a gift if it has to be matched or returned? Or rather a transaction?

  • Funny. Each time I ask any LLM what the complementary color to red is, I get green as the answer instead of cyan (cyan being the only correct answer), along with a completely wrong explanation of complementary colors based on digital screens. So yeah - LLMs still fail miserably at language-based tests. And rearranging complex equations doesn't work either.

  • I thought that way a long time ago. But just like Sisyphus, I had to watch the rock roll back down the slope again and again after believing I had achieved something. It only takes a slight breeze to make the rock roll down again.

  • And you don't know what a circular argument is either...

    No, 2+2 is never "about 4", nor is it "4 in most cases". It's always exactly 4. And no LLM can ever come to this conclusion. LLMs fail at math in a truly spectacular way. Just as no LLM will ever be able to understand what complementary colors are - which is one of my favorite tests, because it has a 100 % error rate.

  • Reddit is not worse than any other platform when it comes to getting banned for (seemingly) random reasons.

  • So you don't know how math works.

  • What makes you think that using single letters as tokens instead could teach a stochastic parrot to count or calculate? Both are abilities, and you can't create an ability from a set of data alone, no matter how much data you have. You can only make a model seem to have that ability. Again: all you can ever get out of it is something that resembles human language. There is nothing beyond/behind that, by design. Not even hallucinations: whenever an LLM advises you to eat a rock per day, it is still working as intended, because it outputs a correct-sounding sentence purely and entirely based on probability. But counting and calculating are not based on probability, which is something everyone who ever took a math class knows very well. No math teacher lets students guess the result of an equation.

  • As if a stochastic parrot could ever be able to count or calculate...

    LLMs work really well: you get something out of them that resembles human language. That is what they were designed for, and nothing else. Stop trying to make a screwdriver shoot laser beams; it's not going to happen.

  • It depends on your preferred type of humor. Do you like gallows humor?

  • What's the alternative to ignoring them? You can't make them smarter. You can't exclude them from voting without creating something similar to fascism. Not even removing warning labels helps because idiots don't read them anyway.