Posts 3 · Comments 239 · Joined 2 yr. ago

  • Yud: “Woe is me, a child who was lied to!”

    He really can't let that one go; it keeps coming up. It was at least vaguely relevant to a Harry Potter self-insert, but his frustrated-gifted-child vibes keep leaking into other weird places. (Like Project Lawful, among its many digressions, had an aside about how dath ilan raises its children to avoid this. It almost made me sympathetic towards the child-abusing devil worshipers who had to put up with these asides to get to the main character's chemistry and math lectures.)

    > Of course this is a meandering plug for his book!

    Yup, now that he has a book out he's going to keep referring back to it, and it's being added to the canon that must be read before anyone is allowed to dare disagree with him. (At least the Sequences were free and all online.)

    > Is that… an incel shape-rotator reference?

    I think "shape-rotator" has generally permeated rationalist lingo as a term for a certain kind of math aptitude; I wasn't aware the term had ties to the incel community. (But it wouldn't surprise me that much.)

  • I couldn't even make it through this one; he just kept repeating himself with the most absurd parody strawman he could manage.

    This isn't the only obnoxiously heavy-handed "parable" he's written recently: https://www.lesswrong.com/posts/dHLdf8SB8oW5L27gg/on-fleshling-safety-a-debate-by-klurl-and-trapaucius

    Even the LessWrongers are kind of questioning the point:

    https://www.lesswrong.com/posts/dHLdf8SB8oW5L27gg/on-fleshling-safety-a-debate-by-klurl-and-trapaucius?commentId=BhePfCvbGaNauDqfz

    > I enjoyed this, but don't think there are many people left who can be convinced by Ayn-Rand length explanatory dialogues in a science-fiction guise who aren't already on board with the argument.

    > A dialogue that references Stanislaw Lem's Cyberiad, no less. But honestly Lem was a lot more terse and concise in making his points. I agree this is probably not very relevant to any discourse at this point (especially here on LW, where everyone would be familiar with the arguments anyway).

    And: https://www.lesswrong.com/posts/3q8uu2k6AfaLAupvL/the-tale-of-the-top-tier-intellect?commentId=oHdfZkiKKffqSbTya

    > Reading this felt like watching someone kick a dead horse for 30 straight minutes, except at the 21st minute the guy forgets for a second that he needs to kick the horse, turns to the camera and makes a couple really good jokes. (The bit where they try and fail to change the topic reminded me of the "who reads this stuff" bit in HPMOR, one of the finest bits you ever wrote in my opinion.) Then the guy remembers himself, resumes kicking the horse and it continues in that manner until the end.

    Who does he think he's convincing? Numerous skeptical LessWrong posts have described why general intelligence is not like chess-playing and why world-conquering/optimizing is not like a chess game. Even among his core audience this parable isn't convincing. But instead he's stuck repeating poor analogies (and getting details wrong about the very thing he uses for his analogies; he messed up some details about chess playing!).

  • Eh, "cuck" is kind of the right-wingers' word; it's tied to their inceldom and their mix of moral panic over and fetishization of minorities' sexualities.

  • Remember when a bunch of people poured their life savings into GameStop and started a financial doomsday cult once they lost everything? That will happen again if OpenAI goes public.

    I've seen redditors on /r/singularity planning on buying OpenAI stock if it goes public. And judging by Tesla, cultists buying meme stock can keep up their fanaticism through quite a lot.

  • It seems like a complicated but repeatable formula: start a non-profit dedicated to some technology; leverage the charity status for influence, tax avoidance, PR, and recruiting true believers in the early stages; then make a bunch of financial deals conditional on the non-profit converting to for-profit; then claim you need to convert to for-profit or your organization will collapse!

    Although I'm not sure how repeatable it is without the "too big to fail" leverage of threatening state AGs with a loss of business. OTOH, states often bend the rules to gain (or even just avoid losing) embarrassingly few jobs, so IDK.

  • i’ve listened to his podcast, i’ve read his articles, he is pretty up front about what his day job is and that he is a disappointed fanboy for tech. the dots are 1/1000th of an inch apart.

    For comparison, I've only read Ed's articles, not listened to his podcasts, and I was unaware of his PR business. This doesn't make me think his criticisms are wrong, but it does make me worry he's overlooked critiquing and analyzing some aspects of the GenAI industry because of his connections to them.

  • This week's South Park makes fun of prediction markets! Hanson and the rationalists can be proud their idea has gone mainstream enough to be made fun of. The episode actually does a good job highlighting some of the issues with the whole concept: the twisted incentives, the insider trading, and the way it fails to actually create good predictions (as opposed to just aggregating vibes and degenerate gambling).

  • and the person who made up the "math pets" allegation claimed no such source

    I was about to point out that I think this is the second time he's claimed "math pets" had absolutely no basis in reality (and someone countered with a source that forced him to walk it back), but I double-checked the posting date and this is the example I was already thinking of. Also, we have supporting sources that didn't say as much directly but implied it heavily: https://www.reddit.com/r/SneerClub/comments/42iv09/a_yudkowsky_blast_from_the_past_his_okcupid/ or, like, the entire first two-thirds of the plot of Planecrash!

  • So we Americans do get some "grabbed guns and openly fought" in the history of our Revolutionary War, but it's taught in a way that doesn't link it to any modern movements that armed themselves. And the people most willing to lean into guns and revolutionary-war imagery/iconography tend to be far right wing (and against movements for workers' rights or minorities' rights or such).

  • So, to give the first example that comes to mind: in my education from elementary school to high school, the (US) Civil Rights movement of the 1950s and 1960s was taught with a lot of emphasis on passive nonviolent resistance, downplaying just how disruptive the protests had to be made to be effective and completely ignoring armed movements like the Black Panthers. Martin Luther King Jr.'s interest in and advocacy for socialism was ignored. The level of organization and careful planning by some of the organizations wasn't properly explained. (For instance, Rosa Parks didn't just spontaneously decide not to move her seat one day; the organizers planned it and picked her in order to advance a test case, but I don't think any of my school classes explained that until high school.) Some of the force the federal government had to bring in against the Southern states (e.g., federal marshals escorting Ruby Bridges) is properly explained, but the full scale is hard to visualize. So the overall misleading impression someone could develop or subconsciously absorb is that rights were given to Black people through democratic processes after they politely asked for them, with just a touch of protests.

    Someone taking the way their education presents the Civil Rights protests at face value, without further study, will miss the role of armed resistance, miss the level of organization and planning behind pivotal acts, and miss just how disruptive protests had to get to be effective. If you are a capital owner benefiting from the current status quo (or well-paid middle class who perceive themselves as more aligned with the capital owners than with other people who work for a living), then you have a class interest in keeping protests orderly, quiet, harmless, and non-disruptive. That vents off frustration in a way that ultimately doesn't force any kind of change.

    This hunger strike and other rationalist attempts at protesting AI advancement seem to suffer from this kind of mentality. They aren't organized on a large scale and they don't have coherent demands they agree on (which is partly a symptom of the fact that the thing they are trying to stop is so speculative and uncertain). Key leaders like Eliezer have come out strongly against any form of (non-state) violence. (Which is a good thing, because their fears are unfounded; but if I actually thought we were doomed with p=.98 I would certainly be contemplating vigilante violence.) (Also, note from the "nuke the datacenters" comments, Eliezer is okay with state-level violence.) Additionally, the rationalists often have financial and social ties to the very AI companies they are protesting, further weakening their ability to engage in effective activism.

  • The way typical US educations (idk about other parts of the world) portray historical protests and activist movements has been disastrous to people's ability to actually succeed in their activism. My cynical assumption is that this is exactly as intended.

  • So if I understood NVIDIA's "strategy" right, their use of companies like CoreWeave is drawing in money from other investors and private equity? Does this mean that, unlike many of the other companies in the current bubble, they aren't going to lose money on net, because they are actually luring investment from other sources into companies like CoreWeave (which is used to buy GPUs and thus flows to them), while leaving the debt/obligations in the hands of companies like CoreWeave? If I'm following right, this is still a long-term losing strategy (assuming some form of AI bubble pop or deflation, which we are all at least reasonably sure of), but the expected result for NVIDIA is more of a massive drop in revenue as opposed to a total collapse of the company under a mountain of debt? (A toy version of this arithmetic is sketched below.)
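
    To make the flow concrete, here's a minimal toy sketch of the pattern as I understand it. Every number in it is a made-up placeholder for illustration, not an actual NVIDIA or CoreWeave figure.

    ```python
    # Toy model of the circular-financing pattern described above.
    # ALL NUMBERS ARE HYPOTHETICAL, purely for illustration.

    outside_investment = 10.0  # $B a neocloud raises from outside investors/PE (made up)
    neocloud_debt = 5.0        # $B the neocloud borrows on its own books (made up)
    gpu_spend = 12.0           # $B of that total spent on GPUs (made up)

    # The GPU vendor books the spend as revenue; the debt stays on the neocloud's books.
    vendor_revenue = gpu_spend
    vendor_debt = 0.0

    # If the bubble deflates, the vendor loses *future* revenue,
    # but the leftover obligations belong to the neocloud and its creditors.
    print(f"vendor revenue: ${vendor_revenue:.0f}B, vendor debt: ${vendor_debt:.0f}B")
    print(f"neocloud obligations: ${neocloud_debt:.0f}B")
    ```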

  • Side note: the way I've seen "clanker" used has been for the AIs themselves, not their users. I've mostly seen the term in the context of Star Wars memers eager to put their anti-droid memes and jokes to IRL use.

  • It's like a cargo-cult version of bootstrapping or Monte Carlo methods.

  • That thread gives me hope. A decade ago, a random internet discussion in which rationalists came up would probably have mentioned "quirky Harry Potter fanfiction" with mixed reviews, whereas all the top comments on that thread are calling out the alt-right pipeline and the racism.

  • I'm at least enjoying the many comments calling her out, but damn, she just doubles down even after being given many, many examples of him being a far-right nationalist monster who engaged in attempts to outright subvert democracy.

  • The Oracle deal seemed absurd, but I didn't realize how absurd until I saw Ed's compilation of the numbers. Notably, it means that even if OpenAI meets its projected revenue numbers (which are absurdly optimistic, like bigger than Netflix and Spotify and several other services combined), paying Oracle (along with everyone else it has promised to buy compute from) will leave it spending more than it earns until 2030, meaning it has to raise even more money. (A toy version of that arithmetic is sketched below.)

    I've been assuming Sam Altman has absolutely no real belief that LLMs will lead to AGI and has instead been cynically cashing in on the sci-fi hype, but OpenAI's choices don't make any long-term sense if AGI isn't coming. The obvious explanation is that at this point he simply plans to grift and hype (while staying technically within the bounds of legality) to buy a few years of personal enrichment. And even asking what his "real beliefs" are gives him too much credit.

    Just to remind everyone: the market can stay irrational longer than you can stay solvent!
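
    A minimal sketch of that shortfall logic. Every figure below is a hypothetical placeholder, not Ed's (or OpenAI's) actual number.

    ```python
    # Toy arithmetic for the "spending exceeds revenue until 2030" claim.
    # ALL FIGURES ARE HYPOTHETICAL placeholders, not real projections.

    projected_revenue = {2026: 30, 2027: 60, 2028: 100, 2029: 145}    # $B/yr, made up
    compute_obligations = {2026: 45, 2027: 90, 2028: 130, 2029: 160}  # $B/yr, made up

    # Even if the optimistic revenue shows up, the compute bills exceed it,
    # so the gap has to be covered by raising ever more money.
    for year, revenue in projected_revenue.items():
        gap = revenue - compute_obligations[year]
        print(f"{year}: revenue ${revenue}B - compute ${compute_obligations[year]}B = {gap:+}B")
    ```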

  • This feels like a symptom of liberals having a diluted, incomplete understanding of what made past movements that utilized protest succeed or fail.

  • It is pretty good as a source for science fiction ideas. I mean, lots of their ideas originate from science fiction, but their original ideas would make fun fantasy sci-fi concepts. Like, looking at their current front page... https://www.lesswrong.com/posts/WLFRkm3PhJ3Ty27QH/the-cats-are-on-to-something: cats deliberately latching on to humans as the laziest way of advancing their own values across the future seems like a solid point of fantasy worldbuilding...

  • SneerClub @awful.systems

    Sneerquence classics: Eliezer on GOFAI (half serious half sneering effort post)

  • SneerClub @awful.systems

    China and AGI: A New Yellow Peril and Red Scare

  • SneerClub @awful.systems

    Is Scott and others like him at fault for Trump... no it's the "elitists'" fault!

    www.writingruxandrabio.com/p/the-edgelords-were-right-a-response