Posts 0 · Comments 49 · Joined 2 yr. ago
  • That's a bummer of a post but oddly appropriate during the darkest season in the northern hemisphere in a real bummer of a year. Kind of echoes the "Stop talking to each other and start buying things!" post from a few years back, though I forget where that one came from.

  • Came across this gem in which the author concludes that a theoretical engineer could replace SaaS offerings at small businesses with some Claude. While there is an actual problem highlighted (SaaS offerings turning into a disjoint union of customer requirements that spirals complexity; SaaS itself as a tool for value extraction), the conclusion is just so wrong-headed.

  • This doubly disappoints me because in a professional capacity I strive to be incredibly intentional and accurate, while recreationally I aspire to shitposting, and "more accurate than most" satisfies neither. I really need to get my blood boy to write better material.

  • it’s likely that the user will spend at least 30 minutes to an hour unable to articulate language.

    This presumes Johnson was able to articulate language in the first place, which, given that his brain has melted at an incredible pace since 2020, may be a bit of a stretch.

  • will these guys ever get to rainbow-chart levels of galaxy brain, or will they just be content to fudge some numbers on regular visuals?

    This is reminiscent of memestock/buttcoin charts, where a new asymptotic curve is dropped onto the same rather flat graph over and over again.

  • The mention in QAA came during that episode, and I think there it was more illustrative of how a person can progress to conspiratorial thinking about AI. The mention in Panic World was from an interview with Ed Zitron's biggest fan, Casey Newton, if I recall correctly.

  • One thing I've heard repeated about OpenAI is that "the engineers don't even know how it works!" and I'm wondering what the rebuttal to that point is.

    While it is possible to write near-incomprehensible code and build an extremely complex environment, there is no reason to think there is absolutely no way to derive a theory of operation, especially since every part of the whole runs on deterministic machines. And yet I've heard this repeated at least twice (once on the Panic World pod, once on QAA).

    I would believe that it's possible to build a system so complex and so poorly documented that it is incomprehensible on its surface. But the context in which the claim is made is not one of technical incompetence; rather, the claim is often hung as bait to draw one toward thinking that maybe we could bootstrap consciousness.

    It seems like magical thinking to me, and a way of saying one or both of "we didn't write shit down and therefore have no idea how the functionality works" and "we do not practically have a way to determine how a specific output was arrived at from any given prompt." The first is in part or on the whole unlikely, as the system would need to be comprehensible enough for new features to get added, and thus engineers would have to grok things enough to do that. The second is a side effect of not being able to observe all actual input at the time a prompt was made (e.g., training data, user context, and system context could all be viewed as implicit inputs to a function whose output is, say, 2 seconds of Coke Ad slop).

    Anybody else have thoughts on countering the magic "the engineers don't know how it works!"?
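    The implicit-inputs point above can be sketched with a toy example (everything here is illustrative: a made-up `generate` function, not any real model API). Treat the output as a pure function of its explicit and implicit inputs — weights, prompt, system context, RNG seed — and the "unpredictable" behavior becomes exactly reproducible; the mystery is just unobserved inputs, not magic.

    ```python
    import random

    # Hypothetical toy "model": the output is a pure function of every
    # input, explicit and implicit alike. All names here are made up.
    def generate(weights, prompt, system_context, seed):
        # Fold every input (including the usually-hidden ones) into the
        # RNG seed, the way real inference folds them into the forward pass.
        rng = random.Random(f"{weights}|{prompt}|{system_context}|{seed}")
        vocab = ["the", "engineers", "know", "how", "it", "works"]
        # Sampling looks unpredictable, but with every input pinned down
        # the result is fully reproducible on a deterministic machine.
        return " ".join(rng.choice(vocab) for _ in range(6))

    a = generate("v1-weights", "explain yourself", "be helpful", seed=42)
    b = generate("v1-weights", "explain yourself", "be helpful", seed=42)
    assert a == b  # identical inputs, identical output
    ```

    Vary any one of those inputs (a different seed, a tweaked system context) and the output changes, which is exactly why "we can't tell how this output arose" really means "we didn't log all the inputs," not "nobody can understand it."
    
    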