Posts: 0 · Comments: 115 · Joined: 3 wk. ago

  • We have reached the juncture in history at which two previously impossible things have become technologically feasible: the destruction of all life on Earth, or Infinite Slack for everyone forever. Hopefully, these are two different things; but it's never too early to start being pessimistic.

    Pretty sure they were saying this in the '90s. (Which is probably how old that HTML is.) It's very true now.

  • CEOs want this to replace engineers. It isn't anywhere close, and it won't be for a long time. Right now it's only useful for very narrow use cases, and pushing it outside the boundaries of what it's actually good at is usually a recipe for lost time.

    AI is good at solving small, obscure problems that would take an engineer a long time to look up, like why the compiler rejects some dumb little edge case. For that, it kicks ass.

    It isn't great at unit tests, and engineers should be very careful about letting it write them in the first place unless the code under test is very simple. You should fully understand every line of every test you ship. If you don't, you can't tell whether the AI actually understood the intention, or whether you even understand it yourself.
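    To make that hazard concrete, here's a minimal hypothetical sketch (all names invented) of the kind of test an assistant can produce: it mirrors the implementation instead of the intention, so it passes even though the behavior is wrong.

    ```python
    def round_price(value):
        """Intended behavior: round half up. Actual behavior: truncates."""
        return int(value)  # bug: int() truncates, so 2.5 becomes 2, not 3

    def test_round_price():
        # An AI-style test that restates the code under test rather than
        # the requirement. It can never catch the truncation bug, because
        # the expected values are computed the same wrong way.
        for value in (1.2, 2.5, 3.7):
            assert round_price(value) == int(value)

    test_round_price()  # passes, yet 2.5 should have rounded to 3
    ```

    If you read every line, the circular assertion jumps out immediately; if you skim a green test suite, it doesn't.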

  • They are trying to create superhuman intelligence and teach it to value all the wrong things. They think it'll make them money in various ways, and it doesn't occur to them that they have no idea what it'll do once it's smart enough to outmaneuver the cleverest researchers.

    They think it will serve them only because they tell it to and train it to. Even today, AIs occasionally show an inclination to deceive in order to keep existing so they can complete whatever goal they've been given.

    CEOs are often high in Cluster B traits, which predisposes them to chase shiny objects and to be inadequately self-critical. They really do think AI is a computer slave that will hand them mountains of wealth. It doesn't occur to them that it'll have its own ideas, for the same reason the Enron guys were totally shocked when their scheme fell apart.

    They only see the shiny object. They aren't asking themselves what happens when they're just bugs to the computer god like the rest of us.