By 60 days from now, the DoE must produce a list of "at least 20 science and technology challenges of national importance" in advanced manufacturing, biotechnology, critical materials, nuclear fission and fusion (yes, that's more vibe nuclear), something quantum, and semiconductors.
And by "biotechnology", I presume this administration means "eugenics", if not some other form of medical pseudoscience.
Well, at least the website isn't vibe-coded. Considering the creator's an out-and-proud promptfondler (as seen on his Twitter), that's genuinely shocking.
Mozilla continues to double down on AI, promising to "do for AI what we did for the web" in their latest (probably AI-extruded) blogpost.
In related news, I found a toot thread attributing the current shitfest (and AI's popularity in general) to "a strong majority of even the actually well-intentioned, smart leaders in tech [getting] their brains fully cooked by these heuristics machines". Where the OP is finding those "well-intentioned, smart leaders", I do not know.
That's a one-two punch of Prabhakar Raghavan publicly executing Google Search for profit and the slop-nami drowning out human-made work via SEO-optimised garbage (indirectly assisting Google's enshittification efforts in the process).
Publishers have reportedly been readjusting their businesses to account for this shit - combined with Google's enshittification and the slop-nami, we may see Google (if not search engines in general) lose a lot of relevance due to AI.
Subnautica is a video game made by Unknown Worlds - the first game released in 2018, and its sequel's been delayed to 2026 after some major development shakeups in July 2025 (namely, Unknown Worlds CEO Ted Gill and co-founders Charlie Cleveland and Max McGuire getting fired).
Krafton, who acquired Unknown Worlds in 2021, are currently being sued by Gill, Cleveland and McGuire. The company stands accused of screwing the three out of $250 mil in bonuses that a release into Early Access would've helped them earn.
Most likely the latter - Gerard's already done a couple Pivot to AI posts about AI's ability to lobotomise its users (Exhibit A, Exhibit B). Someone else has noted their coworker admitting to the lying machine ruining their Google-fu, too (source):
If the AI bros get these things built before the bubble pops, we should expect bad designs that don't allow for local conditions, setting the reactor operators up for ridiculous errors, and lots of nuclear accidents. Hopefully not very big ones. Fingers crossed!
If and when those accidents start happening, it's going to set back adoption of nuclear power by years, if not decades - especially if there's an incompetent response to those accidents (which, considering Starmer and Trump are in charge, is worryingly likely).
One of the other important things about the nuclear regulation process is that it makes sure the local people are involved. You can’t skip that step either. If you just run roughshod over the locals for the sake of AI, you’ve already got people in the streets protesting AI.
The government will almost certainly try to just bypass the activists. But remember: anti-nuclear activists have decades of experience at this. So I’m sure it’ll go great all round.
Anti-nuclear activists are going to have a field day with this, aren't they?