BlueMonday1984 @ BlueMonday1984 @awful.systems Posts 36Comments 391Joined 1 yr. ago
The curl Bug Bounty is getting flooded with slop, and the security team is prepared to do something drastic to stop it. Going by this specific quote, reporters falling for the hype is a major issue:
As a lot of these reporters seem to genuinely think they help out, apparently blatantly tricked by the marketing of the AI hype-machines, it is not certain that removing the money from the table is going to completely stop the flood. We need to be prepared for that as well. Let’s burn that bridge if we get to it.
Shot-in-the-dark prediction here - the Xbox graphics team probably won't be filling those positions any time soon.
As a sidenote, part of me expects more such cases to crop up in the following months, simply because the widespread layoffs and enshittification of the entire tech industry is gonna wipe out everyone who cares about quality.
New high-strength sneer from Matthew Hughes: The Biggest Insult, targeting "The Unspeakable Contempt At The Heart of Generative AI"
It would be really funny if Devin caused a financial crash this way
They're LWers, they already baked their psyche long ago
Machine learning is essentially AI with a paper-thin disguise, so that makes sense
Zed Run: a play to earn (P2E) virtual horse NFT racing game. Defunct as of February, probably due to rug pulling, they are pivoting to “Zed Champions”, which is… pretty much the exact same thing, with likely the same fate.
They're also (indirectly) competing with Umamusume: Pretty Derby, which offers zero P2E elements, but does offer horse waifus and actual entertainment value. Needless to say, we both know who's winning this particular fight for people's cash.
EquineChain: a blockchain platform for tracking horse care history, because apparently people don’t trust horse caregivers and need GPUs to remember how much ivermectin and ketamine their show-ponies have mainlined.
It'd arguably be helpful if the caregivers are helping themselves to the stash, but I doubt there's anything stopping then from BSing the blockchain, too.
But how are they going to awkwardly cram robots in everywhere, to follow up the overwhelming success of AI?
Good question - AFAICT, they're gonna struggle to find places to cram their bubble-bots into. Plus, nothing's gonna stop Joe Public from wrecking them in the streets - and given we've already seen Waymos getting torched and Lime scooters getting wrecked these AI-linked 'bots are likely next on the chopping block.
Don't we know it.
To my knowledge, previous bubbles happened in the background, with the general public feeling little effect from them.
Here, the entire bubble has happened directly in the public eye, either though breathless hype being shoved down their throats or through the bubble's negative externalities directly impacting their lives.
With that in mind, I expect this upcoming winter to be particularly long, with public mockery/condemnation of AI to be particular hallmarks.
Glad I could help with writing this.
I've already predicted that AI will completely and permanently disappear once the bubble bursts, and between AI's utterly radioactive public image and businesses' increasing realisation that AI is a useless money pit, its a prediction I've only grown more confident in over time.
The users who choose Cursor are hardcore vibe addicts. They are tech incompetents who somehow BSed their way into a developer job. They cannot code without a vibe coding bot. I compared chatbots to gambling and cocaine before, and Cursor fans are the most abject gutter krokodil addicts.
They're also easily comparable to psychics in how they con people, and of course there's all the reports of them crippling critical thinking and generally making you stupider.
So, these things are essentially brainrot-as-a-service.
trying to explain why a philosophy background is especially useful for computer scientists now, so i googled “physiognomy ai” and now i hate myself
Well, I guess there's your answer - "philosophy teaches you how to avoid falling for hucksters"
Its also completely accurate - AI bros are not only utterly lacking in any sort of skill, but actively refuse to develop their skills in favour of using the planet-killing plagiarism-fueled gaslighting engine that is AI and actively look down on anyone who is more skilled than them, or willing to develop their skills.
Someone finally flipped a switch. As of a few minutes ago, Grok is now posting far less often on Hitler, and condemning the Nazis when it does, while claiming that the screenshots people show it of what it’s been saying all afternoon are fakes.
LLMs are automatic gaslighting machines, so this makes sense
Sarah Skidd, in Arizona, was called in to fix some terrible chatbot website writing. She charged $100 an hour [...] Skidd now has a side business fixing these.
The AI bros were right - AI is creating new business opportunities /s
Where there’s muck, there’s brass. And sometimes the muck is toxic waste. And radioactive. So if you get called in to fix a vibe-slopchurned disaster, charge as much as you can. Then charge more than that.
If someone's using AI, its a sign that they're (a) Nigerian Prince levels of gullible and (b) an anti-human tech asshole who fundamentally does not respect labour. Scamming these kinds of people is a moral duty.
Another day, another jailbreak method - a new method called InfoFlood has just been revealed, which involves taking a regular prompt and making it thesaurus-exhaustingly verbose.
In simpler terms, it jailbreaks LLMs by speaking in Business Bro.
"Another thing I expect is audiences becoming a lot less receptive towards AI in general - any notion that AI behaves like a human, let alone thinks like one, has been thoroughly undermined by the hallucination-ridden LLMs powering this bubble, and thanks to said bubble’s wide-spread harms […] any notion of AI being value-neutral as a tech/concept has been equally undermined. [As such], I expect any positive depiction of AI is gonna face some backlash, at least for a good while."
Well, it appears I've fucking called it - I've recently stumbled across some particularly bizarre discourse on Tumblr recently, reportedly over a highly unsubtle allegory for transmisogynistic violence:
You want my opinion on this small-scale debacle, I've got two thoughts about this:
First, any questions about the line between man and machine have likely been put to bed for a good while. Between AI art's uniquely AI-like sloppiness, and chatbots' uniquely AI-like hallucinations, the LLM bubble has done plenty to delineate the line between man and machine, chiefly to AI's detriment. In particular, creativity has come to be increasingly viewed as exclusively a human trait, with machines capable only of copying what came before.
Second, using robots or AI to allegorise a marginalised group is off the table until at least the next AI spring. As I've already noted, the LLM bubble's undermined any notion that AI systems can act or think like us, and double-tapped any notion of AI being a value-neutral concept. Add in the heavy backlash that's built up against AI, and you've got a cultural zeitgeist that will readily other or villainise whatever robotic characters you put on screen - a zeitgeist that will ensure your AI-based allegory will fail to land without some serious effort on your part.
My only hope for this is that the GPUs in these CDO spiritual successors become dirt cheap afterwards.
They hopefully will, since the end of the AI bubble will kill AI for good and crash GPU demand.
Stubsack: Stubsack: weekly thread for sneers not worth an entire post, week ending 13th July 2025
Bonus: He also appears to think LLM conversations should be exempt from evidence retention requirements due to ‘AI privilege’ (tweet).
Hot take of the day: Clankers have no rights, and that is a good thing
Stubsack: Stubsack: weekly thread for sneers not worth an entire post, week ending 6th July 2025
Stubsack: Stubsack: weekly thread for sneers not worth an entire post, week ending 29th June 2025
Stubsack: weekly thread for sneers not worth an entire post, week ending 22nd June 2025
Stubsack: weekly thread for sneers not worth an entire post, week ending 15th June 2025
Stubsack: weekly thread for sneers not worth an entire post, week ending 8th June 2025
Stubsack: weekly thread for sneers not worth an entire post, week ending 1st June 2025
Stubsack: weekly thread for sneers not worth an entire post, week ending 25th May 2025
Stubsack: weekly thread for sneers not worth an entire post, week ending 18th May 2025
Stubsack: weekly thread for sneers not worth an entire post, week ending 11th May 2025
Stubsack: weekly thread for sneers not worth an entire post, week ending 4th May 2025
Stubsack: weekly thread for sneers not worth an entire post, week ending 27th April 2025
Stubsack: weekly thread for sneers not worth an entire post, week ending 20th April 2025