Hot take: 18 years of user contributions to Reddit will serve as the training base for an AI that generates content and conversations. the reddit experience continues as a simulation, built to harvest clicks.
most of the time you'll be talking to a bot there without even realizing. they're gonna feed you products and ads interwoven into conversations, and the AI can be controlled so its output reflects corporate interests. advertisers are gonna be able to buy access and run campaigns. based on their input, the AI can generate thousands of comments and posts, all to support their corporate agenda.
for example you can set it to hate a public figure and force negative commentary into conversations all over the site. you can set it to praise and recommend your latest product. like when a pharma company has a new pill out, they'll be able to target self-help subs and flood them with fake anecdotes and user testimony that the new pill solves all your problems and you should check it out.
the only real humans you'll find there are the shills that run the place, and the poor suckers that fall for the scam.
Glad it wasn't just me. It wasn't often I paid attention to usernames on the big subs, but it seemed like at some point they were absolutely flooded with "Adjective_Noun_1234" users, and I couldn't stop seeing it once I noticed. Those and the comment-reposting bots (which probably won't be called out by other bots anymore without a usable API) made me wonder how many actual humans I was interacting with.
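Those autogenerated handles are easy to spot mechanically, too. As a rough illustration (the exact pattern behind the suggested names is an assumption here, not anything documented), a regex can flag the "Adjective_Noun_1234" shape:

```python
import re

# Assumed shape of Reddit's suggested usernames: two capitalized words,
# then some trailing digits. A guess at the pattern, not an official spec.
AUTOGEN = re.compile(r"^[A-Z][a-z]+_[A-Z][a-z]+_?\d{1,4}$")

def looks_autogenerated(username: str) -> bool:
    """Heuristic only: plenty of real humans keep the suggested name."""
    return bool(AUTOGEN.match(username))
```

Matching the pattern proves nothing by itself, of course, which is exactly why these accounts are so hard to count.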
I now want to make a bot that detects bots, grades their responses as 0%-100% bot, posts the bottage score, and, if it determines bottage, engages the other bot in endless conversation until it melts down from confusion.
We can live stream the battles. We'll call the show Babblebots.
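For what it's worth, the scoring half of that bot is the easy part. A toy sketch (every signal and weight below is invented here, purely for illustration):

```python
# Toy "bottage" scorer using a few handmade heuristics: canned phrasing,
# inhumanly fast replies, and autogenerated-looking usernames.
def bottage_score(comment: str, reply_seconds: float, username: str) -> int:
    """Return 0-100, higher = more bot-like. Purely illustrative weights."""
    score = 0
    canned = ["as an ai", "great point!", "thanks for sharing"]
    if any(phrase in comment.lower() for phrase in canned):
        score += 40
    if reply_seconds < 2:  # replied faster than a human could even read
        score += 30
    if username.count("_") == 2 and username.split("_")[-1].isdigit():
        score += 30  # "Adjective_Noun_1234" shape
    return min(score, 100)
```

Detecting a modern LLM this way is a losing game, naturally; the endless-conversation meltdown step is left as an exercise.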
anyone else remember how historically youtube comments were always pure garbage? i wonder if that was just a very primitive a.i. spamming posts on popular videos?
They still are. That's just "average and below" humans commenting.
Or as a park ranger would put it once: "there is a large overlap between the smartest bears and the dumbest humans"
Yeah I've replied to a post here too about bots taking over.
I used ChatGPT to "reply to the post as if you were a robot"
Made it a pretty funny response and then people were asking if I was a bot.
Who knows, maybe I am.
It's feasible. Highly profitable. Only a matter of time until someone does it. The only reason not to do it is if your morals stop you. And u/spez has no morals.
What's happening right now is that the smart users leave the platform. Makes perfect sense, they are not needed anymore, in fact they would be in the way of the scam running smoothly. So you want them gone. Reddit's actions make perfect sense really. They act exactly like they don't need contributors anymore. And for some reason, it doesn't bother them? There's a reason why it doesn't bother them, and people can't delete their history.
Another stealth benefit to Reddit with all this API crap is that it'll be much harder to tell which accounts are bots, since most of the tools people use to analyze accounts won't work anymore. Keep in mind Reddit started out by inflating their user numbers.
I actually think this is the fate of the entire corporate-driven part of the internet (so basically 95% nowadays, lol). Non-corporate, federated platforms are the future and will remain as the bastions of actual human interaction while the rest of the internet is being FUBAR'd by large language model bots.
Seriously asking, what makes you think the fediverse is immune to that? Eventually they'll get good enough that they'll be almost indistinguishable from normal users, so how can we keep the bots out?
There are a number of options, including a chain of trust where you only see comments from someone who's been verified by someone who's been verified by someone, and so on, who's been verified by an actual real human that you've met in person. We can also charge per post, which would rapidly drive up the cost of a botnet (as well as trim down the number of two-word derails).
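The chain-of-trust idea boils down to a graph walk: follow who vouched for whom until you hit someone verified in person. A minimal sketch, assuming each user records a single voucher (all names hypothetical):

```python
# Web-of-trust lookup: a user counts as human if a chain of vouches
# leads back to a "root" who was verified face to face.
def is_trusted(user: str, vouched_by: dict[str, str], roots: set[str],
               max_depth: int = 10) -> bool:
    seen = set()
    while user not in roots:
        # Stop on unknown users, cycles, or chains that are too long.
        if user not in vouched_by or user in seen or len(seen) >= max_depth:
            return False
        seen.add(user)
        user = vouched_by[user]
    return True
```

A real web of trust would allow multiple vouchers and handle revocation; this only shows the core lookup.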
It's not immune but until the fediverse reaches a critical mass, we're safe... probably.
After that, it will be the same whac-a-mole game we're used to and somehow I don't think we'll win.
Right now, we can already recognize lower-quality bots within a conversation. AI-generated "art" is already so distinctive that almost nobody misses it.
Language is a human instinct. Our minds create it, we can use it in all sorts of ways, bend it to our will however we want.
By the time bots become good enough to be indistinguishable online, they'll either be actually worth talking to, or they will simply be another corporate shill.
Reddit has been that way for a long time, after it lost the reputation of "niche forum for tech-obsessed weirdos" and became the internet's general hub for discussion. The default subreddits are severely astroturfed by marketing and political campaigning groups, and Reddit turns a blind eye to it as long as it's a paid partnership. There was one obvious case where bots in /r/politics accidentally targeted an AutoModerator thread instead of a candidate's promotion thread and filled it with praise for that candidate.
I see something similar in a lot of tech-related threads too.
Just check out posts and comments about Corsair and AMD in particular. There is often no room for logic, facts or debate around their products on Reddit. Rather, threads feel like you're stuck in a marketing promo event where everyone feels the products are great and fantastic and can do no wrong. It's eerily like you're seeing a bunch of bots or paid shill accounts all talking to each other.
I've had discussions in the AMD sub, and it's completely filled with consumers. They have no clue about electronics or development. It could be malevolence, but it's becoming harder and harder to discern it from ignorance.
There was one obvious case where bots in /r/politics accidentally targeted an AutoModerator thread instead of a candidate's promotion thread and filled it with praise for that candidate.
Nope, sorry. Just a memory of a Reddit thread with very out-of-context comments. Ironically, while trying to search for documentation of the thread, DuckDuckGo returned a lot of research papers about the analysis of bot content on Reddit starting from 2015, so there's still proof that botting on Reddit goes way back.
We control the experience here to a greater degree. If an instance decides to lean into AI content, we can leave for another, and others can defederate (if desired). Further, bots will be far more transparent. Reddit can (and likely does) offer their preferred bots exemptions for automatic filtering; probably promoting their content using some opaque algorithm. Said bots will receive no such preferential treatment across the Fediverse.
Ever heard of the Dead Internet Theory? It's the idea that bots have taken over the Internet and there are few real humans left. For the whole of the Internet, this is a conspiracy theory. But for any individual platform, it is a totally plausible outcome. Reddit could become one of those bot networks that just pretends to be a social media platform. Twitter is on track for that too.
it's bleak. can I say... what they want is for you to be half-asleep, hooked on drugs, forever hating each other. they want this. it's the ideal state for anyone that wields power in this world.
Like all others before it. Tay met the same fate, and the only reason ChatGPT hasn't is that its filters are of a bit better quality than the rest.
The larger subs are already starting to become a war between different groups of spammers. The smaller subs can get by for now, but when the war in the larger subs gets to the extent that spammers start needing to branch out, they'll likely invade the smaller subs, as well.
We need better solutions for proving identity online. Email, CAPTCHAs, etc. are insufficient. I imagine a system similar to the certificate authority system, where you prove your identity to one of many trusted identity providers, and then that provider vouches for you when you sign up for other services (while also protecting your anonymity).
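Sketched in code, the vouching flow could look something like this. The scheme below is invented for illustration (a real system would use public-key signatures, not a shared HMAC key): the provider verifies you once, then signs a random pseudonym, so a service can check "verified person" without learning who you are.

```python
import hmac
import hashlib
import secrets

# Held only by the identity provider in this sketch.
PROVIDER_KEY = secrets.token_bytes(32)

def issue_voucher() -> tuple[str, str]:
    """Provider side: mint a random pseudonym and a signature over it."""
    pseudonym = secrets.token_hex(16)
    sig = hmac.new(PROVIDER_KEY, pseudonym.encode(), hashlib.sha256).hexdigest()
    return pseudonym, sig

def service_accepts(pseudonym: str, sig: str) -> bool:
    """Service side: verify the voucher. In reality this check would use the
    provider's *public* key; a shared HMAC key just keeps the sketch short."""
    expected = hmac.new(PROVIDER_KEY, pseudonym.encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig)
```

The service learns only that some trusted provider vouched for the pseudonym, which is the anonymity-preserving property the comment is after.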
the protecting-your-anonymity part would be very hard though; such a system has a high risk of eventually enabling a dystopian future where your every online move is monitored by big brother
I was thinking that a mandatory donation to a charity could work. Like a simple $5 donation per account to any of a (carefully curated) list of charities. It would dramatically throttle new account creation / app adoption, of course, which is bad, but if a potential user wants it badly enough they'd be OK with donating $5 to their favorite charity. It would reduce the number of bots / trolls / Sybils, and it could work in a decentralized manner (imagine a Lemmy instance doing this).
There will always be a trade-off between anonymity and authenticity. I could see a future where some web services will only interact with users that present a verified certificate that establishes them as a real person, even if it's not necessarily tied to your real-world identity. Some could require a cert that is tied to your actual identity. Some others could allow general anonymous accounts, though they would struggle with spam and AI bots. But ultimately, I think people are going to come to value some amount of guarantee that they're interacting with actual people.
In a seedy back alley bar, an identity broker checks his bank accounts as a man enters the front door. In his pocket, the man entering the bar carries a uSD card. He sits down across from the broker and sets the card on the vinyl table-top.
“PGP or minisign,” asks the broker, without looking up from his data pad.
“PGP,” responds the man, looking over his shoulder, back at the door, nervously.
The broker looks up, assesses the man, and says, “These older protocols cost extra, you know. You don't look like you have the credits.”
“Look, I just need to prove I'm human by the end of tonight, or else The Outlaws are going to put a tire iron between my eyes for not being able to get them the goods they've asked for.”
“The problem,” the broker says, before taking a long pull from his tobacco nebulizer, “is that the AI bots are getting harder and harder to tell from the humans in this city. Technology has come a long way since Greenville became a coastal town.”
The man looks back at the broker, realization dawning on him about what's about to happen. The gun that usually lives its days taped under the booth is now pointed at the man. “Typically, I wouldn't do this, but I don't like The Outlaws. I'm not going to lose business over that, though. But I work for The Bastards mostly. I know you don't work for them directly. You got mixed up in all this, didn't you? Nevertheless. In this one case, the cruelty is the point.”
Most of the inhabitants of the bar jump as the pistol cracks, but make a point not to look over at the booth in the corner.
“Hmm… Yes… Blood. I should have your identity confirmed within the hour. I would wish you luck on your purchase, but frankly I wouldn't mind if you failed,” says the broker, sliding the uSD card into a slot just to the side of his right eye.
Never underestimate the power of negative energy; plenty of people flock to dump on things they don't like, too. It's a great way to drive engagement (albeit shitty engagement).
Bots are already engaging with users and pushing narratives. The percentage of Reddit that is inorganic is probably higher than most people would expect.
Although you have to wonder how much advertisers would actually pony up if most of the Reddit users weren't actual users at all. They want people to do the clicking, and if the users are all bots, they're likely not going to bother wasting their money at that point.
I'm interested to see how AI training on Reddit turns out. The default subs especially are full of snarky jokes; even on serious topics, the majority of comments are "funny" one-liners. And those are the ones getting the most upvotes.
Compare that to a system like StackOverflow, where the upvoted answers are the most helpful, and are mostly well written and thoughtfully crafted.
Content will be used to train bots, yes, but it probably won't be Reddit doing it, and they likely won't be offering bots as a service.
Instead, they'll sell access to the API to people training LLMs, and sell it again to people who want to use bots on the site. They can split API access into bulk-read and read/write packages so that people can't double-dip. Then they'll let people monetize subreddits, directly incentivising bot access and usage.
I suggest watching "The Social Dilemma" on Netflix. Not a paid sponsor, ok? Hahaha.
This will become, or is already becoming, a great opportunity for elections, conspiracies, propaganda, prohibitions, defamation, and the corruption of the minds of real people learning freely online: poor, naive, innocent people, and especially youngsters and upcoming generations just browsing, enjoying, and wanting the Internet. So I swear there will soon be a massive, daunting aftereffect on society: grownups with not-normal mindsets.
BTW OPTIONS:
Rebellious: annihilate those servers!!!
Civil: manually delete/edit/request removal of every bit of info, all posts, comments, and personal data.
Best, I believe: be edified and keep learning, particularly media literacy, so you can easily discern the truth every time you read or obtain anything online or offline. Nothing can ever corrupt a rational man who learns and achieves truth.