What do you think? Is it some sort of a bug or do people run bot farms?
Edit2: It's now been 3 days, and we went from 150 000 user accounts 3 days ago to 700 000 user accounts today, making it 550 000+ bot accounts and counting. Almost 80% of accounts on Lemmy are now bots, and it may end up being a very serious issue for the Lemmy platform once they become active.
Edit3: It's now the 4th day of the attack, and the number of accounts on Lemmy has almost reached 1 200 000. Almost 90% of the total userbase is now bots.
Edit 3.1: My numbers are outdated; there are currently 1 700 000 accounts, which makes it even worse: https://fedidb.org/software/lemmy
A few people control a large number of bots. They can manipulate upvotes and downvotes: silence opinions they don't like, boost the ones they support. They can flood everyone's feed with whatever topic they like; they get to choose what is important, what people get to think about. They can harass any single user by downvoting their posts or being generally unpleasant all the time, giving the impression that the community agrees. They can create a fake impression of consensus on any given topic.
Now that bots basically pass the Turing test, they can get you to almost never interact with a real person, but instead with machines that never actually learn, listen, or change their mind. That sort of thing could erode anyone's opinion of their fellow humans. It could make one think there's no possibility of common ground with their adversaries.
Don't underestimate the bots; they're responsible for most of the political turmoil of the last decade.
I think this happened to Reddit, really. It became so preoccupied with one way of thinking, with no tolerance for or interest in other opinions, that it was scary.
Bots are a fight we will have on our hands, and a coordinated effort will be (is) necessary.
We will have to study the patterns known human users have and the patterns bots have, and at least one line of defense should be built from there.
I am thinking of key-signing parties, so that we organically confirm known humans and put more trust in that data when training our bot-detection algorithms. Maybe even our own bots for detecting bots; all these tools go both ways, and we can use them too.
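As a minimal sketch of what one line of defense based on patterns might look like: the attack described above shows up as a registration burst far above an instance's normal signup rate, so even a crude sliding-window counter would flag it. Everything here is hypothetical (the function name, the baseline of 50 signups/hour, the 10x threshold); a real detector would learn the baseline from the instance's own history.

```python
from datetime import datetime, timedelta

def flag_signup_bursts(signup_times, window=timedelta(hours=1),
                       baseline=50, factor=10):
    """Return the start times of windows that look like bot waves.

    signup_times: datetimes of new account registrations.
    baseline: assumed normal signups per window for this instance.
    factor: how far above baseline counts as suspicious.
    """
    if not signup_times:
        return []
    times = sorted(signup_times)
    flagged = []
    t = times[0]
    while t <= times[-1]:
        # Count registrations falling inside this window.
        count = sum(1 for s in times if t <= s < t + window)
        if count > factor * baseline:
            flagged.append(t)
        t += window
    return flagged
```

Accounts registered inside a flagged window could then be rate-limited or held for verification rather than deleted outright, since a burst can also be organic (e.g. a Reddit exodus).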
It's been quite funny seeing people talking about how Meta is going to come onto fedi and scrape everyone's Mastodon posts.
Inasmuch as Meta gives a shit about scraping content, it is so that they can translate that into marketing reach. They don't give a shit about your toots because they have no reasonable means of generating value for themselves from those posts, because they have no way of making you look at ads based upon them.
Even if they could somehow generate value from your posts, they don't need to start an ActivityPub-based social network to scrape everything you say. They can just open a no-name account on a reasonably well-federated server like mastodon.social, grab an API key, and suck down as much as they please. There's no mechanism to prevent them from doing so.
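To illustrate just how low the barrier is: Mastodon's documented REST API exposes a public timeline endpoint, and on most instances reading it doesn't even require an API key. A sketch using only the standard library (the function names are mine; the endpoint path and the `limit`/`max_id` parameters are Mastodon's):

```python
import json
import urllib.request

# Documented Mastodon endpoint for an instance's public timeline.
API_PATH = "/api/v1/timelines/public"

def timeline_url(instance, limit=40, max_id=None):
    """Build the URL for one page of an instance's public timeline.

    Mastodon caps `limit` at 40; `max_id` pages backwards in time.
    """
    url = f"https://{instance}{API_PATH}?limit={min(limit, 40)}"
    if max_id is not None:
        url += f"&max_id={max_id}"
    return url

def fetch_page(instance, max_id=None):
    """Download one page of public posts as parsed JSON (no login needed)."""
    with urllib.request.urlopen(timeline_url(instance, max_id=max_id)) as resp:
        return json.load(resp)
```

Calling `fetch_page("mastodon.social")` repeatedly, passing the `id` of the oldest post back as `max_id`, walks the whole public timeline; anyone with a laptop can do it, not just Meta.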
ActivityPub is not private and never has been. While obviously it's morally dubious to scrape fedi, there's nothing technologically preventing anyone from doing so, and frankly there are many worse actors that could do so than Meta.
The normal operation of ActivityPub itself means that the moment your post gets federated to other servers, you lose final control over its viewership and destination.
Basically, if there are people you wouldn't want seeing, downloading, or storing what you have to say, don't post it publicly, and definitely don't use ActivityPub.
Sorry, bit of a tangent, but a lot of people are pretty oblivious to how obnoxiously unprivate fedi is.
What’s the point of getting the data if you can’t advertise to those people? There is no ad space in the fediverse, and it’s easy to defederate bot-infested instances that might be trying to advertise through vote boosting.
There's no need for ad space when you can create posts containing ads and promote them using bots. Bots can upvote at a realistic rate, and a person can run multiple bot/alt accounts, with ChatGPT writing realistic comments, to make a product look good.
Owners can use those bots to boost chosen posts/comments with a lot of upvotes, or downvote something into oblivion if they don't like it. Bots can also be used for spam and advertising. Overall, if the bots become active, the platform will be fucked as the quality of everything goes down. One problem that affects us already is that we've lost a reliable way of telling how many actual users are on the platform.
Karma doesn't matter; it's the power to make whatever you want visible by upvoting it however many times you want, or to make something invisible by downvoting it if you don't like it. As long as the number of downvotes/upvotes stays realistic, it will be impossible to know when bots have touched something.