Private voting has been added to PieFed

We had a really interesting discussion yesterday about voting on Lemmy/PieFed/Mbin: whether votes should be private or not, whether they are already public and to what degree, and whether another way is possible. There was a widely held belief that votes should be private, yet it was repeatedly pointed out that a quick visit to an Mbin instance is enough to see all the upvotes, and that Lemmy admins already have a quick and easy UI for upvotes and downvotes (with predictable results). Some thought that using ActivityPub automatically means any privacy is impossible (spoiler: it doesn't).

As a response, I’m trying this out: PieFed accounts now have two profiles within them - one used for posting content and another (with no name, profile photo, bio, etc.) for voting. PieFed federates content using the main profile most of the time, but when sending votes to Mbin and Lemmy it uses the anonymous profile. The anonymous profile cannot be associated with its controlling account by anyone other than your PieFed instance admin(s). There is one and only one anonymous profile per account, so it will still be possible to analyze voting patterns for abuse or manipulation.

ActivityPub geeks: the anonymous profile is a separate Actor with a different url. The Activity for the vote has its “actor” field set to the anonymous Actor url instead of the main Actor’s. PieFed provides all the usual url endpoints, WebFinger, etc. for both actors but only exposes user-provided PII for the main one.
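
For the curious, here is roughly what that looks like on the wire, sketched as a Python dict. All the URLs below are made up for illustration; only the field names come from ActivityPub:

    # rough shape of a federated vote Activity; every URL here is hypothetical
    vote_activity = {
        "@context": "https://www.w3.org/ns/activitystreams",
        "id": "https://piefed.social/activities/like/abc123",  # hypothetical activity id
        "type": "Like",  # downvotes federate as "Dislike"
        "actor": "https://piefed.social/u/x9f2k31",  # the anonymous Actor, not the main profile
        "object": "https://lemmy.world/post/123456",  # the post being voted on
    }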

That’s all it is. Pretty simple, really.

To enable the anonymous profile, go to https://piefed.social/user/settings and tick the ‘Vote privately’ checkbox. If you make a new account now it will have this ticked already.

This will be a bit controversial for some. I’ll be listening to your feedback and am here to answer any questions. Remember this is just an experiment which could be removed if it turns out to make things worse rather than better. I've done my best to think through the implications and side-effects, but there could be things I missed. Let's see how it goes.

161 comments
  • Dude this is genius

    I am interested to see how it plays out, but the idea of the instance admin being able to pierce the veil and investigate things that seem suspect (while being responsible, just as now, for their instance not housing a ton of spam accounts) seems like a perfect balance at first reading.

    Edit: Hahaha now I know Rimu’s alter ego because he upvoted me. Gotcha!

  • While not a perfect solution, this seems very smart. It’s a great mitigation tactic to try to keep users’ privacy intact.

    Seems to me there’s still routes to deanonymization:

    1. Pull posts that a user has posted or commented in
    2. Do an analysis of all actors in these posts. The poster’s voting actor will be over-represented (if they act like I assume most users do; I upvote people I reply to, etc.)
    3. If the results aren’t immediately obvious, statistical analysis might reveal your target (see the sketch below).

    Piefed is smaller than lemmy, right? So if only one piefed voting actor shows up somewhat consistently in posts where few piefed users vote/post/view, you got your guy.

    Obviously this is way harder than just viewing votes. Not sure who would go to the trouble. But a deanonymization attack is still possible. Perhaps rotate the ids of the voting accounts periodically?
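
    To make that concrete, here is a minimal sketch of steps 1-3 in Python, assuming you've already scraped votes into simple structures (all the names here are made up):

      from collections import Counter

      # target_posts: ids of posts the target posted or commented in
      # votes_by_post: post id -> set of voting-actor URLs seen on that post
      def rank_suspects(target_posts, votes_by_post):
          seen = Counter()
          for post_id in target_posts:
              for actor in votes_by_post.get(post_id, set()):
                  seen[actor] += 1
          # an anonymous actor that turns up in nearly every thread the
          # target participates in is the prime candidate for their alt
          return seen.most_common(10)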

    • It will never be foolproof for users coming from smaller instances, even with changing IDs. If you see a downvote coming from PieFed.social you already have it narrowed down to not too many users, and the rest you can probably infer based on who contributes to a given discussion.

      Still, I think it's enough to be effective most of the time.

      • Yea, I agree. It’s good enough. Sorry, I didn’t mean to sound like it was a bad solution; it’s just not perfect, and people ought to be aware of its limitations.

        I used a small instance in my example so the problem was easier to understand, but a motivated person could target someone on a large instance, too, so long as that person tended to vote in the posts they commented on.

        Just for example (and I feel like I should mention, I have no bad feelings towards this guy), Flying Squid on lemmy.world posts all over the place, even on topics with few upvotes. If you pull all his posts, and all votes left in those posts from all users, I bet you could find one voter who stands out from the crowd. You just need to find the guy following him everywhere: himself.

        I mean, if he tends to leave votes in topics he comments on, which I assume he does.

        It would have to be a very targeted attack, and that’s much better than the system lemmy uses right now. I’m remembering the mass tagger on Reddit; I thought that add-on was pretty toxic sometimes.

        Also, it just occurred to me: on Lemmy, when you post you start with one vote, your own. I can even remove this vote (and I’ll do it and start this post off with score 0). I wonder how this vote is handled internally? If the automatic self-upvote were sent from the anonymous actor, every new post would immediately reveal which anonymous actor belongs to its author. That would be an immediate flaw in this attempt to protect people’s privacy.

    • It could be mitigated further by having a different Actor per community you engage in, but that is definitely a bigger change in how voting works currently, and might have issues detecting vote brigading.

    • Not familiar with how piefed handles it specifically, but aren't posts/comments self-upvoted by default?

      You could probably figure it out pretty easily just by looking at a user's posts, no?

      (This is unless piefed makes it so the main actor upvotes their own posts and the anonymous actor upvotes others' posts, but then it would still be possible to do analysis on others' comments to get a pretty accurate guess)

  • The problem with this approach is trust. It works for the users, but not admins. If I run a PieFed instance with this on, how can lemmy.world, for example, trust my tiny instance to be playing by the rules? I went over more details in this other comment.

    Sure, right now admins can contact you, for your instance. But you can't really do that with dozens or hundreds of instances. There are plenty of instances where we tolerate the users, but would you trust the admins with anonymous votes? Be in constant contact with a dozen instance admins on a daily basis?

    It's a good attempt though. Maybe we're all pessimistic and it will work just fine!

    • I can only respond in general terms because you didn't name any specific problems.

      Firstly, remember that each piefed account only has one alt account and it's always the same alt account doing the votes with the same gibberish user name. If the person is always downvoting or always voting the same as another person, you'll see those patterns in their alt and the alt can be banned. It's an open source project, so the mechanics of it cannot be kept secret and they can be verified by anyone with intermediate Python knowledge.

      Regardless, at any kind of decent scale we're going to have to use code to detect bots and bad actors. Relying on admins to eyeball individual posts' activity and manually compare them isn't going to scale at all, regardless of whether the user names are easy to read or not.
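
      To illustrate the kind of automated check I mean, a rough sketch (hypothetical data shapes, not actual PieFed code):

        # votes: actor URL -> {post id: +1 or -1}
        def downvote_ratio(actor, votes):
            vals = list(votes[actor].values())
            return sum(1 for v in vals if v == -1) / len(vals)

        def agreement(a, b, votes):
            shared = votes[a].keys() & votes[b].keys()
            if not shared:
                return 0.0
            return sum(votes[a][p] == votes[b][p] for p in shared) / len(shared)

        # an alt that downvotes ~100% of the time, or that agrees with another
        # actor on nearly every shared post, gets flagged for human review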

      • Firstly, remember that each piefed account only has one alt account and it's always the same alt account doing the votes with the same gibberish user name. It's an open source project, so the mechanics of it cannot be kept secret and they can be verified by anyone with intermediate Python knowledge.

        That implies trust in the person that operates the instance. It's not a problem for piefed.social, because we can trust you. It will work for your instance. But can you trust other people's PieFed instances? It's open-source; I could just install it on my server and change the code to give me 2-3 alt accounts instead. Pick a random instance from lemmy.world's instance list: would you blindly trust them not to fudge votes?

        The availability of the source code doesn't help much because you can't prove that it's the exact code that's running with no modifications, and marking people running modified code as suspicious out of the box would be unfair and against open-source culture.

        I see some deanonymization exploits too: people commonly vote+comment, so with some time you can do correlation attacks and narrow down the accounts. To prevent that, you'd have to break the 1:1 mapping between users and their gibberish alts, at least by letting users rotate them on demand or on a schedule. But then we can't correlate votes to patterns anymore, and everyone's database endlessly fills up with generated alt accounts (that you can't delete).

        If the person is always downvoting or always voting the same as another person, you'll see those patterns in their alt and the alt can be banned.

        Sure, but you lose some visibility into who the user is. Seeing the comments is useful to get a better grasp of who they are. Maybe they're just a serial fact-checker, downvoting misinformation and posting links to reputable sources. It also helps to see whether there's other activity besides just votes; large amounts of votes are less suspicious if you can see the person has also been engaging with comments all day.

        And then you circle back to: do you trust the instance admin to investigate, or even respond to your messages? How is it gonna go when a big, politically aligned instance is accused of botting and the admin denies the claims, but the evidence suggests it's likely? What do we do with Threads, or a hypothetical Twitter going fediverse with Elon still as the boss? Or Truth Social?

        The bigger the instance, the easier it is to sneak a few votes in. With millions of user accounts, you can easily borrow a couple hundred of your long-inactive users' alts and it's essentially undetectable.


        I'm sorry for the pessimism, but I've come to expect the worst from people. Anything that can be exploited will be exploited. I do want this problem to be solved, and it's great that some people like you go ahead and at least try to make it work. I'm not trying to discourage anyone from experimenting with this, but I do think those what-ifs are important to discuss before everyone implements it and then, oops, we have a big problem.

        The way things are, we don't have to put any trust in an instance admin. It might as well not be there; it's just a gateway and file host. But we can independently investigate accounts and ban them individually, without having to resort to banning whole instances, even if the admins are a bit sketchy, because of the inherent transparency of the protocol.

    • It will be extremely obvious if you see 300 distinct voting actors when the instance only has 100 active users.

  • Very interesting development, I'll be curious to see how it ends up working out.

  • How does this work with moderation? I.e. what happens if I ban the real user from a Lemmy instance? What if I ban the alternate user?

    Also, what happens if on Piefed, a user votes for something, then they change the setting and then they vote for the same thing again? How would a Lemmy instance know if it should count the vote or not, since the original user didn't actually vote from Lemmy's point of view?

    • The 'real user' and the 'private voter' are 2 different accounts as far as external instances are concerned, but only 1 as far as piefed.social is concerned. So if you banned either one, it would have the same effect, because PF would locate the same account from the information provided.

      Likewise, a piefed user can't vote twice on something: they make one vote, and the 'private voting' setting determines how it is sent out. The local system has tracked that they have voted, and changing the setting won't change that.
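
      In other words, something like this minimal sketch (illustrative only; the names and storage are made up, not PieFed's actual code):

        local_votes = {}  # (user id, post id) -> vote value, tracked locally

        def send_like(actor_url, post_url, value):
            # stand-in for the real federation call; just shows which actor is used
            print(f"{'Like' if value > 0 else 'Dislike'} from {actor_url} on {post_url}")

        def cast_vote(user, post, value):
            key = (user["id"], post["id"])
            if key in local_votes:
                return  # already voted; flipping 'Vote privately' can't double-count
            local_votes[key] = value
            # the setting only decides which Actor the vote federates from
            actor = user["anon_actor"] if user["vote_privately"] else user["main_actor"]
            send_like(actor, post["ap_url"], value)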

      There's always more work to do of course, but piefed.social is a small instance, with manual approval required for registration, no API to script things like mass downvoting, and concepts such as 'attitude' which would prevent that anyway, so I can't foresee anything too disastrous happening from this little experiment.

      • I'm a little concerned about the precedent this sets. An instance could use this technique to facilitate anonymous commenting or posting in addition to votes.

  • I'm surprised most people are against public votes. Most people already seem to have an anonymous account via some weird username not connected to their real identity. What difference does it make that votes can be viewed, other than for transparency during discussion?

    Maybe I'm the odd one out who uses my real name on the Internet and generally tries to behave/vote the same as I would in person, but it seems weird to want a hybrid account that's private (votes) yet not private (comments).

  • Is it possible to double vote this way (once on each account)? On second thought, would it even matter? A malicious actor could have multiple accounts.

    • No, the other account isn't something you can log into or interact with. PieFed knows whether I've already voted on something, so it won't let me vote again by changing the 'vote privately' setting.

  • So I've been thinking about this and I would go for a different approach.

    Admins can set voting to be public or private on a server wide level.

    When users vote, a key is created as the userid

    The votes table is essentially: voteid, postid, userid, timestamp, salt, public

    If the vote is private, userid is salt(userid, password)

    And it's that simple.

    • With the user id being salted, it's going to be different every time. This means it'll be difficult, if not impossible, to monitor voting trends or abuse.

      Also, how would you use the password unless it was stored in the clear? If it's based on a pre-salted tuple, how does one handle password changes?

      • Dammit! Okay, cancel the salt idea. How about just a simple md5()? Then it should remain a static value, right?

    • @dullbananas@lemmy.ca does the design hold up?

      • This might work well with a separate per-user random secret value instead of the password.
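
        For illustration, a minimal sketch of that variant (all names made up): unlike a bare md5(userid), the pseudonym can't be brute-forced from a list of known user ids without the secret, yet it stays stable so voting patterns can still be analyzed.

          import hashlib, hmac, secrets

          user_secret = secrets.token_bytes(32)  # generated once per user, stored server-side

          def vote_pseudonym(user_id: str, secret: bytes) -> str:
              # same user + same secret -> same pseudonym every time
              return hmac.new(secret, user_id.encode(), hashlib.sha256).hexdigest()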

        Overall the vote privacy issue is a tough dilemma for me.

  • People who post and vote anonymously have no incentive to stand by their comments and votes. Anonymity is how we allow trolls to troll. We already allow fake names with no limits or verification, and now we're trying to protect their fake reputation, too. And for what benefit, exactly?

    Hiding votes like this also allows pretty much anyone to generate as many votes in whatever direction they want. If we could see the votes, we could at least see patterns in reused accounts or personal instances. Without that, anyone can always be "right" by spamming themselves with upvotes, and whoever disagrees will always be "wrong" when they get spammed down.

    What's the point of voting at that point? May as well remove votes altogether, since they're even more pointless than they were on reddit.
