
Soliciting Feedback for Improvements to the Media Bias Fact Checker Bot

Hi all!

As many of you have noticed, several Lemmy.World communities introduced a bot: @MediaBiasFactChecker@lemmy.world. This bot was introduced because modding can be pretty tough work at times and we are all just volunteers with regular lives. It has been helpful and we would like to keep it around in one form or another.

The !news@lemmy.world mods want to give the community a chance to voice their thoughts on some potential changes to the MBFC bot. We have heard concerns that tend to fall into a few buckets. The most common concern we’ve heard is that the bot’s comment is too long. To address this, we’ve implemented a spoiler tag so that users need to click to see more information. We’ve also cut wording about donations that people argued made the bot feel like an ad.

Another common concern people have is with MBFC’s definition of “left” and “right,” which tend to be influenced by the American Overton window. Similarly, some have expressed that they feel MBFC’s process of rating reliability and credibility is opaque and/or subjective. To address this, we have discussed creating our own open source system of scoring news sources. We would essentially start with third-party ratings, including MBFC, and create an aggregate rating. We could also open a path for users to vote, so that any rating would reflect our instance’s opinions of a source. We would love to hear your thoughts on this, as well as suggestions for sources that rate news outlets’ bias, reliability, and/or credibility. Feel free to use this thread to share other constructive criticism about the bot too.
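
To make the aggregation idea concrete, here is a minimal sketch of how an aggregate bias score might be computed. The rater names, labels, and numeric scale below are placeholder assumptions for illustration, not real APIs or an agreed-on design:

```python
# Hypothetical sketch: fold several third-party bias ratings into one score.
# The rater names and the -2..+2 scale are illustrative assumptions only.

# Map textual ratings onto a numeric scale (left negative, right positive).
SCALE = {
    "far-left": -2.0, "left": -1.5, "center-left": -1.0,
    "center": 0.0, "center-right": 1.0, "right": 1.5, "far-right": 2.0,
}

def aggregate_bias(ratings):
    """Average the numeric equivalents of each rater's textual label."""
    scores = [SCALE[label] for label in ratings.values() if label in SCALE]
    if not scores:
        raise ValueError("no usable ratings")
    return sum(scores) / len(scores)

# Example: three hypothetical raters disagree slightly about one outlet.
ratings = {"mbfc": "center-right", "allsides": "center", "community": "right"}
print(round(aggregate_bias(ratings), 2))
```

A community-vote rating could simply be added as one more entry in the dictionary, so instance opinion and third-party raters are weighted the same way.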

210 comments
  • My personal view is to remove the bot. I don't think we should be promoting one organisation's particular views as an authority. My suggestion would be to replace it with a pinned post linking to useful resources for critical thinking and analysing news. Teaching to fish vs giving a fish kind of thing.

    If we are determined to have a bot like this as a community, then I would strongly suggest at the very least removing the bias rating. The factuality rating is at least based on an objective measure of failed fact checks, which you can click through to see. Although even this has problems: sometimes corrections or retractions by the publisher are taken into account and sometimes not, potentially leaving the reader with a false impression of the reliability of the source.

    The bias rating, however, is completely subjective, and sometimes the claimed reasons for a rating actually contradict themselves or other third-party analyses. I made a thread on this in the support community but TLDR, see if you can tell the specific reason for the BBC's bias rating of left-centre. I personally can't. Is it because they posted a negative-sounding headline about Trump once, or is it biased story selection? What does biased story selection mean and how is it measured? This is troubling because, in my view, it casts doubt on the reliability of the whole system.

    I can't see how this can help advance the goal (and it is a good goal) of being aware of source bias when in effect, we are simply adding another bias to contend with. I suspect it's actually an intractable problem which is why I suggest linking to educational resources instead. In my home country critical analysis of news is a required course but it's probably not the case everywhere and honestly I could probably use a refresher myself if some good sources exist for that.

    Thanks for those involved in the bot though for their work and for being open to feedback. I think the goal is a good one, I just don't think this solution really helps but I'm sure others have different views.

  • My personal view is that the bot provides a net negative, and should be removed.

    Firstly, I would argue that there are few, if any, users whom the bot has helped avoid misinformation or a skewed perspective. If you know what bias is and how it influences an article then you don't need the bot to tell you. If you don't know or care what bias is then it won't help you.

    Secondly, the existence of the bot implies that sources can be reduced to true or false or left or right. Lemmy users tend to deal in absolutes of right or wrong. The world exists in the nuance, in the conflict between differing perspectives. The only way to mitigate misinformation is for people to develop their own skeptical curiosity, and I think the bot is more of a hindrance than a help in this regard.

    Thirdly, if it's only misleading 1% of the time then it's doing harm. IDK how sources can be rated when they often vary between articles. It's so reductive that it's misleading.

    As regards an open database of bias, it doesn't solve any of the issues listed above.

    In summary, we should be trying to promote a healthy sceptical curiosity among users, not trying to tell them how to think.

    • Thanks for the feedback. I have had the same thought about it feeling like mods trying to tell people how to think, although I think crowdsourcing an open source solution might make that slightly better.

      One thing that’s frustrating with the MBFC API is that it reduces “far left” and “lean left” to just “left.” I think that gets to your point about binaries, but it is a MBFC issue, not an issue in how we have implemented it. Personally, I think it is better on the credibility/reliability bit, since it does have a range there.

      • That's perhaps a small part of what I meant about binaries. My point is, the perspective of any given article is nuanced, and categorising bias implies that perspectives can be reduced to one of several.

        For example, select a contentious issue like abortion. Collect 100 statements from 100 people regarding various related issues: health concerns, ethics, when an embryo becomes a fetus, fathers' rights. Finally, label each statement as either pro-choice or pro-life.

        For somebody trying to understand the complex issues around abortion, the labels are not helpful, and they imply that the entire argument can be reduced to a binary choice. In a word, it's reductive. It breeds a culture of adversity rather than one of understanding.

        In addition, I can't help but wonder how much "look at this cool thing I made" is present here. I love playing around with web technologies and code, and love showing off cool things I make to a receptive audience. Seeking feedback from users is obviously a healthy process, and I praise your actions in this regard. However, if I were you I would find it hard not to view that feedback through the prism of wanting users to find my bot useful.

        As I started off by saying, I think the bot provides a net negative, as it undermines a culture of curious scepticism.

      • Just a point of correction, it does distinguish between grades. There is "Center-Left," "Left," and "Extreme Left."

  • Who fact-checks the fact-checkers? Fact-checking is an essential tool in fighting the waves of fake news polluting the public discourse. But if that fact-checking is partisan, then it only exacerbates the problem of people divided on the basics of a shared reality.

    This is why a consortium of fact-checking institutions has joined together to form the International Fact-Checking Network (IFCN) and laid out a code of principles. You can find a list of signatories as well as vetted organizations on their website.

    MBFC is not a signatory to the IFCN code of principles. As a partisan organization, it violates the standards that journalists have recognized as essential to restoring trust in the veracity of the news. I've spoken with @Rooki@Lemmy.World about this issue, and his response has been that he will continue to use his tool despite its flaws until something better materializes because the API is free and easy to use. This is like searching for a lost wallet far from where you lost it because the light from the nearby street lamp is better. He is motivated to disregard the harm he is doing to !politics@Lemmy.World, because he doesn't want to pay for the work of actual fact-checkers, and has little regard for the many voices who have spoken out against it in his community.

    By giving MBFC another platform to increase its exposure, you are repeating his mistake. Partisan fact-checking sites are worse than no fact-checking at all. Just like how the proliferation of fake news undermines the authority of journalism, the growing popularity of a fact-checking site by a political hack like Dave M. Van Zandt undermines the authority of non-partisan fact-checking institutions in the public consciousness.

  • Remove it.

    No need for a bot. Obvious misinformation should be removed by the mods. Bias is too subjective to be adjudicated by the mods. Just drop it already. It's consistently downvoted into oblivion for a reason. The feedback has been pretty damn obvious. This whole thread is just because the mods are so sure they're right that they can't listen to the feedback they already got. Just kill the bot.

    1. Please move the bias and reliability outside of the first accordion/spoiler. This is the sole purpose the bot was meant to serve. If we can't see that at a glance, it's bad. I don't see how these few words are "too long" either. I feel like a lot of the space could be cleared by turning the "Search Ground News" accordion into another link in the footer.
    2. While I personally don't see the point of the controversy, it wouldn't be too hard to manually enter Wikipedia's Perennial Sources list into the database that the bot references, especially with MediaWiki's watchlist RSS feed. This would almost certainly satisfy the community.
    3. Open source the database and the bot. Combined with #2, this could also offer an API to query Wikipedia's RSP for everyone to use in the spirit of fedi and decentralization.
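
A minimal sketch of how point 2 might look, assuming the bot keeps a simple lookup table keyed by domain. The status strings and record layout here are illustrative guesses; in practice the list would be fetched from the MediaWiki API or the watchlist RSS feed mentioned above:

```python
# Hypothetical sketch: fold Wikipedia's Perennial Sources statuses into the
# bot's own lookup table. Statuses and record layout are assumptions only.

RSP_STATUS = {
    "generally reliable": "reliable",
    "no consensus": "mixed",
    "generally unreliable": "unreliable",
    "deprecated": "deprecated",
}

def import_rsp_entries(entries):
    """Normalise (domain, rsp_status) pairs into bot database records."""
    db = {}
    for domain, status in entries:
        db[domain] = {
            "reliability": RSP_STATUS.get(status.lower(), "unknown"),
            "source": "wikipedia-rsp",
        }
    return db

# Placeholder sample data, not real RSP entries.
sample = [("example-news.com", "Generally reliable"),
          ("example-tabloid.com", "Deprecated")]
print(import_rsp_entries(sample)["example-tabloid.com"]["reliability"])
```

Tagging each record with its provenance ("wikipedia-rsp") would let an open API serve ratings from several lists side by side, per point 3.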
  • I blocked it straight away so I don't have a dog in this fight but I'm instantly skeptical of any organization that claims to be the arbiter of what is biased and to what degree.

  • I'm sorry, but the whole concept of the bot is bullshit, and as many have said already, the idea is biased per se. I wish I lived in the same world as MBFC, where it seems like all media is left-center.

    If anything, what would be needed would be a bot that checked whether the information in an article contains any known misinformation or incorrect facts. And that would be extremely hard to maintain and update, as a lot of news is posted before any fact checking can be done.

  • Here's the comment reply from when I first asked what was wrong with MBFC. Gotta say. I agree with that comment. I'm surprised more people haven't posted similar examples here.

    https://lemmy.dbzer0.com/comment/12328918

    Edit: here is the text from the linked comment.

    I'm just gonna drop this here as an example:

    https://mediabiasfactcheck.com/the-jerusalem-report/

    https://mediabiasfactcheck.com/the-jerusalem-post/

    The Jerusalem Report (Owned by Jerusalem Post) and the Jerusalem Post

    This biased as shit publication is declared by MBFC as VEEEERY slightly center-right. They make almost no mention of the fact that they cherry pick aspects of the Israel war to highlight, provide only the most favorable context imaginable, yadda yadda. By no stretch of the imagination would these publications be considered unbiased as sources, yet according to MBFC they're near perfect.

    • This biased as shit publication is declared by MBFC as VEEEERY slightly center-right. They make almost no mention of the fact that they cherry pick aspects of the Israel war to highlight

      You keep repeating this lie.

      From their report on the Jerusalem Post:

      Overall, we rate The Jerusalem Post Right-Center biased based on editorial positions that favor the right-leaning government. We also rate them Mostly Factual for reporting rather than High due to two failed fact checks.

      Until 1989, the Jerusalem Post’s political leaning was left-leaning as it supported the ruling Labor Party. After Conrad Black acquired the paper, its political position changed to right-leaning, when Black began hiring conservative journalists and editors. Eli Azur is the current owner of Jerusalem Post. According to Ynetnews, and a Haaretz article, “Benjamin Netanyahu, the Editor in Chief,” in 2017, Azur gave testimony regarding Prime Minister Benjamin Netanyahu’s pressure. Current Editor Yaakov Katz was the former senior policy advisor to Naftali Bennett, the former Prime Minister and head of the far-right political party, “New Right.”

      In review, The Jerusalem Post covers Israeli and regional news with strongly emotionally loaded language with right-leaning bias with articles such as this “Country’s founding Labor party survives near extinction” and “Netanyahu slams settler leader for insulting Trump.” . . . During the 2023 Israel-Hamas conflict, the majority of stories favored the Israeli government, such as this Netanyahu to Hezbollah: If you attack, we’ll turn Beirut into Gaza. In general, the Jerusalem Post holds right-leaning editorial biases and is usually factual in reporting.

      They literally mention their bias over and over. Center-right is consistent with how they're rated everywhere. Allsides rates them center with the note that the community thinks they lean right. Wikipedia rates them as centre-right/conservative. Your "VEEEERY slightly" bit is pure fabrication. They specifically note that they're a highly biased source on the conflict in Gaza.

  • I'm frankly rather concerned about the idea of crowdsourcing or voting on "reliability", because - let's be honest here - Lemmy's population can have highly skewed perspectives on what constitutes "accurate", "unbiased", or "reliable" reporting of events. I'm concerned that opening this to influence by users' preconceived notions would result in a reinforced echo chamber, where only sources which already agree with their perspectives are listed as "accurate". It'd effectively turn this into a bias bot rather than a bias fact checking bot.

    Aggregating from a number of rigorous, widely-accepted, and outside sources would seem to be a more suitable solution, although I can't comment on how much programming it would take to produce an aggregate result. Perhaps just briefly listing results from a number of fact checkers?
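
On the "briefly listing results" idea: the programming involved would likely be modest. A minimal sketch, assuming placeholder rater names and labels rather than any real fact-checker API, might render one compact line per outlet instead of computing a single combined number:

```python
# Hypothetical sketch: show each rater's verdict side by side rather than
# collapsing them into one aggregate. Rater names/labels are placeholders.

def summary_line(outlet, ratings):
    """One compact line listing each rater's verdict, semicolon-separated."""
    parts = "; ".join(f"{rater}: {label}"
                      for rater, label in sorted(ratings.items()))
    return f"{outlet} ({parts})"

print(summary_line("Example Post", {"mbfc": "center-right",
                                    "allsides": "center"}))
```

Listing raters individually sidesteps the question of how to weight them against each other, while still keeping the bot's comment short.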

  • Ban it and all bots honestly. I hate seeing a comment on a thread just to find out it's a bot. If bot use like this continues, we might see a fresh post with 6 new comments, all of them bots that don't add to the discussion.

  • The bot has no purpose. Either an article can be posted or it can't; there's no reason for the bot prompt. It just looks like thought policing using a bias checker which 'coincidentally' prefers whatever the current Democratic position is.

    I can hardly imagine the bot stopping any fake news from being posted either.

  • Credibility isn't subjective. It should be a hard value.

    Orientation is indeed subjective and, unless in the extremes, should (imo) not be defined.

  • Bias ratings will always be biased. So an aggregate, or multiple sources briefly cited in a single small post, would work best.

  • Not directly related to the MBFC bot, but what's your opinion on other moderation ideas to improve the nature of the discussion? Something Awful forums have strawmanning as a bannable offense. If someone says X, and you say they said Y which is clearly different from X, you can get a temp ban. It works well enough that they charge a not-tiny amount of money to participate, and they've had a thriving community for longer than most existing social media has been alive. They're absolutely ruthless about it: someone who's being tricksy or pointlessly hostile with their argumentation style simply isn't allowed to participate.

    I'm not trying to make more work for the moderators. I recognize that side of it... the whole:

    This bot was introduced because modding can be pretty tough work at times and we are all just volunteers with regular lives. It has been helpful and we would like to keep it around in one form or another.

    ... makes perfect sense to me. I get the idea of mass-banning sources to get rid of a certain type of bad faith post, and doing it with automation so that it doesn't create more work for the moderators. But to me, things like:

    • Blatant strawmanning
    • Saying something very specific and factual (e.g. food inflation is 200%) and then making no effort to back it up, just, that's some shit that came into my head and so I felt like saying it and now that I've cluttered up the discussion with it byeeeeee

    ... create a lot more unpleasantness than just simple rudeness, or posting something from rt.com or whatever so-blatant-that-MBFC-is-useful type propaganda.

    • It’s tricky because we could probably make 100 rules if we wanted to define every specific type of violation. But a lot of what you’re talking about could fall under Rules 1 and 8, which deal with civility and misinformation. If people are engaging in bad faith, feel free to report them and we’ll investigate.

      • Hm

        I can try it -- I generally don't do reports; I actually don't even know if reports from mbin will go over properly to Lemmy.

        For me it's more of a vibe than a set of 100 specific rules. The moderation on political Lemmy feels to me like "you have to be nice to people, but you can argue maliciously or be dishonest if you want, that's all good." Maybe I am wrong in that though. I would definitely prefer that the vibe be "you can be kind of a jerk, but you need to be honest about where you're coming from and argue in good faith, and we'll be vigorous about keeping you out if you're not." But maybe it's fair to ask that I try to file some reports under that philosophy before I assume that they wouldn't be acted on.

    • Some of what you describe is likely against our community rules. We do not allow trolling, and we do not allow misinformation. We tend to err on the side of allowing speech when it is unclear, but repeat offenders are banned.

      When you see these behaviors, please make a report so that we can review it. We cannot possibly see everything.
