I made a robot moderator. It models trust flow through a network built from voting patterns, and detects people and posts/comments that accumulate a large amount of “negative trust,” so to speak.
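For the curious, here’s a simplified sketch of the general idea (illustrative names only, not the production code): trust starts at a set of seed accounts and flows along votes, so upvotes pass a share of the voter’s trust to the author and downvotes pass negative trust, PageRank-style.

```python
from collections import defaultdict

def propagate_trust(votes, seed_users, iterations=20, damping=0.85):
    """votes: list of (voter, author, sign) tuples with sign = +1 or -1.
    seed_users: accounts assumed trustworthy, anchoring the computation."""
    trust = defaultdict(float)
    for u in seed_users:
        trust[u] = 1.0

    # Each voter's trust is split evenly across all of their votes.
    out_degree = defaultdict(int)
    for voter, _, _ in votes:
        out_degree[voter] += 1

    for _ in range(iterations):
        new_trust = defaultdict(float)
        for u in seed_users:
            new_trust[u] += 1.0 - damping  # keep the seeds anchored
        for voter, author, sign in votes:
            # Votes from distrusted accounts carry no weight.
            share = max(trust[voter], 0.0) / out_degree[voter]
            new_trust[author] += damping * sign * share
        trust = new_trust

    return trust  # strongly negative scores -> candidates for review
```

The real thing has more moving parts, but that’s the shape of it: people whose content keeps drawing downvotes from trusted users sink into negative territory.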

In its current form, it’s supposed to run autonomously. In practice, I have to step in and fix the occasional boo-boo, but that doesn’t happen very often.

I think it’s working well enough at this point that I’d like to experiment with a mode where it acts as an assistant to an existing moderation team, instead of taking its own actions. I’m thinking of having it auto-report suspect comments instead of autonomously deleting them. There are other modes that might be useful, but that seems like a good place to start. Is anyone interested in trying the experiment in one of your communities? I’m pretty confident at this point that it can ease moderation load without causing many problems.
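Concretely, the change is just which action fires once something trips the threshold. A sketch of the two modes (the client object and method names here are placeholders, not any real library’s API):

```python
from enum import Enum

class Mode(Enum):
    AUTONOMOUS = "autonomous"  # current behavior: act directly
    ASSISTANT = "assistant"    # proposed: report for human review

def handle_suspect_comment(client, comment_id, score, mode):
    """client is a stand-in for whatever Lemmy API wrapper is in use."""
    reason = f"auto-flagged: negative trust score {score:.2f}"
    if mode is Mode.AUTONOMOUS:
        client.remove_comment(comment_id, reason=reason)
    else:
        client.report_comment(comment_id, reason=reason)
```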

!santabot@slrpnk.net

  • alcoholicorn@lemmy.ml · 2 months ago

    Maybe, but conservatism is considered much more acceptable among Americans than anything left of liberalism, particularly now that the Dems are trying to reach out to conservatives with policies like closing the border, “tough on crime” rhetoric, unlimited support for Israel, etc. You can check this by seeing whether you’ve been banned from PleasantPolitics.

    “legally” and “Illegal”

    Adding “this should be done according to the law” doesn’t divorce an action from its morality.

    Rounding up millions of immigrants, some of whom have been here for decades, and nearly all of whom are here because they’re fleeing the effects of the US constantly couping their governments and training/funding terrorists, is an immoral action, whether they’re legal or illegal.