Mercy craned forward, took a deep breath and loaded another task on her computer. One after another, disturbing images and videos appeared on her screen. As a Meta content moderator working at an outsourced office in Nairobi, Mercy was expected to action one “ticket” every 55 seconds during her 10-hour shift. This particular video was of a fatal car crash. Someone had filmed the scene and uploaded it to Facebook, where it had been flagged by a user. Mercy’s job was to determine whether it had breached any of the company’s guidelines that prohibit particularly violent or graphic content. She looked closer at the video as the person filming zoomed in on the crash. She began to recognise one of the faces on the screen just before it snapped into focus: the victim was her grandfather.

New tickets appeared on the screen: her grandfather again, the same crash over and over. Not only the same video shared by others, but new videos from different angles. Pictures of the car; pictures of the dead; descriptions of the scene. She began to recognise everything now. Her neighbourhood, around sunset, only a couple of hours ago – a familiar street she had walked along many times. Four people had died. Her shift seemed endless.

We spoke with dozens of workers just like Mercy at three data annotation and content moderation centres run by one company across Kenya and Uganda. Content moderators are the workers who trawl, manually, through social media posts to remove toxic content and flag violations of the company’s policies. Data annotators label data with relevant tags to make it legible for use by computer algorithms. Behind the scenes, these two types of “data work” make our digital lives possible. Mercy’s story was a particularly upsetting case, but by no means extraordinary. The demands of the job are intense.
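
To make the annotation side of this work concrete: each completed task typically produces a small structured record pairing the raw item with the labels the client asked for. The sketch below is purely illustrative and assumes a JSON-style format of my own devising (the article doesn’t describe a specific schema); the stop-sign polygon mirrors the kind of task mentioned further down the thread.

```python
# Illustrative only: an assumed JSON-style record for one image-annotation task.
# Field names and structure are invented for this sketch; real client
# specifications vary and are not described in the article.
import json

annotation = {
    "image": "street_frame_000123.jpg",   # the item the worker is shown
    "label": "stop_sign",                 # the tag the worker assigns to the object
    "polygon": [                          # pixel vertices drawn around the object
        [312, 88], [355, 88], [372, 120], [372, 164],
        [355, 196], [312, 196], [295, 164], [295, 120],
    ],
    "seconds_spent": 48,                  # tasks are timed against productivity targets
}

print(json.dumps(annotation, indent=2))
```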

  • Flying Squid@lemmy.world · 5 months ago

    I honestly don’t know the solution to this beyond, obviously, paying these people more. Because if you don’t train an AI to go through this sick shit and get rid of it, you need to get people to do it. The only other option is to just leave it up, and I don’t think that’s a good option.

    AI sucks for a lot of applications, and I’m sure it won’t be effective enough in this case to take humans out of the process entirely, but if we don’t try to get humans out of this process, we’re forcing people to look at murders and child porn to get rid of it.

    • LadyAutumn@lemmy.blahaj.zone · 5 months ago

      Yeah. I think one fantastic step is not to outsource this to people in countries where they can be paid several orders of magnitude less than the US minimum wage (which is already pitifully low).

      I also feel like this can’t be someone’s full-time job. You just can’t do this full time. People who do content moderation should be rotated on and off checking content. They also shouldn’t have KPI metrics, and they should have enough time to process after seeing one of these things. The whole thing is criminally inhumane.

      And yeah, idk why AI can’t auto-remove video/image content, with human review only if it’s appealed.

      • KevonLooney@lemm.ee · 5 months ago

        A real solution is to require every single employee in the company to review these videos equally. If the Executive Leadership Team experiences this once, the issue will be fixed. Probably by banning repeat problem users.

  • AutoTL;DR@lemmings.world · 5 months ago

    This is the best summary I could come up with:


    Workers at one of the content moderation centres we visited were left crying and shaking after witnessing beheading videos, and were told by management that at some point during the week they could have a 30-minute break to see a “wellness counsellor” – a colleague who had no formal training as a psychologist.

    Workers who ran away from their desks in response to what they’d seen were told they had committed a violation of the company’s policy because they hadn’t remembered to enter the right code on their computer indicating they were either “idle” or on a “bathroom break” – meaning their productivity scores could be marked down accordingly.

    Their employer, a prominent business process outsourcing (BPO) company contracted by Meta, had headquarters in San Francisco and delivery centres in east Africa where insecure and low-income work could be distributed to local employees of the firm.

    Like Anita, workers are trained to identify elements of the image in response to client specifications: they may, for example, draw polygons around different objects, from traffic lights to stop signs and human faces.

    As tech commentator Phil Jones puts it: “In reality, the magic of machine learning is the grind of data labelling.” This is where the really time-consuming and laborious work takes place.

    They present a vision of shining, sleek, autonomous machines – computers searching through large quantities of data, teaching themselves as they go – rather than the reality of the poorly paid and gruelling human labour that both trains them and is managed by them.


    The original article contains 2,512 words, the summary contains 257 words. Saved 90%. I’m a bot and I’m open source!