Abstract
To manage user-generated harmful video content, YouTube relies on AI algorithms (e.g., machine learning) in content moderation and follows a retributive justice logic to punish convicted YouTubers through demonetization, a penalty that limits or removes the advertisements (ads) on their videos, reducing their future ad income. Moderation research is burgeoning in CSCW, but relatively little attention has been paid to the socioeconomic implications of YouTube's algorithmic moderation. Drawing on the lens of algorithmic labor, we describe how algorithmic moderation shapes YouTubers' labor conditions through algorithmic opacity and precarity. YouTubers coped with these challenges by sharing and applying practical knowledge they had learned about the moderation algorithms. By analyzing video content creation as algorithmic labor, we unpack the socioeconomic implications of algorithmic moderation and point to post-punishment support as a necessary form of restorative justice. Lastly, we put forward design considerations for algorithmic moderation systems.
| Original language | English (US) |
|---|---|
| Article number | 429 |
| Journal | Proceedings of the ACM on Human-Computer Interaction |
| Volume | 5 |
| Issue number | CSCW2 |
| DOIs | |
| State | Published - Oct 18, 2021 |
All Science Journal Classification (ASJC) codes
- Social Sciences (miscellaneous)
- Human-Computer Interaction
- Computer Networks and Communications