Hello and welcome to Everything in Moderation's Week in Review, your in-depth guide to the policies, products, platforms and people shaping the future of online speech and the internet. It's written by me, Ben Whitelaw and supported by members like you.

Last week's news (EiM #237) that Alice Hunsberger will be writing weekly for EiM got some of you almost as excited as me. Her first newsletter proved to be a real talking point and I'm glad to welcome some fellow admirers of Alice's work as subscribers of EiM.

This week, I have another exciting announcement: I'm launching a weekly news podcast with the brilliant Mike Masnick of Techdirt. Ctrl-Alt-Speech will bring you the latest news in content moderation and internet regulation from both sides of the Atlantic and provide context and analysis to help you understand the difficult trade-offs at play. We're hoping it will become a must-listen for Trust and Safety professionals and anyone interested in who gets to speak online (which should be everyone). You can find a teaser episode via ctrlaltspeech.com or on your favourite podcast app.

Mike is renowned for his protocols work and is "something of a Silicon Valley oracle" (according to the New York Times, no less). I'm privileged to be working with him on this podcast. I'll be including regular reminders here in EiM but you can also subscribe to Ctrl-Alt-Speech wherever you get your podcasts.

Next week will see a return to the shorter intros of yore, I promise. For now, here's your weekly round-up — BW
Today's edition is in partnership with All Things in Moderation, the annual conference for humans who moderate.

If you moderate, or if you care about building safe, thriving digital spaces, All Things in Moderation is for you.
Taking place online on May 16-17, ATIM is a must-attend event for moderators, community managers, researchers, policy-makers and technologists who want to learn from each other, network and collaborate. Sessions include 'How to Support your Volunteer Moderators' and 'The Impact of Human and AI Content Moderation during the Israel-Palestine crisis' and much, much more...
Policies

New and emerging internet policy and online speech regulation

More than 20 human rights and journalists' groups have called for social media platforms in Turkey to withstand government pressure to take down critical content ahead of the municipal elections at the end of the month. It comes after a Turkish court instructed X/Twitter and Meta to take down dozens of posts by government critics, which they proceeded to do to avoid sanctions, despite admitting that the content did not violate their guidelines.

Perspective: Twitter has long been a target for overreaching state leaders but its compliance with requests has jumped significantly since Elon Musk took over (EiM #199). Prior to that, it had built a reputation for publishing takedown data publicly and fighting governments in court but those days — when former legal, policy and trust lead Vijaya Gadde (#159) was at the helm — feel like a long time ago now.

Oh and belated happy Digital Markets Act implementation day to those that celebrated on Thursday. I noted in last week's newsletter how speech rules are made harder to implement by the size of companies (People, EiM #237), so the DMA — which attempts to make it easier for smaller companies to access the users of the big tech companies — is very interesting in this regard. Tech Policy Press explains more and there's a handy trend piece from the New York Times too.

Also in this section...
- Political deepfakes are spreading like wildfire thanks to GenAI (TechCrunch)
- We Can’t Have Serious Discussions About Section 230 If People Keep Misrepresenting It (Techdirt)
- Online Age Verification Laws Are a Bet Worth Making (Newsweek)
Become an EiM member

Let me get right to it: Becoming an EiM member gives you access to 250+ past editions and helps me expand and improve the weekly newsletter for you and thousands of people fascinated by the difficult trade-offs in online speech and content moderation.
For a few dollars a week, you can support a curated and independent source of news and analysis.
Become an EiM member today. Thanks in advance for your support — BW

Products

Features, functionality and technology shaping online speech

An open letter signed by over 100 leading researchers, journalists and advocates has called for policy changes at AI companies that would make safety research into their models easier to conduct. The lack of an exemption for "independent good faith research", the letter states, leaves "researchers at risk of account suspension or even legal reprisal." It comes after several AI companies reportedly suspended accounts and changed their terms of service to deter academics from conducting research. An accompanying paper sets out a case for "a legal and technical safe harbor, indemnifying public interest safety research". Not sure about you but it feels like we're a long way from the AI research pause called for almost 12 months ago.

A brilliant investigation into algorithmic suppression of Palestinian speech on Instagram has found evidence of deleted captions, missing photos on hashtags and unexplainable errors. Investigative outlet The Markup conducted a series of tests with newly created accounts and hashtags and found that nongraphic photos of the Israel/Palestine conflict were 8.5 times more likely than other posts to be hidden. Many posts were also classed as spam, meaning that users were not given the option to request a review. Where there was a means to do so, the reporting function did not work as expected (h/t Steve B).

I've covered accusations of Palestinian speech suppression going back years (EiM #168 and others) and this is further fuel to the idea that marginalised groups lose out when it comes to moderation at scale. The Markup also published a guide to algorithmic moderation and how to appeal if you have been shadowbanned. My read of the week.

Also in this section...
- Preparing for the global elections cycle of 2024 and beyond (Synthesia)
- Announcing Microsoft’s open automation framework to red team generative AI Systems (Microsoft)
Platforms

Social networks and the application of content guidelines

While we're on the topic of Israel/Palestine, Sidechat — the US campus chat app where "college students across the country sound off on just about everything" — has seen a rise in antisemitic posts. But, despite being urged to release more detailed guidelines, the company has failed to address the issue, risking it going the same way as the once-popular Yik Yak, which it merged with last year.

While some social media platforms face censure from the European Commission (EiM #235), others are being more strategic about their designation as a Very Large Online Platform (VLOP) under the Digital Services Act. Snap hosted a child safety roundtable in Brussels last week and explained that its team "took away even more ideas and insights to further simplify and improve our support experience, communicate in more teen-friendly language in-app, and consider certain opt-in features for older teens and young adults."

Finally in this section, a fascinating story from Colombia about the role of Tinder and Hinge in a spate of tourist robberies and deaths. Rest of World reports that parent company Match Group sent representatives to Medellín, the country's second-largest city, to meet with the US Embassy, the FBI and local authorities to discuss ways of mitigating attacks on foreigners. A reminder, as if we needed one, that Tinder swindling has not gone away.

Also in this section...
- Inside the World of AI TikTok Spammers (404 Media)
- How the porn bots took over Twitter (WUFT)
- News Organizations Are Leaving Twitter. What About You? (Nieman Reports)
People

Those impacting the future of online safety and moderation

We've seen writers, playwrights and filmmakers turn to content moderation as a topic that merits artistic interrogation (EiM #161). But that doesn't mean content moderation is very forgiving of artistic freedom. That's the case made by Emma Shapiro, editor-at-large of the international project Don't Delete Art, in this op-ed. She notes how artists online often struggle to interpret platform policies and regularly find themselves suppressed or shadowbanned. It's worse, as always, for marginalised groups and those under governments that want to quell citizen speech. Since its founding in 2020, Don't Delete Art has been building a database of platform takedowns and has drawn attention to what happens when companies censor art. A project worth following.

Posts of note

Handpicked posts that caught my eye this week

- "I am incredibly excited by this opportunity to work with the new commission across all its areas of responsibility, and in particular to lead our EU-level coordination." - Ofcom's Head of International Content Policy Maria Donde on her new role across the Irish Sea.
- "Content moderation is hard: my post on antisemitism was taken down by LinkedIn, put back up, taken down again, and is now back up again" - Law professor Michael Geist finds out the hard way.
- "How have your experiences on social media (including LinkedIn) changed over the last year? Take a look and see whether your experience matches others in our research" - Integrity Institute fellow Matt Motyl shares an interesting piece of longitudinal research users' experience of platform.
That's it for this week. If you enjoyed today's edition, give it a thumbs up and consider supporting Everything in Moderation by becoming a member for less than $2 a week. If you can't do that in these straitened economic times, forward this edition to someone like you who cares about the future of the web. Thanks for reading - BW