Hello and welcome to Everything in Moderation's Week in Review, your in-depth guide to the policies, products, platforms and people shaping the future of online speech and the internet. It's written by me, Ben Whitelaw, and supported by members like you. Online speech and content moderation are inherently political and, increasingly, we're seeing questions of content moderation form part of political and even diplomatic storylines. This week's passing of the TikTok ban bill in the US House of Representatives is perhaps the clearest example of that. Mike and I will get into the issue in the first episode of Ctrl-Alt-Speech, due to be released on all well-moderated platforms (and some less well-moderated ones) later today. Subscribe now to be alerted. There's plenty more in today's newsletter; here's everything you need to know from the last seven days — BW
Today's edition is in partnership with the London School of Economics, which is helping professionals like you master the Digital Services Act. Want to better understand the DSA and its obligations? Interested in exploring in-depth case studies and practical strategies for compliance? Join the popular Zoom course offered by the London School of Economics (April 12-18) and learn directly from leading DSA expert Dr Martin Husovec. The course is tailored for lawyers, regulators, and trust and safety experts and will involve interactive lectures with lively discussions. EiM subscribers receive a discount of over £200 by quoting “EiM10”. Spaces are limited to 30 people so don't wait around.
Policies
New and emerging internet policy and online speech regulation

An interesting development with regards to the Digital Services Act: Chinese online shopping behemoth Shein will soon be designated as a Very Large Online Platform (VLOP) after recording more than 108m monthly users across EU member states, far surpassing the 45m threshold for VLOPs. The EU said a "timetable [for designation] cannot be indicated" but, once designated, Shein would have to trace sellers on the platform, add methods for users to flag content and trace counterfeit goods.

Broader context: Shein would join AliExpress and Amazon as other marketplaces regulated under the DSA but, frankly, looks like any other shopping website. It's a reminder — to me, if perhaps not some politicians in Europe and elsewhere — that platform regulation isn't just about social media. That said, we're still waiting to hear about investigations into TikTok and X/Twitter (EiM #236).

Meanwhile, Temu — the Chinese marketplace which said in September that it hadn't passed the VLOP threshold — has been asked to submit additional information by the Irish DSA co-ordinator, Coimisiún na Meán, after rumours of a notable increase in users, according to Euronews. The DSA has had a rocky start (EiM #235) but there's an argument here that this is the European Commission apparatus working as it should to ensure a large Chinese company is playing fairly. Ring any bells?

Also in this section...
- New study unveils strategies to combat disinformation wars on social media (Phys)
- Brazil opens public consultation on digital platforms (Mattos Filho)
Products
Features, functionality and technology shaping online speech

A new report has predicted a huge rise in the market for T&S software as a result of regulation, high-profile news events and the rise of artificial intelligence, which can (in theory) better scale responses to online harms. The Trust and Safety Market Research Report, produced by Duco, expects the target addressable market of safety tech and software to rise from $7.4B to $15.4B by 2028, growing much faster than T&S services, which currently form the majority of the market.

Talking of which, four gaming technology providers have banded together under a "shared commitment to improving player and moderator well-being". Technology companies Modulate, Keywords Studios and ActiveFence, together with non-profit Take This, announced the Gaming Safety Coalition this week and launched a white paper which, among other recommendations, called for reducing dependency on user reports (which, not coincidentally, some of their tools already do).

Wider view: I've noticed trust and safety technology vendors working more closely together over the past year as the safety technology space becomes more competitive and platforms look to procure tools that fulfil more tasks with fewer people, something the Duco report also noted.

Also in this section...
- DoorDash’s new AI-powered ‘SafeChat+’ tool automatically detects verbal abuse (TechCrunch)
- The risks of expanding the definition of ‘AI safety’ (Semafor)
Become an EiM member Everything in Moderation is your guide to understanding how online speech and content moderation is changing the world.
Between the weekly digest, regular perspectives, occasional explorations and T&S Insider newsletter (out every Monday), I try to help people like you working in online safety and content moderation stay ahead of threats and risks by keeping you up-to-date about what is happening in the space.
Becoming a member helps me connect you to the ideas and people you need in your work making the web a safer, better place for everyone.

Platforms
Social networks and the application of content guidelines

Following the recent appointment of new heads of T&S at Bluesky and TikTok (EiM #237), this week brought another notable hire: Yoel Roth at Match Group. Roth joins the dating giant 16 months after leaving X/Twitter and having done what he describes in this Wired interview as "a bunch of research on federated and decentralized social media." He will work with the respective heads of T&S within Match Group brands, including Hinge and Tinder, and had some interesting things to say about the role of app stores.

On the topic of Bluesky, the decentralised platform this week open-sourced Ozone, its web interface for moderating content, as part of its effort to give "communities power to create their own spaces, with their own norms and preferences". The tool allows individuals and teams to review content on the platform and even run a mod service that others can subscribe to with a click. By this time next year, I predict that someone will have started a moderation layer and will have enough subscribers to be making a living from it.

Trust & Safety is dead. Long live Safety. That's the new view of Roth's previous employer X/Twitter, which this week unveiled a new, shorter name for its safety function. It committed to, if you can believe it, work "tirelessly to create a secure environment for all users".

Also in this section...
- How two smart people fell for a classic Facebook scam (Washington Post)
- Bringing Policies to Life (Discord)
People
Those impacting the future of online safety and moderation

SXSW took place this week and I was glad to see a number of sessions on online speech and content moderation (including this stellar one). None, predictably, got as much coverage as Meghan Markle's star-studded panel discussion. Alongside actress Brooke Shields and journalist Katie Couric, the Duchess of Sussex touched on the nature of online abuse and criticised social media platforms for not doing enough to address hatred: "As we look at what's happening in social media, there's so much work done to keep people safe".

Markle also said it was "disturbing" that "much of the hate is women completely spewing that to other women", which suggests her experience as a global celebrity is vastly different from that of most women. Research going as far back as 2017 notes that men are the most frequent perpetrators of online hate and that many women know the man targeting them. The proportion of women who are sent explicit images also indicates that it is men who are the issue.

Posts of note
Handpicked posts that caught my eye this week

- "A lot of laws have been considered and a fair number have passed, but relatively few of the US laws seem likely to be upheld by the courts." - Tech policy analyst Tim Bernard on his new Stanford white paper on legislative approaches to combatting harms to children
- "Deepfakes are covered under Australia’s image-based abuse scheme, which does not use copyright or privacy law as a basis for non-consensually shared intimate image removal" - Australian eSafety Commissioner Julie Inman Grant on the trend of victims of deepfake videos
- "Now we need Ofcom to drive meaningful change. The Online Safety Act has given them the powers, but the Illegal Harms Consultation shows their approach is incredibly cautious, with recommendations falling far short of best practice today, rather than raising the bar." - Ian Stevenson, Cyacomb CEO, urges the UK regulator to proceed at pace
Every week, Everything in Moderation brings together the need-to-read articles, analyses and research about content moderation to give you a broader, more global perspective on online speech and internet safety. If you're a regular reader, consider becoming a member for less than $2 a week to get access to more than 200 posts in the EiM archive. If you can't afford to become a member at this time, not to worry — I'm grateful to have you as a subscriber of EiM. Thanks for reading - BW