A number of Stack Overflow moderators have declared a “general moderation strike” in protest at being instructed not to remove AI-generated content “outside of exceedingly narrow circumstances.”
The two key issues are first, that the moderators feel unable to perform their quality-checking role effectively because of the new policy; and second, that the policy has been forced upon them without proper consultation.
“Moderators are no longer allowed to remove AI-generated answers on the basis of being AI-generated, outside of exceedingly narrow circumstances. This results in effectively permitting nearly all AI-generated answers to be freely posted, regardless of established community consensus on such content,” the striking mods said in an open letter.
“We deeply believe in the core mission of the Stack Exchange network: to provide a repository of high-quality information in the form of questions and answers, and the recent actions taken by Stack Overflow, Inc. are directly harmful to that goal,” they added.
The site is still running, but the volume of spam and unflagged content is likely to rise, particularly since a spam-detection tool called SmokeDetector, developed for Stack Overflow by a volunteer network called Charcoal, has also stopped running. “Attention: Charcoal is participating in the network-wide strike, so will remain shut down until this AI policy is dropped,” said a notice on a site discussion thread.
DevClass spoke to Norway-based moderator Zoe who is among the signatories. “There’s so far 15/24 active mods on SO (if I’ve counted correctly) that are officially on strike, and the rest just don’t have a chance of keeping up with flag volumes,” she told us.
The background is that Stack Overflow appears to be struggling to make sense of the impact of AI tools like GitHub Copilot and ChatGPT on its question and answer site. Traffic has fallen. The company’s initial reaction was to ban the use of ChatGPT to post Stack Overflow answers via a “temporary policy”, to the approval of the community, but an official staff post also states that “we’ve decided no network-wide, general policy regarding banning ChatGPT, or other AI generated content, is necessary or helpful at this time.” There is a distinction between Stack Overflow, the developer site, and Stack Exchange, a family of sites covering a wide range of different topics, though Stack Overflow is much the busiest.
Last week CEO Prashanth Chandrasekar posted positively about the role of AI in Stack Overflow, writing that “the rise of GenAI is a big opportunity for Stack. Approximately 10% of our company is working on features and applications leveraging GenAI that have the potential to increase engagement.” Chandrasekar wrote of having “AI and community at the center,” but the actions of the moderators suggest that the community aspect is currently in doubt.
The trigger for the current crisis was an instruction issued on Monday last week (a public holiday) to Stack Overflow moderators in an official but private forum. “Moderators were informed, via pinned chat messages in various moderator rooms (not a normal method), to view a post in the Moderator Team that instructed all moderators to stop using AI detectors (as outlined above) in taking moderation actions,” said a post. The details of the instruction are not public. VP of Community Philippe Beaudette posted that “AI-generated content is not being properly identified across the network,” that “the potential for false positives is very high,” and that “internal evidence strongly suggests that the overapplication of suspensions for AI-generated content may be turning away a large number of legitimate contributors to the site.” He said moderators had been asked to “apply a very strict standard of evidence to determining whether a post is AI-authored when deciding to suspend a user.” However, the moderators claim that a description of the policy posted by Beaudette “differs greatly from the Teams guidance … which we’re not allowed to publicly share.”
Zoe told us “the public version glosses over the part where mods were told to stop suspending over suspected GPT content, and various other details.” She also said the unreliability of automatic AI detectors is well-known and accepted, but that “most of our detection systems do not rely on AI detectors, precisely because they’re unreliable.” She told us that “the bit jumping from ‘GPT detectors have a bias’ to ‘a significant amount of suspensions are incorrect, so we’re banning effectively all ways of detecting AI content’, while only showing evidence of the detector bias and not suspension bias/false positive rates, has been challenged heavily.”
In practice, she explained, ChatGPT policy enforcement has been guided by patterns of user behaviour combined with other techniques. “In a large number of extensive discussions with the company, we’ve been unable to get any data from them supporting that false positive rates of detectors have had any impact on suspensions,” she said.
The moderators are volunteers, concerned about the quality of content rather than the level of traffic to the site, as “traffic is generally something that’s considered to be the company’s problem,” said Zoe.
The company was planning a second policy change which is not public, but which according to the moderators “has the potential to facilitate unprecedented levels of abuse.” This second change has been “delayed indefinitely,” possibly as a result of moderator pushback.
The problem is complex, but it is worth noting that the policy banning ChatGPT has been heavily upvoted by the community (+3677 at the time of writing), suggesting that the most active Stack Overflow members are wary of AI-generated answers.
In a statement sent to DevClass, Stack Overflow’s Beaudette told us:
“A small number of moderators (11%) across the Stack Overflow network have stopped engaging in several activities, including moderating content. The primary reason for this action is dissatisfaction with our position on detection tools regarding AI-generated content.
“Stack Overflow ran an analysis and the ChatGPT detection tools that moderators were previously using have an alarmingly high rate of false positives. Usage of these tools correlated to a dramatic upswing in suspensions of users with little or no prior content contributions; people with original questions and answers were summarily suspended from participating on the platform. These unnecessary suspensions and their outsize impact on new users run counter to our mission and have a negative impact on our community.
“We stand by our decision to require that moderators stop using the tools previously used. We will continue to look for alternatives and are committed to rapid testing of those tools.
“Our moderators have served this community for many years, and we appreciate their collective decades of service. We are confident that we will find a path forward. We regret that actions have progressed to this point, and the Community Management team is evaluating the current situation as we work hard to stabilize things in the short term,” he added.