
Wikipedia:Village pump (proposals)

From Wikipedia, the free encyclopedia

The proposals section of the village pump is used to offer specific changes for discussion.

Discussions are automatically archived after remaining inactive for nine days.

RfC: Extended confirmed pending changes (PCECP)


The following discussion is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.





Should a new pending changes protection level - extended confirmed pending changes (hereby abbreviated as PCECP) - be added to Wikipedia? Awesome Aasim 19:58, 5 November 2024 (UTC)[reply]

Background


WP:ARBECR (from my understanding) encourages liberal use of EC protection in topic areas authorized by the community or the arbitration committee. However, some administrators refuse to protect pages unless there is recent disruption. Extended confirmed pending changes would allow non-XCON users to propose changes to be approved by someone extended confirmed, and it could be applied preemptively to these topic areas.

It is assumed that it is technically possible to have PCECP. That is, we can have PCECP as "[auto-accept=extended confirmed users] [review=extended confirmed users]". Right now it might not be possible to have extended confirmed users review pending changes under this protection with the current iteration of FlaggedRevs, but maybe in the future.
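For illustration only, here is a minimal sketch of what such a level might look like in FlaggedRevs-style configuration, assuming the extension could support it. The variable and group names follow the current enwiki configuration quoted in the discussion section below; as noted there, the extension currently bundles review rights, so a global grant like this would also let extended confirmed users review ordinary pending changes, which is part of why this is assumed rather than confirmed to be possible.

// Hypothetical sketch only; not an existing configuration, and FlaggedRevs may not
// support a separate reviewer pool per protection level.
// Add an extended confirmed restriction level alongside the existing one:
$wgFlaggedRevsRestrictionLevels = [ 'autoconfirmed', 'extendedconfirmed' ];
// Let extended confirmed users auto-accept their own edits on PCECP pages...
$wgGroupPermissions['extendedconfirmed']['autoreview'] = true;
// ...and review (accept or reject) pending edits by non-extended-confirmed users.
$wgGroupPermissions['extendedconfirmed']['review'] = true;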

Survey (PCECP)


Support (PCECP)

  • Support for multiple reasons: WP:ARBECR only applies to contentious topics. Correcting typos is not a contentious topic. Second, WP:ARBECR encourages the use of pending changes when protection is not used. Third, pending changes effectively serves to allow uncontroversial edit requests without needing to create a new talk page discussion. And lastly, this is in line with our protection policy, which states that protection should not be applied preemptively in most cases. Awesome Aasim 19:58, 5 November 2024 (UTC)[reply]
  • Support (per... nom?) PC is the superior form of uncontroversial edit requests. Aaron Liu (talk) 20:09, 5 November 2024 (UTC)[reply]
    It's better than EC, which already restricts our being the free encyclopedia even more. As I've said below, the VisualEditor allows much more editing from new people than edit requesting, which forces people to use the source editor. Aaron Liu (talk) 03:52, 6 November 2024 (UTC)[reply]
    This is not somehow less or more restrictive than ECR. It's exactly the same level of protection, just implemented in a different way. I do not get the !votes from either side claiming that this will mean either more restriction or more bureaucracy. I understand neither, and urge them to explain their rationales. Aaron Liu (talk) 12:32, 12 November 2024 (UTC)[reply]
    By creating a difference between what non logged-in readers (that is, the vast majority of them) see versus logged-in users, there is an extra layer of difficulty for non-confirmed and non-autoconfirmed editors, who won't see the actual page they're editing until they start the editing process. Confirmed and autoconfirmed editors may also be confused that their edits are not being seen by non-logged in readers. Because pending changes are already submitted into the linear history of the article, unwinding a rejected edit is potentially more complicated than applying successive edit requests made on the talk page. (This isn't a significant issue when there aren't many pending changes queued, which is part of the reason why one of the recommended criteria for applying pending changes protection is that the page be infrequently edited.) For better or worse, there is no deadline to process edit requests, which helps mitigate issues with merging multiple requests, but there is pressure to deal with all pending changes expediently, to reduce complications in editing. isaacl (talk) 19:54, 12 November 2024 (UTC)[reply]
    Do you think this would be fixed with "branching" (similar to GitHub branches)? In other words, instead of PC presenting the latest edit, PC would present the stable revision for editing, and when "Publish changes" is clicked it would do something like put the revision in a separate namespace (something like Review:PAGENAME/#######, where ####### is the revision ID). If the edit is accepted, that page is merged and the review deleted. If the edit is rejected, the review is deleted, but it can always be restored by a Pending Changes Reviewer or administrator. Awesome Aasim 21:01, 12 November 2024 (UTC)[reply]
    Technically, that would take quite a bit to implement. Aaron Liu (talk) 23:18, 12 November 2024 (UTC)[reply]
    There are a lot of programmers who struggle with branching; I'm not certain it's a great idea to make it an integral part of Wikipedia editing, at least not in a hidden, implicit manner. If an edit to an article always proceeded from the last reviewed version, editors wouldn't be able to build changes on top of their previous edits. I think at a minimum, an editor would have to be able to do the equivalent of creating a personal working branch. For example, this could be done by working on the change as a subpage of the user's page (or possibly somewhere else (perhaps in the Draft namespace?), using some standard naming hierarchy), and then submitting an edit request. That would be more like how git was designed to enable de-centralized collaboration: everyone works in their own repository, rebasing from a central repository (*), and asks an integrator to pull changes that they publish in their public repository.
    (*) Anyone's public repository can act as a central repository. It just has to be one that all the collaborators agree upon using, and thus agree with the decisions made by the integrator(s) merging changes into that repository. isaacl (talk) 23:22, 12 November 2024 (UTC)[reply]
    That makes sense. This has influenced me to amend my Q2 answer slightly, but I still support the existence of this protection and the preemptive PC protecting of low-traffic pages. (Plus, it's still not more restriction.) Aaron Liu (talk) 23:20, 12 November 2024 (UTC)[reply]
  • Support, functionally a more efficient form of edit requests. The volume of pending changes is still low enough for this to be dealt with, and it could encourage the pending changes reviewer right to be given to more people currently reviewing edit requests, especially in contentious topics. Chaotic Enby (talk · contribs) 20:25, 5 November 2024 (UTC)[reply]
  • Support having this as an option. I particularly value the effect it has on attribution (because the change gets directly attributed to the individual who wanted it, not to the editor who processed the edit request). WhatamIdoing (talk) 20:36, 5 November 2024 (UTC)[reply]
  • Support: better and more direct system than preemptive extended-confirmed protection followed by edit requests on the talk page. Cremastra (uc) 20:42, 5 November 2024 (UTC)[reply]
  • Support, Pending Changes has the capacity to take on this new task. PC is much better than the edit request system for both new editors and reviewers. It also removes the downsides of slapping ECP on everything within contentious topic areas. Toadspike [Talk] 20:53, 5 November 2024 (UTC)[reply]
    I've read the opposes below and completely disagree that this would lead to more gatekeeping. The current edit request system is extremely complicated and inaccessible to new users. I've been here for half a decade and I still don't really know how it works. The edit requests we do get are a tiny fraction of the edits people want to make to ECP pages but can't. PCECP would allow them to make those edits. And many (most?) edit requests are formatted in a way that they can't be accepted (not clear what change should be made, where, based on what source), a huge issue which would be entirely resolved by PCECP.
    The automatic EC protection of all pages in certain CTOPs is not the point of this proposal. Whether disruption is a prerequisite to protection is not altered by the existence of PCECP and has to be decided in another RfC at another venue, or by ArbCom. PCECP is solely about expanding accessibility to editing ECP pages for new and unregistered editors, which is certainly a positive move.
    I, too, hate the PC system at dewiki, and I appreciate that Kusma mentioned it. However, what we're looking at here is lowering protection levels and reducing barriers to editing, which is the opposite of dewiki's PC barriers. Toadspike [Talk] 10:24, 16 November 2024 (UTC)[reply]
  • Support (Summoned by bot): per above. C F A 💬 23:34, 5 November 2024 (UTC)[reply]
  • Support: Per above. PC always has a low or very low backlog, so it is completely able to absorb this change. ~/Bunnypranav:<ping> 11:26, 6 November 2024 (UTC)[reply]
  • Support: I would be happy to see it implemented. GrabUp - Talk 15:14, 6 November 2024 (UTC)[reply]
  • Support Agree with JPxG's principle that it is better to "have drama on a living project than peace on a dead one," but this is far less restrictive than preemptively setting EC protection for all WP:ARBECR pages. From a new editor's perspective, they experience a delay in the positive experience of seeing their edit implemented, but as long as pending changes reviewers are equipped to minimize this delay, then this oversight seems like a net benefit. New users will get feedback from experienced editors on how to operate in Wikipedia's toughest content areas, rather than stumbling through. ViridianPenguin 🐧 ( 💬 ) 08:57, 8 November 2024 (UTC)[reply]
  • Support * Pppery * it has begun... 05:17, 11 November 2024 (UTC)[reply]
  • Support Idk what it's like in other areas, but in mine, a lot of the edit requests that I see, maybe even most of them, are POV/not actionable/nonsense/insults, so if it is already ECR-only, then yea, more filtering is a good thing. Selfstudier (talk) 18:17, 11 November 2024 (UTC)[reply]
  • Support assuming this is technically possible (which I'm not entirely sure it is), it seems like a good idea, and would definitely make pending changes more useful in my eyes. Zippybonzo | talk | contribs (they/them) 20:00, 12 November 2024 (UTC)[reply]
  • Strong support per @JPxG:'s reasoning—I think it's wild that we're willing to close off so many articles to so many potential editors, and even incremental liberalization of editing restrictions on these articles should be welcomed. This change would substantially expand the number of potential editors by letting non-EC contributors easily suggest edits to controversial topic areas. It would be a huge win for contributions if we managed to replace most ECP locks with this new PCECP.– Closed Limelike Curves (talk) 02:07, 14 November 2024 (UTC)[reply]
  • Yes, in fact, somebody read my mind here (I was thinking about this last night, though I didn't see this VP thread...) Myrealnamm (💬Let's talk · 📜My work) 21:38, 14 November 2024 (UTC)[reply]
  • Support in principle. Edit requests are a really bad interface for new users; if discouraging people from editing is the goal, we've succeeded. Flagged revisions aren't the best, but they are better than edit request templates. Toadspike's reasoning hasn't been refuted. Right now, it seems like opposers aren't aware that the status quo for many Palestine-Israel related articles is ECP. Both Israeli cuisine and Palestinian cuisine are indefinitely under WP:ECP due to gastronationalist arguments about the politics of food in the Arab–Israeli conflict (a page not protected), so editors without 500/30 status cannot add information about falafels to Wikipedia.
    That being said, this proposal would benefit from more detail. For example, the current edit request policy requires the proposed change to be uncontroversial and puts the burden on the proposer to show that it is uncontroversial. On the other hand, the current review policy assumes a change is correct unless it's obvious vandalism or the like, which would be a big change to the edit request workflow. Likewise, what counts as WP:INVOLVED for reviewers? Right now, there's a big firewall between editors involved in content in an area like Israel-Palestine and admins using their powers in that area. Can reviewers edit in the area and use their tools? This needs to be clarified, as it seems like editing in PIA doesn't disqualify one from answering edit requests. Chess (talk) (please mention me on reply) 21:06, 18 November 2024 (UTC)[reply]

    the current review policy assumes a change is correct unless it's obvious vandalism or the like

    @Chess That's true, but reviewers are also currently expected to accept and then revert changes that pass review but that they would otherwise revert as ordinary editors. Below, Aasim clarified that reviewers should only reject edits that fail the existing PC review guidelines, plus edits made in violation of an already well-established consensus.
    As for Involved, since there's no guidance about edit request reviewers yet either, I think that should be asked in a separate RfC. Aaron Liu (talk) 21:35, 18 November 2024 (UTC)[reply]
  • Support. The number of sysops is ever decreasing and so we will need to take drastic action to ensure maintenance and vandalism prevention can keep up. Stifle (talk) 17:29, 19 November 2024 (UTC)[reply]
  • Support in principle. While I understand objections from others based on the technical downsides and design of the current Flagged Revisions extension, I support making it easier for users to suggest changes with a GUI rather than a difficult-to-understand edit request template, which creates a barrier to entry. Frostly (talk) 05:24, 26 November 2024 (UTC)[reply]
  • Support - It seems to be entirely preferable to ECR. It would be interesting if any current or former Arbcom members were to see it as more problematic. — Charles Stewart (talk) 04:12, 28 November 2024 (UTC)[reply]

Oppose (PCECP)

[edit]
  • Oppose There's a lot of history here, and I've opposed WP:FPPR/FlaggedRevs consistently since ~2011. Without reopening the old wounds over how the initial trial was implemented/ended, nothing that's happened since has changed my position. I believe that proceeding with an expansion of FlaggedRevs would be a further step away from our commitment to being the free encyclopedia that anyone can edit, without actually solving any critical problems that our existing tools aren't already handling. While the proposal includes "some administrators refuse to protect pages unless there is recent disruption" as a problem, I see that as a positive. In fact that's the entire point; protection should be preventative and there should be evidence of recent disruption. If a page is experiencing disruption, protection can handle it. If not, there's no need to limit anyone's ability to edit. The WordsmithTalk to me 03:45, 6 November 2024 (UTC)[reply]
    The Wordsmith, regarding "However, some administrators refuse to protect pages unless there is recent disruption as a problem, I see that as a positive.", for interest, I see it as a negative for a number of reasons, at least in the WP:PIA topic area, mostly because it is subjective/non-deterministic.
    • The WP:ARBECR rules have no dependency on subjective assessments of the quality of edits. Non-EC editors are only allowed to make edit requests. That is what we tell them.
      • If it is the case that non-EC editors are only allowed to make edit requests, there is no reason to leave pages unprotected.
      • If it is not the case that non-EC editors are only allowed to make edit requests, then we should not be telling them that via talk page headers and standard notification messages.
    • There appears to be a culture based on an optimistic, faith-based belief that the community can see ARBECR violations, make reliable subjective judgements based on some value system, and deal with them appropriately through action or inaction. This is inconsistent with my observations.
      • Many disruptive violations are missed when there are hundreds of thousands of revisions by tens of thousands of actors.
      • The population size of editors/admins who try to address ARBECR violations is very small, and their sampling of the space is inevitably an example of the streetlight effect.
      • The PIA topic area is largely unprotected and there are thousands of articles, templates, categories, talk pages etc. Randomness plays a large part in ARBECR enforcement for all sorts of reasons (and maybe that is good to some extent, hard to tell).
    • Wikipedia's lack of tools to effectively address ban evasion in contentious topic areas means that it is not currently possible to tell whether a revision by a non-EC registered account or IP violating WP:ARBECR that resembles an okay edit (to me personally with all of my biases and unreliable subjectivity) is the product of a helpful person or a ban evading recidivist/member of an off-site activist group exploiting a backdoor.
    Sean.hoyland (talk) 08:00, 6 November 2024 (UTC)[reply]
  • Oppose I am strongly opposed to the idea of getting yet another level of protection for the sole purpose of using it preemptively, which has never been ok and should not be ok. Just Step Sideways from this world ..... today 21:25, 6 November 2024 (UTC)[reply]
  • Oppose, I hate pending changes. Using them widely will break the wiki. We need to be as welcoming as possible to new editors, and the instant gratification of wiki editing should be there on as many pages as possible. —Kusma (talk) 21:47, 6 November 2024 (UTC)[reply]
    @Kusma Could you elaborate on "using them widely will break the wiki", especially as we currently have the stricter and less-friendly EC protection? Aaron Liu (talk) 22:28, 6 November 2024 (UTC)[reply]
    Exhibit A is dewiki's 53-day Pending Changes backlog. —Kusma (talk) 23:03, 6 November 2024 (UTC)[reply]
    We already have a similar and larger backlog at CAT:EEP. All this does is move the backlog into an interface handled by server software that allows newcomers to use VE for their "edit requests", where currently they must use the source editor due to being confined to talk pages. Aaron Liu (talk) 23:06, 6 November 2024 (UTC)[reply]
    The dewiki backlog is over 18,000 pages. CAT:EEP has 54. The brokenness of optional systems like VE should not be a factor in how we make policy. —Kusma (talk) 09:40, 7 November 2024 (UTC)[reply]
    The backlog will not be longer than the EEP backlog. (Also, I meant that EEP's top request was over 3 months ago, sorry.) Aaron Liu (talk) 12:23, 7 November 2024 (UTC)[reply]
    ... if the number of protected pages does not increase. I expect an increase in protected pages from the proposal, even if the terrifying proposal to protect large classes of articles preemptively does not pass. —Kusma (talk) 13:08, 7 November 2024 (UTC)[reply]
    Why so? Aaron Liu (talk) 13:33, 7 November 2024 (UTC)[reply]
    Most PCECP pages would be current ECP pages (downgraded?), as they have less traffic/disruption. So the increase in the number of protected pages should not be that large. ~/Bunnypranav:<ping> 13:35, 7 November 2024 (UTC)[reply]
    @Kusma Isn't the loss of the instant gratification of editing better than creating a request on the talk page of an ECP page and having no idea when it will be reviewed and implemented? ~/Bunnypranav:<ping> 11:25, 7 November 2024 (UTC)[reply]
    With PC you also do not know when or whether your edit will be implemented. —Kusma (talk) 13:03, 7 November 2024 (UTC)[reply]
  • Oppose — Feels unnecessary and will only prevent other good faith editors from editing, not to mention the community effort required to monitor and review pending changes requests given that some areas like ARBIPA apply to hundreds of thousands of pages. Ratnahastin (talk) 01:42, 7 November 2024 (UTC)[reply]
    @Ratnahastin Similar to my above question, won't this encourage more good faith editors compared to a literal block from editing of an ECP page? ~/Bunnypranav:<ping> 11:32, 7 November 2024 (UTC)[reply]
    There is a very good reason I reference Community Resources Against Street Hoodlums in my preferred name for the protection scheme, and the answer is generally no since the topic area we are primarily talking about is an ethno-political contentious topic, which tend to draw partisans interested only in "winning the war" on Wikipedia. This is not limited to just new users coming in, but also established editors who have strong opinions on the topic and who may be put into the position of reviewing these edits, as a read of any random Eastern Europe- or Palestine-Israel-focused Arbitration case would make clear just from a quick skim. —Jéské Couriano v^_^v threads critiques 18:21, 7 November 2024 (UTC)[reply]
    Aren't these problems that can also be seen to the same extent in edit requests if they exist? Aaron Liu (talk) 19:10, 7 November 2024 (UTC)[reply]
    A disruptive/frivolous edit request can be summarily removed as patently disruptive/frivolous, with no damage done and without implicating the 1RR in the area. As long as it's not vandalism and doesn't introduce BLP violations, an unhelpful edit committed to an article is constrained by the 1RR, with or without any sort of protection scheme. —Jéské Couriano v^_^v threads critiques 16:21, 8 November 2024 (UTC)[reply]
    Patently disruptive and frivolous edits are vandalism, emphasis on "patently". Aaron Liu (talk) 16:28, 8 November 2024 (UTC)[reply]
    POV-pushing is not prima facie vandalism. —Jéské Couriano v^_^v threads critiques 16:32, 8 November 2024 (UTC)[reply]
    POV-pushing isn't patently disruptive/frivolous, and it isn't any more removable in edit requests. Aaron Liu (talk) 16:45, 10 November 2024 (UTC)[reply]
    But edit requests make it harder to actually push that POV to a live article. —Jéské Couriano v^_^v threads critiques 17:22, 11 November 2024 (UTC)[reply]
    Same with pending changes. Aaron Liu (talk) 17:36, 11 November 2024 (UTC)[reply]
    Maybe in some fantasy land where the edit didn't need to be committed to the article's history. —Jéské Couriano v^_^v threads critiques 18:08, 11 November 2024 (UTC)[reply]
    Except that is how pull requests work on GitHub. You make the edit, and someone with reviewer permissions approves it to complete the merge. Here, the "commit" happens, but the revision is not visible until reviewed and approved. Edit requests are not pull requests, they are the equivalent of "issues" on GitHub. Awesome Aasim 19:03, 11 November 2024 (UTC)[reply]
    It may come as a surprise, but Wikipedia is not GitHub. While they are both collaborative projects, they are very different in most other respects. Thryduulf (talk) 19:20, 11 November 2024 (UTC)[reply]
    With Git, submitters make a change in their own branch (which can even be in their own repository), and then request that an integrator pull that change into the main branch. So the main branch history remains clean: it only has changes that were merged in. (It's one of the guiding principles of Git: allow the history tree of any branch to be simplified to improve clarity and performance.) isaacl (talk) 22:18, 11 November 2024 (UTC)[reply]
    Edit requests are supposed to be pull requests.

    Clearly indicate which sections or phrases should be replaced or added to, and what they should be replaced with or have added.
    — WP:ChangeXY

    Aaron Liu (talk) 22:51, 11 November 2024 (UTC)[reply]
    Yeah, that is what they are supposed to be, but in practice they are not. As anyone who has answered edit requests before knows, there are often messages that look like this:
Extended content

The reference is wrong. Please fix it. 192.0.0.1 (talk) 23:19, 11 November 2024 (UTC)[reply]

  • Which is not in practice WP:CHANGEXY. Awesome Aasim 23:19, 11 November 2024 (UTC)[reply]
    I don't see how that's much of a problem, especially as edits are also committed to the talk page's history. Aaron Liu (talk) 22:50, 11 November 2024 (UTC)[reply]
    Do the words "Provoke edit wars" mean anything? Talk page posts are far less likely to be the locus of an edit war than article edits. —Jéské Couriano v^_^v threads critiques 18:05, 14 November 2024 (UTC)[reply]
    As an editor who started out processing edit requests, including ECP edit requests, I disagree. Aaron Liu (talk) 18:08, 14 November 2024 (UTC)[reply]
  • Oppose, per what JSS has said. I am a little uncomfortable at the extent to which we've seemingly accepted preemptive protection of articles in contentious areas. It may be a convenient way of reducing the drama us admins and power users have to deal with... but only at the cost of giving up on the core principle that anybody can edit. I would rather have drama on a living project than peace on a dead one. jp×g🗯️ 18:16, 7 November 2024 (UTC)[reply]
  • Oppose I am one of those admins who likes to see disruption before protecting. Lectonar (talk) 08:48, 8 November 2024 (UTC)[reply]
  • Oppose as unnecessary, seems like a solution in search of a problem. Furthermore, this *is* Wikipedia, the encyclopedia anyone can edit; preemptively protecting pages discourages contributions from new editors. -Fastily 22:36, 8 November 2024 (UTC)[reply]
  • Weak Oppose I do understand where this protection would be helpful. But I just think something either is EC-protectable or it isn't. I don't necessarily think adding another level of bureaucracy is particularly helpful. --Takipoint123 (talk) 05:14, 11 November 2024 (UTC)[reply]
  • Oppose. I'm inclined to agree that the scenarios where this tool would work a benefit as a technical solution would be exceedingly niche, and that such slim benefit would probably be outweighed by the impact of having yet one more tool to further nibble away at the edges of the open spaces of the project which are available to new editors. Frankly, in the last few years we have already had an absurdly aggressive trend towards community (and ArbCom fiat) decisions which have increasingly insulated anything remotely in the vein of controversy from new editors--with predictable consequences for editor recruitment and retention past the period of early involvement, further exacerbating our workloads and other systemic issues. We honestly need to be rolling back some of these changes, not adding yet one more layer (however thin and contextual) to the bureaucratic fabric/new user obstacle course. SnowRise let's rap 11:23, 12 November 2024 (UTC)[reply]
  • Oppose. The more I read this discussion, the more it seems like this wouldn't solve the majority of what it sets out to solve but would create more problems while doing so, making it on balance a net negative to the project. Thryduulf (talk) 21:43, 12 November 2024 (UTC)[reply]
  • Oppose and Point of Order Oppose because pending changes is already too complicated and not very useful. I'm a pending changes reviewer and I've never rejected one on PC grounds (basically vandalism). But I often revert on normal editor grounds after accepting on PC grounds. (I suspect that many PC rejections are done for non-PC reasons instead of doing this) "Point of Order" is because the RFC is unclear on what exactly is being opposed. Sincerely, North8000 (talk) 22:15, 12 November 2024 (UTC)[reply]
    Pretty sure that what happens is that when vandals realize they will have to submit their edit for review before it goes live, that takes all the fun out of it for them because it will obviously be rejected, and they don't bother. That's pretty much how it was supposed to work. Just Step Sideways from this world ..... today 22:22, 12 November 2024 (UTC)[reply]
    This is a very good point, and I ask for @Awesome Aasim's clarification on whether reviewers will be able to reject edits on the grounds used for normal reverts, combined with the EC restriction. I think there's enough rationale to apply this here beyond the initial rationale for PC as explained by JSS above. Aaron Liu (talk) 23:24, 12 November 2024 (UTC)[reply]
    Reviewers are given specific reasons for accepting edits (see Wikipedia:Pending changes § Reviewing pending edits) to avoid overloading them with work while processing pending changes expeditiously. If the reasons are opened up to greater evaluation of the quality of edits, then expectations may shift towards this being a norm. Thus some users are concerned this will create a hierarchy of editors, where edits by non-reviewers are gated by reviewers. isaacl (talk) 23:44, 12 November 2024 (UTC)[reply]
    I understand that and wonder how the reviewer proposes to address this. I would still support this proposal if having reviewers reject according to whether they'd revert, and "ostensibly" to enforce EC, is to be the norm, albeit to a lesser extent for the reasons you mentioned (though I'd replace "non-reviewers" with "all non–auto-accepted"). Aaron Liu (talk) 00:13, 13 November 2024 (UTC)[reply]
    I'm not sure to whom you are referring when you say "the reviewer" – you're the one suggesting there's a rationale to support more reasons for rejecting a pending change beyond the current set. Since any pending change in the queue will prevent subsequent changes by non-reviewers from being visible to most readers, their edits too will get evaluated by a single reviewer before being generally visible. isaacl (talk) 00:59, 13 November 2024 (UTC)[reply]
    Sorry, I meant Aasim, the nominator. I made a thinko.
    Currently, reviewers can undo just the edits that aren't good and then approve the revision of their own revert. I thought that was what we were supposed to do. Aaron Liu (talk) 02:13, 13 November 2024 (UTC)[reply]
    Yes. Anything that is obvious vandalism or a violation of existing Wikipedia policies can still be rejected. However, edits where there is no other problem can still be accepted. In other words, a user not being extended confirmed shall not be sufficient grounds for rejecting an edit under PCECP, since the extended confirmed user takes responsibility for the edit. If the extended confirmed user accepts a bad edit, it is on them, not whoever made it. That is the whole idea.
    Of course obviously helpful changes such as fixing typos and adding up-to-date information should be accepted sooner, while more controversial changes should be discussed first. Awesome Aasim 17:37, 13 November 2024 (UTC)[reply]
    By "or a violation of existing Wikipedia policies", do you only mean violations of BLP, copyvio, and "other obviously inappropriate content" that can be very quickly checked, which is the current scope of what to reject? Aaron Liu (talk) 17:41, 13 November 2024 (UTC)[reply]
    Yeah, but also edits made in violation of an already well-established consensus. Edits that enforce a clearly established consensus (proven by previous talk page discussion) are, from my understanding, exempt from all WP:EW restrictions. Awesome Aasim 18:38, 13 November 2024 (UTC)[reply]
  • Oppose per Thryduulf and SnowRise. Also, regardless of whether this is a good idea as a policy, FlaggedRevs has a large amount of technical debt, to the extent that deployment to any additional WMF wikis is prohibited, so it seems unwise to expand its usage.  novov talk edits 19:05, 13 November 2024 (UTC)[reply]
  • Oppose I have never found the current pending changes system easy to navigate as a reviewer. ~~ AirshipJungleman29 (talk) 20:50, 14 November 2024 (UTC)[reply]
  • Oppose the more productive approach would be to reduce the overuse of extended-confirmed protection. We have come to rely on it too much. This would be technically difficult and complex for little real gain. —Ganesha811 (talk) 18:30, 16 November 2024 (UTC)[reply]
    That's the goal of this proposal (reducing the overuse of ECP), and it provides a plausible mechanism for that (replacing it with the much less stringent PCECP). How would you go about reducing the overuse of ECP instead? – Closed Limelike Curves (talk) 23:29, 29 November 2024 (UTC)[reply]
    Would you support a version in which the reviewers remain PC patrollers? Aaron Liu (talk) 00:58, 30 November 2024 (UTC)[reply]
  • Oppose there might be a need for this but not preemptive. Andre🚐 01:31, 17 November 2024 (UTC)[reply]
    Wouldn't that be a support here for question #1, and an oppose in question #2? – Closed Limelike Curves (talk) 23:34, 29 November 2024 (UTC)[reply]
    Indeed, but as I've said below, it appears the rationale in the background section has confused many. Aaron Liu (talk) 00:58, 30 November 2024 (UTC)[reply]
  • Oppose. The pending changes system is awful and this would make it awfuler (that wasn't a word but it is now). Zerotalk 05:58, 17 November 2024 (UTC)[reply]
  • Oppose. How can we know that the 73,026 extended-confirmed users are capable of reviewing pending changes? I assume this is a step above normal PCP (e.g. PCP is preferred over PCECP), so how can reviewing semi-protected pending changes have a higher bar (requiring a request at WP:PERM) than reviewing extended-protected pending changes? That doesn't make much sense to me. — BerryForPerpetuity (talk) 14:15, 20 November 2024 (UTC)[reply]
    I do not think that having XCON users as the reviewers is fixed. This RfC is primarily about the creation of PCECP. ~/Bunnypranav:<ping> 14:21, 20 November 2024 (UTC)[reply]
    Well, they're capable of reviewing edit requests. Aaron Liu (talk) 14:39, 20 November 2024 (UTC)[reply]
    Sure, but assuming this will work the same as PCR, isn't it possible that an extended-confirmed user who doesn't want to review edits will try to edit a PCECP page and be required to review edits beforehand? They're not actively seeking to review edits in the same way that a PCR or someone who handles edit requests does. Will their review be on par with the scrutiny required for this level of protection? — BerryForPerpetuity (talk) 14:55, 20 November 2024 (UTC)[reply]
    You do not need to review edits to edit the pending version of the page, which is what happens when you press save on a page with pending edits. Aaron Liu (talk) 15:02, 20 November 2024 (UTC)[reply]
    Is it not the case that reviewers need to check a page's pending changes to edit a page? Either way, the point of "what would constitute a revert" needs to be discussed and decided on before we start to implement this, which I appreciate you discussing above. — BerryForPerpetuity (talk) 15:38, 20 November 2024 (UTC)[reply]
    No. It's just that if the newest change is not reviewed, the last reviewed change is shown to readers instead of the latest change. Aaron Liu (talk) 16:00, 20 November 2024 (UTC)[reply]
    How can we know that the 72,734 extended-confirmed users are capable of reviewing pending changes? This isn't about pending changes level 1. This is about pending changes as applied to enforce ECP, with the level [auto-accept=extendedconfirmed] [review=extendedconfirmed]. As this is only intended to be used for WP:ARBECR restricted pages, it shouldn't be used for anything else.
    What might need to happen for this to work is for the FlaggedRevs extension to allow configuring who can auto-accept and who can review changes individually (rather than bundled, as they are right now). Something like this for these drop-downs:
    • Auto-accept:
      • All users
      • Autoconfirmed
      • Extended confirmed
      • Template editor
      • Administrators
    • Review:
      • Autoconfirmed
      • Extended confirmed and reviewers
      • Template editors and reviewers
      • Administrators
    Of course, users in the autoreview group will have auto-accept permissions regardless of these settings, and users in the reviewer group will have review permissions regardless of these settings. Awesome Aasim 16:36, 20 November 2024 (UTC)[reply]
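    For illustration, a hypothetical sketch of how such decoupled drop-downs might be represented in configuration. The array below is invented for this example; it is not an existing FlaggedRevs setting, since the current extension bundles auto-accept and review rather than exposing them separately per protection level.
    // Hypothetical presets only; FlaggedRevs does not currently expose auto-accept
    // and review as independently configurable groups per protection level.
    $pcecpProtectionPresets = [
        'pending-changes' => [ 'auto-accept' => 'autoconfirmed',     'review' => 'reviewer' ],
        'pcecp'           => [ 'auto-accept' => 'extendedconfirmed', 'review' => 'extendedconfirmed' ],
        'pc-sysop'        => [ 'auto-accept' => 'sysop',             'review' => 'sysop' ],
    ];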
    I understand what you're saying, and I'm aware this isn't about level 1. I'm not strongly opposed to PCECP, but my original point was talking about the difference in reviewer requirements for semi-protected PC and XCON PC. If this passes, it would make reviewing semi-protected pending changes require a permission request, but reviewing extended-protected pending changes would only require being extended-confirmed. If that could be explained so I could understand it better, I'd appreciate it.
    This also relates to edit requests. XCON users are capable of reviewing edit requests, because they don't have to implement the request verbatim. If a user makes a request that has good substance, but has a part that doesn't adhere to some policy (MOS, NPOV, etc.), the reviewer can change it to fit policy. With pending changes, there's really no way to do that besides editing the accepted text after accepting it. The edit request reviewer can ask for clarification on something, add notes, give a reason for declining, etc.
    Especially on pages that have ARBCOM enforcement on them, the edit request system is far better than the pending changes system. This approach seems to be a solution for the problem of over-protection, which is what should actually be addressed. — BerryForPerpetuity (talk) 17:22, 22 November 2024 (UTC)[reply]
    Personally, I would also support this change if only reviewers may accept.
    I think editing a change after acceptance is superior. It makes clear which parts were written by whom (and thus makes it much easier to satisfy our CC license). Aaron Liu (talk) 17:43, 22 November 2024 (UTC)[reply]
    Identifying which specific parts were written by whom isn't necessary for the CC BY-SA license. (And since each new revision is a new derivative work, it's not that easy to isolate.) isaacl (talk) 18:50, 22 November 2024 (UTC)[reply]
    Right, but there's no need to forget the attributive edit summary, which is needed when accepting edit requests. Identifying specific parts is just cleaner this way. Aaron Liu (talk) 18:57, 22 November 2024 (UTC)[reply]
    If the change is rejected, then a user who isn't an author of the content appears in the article history. In theory that would unnecessarily entangle the user in any copyright issues that arose, or possibly defamation cases. isaacl (talk) 22:55, 22 November 2024 (UTC)[reply]
    I personally see that as a much lesser problem than the EditRequests issue. Aaron Liu (talk) 19:15, 23 November 2024 (UTC)[reply]
    We should be maximizing the number of pages that are editable by all. Protection fails massively at this task. All this does is tell editors "hey don't edit this page", which is fine for certain legal pages and the main page that no one should really be editing, but for articles? There is a reason we have this thing called "code review" on Git and "peer review" everywhere else; we should be encouraging changes but if there is disruption we should be able to hold them for review so we can remove the problematic ones.
    Since Wikipedia is not configured to have software-based RC patrol outside of new pages patrol (and RC patrol would be a problem anyway not only because of the sheer volume of edits but also because edits older than a certain timeframe are removed from the patrol queue), we have to rely on other software measures to hide revisions until they are approved. Specifically, RC patrol hiding all edits until approved (wikiHow does this) would be a problem on Wikipedia. But that is a tangent. Awesome Aasim 19:43, 22 November 2024 (UTC)[reply]
    There's also a reason why Git changes aren't pushed directly to the main code branch for review, and instead a pull request is sent to an integrator in order to integrate the changes. There's a bottleneck in processing the request (including integration testing). Also note with software development, rebasing your changes onto the latest integrated stream is your responsibility. The equivalent with pending changes would be for each person to revalidate their proposed change after a preceding change had been approved or rejected. Instead, the workload falls upon the reviewer. Side note: the term "code review" far predates git, and is widely used by many software development teams. isaacl (talk) 22:45, 22 November 2024 (UTC)[reply]
    I see I see. I do think we need better pending changes as the current flagged revs system sucks. Also just because a feature is turned on doesn't mean there is consensus to use it, as seen by WP:SUPERPROTECT and WP:PC2. Awesome Aasim 18:11, 23 November 2024 (UTC)[reply]
    Your second sentence would render everything about this meaningless. Plus, the community does not like unnecessarily turning features on; both of your examples have been removed. Aaron Liu (talk) 19:18, 23 November 2024 (UTC)[reply]
    I know, that is my point. We also have consensus to make unlimited width the default in Vector 2022, which was never turned on. Awesome Aasim 19:20, 23 November 2024 (UTC)[reply]
    I don't understand your point. You're making a proposal for a new feature that has to be developed in a MediaWiki extension. If it does get developed, it won't get deployed on English Wikipedia unless there's consensus to use it. And given that the extension is not supported by the WMF right now, to the extent that it won't deploy it on new wikis, I'm not sure it has the ability to support any new version. isaacl (talk) 22:53, 23 November 2024 (UTC)[reply]
  • Oppose, per JSS and others. We don't need another system just to allow the preemptive protection of pages, and allowing non-EC editors to clutter up this history in ARBECR topic areas would just create a lot of extra work with little or no real benefit. – bradv 23:10, 23 November 2024 (UTC)[reply]
  • Oppose - Edit requests only for non-EC users are against the spirit of an open wiki, but they are necessary to prevent the absolute flame-wars/edit-wars on contentious topic pages. Having a pending changes version of an article only moves flamewars by non-ECR users to the pending changes version. Better to allow edit requests and use ARBECR to close non-productive discussions on the talk page than to have another venue for CTOP flamewars to occur. Bluethricecreamman (talk) 02:28, 2 December 2024 (UTC)[reply]
    In your argument, aren't flamewars still moved to the edit request's discussions? Can't editors also just reject non-productive pending changes? Aaron Liu (talk) 03:48, 2 December 2024 (UTC)[reply]

Neutral (PCECP)

  1. I have made my opposition to all forms of FlaggedRevisions painfully clear since 2011. I will not formally oppose this, however, so as to avoid the process being derailed by people rebutting my opposition. —Jéské Couriano v^_^v threads critiques 02:36, 6 November 2024 (UTC)[reply]
  2. I'm not a fan of the current pending changes, so I couldn't support this. But it also wouldn't affect my editing, so I won't oppose it if it helps others. -- LCU ActivelyDisinterested «@» °∆t° 14:32, 6 November 2024 (UTC)[reply]

Discussion (PCECP)


Someone who is an expert at configuring mw:Extension:FlaggedRevs will need to confirm that it is possible to simultaneously have our current type of pending changes protection plus this new type of pending changes protection. The current enwiki FlaggedRevs config looks something like the below and may not be easy to configure. You may want to ping Ladsgroup or post at WP:VPT for assistance.

Extended content
// enwiki
// InitializeSettings.php
$wgFlaggedRevsOverride = false;
$wgFlaggedRevsProtection = true;
$wgSimpleFlaggedRevsUI = true;
$wgFlaggedRevsHandleIncludes = 0;
$wgFlaggedRevsAutoReview = 3;
$wgFlaggedRevsLowProfile = true;
// CommonSettings.php
$wgAvailableRights[] = 'autoreview';
$wgAvailableRights[] = 'autoreviewrestore';
$wgAvailableRights[] = 'movestable';
$wgAvailableRights[] = 'review';
$wgAvailableRights[] = 'stablesettings';
$wgAvailableRights[] = 'unreviewedpages';
$wgAvailableRights[] = 'validate';
$wgGrantPermissions['editprotected']['movestable'] = true;
// flaggedrevs.php
wfLoadExtension( 'FlaggedRevs' );
$wgFlaggedRevsAutopromote = false;
$wgHooks['MediaWikiServices'][] = static function () {
	global $wgAddGroups, $wgDBname, $wgDefaultUserOptions,
		$wgFlaggedRevsNamespaces, $wgFlaggedRevsRestrictionLevels,
		$wgFlaggedRevsTags, $wgFlaggedRevsTagsRestrictions,
		$wgGroupPermissions, $wgRemoveGroups;

	$wgFlaggedRevsNamespaces[] = 828; // NS_MODULE
	$wgFlaggedRevsTags = [ 'accuracy' => [ 'levels' => 2 ] ];
	$wgFlaggedRevsTagsRestrictions = [
		'accuracy' => [ 'review' => 1, 'autoreview' => 1 ],
	];
	$wgGroupPermissions['autoconfirmed']['movestable'] = true; // T16166
	$wgGroupPermissions['sysop']['stablesettings'] = false; // -aaron 3/20/10
	$allowSysopsAssignEditor = true;

	$wgFlaggedRevsNamespaces = [ NS_MAIN, NS_PROJECT ];
	# We have only one tag with one level
	$wgFlaggedRevsTags = [ 'status' => [ 'levels' => 1 ] ];
	# Restrict autoconfirmed to flagging semi-protected
	$wgFlaggedRevsTagsRestrictions = [
		'status' => [ 'review' => 1, 'autoreview' => 1 ],
	];
	# Restriction levels for auto-review/review rights
	$wgFlaggedRevsRestrictionLevels = [ 'autoconfirmed' ];
	# Group permissions for autoconfirmed
	$wgGroupPermissions['autoconfirmed']['autoreview'] = true;
	# Group permissions for sysops
	$wgGroupPermissions['sysop']['review'] = true;
	$wgGroupPermissions['sysop']['stablesettings'] = true;
	# Use 'reviewer' group
	$wgAddGroups['sysop'][] = 'reviewer';
	$wgRemoveGroups['sysop'][] = 'reviewer';
	# Remove 'editor' and 'autoreview' (T91934) user groups
	unset( $wgGroupPermissions['editor'], $wgGroupPermissions['autoreview'] );

	# Rights for Bureaucrats (b/c)
	if ( isset( $wgGroupPermissions['reviewer'] ) ) {
		if ( !in_array( 'reviewer', $wgAddGroups['bureaucrat'] ?? [] ) ) {
			// promote to full reviewers
			$wgAddGroups['bureaucrat'][] = 'reviewer';
		}
		if ( !in_array( 'reviewer', $wgRemoveGroups['bureaucrat'] ?? [] ) ) {
			// demote from full reviewers
			$wgRemoveGroups['bureaucrat'][] = 'reviewer';
		}
	}
	# Rights for Sysops
	if ( isset( $wgGroupPermissions['editor'] ) && $allowSysopsAssignEditor ) {
		if ( !in_array( 'editor', $wgAddGroups['sysop'] ) ) {
			// promote to basic reviewer (established editors)
			$wgAddGroups['sysop'][] = 'editor';
		}
		if ( !in_array( 'editor', $wgRemoveGroups['sysop'] ) ) {
			// demote from basic reviewer (established editors)
			$wgRemoveGroups['sysop'][] = 'editor';
		}
	}
	if ( isset( $wgGroupPermissions['autoreview'] ) ) {
		if ( !in_array( 'autoreview', $wgAddGroups['sysop'] ) ) {
			// promote to basic auto-reviewer (semi-trusted users)
			$wgAddGroups['sysop'][] = 'autoreview';
		}
		if ( !in_array( 'autoreview', $wgRemoveGroups['sysop'] ) ) {
			// demote from basic auto-reviewer (semi-trusted users)
			$wgRemoveGroups['sysop'][] = 'autoreview';
		}
	}
};

Novem Linguae (talk) 09:41, 6 November 2024 (UTC)[reply]

I basically came here to ask if this is even possible or if it would need WMF devs' involvement or whatever.
For those unfamiliar, pending changes is not the same thing as the flagged revisions used on de.wp. PC was developed by the foundation specifically for this project after we asked for it. We also used to have WP:PC2 but nobody really knew what that was supposed to be and how to use it and it was discontinued. Just Step Sideways from this world ..... today 21:21, 6 November 2024 (UTC)[reply]
Is PC2 an indication of implementation being possible? Aaron Liu (talk) 22:27, 6 November 2024 (UTC)[reply]
Depends on what exactly is meant by "implementation". A configuration where edits by non-extendedconfirmed users need review by reviewers would probably be similar to what was removed in gerrit:/r/334511 to implement T156448 (removal of PC2). I don't know whether a configuration where edits by non-extendedconfirmed users can be reviewed by any extendedconfirmed user while normal PC still can only be reviewed by reviewers is possible or not. Anomie 13:32, 7 November 2024 (UTC)[reply]
Looking at the MediaWiki documentation, it is not possible atm. That said, currently the proposal assumes that it is possible and we should work with that (though I would also support allowing all extended-confirmed to review all pending changes). Aaron Liu (talk) 13:56, 7 November 2024 (UTC)[reply]

I think the RfC summary statement is a bit incomplete. My understanding is that the pending changes feature introduces a set of rights which can be assigned to corresponding user groups. I believe all the logic is based on the user rights, so there's no way to designate that one article can be autoreviewed by one user group while another article can be autoreviewed by a different user group. Thus unless the proposal is to replace autoconfirmed pending changes with extended confirmed pending changes, I don't think saying "enabled" in the summary is an adequate description. And if the proposal is to replace autoconfirmed pending changes, I think that should be explicitly stated. isaacl (talk) 22:06, 6 November 2024 (UTC)[reply]

The proposal assumes that coexistence is technically possible. Aaron Liu (talk) 22:28, 6 November 2024 (UTC)[reply]
The proposal did not specify if it assumed co-existence is possible, or enabling it is possible, which could mean replacement. Thus I feel the summary statement (before the timestamp, which is what shows up in the central RfC list) is incomplete. isaacl (talk) 22:31, 6 November 2024 (UTC)[reply]
While, on a re-read, "It is assumed that it is technically possible to have PCECP" does not explicitly imply co-existence, that is how I interpreted it. Anyways, it would be wonderful to hear from @Awesome Aasim about this. Aaron Liu (talk) 22:42, 6 November 2024 (UTC)[reply]
The key question that ought to be clarified is if the proposal is to have both, or to replace the current one with a new version. (That ties back to the question of whether or not the arbitration committee's involvement is required.) Additionally, it would be more accurate not to use a word in the summary that implies the only cost is turning on a switch. isaacl (talk) 22:49, 6 November 2024 (UTC)[reply]
It is assuming that we can have PC1 where only reviewers can approve edits and PCECP where only extended confirmed users can approve edits AND make edits without requiring approval. With the current iteration I don't know if it is technically possible. If it requires an extension rewrite or replacement, that is fine. If something is still unclear, please let me know. Awesome Aasim 23:06, 6 November 2024 (UTC)[reply]
I suggest changing the summary statement to something like, "Should a new pending changes protection level be added to Wikipedia – extended confirmed pending changes (hereby abbreviated as PCECP)?". The subsequent paragraph can provide the further explanation on who would be autoreviewed and who would serve as reviewers with the new proposed level. isaacl (talk) 23:19, 6 November 2024 (UTC)[reply]
Okay, done. I tweaked the wording a little. Awesome Aasim 23:40, 6 November 2024 (UTC)[reply]
I think inclusion of the preemptive-protection part in the background statement is causing confusion. AFAIK preemptive protection and whether we should use PCECP over ECP are separate questions. Aaron Liu (talk) 19:11, 7 November 2024 (UTC)[reply]
The discussion above is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.

Q2: If this proposal passes, should PCECP be applied preemptively to WP:ARBECR topics?


The following discussion is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.


Particularly on low traffic articles as well as all talk pages. WP:ECP would still remain an option to apply on top of PCECP. Awesome Aasim 19:58, 5 November 2024 (UTC)[reply]

Support (Preemptive PCECP)

  • Support for my reasons in Q1. Awesome Aasim 19:58, 5 November 2024 (UTC)[reply]
    Also, to add on: there needs to be some enforcement measure for WP:ARBECR. Having no technical enforcement measure for WP:ARBECR is akin to site-banning an editor and then refusing to block them because "blocks should be preventative". Awesome Aasim 19:42, 13 November 2024 (UTC)[reply]
    Blocking a site-banned user is preventative, because if we didn't need to prevent them from editing they wouldn't have been site banned. Thryduulf (talk) 21:16, 13 November 2024 (UTC)[reply]
  • Slightly ambivalent on protecting talk pages, but I guess it would bring prominence to low-traffic pages. Aaron Liu (talk) 20:13, 5 November 2024 (UTC)[reply]
    Per isaacl, I only support preemptive protection on low-traffic pages. Aaron Liu (talk) 23:21, 12 November 2024 (UTC)[reply]
  • Support, including on talk pages. With edit requests mostly dealt with through pending changes, protecting the talk pages too should limit the disruption and unconstructive comments that are often commonplace there. (Changing my mind, I don't think applying PCECP on all pages would be a constructive solution. The rules of ARBECR limit participation to extended-confirmed editors, but the spirit of the rules has been to only enforce that on pages with actual disruption, not preemptively. 20:49, 7 November 2024 (UTC)) Chaotic Enby (talk · contribs) 20:21, 5 November 2024 (UTC)[reply]
  • Support I'm going to disagree with the "no" argument entirely - we should be preemptively ECPing (even without pending changes). It's a perversion of logic to say "you can't (per policy) push this button", and then refuse to actually technically stop you from pushing the button even though we know you could. * Pppery * it has begun... 20:52, 5 November 2024 (UTC)[reply]
  • Support (Summoned by bot): While I disagree with ECR in general, this is a better way of enforcing it as long as it exists. Constructive "edit requests" can be accepted, and edits that people disagree with can be easily reverted. I'm slightly concerned with how this could affect the pending changes backlog (which has a fairly small number of active reviewers at the moment), but I'm sure that can be figured out. C F A 💬 23:41, 5 November 2024 (UTC)[reply]

Oppose (Preemptive PCECP)

  • No, we still shouldn't be protecting preemptively. Wait until there's disruption, and then choose between PCXC and regular XC protection (I would strongly favour the former for the reasons I gave above). Cremastra (uc) 20:43, 5 November 2024 (UTC)[reply]
  • Mu - This is a question that should be asked afterwards, not at the same time, since ArbCom will want to look at any such proposal. —Jéské Couriano v^_^v threads critiques 02:38, 6 November 2024 (UTC)[reply]
  • No, I feel this would be a bad idea. Critics of Wikipedia already use the idea that it's controlled by a select group, this would only make that misconception more common. -- LCU ActivelyDisinterested «@» °∆t° 14:36, 6 November 2024 (UTC)[reply]
  • Preemptive protection has always been contrary to policy, with good reason. Just Step Sideways from this world ..... today 21:26, 6 November 2024 (UTC)[reply]
  • Absolutely not. No need for protection if there is no disruption. The number of protected pages should be kept low, and the number of pages that cry out "look at me!" on your watchlist (anything under pending changes) should be as close to zero as possible. —Kusma (talk) 21:44, 6 November 2024 (UTC)[reply]
    "No need for protection if there is no disruption." Trouble is, the ECR restriction is enacted in response to widespread disruption, this time to the entire topic area as a whole. Disregard for POV, blatant inclusion of unverifiable or false (unreliable) information, and more all pose serious threats of disruption to the project. If WP:ARBECR were applied broadly without any protection I would agree, but WP:ARBECR is applied in response to disruption (or a serious threat of it), not preemptively. Take this one for example, which is a long-winded ANI discussion that ended in the WP:GS for the Russo-Ukrainian War (and the ECR restrictions). And as for the Arbitration Committee, ArbCom is a last resort when all other attempts to resolve disruption fail. See WP:ARBPIA, WP:ARBPIA2, WP:ARBPIA3 and WP:ARBPIA4. The earliest reference to the precursor to ARBECR in this case is in the third ArbCom case. Not protecting within a topic area that has a high risk of disruption is akin to having a high-risk template unprotected. The only difference is that carelessly editing a high-risk template creates technical problems, while carelessly editing a high-risk topic area creates content problems.
    Either the page is protected technically (which enforces a community or ArbCom decision that only specific editors are allowed in topic areas) or the page is not protected technically but protected socially (which then gives a chance of evasion). I see this situation as no different from banning an editor sitewide and then refusing to block them on the grounds that "blocks should only be used to prevent disruption" while ignoring the circumstances leading up to the site ban.
    What PCECP would do is allow for better enforcement of the community aspect. New editors won't be bitten: if they find something that needs fixing, like a typo, they can make an edit and it can get approved. More controversial edits will get relegated to the talk page, where editors not banned from that topic area can discuss them. And blatant POV pushing and whatnot would get reverted and would never even be seen by readers.
    The workflow would look like this: new/anon user makes an edit → edit gets held for review → extended confirmed user approves the edit. Rather than the current workflow (and the reason why preemptive ECP is unpopular): new/anon user makes an edit → user is greeted with a "this page is protected" message → user describes what they would like to be changed, but in a badly formulated way → edit request gets closed as "unclear" or something similar. Awesome Aasim 14:21, 11 November 2024 (UTC)[reply]
    Consider this POV change made to a topic that I presume is covered under WP:ARBPIA and that is not protected. The whole reason that there is WP:ARBECR is to prevent stuff like this from happening. There already is consensus either among arbitrators or among the community to enact ECR within specific contentious topic areas, so I don't see how it is productive to refuse to protect pages because of "not enough disruption" when the entire topic area has faced widespread disruption in the past. Awesome Aasim 18:18, 23 November 2024 (UTC)[reply]
    Simple, everyday vandalism is far from the levels of disruption that caused the topic to be marked Contentious. Aaron Liu (talk) 19:20, 23 November 2024 (UTC)[reply]
    That example I provided isn't vandalism. Yes, it is disruptive POV pushing, but it is not vandalism. Wikipedia also exists in the real world, and Wikipedia does not have the technical tools to fight armies of POV pushers and more. One example is Special:PermaLink/1197462753#Arbitration_motion_regarding_PIA_Canvassing. When the stakes are this high, people feel entitled to impose their view on the project, but Wikipedia isn't the place to right great wrongs. Awesome Aasim 19:32, 23 November 2024 (UTC)[reply]
    It is vandalism, the changing of content beyond recognition. Even if it were just POV-pushing, there was no army here. Aaron Liu (talk) 19:41, 23 November 2024 (UTC)[reply]
  • Per my vote above. Ratnahastin (talk) 09:00, 7 November 2024 (UTC)[reply]
  • Absolutely not. Protection should only ever be preventative. Kusma puts it better than I can. Thryduulf (talk) 13:49, 7 November 2024 (UTC)[reply]
  • Per my comment above. jp×g🗯️ 18:17, 7 November 2024 (UTC)[reply]
  • No; see my comment above. I prefer to see disruption before protecting. Lectonar (talk) 08:51, 8 November 2024 (UTC)[reply]
  • No. We should be quicker to apply protection in these topics than we would elsewhere, but not preemptively except on highly visible pages (which, in these topics, are probably ECP-protected anyway). Animal lover |666| 17:18, 11 November 2024 (UTC)[reply]
  • No, that would create a huge backlog. ~~ AirshipJungleman29 (talk) 20:50, 14 November 2024 (UTC)[reply]
  • Oppose per Kusma Andre🚐 01:30, 17 November 2024 (UTC)[reply]

Neutral (preemptive PCECP)

[edit]

Discussion (preemptive PCECP)

[edit]
@Jéské Couriano Could you link to said ArbCom discussion? Aaron Liu (talk) 03:51, 6 November 2024 (UTC)[reply]
I'm not saying such a discussion exists, but changes to Arbitration remedies/discretionary sanctions are something they would want to weigh in on. Arbitration policy (which includes WP:Contentious topics) is in their wheelhouse and this would have serious implications for WP:CT/A-I and any further instances where ArbCom (rather than individual editors, as a discretionary sanction) would need to resort to a 500/30 rule as an explicit remedy. —Jéské Couriano v^_^v threads critiques 04:58, 6 November 2024 (UTC)[reply]
That is not my reading of WP:ARBECR. Specifically, On any page where the restriction is not enforced through extended confirmed protection, this restriction may be enforced by...the use of pending changes... (bold added by me for emphasis). But if there is consensus not to use this preemptively so be it. Awesome Aasim 05:13, 6 November 2024 (UTC)[reply]
  • While I appreciate the forward thinking that PCECP may be wanted for use in Arb areas, this feels like a considerable muddying of the delineation between the Committee's role and the community's role. Traditionally, Contentious Topics have been the realm of ArbCom, and General Sanctions have been the realm of the Community. Part of the logic comes down to who takes the blame when things go wrong. The Community shouldn't take the blame when ArbCom makes a decision, and vice versa. Part of the logic is separation of powers. If the community wants to say "ArbCom, you will enforce this so help you God," then that should be done by amending ArbPol. Part of the logic is practical. If the community creates a process that adds to an existing Arb process, what happens when the Arbs want to change that process? Or even end it altogether? Bottom line: Adopting PCECP for ARBECR is certainly something ArbCom could do. But I'd ask the community to consider the broader structural problems that would arise if the community adopted it on behalf of ArbCom. CaptainEek Edits Ho Cap'n! 05:18, 7 November 2024 (UTC)[reply]
    Interesting. I'd say ArbCom should be able to override the community if they truly see such action fit and worthy of potential backlash. Aaron Liu (talk) 12:30, 7 November 2024 (UTC)[reply]
    Just a terminology note, although I appreciate many think of general sanctions in that way, it's defined on the Wikipedia:General sanctions page as ... a type of Wikipedia sanctions that apply to all editors working in a particular topic area. ... General sanctions are measures used by the community or the Arbitration Committee ("ArbCom") to improve the editing atmosphere of an article or topic area.. Thus the contentious topics framework is a form of general sanctions. isaacl (talk) 15:22, 7 November 2024 (UTC)[reply]
    Regarding the general point: I agree that it is cumbersome for the community to impose a general sanction that is added on top of a specific arbitration remedy. I would prefer that the community work with the arbitration committee to amend its remedy, which would facilitate keeping the description of the sanction and logging of its enforcement together, instead of split. (I appreciate that for this specific proposal, logging of enforcement is not an issue.) isaacl (talk) 15:30, 7 November 2024 (UTC)[reply]
    Extended confirmed started off as an ArbCom concept - 500 edits/30 days - which the community then chose to adopt. ArbCom then decided to make its remedy match the community's version - such that if the community were to decide extended confirmed meant 1000 edits/90 days, all ArbCom restrictions would update. I find this a healthy feedback loop between ArbCom and the community. The community could clearly choose (at least on a policy level, given some technical concerns) to enact PCECP. It could choose to apply this to some/all pages. If it is comfortable saying that it wants to delegate to the Arbitration Committee some of the decisions about which pages this applies to, I think it can do so without amending ArbPol. However, I think ArbCom could decide that PCECP would not apply in some/all CTOP areas, given that the Committee is exempt from consensus for areas within its scope. And so it might ultimately make more sense to do what isaacl suggests. Best, Barkeep49 (talk) 16:02, 7 November 2024 (UTC)[reply]
    The "contentious topics" procedure does seem like something that the community should absolutely mirror and that ultimately both the community and ArbCom should work out of. If one diverges, there is probably a good reason why it diverged.
    As for the broader structural problems that would arise if the community adopted it on behalf of ArbCom, there are already structural problems with general sanctions because of the community's failure to adopt the new CTOP procedure for new contentious topics. Although the community has adopted the contents of WP:ARBECR for other topic areas like WP:RUSUKR, they don't adopt it by reference but by copying the whole text verbatim. Awesome Aasim 17:13, 7 November 2024 (UTC)[reply]
    That's not the same structural problem. The community hasn't had a lot of discussion about adopting the contentious topic framework for its own use (in my opinion, because it's a very process-wonky discussion that doesn't interest enough editors to generate a consensus), but that doesn't interfere with how the arbitration committee uses the contentious topic framework. This proposal is suggesting that the community automatically layer on its own general sanction any time the arbitration committee decides to enact a specific sanction. Thus the committee would have to consider each time whether or not to override the community add-on, and amendment requests might have to be made both to the committee and the community. isaacl (talk) 17:33, 7 November 2024 (UTC)[reply]
    Prior to contentious topics there were discretionary sanctions. Those became very muddled, and so the committee created Contentious topics to help clarify the line between community and committee (disclosure: I helped draft much of that work). As part of that, the committee also established ways for the community to tie in to contentious topics if it wanted. So far the community hasn't made that choice, which is fine. But I do think this is an area that, in general, ArbCom does better than the community, because there is more attention paid to having consistency across areas, and when a problem arises I have found (in basically this one area only) ArbCom to be more agile at addressing it. But the community is also more willing to pass a GS than ArbCom is to designate something a CT (which I think is a good thing all around), and so having the community come to consensus about how, if at all, it wants to tie in to CT (and its evolutions), or whether it would prefer to do its own thing (including just mirroring whatever happens to be in CT at the time but not subsequent changes), would probably be a good meta discussion to have. But it also doesn't seem necessary for this particular proposal. Best, Barkeep49 (talk) 17:41, 7 November 2024 (UTC)[reply]
The discussion above is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.

Q3: If this proposal does not pass, should ECP be applied preemptively to articles under WP:ARBECR topics?

[edit]

The following discussion is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.


Support (preemptive ECP)

[edit]
  • Support as a second option, but only for articles. Talk pages can be enforced solely through reverts and short protections, so I see little reason why those should be protected. Awesome Aasim 19:58, 5 November 2024 (UTC) Moved to oppose. Awesome Aasim 19:10, 23 November 2024 (UTC)[reply]
  • Support for articles per Aasim. Talk pages still need to be open for edit requests. (Also changing my mind, per above. If anything, we should clarify ARBECR so that the 500-30 limit is only applied in cases where it is needed, not automatically, to resolve the ambiguity. 20:52, 7 November 2024 (UTC)) Chaotic Enby (talk · contribs) 20:20, 5 November 2024 (UTC)[reply]
  • Support per my comment in the previous section. * Pppery * it has begun... 20:52, 5 November 2024 (UTC)[reply]
  • I agree with Chaotic Enby and Pppery above and think all CT articles should be protected. I am generally not a fan of protecting Talk pages, but it's true that many CT Talk pages are cesspools of hate, so I am not sure where I sit on protecting Talk pages. Toadspike [Talk] 20:57, 5 November 2024 (UTC)[reply]
    Under the current wording of ARBECR, When such a restriction is in effect in a topic area, only extended-confirmed editors may make edits related to the topic area. We should protect pages, rather than letting new editors edit and then reverting them for basically no reason. This is a waste of their time and very BITEy.
    I am not opposed to changing the wording of ARBECR to forbid reverting solely because an editor is not extended confirmed, which is a silly reason to revert otherwise good edits. However, until ArbCom changes ARBECR, we are stuck with the rules we have. We ought to make these rules clear to editors before they edit, by page protection, instead of after they edit, by reversion. Toadspike [Talk] 10:55, 16 November 2024 (UTC)[reply]
  • Support preemptive ECP without PCECP (for article space only). If we have a strict policy (or ArbCom ruling) that a class of user is forbidden to edit a class of page, there is no downside whatever to implementing that policy by technical means. All it does is stop prohibited edits. The consequences would all be positive, such as removing the need for constant monitoring, reducing IP vandalism to zero, and reducing the need to template new editors who haven't learned the rules yet. What I'd like, with regard to the last one, is that a non-EC editor sees an "edit" button on an ECP page, but clicking it diverts them to a page that explains EC and how to get it. Zerotalk 05:53, 17 November 2024 (UTC)[reply]

Oppose (preemptive ECP)

[edit]
  • Oppose because I think this is a bad idea. For one thing, just making a list of all the covered articles could produce disputes that we don't need. (This article might be covered, but is it truly covered? Reasonable people could easily disagree about whether some articles are "mostly" about the restricted area vs "partly", and therefore about whether the rule applies.) Second, where a serious and obvious problem, such as blatant vandalism, is concerned, it would be better to have an IP revert it than to mindlessly follow the rules. It is important to remember that our rules exist as a means to an end. We follow them because, and to the extent that, they help overall. We expect admins and other editors to exercise discretion. It is our policy that Wikipedia:If a rule prevents you from improving or maintaining Wikipedia, ignore it. This is a proposal to declare that the IAR policy never applies to the rule about who should normally be editing these articles, and that exercising discretion is not allowed. WhatamIdoing (talk) 20:42, 5 November 2024 (UTC)[reply]
    I am neither Arb nor admin, but I think the words "broadly construed" are specifically chosen so that if a topic is "partly" about the restricted area, it is included in the CTOP. @WhatamIdoing, could you please show me an example of a case where CTOP designation or ECP was disputed? Toadspike [Talk] 10:59, 16 November 2024 (UTC)[reply]
    I avoid most of those articles, but consider "the entire set of Arab-Israeli conflict-related articles, broadly interpreted": Does that include BLPs who come from Israel/Palestine? What about BLPs who are in the news because of what they said about the Israel–Hamas war? IMO reasonable people could disagree about whether "every person living in the affected area" or "every person talking about the conflict" is part of "the entire set of Arab-Israeli conflict-related articles, broadly interpreted". WhatamIdoing (talk) 19:54, 16 November 2024 (UTC)[reply]
    David Miller is what we call a "partial" Arbpia. So while it's a BLP in general, parts of it are subject to Arbpia/CT, which is not a particularly unusual situation. The talk page and edit notices should, but don't always, tell you whether it is, or isn't, part of the topic area. Selfstudier (talk) 20:59, 16 November 2024 (UTC)[reply]
    WP:IAR applies to content, not to conduct. ArbCom is empowered to take action against poor conduct. You can't claim WP:IAR, for example, to justify reverse-engineering a script that requires specific permissions to use. Likewise, a new editor cannot claim "IAR" to justify adding unverifiable (albeit true) information to an ARBECR-protected article. Awesome Aasim 15:25, 16 November 2024 (UTC)[reply]
    IAR stands for IgnoreAllRules. The latter two cannot be claimed valid based on IgnoreAllRules because they don't have strong IgnoreAllRules arguments for what they did, not because IgnoreAllRules somehow only applies to content. Aaron Liu (talk) 16:07, 16 November 2024 (UTC)[reply]
    I meant ignore all rules applies to rules not to behavior. Point still stands as ARBPIA addresses behavior not content. Awesome Aasim 21:04, 16 November 2024 (UTC)[reply]
    I agree that "ignore all rules" applies to rules – including rules about behavior. ARBPIA is a rule about behavior. IAR therefore applies to ARBPIA.
    Of course, if breaking the rule doesn't prove helpful to Wikipedia in some way, then no matter what type of rule it is, you shouldn't break the rule. We have a rule against bad grammar in articles, and you should not break that rule. But when two rules conflict – say, the style rule of "No bad grammar" and the behavioral rule of "No editing this ARBPIA article while logged out, even if it's because you're on a public computer and can't remember your password" – IAR says you can choose to ignore the rule that prevents you from improving Wikipedia. WhatamIdoing (talk) 21:34, 16 November 2024 (UTC)[reply]
  • While there's already precedent for preemptive protection at e.g. RFPP, I do not like this. For one, because talk pages (and, by extension, edit requests) cannot use the visual editor, this makes it much harder for newcomers to contribute edits, often unnecessarily on articles where there is no disruption. Aaron Liu (talk) 23:47, 5 November 2024 (UTC)[reply]
  • Oppose (Summoned by bot): Too strict. C F A 💬 00:03, 6 November 2024 (UTC)[reply]
  • Mu - This is basically my reading of the 500/30 rule as writ. Anything that would fall into the 500/30'd topic should be XCP'd on discovery. It's worth noting I don't view this as anywhere close to ideal but then neither did ArbCom, and given the circumstances of the real-world ethnopolitical conflict only escalating as of late (which in turn feeds the disruption) the only other - even worse - option would be full-protection across the board everywhere in the area. So why am I not arguing Support? Because just like the question above, this is putting the cart before the horse and this is better off being discussed after this RfC ends, not same time as. —Jéské Couriano v^_^v threads critiques 02:47, 6 November 2024 (UTC)[reply]
  • Oppose Preemptive protection of any page where there is not a problem that needs solving. Just Step Sideways from this world ..... today 21:28, 6 November 2024 (UTC)[reply]
  • Absolutely not, pages that do not experience disruption should be open to edit. Pending changes should never become widely used to avoid situations like dewiki's utterly absurd 53-day backlog. —Kusma (talk) 21:53, 6 November 2024 (UTC)[reply]
  • Very strong oppose, again Kusma puts it excellently. Protection should always be the exception, not the norm. Even in the Israel-Palestine topic area most articles do not experience disruption. Thryduulf (talk) 13:50, 7 November 2024 (UTC)[reply]
    WP:RUNAWAY sums up some of the tactics used by disruptive editors: namely, "Their edits are limited to a small number of pages that very few people watch" and "Conversely, their edits may be distributed over a wide range of articles to make it less likely that any given user watches a sufficient number of affected articles to notice the disruptions." If a user is really insistent on pushing their agenda, they might not be able to push it on the big pages, but they may push it on some of the smaller pages, where their edits may go unnoticed for months if not years. Then, researchers digging up information will come across the POV article and blindly cite it. Although Wikipedia should never be cited as a source, it still happens. Awesome Aasim 14:35, 11 November 2024 (UTC)[reply]
  • Per my comment above. jp×g🗯️ 18:18, 7 November 2024 (UTC)[reply]
  • No, see my comment to the other questions. Lectonar (talk) 08:52, 8 November 2024 (UTC)[reply]
  • No, we should never be preemptively protecting pages. Cremastra (uc) 16:35, 10 November 2024 (UTC)[reply]
  • No, except on the most prominent articles on each CT topic (probably already done on current CTs, but relevant for new ones). Animal lover |666| 19:47, 11 November 2024 (UTC)[reply]
  • Absolutely not. See above comments for details. ~~ AirshipJungleman29 (talk) 20:50, 14 November 2024 (UTC)[reply]
  • Comment - The number of revisions within the PIA topic area that violate the ARBECR rule is not measured. It is not currently possible to say anything meaningful about the amount of 'disruption' in the topic area by non-EC IPs and accounts. And the way people estimate the amount of 'disruption' subjectively depends on the timescale they choose to measure it. Nobody can see all of the revisions and the number of people looking is small. Since the ARBECR rule was introduced around the start of 2020, there have been over 71,000 revisions by IPs to articles and talk pages within the subset of the PIA topic, about 11,000 pages, used to gather statistical data (ARBPIA templated articles and articles that are members of both wikiproject Israel and wikiproject Palestine). Nobody has any idea how many of those were constructive, how many were disruptive, how many involved ban-evading disposable accounts etc. And yet, this incomplete information situation apparently has little to no impact on the credence we all assign to our views about what would work best for the PIA topic area. I personally think it is better to dispense with non-evidence-based beliefs about the state of the topic area at any given time and simply let the servers enforce the rule as written in WP:ARBECR, "only extended-confirmed editors may make edits related to the topic area, subject to the following provisions...". Sean.hoyland (talk) 17:22, 16 November 2024 (UTC)[reply]
    Makes sense, but I am not sure if this is meant to be an oppose. Personally, since there hasn't been much outrage that wasn't solved by a simple RfPP, I anecdotally see no problem with the status quo on this question. Aaron Liu (talk) 01:24, 17 November 2024 (UTC)[reply]
  • Oppose per Thryduulf and others Andre🚐 01:29, 17 November 2024 (UTC)[reply]
  • Oppose. Preemptive protection is just irresponsible.—Alalch E. 23:22, 22 November 2024 (UTC)[reply]
  • As OP I am actually starting to lean weak oppose unless we have a robust and new-user-friendly edit request system (which currently we don't). We already preemptively protect templates used on a lot of pages for technical reasons, and I don't think new users are at all going to be interested in templates, so our current edit request system works decently for templates, modules, code pages, etc. When we choose to protect, it should be on the same basis as blocking: the risk of disruption to specific pages or topic areas, using previous disruption to help predict the future. Users already have a hard time submitting edit requests for pages not within contentious topic areas, so as it stands right now preemptive protection will do more harm than good. Awesome Aasim 19:10, 23 November 2024 (UTC)[reply]
  • Oppose - more harm than good, too strict. Bluethricecreamman (talk) 02:30, 2 December 2024 (UTC)[reply]

Neutral (preemptive ECP)

[edit]

Discussion (preemptive ECP)

[edit]

I think this question should be changed to "...articles under WP:ARBECR topics?". Aaron Liu (talk) 20:11, 5 November 2024 (UTC)[reply]

Okay, updated. Look good? Awesome Aasim 20:13, 5 November 2024 (UTC)[reply]

As I discussed in another comment, should this concept gain approval, I feel it is best for the community to work with the arbitration committee to amend its remedy. isaacl (talk) 15:34, 7 November 2024 (UTC)[reply]

And as I discussed in another comment while I think the community could do this, I agree with isaac that it would be best to do it in a way that works with the committee. Best, Barkeep49 (talk) 16:03, 7 November 2024 (UTC)[reply]
The discussion above is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.

Q4: Should there be a Git-like system for submitting and reviewing edits to protected pages?

[edit]

The following discussion is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.


This behaves a little like pending changes, but with a few different things:

  1. There would be an additional option entitled "allow users to submit edits for review" in the protection window. There could also be a specific user group able to accept such edits.
  2. Instead of the standard "protected page" text informing the user that the page is protected, when this option is enabled, the user is given a message something like "This page is currently protected, so you are currently submitting an edit request. Only when your change is approved will your edit be visible." An edit summary, as well as a more detailed explanation to accompany the review, can be provided. The same applies to title-blacklisted pages. However, the "permission error" will still show when attempting to rename the page, as well as in cases where a user cannot edit a page for a reason other than protection (like being blocked from editing).
  3. All the changes submitted for review end up in some namespace (like Review:1234567) with the change id. Only users with the ability to edit the page or accept the revision would be able to see these changes. There would also be the ability to discuss each change on the talk page for that change or something similar. This namespace by design will be unprotectable.
  4. Users with the ability to edit the page (or when a higher accept level is selected, users with that accept level) are given the ability to merge these changes in. Administrators can delete changes just like they can delete individual revisions, and these changes can also be suppressed just like individual revisions.
  5. Changes are not directly committed to the edit history, unlike the current pending changes system; only to the page in the Review: namespace.

This would be a major improvement over our edit request system, which ONLY allows a user to write out what they want changed, and which is often prone to requests that do not follow WP:CHANGEXY. If there are merge conflicts preventing a clean merge, then the person who submitted the edit or the reviewer will have to fix them manually before it merges cleanly. If this path is chosen, we can safely retire pending changes. Awesome Aasim 18:52, 23 November 2024 (UTC)[reply]
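To make the "merge" step above more concrete, here is a rough sketch of what accepting a submitted change might look like if it were built on the existing action API (action=query plus action=edit) from a gadget. The Review: namespace, page naming, and edit summary are assumptions taken from the illustration above, not an existing feature; a production version would be a server-side extension rather than client-side JavaScript.

  // Illustrative only: accept a change sitting in a hypothetical Review: namespace
  // by copying its wikitext onto the protected target page as a single edit.
  const api = new mw.Api();

  async function acceptReviewChange( changeId, targetTitle ) {
      // Fetch the wikitext that the non-EC user submitted for review.
      const data = await api.get( {
          action: 'query',
          prop: 'revisions',
          titles: 'Review:' + changeId, // hypothetical namespace from the proposal
          rvprop: 'content',
          rvslots: 'main',
          formatversion: 2
      } );
      const proposed = data.query.pages[ 0 ].revisions[ 0 ].slots.main.content;

      // Publish it to the protected target page, crediting the review page.
      return api.postWithEditToken( {
          action: 'edit',
          title: targetTitle,
          text: proposed,
          summary: 'Accepting proposed change [[Review:' + changeId + ']]'
      } );
  }

An actual implementation would also need server-side support for creating, listing, and suppressing the Review: pages, which is why this would amount to a new extension rather than a gadget.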

Survey (Q4)

[edit]
  • Support failing Q1, as it streamlines the experience for making edit requests, especially for new users. I have had ideas for scripts to make the experience of submitting an edit request a lot easier, but none has really come to fruition. I still don't entirely agree with the arguments on Q2 and Q3, but I am starting to agree that that is putting the pen before the pig and thus can be closed as premature, unless there is an emerging consensus that pages within a topic area should not be protected merely for being within that particular topic area. Awesome Aasim 18:52, 23 November 2024 (UTC)[reply]
  • Support in theory, but wait to see if this is technically possible to implement. While a clear improvement, it will likely require quite some amount of work (and workshopping) for implementation. While a non-binding poll to gauge community interest is a good thing, having a full RfC to adopt this before coding has even begun is way too premature. Chaotic Enby (talk · contribs) 21:29, 23 November 2024 (UTC)[reply]
  • Too soon to know. Once it is known that it is technically possible and you have mockups of things like interfaces and details of how it would handle a range of common real-world scenarios then we can discuss whether it would make sense to implement it. Thryduulf (talk) 22:52, 23 November 2024 (UTC)[reply]
    The whole premise of this RfC is that this is possible, and, if it is not, that some are willing to make it possible. Awesome Aasim 22:54, 23 November 2024 (UTC)[reply]
    Before proposing something like this, first find out whether it is possible. If it isn't currently possible but could be, work out structures and how it will work, at least broadly. Then find out whether enough people want it that someone spending the time to make it will be worthwhile. You can't just assume that anything you want is technically possible and that if enough other people also want it that developers will make it for you. Some relatively simple, uncontroversial feature requests, with demonstrated demand, have been open tasks awaiting developer attention for over 15 years. Thryduulf (talk) 02:16, 24 November 2024 (UTC)[reply]
    As an actual developer, I think this would be possible in the technical sense, but it is also a sufficiently large project that it won't actually get done unless some WMF team takes the initiative to do it. This would likely amount to writing a new extension, which would have to go through the review queue, whose first step now is "Find at least one WMF team (or staff member on behalf of their team) to agree to offer basic support for the extension for when it's deployed to Wikimedia Production."
    And I have no idea what team would support this. Moderator Tools would be my first guess, but they refused to support Adiutor even when it was actually coded up and ready to go and is much simpler, so they definitely won't.
    I personally think this requirement is unnecessary (and hypocritical), and the WMF needs to stop stifling volunteers' creativity, but there's nothing I can do about it now.
    And all of this is despite the fact that I think there's actually some merit to the idea. * Pppery * it has begun... 04:17, 24 November 2024 (UTC)[reply]
  • Provisionally support - there is the problem that this requires implementation, so a support !vote has to wait until someone comes along who has the skills needed and is sufficiently enthusiastic about the proposal to get it done. This barrier aside, I do think that this is a good idea. It is more likely to attract attention if the underlying proposal is approved. Perhaps the underlying proposal could be added as an alternate to page protection for use by Arbcom. — Charles Stewart (talk) 05:19, 28 November 2024 (UTC)[reply]
  • Support - I think this would be a better way to replace the edit request system, by having many potential merges instead of a single pending-changes version. If a flame warrior wants to make their own version of an article, there is no need to worry about the pending-changes version being polluted and edit-warred over; let the isolated proposed branch exist for that one user. Bluethricecreamman (talk) 02:33, 2 December 2024 (UTC)[reply]

Discussion (Q4)

[edit]

If additional proposals come (seems unlikely), I wonder if this might be better split as a "pending changes review" or something similar. Awesome Aasim 18:52, 23 November 2024 (UTC)[reply]

I really think this should be straight-up implemented in some form first instead of being asked in an RfC. Aaron Liu (talk) 19:32, 23 November 2024 (UTC)[reply]

First, please stop calling this a git-like system. The real essence of version control systems is branching history. Plus one of the key principles for git is to enable developers to keep the branching history as simple as possible, with changes merged cleanly into an integration branch, so proposed changes never show up in the history of the integration branch.

I would prefer keeping the article history clear of any edit requests. There could be a tool that would clone an article (or designated sections) to a user subpage, preserving attribution in the edit summary. The user could make their changes on that page, and then a tool could assist them in creating an edit request. Whoever processes the request will be able to review the diff on the subpage. If the current version of the article has changed significantly, they can ask the requester to rebase the page to the current version and redo their change. I think this approach simplifies both creating and reviewing a proposed change, and helps spread the workload of integrating changes when they pile up. isaacl (talk) 22:44, 23 November 2024 (UTC)[reply]
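For illustration, a minimal sketch of the "clone to a user subpage" step described above, using only the existing action API from a gadget. The subpage naming scheme and the summary wording are assumptions for this sketch, not part of any existing tool.

  // Copy the current article text to a user subpage so the requester can edit it
  // there; attribution is preserved by linking the source revision in the summary.
  const api = new mw.Api();

  async function cloneForEditRequest( articleTitle ) {
      const data = await api.get( {
          action: 'query',
          prop: 'revisions',
          titles: articleTitle,
          rvprop: 'content|ids',
          rvslots: 'main',
          formatversion: 2
      } );
      const rev = data.query.pages[ 0 ].revisions[ 0 ];
      const subpage = 'User:' + mw.config.get( 'wgUserName' ) +
          '/Edit requests/' + articleTitle;

      await api.postWithEditToken( {
          action: 'edit',
          title: subpage,
          text: rev.slots.main.content,
          summary: 'Copying [[Special:Permalink/' + rev.revid + '|' + articleTitle +
              ']] to prepare an edit request'
      } );
      return subpage;
  }

A second helper could then diff the subpage against the live article and post the result as an edit request on the talk page, which keeps the article history untouched until a reviewer chooses to apply the change.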

It won't, if the change is not merged. The point of this is that the edit history remains clear up until the edit is approved. We can do some "squashing" as well as limit edits under review to the original creator. A commit on GitHub or GitLab does not show up on main until it is merged. It is already possible to merge two pages' histories right now; this is done after cut-and-paste moves. This just takes it to a different level. Awesome Aasim 22:53, 23 November 2024 (UTC)[reply]
History merge isn't really the same thing, in that you can't interlace changes in the version history, but only have a "clean" merge when the two have disjoint timespans. If multiple versions of the same page are edited simultaneously before being merged, even assuming no conflicts in merging, the current histmerge system will not be able to handle it properly. Chaotic Enby (talk · contribs) 22:58, 23 November 2024 (UTC)[reply]
If it doesn't show up in the article history, then it isn't like pending changes at all, so I suggest your summary should be updated accordingly. In which case, under the hood your proposal is similar to mine; I suggest having subpages under the user page would be easier for the user to manage. Squashing shouldn't be done with the history of public branches (commits should remain fixed once they've been made known to everyone) plus rewriting history can be confusing, so I think the change history should be preserved on the working page. If you mean that the submission into the article should be one edit, sure.
My proposal was to layer on tools to assist with creating edit requests, while yours seeks to integrate the system with the edit function when a user is prevented from editing due to page protection. Thus from an implementation perspective, my proposal can be implemented independently of the rest of the MediaWiki code base (and could be done with gadgets), while yours would require changes to the MediaWiki code. Better integration of course offers a more cohesive user experience, but faces greater implementation and integration challenges. I suggest reaching out to the WMF development team to find a contact to discuss your ideas. isaacl (talk) 23:13, 23 November 2024 (UTC)[reply]
I agree that for now we should have JS tools, although that itself has challenges. A modification to MediaWiki core will also have challenges, but it might be worth it in the long run, as core gets regular feature updates while extensions do not always. Awesome Aasim 01:31, 24 November 2024 (UTC)[reply]
Okay, I took a stab at making the experience of making an edit request a bit more new-user friendly: User:Awesome Aasim/editrequestor.js.
I did notice someone else created a similar script, but it behaves quite differently. Mine relies largely on the MediaWiki compare API to build a result. Unfortunately it uses deprecated libraries and will definitely need rewriting, but I think it is a good first prototype.
If something similar were loaded for every edit request with withJS, I wonder how this would change the views of users who expressed opposition. Awesome Aasim 02:35, 30 November 2024 (UTC)[reply]
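As a rough illustration of the compare-API approach mentioned above (not the actual code of either script), a call like the following could render a requester's proposed wikitext against the live article before anything is posted. The slot-based totext-main parameter is the current documented form; older MediaWiki versions used a plain totext parameter, so treat the exact parameter names as an assumption to verify against the target wiki's API.

  // Build a preview diff between the live article and the proposed replacement
  // text, without saving anything. Resolves to the diff table as HTML.
  const api = new mw.Api();

  function previewProposedChange( articleTitle, proposedWikitext ) {
      return api.post( {
          action: 'compare',
          fromtitle: articleTitle,           // current live version of the page
          toslots: 'main',
          'totext-main': proposedWikitext,   // the requester's proposed text
          prop: 'diff',
          formatversion: 2
      } ).then( function ( data ) {
          return data.compare.body;          // HTML diff rows to show in the form
      } );
  }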
Not sure which users you're thinking of, as no one in this discussion has so far opposed changes to the edit process so it can feed an edit request system without introducing pending changes into the article history. (I can imagine opposition based on potentially swamping the edit request system, and a lack of capacity to handle requests, but I don't think the discussion is there yet.) Maybe you can create a short video to demonstrate how your prototype functions? It should be a good starting point for discussions with the appropriate WMF developers. isaacl (talk) 20:19, 30 November 2024 (UTC)[reply]
The "similar script" I am referring to is User:NguoiDungKhongDinhDanh/FormattedEditRequest. But it works a bit differently, rather than intercepting "submit an edit request" requests, it adds a link to a portlet.
Here is an MP4 file of my prototype. If this can be converted to a compatible format and uploaded to Wikipedia, that would be nice. Awesome Aasim 20:44, 30 November 2024 (UTC)[reply]
I wasn't wondering about the other script, but thanks for the info. isaacl (talk) 22:23, 30 November 2024 (UTC)[reply]
The discussion above is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.

General discussion

[edit]

Since we're assuming that PCECP is possible and the last two questions definitely deal with policy, I feel like maybe this should go to VPP instead, with the header edited to something like "Extended-confirmed pending changes and preemptive protection in contentious topics" to reflect the slightly-larger-than-advertised scope? Aaron Liu (talk) 23:53, 5 November 2024 (UTC)[reply]

I think policy proposals are also okay here, though I see your point. There is definitely overlap, though. This is both a request for a technical change as well as establishing policy/guidelines around that technical change (or lack thereof). Awesome Aasim 00:26, 6 November 2024 (UTC)[reply]

If this proposal is accepted, my assumption is that we'd bring back the ORANGELOCK which was used for the original incarnation of Pending Changes Level 2. There's a proposed lock already at File:Pending_Changes_Protected_Level_2.svg, though it needs fixes in terms of name (should probably be something like Pending-level-2-protection-shackle.png or Extended-pending-protection-shackle.png), SVG code (the top curve is a bit cut off), and color (should probably be darker but still clearly distinguishable from REDLOCK). pythoncoder (talk | contribs) 21:43, 8 November 2024 (UTC)[reply]

I think light blue is a better color for this. But in any case we will probably need a lock with a checkmark and the letter "E" for extended confirmed. Awesome Aasim 22:22, 8 November 2024 (UTC)[reply]
Light blue seems too similar to the sky-blue currently used for WP:SALT pythoncoder (talk | contribs) 18:04, 1 December 2024 (UTC)[reply]
I would go for either the EC lock just with the icon replaced with a checkmark or what you said but with the same color and a diagonal line down the middle. Aaron Liu (talk) 20:02, 1 December 2024 (UTC)[reply]

Courtesy ping

[edit]

Courtesy ping all from the idea lab that participated in helping formulate this RfC: @Toadspike @Jéské Couriano @Aaron Liu @Mach61 @Cremastra @Anomie @SamuelRiv @Isaacl @WhatamIdoing @Ahecht @Bunnypranav. Awesome Aasim 19:58, 5 November 2024 (UTC)[reply]

Protection?

[edit]

I am actually starting to wonder if "protection" is a bit of a misnomer, because technically pages under pending changes are not really "protected". Yeah the edits are subject to review, but there are no technical measures to prevent a user from editing. It is just like recent changes on many wikis; those hold edits for review until they are approved, but they do not "protect" the entire wiki. Awesome Aasim 23:40, 11 November 2024 (UTC)[reply]

How about “kinder, gentler protection”? To appear in the know, you can use an acronym, such as in “TCPIP is an example of KGP”. — Charles Stewart (talk) 04:57, 28 November 2024 (UTC)[reply]

Move to close

[edit]

The main proposal is basically deadlocked and has been for six days, and the sub-proposals are clearly failing. Seems like we have a result. Just Step Sideways from this world ..... today 23:09, 22 November 2024 (UTC)[reply]

I was about to withdraw Q2 and Q3 for putting the pen before the pig, but I did realize I added a couple more comments, particularly to Q2. I did add a Q4 that might be more actionable and that is about making the experience of submitting edit requests a lot better. I am starting to agree, though, that for Q2 and Q3 everything that needed to be said has been said, so the proposals can be withdrawn.
We do need to consider the experience of the users actually being locked out of this. I understand the opposition to Q3 (and in fact just struck my !vote because of this). But Q2? Look at the disaster that WP:V22RFC, WP:V22RFC2, and WP:V22RFC3 are. These surveys are barely representative of new users, only of experienced editors. We should absolutely be bringing new editors to the table for these discussions. Awesome Aasim 19:13, 23 November 2024 (UTC)[reply]
Please don't pre-close. 4 of the opposers to the main proposal seem to address only Q2 instead of Q1, and I don't see anyone addressing the argument that it's less restrictive than ECP. It's up to the closer to weigh the consensus. Aaron Liu (talk) 19:30, 23 November 2024 (UTC)[reply]
The discussion above is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.

RfC: Should a blackout be organized in protest of the Wikimedia Foundation's actions?

[edit]

RfC: Log the use of the HistMerge tool at both the merge target and merge source

[edit]

Currently, there are open phab tickets proposing that the use of the HistMerge tool be logged at the target article in addition to the source article. Several proposals have been made:

  • Option 1a: When using Special:MergeHistory, a null edit should be placed in both the merge target and merge source's page's histories stating that a history merge took place.
    (phab:T341760: Special:MergeHistory should place a null edit in the page's history describing the merge, authored Jul 13 2023)
  • Option 1b: When using Special:MergeHistory, add a log entry at both the HistMerge target and source articles recording the existence of a history merge.
    (phab:T118132: Merging pages should add a log entry to the destination page, authored Nov 8 2015)
  • Option 2: Do not log the use of the Special:MergeHistory tool at the merge target, maintaining the current status quo.

Should the use of the HistMerge tool be explicitly logged? If so, should the use be logged via an entry in the page history or should it instead be held in a dedicated log? — Red-tailed hawk (nest) 15:51, 20 November 2024 (UTC)[reply]

Survey: Log the use of the HistMerge tool

[edit]
  • Option 1a/b. I am in principle in support of adding this logging functionality, since people don't typically have access to the source article title (where the histmerge is currently logged) when viewing an article in the wild. There have been several times I can think of when I've been going diff hunting or browsing page history and where some explicit note of a histmerge having occurred would have been useful. As for whether this is logged directly in the page history (as is done currently with page protection) or if this is merely in a separate log file, I don't have particularly strong feelings, but I do think that adding functionality to log histmerges at the target article would improve clarity in page histories. — Red-tailed hawk (nest) 15:51, 20 November 2024 (UTC)[reply]
  • Option 1a/b. No strong feelings on which way is best (I'll let the experienced histmergers comment on this), but logging a history merge definitely seems like a useful feature. Chaotic Enby (talk · contribs) 16:02, 20 November 2024 (UTC)[reply]
  • Option 1a/b. Chaotic Enby has said exactly what I would have said (but more concisely) had they not said it first. Thryduulf (talk) 16:23, 20 November 2024 (UTC)[reply]
  • 1b would be most important to me, but 1a would be nice too. But this is really not the place for this sort of discussion, as noted below. Graham87 (talk) 16:28, 20 November 2024 (UTC)[reply]
  • Option 2 History merging done right should be seamless, leaving the page indistinguishable from if the copy-paste move being repaired had never happened. Adding extra annotations everywhere runs counter to that goal. Prefer 1b to 1a if we have to do one of them, as the extra null edits could easily interfere with the history merge being done in more complicated situations. * Pppery * it has begun... 16:49, 20 November 2024 (UTC)[reply]
    Could you expound on why they should be indistinguishable? I don't see how this could harm any utility. A log action at the target page would not show up in the history anyways, and a null edit would have no effect on comparing revisions. Aaron Liu (talk) 17:29, 20 November 2024 (UTC)[reply]
    Why shouldn't it be indistinguishable? Why is it necessary to go out of our way to say even louder that someone did something wrong and it had to be cleaned up? * Pppery * it has begun... 17:45, 20 November 2024 (UTC)[reply]
    All cleanup actions are logged to all the pages they affect. Aaron Liu (talk) 18:32, 20 November 2024 (UTC)[reply]
  • 2 History merges are already logged, so this survey name is somewhat off the mark. As someone who does this work: I do not think these should be displayed at either location. It would cause a lot of noise in history pages that people probably would not fundamentally understand (2 revisions for "please process this" and "remove tag" and a 3rd revision for the suggested log), and it would be "out of order" in that you will have merged a bunch of revisions but none of those revisions would be nearby the entry in the history page itself. I also find protections noisy in this way as well, and when moves end up causing a need for history merging, you end up with doubled move entries in the merged history, which also is confusing. Adding history merges to that case? No thanks. History merges are more like deletions and undeletions, which already do not add displayed content to the history view. Izno (talk) 16:54, 20 November 2024 (UTC)[reply]
    They presently are logged, but only at the source article. Take for example this entry. When I search for the merge target, I get nothing. It's only when I search the merge source that I'm able to get a result, but there isn't a way to know the merge source.
    If I don't know when or if the histmerge took place, and I don't know what article the history was merged from, I'd have to look through the entirety of the merge log manually to figure that out—and that's suboptimal. — Red-tailed hawk (nest) 17:05, 20 November 2024 (UTC)[reply]
    ... Page moves do the same thing, only log the move source. Yet this is not seen as an issue? :)
    But ignoring that, why is it valuable to know this information? What do you gain? And is what you gain actually valuable to your end objective? For example, let's take your "There have been several times I can think of when I've been going diff hunting or browsing page history and where some explicit note of a histmerge having occurred would have been useful." Are the revisions left behind in the page history by both the person requesting and the person performing the histmerge not enough (see {{histmerge}})? There are history merges done that don't have that request format, such as the WikiProject history merge format, but those are almost always ancient revisions, so what are you gaining there? And where they are not ancient revisions, they are trivial kinds of the form "draft x -> page y, I hate that I even had to interact with this history merge it was so trivial (but also these are great because I don't have to spend significant time on them)". Izno (talk) 17:32, 20 November 2024 (UTC)[reply]

    ... Page moves do the same thing, only log the move source. Yet this is not seen as an issue? :)

    I don't think everyone would necessarily agree (see Toadspike's comment below). Chaotic Enby (talk · contribs) 17:42, 20 November 2024 (UTC)[reply]
    Page moves do leave a null edit on the page that describes where the page was moved from and was moved to. And it's easy to work backwards from there to figure out the page move history. The same cannot be said of the Special:MergeHistory tool, which doesn't make it easy to re-construct what the heck went on unless we start diving naïvely through the logs. — Red-tailed hawk (nest) 17:50, 20 November 2024 (UTC)[reply]
    It can be *possible* to find the original history merge source page without looking through the merge log, but the method for doing so is very brittle and extremely hacky. Basically, look for redirects to the page using "What links here", and find the redirect whose first edit has an unusual byte difference. This relies on the redirect being stable and not deleted or retargeted. There is also another way that relies on byte-difference bugs, as described in the above-linked discussion by wbm1058. Both of those are ... particularly awful. Graham87 (talk) 03:48, 21 November 2024 (UTC)[reply]
    In the given example, the history-merge occurred here. Your "log" is the edit summaries. "Created page with '..." is the edit summary left by a normal page creation. But wait, there is page history before the edit that created the page. How did it get there? Hmm, the previous edit summary "Declining submission: v - Submission is improperly sourced (AFCH)" tips you off to look for the same title in draft: namespace. Voila! Anyone looking for help with understanding a particular merge may ask me and I'll probably be able to figure it out for you. – wbm1058 (talk) 05:51, 21 November 2024 (UTC)[reply]
    Here's another example, of a merge within mainspace. The automatic edit summary (created by the MediaWiki software) of this (No difference) diff "Removed redirect to Jordan B. Acker" points you to the page that was merged at that point. Voila. Voila. Voila. – wbm1058 (talk) 13:44, 21 November 2024 (UTC)[reply]
    There are times where those traces aren't left. Aaron Liu (talk) 13:51, 21 November 2024 (UTC)[reply]
    Here's another scenario, this one from WP:WikiProject History Merge. The page history shows an edit adding +5,800 bytes, leaving the page with 5,800 bytes. But the previous edit did not leave a blank page. Some say this is a bug, but it's also a feature. That "bug" is actually your "log" reporting that a hist-merge occurred at that edit. Voila, the log for that page shows a temp delete & undelete setting the page up for a merge. The first item on the log:
    @ 20:14, 16 January 2021 Tbhotch moved page Flag of Yucatán to Flag of the Republic of Yucatán (Correct name)
    clues you in to where to look for the source of the merge. Voila, that single edit which removed −5,633 bytes tells you that previous history was merged off of that page. The log provides the details. – wbm1058 (talk) 16:03, 21 November 2024 (UTC)[reply]
    (phab:T76557: Special:MergeHistory causes incorrect byte change values in history, authored Dec 2 2014) — Preceding unsigned comment added by Wbm1058 (talkcontribs) 18:13, 21 November 2024 (UTC)[reply]
    Again, there are times where the clues are much harder to find, and even in those cases, it'd be much better to have a unified and assured way of finding the source. Aaron Liu (talk) 16:11, 21 November 2024 (UTC)[reply]
    Indeed. This is a prime example of an unintended undocumented feature. Graham87 (talk) 08:50, 22 November 2024 (UTC)[reply]
    Yeah. I don't think that we can permanently rely on that, given that future versions of MediaWiki are not bound in any real way to support that workaround. — Red-tailed hawk (nest) 04:24, 3 December 2024 (UTC)[reply]
  • Support 1b (log only), oppose 1a (null edit). I defer to the experienced histmergers on this, and if they say that adding null edits everywhere would be inconvenient, I believe them. However, I haven't seen any arguments against logging the histmerge at both articles, so I'll support it as a sensible idea. (On a similar note, it bothers me that page moves are only logged at one title, not both.) Toadspike [Talk] 17:10, 20 November 2024 (UTC)[reply]
  • Option 2. The merges are already logged, so there’s no reason to add them to page histories. While it may be useful for habitual editors, it will just confuse occasional editors and readers who are looking for an old revision. Ships & Space(Edits) 18:33, 20 November 2024 (UTC)[reply]
    But only the source page is logged as the "target". IIRC it currently can be a bit hard to find out when history was merged into a page, and by whom, if you don't know the source page and the person performing the merge didn't leave any indication in an edit summary that they merged something. Aaron Liu (talk) 18:40, 20 November 2024 (UTC)[reply]
  • 1B. The present situation of the action being only logged at one page is confusing and unhelpful. But so would be injecting null-edits all over the place.  — SMcCandlish ¢ 😼  01:38, 21 November 2024 (UTC)[reply]
  • Option 2. This exercise is dependent on finding a volunteer MediaWiki developer willing to work on this. Good luck with that. Maybe you'll find one a decade from now. – wbm1058 (talk) 05:51, 21 November 2024 (UTC)[reply]
    And, more importantly, someone in the MediaWiki group to review it. I suspect there are many people, possibly including myself, who would code this if they didn't think they were wasting their time shuffling things from one queue to another. * Pppery * it has begun... 06:03, 21 November 2024 (UTC)[reply]
    That link requires a Gerrit login/developer account to view. It was a struggle to get in to mine (I only have one because of an old Toolforge account and I'd basically forgotten about it), but for those who don't want to go through all that, that group has only 82 members (several of whose usernames I recognise) and I imagine they have a lot on their collective plate. There's more information about these groups at Gerrit/Privilege policy on MediaWiki. Graham87 (talk) 15:38, 21 November 2024 (UTC)[reply]
    Sorry, I totally forgot Gerrit behaved in that counterintuitive way and hid public information from logged out users for no reason. The things you miss if Gerrit interactions become something you do pretty much every day. If you want to count the members of the group you also have to follow the chain of included groups - it also includes https://ldap.toolforge.org/group/wmf, https://ldap.toolforge.org/group/ops and the WMDE-MediaWiki group (another login-only link), as well as a few other permission edge cases (almost all of which are redundant because the user is already in the MediaWiki group) * Pppery * it has begun... 18:07, 21 November 2024 (UTC)[reply]
  • Support 1a/b, and I would encourage the closer to disregard any opposition based solely on the chances of someone ever actually implementing it. Compassionate727 (T·C) 12:52, 21 November 2024 (UTC)[reply]
    Fine. This stupid RfC isn't even asking the right questions. Why did I need to delete (an expensive operation) and then restore a page in order to "set up for a history merge"? Should we fix the software so that it doesn't require me to do that? Why did the page-mover resort to cut-paste because there was page history blocking their move, rather than ask an administrator for help? Why doesn't the software just let them move over that junk page history themselves, which would negate the need for a later hist-merge? (Actually, in this case the offending user has made only 46 edits, so they don't have page-mover privileges. But they were able to move a page. They just couldn't move it back a day later after they changed their mind.) wbm1058 (talk) 13:44, 21 November 2024 (UTC)[reply]
    Yeah, revision move would be amazing, for a start. Graham87 (talk) 15:38, 21 November 2024 (UTC)[reply]
  • Option 1b – changes to a page's history should be listed in that page's log. There's no need to make a null edit; pagemove null edits are useful because they meaningfully fit into the page's revision history, which isn't the case here. jlwoodwa (talk) 00:55, 22 November 2024 (UTC)[reply]
  • Option 1b sounds best since that's what those in the know seem to agree on, but 1a would probably be OK. Abzeronow (talk) 03:44, 23 November 2024 (UTC)[reply]
  • Option 1b seems like the one with the best transparency to me. Thanks. Huggums537voted! (sign🖋️|📞talk) 06:59, 25 November 2024 (UTC)[reply]

Discussion: Log the use of the HistMerge tool

[edit]

CheckUser for all new users

[edit]

All new users (IPs and accounts) should be subject to CheckUser against known socks. This would prevent recidivist socks from returning and save the time and energy of users who have to prove a likely case at SPI. Recidivist socks often get better at covering their "tells" each time, making detection increasingly difficult. Users should not have to make the huge effort of establishing an SPI when editing from an IP or creating a new account is so easy. We should not have to endure Wikipedia:Long-term abuse/HarveyCarter, Wikipedia:Sockpuppet investigations/Phạm Văn Rạng/Archive or Wikipedia:Sockpuppet investigations/Orchomen/Archive if CheckUser can prevent them. Mztourist (talk) 04:06, 22 November 2024 (UTC)[reply]

I'm pretty sure that even if we had enough checkuser capacity to routinely run checks on every new user, doing so would be contrary to global policy. Thryduulf (talk) 04:14, 22 November 2024 (UTC)[reply]
Setting aside privacy issues, the fact that the WMF wouldn't let us do it, and a few other things: Checking a single account, without any idea of who you're comparing them to, is not very effective, and the worst LTAs are the ones it would be least effective against. This has been floated several times in the much narrower context of adminship candidates, and rejected each time. It probably belongs on WP:PEREN by now. -- Tamzin[cetacean needed] (they|xe) 04:21, 22 November 2024 (UTC)[reply]
Why can't it be automated? What are the privacy issues and what would WMF concerns be? There has to be a better system than SPI which imposes a huge burden on the filer (and often fails to catch socks) while we just leave the door open for LTAs. Mztourist (talk) 04:39, 22 November 2024 (UTC)[reply]
How would it be automated? We can't just block everyone who even sometimes shares an IP with someone, which is most editors once you factor in mobile editing and institutional WiFi. Even if we had a system that told checkusers about all shared-IP situations and asked them to investigate, what are they investigating for? The vast majority of IP overlaps will be entirely innocent, often people who don't even know each other. There's no way for a checkuser to find any signal in all that noise. So the only way a system like this would work is if checkusers manually identified IP ranges that are being used by LTAs, and then placed blocks on those ranges to restrict them from account creation... Which is what already happens. -- Tamzin[cetacean needed] (they|xe) 04:58, 22 November 2024 (UTC)[reply]
I would assume that IT experts can work out a way to automate CheckUser. If someone edits on a shared IP used by a previous sock that should be flagged and human CheckUsers notified so they can look at the edits and the previous sock edits and warn or block as necessary. Mztourist (talk) 05:46, 22 November 2024 (UTC)[reply]
We already have autoblock. For cases it doesn't catch, there's an additional manual layer of blocking, where if a sock is caught on an IP that's been used before but wasn't caught by autoblock, a checkuser will block the IP if it's technically feasible, sometimes for months or years at a time. Beyond that, I don't think you can imagine just how often "someone edits on a shared IP used by a previous sock". I'm doing that right now, probably, because I'm editing through T-Mobile. Basically anyone who's ever edited in India or Nigeria has been on an IP used by a previous sock. Basically anyone who's used a large institution's WiFi. There is not any way to weed through all that noise with automation. -- Tamzin[cetacean needed] (they|xe) 05:54, 22 November 2024 (UTC)[reply]
Addendum: An actually potentially workable innovation would be something like a system that notifies CUs if an IP is autoblocked more than once in a certain time period. That would be a software proposal for Phabricator, though, not an enwiki policy proposal, and would still have privacy implications that would need to be squared with the WMF. -- Tamzin[cetacean needed] (they|xe) 05:57, 22 November 2024 (UTC)[reply]
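As a rough sketch of the kind of check being described here (the event list is hypothetical; the mapping of autoblocks to underlying IP addresses is private and visible only to the software and to checkusers):

    from collections import defaultdict
    from datetime import timedelta

    def repeat_autoblocks(events, window_days=30, min_hits=2):
        """Return IPs autoblocked at least min_hits times within window_days.
        events is a hypothetical list of (utc_datetime, ip_string) pairs."""
        by_ip = defaultdict(list)
        for ts, ip in events:
            by_ip[ip].append(ts)
        window = timedelta(days=window_days)
        flagged = {}
        for ip, times in by_ip.items():
            times.sort()
            for i, start in enumerate(times):
                hits = [t for t in times[i:] if t <= start + window]
                if len(hits) >= min_hits:
                    flagged[ip] = len(hits)   # e.g. {"198.51.100.7": 3}
                    break
        return flagged  # candidates to surface to checkusers for review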
I believe Tamzin has it about right, but I want to clarify a thing. If you're hypothetically using T-Mobile (and this also applies to many other ISPs and many LTAs) then the odds are very high that you're using an IP address which has never been used before. With T-Mobile, which is not unusually large by any means, you belong to at least one /32 range which contains a number of IP addresses so big that it has 30 digits. These ranges contain a huge number of users. At the other extreme you have some countries with only a handful of IPs, which everyone uses. These IPs also typically contain a huge number of users. TL;DR: if someone is using a single IP on their own then we'll probably just block it; otherwise you're talking about matching a huge number of users. -- zzuuzz (talk) 03:20, 23 November 2024 (UTC)[reply]
As I understand it, if you're hypothetically using T-Mobile, then you're not editing, because someone range-blocked the whole network in pursuit of a vandal(s). See Wikipedia:Advice to T-Mobile IPv6 users. WhatamIdoing (talk) 03:36, 23 November 2024 (UTC)[reply]
T-Mobile USA is a perennial favourite of many of the most despicable LTAs, but that's beside the point. New users with an account can actually edit from T-Mobile. They can also edit from Jio, or Deutsche Telecom, Vodafone, or many other huge networks. -- zzuuzz (talk) 03:50, 23 November 2024 (UTC)[reply]
Would violate the policy WP:NOTFISHING. –Novem Linguae (talk) 04:43, 22 November 2024 (UTC)[reply]
It would apply to every new User as a protective measure against sockpuppetry, like a credit check before you get a card/overdraft. WP:NOTFISHING is archaic like the whole burdensome SPI system that forces honest users to do all the hard work of proving sockpuppetry while socks and vandals just keep being welcomed in under WP:AGF. Mztourist (talk) 05:46, 22 November 2024 (UTC)[reply]
What you're suggesting is to just inundate checkusers with thousands of cases. The suggestion (as I understand it) removes burden from SPI filers by adding a disproportionate burden on checkusers, who are already an overworked group. If you're suggesting an automated solution, then I believe IP blocks/IP range blocks and autoblock (discussed by Tamzin, above) already cover enough. It's quite hard to weigh up what you're really suggesting because it feels very vague without much detail - it sounds like you're just saying "a new SPI should be opened for every new user and IP, forever" which is not really a workable solution (for instance, 50 accounts were made in the last 15 minutes, which is about one every 18 seconds) BugGhost🦗👻 18:12, 22 November 2024 (UTC)[reply]
And most of those accounts will make zero, one, or two edits, and then never be used again. Even if we liked this idea, doing it for every single account creation would be a waste of resources. WhatamIdoing (talk) 23:43, 22 November 2024 (UTC)[reply]
No, they should not. voorts (talk/contributions) 17:23, 22 November 2024 (UTC)[reply]
This, very bluntly, flies in the face of WMF policy with regards to use/protection of PII, and as noted by Tamzin this would result in frankly obscene amounts of collateral damage. You have absolutely no idea how frequently IP addresses get passed around (especially in the developing world or on T Mobile), such that it could feasibly have three different, unrelated, people on it over the course of a day or so. —Jéské Couriano v^_^v threads critiques 18:59, 22 November 2024 (UTC)[reply]
 Just out of curiosity: If a certain case of IPs spamming at Help Desk is any indication, would a CU be able to stop that in its track? 2601AC47 (talk|contribs) Isn't a IP anon 14:29, 23 November 2024 (UTC)[reply]
CUs use their tools to identify socks when technical proof is necessary. The problem you're linking to is caused by one particular LTA account who is extremely obvious and doesn't really require technical proof to identify - checkusers would just be able to provide evidence for something that is already easy to spot. There's an essay on the distinction over at WP:DUCK BugGhost🦗👻 14:45, 23 November 2024 (UTC)[reply]
@2601AC47: No, and that is because the user in question's MO is to abuse VPNs. Checkuser is worthless in this case because of that (but the IPs can and should be blocked for 1yr as VPNs). —Jéské Couriano v^_^v threads critiques 19:35, 26 November 2024 (UTC)[reply]
LTA MAB is using a peer-to-peer VPN service which is similar to TOR. Blocking peer-to-peer VPN service endpoint IP addresses carries a higher risk of collateral damage because those aren't assigned to the VPN provider but rather a third party ISP who is likely to dynamically reassign the blocked address to a completely innocent party. 216.126.35.235 (talk) 00:22, 27 November 2024 (UTC)[reply]
I slightly oppose this idea. This is not Reddit where socks are immediately banned or shadowbanned outright. Reddit doesn't have WP:DUCK as any wiki does. Ahri Boy (talk) 00:14, 25 November 2024 (UTC)[reply]
How do you know this is how Reddit deals with ban and suspension evasion? They use advanced techniques such as device and IP fingerprinting to ban and suspend users in under an hour. 2600:1700:69F1:1410:5D40:53D:B27E:D147 (talk) 23:47, 28 November 2024 (UTC)[reply]
I can see where this is coming from, but we must realise that checkuser is not magic pixie dust nor is it meant for fishing. - Ratnahastin (talk) 04:49, 27 November 2024 (UTC)[reply]
The question I ask myself is why must we realize that it is not meant for fishing? To catch fish, you need to fish. The no-fishing rule is not fit for purpose, nor is it a rule that other organizations that actively search for ban evasion use. Machines can do the fishing. They only need to show us the fish they caught. Sean.hoyland (talk) 05:24, 27 November 2024 (UTC)[reply]
I think for the same reason we don't want governments to be reading our mail and emails. If we checkuser everybody, then nobody has any privacy. Donald Albury 20:20, 27 November 2024 (UTC)[reply]

I sympathize with Mztourist. The current system is less effective than it needs to be. Ban evading actors make a lot of edits, they are dedicated hard-working folk in contentious topic areas. They can make up nearly 10% of new extendedconfirmed actors some years and the quicker an actor becomes EC the more likely they are to be blocked later for ban evasion. Their presence splits the community into two classes, the sanctionable and the unsanctionable with completely different payoff matrices. This has many consequences in contentious topic areas and significantly impacts the dynamics. The current rules are probably not good rules. Other systems have things like a 'commitment to authenticity' and actively search for ban evasion. It's tempting to burn it all down and start again, but with what? Having said that, the SPI folks do a great job. The average time from being granted extendedconfirmed to being blocked for ban evasion seems to be going down. Sean.hoyland (talk) 18:28, 22 November 2024 (UTC)[reply]

I confess that I am doubtful about that 10% claim. WhatamIdoing (talk) 23:43, 22 November 2024 (UTC)[reply]
WhatamIdoing, me too. I'm doubtful about everything I say because I've noticed that the chance it is slightly to hugely wrong is quite high. The EC numbers are work in progress, but I got distracted. The description "nearly 10% of new extendedconfirmed actors" is a bit misleading, because 'new' doesn't really mean new actors. It means actors that acquired EC for a given year, so newly acquired privileges. They might have registered in previous years. Also, I don't have 100% confidence in the way I count EC grants because there are some edge cases, and I'm ignoring sysops. But anyway, the statement was based on this data of questionable precision. And the statement about a potential relationship between speed of EC acquisition and probability of being blocked is based on this data of questionable precision. And of course, currently undetected socks are not included, and there will be many. Sean.hoyland (talk) 03:39, 23 November 2024 (UTC)[reply]
I'm not interested in clicking through to a Google file. Here's my back-of-the-envelope calculation: We have something like 120K accounts that would qualify for EXTCONF. Most of these are no longer active, and many stopped editing so long ago that they don't actually have the user right.
Wikipedia is almost 24 years old. That makes convenient math: On average, since inception, 5K editors have achieved EXTCONF levels each year.
If the 10% estimate is true, then 500 accounts per year – about 10 per week – are being created by banned editors and going undetected long enough for the accounts to make 500 edits and to work in CTOP areas. Do we even have enough WP:BANNED editors to make it plausible to expect banned editors to bring 500 accounts a year up to EXTCONF levels (plus however many accounts get started but are detected before then)? WhatamIdoing (talk) 03:53, 23 November 2024 (UTC)[reply]
Suit yourself. I'm not interested in what interests other people or back of the envelope calculations. I'm interested in understanding the state of a system over time using evidence-based approaches by extracting data from the system itself. Let the data speak for itself. It has a lot to tell us. Then it is possible to test hypotheses and make evidence-based decisions. Sean.hoyland (talk) 04:13, 23 November 2024 (UTC)[reply]
@WhatamIdoing, there's a sockmaster in the IPA CTOP who has made more than 100 socks. 500 new XC socks every year doesn't seem that much of a stretch in comparison. -- asilvering (talk) 19:12, 23 November 2024 (UTC)[reply]
More than 100 XC socks? Or more than 100 detected socks, including socks with zero edits?
Making a lot of accounts isn't super unusual, but it's a lot of work to get 100 accounts up to 500+ edits. Making 50,000 edits is a lot, even if it's your full-time job. WhatamIdoing (talk) 01:59, 24 November 2024 (UTC)[reply]
Lots of users get it done in a couple of days, often through vandal fighting tools. It really is not that many when the edits are mostly mindless. nableezy - 00:18, 26 November 2024 (UTC)[reply]
But that's kind of my point: "A couple of days", times 100 accounts, means 200–300 days per year. If you work five days per week and 52 weeks per year, that's 260 work days. This might be possible, but it's a full-time job.
Since the 30-day limit is something that can't be achieved through effort, I wonder if a sudden change to, say, 6 months would produce a five-month reprieve. WhatamIdoing (talk) 02:23, 26 November 2024 (UTC)[reply]
Who says it’s only one at a time? Icewhiz for example has had 4 plus accounts active at a time. nableezy - 02:25, 26 November 2024 (UTC)[reply]
There is some data about ban evasion timelines for some sockmasters in PIA that show how accounts are operated in parallel. Operating multiple accounts concurrently seems to be the norm. Sean.hoyland (talk) 04:31, 26 November 2024 (UTC)[reply]
Imagine that it takes an average of one minute to make a (convincing) edit. That means that 500 edits = 8.33 hours, i.e., more than one full work day.
Imagine, too, that having reached this point, you actually need to spend some time using your newly EXTCONF account. This, too, takes time.
If you operate several accounts at once, that means:
You spend an hour editing from Account1. You spend the next hour editing from Account2. You spend another hour editing from Account3. You spend your fourth hour editing from Account4. Then you take a break for lunch, and come back to edit from Accounts 5 through 8.
At the end of the day, you have brought 8 accounts up to 60 edits (12% of the minimum goal). And maybe one of them got blocked, too, which is lost effort. At this rate, it would take you an entire year of full-time work to get 100 EXTCONF accounts, even though you are operating multiple accounts concurrently. Doing 50 edits per day in 10 accounts is not faster than doing 500 edits in 1 account. It's the same amount of work. WhatamIdoing (talk) 05:13, 29 November 2024 (UTC)[reply]
Sure it’s an effort, though it doesn’t take a minute an edit. But I’m not sure why I need to imagine something that has happened multiple times already. Icewhiz most recently had like 4-5 EC accounts active, and there are probably several more. Yes, there is an effort there. But also yes, it keeps happening. nableezy - 15:00, 29 November 2024 (UTC)[reply]
My point is that "4-5 EC accounts" is not "100". WhatamIdoing (talk) 19:31, 30 November 2024 (UTC)[reply]
It’s 4-5 at a time for a single sock master. Check the Icewhiz SPI for how many that adds up to over time. nableezy - 20:16, 30 November 2024 (UTC)[reply]
Many of our frequent fliers are already adept at warehousing accounts for months or even years, so a bump in the time period probably won't make much of a difference. Additionally, and without going into detail publicly, there are several methods whereby semi- or even fully-automated editing can be used to get to 500 edits with a minimum of effort, or at least well within script-kid territory. Because so many of those are obvious on inspection some will assume that all of them are, but there are a number of rather subtle cases that have come up over the years and it would be foolish to assume that it isn't ongoing. 184.152.68.190 (talk) 17:31, 28 November 2024 (UTC)[reply]

Also, if we divide the space into contentious vs not-contentious, maybe a one size fits all CU policy doesn't make sense. Sean.hoyland (talk) 18:55, 22 November 2024 (UTC)[reply]

Terrible idea. Let's AGF that most new users are here to improve Wikipedia instead of damage it. Some1 (talk) 18:33, 22 November 2024 (UTC)[reply]

Ban evading actors who employ deception via sockpuppetry in the WP:PIA topic area are here to improve Wikipedia, from their perspective, rather than damage it. There is no need to use faith. There are statistics. There is a probability that a 'new user' is employing ban evasion. Sean.hoyland (talk) 18:46, 22 November 2024 (UTC)[reply]
My initial comment wasn't a direct response to yours, but new users and IPs won't be able to edit in the WP:PIA topic area anyway since they need to be extended confirmed. Some1 (talk) 20:08, 22 November 2024 (UTC)[reply]
Let's not hold up the way PIA handles new users and IPs, in which they are allowed to post to talk pages but then have their talk page post removed if it doesn't fall within very specific parameters, as some sort of model. CMD (talk) 02:51, 23 November 2024 (UTC)[reply]

Strongly support automatically checkusering all active users (new and existing) at regular intervals. If it were automated -- e.g., a script runs that compares IPs, user agent, other typical subscriber info -- there would be no privacy violation, because that information doesn't have to be disclosed to any human beings. Only the "hits" can be forwarded to the CU team for follow-up. I'd run that script daily. If the policy forbids it, we should change the policy to allow it. It's mind-boggling that Wikipedia doesn't do this already. It's a basic security precaution. (Also, email-required registration and get rid of IP editing.) Levivich (talk) 02:39, 23 November 2024 (UTC)[reply]

I don't think you've been reading the comments from people who know what they are talking about. There would be hundreds, at least, of hits per day that would require human checking. The policy that prohibits this sort of massive breach of privacy is the Foundation's and so not one that en.wp could change even if it were a good idea (which it isn't). Thryduulf (talk) 03:10, 23 November 2024 (UTC)[reply]
A computer can be programmed to check for similarities or patterns in subscriber info (IP, etc.), in editing activity (time cards, etc.), and in the content of edits and talk page posts (like the existing language similarity tool), with various degrees of certainty, in the same way ClueBot does with ORES when it's reverting vandalism. And the threshold can be set so it only forwards matches of a certain certainty to human CUs for review, so as not to overwhelm the humans. The WMF can make this happen with just $1 million of its $180 million per year (and it wouldn't be violating its own policies if it did so). Enwiki could ask for it, other projects might join too. Levivich (talk) 05:24, 23 November 2024 (UTC)[reply]
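A toy sketch of that thresholding idea, in Python; the features, weights and cut-off are invented for illustration and do not describe any actual WMF system:

    def similarity(a, b):
        # a and b are dicts holding a set of pages edited and a normalised
        # 24-bin "timecard" of editing activity; both are toy features.
        pages = len(a["pages"] & b["pages"]) / max(len(a["pages"] | b["pages"]), 1)
        timecard = sum(min(x, y) for x, y in zip(a["timecard"], b["timecard"]))
        return 0.6 * pages + 0.4 * timecard

    def forward_to_reviewers(accounts, threshold=0.9):
        """Score every pair, but only pairs above the threshold are ever
        shown to a human reviewer; everything else is discarded unseen."""
        names = sorted(accounts)
        hits = []
        for i, x in enumerate(names):
            for y in names[i + 1:]:
                score = similarity(accounts[x], accounts[y])
                if score >= threshold:
                    hits.append((x, y, round(score, 3)))
        return hits

Raising the threshold trades missed socks for fewer innocent pairs ever being looked at by a human, which is the trade-off being argued over here.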
"Oh now I see what you mean, Levivich, good point, I guess you know what you're talking about, after all."
"Thanks, Thryduulf!" Levivich (talk) 17:42, 23 November 2024 (UTC)[reply]
I seem to have missed this comment, sorry. However I am very sceptical that sockpuppet detection is meaningfully automatable. From what CUs say it is as much art as science (which is why SPI cases can result in determinations like "possilikely"). This is the sort of thing that is difficult (at best) to automate. Additionally the only way to reliably develop such automation would be for humans to analyse and process a massive amount of data from accounts that both are and are not sockpuppets and classify results as one or the other, and that analysis would be a massive privacy violation on its own. Assuming you have developed this magic computer that can assign a likelihood of any editor being a sock of someone who has edited in the last three months (data older than that is deleted) on a percentage scale, you then have to decide what level is appropriate to send to humans to check. Say for the sake of argument it is 75%, that means roughly one in four people being accused are innocent and are having their privacy impinged unnecessarily - and how many CUs are needed to deal with this caseload? Do we have enough? SPI isn't exactly backlog free and there aren't hordes of people volunteering for the role (although unbreaking RFA might help with this in the medium to long term). The more you reduce the number sent to CUs to investigate, the less benefit there is over the status quo.
In addition to all the above, how similar is "similar" in terms of articles edited, writing style, timecard, etc? How are you avoiding legitimate sockpuppets? Thryduulf (talk) 18:44, 23 November 2024 (UTC)[reply]
You know this already but for anyone reading this who doesn't: when a CU "checks" somebody, it's not like they send a signal out to that person's computer to go sniffing around. In fact, all the subscriber info (IP address, etc.) is already logged on the WMF's server logs (as with any website). A CU "check" just means a volunteer CU gets to look at a portion of those logs (to look up a particular account's subscriber info). That's the privacy concern: we have rules, rightfully so, about when volunteer CUs (not WMF staff) can read the server logs (or portions of them). Those rules do not apply to WMF staff, like devs and maintenance personnel, nor do they apply to the WMF's own software reading its own logs. Privacy is only an issue when those logs are revealed to volunteer CUs.
So... feeding the logs into software in order to train the software doesn't violate anyone's policy. It's just letting a computer read its own files. Human verification of the training outcomes also doesn't have to violate anyone's privacy -- just don't use volunteer CUs to do it, use WMF staff. Or, anonymize the training data (changing usernames to "Example1", "Example2", etc.). Or use historical data -- which would certainly be part of the training, since the most effective way would be to put known socks into the training data to see if the computer catches them.
Anyway, training the system won't violate anyone's privacy.
As for the hit rate -- 75% would be way, way too low. We'd be looking for definitely over 90% or 95%, and probably more like 99.something percent. Cluebot doesn't get vandalism wrong 1 out of 4 times, neither should CluebotCU. Heck, if CluebotCU can't do better than 75%, it's not worth doing. A more interesting question is whether the 99.something% hit rate would be helpful to CUs, or whether that would only catch the socks that are so obvious you don't even need CU to recognize them. Only testing in the field would tell.
But overall, AI looking for patterns, and checking subscriber info, edit patterns, and the content of edits, would be very helpful in tamping down on socking, because the computer can make far more checks than a human (a computer can look at 1,000 accounts and a 100,000 edits no problem, which no human can do), it'll be less biased than humans, and it can do it all without violating anyone's privacy -- in fact, lowering the privacy violations by lowering the false positives, sending only high-probability (90%+, not 75%+) to humans for review. And it can all be done with existing technology, and the WMF has the money to do it. Levivich (talk) 19:38, 23 November 2024 (UTC)[reply]
The more you write the clearer you make it that you don't understand checkuser or the WMF's policies regarding privacy. It's also clear that I'm not going to convince you that this is unworkable so I'll stop trying. Thryduulf (talk) 20:42, 23 November 2024 (UTC)[reply]
Yeah it's weird how repeatedly insulting me hasn't convinced me yet. Levivich (talk) 20:57, 23 November 2024 (UTC)[reply]
If you are unable to distinguish between reasoned disagreement and insults, then it's not at all weird that reasoned disagreement fails to convince you. Thryduulf (talk) 22:44, 23 November 2024 (UTC)[reply]
@Levivich: Whatever existing data set we have has too many biases to be useful for this, and this is going to be prone to false positives. AI needs lots of data to be meaningfully trained. Also, AI here would be learning a function; when the output is not in fact a function of the input, there's nothing for an AI model to target, and this is very much the case here. On Wikidata, where I am a CheckUser, almost all edit summaries are automated even for human edits (just like clicking the rollback button is, or undoing an edit is by default), and it is very hard to meaningfully tell whether someone is a sock or not without highly case-specific analysis. No AI model is better than the data it's trained on.
Also, about the privacy policy: you are completely incorrect when you say "Those rules do not apply to WMF staff, like devs and maintenance personnel, nor do they apply to the WMF's own software reading its own logs". Staff can only access that information on a need-to-know basis, just like CheckUsers, and data privacy laws like the EU's and California's mean you cannot just do whatever random thing you want with the information you collect from users about them.--Jasper Deng (talk) 21:56, 23 November 2024 (UTC)[reply]
So which part of the wmf:Privacy Policy would prohibit the WMF from developing an AI that looks at server logs to find socks? Do you want me to quote to you the portions that explicitly disclose that the WMF uses personal information to develop tools and improve security? Levivich (talk) 22:02, 23 November 2024 (UTC)[reply]
I mean yeah that would probably be more productive than snarky bickering BugGhost🦗👻 22:05, 23 November 2024 (UTC)[reply]
@Levivich: Did you read the part where I mentioned privacy laws? Also, in this industry no one is allowed unfettered usage of private data even internally; there are internal policies that govern this that are broadly similar to the privacy policy. It's one thing to test a proposed tool on an IP address like Special:Contribs/2001:db8::/32, but it's another to train an AI model on it. Arguably an equally big privacy concern is the usage of new data from new users after the model is trained and brought online. The foundation is already hiding IP addresses by default even for anonymous users soon, and they will not undermine that mission through a tool like this. Ultimately, the Board of Trustees has to assume legal responsibility and liability for such a thing; put yourself in their position and think of whether they'd like the liability of something like this.--Jasper Deng (talk) 22:13, 23 November 2024 (UTC)[reply]
So can you quote a part of the privacy policy, or a part of privacy laws, or anything, that would prohibit feeding server logs into a "Cluebot-CU" to find socking?
Because I can quote the part of the wmf:Privacy Policy that allows it, and it's a lot:

We may use your public contributions, either aggregated with the public contributions of others or individually, to create new features or data-related products for you or to learn more about how the Wikimedia Sites are used ...

Because of how browsers work, we receive some information automatically when you visit the Wikimedia Sites ... This information includes the type of device you are using (possibly including unique device identification numbers, for some beta versions of our mobile applications), the type and version of your browser, your browser's language preference, the type and version of your device's operating system, in some cases the name of your internet service provider or mobile carrier, the website that referred you to the Wikimedia Sites, which pages you request and visit, and the date and time of each request you make to the Wikimedia Sites.

Put simply, we use this information to enhance your experience with Wikimedia Sites. For example, we use this information to administer the sites, provide greater security, and fight vandalism; optimize mobile applications, customize content and set language preferences, test features to see what works, and improve performance; understand how users interact with the Wikimedia Sites, track and study use of various features, gain understanding about the demographics of the different Wikimedia Sites, and analyze trends. ...

We actively collect some types of information with a variety of commonly-used technologies. These generally include tracking pixels, JavaScript, and a variety of "locally stored data" technologies, such as cookies and local storage. ... Depending on which technology we use, locally stored data may include text, Personal Information (like your IP address), and information about your use of the Wikimedia Sites (like your username or the time of your visit). ... We use this information to make your experience with the Wikimedia Sites safer and better, to gain a greater understanding of user preferences and their interaction with the Wikimedia Sites, and to generally improve our services. ...

We and our service providers use your information ... to create new features or data-related products for you or to learn more about how the Wikimedia Sites are used ... To fight spam, identity theft, malware and other kinds of abuse. ... To test features to see what works, understand how users interact with the Wikimedia Sites, track and study use of various features, gain understanding about the demographics of the different Wikimedia Sites and analyze trends. ...

When you visit any Wikimedia Site, we automatically receive the IP address of the device (or your proxy server) you are using to access the Internet, which could be used to infer your geographical location. ... We use this location information to make your experience with the Wikimedia Sites safer and better, to gain a greater understanding of user preferences and their interaction with the Wikimedia Sites, and to generally improve our services. For example, we use this information to provide greater security, optimize mobile applications, and learn how to expand and better support Wikimedia communities. ...

We, or particular users with certain administrative rights as described below, need to use and share your Personal Information if it is reasonably believed to be necessary to enforce or investigate potential violations of our Terms of Use, this Privacy Policy, or any Wikimedia Foundation or user community-based policies. ... We may also disclose your Personal Information if we reasonably believe it necessary to detect, prevent, or otherwise assess and address potential spam, malware, fraud, abuse, unlawful activity, and security or technical concerns. ... To facilitate their work, we give some developers limited access to systems that contain your Personal Information, but only as reasonably necessary for them to develop and contribute to the Wikimedia Sites. ...

Yeah that's a lot. Then there's this whole FAQ that says

It is important for us to be able to make sure everyone plays by the same rules, and sometimes that means we need to investigate and share specific users' information to ensure that they are.

For example, user information may be shared when a CheckUser is investigating abuse on a Project, such as suspected use of malicious "sockpuppets" (duplicate accounts), vandalism, harassment of other users, or disruptive behavior. If a user is found to be violating our Terms of Use or other relevant policy, the user's Personal Information may be released to a service provider, carrier, or other third-party entity, for example, to assist in the targeting of IP blocks or to launch a complaint to the relevant Internet Service Provider.

So using IP addresses, etc., to develop new tools, to test features, to fight violations of the Terms of Use, and disclosing that info to Checkusers... all explicitly permitted by the Privacy Policy. Levivich (talk) 22:22, 23 November 2024 (UTC)[reply]
@Levivich: "We, or particular users with certain administrative rights as described below, need to use and share your Personal Information if it is reasonably believed to be necessary to enforce or investigate potential violations of our Terms of Use" – "reasonably believed to be necessary" is not going to hold up in court when it's sweepingly applied to everyone. This doesn't even take into consideration the laws I mentioned, like GDPR. I'm not a lawyer, and I'm guessing neither are you. If you want to be the one assuming the legal liability for this, contact the board today and sign the contract. Even then they would probably not agree to such an arrangement. So you're preaching to the choir: only the foundation could even consider assuming this risk. Also, it's clear that you do not have a single idea of how developing something like this works if you think it can be done for $1 million. Something this complex has to be done right and tech salaries and computing resources are expensive.--Jasper Deng (talk) 22:28, 23 November 2024 (UTC)[reply]
What I am suggesting does not involve sharing everyone's data with Checkusers. It's pretty obvious that looking at their own server logs is "necessary to enforce or investigate potential violations of our Terms of Use". Five people is how big the WMF's wmf:Machine Learning team is, @ $200k each, $1m/year covers it. Five people is enough for that team to improve ORES, so another five-person team dedicated to "ORES-CU" seems a reasonable place to start. They could double that, and still have like $180M left over. Levivich (talk) 22:40, 23 November 2024 (UTC)[reply]
@Levivich: Yeah no, lol. $200k each is not a very competitive total compensation, considering that that needs to include benefits, health insurance, etc. This doesn't include their manager or the hefty hardware required to run ML workflows. It doesn't include the legal support required given the data privacy law compliance needed. Capriciously looking at the logs does not count; accessing data of users the foundation cannot reasonably have said to be likely to cause abuse is not permissible. This all aside from the bias and other data quality issues at hand here. You can delude yourself all you want, but nature cannot be fooled. I'm finished arguing with you anyways, because this proposal is either way dead on arrival.--Jasper Deng (talk) 23:45, 23 November 2024 (UTC)[reply]
@Jasper Deng, haggling over the math here isn't really important. You could quintuple the figures @Levivich gave and the Foundation would still have millions upon millions of dollars left over. -- asilvering (talk) 23:48, 23 November 2024 (UTC)[reply]
@Asilvering: The point I'm making is Levivich does not understand the complexity behind this kind of thing and thus his arguments are not to be given weight by the closer. Jasper Deng (talk) 23:56, 23 November 2024 (UTC)[reply]
Speaking as a statistician/data scientist: @Levivich is correct about the technical side of this—building an ML algorithm to detect sockpuppets would be pretty easy. Duplicate user algorithms like these are common across many websites. For a basic classification task like this (basically an ML 101 homework problem), I think $1 million is about right. As a bonus, the same tools could be used to identify and correct for possible canvassing or brigading, which behaves a lot like sockpuppetry from a statistical perspective. A similar algorithm is already used by Twitter's community notes feature.
IANAL, so I can't comment on the legal side of this, and I can't comment on whether that money would be better-spent elsewhere since I don't know what the WMF budget looks like. Overall though, the technical implementation wouldn't be a major hurdle. – Closed Limelike Curves (talk) 20:44, 24 November 2024 (UTC)[reply]
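A minimal sketch of that "ML 101" framing, with placeholder features and labels standing in for the real (and genuinely hard to assemble) training data; scikit-learn is used purely for illustration:

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    # Placeholder data: one row per candidate account pair, with columns such
    # as page-overlap ratio, timecard similarity and edit-summary similarity;
    # labels would mark pairs already established as the same operator at SPI.
    rng = np.random.default_rng(0)
    X = rng.random((1000, 5))            # hypothetical features
    y = rng.integers(0, 2, size=1000)    # hypothetical labels

    clf = RandomForestClassifier(n_estimators=200, class_weight="balanced")
    print(cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean())

The modelling step really is routine; as noted elsewhere in this thread, the hard parts are assembling unbiased training data and running such a system reliably in production.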
Third-party services like Sift.com provide this kind of algorithm-based account fraud protection as an alternative to building and maintaining internally. czar 23:41, 24 November 2024 (UTC)[reply]
Building such a model is only a small part of a real production system. If this system is to operate on all account creations, it needs to be at least as reliable as the existing systems that handle account creations. As you probably know, data scientists developing such a model need to be supported by software engineers and site reliability engineers supporting the actual system. Then you have the problem of new sockers who are not on the list of sockmasters to check against. Non-English-language speakers often would be put at a disadvantage too. It's not as trivial as you make it out to be, thus I stand by my estimate.--Jasper Deng (talk) 06:59, 25 November 2024 (UTC)[reply]
None of you have accounted for Hofstadter's law.
I don't think we need to spend more time speculating about a system that WMF Legal is extremely unlikely to accept. Even if they did, it wouldn't exist until several years from now. Instead, let's try to think of things that we can do ourselves, or with only a very little assistance. Small, lightweight projects with full community control can help us now, and if we prove that ____ works, the WMF might be willing to adopt and expand it later. WhatamIdoing (talk) 23:39, 25 November 2024 (UTC)[reply]
That's a mistake -- doing the same thing Wikipedia has been doing for 20+ years. The mistake is in leaving it to volunteers to catch sockpuppetry, rather than insisting that the WMF devote significant resources to it. And it's a mistake because the one thing we volunteers can't do, that the WMF can do, is comb through the server logs looking for patterns. Levivich (talk) 23:44, 25 November 2024 (UTC)[reply]
Not sure about the "building an ML algorithm to detect sockpuppets would be pretty easy" part, but I admire the optimism. It is certainly the case that it is possible, and people have done it with a surprising level of success a very long time ago in ML terms, e.g. https://doi.org/10.1016/j.knosys.2018.03.002. These projects tend to rely on the category graph (the categorization of accounts as confirmed or suspected socks) to distinguish sock and non-sock sets for training. However, the category graph is woefully incomplete, i.e. there is information in the logs that is not reflected in the graph, so ensuring that all ban evasion accounts are properly categorized as such might help a bit. Sean.hoyland (talk) 03:58, 26 November 2024 (UTC)[reply]
Thankfully, we wouldn't have to build an ML algorithm, we can just use one of the existing ones. Some are even open source. Or WMF could use a third party service like the aforementioned sift.com. Levivich (talk) 16:17, 26 November 2024 (UTC)[reply]
Let me guess: Essentially, you would like their machine-learning team to use Sift's AI-Powered Fraud Protection, which from what I can glance, handles safeguarding subscriptions to defending digital content and in-app purchases and helps businesses reduce friction and stop sophisticated fraud attacks that gut growth, to provide the ability for us to automatically checkuser all active users? 2601AC47 (talk·contribs·my rights) Isn't a IP anon 16:25, 26 November 2024 (UTC)[reply]
The WMF already has the ability to "automatically checkuser all users" (the verb "checkuser" just means "look at the server logs"), I'm suggesting they use it. And that they use it in a sophisticated way, employing (existing, open source or commercially available) AI/ML technologies, like the same kind we already use to automatically revert vandalism. Contrary to claims here, doing so would not be illegal or even expensive (comparatively, for the WMF). Levivich (talk) 16:40, 26 November 2024 (UTC)[reply]
So, in my attempt to get things set right and steer towards a consensus that is satisfactory, I sincerely follow-up: What lies beyond that in this vast, uncharted sea? And could this mean any more in the next 5 years? 2601AC47 (talk·contribs·my rights) Isn't a IP anon 16:49, 26 November 2024 (UTC)[reply]
What lies beyond is mw:Extension:SimilarEditors. Levivich (talk) 17:26, 26 November 2024 (UTC)[reply]
So, @2601AC47, I think the answer to your question is "tell the WMF we really, really, really would like more attention to sockpuppetry and IP abuse from the ML team". -- asilvering (talk) 17:31, 26 November 2024 (UTC)[reply]
Which I don't suppose someone can at the next board meeting on December 11? 2601AC47 (talk·contribs·my rights) Isn't a IP anon 18:00, 26 November 2024 (UTC)[reply]
I may also point to this, where they mention development in other areas, such as social media features and machine learning expertise. 2601AC47 (talk·contribs·my rights) Isn't a IP anon 16:36, 26 November 2024 (UTC)[reply]
e.g. m:Research:Sockpuppet_detection_in_Wikimedia_projects Sean.hoyland (talk) 17:02, 26 November 2024 (UTC)[reply]
And that mentions Socksfinder, still in beta it seems. 2601AC47 (talk·contribs·my rights) Isn't a IP anon 17:10, 26 November 2024 (UTC)[reply]
3 days! When I first posted my comment and some editors responded that I didn't know what I was talking about, it can't be done, it'd violate the privacy policy and privacy laws, WMF Legal would never allow it... I was wondering how long it would take before somebody pointed out that this thing that can't be done has already been done and has been under development for at least 7 years now.
Of course it's already under development, it's pretty obvious that the same Wikipedia that developed ClueBot, one of the world's earlier and more successful examples of ML applications, would try to employ ML to fight multiple-account abuse. I mean, I'm obviously not gonna be the first person to think of this "innovation"!
Anyway, it took 3 days. Thanks, Sean! Levivich (talk) 17:31, 26 November 2024 (UTC)[reply]
Unlike what is being proposed, SimilarEditors only works based on publicly available data (e.g. similarities in editing patterns), and not IP data. To quote the page Sean linked, in the model's current form, we are only considering public data, but most saliently private data such as IP addresses or user-agent information are features currently used by checkusers that could be later (carefully) incorporated into the models.
So not only does the current model not look at IP data, the research project also acknowledges that actually using such data should only be done in a "careful" way, because of those very same privacy policy issues quoted above.
On the ML side, however, this does prove that it's being worked on, and I'm honestly not surprised at all that the WMF is working on machine learning-based tools to detect sockpuppets. Chaotic Enby (talk · contribs) 17:50, 26 November 2024 (UTC)[reply]
Right. We should ask WMF to do the later (carefully) incorporated into the models part (especially since it's now later). BTW, the SimilarUsers API already pulls IP and other metadata. SimilarExtensions (a tool that uses the API) doesn't release that information to CheckUsers, by design. And that's a good thing, we can't just release all IPs to CheckUsers, it does indeed have to be done carefully. But user metadata can be used. What I'm suggesting is that the WMF should proceed to develop these types of tools (including the careful use of user metadata). Levivich (talk) 17:57, 26 November 2024 (UTC)[reply]
Not really clear that they're pulling IP data from logged-in users. The relevant sections reads:

USER_METADATA (203MB): for every user in COEDIT_DATA, this contains basic metadata about them (total number of edits in data, total number of pages edited, user or IP, timestamp range of edits).

This reads like they're collecting the username or IP depending on whether they're a logged-in user or an IP user. Chaotic Enby (talk · contribs) 18:14, 26 November 2024 (UTC)[reply]
In a few years people might look back on these days when we only had to deal with simple devious primates employing deception as the halcyon days. Sean.hoyland (talk) 18:33, 26 November 2024 (UTC)[reply]
I assumed 1 million USD/year was accounting for Hofstadter's law several times over. Otherwise it feels wildly pessimistic. – Closed Limelike Curves (talk) 15:57, 26 November 2024 (UTC)[reply]
IP range 2600:1700:69F1:1410:0:0:0:0/64 blocked by a CU
The following discussion has been closed. Please do not modify it.
Why do you guys hate the WMF so much? If it weren’t for them, you wouldn’t have this website at all. 2600:1700:69F1:1410:5D40:53D:B27E:D147 (talk) 23:51, 28 November 2024 (UTC)[reply]
We don’t. 2601AC47 (talk·contribs·my rights) Isn't a IP anon 01:13, 29 November 2024 (UTC)[reply]
Then why do you guys always whine and complain about how incompetent they are and how much money they make and are actively against their donation drives? 2600:1700:69F1:1410:6DF5:851F:7413:CA3B (talk) 01:29, 29 November 2024 (UTC)[reply]
We don't. Levivich (talk) 02:47, 29 November 2024 (UTC)[reply]
Don’t “we don’t” me again. 2600:1700:69F1:1410:C812:78B7:C08A:5AA5 (talk) 03:11, 29 November 2024 (UTC)[reply]
This may be surprising, but it turns out there's more than one person on Wikipedia, and many of us have different opinions on things. You're probably thinking of @Guy Macon's essay.
I disagree with his argument that the WMF is incompetent, but at the same time, smart thinking happens on the margin. Just because the WMF spent their first $20 million extremely well (on creating Wikipedia) doesn't mean giving them $200 million would make them 10× as good. Nobody here thinks the WMF budget should be cut to $0; there's just some of us who think it needs a haircut.
For me it comes down to, "if you don't donate to the WMF, what does that money go instead"? I'd rather you give that money to some other charity—feeding African children is more important than reskinning Wikipedia—but if you won't, I'd doubt giving it to the WMF is worse than whatever else you were going to spend it on. Whether we should cut back on ads depends on whether this money is coming out of donors' charity budgets or their regular budgets. – Closed Limelike Curves (talk) 03:10, 29 November 2024 (UTC)[reply]
I already struggle enough with prioritizing charities and which ones are ethical, and how I should be spending every single penny I get on charities dealing with PIA and trans issues because those are the most oppressed groups in the world right now. The WMF is not helping people who are actively getting killed and having their rights taken away, therefore they are not important. 2600:1700:69F1:1410:C812:78B7:C08A:5AA5 (talk) 03:15, 29 November 2024 (UTC)[reply]
In that case, I'd suggest checking out GiveWell, which has some very good recommendations. That said, this subthread feels wildly off-topic. – Closed Limelike Curves (talk) 03:33, 29 November 2024 (UTC)[reply]
So goes this whole discussion; but to give a slightly longer answer to the IP: We’re not telling them to get lost on a different path, we’re trying (despite everything) to establish relations, consensus and mutual trust. And hopefully long-term progress on key areas of contention. We don’t hate them, or else they’ll dismiss us completely. 2601AC47 (talk·contribs·my rights) Isn't a IP anon 03:44, 29 November 2024 (UTC)[reply]
Any such system would be subject to numerous biases or be easily defeatable. Such an automated anti-abuse system would have to be exclusively a foundation initiative as only they have the resources for such a monumental undertaking. It would need its own team of developers.--Jasper Deng (talk) 18:57, 23 November 2024 (UTC)[reply]

Absolutely no chance that this would pass. WP:SNOW, even though there isn't a flood of opposes. There are two problems:

  1. The existing CheckUser team barely has the bandwidth for the existing SPI load. Doing this on every single new user would be impractical and would enable WP:LTAs by diverting valuable CheckUser bandwidth.
  2. Even if we had enough CheckUsers, this would be a severe privacy violation absolutely prohibited under the Foundation privacy policy.

The vast majority of vandals and other disruptive users don't need CU involvement to deal with. There's very little to be gained from this.--Jasper Deng (talk) 18:36, 23 November 2024 (UTC)[reply]

It is perhaps an interesting conversation to have but I have to agree that it is unworkable, and directly contrary to foundation-level policy which we cannot make a local exemption to. En.wp, I believe, already has the largest CU team of any WMF project, but we would need hundreds more people on that team to handle something like this. In the last round of appointments, the committee approved exactly one checkuser, and that one was a returning former member of the team. And there is the very real risk that if we appointed a whole bunch of new CUs, some of them would abuse the tool. Just Step Sideways from this world ..... today 18:55, 23 November 2024 (UTC)[reply]
And it's worth pointing out that the Committee approving too few volunteers for Checkuser (regardless of whether you think they are or aren't) is not a significant part of this issue. There simply are not tens of people who are putting themselves forward for consideration as CUs. Since 2016, 54 applications (an average of roughly six per year) have been put forward for consideration by Functionaries (the highest was 9, the lowest was 2). Note this is total applications not applicants (more than one person has applied multiple times), and is not limited to candidates who had a realistic chance of being appointed. Thryduulf (talk) 20:40, 23 November 2024 (UTC)[reply]
The dearth of candidates has for sure been an ongoing thing, it's worth reminding admins that they don't have to wait for the committee to call for candidates, you can put your name forward at any time by emailing the committee. Just Step Sideways from this world ..... today 23:48, 24 November 2024 (UTC)[reply]
Generally, I tend to get the impression from those who have checkuser rights that CU should be done as a last resort, and other, less invasive methods are preferred, and it would seem that indiscriminate use of it would be a bad idea, so I would have some major misgivings about this proposal. And given the ANI case, the less user information that we retain, the better (which is also probably why temporary accounts are a necessary and prudent idea despite other potential drawbacks). Abzeronow (talk) 03:56, 23 November 2024 (UTC)[reply]
Oppose. A lot has already been written on the unsustainable workload for the CU team this would create and the amount of collateral damage; I'll add in the fact that our most notorious sockmasters in areas like PIA already use highly sophisticated methods to evade CU detection, and based on what I've seen at the relevant SPIs most of the blocks in these cases are made with more weight given to the behaviour, and even then only after lengthy deliberations on the matter. These sort of sockmasters seem to have been in the OP's mind when the request was made, and I do not see automated CU being of any more use than current techniques against such dedicated sockmasters. And, has been mentioned before, most cases of sockpuppetry (such as run-of-the-mill vandals and trolls using throwaway accounts for abuse) don't need CU anyways. JavaHurricane 08:17, 24 November 2024 (UTC)[reply]
These are, unfortunately, fair points about the limits of CU and the many experienced and dedicated ban evading actors in PIA. CU information retention policy is also a complicating factor. Sean.hoyland (talk) 08:28, 24 November 2024 (UTC)[reply]
As I said in my original post, recidivist socks often get better at covering their "tells" each time, making behavioural detection increasingly difficult and meaning that the entire burden falls on the honest user to convince an Admin to take an SPI case seriously with scarce evidence. After many years I'm tired of defending various pages from sock POV edits and if WMF won't make life easier then increasingly I just won't bother; I'm sure plenty of other users feel the same way. Mztourist (talk) 05:45, 26 November 2024 (UTC)[reply]

SimilarEditors

[edit]

The development of mw:Extension:SimilarEditors -- the type of tool that could be used to do what Mztourist suggests -- has been "stalled" since 2023 and downgraded to low-priority in 2024, according to its documentation page and related phab tasks (see e.g. phab:T376548, phab:T304633, phab:T291509). Anybody know why? Levivich (talk) 17:43, 26 November 2024 (UTC)[reply]

Honestly, the main function of that sort of thing seems to be compiling data that is already available on XTools and various editor interaction analyzers, and then presenting it nicely and neatly. I think that such a page could be useful as a sanity check, and it might even be worth having that sort of thing as a standalone toolforge app, but I don't really see why the WMF would make that particular extension a high priority. — Red-tailed hawk (nest) 17:58, 26 November 2024 (UTC)[reply]
Well, it doesn't have to be that particular extension, but it seems to me that the entire "idea" has been stalled, unless they're working on another tool that I'm unaware of (very possible). (Or, it could be because of recent changes in domestic and int'l privacy laws that derailed their previous development advances, or it could be because of advancements in ML elsewhere making in-house development no longer practical.)

As to why the WMF would make this sort of problem a high priority, I'd say because the spread of misinformation on Wikipedia by sockpuppets is a big problem. Even without getting into the use of user metadata, just look at recent SPIs I filed, like Wikipedia:Sockpuppet investigations/Icewhiz/Archive#27 August 2024 and Wikipedia:Sockpuppet investigations/Icewhiz/Archive#09 October 2024. That involved no private data at all, but a computer could have done automatically, in seconds, what took me hours to do manually, and those socks could have been uncovered before they made thousands and thousands of edits spreading misinformation. If the computer looked at private data as well as public data, it would be even more effective (and would save CUs time as well). Seems to me to be a worthy expenditure of 0.5% or 1% of the WMF's annual budget. Levivich (talk) 18:09, 26 November 2024 (UTC)[reply]

This looks really interesting. I don't really know how extensions are rolled out to individual wikis - can anyone with knowledge about that summarise if having this tool turned on (for check users/relevant admins) for en.wp is feasible? Do we need a RFC, or is this a "maybe wait several years for a phab ticket" situation? BugGhost🦗👻 18:09, 26 November 2024 (UTC)[reply]
I find it amusing that ~4 separate users above are arguing that automatic identification of sockpuppets is impossible, impractical, and the WMF would never do it—and meanwhile, the WMF is already doing it. – Closed Limelike Curves (talk) 19:29, 27 November 2024 (UTC)[reply]
So, discussion is over? 2601AC47 (talk·contribs·my rights) Isn't a IP anon 19:31, 27 November 2024 (UTC)[reply]
I think what's happening is that people are having two simultaneous discussions – automatic identification of sockpuppets is already being done, but what people say "the WMF would never do" is using private data (e.g. IP addresses) to identify them. Which adds another level of (ethical, if not legal) complications compared to what SimilarEditors is doing (only processing data everyone can access, but in an automated way). Chaotic Enby (talk · contribs) 07:59, 28 November 2024 (UTC)[reply]
"automatic identification of sockpuppets is already being done" is probably an overstatement, but I agree that there may be a potential legal and ethical minefield between the Similarusers service that uses public information available to anyone from the databases after redaction of private information (i.e. course-grained sampling of revision timestamps combined with an attempt to quantify page intersection data), and a service that has access to the private information associated with a registered account name. Sean.hoyland (talk) 11:15, 28 November 2024 (UTC)[reply]
The WMF said they're planning on incorporating IP addresses and device info as well! – Closed Limelike Curves (talk) 21:21, 29 November 2024 (UTC)[reply]
Yes, automatic identification of (these) sockpuppets is impossible. There are many reasons for this, but the simplest one is this: These types of tools require hundreds of edits – at minimum – to return any viable data, and the sort of sockmasters who get accounts up to that volume of edits know how to evade detection by tools that analyse public information. The markers would likely indicate people from similar countries – naturally, two Cypriots would be interested in Category:Cyprus and over time similar hour and day overlaps will emerge, but what's to let you know whether these are actual socks when they're evading technical analysis? You're back to square one. There are other tools such as mediawikiwiki:User:Ladsgroup/masz which I consider equally circumstantial; an analysis of myself returns a high likelihood of me being other administrators and arbitrators, while analysing an alleged sock currently at SPI returns the filer as the third most likely sockmaster. This is not commentary on the tools themselves, but rather simply the way things are. DatGuyTalkContribs 17:42, 28 November 2024 (UTC)[reply]
Oh, fun! Too bad it's CU-restricted, I'm quite curious to know what user I'm most stylometrically similar to. -- asilvering (talk) 17:51, 28 November 2024 (UTC)[reply]
That would be LittlePuppers and LEvalyn. DatGuyTalkContribs 03:02, 29 November 2024 (UTC)[reply]
Fascinating! One I've worked with, one I haven't, both AfC reviewers. Not bad. -- asilvering (talk) 06:14, 29 November 2024 (UTC)[reply]
Idk, the half dozen ARBPIA socks I recently reported at SPI were obvious af to me, as are several others I haven't reported yet. That may be because that particular sockfarm is easy to spot by its POV pushing and a few other habits; though I bet in other topic areas it's the same. WP:ARBECR helps because it forces the socks to make 500 edits minimum before they can start POV pushing, but still we have to let them edit for a while post-XC just to generate enough diffs to support an SPI filing. Software that combines tools like Masz and SimilarEditor, and does other kinds of similar analysis, could significantly reduce the amount of editor time required to identify and report them. Levivich (talk) 18:02, 28 November 2024 (UTC)[reply]
I think it is possible, studies have demonstrated that it is possible, but it is true that having a sufficient number of samples is critical. Samples can be aggregated in some cases. There are several other important factors too. I have tried some techniques, and sometimes they work, or let's say they can sometimes produce results consistent with SPI results, better than random, but with plenty of false positives. It is also true that there are a number of detection countermeasures (that I won't describe) that are already employed by some bad actors that make detection harder. But I think the objective should be modest, to just move a bit in the right direction by detecting more ban evading accounts than are currently detected, or at least to find ways to reduce the size of the search space by providing ban evasion candidates. Taking the human out of the detection loop might take a while. Sean.hoyland (talk) 18:39, 28 November 2024 (UTC)[reply]
If you mean it's never going to be possible to catch some sockpuppets—the best-hidden, cleverest, etc. ones—you're completely correct. But I'm guessing we could cut the amount of time SPI has to spend dramatically with just some basic checks. – Closed Limelike Curves (talk) 02:27, 29 November 2024 (UTC)[reply]
I disagree. Empirically, the vast majority of time spent at SPI is not on finding possible socks, nor is it using the CheckUser tool on them, but rather it's the CU completed cases (of which there are currently 14 and I should probably stop slacking and get onto some) with non-definitive technical results waiting on an administrator to make the final determination on whether they're socks or not. Extension:SimilarUsers would concentrate various information that already exists (EIA, RoySmith's SPI tools) in one place, but I wouldn't say the accessibility of these tools is a cause of SPI backlog. An AI analysis tool to give an accurate magic number for likelihood? I'm anything but a Luddite, but still believe that's wishful thinking. DatGuyTalkContribs 03:02, 29 November 2024 (UTC)[reply]
Something seems better than nothing in this context doesn't it? EIA and the Similarusers service don't provide an estimate of the significance of page intersections. An intersection on a page with few revisions or few unique actors or few pageviews etc. is very different from a page intersection on the Donald Trump page. That kind of information is probably something that could sometimes help, even just to evaluate the importance of intersection evidence presented at SPIs. It seems to me that any kind of assistance could help. And another thing about the number of edits is that too many samples can also present challenges related to noise, with signals getting smeared out, although the type of noise in a user's data can itself be a characteristic signal in some cases it seems. And if there are too few samples, you can generate synthetic samples based on the actual samples and inject them into spaces. Search strategy matters a lot. The space of everyone vs everyone is vast, so good luck finding potential matches in that space without a lot of compute, especially for diffs. But many socks inhabit relatively small subspaces of Wikipedia, at least in the 20%-ish of time (on average in PIA) they edit(war)/POV-push etc. in their topic of interest. So, choosing the candidate search space and search strategy wisely can make the problem much more tractable for a given topic area/subspace. Targeted fishing by picking a potential sock and looking for potential matches (the strategy used by the Similarusers service and CU I guess) is obviously a very different challenge than large-scale industrial fishing for socks in general. Sean.hoyland (talk) 04:08, 29 November 2024 (UTC)[reply]
And to continue the whining about existing tools, EIA and the Similarusers service use a suboptimal strategy in my view. If the objective is page intersection information for a potential sock against a sockmaster, and a ban evasion source has employed n identified actors so far e.g. almost 50 accounts for Icewhiz, the source's revision data should be aggregated for the intersection. This is not difficult to do using the category graph and the logs. Sean.hoyland (talk) 04:25, 29 November 2024 (UTC)[reply]
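As a rough illustration of that aggregated-intersection idea, the sketch below pulls contributions for several already-identified accounts through the public MediaWiki API, unions the page titles, and intersects them with a candidate account. This is only a sketch of the approach described above, not an existing tool; all usernames are placeholders and the 5,000-title cap is an arbitrary assumption.

```python
# Sketch of aggregated page-intersection using only public data from the
# MediaWiki API. All usernames are placeholders; limits are arbitrary.
import requests

API = "https://en.wikipedia.org/w/api.php"
HEADERS = {"User-Agent": "intersection-sketch/0.1 (example)"}

def edited_titles(user, cap=5000):
    """Set of page titles edited by one account (public contributions only)."""
    titles = set()
    params = {"action": "query", "list": "usercontribs", "ucuser": user,
              "ucprop": "title", "uclimit": "500", "format": "json"}
    while len(titles) < cap:
        data = requests.get(API, params=params, headers=HEADERS).json()
        titles.update(c["title"] for c in data["query"]["usercontribs"])
        if "continue" not in data:
            break
        params.update(data["continue"])
    return titles

# Aggregate every identified account of one ban-evasion source...
known_socks = ["SockAccount1", "SockAccount2", "SockAccount3"]   # placeholders
source_titles = set().union(*(edited_titles(u) for u in known_socks))

# ...then intersect with a single candidate account.
candidate_titles = edited_titles("CandidateAccount")             # placeholder
overlap = candidate_titles & source_titles
print(f"{len(overlap)} shared pages; sample: {sorted(overlap)[:10]}")
```

Weighting each shared page by how rarely it is edited overall (rather than treating the Donald Trump page and an obscure stub the same) would be the next step, as discussed above.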
There is so much more that could be done with the software. EIA gives you page overlaps (and isn't 100% accurate at it), but it doesn't tell you:
  • how many times the accounts made the same edits (tag team edit warring)
  • how many times they voted in the same formal discussions (RfC, AfD, RM, etc) and whether they voted the same way or different (vote stacking)
  • how many times they use the same language and whether they use unique phraseology
  • whether they edit at the same times of day
  • whether they edit on the same days
  • whether account creation dates (or start-of-regular-editing dates) line up with when other socks were blocked
  • whether they changed focus after reaching XC and to what extent (useful in any ARBECR area)
  • whether they "gamed" or "rushed" to XC (same)
All of this (and more) would be useful to see in a combined way, like a dashboard. It might make sense to restrict access to such compilations of data to CUs, and the software could also throw metadata or subscriber info in there, too (or not), and it doesn't have to reduce it all into a single score like ORES, but just having this info compiled in one place would save editors the time of having to compile it manually. If the software auto-swept logs for this info and alerted humans to any "high scores" (however defined, e.g. "matches across multiple criteria"), it would probably not only reduce editor time but also increase sock discovery. Levivich (talk) 04:53, 29 November 2024 (UTC)[reply]
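For a sense of how cheap some of those signals are to compute, here is a rough sketch of just one of them, hour-of-day overlap, built from public contribution timestamps. Usernames are placeholders, and a real comparison would also need controls for time zone, sample size, and base rates.

```python
# Sketch of the "do they edit at the same times of day" signal, computed
# from public contribution timestamps only. Usernames are placeholders.
import requests
from collections import Counter
from math import sqrt

API = "https://en.wikipedia.org/w/api.php"

def hour_histogram(user, limit=500):
    """Edits per UTC hour of day for one account (most recent `limit` edits)."""
    params = {"action": "query", "list": "usercontribs", "ucuser": user,
              "ucprop": "timestamp", "uclimit": str(limit), "format": "json"}
    data = requests.get(API, params=params).json()
    return Counter(int(c["timestamp"][11:13]) for c in data["query"]["usercontribs"])

def cosine_similarity(a, b):
    """1.0 means identical hour-of-day patterns, 0.0 means no overlap at all."""
    dot = sum(a[h] * b[h] for h in range(24))
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

print(cosine_similarity(hour_histogram("AccountOne"), hour_histogram("AccountTwo")))
```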
This is like one of my favorite strategies for meetings. Propose multiple things, many of which are technically challenging, then just walk out of the meeting.
The 'how many times the accounts made the same edits' is probably do-able because you can connect reverted revisions to the revisions that reverted them using json data in the database populated as part of the tagging system, look at the target state reverted to and whether the revision was an exact revert. ...or maybe not without computing diffs, having just looked at an article with a history of edit warring. Sean.hoyland (talk) 07:43, 29 November 2024 (UTC)[reply]
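For the exact-revert case at least, the public API is enough: a revision whose SHA-1 matches an earlier revision of the same page restored that earlier state, so the editors in between were reverted. A minimal sketch of that idea follows; the page title is a placeholder, and partial reverts would indeed need real diffs, as noted above.

```python
# Sketch: find exact reverts in one page history by SHA-1 matching.
# Partial reverts are not detected; the page title is a placeholder.
import requests

API = "https://en.wikipedia.org/w/api.php"

def history(title, limit=500):
    params = {"action": "query", "prop": "revisions", "titles": title,
              "rvprop": "ids|user|sha1", "rvlimit": str(limit),
              "rvdir": "newer", "format": "json"}
    page = next(iter(requests.get(API, params=params).json()["query"]["pages"].values()))
    return page.get("revisions", [])

def exact_reverts(title):
    """Yield (reverting_user, set_of_reverted_users) for SHA-1 identity reverts."""
    first_seen = {}   # sha1 -> index of earliest revision with that content
    revs = history(title)
    for i, rev in enumerate(revs):
        sha = rev.get("sha1")
        if sha is None:          # e.g. revision-deleted content
            continue
        if sha in first_seen:
            reverted = {r.get("user") for r in revs[first_seen[sha] + 1 : i]}
            yield rev.get("user"), reverted - {None}
        else:
            first_seen[sha] = i

# Counting how often the same (reverter, reverted) pair recurs across many
# pages would approximate the tag-team signal discussed above.
for reverter, reverted in exact_reverts("Example article"):      # placeholder title
    print(reverter, "restored earlier content over", sorted(reverted))
```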
I agree with Levivich that automated, privacy-protecting sock-detection is not a pipe dream. I proposed a system something like this in 2018, see also here, and more recently here. However, it definitely requires a bit of software development and testing. It also requires the community and the foundation devs or product folks to prioritize the idea. Andre🚐 02:27, 10 December 2024 (UTC)[reply]
  • Comment. For some time I have vehemently suspected that this site is crawling with massive numbers of sockpuppets, that the community seems to be unable or unwilling to recognise probable sockpuppets for what they are, and that it is not feasible to send them to SPI one at a time. I see a large number of accounts that are sleepers, or that have low edit counts, trying to do things that are controversial or otherwise suspicious. I see them showing up at discussions in large numbers and in quick succession, and offering !votes consisting of interpretations of our policies and guidelines that may not reflect consensus, or other statements that may not be factually accurate.
I think the solution is simple: when closing community discussions, admins should look at the edit count of each !voter when determining how much weight to give his !vote. The lower the edit count, the greater the level of sleeper behaviour, and the more controversial the subject of the discussion is amongst the community, the less weight should be given to the !vote.
For example, if an account with fewer than one thousand edits !votes in a discussion about 16th century Tibetan manuscripts, we may well be able to trust that !vote, because the community does not care about such manuscripts. But if the same account !votes on anything connected with "databases" or "lugstubs", we should probably give that !vote very little weight, because that was the subject of a massive dispute amongst the community, and any discussion on that subject is not particularly unlikely to be crawling with socks on both sides. The feeling is that, if you want to be taken seriously in such a controversial discussion, you need to make enough edits to prove that you are a real person, and not a sock. James500 (talk) 15:22, 12 December 2024 (UTC)[reply]
The site presumably has a large number of unidentified sockpuppets. As for the identified ban evading accounts, accounts categorized or logged as socks, if you look at 2 million randomly selected articles for the 2023-10-07 to 2024-10-06 year, just under 2% of the revisions are by ban evading actors blocked for sockpuppetry (211,546 revisions out of 10,732,361). A problem with making weight dependent on edit count is that the edit count number does not tell you anything about the probability that an account is a sock. Some people use hundreds of disposable accounts, making just a few edits with each account. Others stick around and make thousands of edits before they are detected. Also, Wikipedia provides plenty of tools that people can use to rapidly increase their edit count. Sean.hoyland (talk) 16:12, 12 December 2024 (UTC)[reply]

Requiring registration for editing

[edit]
Note: This section was split off from "CheckUser for all new users" (permalink) and the "parenthetical comment" referred to below is: (Also, email-required registration and get rid of IP editing.)—03:49, 26 November 2024 (UTC)

@Levivich, about your parenthetical comment on requiring registration:

Part of the eternally unsolvable problem is that new editors are frankly bad at it. I can give examples from my own editing: Create an article citing a personal blog post as the main source? Check. Merge two articles that were actually different subjects? Been there, done that, got the revert. Misunderstand and mangle wikitext? More times than I can count. And that's after I created my account. Like about half of experienced editors, I edited as an IP first, fixing a typo here or reverting some vandalism there.

But if we don't persist through these early problems, we don't get experienced editors. And if we don't get experienced editors, Wikipedia will die.

Requiring registration ("get rid of IP editing") shrinks the number of people who edit. The Portuguese Wikipedia banned IPs only from the mainspace three years ago. Have a look at the trend. After the ban went into effect, they had 10K or 11K registered editors each month. It's since dropped to 8K. The number of contributions has dropped, too. They went from 160K to 210K edits per month down to 140K most months.

Some of the experienced editors have said that they like this. No IPs means less impulsive vandalism, and the talk pages are stable if you want to talk to the editor. Fewer newbies means I don't "have to" clean up after so many mistake-makers! Fewer editors, and especially fewer inexperienced editors, is more convenient – in the short term. But I wonder whether they're going to feel the same way a decade from now, when their community keeps shrinking, and they start wondering when they will lose critical mass.

The same thing happens in the real world, by the way. Businesses want to hire someone with experience. They don't want to train the helpless newbie. And then after years of everybody deciding that training entry-level workers is Somebody else's problem, they all look around and say: Where are all the workers that I need? Why didn't someone else train the next generation while I was busy taking the easy path?

In case you're curious, there is a Wikipedia that puts all of the IP and newbie edits under "PC" type restrictions. Nobody can see the edits until they've been approved by an experienced editor. The rate of vandalism visible to ordinary readers is low. Experienced editors love the level of control they have. Have a look at what's happened to the size of their community during the last decade. Is that what you want to see here? If so, we know how to make that happen. The path to that destination even looks broad, easy, and paved with all kinds of good intentions. WhatamIdoing (talk) 04:32, 23 November 2024 (UTC)[reply]

Size isn't everything... what happened to their output--the quality of their encyclopedias--after they made those changes? Levivich (talk) 05:24, 23 November 2024 (UTC)[reply]
Well, I can tell you objectively that the number of edits declined, but "quality" is in the eye of the beholder. I understand that the latter community has the lowest use of inline citations of any mid-size or larger Wikipedia. What's now yesterday's TFA there wouldn't even be rated B-class here due to whole sections not having any ref tags. In terms of citation density, their FA standard is currently where ours was >15 years ago.
But I think you have missed the point. Even if the quality has gone up according to the measure of your choice, if the number of contributors is steadily trending in the direction of zero, what will the quality be when something close to zero is reached? That community has almost halved in the last decade. How many articles are out of date, or missing, because there simply aren't enough people to write them? A decade from now, with half as many editors again, how much worse will the articles be? We're none of us idiots here. We can see the trend. We know that people die. You have doubtless seen this famous line:

All men are mortal. Socrates is a man. Therefore, Socrates is mortal.

I say:

All Wikipedia editors are mortal. Dead editors do not maintain or improve Wikipedia articles. Therefore, maintaining and improving Wikipedia requires editors who are not dead.

– and, memento mori, we are going to die, my friend. I am going to die. If we want Wikipedia to outlive us, we cannot be so shortsighted as to care only about the quality today, and never the quality the day after we die. WhatamIdoing (talk) 06:13, 23 November 2024 (UTC)[reply]
Trends don't last forever. Enwiki's active user count decreased from its peak over a few years, then flattened out for over a decade. The quality increased over that period of time (by any measure). Just because these other projects have shed users doesn't mean they're doomed to have zero users at some point in the future. And I think there's too many variables to know how much any particular change made on a project affects its overall user count, nevermind the quality of its output. Levivich (talk) 06:28, 23 November 2024 (UTC)[reply]
If the graph to the right accurately reflects the age distribution of Wikipedia users, then a large chunk of the user base will die off within the next decade or two. Not to be dramatic, but I agree that requiring registration to edit, which will discourage readers from editing in the first place, will hasten the project's decline.... Some1 (talk) 14:40, 23 November 2024 (UTC)[reply]
😂 Seriously? What do you suppose that chart looked like 20 years ago, and then what happened? Levivich (talk) 14:45, 23 November 2024 (UTC)[reply]
There are significantly more barriers to entry than there were 20 years ago, and over that time the age profile has increased (quite significantly iirc). Adding more barriers to entry is not the way to solve the issues caused by barriers to entry. Thryduulf (talk) 15:50, 23 November 2024 (UTC)[reply]
"PaperQA2 writes cited, Wikipedia style summaries of scientific topics that are significantly more accurate than existing, human-written Wikipedia articles" - maybe the demographics of the community will change. Sean.hoyland (talk) 16:30, 23 November 2024 (UTC)[reply]
That talks about LLM usage in articles, not the users. 2601AC47 (talk|contribs) Isn't a IP anon 16:34, 23 November 2024 (UTC)[reply]
Or you could say it's about a user called PaperQA2 that writes Wikipedia articles significantly more accurate than articles written by other users. Sean.hoyland (talk) 16:55, 23 November 2024 (UTC)[reply]
No, it is very clearly about a language model. As far as I know, PaperQA2, or WikiCrow (the generative model using PaperQA2 for question answering), has not actually been making any edits on Wikipedia itself. Chaotic Enby (talk · contribs) 16:58, 23 November 2024 (UTC)[reply]
That is true. It is not making any edits on Wikipedia itself. There is a barrier. But my point is that in the future that barrier may not be there. There may be users like PaperQA2 writing articles better than other users and the demographics will have changed to include new kinds of users, much younger than us. Sean.hoyland (talk) 17:33, 23 November 2024 (UTC)[reply]
And who will never die off! Levivich (talk) 17:39, 23 November 2024 (UTC)[reply]
But which will not be Wikipedia. WhatamIdoing (talk) 06:03, 24 November 2024 (UTC)[reply]
In re "What do you suppose that chart looked like 20 years ago": I believe that the numbers, very roughly, are that the community has gotten about 10 years older, on average, than it was 20 years ago. That is, we are bringing in some younger people, but not at a rate that would allow us to maintain the original age distribution. (Whether the original age distribution was a good thing is a separate consideration.) WhatamIdoing (talk) 06:06, 24 November 2024 (UTC)[reply]
I like looking at the en.wikipedia user retention graph hosted on Toolforge (for anyone who might go looking for it later, there's a link to it at Wikipedia:WikiProject Editor Retention § Resources). It shows histograms of how many editors have edited in each month, grouped by all the editors who started editing in the same month. The data is noisy, but it does seem to show an increase in editing tenure since 2020 (when the pursuit of a lot of solo hobbies picked up, of course). Prior to that, there does seem to be a bit of slow growth in tenure length since the lowest point around 2013. isaacl (talk) 17:18, 23 November 2024 (UTC)[reply]
The trend is a bit clearer when looking at the retention graph based on those who made at least 10 edits in a month. (To see the trend when looking at the retention graph based on 100 edits in a month, the default colour range needs to be shifted to accommodate the smaller numbers.) isaacl (talk) 17:25, 23 November 2024 (UTC)[reply]
I'd say that the story there is: Something amazing happened in 2006. Ours (since both of us registered our accounts that year) was the year from which people stuck around. I think that would be just about the time that the wall o' automated rejection really got going. There was some UPE-type commercial pressure, but it didn't feel unmanageable. It looks like an inflection point in retention. WhatamIdoing (talk) 06:12, 24 November 2024 (UTC)[reply]
I don't think something particularly amazing happened in 2006. I think the rapid growth in articles starting in 2004 attracted a large land rush of editors as Wikipedia became established as a top search result. The cohort of editors at that time had the opportunity to cover a lot of topics for the first time on Wikipedia, requiring a lot of co-ordination, which created bonds between editors. As topic coverage grew, there were fewer articles that could be more readily created by generalists, the land rush subsided, and there was less motivation for new editors to persist in editing. Boom-bust cycles are common for a lot of popular things, particularly in tech where newer, shinier things launch all the time. isaacl (talk) 19:07, 24 November 2024 (UTC)[reply]
Ah yes, that glorious time when we gained an article on every Pokemon character and, it seems, every actor who was ever credited in a porn movie. Unfortunately, many of the editors I bonded with then are no longer active. Some are dead, some finished school, some presumably burned out, at least one went into the ministry. Donald Albury 23:49, 26 November 2024 (UTC)[reply]
Have a look at what happened to the size of their community.—I'm gonna be honest: eyeballing it, I don't actually see much (if any) difference with enwiki's activity. "Look at this graph" only convinces people when the dataset passes the interocular trauma test (e.g. the hockey stick).
On the other hand, if there's a dataset of "when did $LANGUAGEwiki adopt universal pending changes protections", we could settle this argument once and for all using a real statistical model that can deliver precise effect sizes on activity. Maybe then we can all finally drop the stick. – Closed Limelike Curves (talk) 18:08, 26 November 2024 (UTC)[reply]

This particular idea will not pass, but the binary developing in the discussion is depressing. A bargain where we allow IPs to edit (or unregistered users generally when IPs are masked), and therefore will sit on our hands when dealing with abuse and even harassment is a grim one. Any steps taken to curtail the second half of that bargain would make the first half stronger, and I am generally glad to see thoughts about it, even if they end up being impractical. CMD (talk) 02:13, 24 November 2024 (UTC)[reply]

I don't want us to sit on our hands when we see abuse and harassment. I think our toolset is about 20 years out of date, and I believe there are things we could do that will help (e.g., mw:Temporary accounts, cross-wiki checkuser tools for Stewards, detecting and responding to a little bit more information about devices/settings [perhaps, e.g., whether an edit is being made from a private/incognito window]). WhatamIdoing (talk) 06:39, 24 November 2024 (UTC)[reply]
Temporary accounts will help with the casual vandalism, but they’re not going to help with abuse and harassment. If it limits the ability to see ranges, it will make issues slightly worse. CMD (talk) 07:13, 24 November 2024 (UTC)[reply]
I'm not sure what the current story is there, but when I talked to the team last (i.e., in mid-2023), we were talking about the value of a tool that would do range-related work. For various reasons, this would probably be Toolforge instead of MediaWiki, and it would probably be restricted (e.g., to admins, or to a suitable group chosen by each community), but the goal was to make it require less manual work, particularly for cross-wiki abuse, and to be able to aggregate some information without requiring direct disclosure of some PII. WhatamIdoing (talk) 23:56, 25 November 2024 (UTC)[reply]

Oh look, misleading statistics! "The Portuguese Wikipedia banned IPs only from the mainspace three years ago. Have a look at the trend. After the ban went into effect, they had 10K or 11K registered editors each month. It's since dropped to 8K. " Of course you have a spike in new registrations soon after you stop allowing IP edits, and you can't sustain that spike. That is not evidence of anything. It would have been more honest and illustrative to show the graph before and after the policy change, not only afterwards, e.g. thus. Oh look, banning IP editing has resulted in on average some 50% more registered editors than before the ban. Number of active editors is up 50% as well[1]. The number of new pages has stayed the same[2]. Number of edits is down, yes, but how much of this is due to less vandalism / vandalism reverts? A lot apparently, as the count of user edits has stayed about the same[3]. Basically, from those statistics, used properly, it is impossible to detect any issues with the Portuguese Wikipedia due to the banning of IP editing. Fram (talk) 08:55, 26 November 2024 (UTC)[reply]

"how much of this is due to less vandalism / vandalism reverts?" That's a good question. Do we have some data on this? Jo-Jo Eumerus (talk) 09:20, 26 November 2024 (UTC)[reply]
@Jo-Jo Eumerus:, the dashboard is here although it looks like they stopped reporting the data in late 2021. If you take "Number of reverts" as a proxy for vandalism, you can see that the block shifted the number of reverts from a higher equilibrium to a lower one, while overall non-reverted edits does not seem to have changed significantly during that period. CMD (talk) 11:44, 28 November 2024 (UTC)[reply]
Upon thinking, it would be useful to know how many good edits are done by IP. Or as is, unreverted edits. Jo-Jo Eumerus (talk) 14:03, 30 November 2024 (UTC)[reply]
I agree that one should expect a spike in registration. (In fact, I have suggested a strictly temporary requirement to register – a few hours, even – to push some of our regular IPs into creating accounts.) But once you look past that initial spike, the trend is downward. WhatamIdoing (talk) 05:32, 29 November 2024 (UTC)[reply]

But once you look past that initial spike, the trend is downward.

I still don't see any evidence that this downward trend is unusual. Apparently the WMF did an analysis of ptwiki and didn't find evidence of a drop in activity. Net edits (non-revert edits standing for at least 48 hours) increased by 5.7%, although edits across other wikis increased slightly more. The impression I get is any effects are small either way—the gains from freeing up anti-vandalism resources basically offset the cost of some IP editors not signing up.
TBH this lines up with what I'd expect. Very few people I talk to cite issues like "creating an account" as a major barrier to editing Wikipedia. The most common barrier I've heard from people who tried editing and gave it up is "Oh, I tried, but then some random admin reverted me, linked me to MOS:OBSCURE BULLSHIT, and told me to go fuck myself but with less expletives." – Closed Limelike Curves (talk) 07:32, 29 November 2024 (UTC)[reply]

But once you look past that initial spike, the trend is downward.

Not really obvious, and not more or even less so in Portuguese wikipedia [4] than in e.g. Enwiki[5], FRwiki[6], NLwiki[7], ESwiki[8], Svwiki[9]... So, once again, these statistics show no issue at all with disabling IP editing on Portuguese Wikipedia. Fram (talk) 10:38, 29 November 2024 (UTC)[reply]

Aside from the obvious loss of good 'IP' editors, I think there's a risk of unintended consequences from 'stopping vandalism' at all; 'vandalism' and 'disruptive editing' from IP editors (or others) isn't necessarily a bad thing, long term. Even the worst disruptive editors 'stir the pot' of articles, bringing attention to articles that need it, and otherwise would have gone unnoticed. As someone who mostly just trawls through recent changes, I can remember dozens of times where an IP, or brand new, user comes along and breaks an article entirely, but their edit leads inexorably to the article being improved. Sometimes there is a glimmer of a good point in their edit, that I was able to express better than they were, maybe in a more balanced or neutral way. Sometimes they make an entirely inappropriate edit, but it brings the article to the top of the list, and upon reading it I notice a number of other, previously missed, problems in the article. Sometimes, having reverted a disruptive change, I just go and add some sources or fix a few typos in the article before I go on my merry way. You might think 'Ah, but Random article would let you find those problems too.' But 'Random article' is, well, random. IP editors are more targeted, and that someone felt the need to disparage a certain person's mother in fact brings attention to an article about someone who is, unbeknownst to us editors, particularly contentious in the world of Czech Jazz Flautists, so there is a lot there to fix. By stopping people making these edits, we risk a larger proportion of articles becoming entirely stagnant. JeffUK 15:00, 9 December 2024 (UTC)[reply]

I feel that the glassmaker has been too clever by half here. "Ahh, but vandalism of articles stimulates improvements to those articles." If the analysis ends there, I have no objections. But if, on the other hand, you come to the conclusion that it is a good thing to vandalize articles, that it causes information to circulate, and that the encouragement of editing in general will be the result of it, you will oblige me to call out, "Halt! Your theory is confined to that which is seen; it takes no account of that which is not seen." If I were to pay a thousand people to vandalize Wikipedia articles full-time, bringing more attention to them, would I be a hero or villain? If vandalism is good, why do we ban vandals instead of thanking them? Because vandalism is bad—every hour spent cleaning up after a vandal is one not spent writing a new article or improving an existing one.
On targeting: vandals are more targeted than a "random article", but are far more destructive than basic tools for prioritizing content, and less effective than even very basic prioritization tools like sorting articles by total views. – Closed Limelike Curves (talk) 19:11, 9 December 2024 (UTC)[reply]
I mean, I only said Vandalism "isn't necessarily a bad thing, long term", I don't think it's completely good, but maybe I should have added 'in small doses', fixing vandalism takes one or two clicks of the mouse in most cases and it seems, based entirely on my anecdotal experience, to sometimes have surprisingly good consequences; intuitively, you wouldn't prescribe vandalism, but these things have a way of finding a natural balance, and what's intuitive isn't necessarily what's right. One wouldn't prescribe dropping asteroids on the planet you're trying to foster life upon after you finally got it going, but we can be pretty happy that it happened! - And 'vandalism' is the very worst of what unregistered (and registered!) users get up to, there are many, many more unambiguously positive contributors than unambiguously malicious. JeffUK 20:03, 9 December 2024 (UTC)[reply]

Wikipedia donation ads

[edit]

In the school I go to, they have Chromebooks for Wikipedia, but they are reset every time you log on, so every time I go to Wikipedia I always get the annoying Wikipedia donation ads, and even though I press the x they come back the next time I log on, which can be many times in one day.

I am proposing that the popup system should ask for donations based off of the IP address and not based off of cookies. Instead of ads automatically popping up when Wikipedia is opened for the first time on the browser, make them pop up when it's opened for the first time on the IP address or user. — Preceding unsigned comment added by YisroelB501 (talkcontribs) 09:59, 29 November 2024 (UTC)[reply]

I have experienced similar issues when I've not been logged in, and they've definitely been getting more aggressive as of late. IP-based messages would be sensible, in my view. JayCubby 21:50, 29 November 2024 (UTC)[reply]
According to the Wikimedia Foundation, their banner fundraising campaign only runs once a year and doesn’t start until December 2nd, so it’s unclear where any ads you or I have seen before that date come from. Perhaps there’s some sort of security issue. 216.147.127.204 (talk) 21:56, 29 November 2024 (UTC)[reply]
Every country has a different schedule. Also, they test the system beforehand.
@YisroelB501, I agree that it sounds annoying, but unfortunately, I think fixing it would take more than a few weeks, and the main fundraising campaign only lasts until the first few days of January. So I think you'll keep seeing this problem for a while yet. @JBrungs (WMF) can inquire with the fundraising team about whether this would be feasible. WhatamIdoing (talk) 19:54, 30 November 2024 (UTC)[reply]
I’m in the United States, which is one of the countries where the Wikimedia Foundation says its fundraiser hasn’t started yet. The messages I saw looked just like a legitimate fundraiser, not test messages, and I didn’t sign up for any testing. I don’t know who is putting these messages up on Wikipedia or where the money is going. 216.147.127.204 (talk) 21:00, 30 November 2024 (UTC)[reply]
These are legitimate fundraising requests. They periodically run tests of the real thing (e.g., to see if they get complaints about the wording being confusing, to make sure that the credit card processing system works, etc.). If they had to change anything recently, they'll definitely be running tests right now, because they want everything to work on "Giving Tuesday". "Giving Tuesday" is the response from the non-profit world to Black Friday, Small Business Saturday, and Cyber Monday.
The money goes to the Wikimedia Foundation, a 501(c)(3) non-profit organization that supports Wikipedia and other projects like Wiktionary and Wikimedia Commons (where almost all of our photos are stored and organized).
If you want to donate, you should go to https://donate.wikimedia.org and do so. And if you don't, you should click the little button to make the banner go away for a week (or until you clear the cookies in your web browser). If you really want to make it go away, then go to Special:CreateAccount and create a free account (pick a username/password; no e-mail address or anything else required). After you're logged in, go to Special:Preferences#mw-prefsection-centralnotice-banners and turn off all fundraising banners. That will suppress all fundraising banners until you get logged out. (And if you do get logged out, just log back in again, and your prefs settings will take over again.) You can use the same account on all your devices, as long as you can remember the username and password. WhatamIdoing (talk) 22:24, 30 November 2024 (UTC)[reply]
The wording certainly was confusing, because I didn’t even know that these were tests! I had assumed that the Wikimedia Foundation was running its annual banner fundraising campaign until I saw a statement by a WMF employee that the WMF was not doing that. I couldn’t have complained about the wording’s being confusing because I wasn’t told that there was a test going on or where to make any reports about it.
Looking more closely at the “community collaboration page” and its mentions of testing, I gather that these are psychological tests seeking to determine which banners induce the most donations. Conducting such experimentation on human subjects without obtaining their informed consent is a serious breach of ethical standards. (Obviously there was no informed consent, because I wasn’t even informed!) Even the WMF’s own Code of Conduct calls such “psychological manipulation” an “unacceptable behaviour”.
I’d like to formally report this violation of the Code of Conduct, but I can’t find a way to do that. I see that the Code of Conduct has enforcement guidelines, but they contain vague language like “Reporting of UCoC violations should be possible by the target of the violation”…so, um, is reporting of UCoC violations possible, and if so, how? 216.147.127.204 (talk) 00:37, 1 December 2024 (UTC)[reply]
The UCoC says it is an abuse of power to engage in:
  • Psychological manipulation: Maliciously causing someone to doubt their own perceptions, senses, or understanding with the objective to win an argument or force someone to behave the way you want.
Did you actually doubt your own perceptions, senses, or understanding? Was producing those doubts (if, indeed, they were produced) in aid of someone winning an argument against you or forcing you to behave the way the other person wanted?
If you can't answer yes to both of those questions, then you've got no complaint to make under the UCoC.
I suggest reading up about the words you're using, like Informed consent, and maybe the difference between Psychological testing of a person and A/B testing an advertisement. WhatamIdoing (talk) 00:00, 3 December 2024 (UTC)[reply]
Tracking page customization information by IP address would require storing that information, and would slow down serving pages to non-logged in users, as they can't just be served a pre-composed page right from the cache servers. The only cache-friendly approach is for the client to hide the messages, which means using cookies. Plus, some network configurations have IP addresses shared between users that need to be supported. isaacl (talk) 22:49, 30 November 2024 (UTC)[reply]
Perhaps simply waiting a bit to show donation prompts at all would solve the issue (and prevent it from annoying people who have their cookies cleared all the time). JayCubby 03:01, 1 December 2024 (UTC)[reply]
If the delay is long enough that people won't see the donation banner, then the purpose of the banner is defeated. A slight pause will either leave a blank space for a period of time, or shift the page layout after the pause. Both of these are arguably more annoying than displaying the banner without a pause. isaacl (talk) 04:09, 2 December 2024 (UTC)[reply]
What I mean is a delay of days before the banner is shown to users, rather than bugging them on the first time a user goes to Wikipedia. JayCubby 17:19, 2 December 2024 (UTC)[reply]
The problem is that if IP addresses are tracked, then that information on how recently and how often an IP address has been used to access WP will have to be stored somewhere and retrieved every time that IP is used to access WP. There is also the point that it may not be the same IP being used the next time WP is accessed from that device. And if cookies are used to track the device, some devices are in schools or libraries, and not only is there no guarantee that the same person will be accessing WP two times in a row on the same device, it is very likely it will not be the same person, while other users may use more than one device to access WP. All of that will give inconsistent results on when a user sees an ad. Donald Albury 17:55, 2 December 2024 (UTC)[reply]
When is the start of this delay determined? The shared IP addresses in question from the original post aren't never-been-used-before IP addresses. I think for most readers this would just amount to shifting the start date of the campaign. Also note this won't solve either the caching problem or the problem of storing info for every IP address. isaacl (talk) 18:27, 2 December 2024 (UTC)[reply]
I have little technical expertise, but my thoughts are as follows:
User visits wp, a cookie is set.
Period of time, let's say a day or two elapses, and next time user visits wp, banner is displayed. No IPs needed. JayCubby 02:03, 3 December 2024 (UTC)[reply]
That defeats the purpose of the banner (which I appreciate suits those who don't want to see it, but doesn't help the campaign). isaacl (talk) 05:33, 3 December 2024 (UTC)[reply]
I don't see much value in displaying the banner on the first time, I myself wouldn't appreciate being begged for donations the first time I visited a webpage (or cleared my cookies). I don't know who is likely to donate, maybe it's effective to ask the first time a reader visits. JayCubby 16:09, 3 December 2024 (UTC)[reply]
I imagine the vast majority of readers is repeat traffic, in any case. isaacl (talk) 17:40, 3 December 2024 (UTC)[reply]
I have to wonder what the point of showing the banner to (likely) new visitors is. Maybe someone at the Foundation can chime in on their strategy. JayCubby 18:49, 3 December 2024 (UTC)[reply]
I think Wikipedia has far more repeat visitors than first-time visitors. These days, the only criterion I can think of to try to identify a group of likely first-time visitors is by age, and there isn't a good way to filter for that. isaacl (talk) 18:11, 4 December 2024 (UTC)[reply]
It's likely that there are some, but relatively few, first-time visitors. https://stats.wikimedia.org/ says the English Wikipedia sees about a billion unique devices each month, and there are only 1.5 billion English speakers in the world.
There will always be some first-time visitors. If we assume, to a first approximation, that everyone in the US will eventually visit the English Wikipedia, then purely due to about 10,000 babies being born each day, we'll (when they've gotten a little older) see about 10,000 first-time users each day. We get about 100,000,000 page views a day from the US, so 10,000 of those will be the first visit to the site.
That's roughly a one-in-10,000 risk that the person seeing the banner today is at the site for the first time. The other 9,999 times, it's at least the person's second page on the site. Since banners aren't run all the time, then it's only about 10% of each year's true first-timers who see any fundraising banners.
In other words, >99.9% of the time, this isn't something we should be worrying about. WhatamIdoing (talk) 00:20, 5 December 2024 (UTC)[reply]
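Restating the back-of-envelope arithmetic above as a quick check (the inputs are the rough figures from this thread, not measured values):

```python
# Back-of-envelope check of the figures above; rough inputs, not measurements.
us_births_per_day = 10_000         # proxy for eventual first-time visitors per day
us_pageviews_per_day = 100_000_000
share = us_births_per_day / us_pageviews_per_day
print(f"~{share:.2%} of daily US page views are first visits "
      f"(about 1 in {us_pageviews_per_day // us_births_per_day:,})")
```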
Are cookies not now being created with the partial rollout of masked temporary accounts? CMD (talk) 04:31, 2 December 2024 (UTC)[reply]
Cookies are currently used, regardless of the IP masking initiative. The original post proposed hiding the donation banner based on IP address, instead of using client cookies. isaacl (talk) 04:44, 2 December 2024 (UTC)[reply]

Hi everyone,

Thank you very much for engaging with the fundraiser and I wanted to briefly take the opportunity to reply to you here.

I will take the IP address idea to the fundraising team to think about for future fundraisers, though I am not sure this is possible.

A note on our testing. From July to November, we test different versions of banners with small numbers of readers on English Wikipedia before the primary campaign begins, which this year runs from Dec 2nd. We need to test our technical infrastructure, trial new payment methods, and new features (such as annual recurring). We also conduct a/b testing on the content in order to determine on aggregate what readers are responsive to. This allows us to run fewer fundraising messages on the site overall while ensuring that we raise the money needed to support the movement.

Ethical considerations around A/B testing are valid and actively discussed across industries. At the Foundation, we approach testing with care. While we analyze donation rates to understand effectiveness, we also rely on ethical guardrails, informed by reader research and discussions with volunteers about the content, for example on our community collaboration page. The banners are displayed publicly without collecting personal data or requiring user participation, and no identifying information is gathered. Additionally, our privacy policy states that some browser information is collected automatically to improve the experience of the website.

Testing in fundraising and marketing is a standard practice but we agree it must be conducted thoughtfully to avoid harm. We appreciate your feedback, as it helps us continue to refine our approach.

I hope this addresses your concerns. Sheetal Puri (WMF) (talk) 21:38, 2 December 2024 (UTC)[reply]

Point 1 of Procedural removal for inactive administrators, which currently reads "Has made neither edits nor administrative actions for at least a 12-month period", should be replaced with "Has made no administrative actions for at least a 12-month period". The current wording of point 1 means that an Admin who takes no admin actions keeps the tools provided they make at least a few edits every year, which really isn't the point. The whole purpose of adminship is to protect and advance the project. If an admin isn't using the tools then they don't need to have them. Mztourist (talk) 07:47, 4 December 2024 (UTC)[reply]
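One way to see what the proposed criterion would actually test: a rough sketch that asks the public log API whether an account has any logged block, delete or protect actions in the last 12 months. This is only an illustration, not a proposed implementation; the type list is deliberately incomplete, the username is a placeholder, and the unlogged admin actions discussed in the survey below are not captured at all.

```python
# Sketch: any logged block/delete/protect actions in the last 12 months?
# Public log API only; type list is incomplete and the username is a placeholder.
import requests
from datetime import datetime, timedelta, timezone

API = "https://en.wikipedia.org/w/api.php"
CUTOFF = (datetime.now(timezone.utc) - timedelta(days=365)).strftime("%Y-%m-%dT%H:%M:%SZ")

def has_recent_logged_actions(user, types=("block", "delete", "protect")):
    for log_type in types:
        params = {"action": "query", "list": "logevents", "leuser": user,
                  "letype": log_type, "leend": CUTOFF, "lelimit": "1",
                  "format": "json"}
        if requests.get(API, params=params).json()["query"]["logevents"]:
            return True     # at least one logged action of this type since CUTOFF
    return False

print(has_recent_logged_actions("ExampleAdmin"))   # placeholder username
```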

Endorsement/Opposition (Admin inactivity removal)

[edit]
  • Support as proposer. Mztourist (talk) 07:47, 4 December 2024 (UTC)[reply]
  • Oppose - this would create an unnecessary barrier to admins who, for real-life reasons, have limited engagement for a bit. Asking for the tools back at BN can feel like a faff. Plus, logged admin activity is a poor guide to actual admin activity. In some areas, maybe half of actions aren't logged? —Femke 🐦 (talk) 19:17, 4 December 2024 (UTC)[reply]
  • Oppose. First, not all admin actions are logged as such. One example which immediately comes to mind is declining an unblock request. In the logs, that's just a normal edit, but it's one only admins are permitted to make. That aside, if someone has remained at least somewhat engaged with the project, they're showing they're still interested in returning to more activity one day, even if real-life commitments prevent them from it right now. We all have things come up that take away our available time for Wikipedia from time to time, and that's just part of life. Say, for example, someone is currently engaged in a PhD program, which is a tremendously time-consuming activity, but they still make an edit here or there when they can snatch a spare moment. Do we really want to discourage that person from coming back around once they've completed it? Seraphimblade Talk to me 21:21, 4 December 2024 (UTC)[reply]
    We could declare specific types of edits which count as admin actions despite being mere edits. It should be fairly simple to write a bot which checks if an admin has added or removed specific texts in any edit, or made any of specific modifications to pages. Checking for protected edits can be a little harder (we need to check for protection at the time of edit, not for the time of the check), but even this can be managed. Edits to pages which match specific regular expression patterns should be trivial to detect. Animal lover |666| 11:33, 9 December 2024 (UTC)[reply]
  • Oppose There's no indication that this is a problem that needs fixing. SWATJester Shoot Blues, Tell VileRat! 00:55, 5 December 2024 (UTC)[reply]
  • Support Admins who don't use the tools should not have the tools. * Pppery * it has begun... 03:55, 5 December 2024 (UTC)[reply]
  • Oppose While I have never accepted "not all admin actions are logged" as a realistic reason for no logged actions in an entire year, I just don't see what problematic group of admins this is in response to. Previous tweaks to the rules were in response to admins that seemed to be gaming the system, that were basically inactive and when they did use the tools they did it badly, etc. We don't need a rule that isn't pointed at a provable, ongoing problem. Just Step Sideways from this world ..... today 19:19, 8 December 2024 (UTC)[reply]
  • Oppose If an admin is still editing, it's not unreasonable to assume that they are still up to date with policies, community norms etc. I see no particular risk in allowing them to keep their tools. Scribolt (talk) 19:46, 8 December 2024 (UTC)[reply]
  • Oppose: It feels like some people are trying to accelerate admin attrition and I don't know why. This is a solution in search of a problem. Gnomingstuff (talk) 07:11, 10 December 2024 (UTC)[reply]
  • Oppose Sure there is a problem, but the real problem I think is that it is puzzling why they are still admins. Perhaps we could get them all to make a periodic 'declaration of intent' or some such every five years that explains why they want to remain an admin. Alanscottwalker (talk) 19:01, 11 December 2024 (UTC)[reply]

Discussion (Admin inactivity removal)

[edit]
  • Making administrative actions can be helpful to show that the admin is still up-to-date with community norms. We could argue that if someone is active but doesn't use the tools, it isn't a big issue whether they have them or not. Still, the tools can be requested back following an inactivity desysop, if the formerly inactive admin changes their mind and wants to make admin actions again. For now, I don't see any immediate issues with this proposal. Chaotic Enby (talk · contribs) 08:13, 4 December 2024 (UTC)[reply]
  • Looking back at previous RFCs, in 2011 the reasoning was to reduce the attack surface for inactive account takeover, and in 2022 it was about admins who haven't been around enough to keep up with changing community norms. What's the justification for this besides "use it or lose it"? Further, we already have a mechanism (from the 2022 RFC) to account for admins who make a few edits every year. Anomie 12:44, 4 December 2024 (UTC)[reply]
  • I also note that not all admin actions are logged. Logging editing through full protection requires abusing the Edit Filter extension. Reviewing of deleted content isn't logged at all. Who will decide whether an admin's XFD "keep" closures are really WP:NACs or not? Do adminbot actions count for the operator? There are probably more examples. Currently we ignore these edge cases since the edits will probably also be there, but now if we can desysop someone who made 100,000 edits in the year we may need to consider them. Anomie 12:44, 4 December 2024 (UTC)[reply]
    I had completely forgotten that many admin actions weren't logged (and thus didn't "count" for activity levels), that's actually a good point (and stops the "community norms" arguments as healthy levels of community interaction can definitely be good evidence of that). And, since admins desysopped for inactivity can request the tools back, an admin needing the bit but not making any logged actions can just ask for it back. At this point, I'm not sure if there's a reason to go through the automated process of desysopping/asking for resysop at all, rather than just politely ask the admin if they still need the tools.
    I'm still very neutral on this by virtue of it being a pretty pointless and harmless process either way (as, again, there's nothing preventing an active admin desysopped for "inactivity" from requesting the tools back), but I might lean oppose just so we don't add a pointless process for the sake of it. Chaotic Enby (talk · contribs) 15:59, 4 December 2024 (UTC)[reply]
  • To me this comes down to whether the community considers it problematic for an admin to have tools they aren't using. Since it's been noted that not all admin actions are logged, and an admin who isn't using their tools also isn't causing any problems, I'm not sure I see a need to actively remove the tools from an inactive admin; in a worst-case scenario, isn't this encouraging an admin to (potentially mis-)use the tools solely in the interest of keeping their bit? There also seems to be somewhat of a bad-faith assumption to the argument that an admin who isn't using their tools may also be falling behind on community norms. I'd certainly like to hope that if I was an admin who had been inactive that I would review P&G relevant to any admin action I intended to undertake before I executed. DonIago (talk) 15:14, 4 December 2024 (UTC)[reply]
  • As I have understood it, the original rationale for desysopping after no activity for a year was the perception that an inactive account was at higher danger of being hijacked. It had nothing to do with how often the tools were being used, and presumably, if the admin was still editing, even if not using the tools, the account was less likely to be hijacked. - Donald Albury 22:26, 4 December 2024 (UTC)[reply]
    And also, if the account of an active admin was hijacked, both the account owner and those they interact with regularly would be more likely to notice the hijacking. The sooner a hijacked account is identified as hijacked, the sooner it is blocked/locked which obviously minimises the damage that can be done. Thryduulf (talk) 00:42, 5 December 2024 (UTC)[reply]
  • I was not aware that not all admin actions are logged, obviously they should all be correctly logged as admin actions. If you're an Admin you should be doing Admin stuff, if not then you obviously don't need the tools. If an Admin is busy IRL then they can either give up the tools voluntarily or get desysopped for inactivity. The "Asking the tools back at BN can feel like a faff." isn't a valid argument, if an Admin has been desysopped for inactivity then getting the tools back should be "a faff". Regarding the comment that "There's no indication that this is a problem needs fixing." the problem is Admins who don't undertake admin activity, don't stay up to date with policies and norms, but don't voluntarily give up the tools. The 2022 change was about total edits over 5 years, not specifically admin actions and so didn't adequately address the issue. Mztourist (talk) 03:23, 5 December 2024 (UTC)[reply]
    obviously they should all be correctly logged as admin actions - how would you log actions that are administrative actions due to context/requiring passive use of tools (viewing deleted content, etc.) rather than active use (deleting/undeleting, blocking, and so on)/declining requests where accepting them would require tool use? (e.g. closing various discussions that really shouldn't be NAC'd, reviewing deleted content, declining page restoration) Maybe there are good ways of doing that, but I haven't seen any proposed the various times this subject came up. Unless and until "soft" admin actions are actually logged somehow, "editor has admin tools and continues to engage with the project by editing" is the closest, if very imperfect, approximation to it we have, with criterion 2 sort-of functioning to catch cases of "but these specific folks edit so little over a prolonged time that it's unlikely they're up-to-date and actively engaging in soft admin actions". (I definitely do feel criterion 2 could be significantly stricter, fwiw) AddWittyNameHere 05:30, 5 December 2024 (UTC)[reply]
    Not being an Admin I have no idea how their actions are or aren't logged, but is it a big ask that Admins perform at least a few logged Admin actions in a year? The "imperfect, approximation" that "editor has admin tools and continues to engage with the project by editing" is completely inadequate to capture Admin inactivity. Mztourist (talk) 07:06, 6 December 2024 (UTC)[reply]
    Why is it "completely inadequate"? Thryduulf (talk) 10:32, 6 December 2024 (UTC)[reply]
    I've been a "hawk" regarding admin activity standards for a very long time, but this proposal comes off as half-baked. The rules we have now are the result of careful consideration and incremental changes aimed at specific, provable issues with previous standards. While I am not a proponent of "not all actions are logged" as a blanket excuse for no logged actions in several years, it is feasible that an admin could be otherwise fully engaged with the community while not having any logged actions. We haven't been having trouble with admins who would be removed by this, so where's the problem? Just Step Sideways from this world ..... today 19:15, 8 December 2024 (UTC)[reply]

"Blur all images" switch

[edit]

Although I know about WP:NOTCENSORED, I propose that the Vector 2022 and Minerva Neue skins (+the Wikipedia mobile apps) have a "blur all images" toggle that blurs all the images on all pages (requiring clicking on them to view them), which simplifies the process of doing HELP:NOSEE as that means:

  1. You don't need to create an account to hide all images.
  2. You don't need any complex JavaScript or CSS installation procedures. Not even browser extensions.
  3. You can blur all images in the mobile apps, too.
  4. It's all done with one push of a button. No extra steps needed.
  5. Blurring all images > hiding all images. The content of a blurred image could be easily memorized, while a completely hidden image is difficult to compare to the others.

And it shouldn't be limited to just Wikipedia. This toggle should be available on all other WMF projects and MediaWiki-powered wikis, too. 67.209.128.126 (talk) 15:26, 5 December 2024 (UTC)[reply]

Sounds good. Damon will be thrilled. Martinevans123 (talk) 15:29, 5 December 2024 (UTC)[reply]
Sounds like something I can try to make a demo of as a userscript! Chaotic Enby (talk · contribs) 15:38, 5 December 2024 (UTC)[reply]
User:Chaotic Enby/blur.js should do the job, although I'm not sure how to deal with the Page Previews extension's images. Chaotic Enby (talk · contribs) 16:16, 5 December 2024 (UTC)[reply]
This will be a problem for non-registered users, as the default for them would clearly be to leave images unblurred. — Masem (t) 15:40, 5 December 2024 (UTC)[reply]
Better to show all images by default for all users. If you clear your cookies often, you can simply change the toggle every time. 67.209.128.132 (talk) 00:07, 6 December 2024 (UTC)[reply]
That's my point: if you are unregistered, you will see whatever the default setting is (which I assume will be unblurred, which might lead to more complaints). We had similar problems dealing with image thumbnail sizes, a setting that unregistered users can't adjust. Masem (t) 01:10, 6 December 2024 (UTC)[reply]
I'm confused about how this would lead to more complaints. Right now, logged-out users see every image without obfuscation. After this toggle rolls out, logged-out users would still see every image without obfuscation. What fresh circumstance is leading to new complaints? Zanahary 07:20, 12 December 2024 (UTC)[reply]
Well, we'd be putting in an option to censor, but not actively doing it. People will have issues with that. Lee Vilenski (talkcontribs) 10:37, 12 December 2024 (UTC)[reply]
Isn't the page Help:Options to hide an image "an option to censor" we've put in? Gråbergs Gråa Sång (talk) 11:09, 12 December 2024 (UTC)[reply]
I'm not opposed to this, if it can be made to work, fine. Gråbergs Gråa Sång (talk) 19:11, 5 December 2024 (UTC)[reply]
What would be the goal of a "blur all images" option? It seems too tailored. But a "hide all images" option could be suitable. EEpic (talk) 06:40, 11 December 2024 (UTC)[reply]
Simply removing them might break page layout, so images could be replaced with an equally sized placeholder. JayCubby 13:46, 13 December 2024 (UTC)[reply]

Could there be an option to simply not load images for people with a low-bandwidth connection or who don't want them? Travellers & Tinkers (talk) 16:36, 5 December 2024 (UTC)[reply]

I agree. This way, the options would be:
  • Show all images
  • Blur all images
  • Hide all images
It would honestly be better with your suggestion. 67.209.128.132 (talk) 00:02, 6 December 2024 (UTC)[reply]
Of course, it will do nothing to appease the "These pics shouldn't be on WP at all" people. Gråbergs Gråa Sång (talk) 06:52, 6 December 2024 (UTC)[reply]
“Commons be thataway” is what we should tell them Dronebogus (talk) 18:00, 11 December 2024 (UTC)[reply]
I suggest that the "hide all images" option display the file name if possible. Between file name and caption (which admittedly are often similar, but not always), there should be sufficient clue whether an image will be useful (and some suggestion, though not reliably so, of whether it may offend a sensibility). -- Nat Gertler (talk) 17:59, 11 December 2024 (UTC)[reply]
For low-bandwidth or expensive bandwidth -- many folks are on mobile plans which charge for bandwidth. -- Nat Gertler (talk) 14:28, 11 December 2024 (UTC)[reply]

Regarding not limiting image management choices to Wikipedia: that's why it's better to manage this on the client side. Anyone needing to limit their bandwidth usage, or to otherwise decide individually on whether or not to load each photo, will likely want to do this generally in their web browsing. isaacl (talk) 18:43, 6 December 2024 (UTC)[reply]

Definitely a browser issue. You can get plug-ins for Chrome right now that will do exactly this, and there's no need for Wikipedia/MediaWiki to implement anything. — The Anome (talk) 18:48, 6 December 2024 (UTC)[reply]

I propose something a bit different: all images on the bad images list can only be viewed with a user account that has been verified to be over 18 with government issued ID. I say this because in my view there is absolutely no reason for a minor to view it. Jayson (talk) 23:41, 8 December 2024 (UTC)[reply]

Well, that means readers will be forced to not only create an account, but also disclose sensitive personal information, just to see encyclopedic images. That is pretty much the opposite of a free encyclopedia. Chaotic Enby (talk · contribs) 23:44, 8 December 2024 (UTC)[reply]
I can support allowing users to opt to blur or hide some types of images, but this needs to be opt-in only. By default, show all images. And I'm also opposed to any technical restriction which requires self-identification to overcome, except for cases where the Foundation deems it necessary to protect private information (checkuser, oversight-level hiding, or emails involving private information). Please also keep in mind that even if a user sends a copy of an ID which indicates the individual person's age, there is no way to verify that it was the user's own ID which had been sent. Animal lover |666| 11:25, 9 December 2024 (UTC)[reply]
Also, the bad images list is a really terrible standard. Around 6% of it is completely harmless content that happened to be abused. And even some of the “NSFW” images are perfectly fine for children to view, for example File:UC and her minutes-old baby.jpg. Are we becoming Texas or Florida now? Dronebogus (talk) 18:00, 11 December 2024 (UTC)[reply]
You could've chosen a much better example like dirty toilet or the flag of Hezbollah... Traumnovelle (talk) 19:38, 11 December 2024 (UTC)[reply]
Well, yes, but I rank that as “harmless”. I don’t know why anyone would consider a woman with her newborn baby so inappropriate for children it needs to be censored like hardcore porn. Dronebogus (talk) 14:53, 12 December 2024 (UTC)[reply]
The Hezbollah flag might be blacklisted because it's copyrighted, but placed in articles by uninformed editors (though one of JJMC89's bots automatically removes NFC files from pages). We have File:InfoboxHez.PNG for those uses. JayCubby 16:49, 13 December 2024 (UTC)[reply]
I support this proposal. It’s a very clean compromise between the “think of the children” camp and the “freeze peach” camp. Dronebogus (talk) 17:51, 11 December 2024 (UTC)[reply]
Let me dox myself so I can view this image. Even Google image search doesn't require something this stringent. Lee Vilenski (talkcontribs) 19:49, 11 December 2024 (UTC)[reply]
Oppose. We should not be providing toggles to censor. ValarianB (talk) 15:15, 12 December 2024 (UTC)[reply]
What about an option to disable images entirely? It might use significantly less data. JayCubby 02:38, 13 December 2024 (UTC)[reply]
This is an even better idea as an opt-in toggle than the blur one. Load no images by default, and let users click a button to load individual images. That has a use beyond sensitivity. Zanahary 02:46, 13 December 2024 (UTC)[reply]
Yes I like that idea even better. I think in any case we should use alt text to describe the image so people don’t have to play Russian roulette based on potentially vague or nonexistent descriptions, i.e. without alt text an ignorant reader would have no idea the album cover for Virgin Killer depicts a nude child in a… questionable pose. Dronebogus (talk) 11:42, 13 December 2024 (UTC)[reply]
An option to replace images with alt text seems both much more useful and much more neutral as an option. There are technical reasons why a user might want to not load images (or only selectively load them based on the description), so that feels more like a neutral interface setting. An option to blur images by default sends a stronger message that images are dangerous.--Trystan (talk) 16:24, 13 December 2024 (UTC)[reply]
Also it'd negate the bandwidth savings somewhat (assuming an image is displayed as a low pixel-count version). I'm of the belief that Wikipedia should have more features tailored to the reader. JayCubby 16:58, 13 December 2024 (UTC)[reply]

Class icons in categories

This is something that has frequently occurred to me as a potentially useful feature when browsing categories, but I have never quite gotten around to actually proposing it until now.

Basically, I'm thinking it could be very helpful to have content-assessment class icons appear next to article entries in categories. This should be helpful not only to readers, to guide them to the more complete entries, but also to editors, to alert them to articles in the category that are in need of work. Thoughts? Gatoclass (talk) 03:02, 7 December 2024 (UTC)[reply]

If we go with this, I think there should be only 4 levels - Stub, Average (i.e. Start, C, or B), GA, & FA.
There are significant differences between Start, C, and B, but there's no consistent effort to grade these articles correctly and consistently, so it might be better to lump them into one group. Especially if an article goes down in quality, almost nobody will bother to demote it from B to C. ypn^2 04:42, 8 December 2024 (UTC)[reply]
Isn't that more of an argument for consolidation of the existing levels rather than reducing their number for one particular application?
Other than that, I think I would have to agree that there are too many levels - the difference between Start and C class, for example, seems quite arbitrary, and I'm not sure of the usefulness of A class - but the lack of consistency within levels is certainly not confined to these lower levels, as GAs, and even FAs, can vary enormously in quality. But the project nonetheless finds the content assessment model to be useful, and I still think their usefulness would be enhanced by adding them to categories (with, perhaps, an ability to opt in or out of the feature).
I might also add that adding content assessment class icons to categories would be a good way to draw more attention to them and encourage users to update them when appropriate. Gatoclass (talk) 14:56, 8 December 2024 (UTC)[reply]
I believe anything visible in reader-facing namespaces needs to be more definitively accurate than in editor-facing namespaces. So I'm fine having all these levels on talk pages, but not on category pages, unless they're applied more rigorously.
On the other hand, with FAs and GAs, although standards vary within a range, they do undergo a comprehensive, well-documented, and consistent process for promotion and demotion. So just like we have an icon at the top of those articles (and in the past, next to interwiki links), I could hear putting them in categories. [And it's usually pretty obvious whether something's a stub or not.] ypn^2 18:25, 8 December 2024 (UTC)[reply]
Isn't the display of links on category pages entirely dependent on the MediaWiki software? We don't even have short descriptions displayed, which would probably be considerably more useful.
Any function that has to retrieve content from member articles (much less their talkpages) is likely to be somewhat computationally expensive. Someone with more technical knowledge may have better information. Folly Mox (talk) 18:01, 8 December 2024 (UTC)[reply]
Yes, this will definitely require MediaWiki development, but probably nothing too complex. And I wonder why this will be more computationally expensive than scanning articles for [ [Category: ] ] tags in the first place. ypn^2 18:27, 8 December 2024 (UTC)[reply]
"And I wonder why this will be more computationally expensive than scanning articles for [ [Category: ] ] tags in the first place" - my understanding is that this is not what happens. When a category is added to or removed from an article, the software adds or removes that page as a record in a database, and that database is what is read when viewing the category page. Thryduulf (talk) 20:14, 8 December 2024 (UTC)[reply]
I think that in the short term, this could likely be implemented using a user script (displaying short descriptions would also be nice). Longer-term, if done via an extension, I suggest limiting the icons to GAs and FAs for readers without accounts, as other labels aren't currently accessible to them. (Whether this should change is a separate but useful discussion). — Frostly (talk) 23:06, 8 December 2024 (UTC)[reply]
I'd settle for a user script. Who wants to write it? :) Gatoclass (talk) 23:57, 8 December 2024 (UTC)[reply]
As an FYI for whoever decides to write it, Special:ApiHelp/query+pageassessments may be useful to you. Anomie 01:04, 9 December 2024 (UTC)[reply]
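For whoever does pick this up, a rough sketch of the kind of query involved follows; the parameter names and the shape of the response are assumptions based on Special:ApiHelp/query+pageassessments and Special:ApiHelp/query+categorymembers, so check the live API help before relying on them.

```typescript
// Sketch: fetch content-assessment classes for the articles in a category,
// combining the categorymembers generator with prop=pageassessments.
async function fetchAssessments(category: string): Promise<void> {
  const params = new URLSearchParams({
    action: 'query',
    format: 'json',
    origin: '*',                    // allows anonymous cross-origin requests
    generator: 'categorymembers',   // iterate over the pages in the category
    gcmtitle: category,
    gcmnamespace: '0',              // articles only
    gcmlimit: '50',
    prop: 'pageassessments',
    palimit: 'max',
  });
  const res = await fetch(`https://en.wikipedia.org/w/api.php?${params}`);
  const data = await res.json();
  for (const page of Object.values<any>(data.query?.pages ?? {})) {
    // A page may carry assessments from several WikiProjects; take the first one.
    const first = Object.values<any>(page.pageassessments ?? {})[0];
    console.log(page.title, first ? first.class : '(unassessed)');
  }
}

fetchAssessments('Category:American Civil War').catch(console.error);
```

A user script would then only need to map each class to its icon and attach it next to the corresponding entry on the category page.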
@Gatoclass, the Wikipedia:Metadata gadget already exists. Go to Special:Preferences#mw-prefsection-gadgets-gadget-section-appearance and scroll about two-thirds of the way through that section.
I strongly believe that ordinary readers don't care about this kind of inside baseball, but if you want it for yourself, then use the gadget or fork its script. Changing this old gadget from "adding text and color" to "displaying an icon" should be relatively simple. WhatamIdoing (talk) 23:43, 12 December 2024 (UTC)[reply]

Space-saving front page change

The following discussion is an archived record of a request for comment. Please do not modify it. No further edits should be made to this discussion. A summary of the conclusions reached follows.
Consensus against removing welcome banner entirely. I'm withdrawing this for now so I can try to workshop a compromise at the idea lab. – Closed Limelike Curves (talk) 17:46, 8 December 2024 (UTC)[reply]

Right now, the front-page has a huge "Welcome to Wikipedia" message, presumably to remind readers that typing en.wikipedia.org leads to Wikipedia. This displaces about half the DYKs and "On this day" content. Given this is possibly the single most valuable piece of screen real estate on the internet, I think we should be spending it on something that provides more information.

I have two alternatives:

  1. Remove the banner entirely.
  2. Move it to the bottom of the page, replacing "Welcome to Wikipedia" with "Brought to you by Wikipedia". An example can be found at User:Closed Limelike Curves/Main Page.

This is already done on mobile, but would be extended to desktop.

Support (move to bottom of page)

  1. Support as proposer. Mild preference for removing the message entirely as redundant.– Closed Limelike Curves (talk) 05:32, 8 December 2024 (UTC)[reply]
  2. Support * Pppery * it has begun... 07:39, 8 December 2024 (UTC)[reply]
  3. Support option 2 - looks better without removing the banner completely. '''[[User:CanonNi]]''' (talkcontribs) 14:08, 8 December 2024 (UTC)[reply]

Oppose

  1. Oppose. Welcoming users and explaining what Wikipedia is is a valid purpose for the Main Page. Sdkbtalk 07:36, 8 December 2024 (UTC)[reply]
  2. Oppose. While the message isn't information-dense like the rest of the Main Page, it is much more welcoming for a new visitor, and easier on the eyes, than immediately starting with four blocks of text. Chaotic Enby (talk · contribs) 13:09, 8 December 2024 (UTC)[reply]
  3. Oppose per above. C F A 13:58, 8 December 2024 (UTC)[reply]
  4. Oppose per Sdkb. – DreamRimmer (talk) 14:08, 8 December 2024 (UTC)[reply]
  5. Oppose, always good to put out a welcome mat. Reader and site friendly (note: using Monobook on a laptop I'm not aware of how the page looks on mobile). Randy Kryn (talk) 14:23, 8 December 2024 (UTC)[reply]
    Doesn't a welcome mat usually go on the floor, instead of the ceiling? – Closed Limelike Curves (talk) 17:26, 8 December 2024 (UTC)[reply]
    Yes, but it's the first thing you see. Cremastra ‹ uc › 17:33, 8 December 2024 (UTC)[reply]
  6. Oppose - because it's too important. 2601AC47 (talk·contribs·my rights) Isn't a IP anon 14:24, 8 December 2024 (UTC)[reply]
    And for those curious about why there aren't, say, content portals: Lookie here. 2601AC47 (talk·contribs·my rights) Isn't a IP anon 14:33, 8 December 2024 (UTC)[reply]
  7. Oppose per Sdkb. and Randy Kryn. Thryduulf (talk) 14:59, 8 December 2024 (UTC)[reply]
  8. Oppose The Welcome message is valuable and it makes sense for it to be at the top; the message includes a link to Wikipedia for those unfamiliar with the site, and "anyone can edit" directs readers (and prospective editors) to Help:Introduction to Wikipedia. The article count statistic is a fun way to show how extensive the English Wikipedia has become. (My only suggestion would be to include a stat about the number of active editors in the message, preferably after the article count stat.) Some1 (talk) 15:06, 8 December 2024 (UTC)[reply]
  9. Oppose This proposal essentially restricts informing readers about one of Wikipedia’s core ideas: anyone can edit. The current text on the main page is important because it reminds readers that we’re a free encyclopedia where anyone can contribute. The article count also matters—it shows how much Wikipedia has grown since 2001 and how many topics it covers. Another point to consider is that moving it to the bottom isn't practical. I don't think readers typically scroll that far down—personally, I rarely do. This could lead to fewer contributions from new users. The AP (talk) 15:29, 8 December 2024 (UTC)[reply]
  10. Oppose (strongly). Saying welcome to Wikipedia is just basic courtesy and draws readers in. That's the least important part. Why on earth would we want to hide the fact that we're the free encyclopedia anyone can edit? We need more information about how to edit on the MP, not less! We want to say, front and centre, that we're a volunteer-run free encyclopedia. Remove it, and we end up looking like Britannica. The banner says who we are, what we do, and what we've built, in a fairly small space with the help of links that draw readers in and encourage them to contribute. Aesthetically, I also think it pleasantly frames the main content; it is a preamble, an unchanging pale grey first course. Removing or moving it for the sake of space is like ripping the dust cover off a hardcover because it takes up too much space and readers shouldn't be encumbered with reading a blurb or looking at the cover art (although cover art is often pretty bad these days...). I really don't see any benefit to tearing it off the Main Page. Cremastra ‹ uc › 17:31, 8 December 2024 (UTC)[reply]
    "Why on earth would we want to hide the fact that we're the free encyclopedia anyone can edit?" We're not; it's still in the giant logo in the top-left. (Are we sure 2 banners is enough? Maybe we need a 3rd one.) – Closed Limelike Curves (talk) 17:35, 8 December 2024 (UTC)[reply]

Discussion

Do you have another good reason that the top of the MP should be taken down? Do you have an alternative banner in mind? Moreover, this needs a much wider audience: the ones on the board. 2601AC47 (talk·contribs·my rights) Isn't a IP anon 14:27, 8 December 2024 (UTC)[reply]

On which board? This is both at the village pump and at WP:CENT, so it should reach as many people as possible. Chaotic Enby (talk · contribs) 15:13, 8 December 2024 (UTC)[reply]
Them. They may not take too kindly to this, and we all should know by now. 2601AC47 (talk·contribs·my rights) Isn't a IP anon 15:26, 8 December 2024 (UTC)[reply]
This is a strange concern; of course a community consensus can change the main page's content. It doesn't seem to be happening, but that has nothing to do with the WMF. ~ ToBeFree (talk) 16:16, 8 December 2024 (UTC)[reply]

Do you have an alternative banner in mind?

I avoided specific replacements because I didn't want to get bogged down in the weeds of whether we should make other changes. The simplest use of this space would be to increase the number of DYK hooks by 50%, letting us clear out a huge chunk of the backlog. – Closed Limelike Curves (talk) 17:43, 8 December 2024 (UTC)[reply]
The discussion above is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.

Cleaning up NA-class categories

We have a long-standing system of double classification of pages, by quality (stub, start, C, ...) and importance (top, high, ...). And then there are thousands of pages that don't need either of these: portals, redirects, categories, ... As a result, most of these pages have a double or even triple categorization, e.g. Portal talk:American Civil War/This week in American Civil War history/38 is in Category:Portal-Class United States articles, Category:NA-importance United States articles, and Category:Portal-Class United States articles of NA-importance.

My suggestion would be to put those pages only in the "Class" category (in this case Category:Portal-Class United States articles), and only give that category an NA rating. Doing this for all these subcats (File, Template, ...) would bring the 276,534 (!) pages currently in Category:NA-importance United States articles back to near zero, leaving only the anomalies which probably need a different importance rating (and thus making it a useful cleanup category).

It is unclear why we have two systems (3-cat vs. 2-cat): the tags on Category talk:2nd millennium in South Carolina (without class or NA indication) have a different effect than the tags on e.g. Category talk:4 ft 6 in gauge railways in the United Kingdom. My proposal is to make the behaviour the same, and in both cases to reduce it to the class category only (and make the class categories themselves categorize as "NA-importance"). This would only require an update to the templates/modules behind this, not to the pages directly, I think. Fram (talk) 15:15, 9 December 2024 (UTC)[reply]

Are there any pages that don't have the default? E.g. are there any portals or Category talk: pages rated something other than N/A importance? If not, then I can't see any downsides to the proposal as written. If there are exceptions, then as long as the revised behaviour allows the default to be overridden when desired, it would again seem beneficial. Thryduulf (talk) 16:36, 9 December 2024 (UTC)[reply]
As far as I know, there are no exceptions. And I believe that one can always override the default behaviour with a local parameter. @Tom.Reding: I guess you know these things better and/or know who to contact for this. Fram (talk) 16:41, 9 December 2024 (UTC)[reply]
Looking a bit further, there do seem to be exceptions, but I wonder why we would e.g. have redirects which are of high importance to a project (Category:Redirect-Class United States articles of High-importance), especially when one considers that in some cases the targets have a lower importance than the redirects, e.g. Talk:List of Mississippi county name etymologies. Fram (talk) 16:46, 9 December 2024 (UTC)[reply]
I was imagining high importance United States redirects to be things like USA but that isn't there and what is is a very motley collection. I only took a look at one, Talk:United States women. As far as I can make out the article was originally at this title but later moved to Women in the United States over a redirect. Both titles had independent talk pages that were neither swapped nor combined, each being rated high importance when they were the talk page of the article. It seems like a worthwhile exercise for the project to determine whether any of those redirects are actually (still?) high priority but that's independent of this proposal. Thryduulf (talk) 17:17, 9 December 2024 (UTC)[reply]
Category:Custom importance masks of WikiProject banners (15) is where to look for projects that might use an importance other than NA for cats, or other deviations.   ~ Tom.Reding (talkdgaf)  17:54, 9 December 2024 (UTC)[reply]
Most projects don't use this double intersection (as can be seen from the number of categories in Category:Articles by quality and importance, compared to Category:GA-Class articles). I personally feel that a bot-updated page like User:WP 1.0 bot/Tables/Project/Television is enough here and requires less category maintenance (creating, moving, updating, etc.) for a system that is underused. Gonnym (talk) 17:41, 9 December 2024 (UTC)[reply]
Support this, even if there might be a few exceptions, it will make them easier to spot and deal with rather than having large unsorted NA-importance categories. Chaotic Enby (talk · contribs) 18:04, 9 December 2024 (UTC)[reply]

Okay, does anyone know what should be changed to implement this? I presume this comes from Module:WikiProject banner, I'll inform the people there about this discussion. Fram (talk) 14:49, 13 December 2024 (UTC)[reply]

Category:Current sports events

I would like to propose that sports articles should be left in the Category:Current sports events for 48 hours after these events have finished. I'm sure many Wikipedia sports fans (including me) open CAT:CSE first and then click on a sporting event in that list. And we would like to do so in the coming days after the event ends to see the final standings and results.

Currently, this category is being removed from articles too early, sometimes even before the event ends. Just like yesterday. AnishaShar, what do you say about that?

So I would like to ask you to consider my proposal. Or, if you have a better suggestion, please comment. Thanks, Maiō T. (talk) 16:25, 9 December 2024 (UTC)[reply]

Thank you for bringing up this point. I agree that leaving articles in the Category:Current sports events for a short grace period after the event concludes—such as 48 hours—would benefit readers who want to catch up on the final standings and outcomes. AnishaShar (talk) 18:19, 9 December 2024 (UTC)[reply]
Sounds reasonable on its face. Gatoclass (talk) 23:24, 9 December 2024 (UTC)[reply]
How would this be policed though? Usually that category is populated by the {{current sport event}} template, which every user is going to want to remove immediately after it finishes. Lee Vilenski (talkcontribs) 19:51, 11 December 2024 (UTC)[reply]

User-generated conflict maps

In a number of articles we have (or had) user-generated conflict maps. I think the main ones at the moment are Syrian civil war and Russian invasion of Ukraine. The war in Afghanistan had one until it was removed as poorly sourced in early 2021. As you can see from a brief review of Talk:Syrian civil war, the map has become quite controversial there too.

My personal position is that sourcing conflict maps entirely from reports of occupation of individual towns by one side or another at various times, typically from Twitter accounts of dubious reliability, to produce a map of the current situation in an entire country (which is the process described here), is WP:SYNTH/WP:OR. I also don't see liveuamap.com as necessarily being a highly reliable source either, since it basically is a WP:SPS/Wiki-style user-generated source, and when it was discussed at RSN editors there generally agreed with that. I can understand it if a reliable source produces a map that we can use, but that isn't what's happening here. FOARP (talk) 16:57, 11 December 2024 (UTC)[reply]

Part of the reason this flies under the radar on Wikipedia is that the information ultimately isn't hosted on EN WP but instead on Commons, where reliable sourcing etc. is not a requirement. However, it is being used on Wikipedia to present information to users and therefore should fall within our PAGs.

I think these maps should be deprecated unless they can be shown to be sourced entirely to a reliable source, and not assembled out of individual reports including unreliable WP:SPS sources. FOARP (talk) 16:57, 11 December 2024 (UTC)[reply]

A lot of the maps seem like they run into SYNTH issues, because if they were based on a single source they would likely run into copyright issues as derivative works. I would agree though that if an image does not have clear sourcing it shouldn't be used, as it runs into primary/synth issues. Der Wohltemperierte Fuchs talk 17:09, 11 December 2024 (UTC)[reply]
Though simple information isn't copyrightable, if it's sufficiently visually similar I suppose that might constitute a copyvio. JayCubby 02:32, 13 December 2024 (UTC)[reply]
I agree these violate OR and at least the spirit of NOTNEWS and should be deprecated. I remember during the Wagner rebellion we had to fix one that incorrectly depicted Wagner as controlling a swath of Russia. Levivich (talk) 05:47, 13 December 2024 (UTC)[reply]

Google Maps: Maps, Places and Routes

Google Maps#Google Maps API

Google Maps has the following categories: Maps, Places and Routes

for example: https://www.google.com/maps/place/Sheats+Apartments/@34.0678041,-118.4494914,3a,75y,90t/data=!...........

Most significant locations have a www.google.com/maps/place/___ URL.

These should be acknowledged and used somehow, perhaps via GeoHack.

69.181.17.113 (talk) 00:22, 12 December 2024 (UTC)[reply]

Allowing page movers to enable two-factor authentication

I would like to propose that members of the page mover user group be granted the oathauth-enable permission. This would allow them to use Special:OATH to enable two-factor authentication on their accounts.

Rationale (2FA for page movers)

The page mover guideline already obligates people in that group to have a strong password, and failing to follow proper account security processes is grounds for revocation of the right. This is because the group allows its members to (a) move pages along with up to 100 subpages, (b) override the title blacklist, and (c) have an increased rate limit for moving pages. In the hands of a vandal, these permissions could allow significant damage to be done very quickly, which is likely to be difficult to reverse.

Additionally, there is precedent for granting 2FA access to users with rights that could be extremely dangerous in the event of account compromise: for instance, template editors, importers, and transwiki importers have the ability to enable this access, as do holders of most administrator-level permissions (sysop, checkuser, oversight, bureaucrat, steward, interface admin).

Discussion (2FA for page movers)

  1. Support as proposer. JJPMaster (she/they) 20:29, 12 December 2024 (UTC)[reply]
  2. Support (but if you really want 2FA you can just request permission to enable it on Meta) * Pppery * it has begun... 20:41, 12 December 2024 (UTC)[reply]
    For the record, I do have 2FA enabled. JJPMaster (she/they) 21:47, 12 December 2024 (UTC)[reply]
  3. Support as a pagemover myself, given the potential risks and need for increased security. I haven't requested it yet as I wasn't sure I qualified and didn't want to bother the stewards, but having oathauth-enable by default would make the process a lot more practical. Chaotic Enby (talk · contribs) 22:30, 12 December 2024 (UTC)[reply]
    Anyone is qualified - the filter for stewards granting 2FA is just "do you know what you're doing". * Pppery * it has begun... 22:46, 12 December 2024 (UTC)[reply]
  4. Question When's the last time a page mover has had their account compromised and used for pagemove vandalism? Edit 14:35 UTC: I'm not doubting the nom, rather I'm curious and can't think of a better way to phrase things. JayCubby 02:30, 13 December 2024 (UTC)[reply]