Wikipedia:Bot requests/Archive 8
This is an archive of past discussions. Do not edit the contents of this page. If you wish to start a new discussion or revive an old one, please do so on the current talk page.
Too much has been allowed to creep into this category and be reused without watching. The template and category should have been deleted a long time ago, but it now represents so much work in removing images from articles that nobody would dare approach it. Can a bot deal with removing the images from articles?
Bot for researchers needed
- Crossposted to Wikipedia_talk:Bots#Bot_for_researchers_needed.
A bot (or perhaps a script, or some other tool) would be very useful to
- generate a list of people who have edited a target article
- generate data that can be fed to a statistical analysis program (comma-separated values format is quite simple and popular) which would tell:
- how often each individual edited the article
- when each edit was made
- how much content each edit changed
- whether the edit was marked as minor
- whether an edit summary was used
- whether the edit was vandalism (possibly using parts of Tawkerbot2)
- whether the edit was a vandalism revert
- whether the edit was part of a revert war
Even one or a few of these, if implemented, would be much appreciated! If we already have tools that can answer some of these questions, please let me know.--
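A minimal sketch of the export step, assuming revision metadata (user, timestamp, size, minor flag, summary) has already been fetched from the API or a dump. The field names and the revert heuristic are illustrative, not a fixed design:

```python
import csv
import io

def history_to_csv(revisions):
    """Turn a list of revision records into CSV rows suitable for
    statistical software. Each record is a dict with the keys used
    below; a real bot would build these from the MediaWiki API or a
    database dump."""
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=[
        "user", "timestamp", "bytes_changed", "minor",
        "has_summary", "is_revert"])
    writer.writeheader()
    prev_size = 0
    for rev in revisions:
        summary = rev.get("comment", "")
        writer.writerow({
            "user": rev["user"],
            "timestamp": rev["timestamp"],
            "bytes_changed": rev["size"] - prev_size,
            "minor": int(rev.get("minor", False)),
            "has_summary": int(bool(summary.strip())),
            # Crude revert heuristic: summaries starting with "revert"/"rv".
            # Detecting actual revert wars needs diff comparison.
            "is_revert": int(summary.lower().startswith(("revert", "rv"))),
        })
        prev_size = rev["size"]
    return out.getvalue()
```

The resulting CSV imports directly into any common statistical package.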
- While this could be implemented as a 28 June 2006(UTC)
- I agree that it would likely be better to use dumps to avoid server load (on the other hand, this bot would not likely be used often). Can one download only a part of the database dump (since this bot would be analyzing only one or a few pages)? Does the toolserver work on the dumps?--28 June 2006(UTC)
- I have a history stats tool that picks up reverts, and a user contribs stats tool that gets the number of large, medium, and small edits from the summaries (and can be perfectly accurate sometimes). I'll try to combine the twain. A tool to get the editors and how many times they edited can be found here[1] for example.14 July 2006(UTC)
- Tnx for any help. Unfortunately the current ' TDS' Article Contribution Counter' seems not to be working with non-article namespaces, which makes it mostly useless for my research :( I will ask TDS if he can fix it, and I will be waiting for your combined tool :) --14 July 2006(UTC)
- It doesn't work? It seems to work for me. (Yay! I'm second on the list!) 14 July 2006(UTC)
- Indeed, it should be working for all namespaces. I just merged my user contribs tool's article sorting stats for the type of edits with my history stats tool[2]. It still may need some debugging though.14 July 2006(UTC)
- Oh :) All right, it's a user-friendliness problem. We need better instructions on the tool page, as I was putting in Wikipedia:Verifiability instead of Verifiability, and the Wikipedia namespace is named Project in the pull-down menu :) Now what all those tools are missing is some kind of time-series analysis: a listing of when the edits were done. A graph would be nice, but that can easily be done with any common statistical package, if the data can be extracted first. Which reminds me - it would be nice if those tools could generate output in 15 July 2006(UTC)
Bot requested for image backlogs
I have been thinking over the last few days and seeing that
- Yes it does, afaik. 6 July 2006(UTC)
Book citation completion
I'd very much like a bot that would fill in the blanks in incomplete {{cite book}}s. A cite can be considered incomplete if it's missing any of the following parameters: Title, last, first, publisher, year, id.
I can see five cases in which this could be used (given in fall-through order):
- An id (ISBN etc.) parameter is given, and it has been referenced in another article. In this case, the completed template (omitting page/chapter references, etc.) can be copied over.
- An id (ISBN etc.) parameter is given, and the info can be looked up from an external source.
- A wikilinked title is given, and the linked-to article gives the required information in an {{infobox book}}.
- A wikilinked title is given, and the linked-to article gives exactly one ISBN. (ISBNs in the References and See Also sections don't count, unless the article's one ISBN is in the References section.) In this case, the info can be looked up as above.
- A subset of the aforementioned basic parameters is given, and an external source gives exactly one search match with the required data.
If the bot is unable to fill in all of these basic parameters, it should insert ?s for the missing ones and/or a page comment, to show that it has tried.
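As a sketch of that fallback step (inserting ?s for missing parameters), assuming flat templates with named parameters only; nested templates and positional parameters would need a real wikitext parser:

```python
import re

REQUIRED = ("title", "last", "first", "publisher", "year", "id")

def parse_cite_book(template_text):
    """Extract named parameters from a flat {{cite book}} call
    (no nested braces or positional parameters)."""
    body = template_text.strip().strip("{}")
    params = {}
    for part in body.split("|")[1:]:
        if "=" in part:
            key, _, value = part.partition("=")
            params[key.strip().lower()] = value.strip()
    return params

def mark_missing(template_text):
    """Return the template with '?' inserted for any missing required
    parameter, signalling that the bot tried but could not fill it."""
    params = parse_cite_book(template_text)
    missing = [k for k in REQUIRED if not params.get(k)]
    if not missing:
        return template_text
    extra = "".join("|%s=?" % k for k in missing)
    # Drop the closing braces, append the placeholders, close again.
    return template_text[:-2] + extra + "}}"
```

A complete template passes through unchanged; an incomplete one gains visible `?` placeholders for a human to fill.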
If this type of bot responds quickly to new {{
- An extra thing this bot should do: remove the frequent colons after the word "ISBN". These prevent the ISBN from being parsed properly. 1 July 2006(UTC)
- Anyone working on this? I think I might do it; it sounds easy and fun. 14 July 2006(UTC)
Bot Wanted. User: Xana1
Bot's Name: SpyroBot Editing Spyro Articles
- I think you need here: 1 July 2006(UTC)
I noticed a lot of the requested-article pages are in quite a mess, not being properly formatted, i.e. missing bullet points.
Would it be possible for someone to write a bot to place bullet points in front of the requests to make the pages look better?
I did do one page manually, and it took quite a while.
Also, would it be possible for a bot to automatically remove blue links?
I would happily run it if someone could write it!
Thanks!
Main to user space redirect finder
I have already created a list of some cross-space redirects at User:Invitatious/cnr. I would like this bot to do the following:
- Generate a list of cross-space redirects. (Not from any of the acceptable pseudo-spaces)
- Split this list into main-to-userspace redirects and the rest.
- Present each MTU redirect to the human user of the bot for confirmation; if approved, tag it for speedy deletion using the following tag: {{db-rediruser}}
- Make the other redirects into a wikicode list, grouped into sections depending on the first two characters of the redirect. And also upload it to a user page, preferably with code that will prevent a bot from "bypassing the cross-space redirect". Or upload it to an outside web server. Example:
==BE==
* [[:Being a dick]] → [[:Wikipedia:Don't be a dick]]
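A sketch of the grouping step, assuming the (redirect, target) pairs have already been collected; the output follows the example layout above:

```python
from collections import defaultdict

def group_redirects(pairs):
    """Group (redirect, target) title pairs into wikicode sections
    keyed by the first two characters of the redirect title."""
    sections = defaultdict(list)
    for source, target in sorted(pairs):
        key = source[:2].upper()
        sections[key].append("* [[:%s]] → [[:%s]]" % (source, target))
    lines = []
    for key in sorted(sections):
        lines.append("==%s==" % key)
        lines.extend(sections[key])
    return "\n".join(lines)
```

The result can be uploaded to a user page as-is (or served externally to keep bots from "fixing" the listed redirects).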
Prod Bot
I'd like to see if anyone's up for writing a bot to help those who monitor PROD. Specifically, I think a bot could:
- Maintain pages, one for each day, that list all articles that were PROD'ed that day, the time of the prod, the prodder, the prod concern, and the edit summary used when prodding. It could strike out articles that were de-prodded. Optionally, it could note whether a {{prod2a}} tag has been added.
- Automatically remove prod from any article if prod was previously removed from that same article, and notify the prodder that this was done.
- Automatically remove prod from any page that isn't an article, and notify the prodder that this was done.
Number 1 especially would be very helpful in patrolling. I don't have the skills to write the bot myself, or I would.
- NOTE: There is already a system in place that provides functionality similar to that described in item 1 (I can't remember the specifics, but it includes an auto-updating table of PRODs), and it was used successfully in the early days of PROD. Unfortunately it relies on a copy of the database kept on the 5 July 2006(UTC)
NoOffenseBot
I am requesting a bot that can detect & remove offensive language.--
- If the bot plans to run in the article namespace, this is censoring, which is not allowed. If it plans to run on talk pages, then it is editing other people's comments, which certain users consider rude. 5 July 2006(UTC)
- Yeah, this could be problematic. In the article space it could lead to problems, for example, the 13 July 2006(UTC)
- This would definitely be problematic. Wikipedia is not censored. 14 July 2006(UTC)
- I would suggest a slightly different approach, then. How about an option where it censors out words for any particular user who doesn't like them?
—Preceding unsigned comment added by OneWeirdDude (talk • contribs)
- That wouldn't be a bot thing. That would be something that would have to be accomplished by changing the software behind Wikipedia. It's really not going to happen. Even if it ever did, it would be difficult to decide what was an appropriate and inappropriate use of profanity. Incidentally, did you know that Wikipedia has a fairly large number of pornographic images? As I said before, Wikipedia is not censored. 20 July 2006(UTC)
- Wikipedia is not censored. --31 July 2006(UTC)
A bit of redirect snapping
Will a botbeard please go through the
- That's not necessary unless you intend to re-use the old title for a different purpose, or delete the trailing redirect. — Jul. 6, '06 [03:11] <freak|talk>
Index of initials
I think a bot that creates an index for all 2 and 3 word articles according to their initials (instead of the first letters as in
I would like to work on such a bot, but I have no idea how to write one (even though I have some programming experience). If somebody sends me the code for a similar bot (one that looks at every article title and categorizes it by a certain criterion), I can modify it.
--
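A sketch of the core indexing step, assuming article titles are available (e.g. from a titles-only dump); the stop-word list is an illustrative guess at the "no short words" rule:

```python
from collections import defaultdict

# Assumed stop-word list; the actual "short words" to skip would be
# decided by whoever runs the bot.
STOP_WORDS = {"the", "of", "a", "an", "and", "in"}

def initials_index(titles):
    """Map initials (e.g. "WWW") to the two- and three-word titles
    that produce them, ignoring common short words."""
    index = defaultdict(list)
    for title in titles:
        words = [w for w in title.split() if w.lower() not in STOP_WORDS]
        if len(words) in (2, 3):
            index["".join(w[0].upper() for w in words)].append(title)
    return dict(index)
```

Running this periodically over a fresh titles dump would keep the index reasonably current.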
- The generated index would never be complete, because articles are always being added, although I can generate a list for you if you are a bit more specific. Will small, common words such as "the" or "of" be counted? If I understand what you mean, 6 July 2006(UTC)
- Good point. I was thinking of a "no short words" version, but it could be otherwise. I don't need it for immediate use, so maybe we should think of some dynamic way to generate it. I thought that a bot running periodically would do the trick. Do you know how does 7 July 2006(UTC)
- Quick index uses a special page 7 July 2006(UTC)
- Do you think it would be worth the effort? 7 July 2006(UTC)
- Maybe for finding pages that are not on disambiguation pages for acronyms. I might start programming it to generate wikicode→HTML once I get the data (the current one's at "http://download.wikimedia.org/enwiki/20060702/enwiki-20060702-all-titles-in-ns0.gz"). If that turns out to be extremely useful, maybe the data can be cached every night and made a special page extension. 7 July 2006(UTC)
- Thanks. I didn't know that there was a titles-only database. Maybe I can play with it later. I am looking forward to your results. 8 July 2006(UTC)
- A prototype is at 10 July 2006(UTC)
- Thank you 10 July 2006(UTC)
Substitution
A bot is needed to keep
Batch move of interwiki images
We've got a non-Wikimedia MediaWiki site over at Wikible; there are several versions of the wiki in different languages, so we also have a Pool (like Wikimedia Commons). We've changed how things work and now have a bunch of images to transfer from wikible.org/en to wikible.org/pool. I don't think anyone has any bot experience in our group; would you mind helping us out? Read the discussion on our site for more background and relevant links. Thanks! --
- We've manually moved all the pictures, but we're still interested in Bot experts to help out with the site. --2 August 2006(UTC)
Any idea whom I should contact to get more answers there?--
- First try installing these[3][4] to YOURNAME/monobook.js to be able to analyze pages. As for something that a bot could do en masse (nothing as complex as that tool), I'd have to know what you want more specifically (point by point).30 July 2006(UTC)
Stub-labeller
I'd like a bot to go through small articles and label them stubs. Anything smaller than 1k is probably a stub, yeah? --
Grammar & Spell Checking bot
I need a bot to review a page I added. I haven't had the necessary time to review my writing, and I know there must be multiple grammar and spelling errors. The page is
- This is no easy job for a bot, best done by a human. 17 July 2006(UTC)
Wikiproject Bot
I originally posted this here, but figured it would be good to post it here also.
I've noticed a recent issue with WikiProjects. I've noticed it in the one I work on, Wikipedia:WikiProject_Anime_and_manga, but it probably applies to all WikiProjects. When an article is declared a "Good Article" or any other article class, editors add the appropriate tag on the Discussion page, but they often forget to add the appropriate Wikiproject tag that says "this is a good article for Wikiproject whatever". This means that the Wikiproject statistics page that shows how many Good Articles and the like the Wikiproject has may be drastically off, and the categories sorting the Wikiproject's articles may show tons of "good articles" and the like in the "unassessed" section.
Would it be possible for a bot to regularly peruse the Wikiproject article discussion pages and find ones that have a GA tag but no Wikiproject GA tag, and the same for all article assessment tags? Also, if a Wikiproject has a system for article ratings that doesn't coincide with the main Wikipedia system (as warned by a commentator on my original post), the bot could simply not affect articles on that Wikiproject.
Date Change Bot
It could change M/Y or M/D/Y dates for template defaults like {{cleanup}} and various "As of..." containing articles. It sounds like it wouldn't put too much strain on the servers and would prevent articles from becoming inaccurate datewise. --
- I'm a little confused about what you are talking about. 19 July 2006(UTC)
- Oh, okay. --19 July 2006(UTC)
Fixing #Redirects
How about a bot that fixes links that redirect? Say a link points to A, which redirects to B; the bot could fix it so that it points straight to B.
- Then what's the point of having a redirect from A to B?--21 July 2006(UTC)
- In case a user goes to that page (Got you, Orgullo!) 22 July 2006(UTC)
Daily maintenance bot
Given the recent apparent demises of both
- I agree, and I would be willing to help with this once you get some sort of system implemented to maintain it (21 July 2006(UTC)
- I don't run any bots, per se, but do run some automated scripts whose source I simply post as a subpage either of my user page or of the page they deal with. See, for example, 21 July 2006(UTC)
- That's fine for just one script, but I don't imagine it's ideal for someone like AllyUnion or myself, who have dozens of bots. I would support a central repository for all of the open-source Wikipedia bots.--22 July 2006(UTC)
- Subpages of 22 July 2006(UTC)
Help needed on the feasibility of a bot.
This comment is a little different from the others here, as it's not directly a bot request but rather a request for assistance in clarifying whether a bot that can do the following is possible. Any thoughts before I embark on trying to make it are greatly appreciated. Please forgive me if this is the wrong place to raise this. I am in the early stages of designing and building a bot named the Prolificity Sentinel.
In short the bot will flag articles where:
- They (and their talk page) have not been updated for a minimum of 6 months.
- They will not be flagged if they contain under 1000 characters. (a measure to avoid flagging stubs and minor articles like most biographies).
Upon finding a suitable candidate for flagging, it will edit the article's page and give it 'Sentinel Alert Status'. This will be a category all Wikipedians can see and go through.
Anyway, that's a very brief overview; I've discussed it in much more detail here
- It is possible. Of course, if you're using a database dump you can check all pages of the 'pedia, but a live lookup demands extensive thinking about how this will affect the server. 25 July 2006(UTC)
I guess the three key things I'd need to know per page on the run are:
- When the last namespace page edit was made
- When the last talk page edit was made
- How many characters are on the namespace page
So per page analysed there'd be three requests for information if it was going to be flagged. I was thinking of setting the bot up with specific parameters, so it would only do a run of say 1,000 articles at a time (roughly one per minute over the course of a day?). I can make the bot less server-intense by having it immediately stop requesting, say, the last talk page edit if it has already found that the main page edit was within the last 6 months. That way the majority of articles would only have one piece of information requested, making sure I don't accidentally DoS the server. Any thoughts on that? Is there a database query that can be made to ascertain the last page edit date? That's the key thing I need to know, and I can't find a script for it if one exists. ta --
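The decision logic above can be sketched as follows; the thresholds and the early exit mirror the description, and a real bot would obtain the timestamps via the API or a database query (this assumes those lookups exist):

```python
from datetime import datetime, timedelta

SIX_MONTHS = timedelta(days=182)
MIN_SIZE = 1000  # characters; skips stubs and minor articles

def needs_flag(article_edit, talk_edit, article_size, now):
    """Decide whether a page earns 'Sentinel Alert Status'.
    article_edit/talk_edit are datetimes of the last edits. Checking
    the article edit first lets the bot skip the talk-page lookup for
    any recently edited article, keeping most pages to one request."""
    if now - article_edit < SIX_MONTHS:
        return False  # cheap early exit: one lookup was enough
    if article_size < MIN_SIZE:
        return False
    return now - talk_edit >= SIX_MONTHS
```

Only pages passing all three checks would be edited, so the write load stays far below the read load.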
Create a bot to update CIA World Factbook links
I wanted to ask if someone could create a bot to automate the changing of links to the CIA's World Factbook web site.
For example, the current link for Malaysia's entry on the Wikipedia page for Putrajaya points to http://www.cia.gov/library/publications/the-world-factbook/geos/my.html . If you go to that address, the CIA says that the page has been moved. Even worse, it doesn't forward you to the correct page for Malaysia. Instead, it redirects you to The World Factbook's front page and you have to navigate yourself to the right page.
I thought they re-did the directory structure or something, but found that the change is much more subtle. They've simply required The World Factbook to be accessed using the secure server method.
So, http://www.cia.gov/library/publications/the-world-factbook/geos/my.html can be successfully viewed at https://www.cia.gov/library/publications/the-world-factbook/geos/my.html .
Can someone create a wikibot to go through the wiki files and change the http://www.cia.gov.... to https://www.cia.gov ?
I figure it'd be more helpful since the CIA doesn't do the forwarding automatically... The wiki community would appreciate it. :-D
Thanks!
Brian --
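The fix is a straight URL-prefix rewrite; a minimal sketch (the regex pattern is an assumption about how the links appear in wikitext):

```python
import re

# Match only Factbook links so other cia.gov URLs are left alone.
FACTBOOK = re.compile(
    r"http://(www\.cia\.gov/library/publications/the-world-factbook/[^\s\]]*)")

def fix_factbook_links(wikitext):
    """Switch Factbook links from http to https, leaving every other
    URL untouched."""
    return FACTBOOK.sub(r"https://\1", wikitext)
```

A bot (or AWB) would run this over each page found by a search for the old URL prefix.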
- Another observation: Links formerly beginning with http://www.cia.gov/nic/ must now be changed to http://www.dni.gov/nic/. -- 29 July 2006(UTC)
- Google says there are 106 pages linking to the url, http://www.cia.gov/library/publications/the-world-factbook/geos. Not sure how you can pick up the links from there. - 29 July 2006(UTC)
- Or Wikipedia search too (per Omicronpersei8's suggestion). - 29 July 2006(UTC)
- I'm going to go ahead and selfishly plug my query link, because the problem described in this section applies to all subpages of cia.gov, with a few exceptions, like with cia.gov/nic and the strange, slow www.foia.cia.gov, which doesn't seem to need changing. -- 29 July 2006(UTC)
- Just searching for 'cia.gov' has been working quite well for me. -- 29 July 2006(UTC)
Okay, I put in the proposal on
- Actually, I only get ten results for 'cia.gov/nic', so I guess you don't have to worry about that. (See where I saw this problem here.) -- 29 July 2006(UTC)
- Right. It might be worth doing a manual run on that link after the main bot run. 29 July 2006(UTC)
- I think I already took care of all of them. -- 29 July 2006(UTC)
- Thanks everyone for their help. Hope they can expedite the approval process. :-D 31 July 2006(UTC)
- Heh. Well, we'll see. I'll post here whenever I get approved to meet the request. I really don't see this being a major inconvenience...the only thing really holding us up is the wait for approval. 31 July 2006(UTC)
User interaction
Is there a way to make a tool that will help tell us if two users have ever interacted before? For example, it tells you (and provides diffs) of instances when user A has edited user B's talk page (or any user space page) and vice versa.
- This really should go on the user scripts page. I have a tool that can spit out the last 20 blocks for two users and all pages they edited in common. It shouldn't be too hard to whip up something like this.30 July 2006(UTC)
- Knowing which pages they edited in common would likely do the trick, since you can look for the user talk pages and other low volume pages that may suggest an interaction (although you would still have to dig into the history). 30 July 2006(UTC)
I don't know whether this requires a new bot or just an addition to a data file for an existing bot. Anyway, various people add links to various pages in http://www.websearchinfo.com/ e.g in Cloaking. All links to this site are spam e.g. http://www.websearchinfo.com/poker and http://www.websearchinfo.com/cloaking-techniques.
Could we create / amend a bot to revert all links to this site shortly after they are made?
- If it's always spam (I haven't checked), we could add that website to our blacklist. 30 July 2006(UTC)
Color change
Is there a bot that will do font color changes? That is, change every instance of a six-digit hex string to another six-digit string within a single page. What I am planning to do is explained
- Replace a string of hex digits within a page? Well, a bot can easily do that as long as there aren't any occurrences of the hex strings which you don't want to be replaced. 24 June 2006(UTC)
- Thanks, as soon as I posted this I realized this is a pretty simple request that you guys can probably do in your sleep. The key to my color changes is doing them in the proper order so that you don't have two groups of data having the same color value at a given point in time. I think I still need to select better color choices. If anyone has a better color scheme suggestion, please comment 24 June 2006(UTC)
- If this is a find and replace operation, it can even be done with AWB. If you send me a list of pages, and the find and replace criteria I can blow it out pretty quick. — 4 July 2006(UTC)
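On the ordering concern: routing each replacement through a unique placeholder makes the order irrelevant, since no intermediate state ever has two groups sharing a color. A sketch of that idea (plain string replacement, as a find-and-replace tool would do):

```python
def recolor(text, mapping):
    """Apply several hex-color renames at once. Substituting unique
    placeholders first means the renames never collide, even when one
    color's new value equals another color's old value."""
    placeholders = {old: "\x00%d\x00" % i for i, old in enumerate(mapping)}
    for old, tag in placeholders.items():
        text = text.replace(old, tag)       # old color -> placeholder
    for old, new in mapping.items():
        text = text.replace(placeholders[old], new)  # placeholder -> new
    return text
```

For example, swapping aa0000 to 00bb00 while 00bb00 itself becomes cc00cc works in a single pass without any careful ordering.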
pngbot
I'm looking to write a bot that will go through all pages (or, for Wikipedia, preferably a dump) and look for references to uploaded .gifs; if found, it would convert them to .pngs and update the reference. I have some experience in programming, but not with anything of this sort. It seems like a fun project, but I would like some help making it.
- It's not necessary or useful. Also, some GIFs are animated; there is no standard animated PNG. There are two common types of animated PNGs (19 July 2006(UTC)
- Thanks for pointing that out; I hadn't thought of animated GIFs. Would bringing down the file size while maintaining the same quality be enough improvement to edit? 19 July 2006(UTC)
- It should be noted that the MediaWiki image scaling code often produces much better results with 15 August 2006(UTC)
Have you looked at gif2png, which also includes a Python web2png, which may do what you want?
ISBN Checker
moved the following from the talk page to here:
Such a bot would go around and, whenever the string "ISBN" is followed by 10 or 13 digits, compute the special ISBN checksum; if the checksum is invalid, it would leave a comment or a template to the effect that someone needs to check the transcription of the ISBN or fix it. --
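The checksum rules themselves are standard: ISBN-10 uses weights 10 down to 1 modulo 11 (with X meaning 10 in the last position), while ISBN-13 alternates weights 1 and 3 modulo 10. A sketch of the validator a bot would run on each match:

```python
def isbn_valid(isbn):
    """Check the ISBN-10 or ISBN-13 checksum; 'X' is only legal as
    the final digit of an ISBN-10."""
    digits = isbn.replace("-", "").replace(" ", "").upper()
    if len(digits) == 10:
        total = 0
        for i, ch in enumerate(digits):
            value = 10 if (ch == "X" and i == 9) else (int(ch) if ch.isdigit() else -1)
            if value < 0:
                return False  # illegal character
            total += (10 - i) * value
        return total % 11 == 0
    if len(digits) == 13 and digits.isdigit():
        total = sum(int(ch) * (3 if i % 2 else 1) for i, ch in enumerate(digits))
        return total % 10 == 0
    return False
```

On a failed check, the bot would tag the page (e.g. with the "invalid isbn" category mentioned below) rather than guess a correction, since the error may be in transcription or on the book itself.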
- SmackBot already does a lot with ISBNs, this will be the next step. Incidentally there is a category of "invalid isbn". 21 August 2006(GMT).
- This is now tested and awaiting approval. FYI, there were c. 1,100 with obvious checksum errors. 24 August 2006(GMT).
- Note that there are indeed erroneous ISBNs. If a book has a bad ISBN printed on it, we can label it as incorrect as much as we want but it still exists, and someone holding the book should be told the ISBN on the book rather than what it should be. There also are different books which have the same ISBN printed on them. A related issue is SBNs which have been converted to ISBNs although the book only has the SBN on it. It probably is a good idea to be able to graciously handle certain improper information, but we're in a different situation than unpacking books in a library, where a challenged book can be immediately examined and confirmed. (24 August 2006(UTC))
AIV watcher/template
Can someone design a bot that when a user posts an alert on the
The template is {{subst:User:Daniel.Bryant/AIV}}. I'll run it off my main user, or create a new bot account, whichever is easier. Thanks!
- I'd like to follow up on this really quickly. I encouraged Daniel/Killfest to post this request here. At present I'm not sure what this would entail in terms of unnecessary server load and increasing vandalism to 20 July 2006(UTC)
- anyone????? I've thought about it, and really do think it might be a good idea, if implemented correctly. 23 July 2006(UTC)
- Judging by the time admins take to block vandals, are there many vandals who would need this anyway? 23 July 2006(UTC)
- When I was an IP, I got blocked because an administrator didn't even bother to check my contributions, but instead took the word of a now-blocked user that I was "POV-pushing". If there were a template to alert people who have been reported, situations like mine would be avoided. 5 August 2006(UTC)
Disambiguating Commonwealth
There are hundreds of articles about holders of the Victoria Cross where the opening statement is a standard phrase:
the highest and most prestigious award for gallantry in the face of the enemy that can be awarded to [[United Kingdom|British]] and [[Commonwealth]] forces
which should be disambiguated to
the highest and most prestigious award for gallantry in the face of the enemy that can be awarded to [[United Kingdom|British]] and [[Commonwealth of Nations|Commonwealth]] forces
Simultaneously, most of these articles could have the reference title
- Consider it done (I seem to keep coming back to these articles). 21 August 2006(GMT).
- BTW "SCOTLAND'S FORgotten VALOUR" is correct. 21 August 2006(GMT).
Bot requested for link title change
I was hoping to get a bot to change all instances of
- All instances of [[List of professional wrestling throws#Spinebuster slam|*]], [[List of professional wrestling throws#Spinebuster slam]], [[Professional wrestling throws#Spinebuster slam|*]] and [[Professional wrestling throws#Spinebuster slam]] should be changed to [[List of professional wrestling throws#Spinebuster|*]] or [[List of professional wrestling throws#Spinebuster]], where * stands for random characters, correct? --30 July 2006(UTC)
- Correct. If you're going to do this, can you leave a message on my talk page? And if I don't respond in time, can you change the title from Spinebuster slam to Spinebuster before you run the bot? Otherwise other users may change the link back to Spinebuster slam. --- 31 July 2006(UTC)
- Just an addition to state that I still need a bot for this and any help would be appreciated. --- 4 August 2006(UTC)
- Your request looks fairly simple. I might be able to help out. Trouble is, I already have 2 pending requests on 4 August 2006(UTC)
- Not really no, sorry. --- 5 August 2006(UTC)
- 5 August 2006(UTC)
- That is true, I've left the title so as not to break the links in advance however if you were to change the title before you start the bot there would be no problem. HOWEVER, an issue has come up in that 5 August 2006(UTC)
- Ewwwwww. Copy and paste move. I'm really sorry about that. Just shoot me a line when we get this whole thing working. 5 August 2006(UTC)
- Ok we've got everything fixed and it's moved back, just change Spinebuster slam to Spinebuster and then run the bot. Thanks for your help. --- 6 August 2006(UTC)
Bot to warn vandals
I would like to request a bot for me that can:
- Warn vandals
- Block pagemove vandals with the edit summary (pagemove...) or ({{WoW}})
This would be good, since Curps block bot is inactive at the moment. --
- If you just want to warn vandals, a script would be a good idea. You can find some 30 July 2006(UTC)
- Curps is offline, and allowing bot to have administrator rights is never going to happen (not in the near future anyway). They can cause too much damage. — talk) 14:53, 30 July '06
- At least one bot has admin powers --31 July 2006(UTC)
- Which? — talk) 12:12, 01 August '06
Company Ticker Symbols
Will anyone be willing to create a bot to redirect ticker symbols to their respective companies? -
- Can you give an example of what you're talking about? I'd just like to see one article example to get an idea of what we're looking at. 30 July 2006(UTC)
- Like 30 July 2006(UTC)
- How would that list be generated? It almost seems to me like this job would be better done by a user than a bot, simply because some of the names of listed companies are common names. Still, it would likely be a crapton of work...and I imagine that some of the companies don't even have articles yet. What would be done about that? 31 July 2006(UTC)
- You may be right as we do have a 31 July 2006(UTC)
- Hmm. Do you mean something like what 31 July 2006(UTC)
- Well unless we could get someone to get permission from a copyrighted source, I guess we're out of luck because I've tried searching for a GFDL source for company profiles already. -Blackjack48
- The goal, though, is very admirable. Someday, Wikipedia really ought to have an article on every listed company, considering that listing is a criterion of 31 July 2006(UTC)
- Hasn't this morphed a bit? I read the original request to be to add a set of redirects (not the articles). Assuming there's a list of symbols to company names somewhere, shouldn't this be relatively trivial? -- 2 August 2006(UTC)
Category architects
We put a request in recently for all the articles in
- That shouldn't be too hard to do with AWB (considering that that's how BlueMoose originally did it). Unless someone else is interested, I'll take a look tonight and see if I can cook something up using AWB. I'll get you some diffs, and, if you approve, I'll put in a request. 31 July 2006(UTC)
- Many thanks. I don't know whether this is relevant, but some of the articles which I added the template to manually didn't have talk pages - I had to create the page and then add the template. Regards --1 August 2006(UTC)
- Perhaps someone else could have a look at this - alphachimp seems busy doing other things - many thanks. --18 August 2006(UTC)
apply different copyright template to images
A bunch of images are showing up on Category:License_tags. When I looked at the source it looks like some public domain template might have gotten subst'd in rather than being included using PD-whatever resulting in those pages having the License_tags category applied even though it's in a noinclude section. I forget if subst works that way, but that's my best guess.
I created Template:PD-Japan for some of those images and started applying, but I think it might be bottable. If so that would be much simpler than doing it by hand :-). At the least, I would think the noinclude sections could be removed from the image pages using a bot.
Hospital stubs
{{
- I can easily do it, but they're not approving my requests over at 3 August 2006(UTC)
I think there should be a bot that fixes spelling mistakes and typos
County highlighting maps
There are still lots of such maps not moved to Commons. Is it possible for bot to do it? If bot can't find maps, which haven't been moved, I can find some of them.
Asteroids (interwiki)
Hi! I would like to ask someone to add missing interwiki links to Polish Wikipedia in
Genetic disorder category
Regarding
FreedomBot
We all know
- This is probably not the place to voice your objections to Orphanbot. 9 August 2006(UTC)
"Deonbot" hehe
lol i love the name already :P
Anyway.. I was going through some random pages and i saw
- You're talking about admins (or those closing AfDs) missing the {{oldafdfull}} template. It's a real problem, and you're certainly bringing up a valid concern. Are you saying that you would be able to program a bot to do it youself? 9 August 2006(UTC)
- I .. could... give it a shot :), although I may need a little help :)--9 August 2006(UTC)
- We were discussing the need for this tonight on IRC. I think it might be a good idea to incorporate something like this into the scripts for closing AfDs. Is there a way you could do that? 9 August 2006(UTC)
- I s'pose I could help.. I don't have a lot of programming experience.. probably not enough to fully make a bot by myself, but i guess I would catch on :) --9 August 2006(UTC)
Image trawler
I'd like to request an image trawling bot to do some indexing for me. Now, I'll explain with the first one I'd like to go with. Start with linksearch. Go to each image page and log if that image is not in
I'm attempting to restart this project, and was wondering if a bot could regularly run on a list of its subpages (such as
Anti-insult Bot
Quite simple really, a bot that checks all edits that are done to User: and User_talk: pages and user space for swears/insults/racism etc. and ****'s them. It is optional and only watches user pages that are listed in the bot's user space. Thought this would be useful for admins and users that receive a lot of vandalism/attacks. Good idea?--
- Personally I'd prefer to have admins that are mature enough not to give a **** if there are nasty words on their talk page. 10 August 2006(UTC)
- Well some insults/vandalism are just repeats from 10 minutes ago and are just not needed.--10 August 2006(UTC)
- talk) 21:40, 10 August '06
- Yes, I know that. But does that mean you can upload the most disgusting images and place them on your user page for others to see? Having a bot to give some sort of protection to users who want it would be nice.--10 August 2006(UTC)
- It'd be really easy to do, even with AWB. The trick would be listing the relevant images and relevant profanity. Very simple. I'm just a little worried about the controversy it would generate (by the way, I could do it if you provide me with the profanity list and bad picture list). 11 August 2006(UTC)
- Images? I don't believe you said that in your original description, so I assumed you just meant the odd swear word or two. — talk) 10:15, 11 August '06
little bitty bot
I've found a bug in this template. I fixed it, but there are a ton of pages that used the broken template. Here's what I've done, and here's the list of sites that need to be fixed. Is this a thing a bot could fix, or does this need to be done by hand?
- All fixed. 15 August 2006(UTC)
Athletics --> Athletics (track and field)
The content of
- The key word that makes this request difficult for a bot to satisfy is "appropriate". If the task requires user input to make the decision, I'd suggest you guys use 15 August 2006(UTC)
Bot to drop project template in talk pages of articles in categories
There's probably already a generic bot that can do this, but I need a bot to add the {{
- This is quite easy to do. Unfortunately my bot is currently in the middle of a very long task, so I'll leave this open for other offers, and if nobody has said anything when my bot is available, I'll try and sort something out for you. — 15 August 2006
- And also, can I just ask you to clarify what you want doing, for the benefit of myself and other bot owners? Am I correct in saying you want only {{WikiProject Kentucky}} adding to the talk page, if both {{WikiProject Kentucky}} and {{LouisvilleWikiProject}} aren't there? — 15 August 2006
- That is correct. If neither of these templates are on the talk page, add the {{WikiProject Kentucky}} template to the top of the talk page, or under other preexisting templates at the top (if that's possible). Thank you very much for considering doing this for WikiProject Kentucky. 15 August 2006(UTC)
- Okay, I've put in a request and I'll get started as soon as that goes through. If I don't get finished tonight I'll finish it off tomorrow for you. — 15 August 2006
I can take over, just let me get it approved.
- Thank you for your kind assistance. It is much appreciated. 18 August 2006(UTC)
- Should be done within a few hours 20 August 2006(UTC)
- Appears done, and it worked like a charm. Thank you again for doing this task for the project. It is very much appreciated. 20 August 2006(UTC)
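The tagging rule agreed on above (add {{WikiProject Kentucky}} only when neither banner is already present, prepending it to the talk page) could be sketched in Python roughly as follows; the function names are illustrative, not from any actual bot:

```python
def needs_kentucky_tag(talk_wikitext):
    """True if neither project banner is already on the talk page."""
    banners = ("{{WikiProject Kentucky", "{{LouisvilleWikiProject")
    return not any(b in talk_wikitext for b in banners)

def tag_talk_page(talk_wikitext):
    """Prepend the banner only when neither template is present."""
    if needs_kentucky_tag(talk_wikitext):
        return "{{WikiProject Kentucky}}\n" + talk_wikitext
    return talk_wikitext
```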
Transwiki Updates
A couple weeks ago, I noticed a significant disparity between a Wikipedia page and the Meta master page. I updated it, but I notice there are a lot of such pages, and many are not kept up-to-date. I suggest a bot to update page copies on a regular basis.
The pages in question are most—but not all—of the pages listed here and here.
Now it's worth noting that people do edit these pages, despite the prominently displayed message suggesting that they refrain from doing so. It may be worth the time to transfer significant edits back to the Meta copies beforehand, if they are obviously good additions to the article. I dunno if I'd want to do that all myself though, so someone else would have to be interested as well.
Any regular bot activity should be announced on the talk page, I think, to discourage anyone making changes that may be quickly overwritten. --
- A good idea, but I tried to code it in the pythony way, and my computer can't encode the characters used at meta. I've seen that the bots used to ignore them, but I don't know how to do it without triggering any error that causes the program to stop. 18 August 2006(UTC)
Number edit detecting bot
I've run into a number of people lately whose idea of fun is editing numbers on Wikipedia (and other information that most people won't recognize as vandalism, but numbers are the most obvious) just for the hell of it. Some do it every day as a hobby. They usually switch usernames often, or use anon/proxy.
It would be very useful to have a bot that downloads the Wikipedia database (to avoid undeserved load on the real one) and searches the entire edit history of Wikipedia for edits that:
- Were made by an anon or user with few edits (fewer than 5 or 10 maybe).
- Only change a number and nothing else with no edit summary.
- Have not been reverted or otherwise changed since.
These edits could then be checked out by Wikipedians to see if they're vandalistic or not.
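The second criterion (an edit that changes only a number) is the easiest to mechanize. A minimal Python sketch, assuming the bot already has the two revision texts in hand; the anon/low-edit-count and not-reverted-since checks would come from revision metadata instead:

```python
import re

def is_number_only_change(old_text, new_text):
    """True if the two revisions differ, but become identical once
    every run of digits is replaced by a placeholder, i.e. numbers
    were the only thing that changed."""
    if old_text == new_text:
        return False
    mask = lambda text: re.sub(r"\d+", "#", text)
    return mask(old_text) == mask(new_text)
```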
- I'd like to work this bot. Can you provide examples of pages that have been vandalized in this manner? 5 September 2006(UTC)
New Reference Desk Daily Maintenance Bot
Discussion of this proposed bot's function can also be found at Wikipedia talk:Reference desk.
Cleaning and maintenance of the
As such, the bot must be able to complete the following tasks for each of the six reference desks on a daily basis at approximately 00:00 UTC:
- Start the new date at the bottom of each page
- Archive oldest date
- Transclude questions and answers that are more than 24 hours old (include {{Reference desk navigation}})
- Add the newly transcluded page to the reference desk archives
At the start of each month, the bot would have six additional tasks to perform to create a new monthly reference desk archive page with {{Reference desk navigation}} for each of the six reference desks.
Detailed Example of Bot's Function:
At 00:00 UTC on August 16, 2006, for the Miscellaneous reference desk the bot would:
- Append = August 16 = to the bottom of Wikipedia:Reference desk/Miscellaneous
- Remove = August 9 = and {{Wikipedia:Reference desk archive/Miscellaneous/2006 August 9}} from the top of Wikipedia:Reference desk/Miscellaneous
- Copy all text between = August 14 = and = August 15 = from Wikipedia:Reference desk archive/Miscellaneous/2006 August 14
- Prepend the copied text with the following text and create the new page Wikipedia:Reference desk archive/Miscellaneous/2006 August 14
<noinclude> {{subst:Reference desk navigation |previous = Wikipedia:Reference desk archive/Miscellaneous/2006 August 13 |date1 = August 13 |next = Wikipedia:Reference desk archive/Miscellaneous/2006 August 15 |date2 = August 15 |type = Miscellaneous }} </noinclude>
- Replace all text between = August 14 = and = August 15 = on Wikipedia:Reference desk/Miscellaneous with {{Wikipedia:Reference desk archive/Miscellaneous/2006 August 14}} and save the page
- Append = August 14 = and [[Wikipedia:Reference desk archive/Miscellaneous/2006 August 14]] followed by a numbered list of all questions asked to the end of Wikipedia:Reference desk archive/Miscellaneous/August 2006 and save the page. For August 14, this would look like:
<!--werdnabot-archive--> = August 14 = [[Wikipedia:Reference desk archive/Miscellaneous/2006 August 14]] #A type of chair #Male Orgasm #World Trade Center Movie #edits #Maps from Nationalatlas.gov #Clitoral Hood Piercing #Guitar #Alexander Graham Bell #Cruise control on the 1998 ford windstar #Gangster Chronicles TV Series #Physics of a bullet #T.E.A.M. #Who would be richest? #My surname is Bencko. #Top Hats #pounds to dollars #The New York Pass
In addition to its normal daily tasks, at 00:00 UTC on the third of each month, the bot would need to create a new monthly reference archive page for each of the reference desks. After creating all six monthly pages, the bot would perform its normal daily duties.
For example, at 00:00 UTC on
<noinclude> {{subst:Reference desk navigation |previous = Wikipedia:Reference desk archive/Miscellaneous/August 2006 |date1 = August |next = Wikipedia:Reference desk archive/Miscellaneous/October 2006 |date2 = October |type = Miscellaneous }} </noinclude>
--
- I hope you don't mind, but the templates have been altered since then, so <noinclude>{{subst:Reference desk navigation|24|August|Computing}}</noinclude> is now used to create daily archives, and
<noinclude> {{subst:User:71-247-243-173/RDmonthly |previous = |date1 = |next = |date2 = |type = }} </noinclude>
is now being used for monthly archives. The changes were to decrease the work load so that it could be done by hand until a bot was made available--
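The page-title and heading bookkeeping described above is entirely mechanical; a rough Python sketch of just that part (helper names are mine):

```python
import datetime

MONTHS = ["January", "February", "March", "April", "May", "June",
          "July", "August", "September", "October", "November", "December"]

def daily_archive_title(desk, date):
    """E.g. Wikipedia:Reference desk archive/Miscellaneous/2006 August 14"""
    return (f"Wikipedia:Reference desk archive/{desk}/"
            f"{date.year} {MONTHS[date.month - 1]} {date.day}")

def date_heading(date):
    """The heading the bot appends each day, e.g. '= August 16 ='."""
    return f"= {MONTHS[date.month - 1]} {date.day} ="
```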
Replace specific invalid ISBN
There are a bunch of articles in this category: Category:Articles_with_invalid_ISBNs that have this reference in them:
''Naval wars in the Levant 1559-1853'' - R. C. Anderson [ISBN 0-87839-799-0]
The correct reference for the 2005 edition (the only one with an ISBN I could find) is:
Anderson, R. C. (2005), Naval wars in the Levant 1559-1853, Martino Pub,ISBN 1578985382
or, wiki-style:
{{Harvard reference|ISBN=1578985382|Title=Naval wars in the Levant 1559-1853|Given1=R.C.|Surname1=Anderson|Year=2005|Publisher=Martino Pub|Location=Mansfield Center, [[Connecticut]]}}
Is it possible for someone to do a mass search and replace on that reference in the bad ISBN category? It looks like all of the bogus ISBNs were added by [User:SpookyMulder] on September 4, if that helps.
Examples:
- Action_of_14_July_1616
- Battle of Focchies
Thanks
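If the replacement does go ahead (see the follow-up below on whether the SBN is actually valid), the substitution itself is a one-line regex; a sketch, with the source string taken verbatim from the request above:

```python
import re

OLD_REF = (r"''Naval wars in the Levant 1559-1853'' - "
           r"R\. C\. Anderson \[ISBN 0-87839-799-0\]")
NEW_REF = ("{{Harvard reference|ISBN=1578985382"
           "|Title=Naval wars in the Levant 1559-1853"
           "|Given1=R.C.|Surname1=Anderson|Year=2005"
           "|Publisher=Martino Pub"
           "|Location=Mansfield Center, [[Connecticut]]}}")

def fix_reference(wikitext):
    """Swap the bad-ISBN reference for the Harvard-style one."""
    return re.sub(OLD_REF, NEW_REF, wikitext)
```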
- Talk to User:SpookyMulder. Adding a zero to a nine-digit SBN makes a valid 25 August 2006(UTC))
- I forgot to check something: 87839 identifies Princeton University Press, the publisher of one of the 1952 editions of that book. The ISBN probably is valid, it just isn't in all modern databases. (25 August 2006(UTC))
- Thanks for the info... I just looked in LC and in WorldCat at that edition of the book and couldn't find the SBN. Do you know of any databases that have SBNs available? 25 August 2006(UTC)
Ratings
Can someone write a bot that parses a page in the form (...)(...) and prints the results on another page? The format of (...) is (quality and importance and name). Ratings could be posted on another page. The rating of an article is the average of all the user ratings. There is a quality rating and an importance rating (quality and importance are numbers). The URL for the ratings is http://en.wikipedia.org/wiki/User:Eyu100/Bot_area, but there are no ratings yet. This bot will be used for the Wikisort project.
Unlink (Main) categories on user pages
Occasionally, when browsing categories, I find user pages that are tagged under some of the encyclopedic categories. These pages are usually user sandboxes or "Works in progress" of articles that they copied to their user space to thoroughly revise incrementally. Now, I'm still quite new at Wikipedia, so I may be missing something here... but I am surprised that the encyclopedic categories aren't hardcoded to skip User: Space pages.
In lieu of such a change, it seems like it would be simple to have a bot go through the categories, and when it finds an article linked to User: Space, it would go and tack <nowiki> tags around the [[Category:(.*)]] links. Of course, templates that add categories, like {{stub}} and such would make things more difficult, but I imagine the most common templates could be similarly coded into the bot.
Does such a bot exist? Is there a particular reason why it doesn't? Just a few thoughts.
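The nowiki-wrapping step itself is simple; a minimal sketch, which deliberately ignores the harder case of categories added via templates:

```python
import re

def disable_categories(wikitext):
    """Wrap direct [[Category:...]] links in <nowiki> tags so a
    user-space draft drops out of the encyclopedic categories.
    Categories added indirectly by templates like {{stub}} are
    not handled here."""
    return re.sub(r"\[\[(Category:[^\]]+)\]\]",
                  r"<nowiki>[[\1]]</nowiki>",
                  wikitext)
```

An alternative with the same effect is prefixing the link with a colon ([[:Category:...]]), which keeps the link clickable while removing the page from the category.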
- There is a reason that the bot doesn't exist: there are thousands of cats and checking them all isn't feasible. Also, the temp pages should have those cats; that way, when the user copies them back, the page has the cats. 31 August 2006(UTC)
- Yes, I see that now. Previously I had assumed that all User categories began with "Wikipedian," but that is obviously not the case. I would still love to see some sort of segregation between the user pages and the encyclopedic content... but I suppose this would need to be done at a deeper level (like category classes). 2 September 2006(UTC)
request for help at CfD
At
Template parameter changes bot
- → Discussion moved to 5 September 2006(UTC)
Mina19 1919 bot request
PLEASE PLEASE PRETTY PLEASE! See my request for bureaucrattiness to see what I am all about. I just think that I could program a bot-above-all-bots bot that picks up all vandalism and nothing BUT vandalism!--
- That'd be really hard to do without a human controlling it. Tawkerbot and Antivandalbot are trying, however. 5 September 2006(UTC)
Add template to talk pages
Is there a bot that can add the {{Wikipedia:WikiProject Sharks/SharksTalk}} template to the top of all of the talk pages of the articles in Category:Sharks? And if there is one for that is there also one that can replace {{portalpar|Sharks}} with {{Sharksportal}} for all of the articles in Category:Sharks, and if it doesn't have {{portalpar|Sharks}} could it just add {{Sharksportal}} below the taxobox.
Is this easy to do? --
- Not a problem it is fairly easy for my bot 5 September 2006(UTC)
[[Pittsburgh]] --> [[Pittsburgh, Pennsylvania|Pittsburgh]]
I have created
- Easy to do. I'm putting in a bot request. 5 September 2006(UTC)
Bypassing old user template redirects
If someone could design a bot to replace things like "
- Will do with 7 September 2006(UTC)
Redirects to Connecticut towns
I would like a bot to check links to all Connecticut cities & towns and create the standard redirects [[town, CT]] and [[town (CT)]] if they haven't yet been created. I have noticed that these have not all been finished for Connecticut. --
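Assuming a worklist of town names and the usual "Town, Connecticut" article titles, the redirect pages would be generated along these lines (a sketch; the bot would still need to verify the target article exists and skip redirects that are already present):

```python
def redirect_pages(town):
    """Yield (title, wikitext) pairs for the two standard redirects
    to the 'Town, Connecticut' article."""
    target = f"{town}, Connecticut"
    for title in (f"{town}, CT", f"{town} (CT)"):
        yield title, f"#REDIRECT [[{target}]]"
```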
Redirect templates need to be substituted
According to
- I'll get on subst'ing the templates on 31 August 2006(UTC)
WikiProject banners on talk page
Can someone please write a bot that could tag articles in
Template Tagging Bot
Does anyone know of a Bot that I could request to add a project tag {{
- Will do with 7 September 2006(UTC)
Request for bot sorting Pages needing expert attention
I've also requested feedback on this idea from User:Beland:
"Hi,
I had an idea to sort Category:Pages needing expert attention according to expertise. That way, Wikiprojects could easily track articles needing expert attention in their field. I think expert attention will get more input from wikiprojects than general cleanup. I've already begun "Pages needing attention from expert in medicine" manually, but if a bot could detect categories, it could split up the expert-category entirely into subjects (if necessary, with a complete list of pages needing expert attention in place).
a) I'm not sure if you don't already plan to implement such functions into Pearle. b) If you like the idea, I'd be interested in running a clone of Pearle for this task. Or maybe Pearle could be expanded.
Anyway I have no experience with bots. "
--
Adding a template to talkpages
Hi everyone. I'm a member of wikiproject Writing Systems. A while back I created the template {{
- Being Worked out 8 September 2006(UTC)
Category:Protected deleted categories
There's been
- Someone with 8 September 2006(UTC)
Dates containing "th", "rd", and "st"
Per the
- Bot request.
- This bot is a Python bot.
- I use it mainly for interwiki.
- I use the bot in Korea (at kowiki).
- -- 12 September 2006(UTC)
AWB category removal
AWB request: http://en.wikipedia.org/w/index.php?title=Special:Whatlinkshere/Category:Evolution_Wikipedians&limit=500&from=0 - remove this category from these places; it's been deleted by CfD. CfD: http://en.wikipedia.org/wiki/Wikipedia:Categories_for_deletion/Log/2006_May_10#Category:Evolution_Wikipedians
Thanks!
eu interwiki on Protected pages
Berria requested:
- Please, can somebody (an administrator) add the interwikis to these kinds of templates (selected anniversaries templates) if it is possible? Only the month of January yet. I would do it myself but it's impossible. The interwiki of this day is [[eu:Txantiloi:Urtarrila 2]] and the next days are similar; only the number changes ([[eu:Txantiloi:Urtarrila 3]], [[eu:Txantiloi:Urtarrila 4]]... ...[[eu:Txantiloi:Urtarrila 31]]). Thanks in advance. If there is any trouble, my talk page is always open for questions. 31 August 2006(UTC)
Adding 31 slightly varying interwikis seems like the perfect job for a bot, so I'm posting it here...
- Bots do not run with administrator rights, so a bot cannot do that task. —8 September 2006(UTC)
A bot could create a list of proposed edits, and an admin could approve them manually, but it makes it a lot harder.
A while back Template:Cite journal was forked to create Template:Cite journal2, the only difference being that "In contrast to cite journal, cite journal2 omits the quotation marks around the article title." Since this fork, an option was coded into cite journal so that the quotation marks can be removed from individual usages of the template. Would a bot be able to change all usages of {{cite journal2}} to {{cite journal}}, copying over the existing information for each usage of the template and adding "|quotes=no" at the end? For example,
* {{cite journal2 | author=Könneke, M., Bernhard, A.E., de la Torre, J.R., Walker, C.B., Waterbury, J.B. and Stahl, D.A. | title=Isolation of an autotrophic ammonia-oxidizing marine archaeon | journal=Nature | year=2005 | volume=437 | pages=543-546}}
needs to be changed to:
* {{cite journal | author=Könneke, M., Bernhard, A.E., de la Torre, J.R., Walker, C.B., Waterbury, J.B. and Stahl, D.A. | title=Isolation of an autotrophic ammonia-oxidizing marine archaeon | journal=Nature | year=2005 | volume=437 | pages=543-546 | quotes=no}}
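A rough sketch of that rewrite in Python, assuming (as in the example) no nested templates inside the invocation:

```python
import re

def convert_cite_journal2(wikitext):
    """Turn {{cite journal2 | ...}} into
    {{cite journal | ... | quotes=no}}. The non-greedy match
    assumes no nested templates inside the invocation."""
    return re.sub(
        r"\{\{cite journal2\s*\|(.*?)\}\}",
        lambda m: "{{cite journal |" + m.group(1).rstrip() + " | quotes=no}}",
        wikitext,
        flags=re.DOTALL,
    )
```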
For comparison: cite journal2 renders the article title without quotation marks, while plain cite journal wraps it in quotation marks (hence the quotes=no parameter in the converted form).
{{
- Yup. I will do it. 14 September 2006(UTC)
- Done. I also put a deprecate notice on 15 September 2006(UTC)
- Nice work. Thanks. It's a pity that nothing like your MWiki-Browser is available for use on a Mac. :( 16 September 2006(UTC)
Fun Song Factory page
Table requested:
- On the Fun Song Factory page in Wikipedia can someone put a table for the TV Schedule (You will see this added on the page). I have tried to do this but got into a bit of difficulty.
Any help appreciated
- This is a question for 21 September 2006(UTC)
userbox
I've moved a few userboxes onto my user space, and I need a bot to update them.
- {{User trifecta}} to {{User:Lawilkin/UBX/WP:PCM}}
- {{User wikipedia/WPC}} to {{User:Lawilkin/UBX/WPC}}
- {{User wikipedia/Counter Vandalism Unit (alternative)}} to {{User:Lawilkin/UBX/CVU-alternative}}
- {{User wikipedia/Administrator someday}} to {{User:Lawilkin/UBX/admin someday}}
- {{User WikiProject Preclinical medicine}} to {{User:Lawilkin/UBX/WP:PCM}}
Thanks!
- As long as the original templates have the GUS template (which all of those do), they are listed at 21 September 2006(UTC)
- Gotcha, thanks! 21 September 2006(UTC)
List of all pages touched by one editor
I am not certain if this is the right place to post this question (if it is not, please move it to the right place and drop a note on my talk page).
The editor Sheynhertz-Unbayg has recently been banned and now his contributions (mostly weird "onomastics" pages that are just concatenations of several disambiguation pages, see Lust (onomastics) for a typical example) need to be cleaned up. To ensure that all pages he has edited do get checked, I would like to have a bot- or script-generated list of all pages he has touched. As he has more than 20,000 edits, manually created lists like the one here are probably incomplete. Also, a centralized list would help avoid duplicated efforts from the people who check the pages.
Please create a list of all pages touched by this editor and drop it somewhere, for example at User:Kusma/Sheynhertz/contribs. A good format would be a bulleted list with wikilinks to the pages, perhaps with a "redirect=no" or a mentioning of the redirect target for the numerous redirects created.
In addition, a bot could be used to check all of the interwiki links created by Sheynhertz. I have already removed dozens of links to nonexistent articles on the Japanese Wikipedia, and I expect that many more of his interwikis are wrong.
Thank you for any help or insight you can offer,
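For illustration, the list could be pulled from the MediaWiki API (list=usercontribs) and formatted as requested. A standard-library-only sketch against the current api.php endpoint, which postdates this request; at the time, a toolserver query or a dump scan would have been used instead:

```python
import json
import urllib.parse
import urllib.request

API = "https://en.wikipedia.org/w/api.php"

def pages_touched(username):
    """Collect every distinct page title in a user's contributions,
    paging through the API 500 entries at a time."""
    titles = set()
    params = {"action": "query", "list": "usercontribs",
              "ucuser": username, "ucprop": "title",
              "uclimit": "500", "format": "json"}
    while True:
        url = API + "?" + urllib.parse.urlencode(params)
        with urllib.request.urlopen(url) as resp:
            data = json.load(resp)
        titles.update(c["title"] for c in data["query"]["usercontribs"])
        if "continue" not in data:
            break
        params.update(data["continue"])
    return sorted(titles)

def as_bullet_list(titles):
    """Format titles as the requested bulleted list of wikilinks."""
    return "\n".join(f"* [[{title}]]" for title in titles)
```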
- My bot can't do this, but someone else's might. You might also want to see the pages linked to from here, as it seems like that editor has already made a big list of his contributions. —21 September 2006(UTC)
- Interiot just did it for me. Thanks for the idea, though! 21 September 2006(UTC)
Bot for test removal
There ought to be a bot that is automated to remove common things used for testing, such as Bold text, Italic text, [[Link title]], [http://www.example.com link title], [[Media:Example.ogg]],
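Detecting (as opposed to removing) these leftovers is straightforward; a sketch, with the pattern list copied from the placeholder strings the edit toolbar inserts, so a human can still decide how to clean up:

```python
# Default placeholder strings inserted by the edit toolbar.
TEST_STRINGS = [
    "'''Bold text'''",
    "''Italic text''",
    "[[Link title]]",
    "[http://www.example.com link title]",
    "[[Media:Example.ogg]]",
]

def find_test_edits(wikitext):
    """Return the toolbar placeholders present in the text."""
    return [s for s in TEST_STRINGS if s in wikitext]
```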
- No, that should better be done by hand, as such articles have to be checked for other damage done by the editing experiment. Sometimes such an article needs a reversion; sometimes it can be deleted right away (talk pages), replaced by the welcome message (user talk of newbies), or it may have been an unintentional click on the toolbar while doing good edits. Only in that last case is the plain removal correct. I am doing these cleanups manually almost daily... 20 September 2006(UTC)
- Yeah, me too. 22 September 2006(GMT).
- Tawkerbots usually pick them up.--22 September 2006(UTC)
Article and image talk pages
Is there any way a bot can go through and look for all talk pages without an associated article/image and tag them with
- Good idea! I'm going to look into it. —22 September 2006(UTC)
- The problem with doing this via bot is that you run the risk of tagging and retagging (depending on how often the bot is run) talk pages that are being used for page research/construction. See Talk:Chiba-Ken for an example (the only one I can think of off the top of my head). There are also a number of pages which have been deleted but contain information related to the deletion or an eventual recreation (explicitly exempted from the CSD criterion).
- Now, what I think would be useful is if someone took an offline data dump and created a list of all such pages for human examination and possible tagging. It'll flood CSD like whoa when it's processed, but still has that human intervention necessary. -- 22 September 2006(UTC)
- I see your point. What about an altered template, such as this? —22 September 2006(UTC)
This page may meet Wikipedia's speedy deletion criteria, as it is a talk page of a page which does not exist (CSD G8 ).
This notice was added by a bot. There are three reasons why a talk page may exist without an article, which are:
Only delete this page if it clearly does not meet any of those three criteria. |
- A link to the articles deletion log would be helpful for admins.--22 September 2006(UTC)
- Agreed, that looks pretty good. Another option may be a "whitelist" where pages can be added that the bot won't tag. The offline data dump is also a fine option. We may also want it to find all other namespace pages with only talk pages just for review. Wikipedia space talk pages are the most likely to be exempt from speedy deletion, but it would still be useful and we may still find some of those eligible for deletion.22 September 2006(UTC)
- Actually, I was thinking of something similar to 22 September 2006(UTC)
- Another thing to note is that there are a large number of MediaWiki images that have had Wikipedia talk pages accidentally created for them. Such pages are speedyable, but any relevant content should first be merged to the MediaWiki talk page. 22 September 2006(UTC)
- I've added a 22 September 2006(UTC)
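For the offline-dump approach suggested above, the core check is a pure title calculation: map each talk title to its subject title and keep those whose subject is missing. A rough Python sketch (the title lists would come from the dump; reading the dump is omitted, and the result is a review list, not an auto-tagging run):

```python
def subject_title(talk_title):
    """Map a talk page title to its subject page title, or None
    if the title is not in a talk namespace."""
    if talk_title.startswith("Talk:"):
        return talk_title[len("Talk:"):]
    ns, sep, rest = talk_title.partition(" talk:")
    if sep:
        return "%s:%s" % (ns, rest)
    return None

def orphaned_talk_pages(talk_titles, existing_titles):
    """Talk pages whose subject page does not exist -- candidates
    for human review, per the discussion above."""
    existing = set(existing_titles)
    return [t for t in talk_titles
            if subject_title(t) and subject_title(t) not in existing]

print(orphaned_talk_pages(
    ["Talk:Chiba-Ken", "Image talk:Example.jpg"], ["Chiba-Ken"]))
```

A whitelist, as suggested, would just be one more set subtracted from the result.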
Category:Articles to be merged
I was wondering if a bot could do the same for
- I'd be able to do this (at least put everything into a single month now, and into later months starting in October). I'm not able to find the month that the tag was applied...sorry. Fortunately, someone could always send a bot through September 2006 and organize the articles. I'll put in a bot request. 21 September 2006(UTC)
- Thanks. That already would help a lot. 25 September 2006(UTC)
- Thanks. That already would help a lot.
Indiana cityboxes
A while ago, I made maps for all of the cities and towns in Indiana, and started semi-automatically adding cityboxes to each article. I got part way through the 'L's, but I have not worked on it for a year now. It would be helpful for someone to finish the rest of the pages. The red-dot maps are located here [11]. -
- Bots can't really add city boxes to pages (where would they get the info?); the only thing they could do is add the map, but that would look kind of funny in an infobox with no other information. —21 September 2006(UTC)
- Most of the information you can get from the text of the page itself - 21 September 2006(UTC)
- If there is a better place to request "semi-automated bot + a little research", let me know or move this request, though I can't imagine doing this task without at least a little bot participation. - 21 September 2006(UTC)
- Never mind, I will take care of these - 27 September 2006(UTC)
Split off Bat-Stubs
Hi, could someone please replace mammal-stub with bat-stub for the articles listed at http://en.wikipedia.org/wiki/User:Eug/Bat-Stubs ?
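The per-page edit here is a one-line swap; a sketch (a bot would run this over every title on the User:Eug/Bat-Stubs list, with the page fetching left out):

```python
def retag(text):
    """Swap the stub template on one article's wikitext."""
    return text.replace("{{mammal-stub}}", "{{bat-stub}}")

print(retag("''Foo bat'' is a bat.\n\n{{mammal-stub}}"))
```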
- Working on it. - 27 September 2006(UTC)
Dates containing "th", "rd", and "st"
- I'm relisting this since nobody responded to it the first time before it got archived.
Per the
- Sorry this didn't get responded to the first time, I read it but was thinking about it and forgot to respond. I don't know if this is a good job for a bot, because of things like 21 September 2006(UTC)
- Thanks for the response, though I'm a bit confused by it. Are you using 21 September 2006(UTC)
- OK, you're probably right. —21 September 2006(UTC)
- So does this mean that it will/can get done? 22 September 2006(UTC)
- Not by my bot, but if someone will pick it up then it can get done. You could also do it yourself: do you use 23 September 2006(UTC)
- I don't run Windows, so no go on the AWB. Where do I find someone with a bot that can do it? Will they possibly be reading this? 25 September 2006(UTC)
- This doesn't seem that hard, the worst part looks like making the list of articles. I'll see what I can get done with AWB. -30 September 2006(UTC)
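If someone does pick this up, the core edit is a regular-expression substitution. A hedged sketch (month-day order only; piped links and day-month dates would need extra patterns, and context like quoted titles still needs a human eye):

```python
import re

# Matches wikilinked dates like [[October 3rd]] and strips the
# ordinal suffix, per the date-formatting guideline.
ORDINAL_DATE = re.compile(
    r"\[\[(January|February|March|April|May|June|July|August|"
    r"September|October|November|December) (\d{1,2})(?:st|nd|rd|th)\]\]")

def fix_ordinal_dates(text):
    return ORDINAL_DATE.sub(r"[[\1 \2]]", text)

print(fix_ordinal_dates("Born on [[October 3rd]]."))
```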
Diacritics bot
Following
This could prove useful because many articles lack these redirect pages and are hard to find for those who do not possess the diacritics on their keyboards. Recently I had to create redirect pages for nearly all Portuguese municipalities. A bot could perform these tasks much more efficiently. If there is someone interested in creating this bot, thank you in advance. Regards.--
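The title-to-redirect mapping described here is straightforward with Unicode decomposition. A minimal Python sketch (generating the candidate redirect names only; actually creating the pages, and skipping titles where the plain name already exists, is left to the bot framework):

```python
import unicodedata

def plain_title(title):
    """Strip combining diacritics, e.g. 'Agueda' from 'Águeda' --
    the name the redirect page would be created at."""
    decomposed = unicodedata.normalize("NFD", title)
    return "".join(ch for ch in decomposed
                   if unicodedata.category(ch) != "Mn")

for name in ["Águeda", "São João da Madeira"]:
    plain = plain_title(name)
    if plain != name:
        # The bot would save "#REDIRECT [[<name>]]" at the plain title.
        print("%s -> #REDIRECT [[%s]]" % (plain, name))
```

Note this only handles combining diacritics; letters like ø or ß would need an explicit substitution table.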
1500 Mormon redirects
Hi. I wonder if there's a bot with the time and inclination to go around all the articles linking to
- Just a comment, wouldn't the correct article title not include the "the"? The MoS says this: Avoid the definite article ("the") and the indefinite article ("a"/"an") at the beginning of the page name unless the article would be capitalized in the course of a sentence such as 28 September 2006(UTC)
- Well, in this case, the word "The" is part of the official title of the church. Kind of like 28 September 2006(UTC)
- A point raised by your comment though, is that maybe any bot completing this request should search for instances of "the 28 September 2006(UTC)
- There are also those articles that use a capital D for day (Latter-Day) which could be fixed, and others which have a lowercase the and a capitalized one, as in "the The Church..." --28 September 2006(UTC)
- To clarify, you guys want to switch all links (in articles) from 28 September 2006(UTC)
- Yeah, let me be precise: every occurrence of the text:
- the [[Church of Jesus Christ of Latter-day Saints]]
- or
- The [[Church of Jesus Christ of Latter-day Saints]]
- should be changed to:
- [[The Church of Jesus Christ of Latter-day Saints]]
- If there's no "the" in front of the link, then those should probably be dealt with by hand, because it could mess up the surrounding grammar to start inserting definite articles. Still, I think the edit I've described above will fix the vast majority of those redirecting links. Does that make sense? Please feel free to ask for any additional clarification, and thanks for offering to help. -28 September 2006(UTC)
- There are also 90 pages[12] that link to Church of Jesus Christ of Latter-Day Saints, a redirect page which seems to be there in case someone uses improper capitalization. So, using the system above, pages using:
- the [[Church of Jesus Christ of Latter-Day Saints]]
- or
- The [[Church of Jesus Christ of Latter-Day Saints]]
- should also be changed to:
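Both variants spelled out above reduce to a single substitution. A sketch:

```python
import re

# Covers "the"/"The" and the miscapitalized "Latter-Day" redirect.
LDS = re.compile(
    r"[Tt]he \[\[Church of Jesus Christ of Latter-[Dd]ay Saints\]\]")

def fix_lds_links(text):
    # Links with no leading "the" are deliberately left for
    # hand-fixing, as discussed above.
    return LDS.sub("[[The Church of Jesus Christ of Latter-day Saints]]",
                   text)

print(fix_lds_links(
    "a member of the [[Church of Jesus Christ of Latter-day Saints]]"))
```

Piped links ([[Church of ...|the Church]]) are not caught by this pattern and would need separate handling.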
Firefoxbot
I would like permission to run
- Request that on 6 October 2006(UTC)
WP:RD
We still need an archive bot, our current system is starting to break down, and the last bot we had was CrypticBot, so we've been doing it all by hand for quite a while now. Since our old request has been long since archived off this page (by an archiving BOT, oh the irony o:) I decided to repost a less involved version of the same request here--
- A link to the complete description. I'm unable to do it with my bot, but maybe someone else will pick it up. —1 October 2006(UTC)
- Quick breakdown: Each day it would remove the oldest header from the top of the page, let's say for Wikipedia:Reference desk archive/Computing/2006 October 3, and add "<noinclude>{{subst:Reference desk navigation|3|October|Science}}</noinclude> " to the top of the page, and place the content for October 3, on that page. The older description is a little dated--1 October 2006(UTC)
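The page-naming and header steps in that breakdown are mechanical. A rough Python sketch (function names are mine; the actual section-moving logic is the hard part and is not shown):

```python
def archive_title(desk, year, month, day):
    """Archive page name in the form used above, e.g.
    Wikipedia:Reference desk archive/Computing/2006 October 3."""
    return "Wikipedia:Reference desk archive/%s/%d %s %d" % (
        desk, year, month, day)

def archive_header(day, month, desk):
    """Navigation line the bot would prepend to the archive page."""
    return ("<noinclude>{{subst:Reference desk navigation|%d|%s|%s}}"
            "</noinclude>" % (day, month, desk))

print(archive_title("Computing", 2006, "October", 3))
print(archive_header(3, "October", "Science"))
```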
Revisal
Here is a new REVISED bot request. I have created a demo reference desk that could be used to test implementation of an RD bot using a slightly updated layout. It will be proposed by a few of the RD editors (including me) only once a bot is working for it, because the increased number of desks would be too many to manage manually. Please read User:Freshgavin/Sandbox/Reference_desk_bot_request for more details about the changes that will be proposed, and for a detailed summary of the requirements for the bot.
The reference desk now relies on a few diligent editors for manual archiving, and there are a lot of people who would really appreciate a bot to help us with this task. Any suggestions or ideas would be greatly appreciated. Questions and comments about the new layout should be posted on
- I'm happy to give this request a go - I'll look at writing the code over the coming week (fitting it in with other commitments) - then I'll test and fix (and so on) until submitting to 8 October 2006(UTC)
Wikipedia bots on AIM, MSN, and Google Talk, and SMS
Off-topic question: There's an MSN bot that lets you get information from
- Very interesting; I never thought about something like that before. Not quite sure how to implement it though. — 5 October 2006(UTC)
- 8 October 2006(UTC)
Template:User iso15924
The templates {{user ara}}, {{user Arab}}, {{user cyr-1}}, etc. shall be replaced with the parameterized template:user iso15924. Parameters are given below. Some templates may be included via Template:Babel. AFAIK these cannot be replaced by a bot and will be done by hand. The request was developed at
- {{user cyr-1}} -> {{user iso15924|Cyrl|1}}
- {{user cyr-2}} -> {{user iso15924|Cyrl|2}}
- {{user cyr-4}} -> {{user iso15924|Cyrl|4}}
- {{user cyr}} -> {{user iso15924|Cyrl|5}}
- {{user grk-1}} -> {{user iso15924|Grek|1}}
- {{user grk-5}} -> {{user iso15924|Grek|5}}
- {{user grk}} -> {{user iso15924|Grek|5}}
- {{user cyrl}} -> {{user iso15924|Cyrl|5}}
- {{user Cyrl-1}} -> {{user iso15924|Cyrl|1}}
- {{user Cyrl-2}} -> {{user iso15924|Cyrl|2}}
- {{user Cyrl-4}} -> {{user iso15924|Cyrl|4}}
- {{user Cyrl}} -> {{user iso15924|Cyrl|5}}
- {{user Grek-1}} -> {{user iso15924|Grek|1}}
- {{user Grek-5}} -> {{user iso15924|Grek|5}}
- {{user Grek}} -> {{user iso15924|Grek|5}}
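For whoever runs the bot, the table above reduces to a lookup. A sketch (plain string replacement suffices because every old template name is immediately closed by `}}`, so no prefix can match a longer name):

```python
# The replacement table above as a dict.
MAPPING = {
    "user cyr-1": "user iso15924|Cyrl|1",
    "user cyr-2": "user iso15924|Cyrl|2",
    "user cyr-4": "user iso15924|Cyrl|4",
    "user cyr": "user iso15924|Cyrl|5",
    "user grk-1": "user iso15924|Grek|1",
    "user grk-5": "user iso15924|Grek|5",
    "user grk": "user iso15924|Grek|5",
    "user cyrl": "user iso15924|Cyrl|5",
    "user Cyrl-1": "user iso15924|Cyrl|1",
    "user Cyrl-2": "user iso15924|Cyrl|2",
    "user Cyrl-4": "user iso15924|Cyrl|4",
    "user Cyrl": "user iso15924|Cyrl|5",
    "user Grek-1": "user iso15924|Grek|1",
    "user Grek-5": "user iso15924|Grek|5",
    "user Grek": "user iso15924|Grek|5",
}

def replace_userboxes(text):
    for old, new in MAPPING.items():
        text = text.replace("{{%s}}" % old, "{{%s}}" % new)
    return text

print(replace_userboxes("{{user cyr-1}} {{user grk}}"))
```

Pages that include the templates via Babel parameters would, as noted, still need hand-editing.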
- I completed all replacements on user pages that do not use Babel. - 7 October 2006(UTC)
- And I have done all Babel replacements. 7 October 2006(UTC)
thanks a lot for your help!
page creation bot
I'm proposing a bot that patrols articles for creation and starts new articles for unregistered users. I understand that this defeats the purpose of the restriction that only registered users can create articles, and that this bot can be easily abused. However, this bot may not be such a bad idea if the following measures were in place:
- the bot creates the article if and only if it can find enough Google matches
- anonymous users cannot use the bot to create more than three pages in one hour
- the bot declines requests from blocked IPs
Anonymous editors would also be able to create pages by accessing the bot interface on an off-wiki site.
Any thoughts on this? --
- I wouldn't recommend it. It does completely defeat the purpose of not letting unregistered users start articles. Someone with an AOL dynamic IP could create thousands of articles easily on things that are "googleable" but not deserving of Wikipedia articles. —9 October 2006(UTC)
- Personally, it sounds like something that couldn't work. Google matches are a pretty bad indicator of notability - it seems way too much like a one-way trip to CSD for most of these articles. Good thought, but I think the risks outweigh the benefits on this one -- 10 October 2006(UTC)
- I think that AfC needs human patrolling to find worthy articles - neologisms may have huge numbers of gHits, but are not permitted on WP. However, don't despair! I'm in the process of writing a program for closing AfDs, which a list of approved users will be able to use (regular AfC patrollers). I was having a problem with it, but I think that may be fixed now :) (hoping..) I'll try to get a BETA release out soon. 10 October 2006(UTC)
Assessment
I'd like to ask a bot to help the
Infobox status
At
- I'll do this 13 October 2006(UTC)
Bot for WikiProject Indonesia
Hello everyone, how are you going? I'd like to request a bot for
- Hi - I'll look at doing this for you :) 15 October 2006(UTC)
I'm looking for some help converting all the article titles, links, and non-linked mentions of a great number of Japan-related articles which, when macrons (e.g. ō and ū) are taken into account, need to be respelled. The greatest congestion of these, I think, comes from the ships of the Imperial Japanese Navy. Right now, I have a very short list of names that need to be changed, but as I look into each individual ship's Japanese name and how it ought to be spelled, I'll be adding to the list of those that need renaming. The number of ships isn't too great - those that need renaming hopefully do not number more than 30-50. But if each of those is linked to by 10 articles, that's 300-500 right there. Please let me know what to do or who to talk to. Thanks for the help.
- P.S. The current changes that need to be made are Kotetsu-->Kōtetsu, Hyuga-->Hyūga, Soryu-->Sōryū, Unryu-->Unryū, Fuso-->Fusō, Kongo-->Kongō, Kaiyo Maru-->Kaiyō Maru, and Hiryu-->Hiryū. All of these have an article named either "Japanese battleship X" or "Japanese aircraft carrier X", but that is not the only context in which they will appear; they will be referred to simply by name (e.g. "the Unryu was sunk at the battle of such-and-such...") or as "X-class aircraft carrier" or a number of other contexts. Some such as Hyuga and Kotetsu may appear quite often in non-naval related contexts. But as far as I am aware, and I can double-check this, both of those words should be respelled (with macrons added) in any and all contexts. Thanks once again. 15 October 2006(UTC)
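Since the request says these spellings take macrons in all contexts, the listed renames could start as a blind word-boundary substitution. A hedged sketch (safe only if the all-contexts claim holds; page moves and piped links still need separate handling):

```python
import re

# The renames listed in the P.S. above.
MACRONS = {
    "Kotetsu": "Kōtetsu", "Hyuga": "Hyūga", "Soryu": "Sōryū",
    "Unryu": "Unryū", "Fuso": "Fusō", "Kongo": "Kongō",
    "Kaiyo Maru": "Kaiyō Maru", "Hiryu": "Hiryū",
}

def add_macrons(text):
    # Longer keys first so "Kaiyo Maru" is handled as one unit.
    for plain in sorted(MACRONS, key=len, reverse=True):
        text = re.sub(r"\b%s\b" % re.escape(plain),
                      MACRONS[plain], text)
    return text

print(add_macrons("the Unryu was sunk at the battle of such-and-such"))
```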
Page update bot
Would there be any way to create a bot which would be able to indicate which pages have been added to the listing of pages here or elsewhere, over, say, the last month? Maybe a two-section format listing all the pages in one column and another listing either those which existed (or were revised) before a given date or were created after a given date might be easiest. Also, the bot could potentially be used to determine which pages are "stable", which is to say, not modified over a given period. Thanks for your response, positive or negative.
- I'll take a look at doing this - would you be able to tell me (on my talk page) how many days ago the last edit is to have been for the page to be considered stable? (I'll be a while fulfilling this request - perhaps just over a week because of other commitments). Thanks 16 October 2006(UTC)
Adding {{UEFA leagues}} to articles
I'd like a bot to go through all the articles linked to in this template and add this template to the bottom with {{
- I can do this manually with AWB using some fancy regexes, just let me set it up 19 October 2006(UTC)
- 48 edits completed using AWB at 19:20, 19 October 2006(UTC)
- Well, thanks, but there are a couple of errors I noticed from checking a few of the pages:
- Some of the pages haven't been touched at all (Norwegian Premier League, to name one; FA Premier League too, though I added it manually just now)
- Some have had the box added around other boxes that were already there (League of Wales, for example)
- I wonder if there's a way you can fix these easily using some more fancy programming or maybe it's best to just go through them manually? - 19 October 2006(UTC)
Deleted categories
Just like repeatedly-recreated unwanted articles are tagged with {{
- I could have my bot do it if you are interested. 18 October 2006(UTC)
Welcome bot
Do we have a welcome bot, that adds welcome templates to new user pages? Seems like a good idea to me, it was discussed on the mailing list somewhere. Mind, it seems such a good idea it's probably been discussed before.
- I think a welcome bot is a bad idea, as it will reduce the number of personal welcome messages. If I see a redlinked talk page for a user and see that he is making good contributions, I welcome him and thank him specifically for his contributions. I don't see much point in welcoming users just for signing up. In any case, welcoming can be done through a MediaWiki message at account creation instead of by a bot, saving lots of server space that would otherwise be used for all those indefblocked username vandals. 20 October 2006(UTC)
- One place where this was discussed is 20 October 2006(UTC)
- To say some more, I think we should solve the social problem that newbies aren't welcomed by encouraging RC patrollers and New Page patrollers to welcome good-faith users personally (ideally with a message that acknowledges their edits), not by applying a technical solution that doesn't actually have any of the social components that welcoming should have. 20 October 2006(UTC)
phonetic symbols to { {IPA} } template
I've learned that the { { IPA } } template is used to enable phonetic symbols to appear as they should, and not as little squares, in IE6. A bot to do a mass conversion of "hard" phonetic symbols to { { IPA } } template-formatted phonetic symbols would be useful.
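The conversion itself is a find-and-wrap pass. A very rough Python sketch (illustrative only: the character range here is just the IPA Extensions block, the real symbol list is wider, and the already-templated guard is crude):

```python
import re

# Runs of IPA Extensions characters (U+0250-U+02AF).
IPA_RUN = re.compile("[\u0250-\u02af]+")

def wrap_ipa(text):
    if "{{IPA" in text:
        return text  # crude guard: page already uses the template
    return IPA_RUN.sub(lambda m: "{{IPA|%s}}" % m.group(0), text)

print(wrap_ipa("pronounced \u0259"))
```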
- Can you give me a list of the hard phonetic symbols, and I'll load up a database dump and see how many edits need to be made? I'd also like an example to help set up. Two options are a bot and manual editing using a tool; the bot's probably smarter, but would need a BRFA, and User:STBot is currently busy overnight, so I couldn't even run it for another week. Reply here or my talk 19 October 2006(UTC)
- OK, I see how it works; I'll start working on a count later. 19 October 2006(UTC)
- Count is at 62k edits. I'll confer with some admins later to determine whether or not it's worth it, though the true number is probably less than that, as it includes pages already using the IPA template (subtract 10-20k). Since it will add support for an otherwise porked browser as far as I can tell, it should go. Only one question - are you bringing the help, or the cookies? 19 October 2006(UTC)
- Uh, I don't really understand any of the technicalities of this. It sounds like you found the answer to your first question. Just in case, the IPA symbols are the last line of hyperlinked characters in the Insert/Sign your name/characters rectangle at the bottom of each "Edit this page" page. I have plenty of cookies on my hard drive that I can upload if you're interested. Anyway, thanks very much for your work on this. 22 October 2006(UTC)
POV Bot
Can someone please write for me a bot that will pick up NPOV breaches and is shut-off compliant?
Thanks
- An NPOV bot is not feasible, as there is too much of a human factor.22 October 2006(UTC)
Red Link Bot
Can someone write for me a bot that removes red links, red templates, and red categories from articles (except
- IMO, this is a very bad idea. See Wikipedia:Red link for some reasons why red links are good.
- If you remove all red links, then the person who writes a new article has to go around to every page that may conceivably contain a link to that page and make new links. If the red links are left alone and that same article is written, then they will automatically turn blue. Red links aren't the devil!! 26 October 2006(UTC)
- Red links to plausible article titles are a good thing. It inspires people to create articles and create them under the correct name, and it means that when a good article is created there are likely to be at least a few useful incoming links, so people will be able to find the new article. But people should be careful to only create redlinks to possible articles, same as always. A bot to just remove all redlinks is a bad idea. --26 October 2006(UTC)
- Touché. --27 October 2006(UTC)
small batch of double redirects
Hi. At this page, you can see a list of links pointing at
- No problem, getting to it. —26 October 2006(UTC)
Major Highways
Change all "Major Highways" titles to "Major highways", particularly in counties. --
You mean page titles, or in text? Why? Is there a consensus/vote (I SAID THE V WORD ZOMG!) somewhere?
- In the titles. I don’t know about consensus; there hasn't been a vote, but the template on 1 November 2006(UTC)
Auto-signature bot
I have noticed that perhaps six out of seven anonymous users who leave comments on talk pages do not sign their posts properly. I have usually added the {{
- Do-able but not 100% sure of the demand on resources it would make -- 17 October 2006(UTC)
- To help gauge how big the demand would be, what is the number of unsigned comments made, say, per minute? (I can easily see this being well over reasonable bot editing limits.) Perhaps an extension to MediaWiki is more appropriate? 17 October 2006(UTC)
- Well, within the last two hours there was an average of 110 anonymous edits per hour in the talk namespace. Some of these were edits instead of new comments. Also excluding signed comments, I'd say there are some 90 matches / hour, or 1.5 matches / minute. Registered users seem to edit the talk namespace maybe five times a minute (300 times / hour), but if you count only those who have not created a user page (the risk group of new users that may leave unsigned comments), the number is much smaller, about 15 / hour. So all in all, rounding a little upwards, I'd say the bot would have to make ~120 edits / hour or 2 edits / minute. I believe this is about half the amount of edits that the AntiVandalBot makes.
- I'm not familiar with the MediaWiki extensions you mentioned. However, I can accept anything that works. :) --18 October 2006(UTC)
- I thought of this same idea last night. We could really use a bot signing for anonymous users. 18 October 2006(UTC)
- A disadvantage of auto-signing edits with a bot is that a potentially nonsensical/vandalistic entry will no longer be the top edit in people's watchlist, and will require more effort reverting or removing. Also, you'll have to know which edits do not require signing (addition of templates, edits to headers or general information about the talk page, archiving, refactoring etc.), which is a rather nontrivial and probably impossible task. I would certainly not want a bot that signs my edits. 18 October 2006(UTC)
- Another problem is that many pages need signatures more than talk pages do. On AFDs, signing is usually necessary, but they are not in the talk namespace. On many article talk pages, it doesn't matter at all which anon says what, as most discussions with anons do not involve more than one at the same time, and they will sign manually if necessary for identification. Plus, an AOL anon signing with a nickname is signing in a more useful way than our autosig with his always-changing IP address. 18 October 2006(UTC)
- If the bot gets a bot flag, it won't even be shown on people's watchlists, so no problem there. The first rule could be that if a new paragraph is created, then add a signature; otherwise treat it as an edit. The paragraph should also not start with a {{ and not end with a }}, nor should the signature be added to paragraphs that already end with one. The bot could also be made to monitor AFD pages.
- The signature has other functions besides identifying the user who left the message. It separates different messages from each other and tells you the date and time the messages were left. --20 October 2006(UTC)
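The rules sketched in that comment could start life as a simple predicate; a minimal Python sketch (heuristics only, deliberately conservative, and the "(UTC)" check is my own crude stand-in for detecting an existing signature):

```python
def needs_signature(added_text):
    """Rough test from the rules above: sign only brand-new
    paragraphs that are not templates and not already signed."""
    text = added_text.strip()
    if not text:
        return False
    if text.startswith("{{") and text.endswith("}}"):
        return False  # pure template addition
    if "(UTC)" in text.splitlines()[-1]:
        return False  # last line already carries a timestamp
    return True
```

As the objections above note, refactoring, archiving, and header edits would still slip through any rule this simple.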
- Er, when I have "hide bots" turned on, I don't see the edit before the bot edit on my watchlist, the page disappears from my watchlist completely (and I want to see what the interwiki bots do anyway, so I don't want to hide bots). I can't use the "extended watchlist" because I have too many high-volume pages on my watchlist, and it sucks anyway, as I need an extra click to see whether the top edit summary at 20 October 2006(UTC)
- Hmm, if that's how it works then it sounds like a bug in the MediaWiki software. --20 October 2006(UTC)
Why not just write it into the program (an auto signature)? If it's not an option to not leave a signature, then a bot wouldn't be needed in the first place. --
- In case of simple edits or addition of certain templates one will not want to leave a signature. --4 November 2006(UTC)
Interlanguage Commons image suggestion bot
How about a manually summoned bot that can suggest images for an article based on images from Commons that exist on copies of the page on other language wikis? --
Bot to help with WP:COMIC assessments
What I'm after is either guidance on how to write a bot and what I need to run it, or perhaps someone to set up a bot that would run through the various comic stubs categories and tag them as stubs for the 1.0 assessment. I get some webspace with my ISP that includes CGI space; I don't know if that's enough to host a bot, but I'd be interested in doing it if someone would hold my hand. Otherwise, if that's impractical or impossible, I'd appreciate someone taking it on. The categories are
- I can do the tagging, but do you really need a subpage? I could just put that in the edit summary. 30 October 2006(UTC)
- Please don't bot-create the subpage: otherwise we'll end up with several hundred thousand sub-pages saying nothing but "automatically assessed as a stub-class article due to being a stub", as well as several hundred thousand article talk pages saying much the same thing. The "auto-assessed" surely says it all; if people want to expand on that when they confirm (or deny) the auto-assessment, they can do so at their leisure. 7 November 2006(UTC)
Standardizer bot
I'm not sure if this would be a good idea or not, but perhaps it would be possible to build a robot to standardize Wikipedia pages (make their formatting similar).
Some points would be:
- Convert American spellings to English or vice versa.
- Capitalize words that should obviously be capitalized; for example, the beginning of a sentence.
- Uncapitalize words that should obviously not be capitalized.
- Convert accented characters to their normal equivalents (for example, 'é' to 'e').
I'd be interested to see other people's points on this.
- Re 1 & 4: these should be left alone; American and English are both the same language, and the accents are there for a reason 3 November 2006(UTC)
- Re 2 & 3, how would the bot know where the beginning of a sentence is or which words are improperly capitalized? For the first, if a sentence were to have a phrase such as "...percent of U.S. citizens...", how would the bot know that the word "citizens" wasn't the beginning of a new sentence? For the second, how would the bot know that the word in question isn't part of a band name or something like that? For instance, Red Hot Chili Peppers is a band and thus those words are capitalized, but a bot wouldn't know that it's a band name and would therefore uncapitalize the words. 3 November 2006(UTC)
- I suppose you're both right. I was thinking of human interaction though: the bot finds keywords it's not sure about and the human checks them. I suppose this could take too long and would only be useful for individual pages.3 November 2006(UTC)
- It might not be exactly what you described, but the 7 November 2006(UTC)
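The abbreviation pitfall described above is easy to demonstrate. Here is a minimal Python sketch; the rule and the sample sentence are illustrative, not from any actual bot:

```python
import re

# Naive rule: capitalize any letter that follows a period and a space.
# A standardizer bot would need a heuristic like this, and it fails on
# abbreviations exactly as described above.
def naive_capitalize(text):
    return re.sub(r'(\. )([a-z])',
                  lambda m: m.group(1) + m.group(2).upper(), text)

# "U.S." is wrongly treated as the end of a sentence, so "citizens"
# gets capitalized.
print(naive_capitalize("about 40 percent of U.S. citizens agree."))
```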
Bot on wikispecies
I would like to request a bot that can touch about 80,000 pages on Wikispecies. I am one of the admins on Wikispecies, and we're going through some major changes. We do have one registered bot, but it stopped working for an unknown reason. Perhaps a 'techy' is able and willing to do some standard changes. In principle it would need to remove the '::::' colons from taxonavigation sections. Perhaps also a check on a certain layout, and if it does not fit the standard layout, add a category (or fix the issue if possible). Help would be highly appreciated. --
- Well, I can write a quick thing for 3 November 2006(UTC)
- Ok, the colon thing is functional simply by changing :::::::Species: to Species:. 3 November 2006(UTC)
- Note here it can also be ::::Subspecies, ::::Regnum, :Superregnum, :::Subregnum, or ::Subregnum, and many more :) --3 November 2006(UTC)
- ok, how about ^:+([KPCOFGSR]) -> \1 ? (that is, if it works, I'll try it out in 5 seconds) 3 November 2006(UTC)
- Yep, that works. Generating list... 3 November 2006(UTC)
- That ended up hitting interwikis, now I'm using [^(\w\w)]:+([KPCOFGSR]) 3 November 2006(UTC)
I have some annoying hiccups sometimes, but that seems quite normal. So far I have received 20,000 e-mails from your edits :) Is that how many you did? --
- 20,000? WHY! That's probably about right. I have 20k pages left to do, then I wait for a database dump and re-scan it 6 November 2006(UTC)
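The line-anchored version of the regex above can be sketched in Python. The rank-letter class mirrors the [KPCOFGSR] class from the discussion, and the sample page text is invented:

```python
import re

# Strip leading colons only at the start of a line and only when a
# taxonomic rank follows (Kingdom/Regnum, Phylum, Classis, Ordo,
# Familia, Genus, Species, and their Sub-/Super- variants).  The
# MULTILINE anchor is what keeps interwiki links like [[fr:Lion]]
# untouched.
RANK_RE = re.compile(r'^:+(?=[KPCOFGSR])', re.MULTILINE)

def strip_rank_colons(wikitext):
    return RANK_RE.sub('', wikitext)

page = ":Superregnum: [[Eukaryota]]\n::::Species: [[Panthera leo]]\n[[fr:Lion]]"
print(strip_rank_colons(page))
```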
A double redirect bot
I've just come back from fixing about 16 double redirects. Could someone write up a bot for me that fixes double redirects (If it's possible)? --
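For what it's worth, the core of such a bot is a small chain-following step. Here is a hedged Python sketch over a hypothetical redirect map (the page names are made up, and fetching the map from the wiki is left to the bot framework):

```python
# Given a mapping redirect -> target, retarget any redirect whose
# target is itself a redirect so it points at the final destination.
# The `seen` set guards against redirect loops.
def fix_double_redirects(redirects):
    fixed = {}
    for page, target in redirects.items():
        seen = {page}
        while target in redirects and target not in seen:
            seen.add(target)
            target = redirects[target]
        fixed[page] = target
    return fixed

print(fix_double_redirects({"A": "B", "B": "C", "C": "D"}))
```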
WP:COMIC noticeboard
Is it possible to have a bot written that would patrol subcategories of
move pages with "Ancient Greece" in title to "ancient Greece"; "Ancient Rome" to "ancient Rome"
There are many pages dealing with subjects in ancient Greece and Rome that erroneously capitalize "ancient". WP guidelines and editorial consensus say that "ancient" should be lowercase. It's easy enough to move individual pages, but fixing the redirects is a pain. Is this the kind of task that a bot can help with? If not, are there other ways to (semi-)automate the process? Thanks.
- Have you looked into 27 October 2006(UTC)
Not in detail, because I mostly use OS X. But I have some access to a Windows machine, so I'll check it out.
- Of course, you want to be able to capitalize if it's the beginning of a sentence, and possibly some other contexts a bot might not always recognize. Maybe there could be a special symbol, such as inserting any invisible comment between ancient and Rome, which signals the bot not to decapitalize in this case (as well as the bot being smart enough to recognize almost all beginnings of sentences). Just an idea. --7 November 2006(UTC)
Errr...this proposal speaks of moving pages. Article titles must begin with a capital letter, for technical reasons. Hence the link
- OIC. If the article has "Ancient" in a medial position, then this makes sense. Is that what was meant? 10 November 2006(UTC)
- Yes. My initial request was unclear, I see. I was interested in a less tedious way to make moves such as
- Gymnasium (Ancient Greece) -> Gymnasium (ancient Greece) and
- Art in ancient Greece
- There's a similar problem with the titles of articles dealing with ancient Rome, classical Greece/Rome, and other articles where a country/culture is modified by an adjective.
- Mistaken capitalization in the article text is also an issue, of course, but it wasn't what I was asking about. 11 November 2006(UTC)
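The medial-only rule is straightforward to express in code. A Python sketch (the position check keeps the mandatory initial capital intact; the sample titles are from the examples above):

```python
import re

# Lowercase "Ancient" before Greece/Rome only when it is not the
# first word of the title, since titles must begin with a capital.
def fix_title(title):
    return re.sub(r'\bAncient (Greece|Rome)\b',
                  lambda m: m.group(0) if m.start() == 0
                            else 'ancient ' + m.group(1),
                  title)

print(fix_title("Gymnasium (Ancient Greece)"))  # medial: fixed
print(fix_title("Ancient Greece"))              # initial: unchanged
```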
PDFlink bot
Could someone write a bot that'll convert the old
- I can't do the file size, but I'll convert the old style to the new style. —29 October 2006(UTC)
Memory Beta (aka Non-canon Star Trek Wiki)
Is it possible that a bot could be created or used that would be able to patrol all images on the wiki, and either add or replace the category with Category:Memory Beta images, as we have hundreds of images and it would be a mammoth task to do by hand. If so that would be fantastic, address for the wiki is Memory Beta Main Page. --The Doctor 11:32, 08 November 2006 (UTC)
- I could make a list of all the image names on your wiki, load them into AWB, and if they have a page here, I could add the category. 8 November 2006(UTC)
- That would be brilliant if possible. Thank You :-). --The Doctor 12:21, 08 November 2006 (UTC)
- Looks like there's something up with the wiki, I can't get AWB to connect to it, I'll have to think of a better way to get your images. 8 November 2006(UTC)
- OK, I have a list, and I have a way to isolate star trek images, can you create the category with a note saying it was populated by a bot and may have one or two false positives, and I'll do the categorizing as soon as I have a free computer 8 November 2006(UTC)
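The per-image edit itself is trivial; a hedged sketch of the idempotent category append (the category name is the one requested above, and the duplicate check is simplistic):

```python
# Append the category to an image description page unless it is
# already present, so re-running the bot is harmless.
def add_category(page_text, cat="Memory Beta images"):
    tag = "[[Category:%s]]" % cat
    if tag in page_text:
        return page_text
    return page_text.rstrip("\n") + "\n" + tag
```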
Userbox Bot
A lot of userboxes are being moved per the
- Sure, give me a list of changes, my bot is already allowed to do that. 9 November 2006(UTC)
- Many bots regularly do that (mine too). Just add the GUS template, it will appear in 9 November 2006(UTC)
Dead Playboy Playmates
Could someone run a bot through the Playboy Playmate articles to compile a list of the dead ones so that I can compare it to the list at
- Uhh...you want me to do what? With what? Define dead. We'll need a list of articles to check, a place to output to, and a way to define dead; adding them all to a category is probably the only way to do it with a bot. Still, bots, being bots, cannot determine whether something is dead unless you tell it how to. 10 November 2006(UTC)
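As the reply says, the only bot-workable definition of "dead" is a textual signal on the article itself. A sketch using death categories (the "NNNN deaths" pattern is just the common en.wiki convention, assumed here for illustration):

```python
import re

# Treat an article subject as dead if any of its categories looks
# like an "NNNN deaths" category.
DEATH_CAT = re.compile(r'^\d{4} deaths$')

def looks_dead(categories):
    return any(DEATH_CAT.match(c) for c in categories)

print(looks_dead(["1953 births", "2004 deaths"]))  # True
print(looks_dead(["1953 births"]))                 # False
```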
TABLE-to-DIV bot
I mentioned this on the Village Pump, but then I realized that this page existed: I've noticed that tables are used an awful lot everywhere on Wikipedia, even when using <div> tags would work just as well. I looked up
- no I don't think so. Best way to do this would probably be
- {|([^!\|]+)|} -> <div style="CSS">$1</div>
- Can I have a sample of a bad table and a good div for comparison?
- In addition, if anyone has any ideas for or against this, please say so. 10 November 2006(UTC)
- Divs can't be vertical-align: middle; in IE because it doesn't support display: table;. --10 November 2006(UTC)
One example is
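For the simplest case, the substitution sketched above can be restricted so it never touches a real data table. A hedged Python version (the CSS passthrough is omitted, and the IE vertical-align caveat noted above still applies):

```python
import re

# Convert only trivial single-cell layout tables {| ... |} to a div.
# Anything containing row (|-), header (!), or cell (|) markup is
# left alone, since those are real tables.
SIMPLE_TABLE = re.compile(r'\{\|([^!|]*)\|\}', re.DOTALL)

def table_to_div(wikitext):
    return SIMPLE_TABLE.sub(r'<div>\1</div>', wikitext)

print(table_to_div("{| some boxed text |}"))
```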
Very specific template-conversion request
Hello, I need a very specific change to be made to a number of very specific articles. For a list, see User:lensovet/Rail. What I need is as follows: for each line that reads
{{rail line|previous=[[Metropark (NJT station)|Metropark]]|route=[[Northeast Corridor Line]]|next=[[Linden (NJT station)|Linden]]|col=FF2400}}
to be converted to
{{NJT line|previous=Metropark|line=Northeast Corridor|next=Linden}}
that is:
- change template name from rail line to NJT line
- for previous and next parameters, remove piped link and keep only the part that is normally displayed
- if either parameter is not a piped link, remove it completely
- change route= to line=
- change Northeast Corridor Line to Northeast Corridor
- remove the col= parameter completely
please let me know when you make the change. thanks! —
- There are 10 articles. Hardly seems worth a bot. 12 November 2006(UTC)
- There will probably be at least 20-30 more, just with different lines. I wanted to make this easy and break up the job into each line, and also to get a feel if something like this is actually feasible to do with a bot. How many articles do you want, 100? It's repetitive work regardless. —12 November 2006(UTC)
- It's definitely possible, my best idea would be using WP:AWB:
- {{rail line|previous=\[\[[^|]+|([^\]]+)\]\]|route=([^\||[lL]ine])[lL]ine]]|next=\[\[[^|]+|([^\]]+)\]\]|col=FF2400}}
- {{NJT line|previous=[[$1]]|line=$2|next=[[$3]]}}
- which is incredibly scary and almost certainly wrong. but we can make it work :) - only thing is that I haven't a clue what to do about the third request. I'll think about that. 12 November 2006(UTC)
- Thanks. Let me know if you can come up with something for the third request. also, i don't have access to a wintel box, so ideally i'd need a bot account to actually perform the edits. —13 November 2006(UTC)
- So, what's new here? —18 November 2006(UTC)
- I can do the edits, I'd like to do a regex for each change, and we can to request for approval - I'll file it, but you might want to comment on it just to make sure we know what we're doing, I'll post when it's done. 18 November 2006(UTC)
- Not worth filing a bot approval for 40 edits, just do it from your user account. 20 November 2006(GMT).
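A more readable alternative to the single scary regex is to match the template once and rebuild it. A Python sketch of the conversion under the assumptions given above (previous/next are piped links and route is a [[... Line]] link; the non-piped-link removal from point 3 is left out, as in the thread):

```python
import re

RAIL = re.compile(
    r'\{\{rail line'
    r'\|previous=\[\[[^|\]]+\|([^\]]+)\]\]'
    r'\|route=\[\[(.*?)(?: Line)?\]\]'
    r'\|next=\[\[[^|\]]+\|([^\]]+)\]\]'
    r'\|col=[0-9A-Fa-f]{6}\}\}')

def convert(template):
    m = RAIL.match(template)
    if not m:
        return template  # leave anything unexpected for a human
    return '{{NJT line|previous=%s|line=%s|next=%s}}' % m.groups()

print(convert('{{rail line|previous=[[Metropark (NJT station)|Metropark]]'
              '|route=[[Northeast Corridor Line]]'
              '|next=[[Linden (NJT station)|Linden]]|col=FF2400}}'))
```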
Wikipedia random copyedit bot
I was thinking last night that it would be fun to have a bot that randomly generated a page in certain areas for the editor to edit. Like folks interested in botany or biology could get a random biology or botany page, then copyedit it. If Wikipedians did this for a year in all major areas, many of our crummiest articles, appearance-wise, would get cleaned up.
I think that there are numerous articles on Wikipedia that need copyedits. I attempted to do this in the [Herat] article and the [Afghanistan] articles, but got sucked into a vicious flame war--these articles need serious work. However, I moved on to using the Random Article generator to find articles that could use copyediting, leading me to copyedit obscure pages like [Pre-dreadnought]. About half of the articles that come up have to do with Anime or television shows it seems, and some are in areas I know nothing about, but sometimes I find something interesting that needs work.
I can find articles on list, fine, but adding a little fun to it, and making it an all-Wikipedia project could seriously improve many Wikipedia articles. Editors would be encouraged to add citation needed tags, categories, and just do the rudimentary copyedit work that really makes Wikipedia viable. By allowing folks to get random articles in selected categories people would work on articles in their areas.
One of the best things about Wikipedia is writing a good article, then coming back the next day and finding someone else has spit-shined for you. There are a lot of articles that have some useful information but are rather sorry in appearance. Devoting some time to cleaning up these articles would, imo, greatly improve Wikipedia. Adding a little twist for those seeking something to do would make it a bit more interesting.
Please someone write this bot. Oh, I would call it the KPBot (for
- You may want to check out 15 November 2006(UTC)
main page protection bot
As we know, images and templates on the main page are changed on a daily basis. To prevent vandalism to Wikipedia's most important page, these images and templates must be fully protected. I'm sure that many administrators will agree that this task can be pretty tedious. Also, it's always possible that something will be left unprotected by accident. After all, we're all humans! :)
Therefore, I'm proposing a bot that will automate the following tasks:
- protect any images and templates that will be used on the main page (one day beforehand)
- unprotect any inactive images and templates (many inactive templates are still protected, despite being off the main page)
I do have one concern, though. If administrators become too dependent on the bot, some images or templates may be left unprotected if the bot suffers a downtime. --
- We don't protect stuff on the main page:
- "Important Note: When a page is particularly high profile, either because it is linked off the main page, or because it has recently received a prominent link from offsite, it will often become a target for vandalism. It is rarely appropriate to protect pages in this case. Instead, consider adding them to your watchlist, and reverting vandalism yourself."
- 22 November 2006(UTC)
- But the main page policy does require protection for all images displayed on it (for obvious reasons). The protection policies only prevent the protection of articles linked from the main page. 22 November 2006(UTC)
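Step one of such a bot is just enumerating what tomorrow's main page will transclude. A hedged sketch of that scan (the markup patterns are simplified and the sample text is invented; the actual protect/unprotect calls would go through an admin bot framework):

```python
import re

# Collect image links and template transclusions from wikitext, as
# the candidate list for protection (and later unprotection).
def embedded_titles(wikitext):
    images = re.findall(r'\[\[(Image:[^\]|]+)', wikitext)
    templates = re.findall(r'\{\{([^{}|]+)', wikitext)
    return images + ['Template:' + t.strip() for t in templates]

print(embedded_titles("{{In the news}} [[Image:Foo.jpg|thumb]]"))
```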
Template:Verylong
It'd be nice to have a bot automatically add
- I can do that automatically with 23 November 2006(UTC)
- I don't have a quick answer. 23 November 2006(UTC)
- OK, I can do that - 16, 32, 48K maybe - I'll see what I can do 23 November 2006(UTC)
- Whoa! Wait up. We have featured articles such as 23 November 2006(UTC)
- Well, I can't do anything yet 23 November 2006(UTC)
- OK, I just wanted to make sure you didn't start yet :-) —23 November 2006(UTC)
- I'm not entirely sure I agree with Mets that .999... isn't too long, but even if it isn't, the template can easily be removed from articles that it doesn't suit. If you think that the 32k threshold would provide too many inappropriate tags then we can increase the number to something where the ratio is much better. I have no idea how many very large articles there are but I feel reasonably confident that the vast majority of articles over 80k should be trimmed. Also keep in mind that a key reason to keep articles small is not just for readability, it's for technical reasons, and technical reasons don't care about your good reason for being too big. 24 November 2006(UTC)
- Exactly what technical reason? My understanding has always been that anything over 32K can be problematic for users of certain browsers, but I've never heard of anything that would rise to the level of a true technical issue, i.e., a software or server problem. We are first and foremost an encyclopedia, and being so, the good reason for being too big (vis-a-vis being a good encyclopedia article) does and will always trump technical issues. Something tells me that if asked, Wikimedia's CTO Brion Vibber will tell us that there is absolutely no technical reason to start cutting featured articles back to stubs. 24 November 2006(UTC)
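If the threshold settles at something like 80k, the bot's core test is short. A Python sketch (the threshold and the existing-tag check are placeholders pending the discussion above):

```python
# Tag pages whose wikitext exceeds the threshold, skipping pages
# already tagged; size is measured in bytes, like the edit screen.
THRESHOLD = 80 * 1024

def needs_verylong_tag(wikitext, threshold=THRESHOLD):
    return (len(wikitext.encode('utf-8')) > threshold
            and '{{verylong' not in wikitext.lower())

def tag(wikitext):
    if needs_verylong_tag(wikitext):
        return '{{Verylong}}\n' + wikitext
    return wikitext
```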
WebCite (External Link Archiving) Bot
"On Wikipedia, and other Wiki-based websites broken external links still present a maintenance problem."
linkrot
I am hoping someone takes on the task of writing a WebCite bot, i.e. a bot which automatically feeds all cited URLs in Wikipedia to WebCite, which is a web archive similar to the Internet Archive, but with enhanced features such as on-demand archiving, and with a focus on scholarly material (as opposed to the IA's shotgun-crawling "archive-all-you-can" approach). WebCite creates a snapshot (cached copy) of the cited URL, thereby archiving/preserving the cited webpage or web document (if caching was successful, a link to the snapshot should be added to the original URL on Wikipedia). WebCite takes care of robot exclusion standards, no-cache tags, etc. on the cited page. If caching was successful, WebCite hands back a WebCite ID, which should be added to the original link (for an example of how this could look, see the reference list at the bottom of the article http://www.jmir.org/2006/4/e27/ - all cited non-journal webpages also have a link to the WebCite snapshot). The benefit would be that all cited material will be automatically preserved, and link rot, 404s or changes in the cited webpages will no longer be a problem. WebCite has an XML-based webservice which allows clients to communicate the caching request as well as to receive the WebCite ID if the caching was successful; see the technical guide http://www.webcitation.org/doc/WebCiteBestPracticesGuide.pdf User:Eysen 19:45, 01 Dec 2006 (UTC)
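The first half of such a bot is extracting the cited URLs from article wikitext. A minimal sketch (the URL pattern is rough, and the actual submission to WebCite's XML webservice is left to the technical guide linked above):

```python
import re

# Pull external URLs out of wikitext so they can be submitted to an
# on-demand archiving service.
URL_RE = re.compile(r'https?://[^\s\]<>|}]+')

def cited_urls(wikitext):
    return sorted(set(URL_RE.findall(wikitext)))

print(cited_urls("See [http://www.jmir.org/2006/4/e27/ the article] "
                 "and <ref>http://example.org/page</ref>."))
```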
I wanted a bot to do the following :
- take a page of the form Wikipedia:Translation/XXX in Category:Translation sub-pages
- edit the article XXX
- add at the top of this article the template {{Wikipedia:Translation/XXX}} if it doesn't exist
Thanks,
- oh that's easy - couple of simple regexes and you're done - let me get a full database dump and I'll see what is to be done:
But I just noticed, there is something else to take care of.
Before, we used {{
But there is the problem of existing translation requests.
The bot should replace the obsolete templates in the talk pages Talk:XXX (the full list is here) with
- {{Wikipedia:Translation/XXX}} if [[Wikipedia:Translation/XXX]] exists
- {{Translation request from German (old)}} in the other cases (we do not want to port all the existing templates to the new system because it is a lot of work).
Is it doable?
- That's not only doable, but as a template change, my bot can already do that - you want me to change {{Translation request}} and {{Translation request from German}} both to {{Translation}}?? 18 November 2006(UTC)
No, this is not what I want.
To make it simpler:
I want you to replace
- {{Translation request}} with {{Translation request (old)}}
- {{Translation request from German}} with {{Translation request from German (old)}}
The thing is that, because I made a redirect, the talk pages with {{Translation request}} are not listed in Special:Whatlinkshere/Template:Translation request as you could expect, but in Special:Whatlinkshere/Template:Translation
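The replacement rule just spelled out can be captured in a few lines. A Python sketch (checking whether the subpage exists is assumed to be handled by the bot framework):

```python
# Decide what the obsolete template on Talk:XXX should become: the
# new per-article subpage template when the subpage exists, else the
# "(old)" fallback.
def replacement_for(article, subpage_exists):
    if subpage_exists:
        return '{{Wikipedia:Translation/%s}}' % article
    return '{{Translation request from German (old)}}'
```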
Template:Verylong
- Beyond the admittedly uncommon 32k issue, people with slow internet connections will find it difficult to load very large pages, although I admit this is affected much more strongly by included images than by text (though there is certainly at least a weak correlation). I think you're sensationalizing a bit when you talk about cutting featured articles to stubs. All technical issues aside, keeping articles of reasonable size improves readability and makes them more manageable; keep in mind I've not suggested removing content at any point, and the tag I'm suggesting we have a bot add doesn't encourage deleting material either. I might add, this is not just my opinion, refer to 4 December 2006(UTC)
fiddler bot
the list of fiddlers is getting a little unwieldy. right now it duplicates itself completely, listing all the names first alphabetically and then by style - nice, convenient, but long. the plan is to split it into two articles. obviously there's a problem: how do we make sure people put their additions on both pages? already people aren't adding them to both lists.
seems like there are a couple ways this could be done with a bot, though I haven't read up on how they work and what they're capable of. conceptually most simply a bot could copy recent additions from the one page to the other - but it would have to check to see if the editor had edited both already. if bots can get around edit protection we could protect the list-by-style and have the bot check for changes to the list by name (take a look at the page - each name in the alphabetical list is followed by the styles they play... could a bot find and parse those parenthetical strings, and copy the name into the appropriate part(s) of a protected list-by-style article?) --Eitch 19:21, 14 November 2006 (UTC)
- IMHO categories are the way to do this. 20 November 2006(GMT).
- Absolutely. Is there an existing bot that can tag the existing entries in the list? What about subcategories? Regards, Ben Aveling 00:53, 27 November 2006 (UTC)
- My bot can turn lists into cats. ) 00:57, 27 November 2006 (UTC)
- Whaddaya mean?^v^ [[User:orngjce223|my home page]] [[Talk:orngjce223|my talk page]] 21:12, 29 November 2006 (UTC)
- This sounds suspiciously similar to a discussion I've been having over on the Simpsons project, for which I've set about developing ListGenBot - see User:Mortice/ListGenBot for a spec and let me know if it would help, or if it needs expanding in order to be useful for you --Mortice21:24, 29 November 2006 (UTC)
- Pretty cool, Mortice, but I think Rich and Ben are right that -in the case of the fiddlers- categories are the way to go. Ben and ß: the only problem I see with implementing categories is how to deal with redlink articles. I guess they could go in "Articles that have yet to be written" subcategories... is there some kind of standard out there already for dealing with this? --Eitch 23:05, 29 November 2006 (UTC)
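Parsing those parenthetical style strings is indeed bot-friendly. A sketch of the extraction step (the entry format is assumed from the description above, and the sample name is invented):

```python
import re

# Match "* [[Name]] (style, style)" or "* [[Name|display]] (...)"
# and return the name plus the list of styles, so an entry can be
# copied into per-style lists or turned into categories.
ENTRY_RE = re.compile(r'^\*\s*\[\[([^\]|]+)(?:\|[^\]]*)?\]\]\s*\(([^)]+)\)')

def parse_entry(line):
    m = ENTRY_RE.match(line)
    if not m:
        return None
    styles = [s.strip() for s in m.group(2).split(',')]
    return m.group(1), styles

print(parse_entry("* [[Jane Doe]] (Scottish, Celtic)"))
```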
Clean up Ganeshbot grammar
Someone should write a bot to clean up Ganeshbot's bad grammar. Kaldari 06:56, 15 November 2006 (UTC)
- seems useless, can't ganeshbot clean up his own grammar? ST47Talk 11:39, 15 November 2006 (UTC)
- apparently not. why do you say this is useless? it would certainly be easier than me fixing thousands of articles by hand. Kaldari 02:07, 16 November 2006 (UTC)
- I am not sure this can be automated (or at least, it's not automatable using simple regular expression replacement, as done by most bots.) Tizio 10:35, 16 November 2006 (UTC)
Adding articles to law enforcement wikiproject
Hello, would it be possible to have a bot created that would add the Law enforcement wikiproject header ({{Law enforcement}}) to articles that are in the Law Enforcement category? (here). Many thanks.--SGGH 14:37, 22 November 2006 (UTC)
- I'll take care of it shortly. ) 14:56, 22 November 2006 (UTC)
- Thankyou.--SGGH 15:40, 22 November 2006 (UTC)
- What subcats do you want tagged? ) 16:25, 22 November 2006 (UTC)
- If I'm understanding you correctly, then all of the subcats apart from Unarmed people shot by police, People shot dead by police and Vigilantes.
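With the exclusions above, the tagging rule is short. A hedged sketch (the existing-banner check is simplistic, and category traversal is left to the bot framework):

```python
# Tag article talk pages in the category tree, skipping the three
# excluded subcategories and pages that already carry the banner.
EXCLUDED = {"Unarmed people shot by police",
            "People shot dead by police",
            "Vigilantes"}

def should_tag(category, talk_text):
    return category not in EXCLUDED and '{{Law enforcement' not in talk_text

def tag_talk(talk_text):
    return '{{Law enforcement}}\n' + talk_text
```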
Transitioning from Template:CopyrightedFreeUse
- I'll start on this ) 17:55, 24 November 2006 (UTC)
- See User talk:Betacommand/20081201#Bot question, thanks/wangi 22:41, 24 November 2006 (UTC)
- It looks like some people have "Free Use" confused with "Fair Use". Regardless, this bot is still doing the proper thing. The images that have been tagged by people incorrectly will have to be fixed by hand regardless. Kaldari 04:53, 25 November 2006 (UTC)
- The sooner we can get Template:CopyrightedFreeUse deleted the better. It seems to be the source of lots of confusion and redundancy. Kaldari04:55, 25 November 2006 (UTC)
Bot for country code
Could someone please create a bot which would automatically replace Template:SER with Template:SRB. Even though SRB is the official ISO 3166-1 3-letter country code and an abbreviation for Serbia (from Srbija), many people still think this code is SER, and having a wrong template doesn't help either. It would be nice to have an automatic bot to correct future mistakes. Avala 14:08, 27 November 2006 (UTC)
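The swap itself is a simple substitution. A Python sketch that preserves parameters and avoids false matches on longer template names:

```python
import re

# Replace {{SER}} (with or without parameters) with {{SRB}}; the
# alternation ensures something like {{SERBIA}} is not touched.
def swap_ser(wikitext):
    return re.sub(r'\{\{\s*SER(\||\}\})', r'{{SRB\1', wikitext)

print(swap_ser("born in {{SER}}, plays for {{SER|football}}"))
```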
Generation of list pages
I'm tinkering with a proposal I've made which involves using categories to replace a list page (the proposal is Wikipedia:WikiProject The Simpsons/Proposal for managing song lists on Simpsons episodes) but I wonder if this is something that could be well managed by a bot.
The requirement would be to generate a page (or I guess edit a delimited section in the middle of a page) based on all the pages listed in a different category. Perhaps the extended requirement would be to find all the subcategories of a given category and use them to generate a list of lists. For instance, 'generate a page with sections named from subcategories of the category "songs on the simpsons", where each section has a list formed from the names of the pages in those subcategories (which would be the names of songs)'.
Or perhaps to take the text from a given section from each page of a list of pages (identified from being members of a category) and generate one page containing all those sections. For instance 'copy all the sections called "songs" from all pages in the category "simpsons episodes" to the "songs" section of the page "list of songs on the simpsons" page'.
This all sounds quite fiddly, but I can imagine someone may have made a generic bot that can be fed with parameters to do this sort of processing. Is there anything out there to do something like this, or any keen on developing one? Please feel free to chat on my talk page if you want to ask questions or suggest solutions --Mortice 18:10, 27 November 2006 (UTC)
- Sounds feasible, but I have a few questions: are the songs in their own cats? If not, do the page names all follow a pattern? Could this just be one master list sorted by name? ) 18:51, 27 November 2006 (UTC)
- Thanks for the feedback. There are, as suggested above, a number of options for doing this. The category based way would work something like this: category 'songs in the simpsons' contains subcategory 'songs in the simpsons episode 13'. Song 'happy birthday' is in category 'songs in the simpsons episode 13'. The bot should generate a page (or insert into a page) section "episode 13" with a bullet list including 'happy birthday'. The songs are all in 'episode' based categories (with structured names) which are all subcategories of the supercategory 'songs in the simpsons' so it should be all auto-generatable.
- Hope that answers your question. The alternative might be easier (and preferable for editors?) - extract sections from the 'episode' pages into a big page, and I'd think that would make for a more generic bot (if you tend to do bots that way). As for your question about 'all just one list', personally I'd like the 'songs' page to have two lists, one alphabetical by song name and one sectioned by episode name, but that may be considered 'overkill'.
- Even if the bot can be developed, this suggestion has to pass approval by the people who maintain the relevant pages, I'm just trying to kick some ideas around to see if one sticks --Mortice 20:25, 27 November 2006 (UTC)
- Update - I've been having a look at the bot API and it looks quite straightforward so I might 'have a go' at this (I'm a coder by trade and have a 24/7 connected server), although I know it will require 'approval' and I'm sure I'll need advice about writing an industrial strength bot --Mortice 00:40, 28 November 2006 (UTC)
- I've written a spec for this bot - see User:Mortice/ListGenBot, which I'm sure would be of use for all sorts of pages. So much so that I'm sure someone must have developed it already - anyone confirm or deny that? And if I develop it I'm likely to have a few questions about producing a pywikipedia based bot - anyone volunteer to be a sounding board for me? I'm aware of the requirement to get it approved first --Mortice 21:56, 28 November 2006 (UTC)
I'm currently developing this bot --Mortice 12:21, 29 November 2006 (UTC)
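As a rough sketch of the category-walking idea in this thread: the fetching side would be framework work (pywikipedia is mentioned above), so this only shows the wikitext-building step, with the category tree passed in as a plain dict. All category and song names are illustrative.

```python
# Sketch of the list-generation step described above. Fetching category
# members would be done with a framework such as pywikipedia; here the
# category structure is passed in as a plain dict so the wikitext-building
# logic can be shown on its own. All names are illustrative.

def build_list_page(supercategory, subcats):
    """Build wikitext: one section per subcategory, one bullet per page."""
    lines = []
    for subcat in sorted(subcats):
        # Section header named after the subcategory (supercategory prefix stripped).
        section = subcat.replace(supercategory + " ", "").capitalize()
        lines.append("== %s ==" % section)
        for page in sorted(subcats[subcat]):
            lines.append("* [[%s]]" % page)
        lines.append("")
    return "\n".join(lines)

cats = {
    "songs in the simpsons episode 13": ["Happy Birthday"],
    "songs in the simpsons episode 14": ["Jingle Bells", "Amazing Grace"],
}
text = build_list_page("songs in the simpsons", cats)
```

The same skeleton would serve the alphabetical-by-song variant by flattening the dict's values into one sorted bullet list instead of sectioning by subcategory.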
Template moving
Category:Main pages with misplaced talk page templates contains pages with a template that belongs on the page's talk page instead. Can we employ a bot to move these? (Radiant) 13:28, 29 November 2006 (UTC)
- Yet again I should be able to do this if you explain the function a little better, ) 14:50, 29 November 2006 (UTC)
- Thanks. Templates in this category should be on an article's talk page rather than the article itself. The category I mentioned above contains articles that contain at least one of those templates. The bot should periodically check this category, remove the relevant template(s) (and any parameters) and add them to the talk page instead. (Radiant) 14:58, 29 November 2006 (UTC)
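A minimal sketch of the move step described above, assuming the misplaced template carries no nested templates; scanning the category and saving both pages would be handled by the bot framework. The template name is illustrative.

```python
import re

# Illustrative sketch of the move step: cut the misplaced template (with
# any parameters, but not nested templates) out of the article text so it
# can be prepended to the talk page. A production bot would need to handle
# nested braces and check the template against the category's list.

def move_template(article_text, talk_text, template_name):
    pattern = re.compile(r"\{\{\s*%s\s*(\|[^{}]*)?\}\}\n?" % re.escape(template_name))
    match = pattern.search(article_text)
    if not match:
        return article_text, talk_text  # nothing to move
    new_article = pattern.sub("", article_text, count=1)
    new_talk = match.group(0).rstrip("\n") + "\n" + talk_text
    return new_article, new_talk

art, talk = move_template(
    "{{WikiProject Example|class=B}}\nArticle body.", "", "WikiProject Example")
```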
In use
And, Category:Articles actively undergoing construction and Category:Articles actively undergoing a major edit are supposed to be temporary categories, but have grown very large. Perhaps a bot could depopulate them weekly? (Radiant) 13:44, 29 November 2006 (UTC)
- Yes, I can and will have my bot empty the cat. Is there a day that you would suggest doing this? ) 14:44, 29 November 2006 (UTC)
- Not particularly. Note that these categories are generated by {{Underconstruction}}. Thanks! (Radiant) 14:58, 29 November 2006 (UTC)
- Before I go and have my bot blocked for this, (as a malfunctioning bot) what should I use for the edit summary? ) 15:06, 29 November 2006 (UTC)
- How about "(bot) Remove inuse tag, no activity for three days"? Or maybe five days. Or I could think of some pun to go with that :) (Radiant) 15:25, 29 November 2006 (UTC)
I try to go through this and weed out the forgotten ones. It is a bit frustrating because some people demand the right to leave up the tag for long periods of time, and aren't shy about complaining. Also it's transcluded on a bunch of instructional pages, like
Spam Bot
How do I make a bot to message Wikiproject Gold Coast members? If possible, can someone make it for me. Thank you -- Nathannoblet 07:49, 1 December 2006 (UTC)
- Can do this. But I need what you want sent out and to who. ) 13:08, 1 December 2006 (UTC)
Bot for future PermissionRequested template
I have opened suggestions for creating a template for image pages that warns admins that a user has contacted the copyright owner requesting the image's use on Wikipedia, and asks for it not to be deleted while a response is expected. The discussion can be found here.
In order to work, it will need a bot that checks the tagging date, removes images that have been tagged for more than one week, and creates a relevant category of unlicensed images each day, the same way it is done in {{
I don't have any expertise in creating bots, so could someone please create a bot that can do this? ~ ► Wykebjs ◄ (userpage | talk) 18:05, 1 December 2006 (UTC)
- ) 20:02, 1 December 2006 (UTC)
Infobox change
Also, Category:Pages needing an infobox conversion and Category:Needs album infobox conversion contain a template that should be switched to another template. Would a bot be feasible here? (Radiant) 13:33, 29 November 2006 (UTC)
- Likewise I can set my bot up to do this if you give some more detailed information on exactly what is changing ) 14:49, 29 November 2006 (UTC)
- This gives some related info. I'll ask around a bit to get the specifics. (Radiant) 14:58, 29 November 2006 (UTC)
New articles bot
There are quite a few lists of new articles related to specific subjects used by various WikiProjects. They all work 'manually' - with dedicated users adding articles they find to the lists - but this could all easily be automated (botized). We need a bot that would: 1) look at a specified forum (i.e. Portal:Poland/New article announcements); 2) look at a specified section to find the last reported article (these differ, as the various wikiprojects have no unified structure, so some may have 'November', others 'November 1-15', and so on); 3) look at the 'what links here' of given article(s) - for example, Poland, Polish, Polish language for Portal:Poland - see what new articles have been added to the 'what links here', and generate a report in the above section in the format *[[article name]] created by [[User:Username]] on date. This bot would save much time now spent by dozens of dedicated editors who scour the 'what links here' lists instead of doing more constructive work. Issues to consider: the articles in question (countries) have many pages of 'what links here'; to speed up the process, the bot may want to look from the end to find the most recent article added, thus skipping 99% of the links - but this may skip checking redirects. I don't know how long it would take to analyze the entire page to find and analyze redirects, but once they are found they can be added to the main 'check' list and the bot wouldn't have to look through the main article for them again, so it would be useful to have two options: a normal scan (from the end to the last reported article) and a complete scan (from the end to the last reported article, and then to the beginning, generating only a list of redirects). Additional features which I doubt would be included (wishlist): add the length of the article, tags, lead; scan for new pictures, categories and stubs.-- Piotr Konieczny aka Prokonsul Piotrus | talk 18:03, 2 December 2006 (UTC)
- I support this idea. If the bot doesn't work properly, it can be disabled. --Ineffable3000 18:58, 2 December 2006 (UTC)
- Support. It would definitely save a lot of time for users. AQu01rius (Talk) 00:19, 3 December 2006 (UTC)
- Support. These lists could do with a bit of a hand. It'd be better if it had some way of working from newpages instead, but whatlinkshere would at least get it kicked off. Rebecca 09:40, 3 December 2006 (UTC)
- This is an interesting idea. I would be happy to get a bot-generated list once per day of articles that might be related to Germany; I would check them by hand and announce at the relevant announcement page myself. Whether the bot should work from Special:Newpages or from some Whatlinksheres or both should be tried out experimentally, whatever works best. Kusma (討論) 16:39, 4 December 2006 (UTC)
- Seems worth trying the experiment. - Jmabel | Talk 07:51, 5 December 2006 (UTC)
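For what it's worth, the reporting step proposed at the top of this thread might look like the following sketch: walk the 'what links here' results from the newest end, stop at the last article already reported, and emit lines in the requested format. Fetching the link list itself would be framework work; all names and dates here are invented.

```python
# Sketch of the reporting step: scan newest-first, stop at the last
# article already reported in a previous run, and emit report lines in
# the format requested above. All titles/users/dates are illustrative.

def new_article_report(links_newest_first, last_reported):
    lines = []
    for title, user, date in links_newest_first:
        if title == last_reported:
            break  # everything older was reported in a previous run
        lines.append("*[[%s]] created by [[User:%s]] on %s" % (title, user, date))
    return lines

report = new_article_report(
    [("Gdansk shipyard", "Alice", "2 December 2006"),
     ("Polish cuisine", "Bob", "1 December 2006"),
     ("Warsaw Uprising", "Carol", "30 November 2006")],
    last_reported="Warsaw Uprising",
)
```

The "complete scan" option discussed above would simply continue past the break point, collecting only redirects.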
Mass renaming
Per
- {{Africa in topic}} to {{Africa topic}}
- {{Asia in topic}} to {{Asia topic}}
- {{Europe in topic}} to {{Europe topic}}
- {{North America in topic}} to {{North America topic}}
- {{Oceania in topic}} to {{Oceania topic}}
- {{South America in topic}} to {{South America topic}}
Thanks! David Kernow (talk) 10:22, 3 December 2006 (UTC)
- I don't think there's really a need for that. It wastes a lot of resources making all those edits, and the templates redirect to each other anyway. —Mets501 (talk) 21:32, 4 December 2006 (UTC)
Template substitution
Templates {{
20:10, 5 December 2006 (UTC)
- I can do this with AWB, though can I have a link that says they must be substed? ST47Talk 20:56, 5 December 2006 (UTC)
- Wikipedia:Template namespace. Chris cheese whine 20:59, 5 December 2006 (UTC)
Country and Cities Pages
Hi, I'm trying to get some country and city Wikipedia pages for use on a Google Maps travel site. The code is written in C# and works OK for other sites. On Wikipedia I get a 403 error when I read in a page. Do I need to register my site's process as a bot, or could I use an existing bot to get the pages?
please
- Our robots.txt blocks all automated crawlers. I would recommend that you either [http://download.wikimedia.org download a database dump] or use Google's cache. ST47Talk 01:51, 7 December 2006 (UTC)
Thanks I'll try those --Seewhere.net 02:08, 7 December 2006 (UTC)
WebCite (External Link Archiving) Bot
"On Wikipedia, and other Wiki-based websites, broken external links (link rot) still present a maintenance problem."
I am hoping someone takes on the task of writing a WebCite bot, i.e. a bot which automatically feeds all cited URLs in Wikipedia to WebCite, which is a web archive similar to the Internet Archive, but with enhanced features such as on-demand archiving and a focus on scholarly material (as opposed to the IA's shotgun-crawling "archive-all-you-can" approach). WebCite creates a snapshot (cached copy) of the cited URL, thereby archiving/preserving the cited webpage or web document (if caching was successful, the link to the snapshot should be added to the original URL on Wikipedia). WebCite takes care of robot exclusion standards, no-cache tags, etc. on the cited page. If caching was successful, WebCite hands back a WebCite ID, which should be added to the original link (for an example of how this could look, see the examples below, and also see the reference list at the bottom of the article http://www.jmir.org/2006/4/e27/ - all cited non-journal webpages also have a link to the WebCite snapshot). The benefit would be that all cited material would be automatically preserved, and link rot, 404s or changes in the cited webpages would no longer be a problem. WebCite has an XML-based webservice which allows a client to communicate the caching request as well as to receive the WebCite ID if the caching was successful; see the technical guide at http://www.webcitation.org/doc/WebCiteBestPracticesGuide.pdf User:Eysen 19:45, 04 December 2006 (UTC)
- Webcite usage is under discussion at 02:21, 5 December 2006 (UTC)
- I have seen a bot dealing with 404 broken links running on the Polish Wikipedia. I can't remember the bot's name, but you can ask at pl:Wikipedia:Boty.-- Piotr Konieczny aka Prokonsul Piotrus | talk 05:34, 5 December 2006 (UTC)
- The idea of WebCite is to prevent 404s in the first place by archiving the cited webpage immediately after it has been cited, i.e. archiving it prospectively and replacing the original URL with a link to the cited snapshot (in contrast, I assume that the Polish bot merely deletes or flags broken links). BTW - in the discussions on WebCite I have seen some misguided comments on copyright concerns; on that note, one should stress that caching on the Internet is frequently done and does not constitute a copyright violation if no-cache tags and robot exclusion standards are honored (see also the WebCite FAQ, which cites a recent lawsuit against Google, which Google won). While retrospective "mass replacement" makes little sense, at least cited URLs in new articles should be webcited as soon as possible. I am still hoping for a volunteer to write an experimental bot which hands over new Wikipedia submissions to WebCite and replaces newly cited URLs with WebCite links, or at least adds a WebCite link to the original reference (for how to do this, including explanations of the XML-based webservice, see the WebCite FAQ). Below, I have given some examples of how this could look. --Eysen 19:22, 6 December 2006 (UTC)
--snip--
It is proposed to develop a bot which - using the WebCite webservice - changes a reference (or even "naked" URLs) as follows:
Replace a reference like:
- Plunkett, John. "Sorrell accuses Murdoch of panic buying", The Guardian, October 27, 2005, retrieved October 27, 2005.
with a reference like this:
- Plunkett, John. "Sorrell accuses Murdoch of panic buying", The Guardian, October 27, 2005, retrieved October 27, 2005, WebCited December 4th, 2006.
or this (in addition to the WebCite URL, the original URL might be given):
- Plunkett, John. "Sorrell accuses Murdoch of panic buying", The Guardian, October 27, 2005, retrieved October 27, 2005, original URL: http://media.guardian.co.uk/site/story/0,14173,1601858,00.html, WebCited December 4th, 2006.
Alternatively, the cited URL can also be retained as part of the link to webcitation, to keep the cited URL explicit and to allow easy reverting to the original URL should this be desired:
- Plunkett, John. "Sorrell accuses Murdoch of panic buying", The Guardian, October 27, 2005, retrieved October 27, 2005, WebCited December 4th, 2006, archived URL: http://www.webcitation.org/query?url=media.guardian.co.uk/site/story/0,14173,1601858,00.html&date=2006-12-04
--snap--
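A small sketch of the URL rewriting shown in the last example above. The real bot would first submit the cited URL to WebCite's XML webservice and rewrite the reference only after caching succeeds; this only constructs the query-URL form used in that example.

```python
# Sketch only: build the webcitation.org query URL shown in the examples
# above. The actual archiving request goes through WebCite's XML
# webservice first; the URL form here simply mirrors the last example.

def webcite_query_url(cited_url, archive_date):
    bare = cited_url.replace("http://", "", 1)  # the example omits the scheme
    return "http://www.webcitation.org/query?url=%s&date=%s" % (bare, archive_date)

url = webcite_query_url(
    "http://media.guardian.co.uk/site/story/0,14173,1601858,00.html", "2006-12-04")
```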
- As a lowly peasant of wikipedia only, I think this is a great idea - PocklingtonDan 22:06, 6 December 2006 (UTC)
- Great - any volunteers to write a beta-bot to experiment with this?--Eysen 17:21, 7 December 2006 (UTC)
WP:Arch request
Hello again, I was wondering if one of you lovely people could help us out again - we've set up an Assessment department and would like {{Architecture|class=stub}} added to all of the stubs currently listed at Wikipedia:WikiProject Architecture/Stub categories, starting with those articles with {{architecture-stub}} and {{architect-stub}} tags. Cheers. --Mcginnly | Natter 12:06, 7 December 2006 (UTC)
- I'll take care of that ) 14:19, 7 December 2006 (UTC)
Idea for a bot - worthwhile or not?
(This isn't a bot request per se, but an "is this worth doing" post - I can write the bot myself if people think it's worth doing.)
I notice that one of the items permanently on the maintenance list is the Wikipedia:Cleanup list and that the number of articles needing cleanup seems to be increasing rather than decreasing. I had an idea for a bot that might help with this problem. I could write this bot myself (already have one bot on trial) but wanted to get people's ideas for whether it was worthwhile, comments etc etc, before I started on it.
Brief scope as I see it now (amenable to change): the bot would be manually run or automatically run on a (e.g. weekly) basis. It would trawl Category:All_pages_needing_cleanup and find any new additions since its last trawl. It would visit each new addition and pull a list of contributors. It would then leave a message on the talk page of (every contributor) or (last 10 contributors) or (article starter) or (contributors with 10+ edits) or (whatever), notifying them that the article is in need of cleanup and listing tips for how they could help to achieve this, etc.
What do you think? Worthwhile? Ideas? Comments? - PocklingtonDan 17:44, 7 December 2006 (UTC)
- Sounds like a nag-bot, though if it's only weekly, I don't have a complaint about that. I'd be willing to help out with this if you decide to go ahead with it - for example, AWB can be used to generate a list and to find new additions to the list (it can compare two lists of articles). AWB can also probably be used to find the last 10 people - it has support for C# modules. I'd have to think about that one. But yeah, it is a good idea! ST47Talk 19:04, 7 December 2006 (UTC)
- It would be a nag-bot essentially, yes. I'm not sure from your wording whether or not you're suggesting there is generally a policy against the use of nag-bots? It just seems that the articles-needing-cleanup list is getting bigger and bigger and, since bots cannot effectively clean up articles, the next best thing would be to run a bot to try and prod users into cleaning up articles. Can I get some more input on this from multiple people before I go ahead? Which set of users do you think it would be fair to "nag"? Perhaps the last 0-20 editors plus the article originator? Editors with the top 10 number of edits for that article? The top 10 most-editing editors from other articles in the category? I want to get a good impression of whether people think the general idea is a good one, and then hammer out a more specific scope/specification before I start work on the bot. Cheers - PocklingtonDan 19:32, 7 December 2006 (UTC)
- If the talk page has a WikiProject tag, could you have the bot flag the attention parameter instead? This will place the article in "X articles needing attention" for the specific WikiProject (see for example Category:Hawaii articles needing attention). Members of the projects are supposed to be watching these cats, so there's no reason to contact anyone. —Viriditas | Talk 21:03, 7 December 2006 (UTC):
- This seems like a useful extra feature, certainly. Can you just explain to me exactly how the attention flag works, etc.? Is it a simple case of adding something like "attention=yes" to the project banner? If so, I don't see why the bot couldn't set that flag if it was a project article, or else contact some of the editors if not a project article. - PocklingtonDan 21:40, 7 December 2006 (UTC)
- Yes, exactly. See Template_talk:WikiProject_Hawaii for a brief description. I doubt every WikiProject has the appropriate template, so your bot could check against a list of WikiProjects currently using the assessment template. —Viriditas | Talk 22:05, 7 December 2006 (UTC)
- Thank you for all comments. This is now in request for approval - PocklingtonDan 12:40, 8 December 2006 (UTC)
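For the record, the trawl-and-notify scope sketched at the top of this thread might reduce to something like this: diff the cleanup category against the previous run's snapshot, then pick a limited set of editors to notify. The contributor lists would come from page histories via the framework; everything here is illustrative.

```python
# Sketch of the weekly-trawl logic proposed above. Category membership
# and page histories would be fetched by the bot framework; these two
# pure functions show only the diff-and-select step.

def new_cleanup_articles(current_members, previous_members):
    """Articles tagged for cleanup since the last trawl."""
    return sorted(set(current_members) - set(previous_members))

def editors_to_notify(contributors_newest_first, limit=10):
    """First `limit` distinct editors, most recent first."""
    seen, picked = set(), []
    for user in contributors_newest_first:
        if user not in seen:
            seen.add(user)
            picked.append(user)
        if len(picked) == limit:
            break
    return picked

new = new_cleanup_articles(["A", "B", "C"], ["A"])
who = editors_to_notify(["Tom", "Ann", "Tom", "Sue"], limit=2)
```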
Spacing bot
Hi there!
I was wondering if it would be possible to run a spacing bot, either automated or manual, that puts spaces or newlines between wiki syntax and removes spaces or newlines if they weren't needed? Obviously it would have to be pretty simple so as not to get spacing wrong. I could probably knock something up in Yabasic pretty quickly, using an external program like Wget to handle getting and sending data from the page.
If anyone's interested I'll give an account of different wiki syntax that I believe spacing makes easier to understand (like putting spaces after bullet points before the text).
Cheers,
02:44, 7 December 2006 (UTC)
- This seems pretty straightforward, but is it a valuable use of resources? Jmax- 02:59, 7 December 2006 (UTC)
- Yes, if it makes wiki syntax while editing pages simpler to understand. By the way, what would be a good delay between each edit? Thanks, 03:09, 7 December 2006 (UTC)
- I'm skeptical; that's a lot of resources. Part of the rules of ) 11:56, 7 December 2006 (UTC)
- The reason for that rule is primarily that (a very few) people complained about such edits (for a number of reasons - one was that there was no "hide bots" option on the watchlist then). With the amount of AWB editing going on, were it to be relaxed, very likely most pages would be in conformance with AWB's "General fixes" quite quickly. However, the nightmare scenario would be if a different framework had different standards. So it could work, but would need a little planning and cooperation. 7 December 2006 (GMT).
- I don't know how many resources the bot would use up. I expect I would just focus it on the main articlespace. With a 2-10 second delay between edits, it should give the server enough of a break so as not to notice any difference (of course, you're more knowledgeable in this than me). I would suggest that it runs about once every day for a period of about one or two hours; is this acceptable? (To me, it sounds quite a long time, so feel obliged to point out a better timescale.) I'm not sure the rules of AWB carry any standing here, although I see how they could be a good guideline.
- The only nightmare I see is if different people have different spacing or newline standards. However, my (personal!) rule is to use as much whitespace as possible.
- I would like to see your input on this. Cheers, 31415 19:28, 7 December 2006 (UTC)
- Any bot that does nothing but make wikitext prettier is a waste of resources. A 10-second delay between edits means an extra 8640 edits per day with no benefit to readers. Whitespace edits also tend to make diffs more complicated, even though nothing substantive has happened. I am not convinced that this bot will benefit the encyclopedia at all, so any potential mild harm should count against it. Kusma (討論) 13:08, 8 December 2006 (UTC)
- Just a random user here, but I have to say I'm baffled as to what benefit, if any, the clearing of excess whitespace brings. - PocklingtonDan 13:27, 8 December 2006 (UTC)
- Hi! I agree with your statement, "... so any potential mild harm should count against it", but I personally disagree with your other statement, "... any bot that does nothing but make wikitext prettier is a waste of resources". A 10-second delay is too little for this sort of bot, I agree; if it were a 60-second pause, that would mean 60 edits per hour, i.e. 120 edits if the bot runs for a two-hour period. Whitespace, in my opinion, can make editing easier (and if there are, by any chance, two newlines in a row, they will be converted to one newline, therefore changing the display). It may be you are completely right; however, I just want to 'do my bit' to improve the encyclopedia. Perhaps, instead, I could write a bot to detect very short articles with no stub marker and report them to me, or some such. Any ideas? Cheers, 31415 19:06, 8 December 2006 (UTC)
- (removing indent) I applaud your wish to contribute your programming knowledge; it's just that I personally think rearranging whitespace is a poor use of your bot's time and the server's resources. If there isn't already such a thing, a bot that flags near-empty articles as stubs would seem a much better idea. If it was able to flag the stubs as project stubs based on the category they were in or the article name (this would probably need to be manually assisted), then all the better. Why not start a new section on this page to discuss this and move these comments there? - PocklingtonDan 19:11, 8 December 2006 (UTC)
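For reference, the two normalizations discussed in this thread (a space after bullet markers, and collapsing runs of blank lines) might look like the sketch below; as noted above, any real run would need community agreement on the standards first.

```python
import re

# Sketch of the two normalizations mentioned in this thread: put a space
# between a bullet/number marker and its text, and collapse runs of blank
# lines down to a single blank line. Purely illustrative.

def normalize_spacing(wikitext):
    out = re.sub(r"^(\*+|#+)(?=\S)", r"\1 ", wikitext, flags=re.MULTILINE)
    out = re.sub(r"\n{3,}", "\n\n", out)  # at most one blank line in a row
    return out

fixed = normalize_spacing("*item one\n\n\n\n* item two\n")
```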
WP:HI assessment
Aloha. WikiProject Hawaii assessment is just getting started and we really need help. To start, I need a bot to replace the current WikiProject tag with the new tag {{WikiProject Hawaii |class=NA |cat=yes}} on every category talk page contained within Category:WikiProject Hawaii articles (please exclude the subcats). Thank you for your assistance! —Viriditas | Talk 21:51, 7 December 2006 (UTC)
- Will add that to the queue ) 22:06, 7 December 2006 (UTC)
- Thank you for your good work. Can you do the exact same thing for Category:Unassessed Hawaii articles? I just noticed it is full of cats. —Viriditas | Talk 21:49, 8 December 2006 (UTC)
Great job. Now, on to stub assessment. I would like to add {{WikiProject Hawaii|class=Stub}} to all talk pages in Category:Hawaii stubs (including subcats). Please add class=Stub to unassessed or untagged articles only, skipping articles where class is already flagged. Thank you again. —Viriditas | Talk 01:32, 9 December 2006 (UTC)
User:WatchlistBot can help you with this. I'm a bit behind right now, but I'll add it to my to-do list and contact you when I can get to it, if you can't find anyone to do it sooner. Ingrid 01:48, 9 December 2006 (UTC)
- Thank you for your offer to help, Ingrid. Long time, no talk! Hopefully, Betacommand will get to the stub assessment before you do, because I want to talk to you about developing a dedicated WikiProject bot as well as utilizing WatchlistBot data. For example, I would like to see an infobox on the WikiProject page displaying the top ten most active articles in the project. Furthermore, along with what others are proposing regarding stubs, I was wondering if WatchlistBot could also monitor the Hawaii stub cats (mentioned above) for new additions, adding the WP tag and flagging class=Stub as needed. Thank you. —Viriditas | Talk 03:45, 9 December 2006 (UTC)
Inform 152 users of Common.css change
Per discussions
"Per
If there are any questions about this request, please ask me on my talk page rather than here. Thanks. Kaldari 08:33, 24 December 2006 (UTC)
- Done! ShakingSpirittalk 00:36, 25 December 2006 (UTC)
lowercase bot
Simply, this bot would check for {{
- Lowercase names aren't allowed by the MediaWiki software. —Mets501 (talk) 20:55, 10 December 2006 (UTC)
featured article counter
This is a request for a bot that counts the number of items at Wikipedia:Featured articles and puts that number into a template. Once the bot has proved reliable, it is envisaged that it will be flagged to edit the protected page Template:FA number, which is used as a counter within the Main Page FA box.
A bit of background, in
It occurs to me (as opposed to me relaying the results of the discussion to date) that some sort of vandal-proofing feature would be useful. Wikipedia:Featured articles is already under semiprotection, but a particularly determined vandal might add or remove items to make the Main Page number jump. One idea is the use of a user whitelist (admins and selected non-admin FA regulars), in which the bot waits 15 minutes or so if anyone not on the whitelist adds or removes articles, to give time for vandalism to be reverted. (Yes, I'm paranoid.) Hopefully this description has made sense. Thanks! - BanyanTree 13:43, 9 December 2006 (UTC)
- Seems pretty straight-forward; I'll let you know once I've set up a test page. --Jmax- 05:51, 10 December 2006 (UTC)
- Alright, see User:Jmax-bot/FACounter. I am in the middle of requesting approval for this bot. I will also need a list of whitelisted editors for WP:FA. --Jmax- 02:31, 11 December 2006 (UTC)
Out of curiosity, is there a GA counter bot? b_cubed 21:00, 11 December 2006 (UTC)
- It doesn't look like it. The number at Wikipedia:Good articles is updated manually. - BanyanTree 21:44, 11 December 2006 (UTC)
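The counting step requested at the top of this thread could be as simple as the sketch below, which counts bulleted entries in the page's wikitext; the whitelist/delay logic discussed above would wrap it, and the sample wikitext is invented.

```python
import re

# Rough sketch of the counting step: count bulleted link entries in the
# FA page's wikitext. The real page mixes section headers and prose, and
# the whitelist/wait logic discussed above would wrap this function.

def count_featured(wikitext):
    return len(re.findall(r"^\*\s*\[\[", wikitext, re.MULTILINE))

sample = "==Art==\n*[[Mona Lisa]]\n*[[Guernica (painting)|Guernica]]\n==Biology==\n*[[Ant]]\n"
n = count_featured(sample)
```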
Biography stubs
Any chance that I can get a bot to go through Category:Unassessed biography articles on the Biography WikiProject, and have it label any that are stubs as a stub in WikiProject Biography's rating? I know there's a bot that's doing something similar, but that one's more focused on finding unassessed articles than on assessing them. --Wizardman 05:56, 10 December 2006 (UTC)
Anyone? If there's a way to do it on AWB, how would I go about doing that then? --Wizardman 00:52, 12 December 2006 (UTC)
- You might want to take a look at ) 01:45, 12 December 2006 (UTC)
Datebot
Is there any way that a bot capable of unwikilinking dates (specifically years) could be made? I read a lot of articles that contain such wikilinked dates, e.g. 1942 or 1784, which really add nothing to the article. I am aware that, as it stands now, the Wikipedia policy on dates is to have all of the date wikilinked. However, in practice, there has been a growing trend with FA articles to unlink the dates. Personally I think a bot capable of this would be very useful. I'm not sure how to do it, otherwise I'd try myself. The only concern is that you'd have to make sure it doesn't unlink the "fuller" dates, e.g. November 20 1983. (If you can respond on my talk page it would be helpful.) b_cubed 19:52, 8 December 2006 (UTC)
- If given the go-ahead I could probably write this myself, but I'll wait for further input first. Please note I'll be away for a few hours; I'll get back to you then. Cheers! 31415 20:29, 8 December 2006 (UTC)
::I think I saw some discussion of this on another page recently. Objections were raised that:
::*It would lead to overlinking - ie wikipedia doesn't want every year linked.
::*Four-digit numbers could be used as a number or a year. ie "In the year 2000, X did Y" or "X led 2000 troops into battle"
::*Numbers can refer to something more specific ie "2000 AD" or whatever that Judge Dredd comic is.
::*Numbers could be years, but as part of fuller strings, ie "Battle of Suessonia (1976)" should link to the whole battle article, not just the year.
::*It wouldn't be possible to have a sufficiently clever bot to get round these caveats, so it would probably have to be manually-assisted and would thus represent a massive amount of work.
::Note the above are my recollections of what I read of a discussion of this same idea elsewhere
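For what it's worth, the "fuller dates" caveat above can be handled with a regex lookbehind so that only bare year links are touched. A minimal sketch (the pattern is illustrative, not what any approved bot actually runs, and it deliberately ignores piped links and non-year numbers):

```python
import re

# Matches a wikilinked bare year such as [[1942]]. The lookbehind skips
# years immediately preceded by another closed link (e.g. the year in
# "[[November 20]] [[1983]]"), which are the "fuller" dates that must
# stay linked. Parenthesized years inside longer titles never match,
# because the pattern requires [[ directly before the digits.
YEAR_LINK = re.compile(r"(?<!\]\] )\[\[(1\d{3}|20\d{2})\]\]")

def delink_years(wikitext: str) -> str:
    """Replace standalone [[YYYY]] links with plain text."""
    return YEAR_LINK.sub(r"\1", wikitext)
```

This still can't tell "In the year [[2000]]" from "[[2000]] troops", so a run would have to be manually assisted, as the objections above suggest.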
- Ignore me, didn't read properly, thought your edits were going to be in the opposite direction (ie linking, not delinking). Oops - PocklingtonDan 21:46, 8 December 2006 (UTC)
- I can set it up and have it run just getting the linked to years ) 21:42, 8 December 2006 (UTC)
- Out of curiosity does anyone know if a bot like this has been made before? Additionally, would someone be willing to explain how to make bots? b_cubed 04:24, 9 December 2006 (UTC)
- I use AWB/Pywikipedia framework but this is a simple find and replace command that I could do fairly simply. ) 06:25, 9 December 2006 (UTC)
- With regard to how to make bots, there is a new guide being written at Wikipedia:Creating a bot. It is not yet complete - PocklingtonDan09:37, 10 December 2006 (UTC)
- So for myself to get a bot of this kind, would I need someone else to write the code for me or do I need to figure it out? Although I suppose it technically wouldn't matter whose bot it was, as it would help edit wikipedia, I wanted to have one of my own (assuming of course that its usefulness is approved). b_cubed 20:59, 11 December 2006 (UTC)
- Either/or would work. I would offer to write this for you but already got a bot development project on the go. If you have programming experience check out the creating a bot guide i linked above, and you should be able to do this yourself. if not, you'll have to hang around til someone with free time and programming skills reads this post - PocklingtonDan 17:29, 12 December 2006 (UTC)
- Looks like I will be waiting around. b_cubed 19:12, 12 December 2006 (UTC)
Stub detector and marker bot
[Copied from above]: (removing indent) I applaud your wish to contribute your programming knowledge, it's just I personally think re-arranging whitespace is a poor use of your bot's time and the server's resources. If there isn't already such a thing, a bot that flags near-empty articles as stubs would seem a much better idea. If it was able to flag the stubs as project stubs based on the category it was in or article name (this would probably need to be manually-assisted) then all the better. Why not start a new section on this page to discuss this and move these comments there? - PocklingtonDan 19:11, 8 December 2006 (UTC)
- I'd love to. How does everyone feel about this? 31415 00:56, 9 December 2006 (UTC)
- Manually assisted I'd be fine with, but a bot going around analyzing articles and tagging them is a little unnerving for me. --Mets501 (talk) 11:54, 11 December 2006 (UTC)
- You are correct. My simple definition of a stub would be an article of ten or fewer sentences; this could be manually assisted, the user giving appropriate stub markers, or the bot could just mark the article as a {{stub}} and move on. How does this sound? Cheers, 31415 06:22, 12 December 2006 (UTC)
- Since you are able to write this yourself, I would move straight to the request for approval stage rather than posting here - you'll soon find out whether or not people think your proposal is any good there. If this is your first bot, make sure to follow the guidelines for the request process, including setting up a user page for your bot etc and writing a full spec - see other examples on the request for approval page - PocklingtonDan 19:22, 12 December 2006 (UTC)
- Okay, I will when I have time :). At the moment I am trying to figure out how to upload the text, it is a bit harder than I thought. I may use one of the frameworks yet. Cheers! 31415 07:05, 13 December 2006 (UTC)
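As a sketch of the "ten or fewer sentences" rule proposed above, the check might look like this (the cleanup regexes are rough assumptions, not a real wikitext parser):

```python
import re

def looks_like_stub(wikitext: str, max_sentences: int = 10) -> bool:
    """Rough stub test: ten or fewer sentences of visible prose."""
    # Strip simple templates, references, and link markup before counting.
    text = re.sub(r"\{\{[^{}]*\}\}", "", wikitext)
    text = re.sub(r"<ref[^>]*>.*?</ref>", "", text, flags=re.DOTALL)
    text = re.sub(r"\[\[(?:[^|\]]*\|)?([^\]]*)\]\]", r"\1", text)
    sentences = re.findall(r"[^.!?]+[.!?]", text)
    return len(sentences) <= max_sentences
```

A manually-assisted run would show the editor each flagged page before tagging, which addresses the concern above about fully automatic tagging.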
Categorize
I recently created a new category
- AutoWikiBrowser can probably do this. Can you give me the names of the sub categories, so I can give it a try? Jayden54 10:00, 11 December 2006 (UTC)
- Yea sure. Here are the sub categories I have so far: talk/contribs 03:12, 12 December 2006 (UTC)
- I've done all the articles in those categories (around 160), and they've been added to the new category. Let me know when you have any other categories I need to do. Cheers, Jayden54 21:31, 12 December 2006 (UTC)
- OK, thanks a lot for your help! talk/contribs 23:13, 12 December 2006 (UTC)
Orphan article bot
Does anyone want to take over tagging orphan articles with {{linkless}}? The actual tagging is simple with AWB, just load Special:Lonelypages (it refreshed Saturdays/Wednesdays as of mid-November) and run the bot, it will make about 800 edits each refresh. I can supply you with the regex I used to ignore an array of pages that should not be tagged (dab pages, pages to be transwikied, various other odd stuff). You would also want to maintain a list of orphaned articles, you can see what I mean at User:W.marsh/orphans articles/A-C.
The one hitch is that you will also want to de-orphan the ignored articles somehow or other, you could create a list of orphaned dab pages and so on from your userspace, or manually add them to
The more automation you can add (e.g. automatically updated lists) the faster this would be, unfortunately I could never add much except automated tagging with AWB. It's not glamorous work (for every 3,000 or so edits my bot made, I got about one reply on my talk page) but I think it's helpful work, I did notice a whole lot of articles getting de-orphaned within a few days after the tag was added. --W.marsh 15:37, 13 December 2006 (UTC)
- can you give me the regex? ) 20:11, 13 December 2006 (UTC)
- The regex I used to ignore articles was:
linkless|geodis|copyvio|{{Disambig|this AfD|This page has been deleted|#redirect|dated prod|4LA|{{dab}}|{{hndis|{{disamb|numberdis|4CC|3CC|2CC|Schooldis|Shipindex|Tempdab|Wikipedia does not currently have an encyclopedia article for|{{wi}}|{{surname|4CC|TLAdisambig|{{deleted|{{move to|{{dicdef
- It's crude but it worked pretty well. But you'll need to maintain some kind of a list, or use a non-AWB bot that can check "what links here" or special:lonelypages will just give you the same articles every refresh. --W.marsh 20:18, 13 December 2006 (UTC)
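Outside AWB, the skip pattern above would just be an ordinary search run before tagging. A sketch (only a trimmed subset of the quoted pattern is shown, and the date parameter on the tag is illustrative):

```python
import re

# A trimmed subset of the AWB "skip if page contains" pattern quoted above.
SKIP = re.compile(
    r"linkless|geodis|copyvio|\{\{Disambig|dated prod|#redirect",
    re.IGNORECASE,
)

def tag_if_orphan(wikitext: str) -> str:
    """Prepend {{linkless}} unless the page matches the skip pattern."""
    if SKIP.search(wikitext):
        return wikitext
    return "{{linkless|December 2006}}\n" + wikitext
```

As W.marsh notes, a non-AWB bot could additionally check "what links here" via the API, so that the same articles don't come back every Special:Lonelypages refresh.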
Census Bot
We have a lot of articles about individual US towns, many of which refer to the "2000 census" in their introductions. As we have an article for the
- Are they in any specific category, or is there any other way I can track them? --Jmax- 08:04, 14 December 2006 (UTC)
Contact the people listed on Wikipedia:Translators available
Hello, I've upgraded with others
One thing we decided is to migrate from one big static page with all the available translators (which is heavy, hard to maintain, never up-to-date) to two userbox templates ({{
To migrate from the old system to the new one, since there were a lot of people, I need a bot which would go through every user listed on Wikipedia:Translators available, and which would leave the following message on their talk page:
{{subst:Translation/Talkpage}}
Jmfayard 18:22, 14 December 2006 (UTC)
- Ok. I'll get this done for you ASAP. --Jmax- 20:19, 14 December 2006 (UTC)
- I have asked for clarification of policy for this request. I am still a relatively new editor, and am not completely familiar with WP:BOT. Sorry for the delay. --Jmax- 03:44, 15 December 2006 (UTC)
New user bot?
Can someone make a bot which places new user templates on someone's talk pages? If you can, I would seriously appreciate it!
- Such requests are usually denied, unfortunately. --Jmax- 03:01, 16 December 2006 (UTC)
- If you haven't done so already, I would suggest contacting User:FrummerThanThou, who is trying to put together a proposal for this that everyone is happy with - PocklingtonDan 10:16, 16 December 2006 (UTC)
- Wow, I wanted to do exactly that. But, I don't know bot code. :-( Can someone help me? Bearly541 13:49, 17 December 2006 (UTC)
- There is no such thing as "bot code". Bots can be programmed in several languages. Read Wikipedia:Creating_a_bot. Cheers - PocklingtonDan 13:55, 17 December 2006 (UTC)
WP:AIV-Clearing Bot?
Is it possible to make a bot that automatically removes users that are blocked? Oftentimes admins forget to remove users that they block, or spend time blocking the user and putting the appropriate message(s) on the blocked user's page. Or, sometimes when there are a large number of notices on
- Check with Voice of All; he has a bot that does something similar on WP:RPP, so he probably has the framework that would be necessary already and could make needed modifications, rather than writing a whole new bot. Essjay (Talk) 01:14, 15 December 2006 (UTC)
- Do you mean something like: user A adds vandal B to AIV. admin C blocks vandal B, but forgets to remove him from AIV. bot D looks through all block logs of people listed at AIV every... 1 minute?... and sees that vandal B has been blocked, then edits AIV to remove the vandal, and posts on vandal B's talkpage about being blocked? Seems simple enough I might be able to give my framework a go. GeorgeMoney (talk) 05:31, 16 December 2006 (UTC)
- I'd be able to add this to VoABot I I suppose. It would just AJAX down the usernames, looking at the ipblocklist for each user; if they are blocked, it could remove the name. Voice-of-All 07:39, 20 December 2006 (UTC)
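The clearing loop GeorgeMoney describes might be sketched like this (the `is_blocked` callback stands in for a real ipblocklist/API query, and the listing pattern is an assumption about how WP:AIV reports are formatted):

```python
import re

# A WP:AIV report line, e.g. "* {{vandal|SomeName}} reason ..."
LISTING = re.compile(r"^\*\s*\{\{(?:vandal|IPvandal)\|([^}]+)\}\}")

def clear_blocked(aiv_wikitext: str, is_blocked) -> str:
    """Drop report lines for users an admin has already blocked."""
    kept = []
    for line in aiv_wikitext.splitlines():
        m = LISTING.match(line)
        if m and is_blocked(m.group(1)):
            continue  # already handled; remove the stale report
        kept.append(line)
    return "\n".join(kept)
```

Run every minute or so, as suggested above, this would also be the natural place to post the block notice on the vandal's talk page if the blocking admin forgot.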
Article/Talk Page latest revision date/time
As part of some work I am carrying out on Scottish Historic Railways, I have created a progress and reference page in my user space at User:Pencefn/Historical Scottish Railways. I would like to add the latest revision date/time into the table for the article and associated talk page on the second and fourth column respectively - which is updated to reflect the work in progress (covering updates to articles and the potential addition of more articles). Can anyone help me? Stewart 19:16, 21 December 2006 (UTC)
- How often do you need it run? If not just once, then this may not be a valuable use of resources. Not my call to make, however. If deemed otherwise, I'd be glad to assist. -- Jmax- 20:55, 21 December 2006 (UTC)
- I guess once every two or three weeks. This is my first foray into the use of bots, so advice, guidance, etc. gratefully received. Stewart 21:08, 21 December 2006 (UTC)
Unsigned Comments in Ref Desk/Welcome
Whoever puts any comments on REF desk without signing should be given a welcome template, so they will sign next time. (only if they don't have the welcome template already).--Judged 23:10, 22 December 2006 (UTC)
- You should ask Hagerman about this (his bot HagermanBot already puts unsigned templates on unsigned user comments). —Mets501 (talk) 23:57, 23 December 2006 (UTC)
Neutrality template categories
Happy Holidays and Happy New Year to everyone. I'm curious if anyone knows of any bots working the neutrality template categories. I would like to know what percentage of articles have neutrality-related tags by WikiProject and have a report generated, with a template updated on the project page (Pearle produced a similar report listing articles needing cleanup). After the report is generated, the template on the project page could be updated with a percentage linking to the category of WikiProject-related neutrality issues. Something like, "12% of articles require attention for neutrality-related issues." WikiProject departments would deal with this. The bot would only need to be run once a week. Thanks. —Viriditas | Talk 03:33, 25 December 2006 (UTC)
Wikipedia:WikiProject Massively multiplayer online games Banner Bot
I was wondering if someone with a bot would be able to place banners for
- I should be able to start within the next 24 hours —The preceding talk • contribs) 05:39, 19 December 2006 (UTC).
- Not to be rude, but when will you be starting? It has almost been a week. If you cannot do it, would there be any other bot owners willing to help? By the way, the tag to place on the talk pages is {{WP_MMOG}}. Thanks in advance and have a merry Christmas! Greeves 00:00, 25 December 2006 (UTC)
- He started today (my watchlist is now full of "Tagging for {{WP_MMOG}}" ^_^) ShakingSpirittalk 00:41, 25 December 2006 (UTC)
- Great! Thanks Betacommand! Greeves 17:10, 25 December 2006 (UTC)
- Not a problem, just spread the word about my bot and the availability of it :) ) 01:14, 26 December 2006 (UTC)
{{Permprot}} application
A bot would be needed to carry out this task following a modification to {{
- Look through the linked pages in the Template: namespace.
- If the page uses {{/doc}}, add doc=yes as a parameter to {{Permprot}}.
Circeus 21:14, 23 December 2006 (UTC)
- Basically no bots have admin rights, so this has to be done by hand. —Mets501 (talk) 02:25, 26 December 2006 (UTC)
- {{Permprot}} is a talk page header. Circeus 02:31, 26 December 2006 (UTC)
- can I get a few links to what you are talking about (examples)? ) 02:47, 26 December 2006 (UTC)
- I think he means pages like Template:POV. —Mets501 (talk) 03:03, 26 December 2006 (UTC)
- No. Stuff like Template:Protected template instead, which will then make the bot request far more useful, by making it clear that edits to the documentation (quite frequent) can be made directly. Circeus 03:16, 26 December 2006 (UTC)
AfD alert bot
Given that my proposal for an additional step to the AfD process (found here) is meeting both opposition and the suggestion that the job could be better done by a bot, I've brought that proposal here. The suggestion is a reasonably simple one:
- Once or twice a day (preferably the latter), the bot would scan through the list of AfD nominations at WP:AFD/T.
- For each nomination, it would check the history to find the creator and creation date of the article;
- If the article is older than four months, it is ignored and the bot continues to scan the other nominations.
- If it is younger than four months, the process continues.
- The bot then moves to the User Talk page of the article's creator, and checks that it does not already contain an instance of {{AFDNote}} for that article.
- If there is none, it places {{subst:AFDNote|ArticleName}} -- ~~~~ at the bottom of the page. No new section is required, since the template creates its own.
- Finally, it returns to WP:AFD/T and continues.
This would avoid the bureaucracy that is the major criticism of my original proposal, and (hopefully) significantly reduce the problems of
) 01:11, 27 November 2006 (UTC)
- I should note that I would be happy to manually run this bot once or twice daily, if it were not fully automated. ) 01:34, 27 November 2006 (UTC)
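The duplicate-notice check in the steps above could be sketched as follows. One simplification worth flagging: after {{subst:}} the raw template call no longer appears on the saved page, so a real bot would match the substituted wording or a hidden comment marker instead; the string check here is just illustrative.

```python
AFD_NOTE = "{{subst:AFDNote|%s}} -- ~~~~"

def add_afd_note(talk_wikitext: str, article: str) -> str:
    """Append the AfD notice unless one for this article is present.

    Simplified: we look for 'AFDNote|<article>' in the raw wikitext;
    a substituted page would need a different marker.
    """
    if "AFDNote|" + article in talk_wikitext:
        return talk_wikitext
    return talk_wikitext + "\n" + AFD_NOTE % article + "\n"
```

No new section is needed, matching the proposal, since the template creates its own.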
- If there is currently no system in place to notify users that articles they started are nominated for deletion then I think this is a great idea. I am unable to offer you any help right now due to other commitments but if you need help coding this in a week or so I could help. You can start the approvals request process without actually having any code written, which will tell you if it is a good idea or whether to give it up. - PocklingtonDan 19:19, 12 December 2006 (UTC)
- Support - this sounds like a really good idea, and I was wondering if any progress has been made. I can write this bot from scratch if necessary, just let me know. Jayden54 18:18, 22 December 2006 (UTC)
- I have listed this request on Requests for approval so if have any comments or suggestions, please list them there. Cheers, Jayden54 20:01, 26 December 2006 (UTC)
deadlink removal
I'm using the Weblinkchecker.py bot and have a whole load of bad links. Is there a way to have a bot remove them from the articles? (I realize that this could be hard, since we have refs and [] links) One output looks like:
- http://www.zbi.ee/fungal-genomesize/index.php
- In Animal Genome Size Database on Tue Sep 12 01:04:24 2006, Socket Error: (10054, 'Connection reset by peer')
- In C-value on Wed Nov 29 14:41:33 2006, Socket Error: (10060, 'Operation timed out')
ST47Talk 22:13, 5 December 2006 (UTC)
- I was looking at replace.py for this, and, well, it's ugly. Every link would need me to run the bot another time, with different parameters. I'm thinking a .BAT file with each replacement.
- RegExes I will place here for development purposes
- [\[|<ref>]''opening tag''[^<\[]*''additional text, like {{cite''%link[^<\]]*''More text before the end''[\]|</ref>]''end tag''
- To
- Tested in AWB, didn't work. Any other ideas? ST47Talk 19:22, 6 December 2006 (UTC)
- Eagle_101, king of regular expressions, says:
- (<ref>.*?url=\s*|\[)LINK*?(</ref>|\])
- I don't think simply removing dead links is a good idea at all. The link was presumably added for a reason - because it held good content. Because of web caching services such as Google and Alexa and WebCite etc this information may still be available even though the original link is dead. I would not want a bot simply removing the dead link without giving people a chance to manually update or find a cached copy of the linked page - PocklingtonDan 14:53, 18 December 2006 (UTC)
- Definitely NOT a good idea, though an understandable desire: Wikipedia:Dead external links specifically says that dead links are NOT to be removed.
- On the other hand, TAGGING such dead links or otherwise marking them could be a GREAT idea - then other editors would know that a problem existed when they read the article with the bad link in it. A similar concept is being discussed at Wikipedia talk:Disambiguation pages with links; it has been suggested that a template be put immediately after the bad link; the template would display the problem (as does, for example, {{fact}}), and could contain a link ("more"; "help", whatever) that a user could click on to get to an instruction page that would discuss possible ways to fix the bad external link. John Broughton | Talk 00:54, 28 December 2006 (UTC)
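The tagging approach John describes could be driven directly by the Weblinkchecker.py output quoted at the top of this thread. A sketch (the tag name and date parameter are illustrative placeholders, not an agreed-on template):

```python
import re

def tag_dead_link(wikitext: str, url: str) -> str:
    """Append an inline tag after each bare external link to `url`,
    rather than removing it, per Wikipedia:Dead external links."""
    pattern = re.compile(r"(\[%s[^\]]*\])" % re.escape(url))
    return pattern.sub(r"\1{{dead link|date=December 2006}}", wikitext)
```

Editors reading the article then see the problem inline (like {{fact}}) and can hunt for a cached copy instead of the link silently vanishing.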
Category counts
I'm trying to find out how many articles and categories ultimately descend from Category:Dungeons & Dragons and how many of these are stubs (both by categorization and by byte/word count). Lists would be good if possible. For comparison sake, I'm also seeking similar numbers for Category:Chess. This is for the following purposes:
- See how much of Wikipedia as a whole is given over to D&D. (It seems to me D&D exposure is much higher among Wikipedians than among the general population.)
- Determine whether sweeping mergers are called for, into titles one level less specific (e.g. Dwarven deities in the World of Greyhawk rather than the individual deities).
NeonMerlin 23:56, 24 December 2006 (UTC)
- There are 1041 articles in categories branching from Category:Chess. In order to collect the statistics you desire, I would have to request each of those pages and perform a character count. I'll speak to someone from the Bot Approvals Group and see if they'll let me perform this, or what I must do in order to. -- Jmax- 21:30, 25 December 2006 (UTC)
- PockBot would give you a list of all articles in category, as well as their article class (eg stub) - PocklingtonDan 16:28, 27 December 2006 (UTC)
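The census Jmax- describes (walk the category tree, request each page, count characters) might be structured like this. The category/article accessors here are stand-ins for real framework calls, and the byte threshold for "stubby" is an arbitrary assumption:

```python
# Sketch of a recursive category census. `get_subcats`, `get_articles`,
# and `get_text` stand in for pywikipedia-style API calls; the 2500-byte
# cutoff is a crude, illustrative stub threshold.
def census(category, get_subcats, get_articles, get_text, seen=None):
    """Return (total articles, short articles) under a category tree."""
    if seen is None:
        seen = set()
    total = stubs = 0
    for art in get_articles(category):
        if art in seen:
            continue  # articles often sit in several branches
        seen.add(art)
        total += 1
        if len(get_text(art)) < 2500:
            stubs += 1
    for sub in get_subcats(category):
        if sub not in seen:
            seen.add(sub)
            t, s = census(sub, get_subcats, get_articles, get_text, seen)
            total += t
            stubs += s
    return total, stubs
```

The `seen` set matters for both purposes above: without de-duplication, articles in several subcategories would inflate the D&D-versus-Chess comparison.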