Talk:Cloaking


Untitled It says "

". There has been real research in the fields of cloaking. This... annoys me.

* Other cloakers give the fake page to everyone except those coming from a major search engine; this makes it harder to detect cloaking, while not costing them many visitors, since most people find websites by using a search engine.

Who would then see the fake page (just the occasional person who bookmarked it or opened it from an email?), and what would it contain?
Superm401 Talk 04:24, 6 January 2007 (UTC)
Superm401 asks: "Who would then see the fake page?"
Answer: everyone except the users who click through to that page from a SERP, and those users are expected to account for by far the largest share of total visits.
The fake page would contain just some spammy, search-optimized content, made with the sole goal of ranking high on search engines for a given keyword. The "real" page (the one seen by any user coming from a search) may instead contain affiliate links, dialers/malware, porn, or anything else, depending on the webmaster's actual goal.
If you have further questions, please feel free to ask :)
--Jestering 11:54, 19 January 2007 (UTC)
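For readers who want the mechanics spelled out: a minimal sketch of the referrer-based scheme described above might look like the following. The host list, file names, and function name are illustrative assumptions, not taken from any real site.

```python
# Minimal sketch of the referrer-based cloaking variant described above.
# SEARCH_ENGINE_HOSTS, decoy.html, and payload.html are all hypothetical.
from urllib.parse import urlparse

SEARCH_ENGINE_HOSTS = {"google.com", "www.google.com",
                       "bing.com", "www.bing.com"}

def pick_page(referer):
    """Decide which page variant to serve, given the Referer header (or None)."""
    if referer:
        host = urlparse(referer).netloc.lower()
        if host in SEARCH_ENGINE_HOSTS:
            # Visitor clicked through from a SERP: serve the "real",
            # monetized page (affiliate links, malware, and so on).
            return "payload.html"
    # Crawlers send no Referer, so they index the decoy; direct and
    # bookmark visitors also get the decoy, which is why the cloak is
    # hard to spot by checking the site by hand.
    return "decoy.html"

# A crawler request (no Referer) versus a SERP click-through:
assert pick_page(None) == "decoy.html"
assert pick_page("https://www.google.com/search?q=widgets") == "payload.html"
```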

the "what search engines see" link doesn't work Linelor 21:50, 2 July 2007 (UTC)[reply]

Google cloaking

Something needs to be added concerning sites like jstor, jwatch, ingentaconnect, etc, cloaking journal content that is indexed by Google. Even if "legitimate", it still is cloaking since the search engine indexes content for public consumption that cannot be accessed without going through a pay wall. -Rolypolyman (talk) 01:26, 1 March 2008 (UTC)

I don't think there's anything legitimate or special about it at all. It's still deception to boost search rankings. It should be presented right alongside all the other spammer techniques. Gigs (talk) 19:59, 6 April 2008 (UTC)
Some useful information about the topic of ongoing SEO cloaking by academic journals: http://golem.ph.utexas.edu/category/2007/07/web_spamming_by_academic_publi.html -Rolypolyman (talk) 04:51, 10 June 2008 (UTC)
I think the key point is that this form of cloaking has the approval of search engines [1] and also is (in the opinion of both Google and those publishers, and some independent commentators) not about deception to boost search rankings but about making the search engines more useful. Nil Einne (talk) 16:51, 5 July 2009 (UTC)
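For what it's worth, the variant discussed in this section usually keys on the crawler's User-Agent rather than the Referer. A rough sketch, with made-up helper names, of what such a publisher-side check amounts to:

```python
# Rough sketch of publisher-style cloaking: the full article for a
# recognized crawler, a paywall stub for everyone else. CRAWLER_TOKENS
# and both render_* helpers are hypothetical.
CRAWLER_TOKENS = ("googlebot", "bingbot")

def render_full_article():
    return "<html>full article text, visible to the indexer</html>"

def render_paywall_stub():
    return "<html>abstract plus a purchase-access prompt</html>"

def page_for(user_agent):
    ua = (user_agent or "").lower()
    if any(token in ua for token in CRAWLER_TOKENS):
        return render_full_article()  # what gets indexed
    return render_paywall_stub()      # what a human visitor sees
```

A User-Agent string is trivially spoofed, so sites doing this carefully also verify the crawler's IP address before serving the full text.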

Truth vs. Verifiability

This article has 5 references from search engines where they claim to delist cloaking sites, but the fact is that experts-exchange.com, webmasterworld.com, nytimes.com, and lioncity.net have all done cloaking for years; it's well known, and the major search engines do nothing. Are you ready for IPv6? (talk) 18:25, 1 January 2009 (UTC)

Nobody said they delist all cloaking sites immediately. They certainly do have a policy that they may delist cloaked sites at any time. It made the news some time ago when one of BMW's websites was delisted for cloaking.

I believe that experts-exchange.com, at least, no longer cloaks. It changed things so that the questions are visible (and are indexed by Google), but the answers are not visible (neither to humans nor to Google). This is annoying, but not cloaking. Compare, for instance, [2] and [3]. They did use to cloak (and I always reported them when they turned up in search results), but they don't seem to anymore.

If you had specific links for the other sites that exhibited the cloaking, those might be helpful as points of reference, although maybe not to put in the article itself (too liable to change). —Simetrical (talk • contribs) 18:13, 2 January 2009 (UTC)

webmasterworld.com lets search engines crawl the whole site, but anyone else, after a couple of pages, suddenly gets their IP blocked for about a day and is told they must pay $200 to register. nytimes.com requires cookies to be enabled to view the site, except for search engines, which it lets crawl without cookies. lioncity.net is one of many forums that block humans from viewing posts until they register, but let search engines crawl all they want. webmasterworld has been complained about since around 2005 and nothing was ever done by Google. Are you ready for IPv6? (talk) 04:18, 6 January 2009 (UTC)
Google doesn't consider it cloaking if people are blocked from viewing when they browse away from the actual search result. As long as the first click is free, there's no cloaking. As for nytimes.com, I seem to be able to browse just fine. Requiring cookies from people but not spiders might technically be cloaking, but in practice it's not noticeable. lioncity.net is definitely cloaking, but it's a pretty minor site and it's not surprising that Google hasn't caught it (I just reported it, don't know if that will do any good).

I think it's true that people often get away with cloaking, but it's not like the anti-cloaking rules are completely unenforced. They just aren't necessarily enforced for small-time violators, and for larger-scale violators they aren't necessarily enforced immediately. Would you say that the US government doesn't punish tax evasion just because some people get away with it in practice? I don't think the article is overly misleading, but a note that cloaking isn't necessarily dealt with in all cases would be useful, if we could find a source for that. —Simetrical (talk • contribs) 14:44, 6 January 2009 (UTC)
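To make the "first click is free" distinction above concrete, here is a hedged sketch of that arrangement; the token and host lists are assumptions, not any search engine's published rules.

```python
# Sketch of the "first click free" arrangement: the crawler may index
# everything, a visitor arriving from a search result gets the page,
# and further browsing hits the registration wall. Names are hypothetical.
from urllib.parse import urlparse

CRAWLER_TOKENS = ("googlebot",)
SEARCH_ENGINE_HOSTS = {"google.com", "www.google.com"}

def access_allowed(user_agent, referer):
    """Return True if the full page should be served for this request."""
    ua = (user_agent or "").lower()
    if any(token in ua for token in CRAWLER_TOKENS):
        return True   # the crawler may fetch and index everything
    if referer and urlparse(referer).netloc.lower() in SEARCH_ENGINE_HOSTS:
        return True   # the visitor's first click from a SERP is honored
    return False      # subsequent clicks hit the registration wall
```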

"or even pornographic content" is very clumsy

Cloaking is often used as a spamdexing technique, to try to trick search engines into giving the relevant site a higher ranking; it can also be used to trick search engine users into visiting a site based on the search engine description which site turns out to have substantially different, or even pornographic content.

If the intent is to establish some kind of extreme example of how "different" the content can be, comparing what the search engine reports it to be against what the User actually sees, then I think the wording needs to be reworked. Otherwise, this sounds like some kind of moral condemnation of pornography in general, creating the impression that Wikipedia thinks that "pornography" is so dramatically different that it deserves its own class, "pornographic", because "different" cannot accurately describe it.

What if the search phrase is actually pornographic? Then "pornographic" results are not wildly "different" from what a User would expect; they would be exactly what the Search Engine indicated they would be. Jonny Quick