Saturday, 5 September 2015

Cloaking: What It Is and Why You Shouldn't Do It

Cloaking is a black hat search engine optimization (SEO) technique in which the content presented to the search engine spider differs from the content presented to the user's browser. This is done by delivering content based on the IP address or the User-Agent HTTP header of the client requesting the page. When a visitor is identified as a search engine spider, a server-side script delivers a different version of the web page, one that contains content not present on the visible page. The purpose of cloaking is to deceive search engines so they display the page when they would not otherwise display it.
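
To make the mechanism concrete, here is a minimal sketch of such a server-side User-Agent check, written with Python's standard library. The crawler signatures and both page bodies are hypothetical placeholders, not any real site's configuration; real cloaking scripts vary widely.

```python
# Illustrative sketch only: User-Agent-based cloaking using Python's standard
# library. BOT_SIGNATURES and both page bodies are hypothetical placeholders.
from http.server import BaseHTTPRequestHandler, HTTPServer

BOT_SIGNATURES = ("Googlebot", "Bingbot", "Slurp")  # example crawler UA tokens

VISIBLE_PAGE = b"<html><body><h1>Content shown to human visitors</h1></body></html>"
CLOAKED_PAGE = b"<html><body><h1>Keyword-stuffed copy shown to crawlers</h1></body></html>"

class CloakingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        ua = self.headers.get("User-Agent", "")
        # Serve a different document when the User-Agent looks like a crawler.
        if any(sig in ua for sig in BOT_SIGNATURES):
            body = CLOAKED_PAGE
        else:
            body = VISIBLE_PAGE
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), CloakingHandler).serve_forever()
```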

As of 2006, better methods of accessibility, including progressive enhancement, are available, so cloaking is no longer considered necessary by proponents of those methods. Cloaking is often used as a spamdexing technique to try to trick search engines into giving the relevant site a higher ranking; it can also be used to trick search engine users into visiting a site based on its search engine description, when that site turns out to have substantially different, or even pornographic, content. For this reason, major search engines consider deceptive cloaking a violation of their guidelines and delist sites when deceptive cloaking is reported.
Cloaking is a form of the doorway page technique.

A similar technique is also used on the Open Directory Project web directory. It differs in several ways from search engine cloaking:

* It is intended to fool human editors rather than computer search engine spiders.
* The decision to cloak or not is often based on the HTTP referrer, the user agent, or the visitor's IP address. More advanced techniques can also be based on analysis of the client's behaviour after a few page requests: the raw quantity of, the ordering of, and the latency between subsequent HTTP requests sent to a website's pages, plus whether the client checks for a robots.txt file, are parameters in which search engine spiders differ heavily from natural user behaviour. The referrer gives the URL of the page on which a user clicked a link to reach the current page.
* Some cloakers serve the fake page to anyone who arrives from a web directory, since directory editors usually examine sites by clicking links that appear on a directory page. Other cloakers serve the fake page to everyone except visitors coming from a major search engine; this makes the cloaking harder to detect while costing few visitors, since most people find websites through a search engine. A sketch of this referrer-based decision follows the list.
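
As a rough illustration of the referrer-based decision described above, here is a minimal Python sketch. The directory and search engine host lists are hypothetical examples, and the two variants correspond to the two cloaking strategies in the last list item.

```python
# Hypothetical sketch of the referrer-based decision described above; the
# host lists are example values, not a real cloaker's configuration.
from urllib.parse import urlparse

DIRECTORY_HOSTS = ("dmoz.org",)                       # e.g. the Open Directory Project
SEARCH_ENGINE_HOSTS = ("google.", "bing.", "yahoo.")  # example major engines

def referrer_host(referrer: str) -> str:
    """Extract the host name from the HTTP Referer header value."""
    return urlparse(referrer).netloc.lower()

def serve_fake_page(referrer: str, variant: int = 1) -> bool:
    """Return True when the fake (cloaked) page should be served."""
    host = referrer_host(referrer)
    if variant == 1:
        # Variant 1: fake page for visitors arriving from a web directory,
        # which targets directory editors following directory links.
        return any(d in host for d in DIRECTORY_HOSTS)
    # Variant 2: fake page for everyone EXCEPT search engine traffic,
    # which is harder to detect while losing few visitors.
    return not any(s in host for s in SEARCH_ENGINE_HOSTS)

# Example: a visitor clicking through from a Google results page.
print(serve_fake_page("https://www.google.com/search?q=widgets", variant=2))  # False
```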
