Sometimes we rank well on one engine for a specific keyphrase and assume that all search engines will like our pages, and that we will therefore rank well for that keyphrase on a variety of engines. Sadly, this is rarely the case. All the major search engines differ somewhat, so what gets you ranked high on one engine may actually lower your ranking on another.
It is because of this that some people prefer to optimize pages for each particular search engine. Often these pages differ only slightly, but that slight difference can make all the difference when it comes to ranking high.
However, because a search engine spider crawls through a site indexing every page it can find, it may come across your search-engine-specific optimized pages. Because these pages are very similar, the spider may assume you are spamming it and do one of two things: ban your site altogether, or severely penalize you in the form of lower rankings.
The solution in this case is to prevent specific search engine spiders from indexing some of your web pages. This can be done using a robots.txt file that resides in your webspace.
A robots.txt file is a vital part of any webmaster's battle against being banned or penalized by the search engines when she designs different pages for different search engines.
The robots.txt file is just a simple text file, as the file extension suggests. It is created using a plain text editor like Notepad or WordPad; sophisticated word processors like Microsoft Word will only corrupt the file.
You insert certain directives into this text file to make it work. Here is how it is done:
User-Agent: (Spider Name)
Disallow: (File Name)
The User-Agent is the name of the search engine's spider, and Disallow is the name of the file that you don't want that spider to index.
You have to start a new batch of code for each engine, but if you want to list multiple disallowed files, you can place them one below another. For example:
User-Agent: Slurp (Inktomi’s spider)
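The Disallow lines appear to have been lost from the listing above. A sketch of what the complete block might have looked like can be checked locally with Python's standard `urllib.robotparser` module; the filenames (`index-gg.html` and so on) are hypothetical placeholders, not from the original article:

```python
from urllib import robotparser

# Hypothetical reconstruction of the truncated listing: the original
# filenames are unknown, so these are illustrative placeholders only.
robots_txt = """\
User-Agent: Slurp
Disallow: /index-gg.html
Disallow: /index2-gg.html
Disallow: /index-al.html
Disallow: /index2-al.html
"""

rp = robotparser.RobotFileParser()
rp.parse(robots_txt.splitlines())

# Slurp (Inktomi's spider) is barred from the Google- and
# AltaVista-optimized pages...
print(rp.can_fetch("Slurp", "http://example.com/index-gg.html"))      # False
# ...while other spiders, e.g. Googlebot, are unaffected.
print(rp.can_fetch("Googlebot", "http://example.com/index-gg.html"))  # True
```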
The above code disallows Inktomi from spidering two pages optimized for Google (gg) and two pages optimized for AltaVista (al). If Inktomi were allowed to spider these pages as well as the pages made specifically for Inktomi, you would run the risk of being banned or penalized. Hence, it is always a good idea to use a robots.txt file.
The robots.txt file resides in your webspace, but where in your webspace? The root directory! If you upload the file to a sub-directory it will not work. If you want to disallow all engines from indexing a file, you simply use the "*" character where the engine's name would usually be. But beware: the "*" character will not work on the Disallow line.
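The catch-all user-agent can be sketched the same way with Python's `urllib.robotparser`; the domain and filename below are placeholders:

```python
from urllib import robotparser

# "*" on the User-Agent line matches every spider
# (the filename is a placeholder).
robots_txt = """\
User-Agent: *
Disallow: /secret-page.html
"""

rp = robotparser.RobotFileParser()
rp.parse(robots_txt.splitlines())

# Any spider name is covered by the catch-all entry.
print(rp.can_fetch("Googlebot", "http://example.com/secret-page.html"))  # False
print(rp.can_fetch("Scooter", "http://example.com/secret-page.html"))    # False
# Pages that are not listed remain crawlable.
print(rp.can_fetch("Googlebot", "http://example.com/index.html"))        # True
```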
Here are the spider names of some of the large engines:
Excite – ArchitextSpider
AltaVista – Scooter
Lycos – Lycos_Spider_(T-Rex)
Google – Googlebot
Alltheweb – FAST-WebCrawler
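Note that a crawler usually announces itself with a longer user-agent string, such as "Googlebot/2.1". Python's `urllib.robotparser`, like most implementations, matches the name in robots.txt case-insensitively against the part before the "/", so the short names listed above are enough; the filename below is a placeholder:

```python
from urllib import robotparser

robots_txt = """\
User-Agent: Googlebot
Disallow: /altavista-page.html
"""

rp = robotparser.RobotFileParser()
rp.parse(robots_txt.splitlines())

# The version suffix and letter case do not matter: both of these
# are matched against the "Googlebot" entry.
print(rp.can_fetch("Googlebot/2.1", "http://example.com/altavista-page.html"))  # False
print(rp.can_fetch("GOOGLEBOT", "http://example.com/altavista-page.html"))      # False
# Other spiders, e.g. Scooter (AltaVista), are still let in.
print(rp.can_fetch("Scooter", "http://example.com/altavista-page.html"))        # True
```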
Be sure to check over the file before uploading it, as you may have made a simple mistake, which could mean your pages get indexed by engines you don't want indexing them, or, even worse, none of your pages get indexed at all.
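One way to check the file before uploading it is to parse it locally and verify the behavior you expect, as in this sketch using Python's `urllib.robotparser` (the helper name, filenames, and expectations are all illustrative):

```python
from urllib import robotparser

def check_robots(robots_text, expectations):
    """Parse a candidate robots.txt and verify each (spider, url, allowed) triple.

    Returns the list of (spider, url) pairs that did NOT behave as expected;
    an empty list means the file does what you intended.
    """
    rp = robotparser.RobotFileParser()
    rp.parse(robots_text.splitlines())
    return [(spider, url) for spider, url, allowed in expectations
            if rp.can_fetch(spider, url) != allowed]

candidate = """\
User-Agent: Slurp
Disallow: /index-gg.html
"""

# Illustrative expectations: Slurp must be kept out, Googlebot let in.
failures = check_robots(candidate, [
    ("Slurp", "http://example.com/index-gg.html", False),
    ("Googlebot", "http://example.com/index-gg.html", True),
])
print(failures)  # an empty list if the file behaves as intended
```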