
Robots.txt Generator

Default - All Robots are:
Sitemap: (leave blank if you don't have one)
Search Robots: Google
  Google Image
  Google Mobile
  MSN Search
  Yahoo MM
  Yahoo Blogs
  DMOZ Checker
  MSN PicSearch
Restricted Directories: The path is relative to root and must contain a trailing slash "/"

Now create a 'robots.txt' file in your root directory, copy the generated text above, and paste it into that file.
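As an illustration, a generated file might look like the following (the sitemap URL and directory name are placeholders, not output from this tool):

```
User-agent: *
Disallow: /cgi-bin/

Sitemap: https://example.com/sitemap.xml
```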

About Robots.txt Generator

Robots.txt is a file placed in the root folder of your website to help search engines index your site more accurately. Robots.txt files use something called the Robots Exclusion Protocol. This website will easily generate the file for you from your inputs of the pages to be excluded.

The robots.txt file is a text file that contains instructions for search engine indexing robots (or crawlers), specifying which pages can or cannot be indexed. It is an ASCII file located at the root of the website, and its name must be written in lowercase and in the plural ("robots", not "robot"). The meta tag "robots" (in the header of pages) can also be used to forbid the indexing of a page.
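For reference, the "robots" meta tag mentioned above goes in the head of an HTML page; a common form used to block indexing of a single page is:

```
<meta name="robots" content="noindex">
```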

This file allows you to leave instructions for the indexing robots:

To indicate the location of the sitemap files
To forbid the indexing of your website to certain robots
To forbid the indexing of specific pages / directories
It can contain the following directives:

Sitemap: specifies the sitemap or sitemap index files.
User-agent: specifies the robot concerned by the directives that follow.
  For instance, Google's user-agent is Googlebot.
  *: means the directives apply to all indexing robots.
Disallow: denies access to certain pages / directories of your website.
  Must start with /
  / alone means the whole website
  Some search engines (Google and Bing) allow the use of the characters $ and *:
  *: represents any sequence of characters.
  $: matches the end of the URL.
Allow: the opposite of the Disallow directive. It specifies which pages / directories to include in indexing. By default, every page can be indexed.
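As a sketch of how the $ and * wildcards described above behave, the following standalone Python function (hypothetical, not part of any crawler or of this tool) translates a robots.txt path pattern into a regular expression and tests it against a URL path:

```python
import re

def pattern_matches(pattern: str, path: str) -> bool:
    """Check a URL path against a robots.txt path pattern.

    '*' matches any sequence of characters, a trailing '$'
    anchors the pattern to the end of the path; otherwise the
    pattern matches as a prefix, as in robots.txt.
    """
    anchored = pattern.endswith("$")
    if anchored:
        pattern = pattern[:-1]
    regex = ""
    for ch in pattern:
        regex += ".*" if ch == "*" else re.escape(ch)
    if anchored:
        regex += "$"
    # re.match anchors at the start, matching robots.txt prefix semantics.
    return re.match(regex, path) is not None

print(pattern_matches("/*.pdf$", "/docs/guide.pdf"))      # True
print(pattern_matches("/*.pdf$", "/docs/guide.pdf.bak"))  # False
print(pattern_matches("/private*", "/private/page.html")) # True
```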
Be careful: the robots.txt file is not treated in the same way by all search engines.

For certain robots the first matching directive takes the upper hand; for other robots, the most specific directive takes the upper hand.
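As an illustration of the first-match behaviour, Python's standard urllib.robotparser applies rules in the order they appear: with the hypothetical rules below, the earlier Disallow wins even though a more specific Allow follows (Google, by contrast, prefers the most specific matching rule):

```python
from urllib import robotparser

# A small, made-up robots.txt for demonstration purposes.
rules = """\
User-agent: *
Disallow: /private/
Allow: /private/public-page.html
""".splitlines()

rp = robotparser.RobotFileParser()
rp.parse(rules)

# The first matching rule wins for this parser: Disallow applies.
print(rp.can_fetch("*", "/private/public-page.html"))  # False
# Paths matched by no rule are allowed by default.
print(rp.can_fetch("*", "/about.html"))                # True
```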