Frequently Asked Questions

If you have pages you don’t want crawled, add a robots.txt file. You want to ensure that Google and other search engines can find your most important pages, such as your homepage, contact page, and top product or service pages. If you have many pages with thin or duplicate content, you don’t want to risk those pages being crawled, indexed, and ranked instead of the pages that matter. A robots.txt file helps you avoid index bloat and keeps crawlers from overloading your website with unnecessary requests.
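For illustration, here is a minimal sketch of that idea; the paths are hypothetical and would depend on your own site. It blocks all bots from thin tag-archive and internal search pages while leaving everything else crawlable:

User-agent: *
Disallow: /tag/
Disallow: /search/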

It is incredibly useful for large websites to have a robots.txt file in place to ensure that search engine bots are crawling and indexing the right pages. However, smaller websites or those just starting out shouldn’t necessarily worry about setting this up immediately. Instead, focus on creating quality content that delivers a great user experience. As your website grows, you can then start to think about creating a robots.txt file.

No. Each website or domain can have only one robots.txt file, placed at the root of that host. Even if your website is available in different languages under the same domain, you still have just one robots.txt file. If you run multiple websites or subdomains, each one gets its own robots.txt file, but never more than one per host.
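For example, assuming a main domain with two subdomains (placeholder names), each host serves exactly one file at its own root:

https://example.com/robots.txt
https://blog.example.com/robots.txt
https://shop.example.com/robots.txt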

You will find your robots.txt file in your WordPress root directory. To access it, you can use an SEO or robots.txt plugin. Alternatively, access it via the cPanel provided by your host or by using FTP (File Transfer Protocol). You can also enter your domain in the browser and add /robots.txt to the end (for example, https://example.com/robots.txt).

Using the same methods as for WordPress, the quickest way is to add /robots.txt to the end of your domain in a browser. If nothing displays, you likely don’t have one in place. However, the most accurate way to see what your robots.txt contains, and whether you have one at all, is to use a checker like ours. It doesn’t take long to use and gives you completely accurate information.
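If you prefer to check programmatically, here is a minimal sketch in Python; the domain is a placeholder you would replace with your own. It simply requests /robots.txt and prints whatever is served:

from urllib.request import urlopen
from urllib.error import HTTPError, URLError

url = "https://example.com/robots.txt"  # placeholder: replace with your own domain
try:
    with urlopen(url, timeout=10) as response:
        # A successful response means a robots.txt file is being served; print its contents
        print(response.read().decode("utf-8", errors="replace"))
except HTTPError as err:
    print(f"No robots.txt served (HTTP {err.code}).")
except URLError as err:
    print(f"Could not reach the site: {err.reason}")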

A user-agent is the specific bot you want your crawling instructions to apply to. For instance, you might enter Googlebot, Bingbot, Slurp Bot (for Yahoo!), Apple Bot, DuckDuck Bot, GoogleOther, or one of the many others. You can use the wildcard * to write one block of rules that applies to all search engine bots, or you can add separate blocks for specific search engines.
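As an illustrative sketch (the paths are placeholders), a wildcard block applies to any bot that doesn’t have its own block, while a named block gives one specific crawler its own rules:

User-agent: *
Disallow: /private/

User-agent: Googlebot
Disallow: /drafts/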

You need to create a file named robots.txt and then add rules to it. Do this by opening any plain-text editor, such as Notepad or TextEdit. Do not use a word processor. As an example, you might enter the following if you want to target Googlebot and disallow it from crawling your entire site:

User-agent: Googlebot
Disallow: /

This is the most basic robots.txt file example, but if you are ready, you can now upload it to your site’s root directory and test it.
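One way to test the rules once the file is live is Python’s built-in robotparser module. This is a minimal sketch, and the domain and page paths are placeholders:

from urllib.robotparser import RobotFileParser

# Placeholder domain: point this at your own live robots.txt
parser = RobotFileParser("https://example.com/robots.txt")
parser.read()

# With "User-agent: Googlebot" and "Disallow: /" this should print False
print(parser.can_fetch("Googlebot", "https://example.com/any-page/"))
# Other bots have no rules in that file, so this should print True
print(parser.can_fetch("Bingbot", "https://example.com/any-page/"))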

Although robots.txt files can be simple to create, getting yours right is critical, as any mistakes could harm your website. The file can contain one group of rules or several, but you must enter only one directive per line. Always begin each group with a User-agent line, then state what that bot can and cannot access. Examples of directives include Disallow, Allow, and Crawl-delay.
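For example, one hypothetical group combining these directives might look like the sketch below; note that Crawl-delay is respected by some bots, such as Bingbot, but ignored by Googlebot:

User-agent: *
Disallow: /admin/
Allow: /admin/public/
Crawl-delay: 10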