
Components of a Robots.txt File – User-Agent, Disallow, Allow & Sitemap

Last Updated : 09 Nov, 2023

The robots.txt file acts as a gatekeeper for SEO: before a well-behaved bot crawls your website, it first visits the robots.txt file to learn which pages it is allowed to crawl and which it is not.

A robots.txt file tells crawlers such as Googlebot which URLs they can access on your website.

Example of a Robots.txt File

You can view our robots.txt file by visiting the /robots.txt path at the site root:

User-agent: *
Disallow: /wp-admin/
Disallow: /community/
Disallow: /wp-content/plugins/
Disallow: /content-override.php
User-agent: ChatGPT-User
Disallow: /

Components of a Robots.txt File

Now let's explain the above code:

  • User-agent specifies which bot the following rules apply to.
    • * means the rules apply to all bots.
  • Disallow tells the bot not to crawl any URL whose path starts with the given value.

For example:

Disallow: /wp-admin/ blocks every URL whose path begins with /wp-admin/, so a URL such as /wp-admin/options.php is also not allowed to be crawled. A URL outside that prefix, such as /blog/, is still allowed to be crawled.
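This prefix matching can be checked programmatically with Python's standard urllib.robotparser module. The sketch below feeds it the rules from the example file above; example.com is only a placeholder domain:

```python
from urllib import robotparser

# Parse the rules from the example robots.txt shown above.
rp = robotparser.RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /wp-admin/",
    "Disallow: /community/",
    "",
    "User-agent: ChatGPT-User",
    "Disallow: /",
])

# Any URL under the /wp-admin/ prefix is blocked for ordinary crawlers...
print(rp.can_fetch("Googlebot", "https://example.com/wp-admin/options.php"))  # False
# ...while URLs outside the disallowed prefixes remain crawlable.
print(rp.can_fetch("Googlebot", "https://example.com/blog/"))                 # True
# ChatGPT-User is blocked everywhere by its own Disallow: / rule.
print(rp.can_fetch("ChatGPT-User", "https://example.com/blog/"))              # False
```

Real crawlers may interpret edge cases slightly differently, but for simple prefix rules like these the standard-library parser matches the behavior described above.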

  • User-agent: ChatGPT-User
    • combined with Disallow: /, this blocks the ChatGPT bot from crawling the whole website.

User-agent: *
Disallow: /

The above code blocks all web crawlers from visiting any page of the website.
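The block-everything effect can be verified the same way with urllib.robotparser; the URLs and bot names below are placeholders:

```python
from urllib import robotparser

# Parse the two-line "block everything" file shown above.
rp = robotparser.RobotFileParser()
rp.parse(["User-agent: *", "Disallow: /"])

# Every path is now off-limits to every crawler.
print(rp.can_fetch("Googlebot", "https://example.com/"))         # False
print(rp.can_fetch("Bingbot", "https://example.com/blog/post"))  # False
```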

Note: If you want a URL deindexed from Google Search quickly, you can submit a removal request through the Removals tool in your Google Search Console (GSC) account.
