Directives

Directives take the form of HTTP status codes, a robots.txt file, or robots tags. They tell search engine bots (also referred to as crawlers or spiders) which pages of a website they are allowed to crawl and which they may add to a search engine’s index. The most commonly used directives a website can leverage for search engine optimization are listed below.

HTTP Status Codes

When a request is made to a server for a page on a website, the server responds with an HTTP status code. This code indicates the outcome of the request, such as 200 OK or 404 Not Found.
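To see a status code in practice, here is a minimal sketch in TypeScript, assuming a runtime with the global fetch API (Node.js 18+ or a browser) and using https://example.com/ purely as a placeholder URL. It requests a page and logs the code the server returns.

```typescript
// Request a URL and report the HTTP status code the server responds with.
// Assumes a runtime that provides the global fetch API (Node.js 18+ or a browser).
async function checkStatus(url: string): Promise<number> {
  const response = await fetch(url);

  // response.status holds the numeric code, e.g. 200, 301, 404, 500.
  console.log(`${url} -> ${response.status} ${response.statusText}`);
  return response.status;
}

// Placeholder URL for illustration only.
checkStatus("https://example.com/").catch(console.error);
```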

Learn more about HTTP status codes

Robots.txt

A robots.txt file tells search engine crawlers which parts of your site they may or may not request, and it can be an integral part of your SEO strategy.
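As an illustration, here is a minimal sketch of what a robots.txt file can look like; the /admin/ path and the sitemap URL are placeholders, not recommendations for any particular site.

```
# Apply the following rules to all crawlers.
User-agent: *

# Keep crawlers out of a hypothetical /admin/ area; allow everything else.
Disallow: /admin/
Allow: /

# Optionally point crawlers at the sitemap (placeholder URL).
Sitemap: https://example.com/sitemap.xml
```

The file is served from the root of the site (for example, https://example.com/robots.txt), where crawlers look for it before requesting other pages.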

Learn more about the robots.txt file

Robots Tags

Robots tags are directives that allow you to control how search engines crawl and index the content of your website. They are page-level signals and can be implemented in a meta tag or in an X-Robots-Tag HTTP response header.
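For example, a page can be kept out of the index while its links are still followed. The snippet below is a minimal sketch of the meta tag form; the directive values are illustrative.

```html
<!-- Placed in the page's <head>: do not index this page, but do follow its links. -->
<meta name="robots" content="noindex, follow" />
```

The same directives can be sent for non-HTML resources (such as PDFs or images) through the HTTP response header form, for example X-Robots-Tag: noindex, follow.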

Learn more about robots tags