
What are Meta Robots - Definition

Meta robots is the name of the HTML tag responsible for providing information to search engine crawlers about whether a page should be indexed and what content may be displayed in search results. The meta robots tag is placed in the <head> section of the HTML document. Correct use of robots meta tags is essential for optimizing your website: the right robots directives, together with the title and description, let you influence how your web page appears in search results.


What is the meta robots tag used for?

Meta robots tags are mainly used to block the indexing of low-value pages, duplicates, login pages, or other pages that are not useful from an SEO perspective. This practice is helpful for pages such as terms and conditions or shopping carts, which can undercut rankings due to repetitive content. Different meta robots tag values allow you to leave instructions for specific robots, telling search engines the desired indexing behavior.
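A minimal sketch of this practice, assuming a hypothetical login page that should stay out of search results, but whose links crawlers may still follow:

```html
<!DOCTYPE html>
<html>
<head>
  <!-- Hypothetical login page: block indexing, still follow links -->
  <meta name="robots" content="noindex, follow" />
  <title>Log in</title>
</head>
<body>
  <!-- login form would go here -->
</body>
</html>
```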

How is the meta robots tag constructed?

The default value of the meta robots tag is "all," which allows robots to index the sub-page and follow all the links it contains. The absence of a meta robots tag in the page code is equivalent to the value "all."
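Because "all" is the default, the explicit tag below changes nothing; the two cases are equivalent for crawlers:

```html
<!-- Writing the default explicitly… -->
<meta name="robots" content="all" />
<!-- …behaves the same as omitting the robots meta tag entirely. -->
```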

An example tag structure looks like this:

<meta name="robots" content="noindex, nofollow" />

The above example contains the "noindex" directive and tells crawlers not to index the page and not to follow its links. Individual values can be combined, e.g., "index, nofollow." Entering "index, follow" in the meta robots tag is equivalent to the value "all," and combining "noindex, nofollow" works identically to "none." Remember that the robots meta tag is unnecessary if a page should simply be indexed and its links followed.
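To leave instructions for one specific crawler rather than all of them, the generic name "robots" can be replaced with that crawler's name, e.g. "googlebot" for Google's crawler (a sketch of the convention Google documents; the combination shown is illustrative):

```html
<!-- All crawlers may index the page and follow its links… -->
<meta name="robots" content="index, follow" />
<!-- …but Google's crawler is additionally asked not to cache a copy -->
<meta name="googlebot" content="noarchive" />
```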

Examples of robots directives governing the indexing and display of web pages in search results

  • index - the page may be indexed. A default directive that does not have to be specified.
  • noindex - the page may not be indexed.
  • follow - follow the links on the page. The default behavior of robots on the page, so the directive does not have to be specified.
  • nofollow - do not follow links on the page.
  • noimageindex - do not index images on the page.
  • all - a combination of the index and follow directives. The default, so it does not have to be specified.
  • none - a combination of the noindex and nofollow directives.
  • noarchive - do not cache a copy of the page.
  • nosnippet - do not display snippets (e.g., the meta description) in search results.
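The directives above can be freely combined in one comma-separated content attribute. For example, a page that may be indexed but whose cached copy and snippet should both be suppressed (a hypothetical combination for illustration):

```html
<meta name="robots" content="index, noarchive, nosnippet" />
```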