Everything You Need To Know About The X-Robots-Tag HTTP Header


SEO, in its most basic sense, relies upon one thing above all others: search engine spiders crawling and indexing your site.

But nearly every website has pages that you don't want to include in this exploration.

For example, do you really want your privacy policy or internal search pages showing up in Google results?

In a best-case scenario, these pages aren't actively driving traffic to your site, and in a worst-case, they could be diverting traffic from more important pages.

Luckily, Google allows webmasters to tell search engine bots what pages and content to crawl and what to ignore. There are several ways to do this, the most common being the use of a robots.txt file or the meta robots tag.

We have an excellent and detailed explanation of the ins and outs of robots.txt, which you should definitely read.

But in high-level terms, it's a plain text file that lives in your website's root and follows the Robots Exclusion Protocol (REP).

Robots.txt provides crawlers with instructions about the site as a whole, while meta robots tags include directions for specific pages.

Some meta robots tags you might use include:

  • Index, which tells search engines to add the page to their index.
  • Noindex, which tells them not to add a page to the index or include it in search results.
  • Follow, which instructs a search engine to follow the links on a page.
  • Nofollow, which tells it not to follow links.

And there's a whole host of others.
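As a quick illustration, a page-level meta robots tag combining two of these directives sits in a page's <head>; the directive values here are just an example:

```html
<!-- Don't index this page, but do follow the links on it -->
<meta name="robots" content="noindex, follow">
```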

Both robots.txt and meta robots tags are useful tools to keep in your toolbox, but there's also another way to instruct search engine bots to noindex or nofollow: the X-Robots-Tag.

What Is The X-Robots-Tag?

The X-Robots-Tag is another way for you to control how your webpages are crawled and indexed by spiders. Sent as part of the HTTP header response for a URL, it controls indexing for an entire page, as well as for specific elements on that page.

And whereas using meta robots tags is fairly straightforward, the X-Robots-Tag is a bit more complicated.

But this, of course, raises the question:

When Should You Use The X-Robots-Tag?

According to Google, "Any directive that can be used in a robots meta tag can also be specified as an X-Robots-Tag."

While you can apply the same directives with either a meta robots tag or an X-Robots-Tag in an HTTP response header, there are certain situations where you would want to use the X-Robots-Tag, the two most common being when:

  • You want to manage how your non-HTML files are being crawled and indexed.
  • You want to serve instructions site-wide rather of on a page level.

For example, if you want to block a specific image or video from being crawled, the HTTP response method makes this easy.

The X-Robots-Tag header is also useful because it allows you to combine multiple tags within an HTTP response, or use a comma-separated list of directives to specify instructions.

Maybe you don't want a certain page to be cached and want it to be unavailable after a certain date. You can use a combination of "noarchive" and "unavailable_after" tags to instruct search engine bots to follow these instructions.
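As a rough sketch, an Apache configuration applying both of those directives to a single hypothetical file might look like this (the filename and date are purely illustrative):

```apache
# Don't cache this file, and drop it from results after the given date
<Files "whitepaper.pdf">
  Header set X-Robots-Tag "noarchive, unavailable_after: 25 Jun 2023 15:00:00 PST"
</Files>
```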

Essentially, the power of the X-Robots-Tag is that it is much more flexible than the meta robots tag.

The advantage of using an X-Robots-Tag with HTTP responses is that it allows you to use regular expressions to apply crawl directives to non-HTML files, as well as apply parameters on a larger, global level.

To help you understand the difference between these directives, it's helpful to categorize them by type. That is, are they crawler directives or indexer directives?

Here’s a helpful cheat sheet to explain:

Crawler directives:

  • Robots.txt – uses the user-agent, allow, disallow, and sitemap directives to specify where on-site search engine bots are allowed to crawl and not allowed to crawl.

Indexer directives:

  • Meta robots tag – allows you to specify and prevent search engines from showing particular pages on a site in search results.
  • Nofollow – allows you to specify links that should not pass on authority or PageRank.
  • X-Robots-Tag – allows you to control how specified file types are indexed.

Where Do You Put The X-Robots-Tag?

Let's say you want to block specific file types. An ideal approach would be to add the X-Robots-Tag to an Apache configuration or a .htaccess file.

The X-Robots-Tag can be added to a site's HTTP responses in an Apache server configuration via the .htaccess file.

Real-World Examples And Uses Of The X-Robots-Tag

So that sounds great in theory, but what does it look like in the real world? Let's take a look.

Let's say we wanted search engines not to index .pdf file types. This configuration on Apache servers would look something like the below:

<Files ~ "\.pdf$">
Header set X-Robots-Tag "noindex, nofollow"
</Files>

In Nginx, it would look like the below:

location ~* \.pdf$ {
  add_header X-Robots-Tag "noindex, nofollow";
}

Now, let's look at a different scenario. Let's say we want to use the X-Robots-Tag to block image files, such as .jpg, .gif, .png, etc., from being indexed. You could do this with an X-Robots-Tag that would look like the below:

<Files ~ "\.(png|jpe?g|gif)$">
Header set X-Robots-Tag "noindex"
</Files>
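The equivalent in Nginx would be along these lines (matching the same image extensions case-insensitively):

```nginx
# Send the noindex directive with every image response
location ~* \.(png|jpe?g|gif)$ {
  add_header X-Robots-Tag "noindex";
}
```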

Please note that understanding how these directives work, and the impact they have on one another, is crucial.

For example, what happens if both an X-Robots-Tag and a meta robots tag are present when crawler bots discover a URL?

If that URL is blocked by robots.txt, then certain indexing and serving directives cannot be discovered and will not be followed.

If directives are to be followed, then the URLs containing them cannot be disallowed from crawling.

Checking For An X-Robots-Tag

There are a few different methods you can use to check for an X-Robots-Tag on a site.

The easiest way to check is to install a browser extension that will tell you X-Robots-Tag information about the URL.

Screenshot of Robots Exclusion Checker, December 2022

Another plugin you can use to determine whether an X-Robots-Tag is being used, for example, is the Web Developer plugin.

By clicking on the plugin in your browser and navigating to "View Response Headers," you can see the various HTTP headers being used.
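Since the X-Robots-Tag is just an HTTP response header, you can also check for it from the command line with curl; the URL below is a placeholder:

```
# Fetch only the response headers and filter for the X-Robots-Tag
curl -sI "https://example.com/document.pdf" | grep -i "x-robots-tag"
```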

Another method that can be used to scale, in order to pinpoint issues on sites with millions of pages, is Screaming Frog.

After running a site through Screaming Frog, you can navigate to the "X-Robots-Tag" column.

This will show you which sections of the site are using the tag, along with which specific directives.

Screenshot of Screaming Frog Report. X-Robots-Tag, December 2022

Using X-Robots-Tags On Your Site

Understanding and controlling how search engines interact with your website is the cornerstone of search engine optimization. And the X-Robots-Tag is a powerful tool you can use to do just that.

Just be aware: It's not without its risks. It is very easy to make a mistake and deindex your entire site.

That said, if you're reading this piece, you're probably not an SEO beginner. So long as you use it wisely, take your time, and check your work, you'll find the X-Robots-Tag to be a useful addition to your arsenal.

More Resources:

Featured Image: Song_about_summer/