SEO, in its most basic sense, relies upon one thing above all others: search engine spiders crawling and indexing your site.
But almost every website has pages you don't want included in this exploration.
In a best-case scenario, these pages do nothing to actively drive traffic to your site, and in a worst-case, they could be diverting traffic away from more important pages.
Luckily, Google allows webmasters to tell search engine bots which pages and content to crawl and which to ignore. There are several ways to do this, the most common being a robots.txt file or the meta robots tag.
We have an excellent and in-depth explanation of the ins and outs of robots.txt, which you should absolutely read.
But in high-level terms, it's a plain text file that lives in your site's root and follows the Robots Exclusion Protocol (REP).
Robots.txt provides crawlers with instructions about the site as a whole, while meta robots tags contain directions for specific pages.
Some meta robots tags you might use include index, which tells search engines to add the page to their index; noindex, which tells them not to add a page to the index or include it in search results; follow, which instructs a search engine to follow the links on a page; nofollow, which tells it not to follow links; and a whole host of others.
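For instance, a page that should be kept out of the index and whose links should not be followed could combine two of these directives in a single tag in its `<head>`:

```html
<!-- Page-level directives: keep this page out of the index
     and don't follow any of its links. -->
<meta name="robots" content="noindex, nofollow">
```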
Both robots.txt and meta robots tags are useful tools to keep in your toolbox, but there's also another way to instruct search engine bots to noindex or nofollow: the X-Robots-Tag.
What Is The X-Robots-Tag?
The X-Robots-Tag is another way for you to control how your webpages are crawled and indexed by spiders. Sent as part of the HTTP header response for a URL, it controls indexing for an entire page, as well as for specific elements on that page.
And whereas using meta robots tags is fairly simple, the X-Robots-Tag is a bit more complicated.
But this, of course, raises the question:
When Should You Use The X-Robots-Tag?
According to Google, "Any directive that can be used in a robots meta tag can also be specified as an X-Robots-Tag."
While you can set the same directives in an HTTP response header with either the meta robots tag or the X-Robots-Tag, there are certain scenarios where you would want to use the X-Robots-Tag – the two most common being when:
- You want to control how your non-HTML files are crawled and indexed.
- You want to serve directives site-wide instead of at a page level.
For example, if you want to block a specific image or video from being crawled, the HTTP response method makes this easy.
The X-Robots-Tag header is also useful because it allows you to combine multiple tags within an HTTP response, or to use a comma-separated list of directives.
Maybe you don't want a certain page to be cached and also want it to be unavailable after a certain date. You can use a combination of the "noarchive" and "unavailable_after" tags to instruct search engine bots to follow these directives.
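In an HTTP response, that combination might look like the following (the URL and date here are purely illustrative):

```http
HTTP/1.1 200 OK
Content-Type: text/html
X-Robots-Tag: noarchive, unavailable_after: 25 Jun 2025 15:00:00 PST
```

After the given date, search engines that honor `unavailable_after` will stop showing the page in results, and `noarchive` prevents a cached copy from being served in the meantime.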
Essentially, the power of the X-Robots-Tag is that it is much more flexible than the meta robots tag.
The advantage of using the X-Robots-Tag with HTTP responses is that it allows you to use regular expressions to apply crawl directives to non-HTML files, as well as to apply directives on a larger, global level.
To help you understand the difference between these directives, it's useful to categorize them by type. That is, are they crawler directives or indexer directives?
Here's a handy cheat sheet to explain:
Crawler directives:

- Robots.txt – uses the user-agent, allow, disallow, and sitemap directives to specify where on-site search engine bots are allowed to crawl and not allowed to crawl.

Indexer directives:

- Meta robots tag – allows you to specify and prevent search engines from showing particular pages of a site in search results.
- Nofollow – allows you to specify links that should not pass on authority or PageRank.
- X-Robots-Tag – allows you to control how specified file types are indexed.
Where Do You Put The X-Robots-Tag?
Let's say you want to block specific file types. An ideal approach would be to add the X-Robots-Tag at the server level: on Apache, the header can be added to a site's HTTP responses in the server configuration or via the .htaccess file.
Real-World Examples And Uses Of The X-Robots-Tag
So that sounds great in theory, but what does it look like in the real world? Let's take a look.
Let's say we wanted search engines not to index .pdf file types. This configuration on Apache servers would look something like the below:
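A minimal sketch of that Apache configuration (it assumes mod_headers is enabled):

```apache
<Files ~ "\.pdf$">
  # Keep PDFs out of the index and don't follow their links.
  Header set X-Robots-Tag "noindex, nofollow"
</Files>
```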
In Nginx, it would look like the below:
```nginx
location ~* \.pdf$ {
  add_header X-Robots-Tag "noindex, nofollow";
}
```
Now, let's look at a different scenario. Let's say we want to use the X-Robots-Tag to block image files, such as .jpg, .gif, .png, etc., from being indexed. You could do this with an X-Robots-Tag that would look like the below:
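On Apache, a sketch of that configuration might be (again assuming mod_headers):

```apache
<Files ~ "\.(png|jpe?g|gif)$">
  # Keep images in these formats out of the index.
  Header set X-Robots-Tag "noindex"
</Files>
```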
Please note that understanding how these directives work, and the impact they have on one another, is crucial.
For example, what happens if both the X-Robots-Tag and a meta robots tag are present when crawler bots discover a URL?
If that URL is blocked from crawling via robots.txt, then any indexing and serving directives cannot be discovered and will not be followed.
So, if directives are to be followed, the URLs containing them cannot be disallowed from crawling.
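A hypothetical illustration of the trap (the directory name is made up): if robots.txt disallows a path, any noindex header served there will never be fetched.

```
# robots.txt – this blocks crawling of the directory, so an
# "X-Robots-Tag: noindex" on URLs under /private-pdfs/ is never
# seen, and those URLs can still appear in results as bare links.
User-agent: *
Disallow: /private-pdfs/
```

To let the noindex take effect, the URLs would need to remain crawlable.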
How To Check For An X-Robots-Tag
There are a few different methods you can use to check for an X-Robots-Tag on a site.
The easiest way to check is to install a browser extension that will show you X-Robots-Tag information for a URL.
Screenshot of Robots Exclusion Checker, December 2022
Another plugin you can use to determine whether an X-Robots-Tag is being used is the Web Developer plugin.
By clicking on the plugin in your browser and navigating to "View Response Headers," you can see the various HTTP headers being used.
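You can also inspect response headers without any plugin at all. As a rough sketch (the raw response below is fabricated for illustration), here's how you might pull X-Robots-Tag directives out of an HTTP response in Python:

```python
# Extract X-Robots-Tag directives from a raw HTTP response.
# The response text here is a fabricated example for illustration.
from email.parser import Parser

raw_response = (
    "HTTP/1.1 200 OK\r\n"
    "Content-Type: application/pdf\r\n"
    "X-Robots-Tag: noindex, nofollow\r\n"
    "\r\n"
)

def x_robots_directives(raw: str) -> list:
    # Drop the status line, then parse the header block.
    header_block = raw.split("\r\n", 1)[1]
    headers = Parser().parsestr(header_block)
    # There may be multiple X-Robots-Tag headers, each holding a
    # comma-separated list of directives; flatten them all.
    values = headers.get_all("X-Robots-Tag") or []
    return [d.strip() for v in values for d in v.split(",")]

print(x_robots_directives(raw_response))  # ['noindex', 'nofollow']
```

In practice you would fetch the headers for a live URL (for example with `curl -I`) and read the X-Robots-Tag line directly.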
Another method, which scales to identifying issues on sites with millions of pages, is Screaming Frog. After running a site through Screaming Frog, you can navigate to the "X-Robots-Tag" column.
This will show you which sections of the site are using the tag, along with which specific directives.
Screenshot of Screaming Frog Report. X-Robots-Tag, December 2022

Using X-Robots-Tags On Your Site

Understanding and controlling how search engines interact with your website is the cornerstone of search engine optimization. And the X-Robots-Tag is a powerful tool you can use to do just that.

Just be aware: It's not without its risks. It is very easy to make a mistake and deindex your entire site.

That said, if you're reading this piece, you're probably not an SEO beginner. So long as you use it wisely, take your time, and check your work, you'll find the X-Robots-Tag to be a useful addition to your arsenal.

More Resources:

Featured Image: Song_about_summer/