Google’s Not Supporting noindex in robots.txt – What Does that Even Mean?
Two days before Independence Day, Google announced its own sort of independence—freedom from supporting unsupported and/or unpublished robots.txt rules. One such unsupported robots.txt rule is “noindex.” Some SEOs may be scrambling to execute the same command through different (Google-supported) channels. Some SEOs are unaffected because they never implemented “noindex.” Most everyone else is just left wondering, “What does that mean?”
Here’s the quick and easy guide to Google’s declaration of no support for “noindex” and what it means for your website.
What Is robots.txt?
To understand the significance of Google’s July 2, 2019, announcement, you first need to understand what robots.txt is: a plain-text file, placed at the root of your site, that gives search engine crawlers (including Googlebot) instructions about how to move through your site.
Sounds kind of counterintuitive, right? No one outside Google knows exactly which ranking factors its search bots use, how much weight each factor carries, or how long the bots take to detect and index new or updated content. Yet SEO professionals can, in part, use robots.txt to control which pages the search bot “reads” and indexes, and which links it will follow.
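For illustration, here’s a minimal robots.txt sketch; the domain and paths are placeholders, not taken from any real site:

```
# Served from the site root, e.g. https://www.example.com/robots.txt

User-agent: *            # rules for all crawlers
Disallow: /admin/        # don't crawl anything under /admin/

User-agent: Googlebot    # rules just for Google's crawler
Disallow: /drafts/

Sitemap: https://www.example.com/sitemap.xml
```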
What Is noindex in robots.txt?
One command SEO professionals have been able to give Google’s search bots in robots.txt is “noindex.” It means what it says: it tells the search bot not to add the page to Google’s index, so nothing on the page will ever rank or show up on a search engine results page (SERP). Unlike “Disallow,” which blocks crawling altogether, “noindex” let Googlebot crawl the page but kept it out of search results. Strictly speaking, “noindex” in robots.txt was never an officially documented rule; Google simply honored it in practice, which is exactly why it falls under the “unsupported and/or unpublished” rules Google is now retiring.
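As a sketch of how the unofficial rule was commonly written (the paths here are placeholders):

```
User-agent: Googlebot
Noindex: /thank-you/       # unofficial rule; ignored by Google as of September 1, 2019
Noindex: /internal-login/
```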
Why Would You “noindex” A Page?
Again, it might seem counterintuitive that SEO professionals would ever tell search bots not to index a page. After all, the point of SEO is to get better ranking and visibility for web pages and websites.
“noindex” can actually improve a website’s ranking potential by removing pages that do not add value to a user’s search experience from consideration.
Perhaps you’re thinking, “Why not just remove the whole page?” Often the page is still necessary for an individual user’s experience and/or for internal purposes. So, pages that SEOs often “noindex” include:
- “Thank you” pages—these pages simply thank a user for some kind of submission. It’s good manners on the part of the company website, but it doesn’t do anything for ranking potential.
- Administrative and login pages intended for internal users—these pages are the gateway to internal content; only internal users need to know where they are and how to locate them, so they shouldn’t end up on “public” SERPs.
SEO pros may also “noindex” archives and auto-generated pages or posts that a client doesn’t really use (this often happens with ecommerce product page templates).
What Does This Mean for Your Website?
Google’s announcement that as of September 1, 2019, it will no longer support “noindex” in robots.txt does not mean SEO professionals have less control over search engine bots. They can still direct search bots through a site, exclude a page from the index, or tell them not to follow a link, same as always, just through officially supported means such as the robots meta tag or the X-Robots-Tag HTTP response header.
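For example, a supported way to keep an HTML page out of the index is a robots meta tag in the page’s markup:

```
<!-- In the <head> of any page you want kept out of Google's index -->
<meta name="robots" content="noindex">
```

For non-HTML files such as PDFs, the equivalent is an X-Robots-Tag HTTP header. The snippet below is a sketch for an Apache server, assuming mod_headers is enabled; the filename is a placeholder:

```
# Apache example: send the same noindex instruction as an HTTP header
<Files "thank-you.html">
  Header set X-Robots-Tag "noindex"
</Files>
```

Either way, the page must remain crawlable: if robots.txt blocks the URL outright, Googlebot never sees the noindex instruction.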
SonicSEO.com can—and will—continue to direct search bots to high-quality pages and steer them away from internal pages and pages that don’t add value to public searchers’ experiences.
So, while Google announcements usually come with some sort of “brace yourself” warning, you’re just not likely to feel an impact from Google’s declaration of independence from unsupported robots.txt rules.