Despite Google’s Need To Go Deeper When Indexing, Google Fixes Self Indexing Glitch


Over the past week, there were various reports of Google going against its own webmaster guidelines by indexing its own search results. Last night, Google updated its robots.txt file to block its own search results pages from appearing in Google search results.

The guidelines read:

Use robots.txt to prevent crawling of search results pages or other auto-generated pages that don’t add much value for users coming from search engines.

Reports came first from Chris Dyson, and I covered the issue yesterday; Google’s Gary Illyes commented on my Google+ post, saying, “we’re going to look into what happened here.”

The story then made it onto Hacker News, and we asked Google for a comment. Google responded with characteristic Google humor, “Indexing the index? We must go deeper!” Adding, “it’s a glitch with multiple slashes in web addresses that we’re working to fix now.”

Indeed, at around 6 p.m. EDT last night, Google updated its robots.txt file at google.com/robots.txt to prevent this from happening.
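The multiple-slash explanation is plausible given how simple prefix-based robots.txt matchers work. As a rough illustration (the rules below are a simplified stand-in for a search-blocking directive, not Google’s actual robots.txt file), Python’s `urllib.robotparser` shows how a doubled slash can slip past a `Disallow: /search` rule:

```python
from urllib import robotparser

# Simplified stand-in for a robots.txt that blocks search result pages,
# in the spirit of Google's webmaster guideline. NOT the real file at
# google.com/robots.txt.
RULES = """\
User-agent: *
Disallow: /search
"""

rp = robotparser.RobotFileParser()
rp.parse(RULES.splitlines())

# A normal results URL matches the "/search" prefix and is blocked.
print(rp.can_fetch("*", "https://www.google.com/search?q=test"))

# A URL with a doubled slash has the path "//search", which a naive
# prefix match against "/search" no longer catches -- the same class
# of bug as the multiple-slash glitch Google described.
print(rp.can_fetch("*", "https://www.google.com//search?q=test"))
```

This is only a sketch of the general failure mode; how Google’s own crawler normalized those URLs, and what exactly changed in the fix, wasn’t disclosed.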

Here are before and after shots of the problem:

[Screenshot: before the robots.txt update]

[Screenshot: after the robots.txt update]

It is uncommon to find search results from other search engines, especially from Google itself, within Google’s own search results. At the very least, Google doesn’t want to offer that as a search experience to its users.

The post Despite Google’s Need To Go Deeper When Indexing, Google Fixes Self Indexing Glitch appeared first on Search Engine Land.