Over the past week, there were various reports of Google going against its own webmaster guidelines by indexing its own search results. Last night, Google updated its robots.txt file to block its own search results from appearing in Google search results.
The guidelines read:
Use robots.txt to prevent crawling of search results pages or other auto-generated pages that don’t add much value for users coming from search engines.
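In practice, that guideline is followed with a simple disallow rule. A minimal sketch of what such a robots.txt entry looks like (the exact paths are illustrative, not a copy of Google's actual file):

```
# Block all crawlers from internal search results pages
User-agent: *
Disallow: /search
```

Pages under the disallowed path are then excluded from crawling by compliant bots, which is exactly what the fix described below restores.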
The issue then made its way onto Hacker News, and we asked Google for a comment. Google responded with its typical humor: "Indexing the index? We must go deeper!" The company added, "it's a glitch with multiple slashes in web addresses that we're working to fix now."
Here are before and after shots of the problem:
It is uncommon to find search results from other search engines, let alone from Google itself, within the Google search results. At the least, Google doesn't want to offer that as a search experience to its users.
The post Despite Google’s Need To Go Deeper When Indexing, Google Fixes Self Indexing Glitch appeared first on Search Engine Land.