Technical SEO is an awesome field. It is full of little nuances that make it exciting, and its practitioners need excellent problem-solving and critical-thinking skills.
In this article, I cover some fun technical SEO facts. While they might not impress your date at a dinner party, they will beef up your technical SEO knowledge — and they could help your website rank better in search results.
Let’s dive into the list.
Most think of slow load times as a nuisance for users, but their consequences go further than that. Page speed has long been a search ranking factor, and Google has even said that it may soon use mobile page speed as a factor in mobile search rankings. (Of course, your audience will appreciate faster page load times, too.)
Many have used Google’s PageSpeed Insights tool to get an analysis of their site speed and recommendations for improvement. For those looking to improve mobile site performance specifically, Google has released a new, mobile-focused page speed tool. This tool will check the page load time, test your mobile site on a 3G connection, evaluate mobile usability and more.
The file must be named in all lower case (robots.txt) in order to be recognized. Additionally, crawlers only look in one place when they search for a robots.txt file: the site’s main directory. If they don’t find it there, oftentimes they’ll simply continue to crawl, assuming there is no such file.
And if crawlers can’t access a page, that page may not rank.
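To see the "one place" rule in action, here is a minimal sketch using Python's standard library: it derives the robots.txt location from any page URL and parses a set of hypothetical Disallow rules offline. The domain and rules are made up for illustration.

```python
from urllib.parse import urlsplit
from urllib.robotparser import RobotFileParser

# Crawlers look in exactly one place: the root of the host.
page_url = "https://example.com/blog/some-post"  # hypothetical URL
parts = urlsplit(page_url)
robots_url = f"{parts.scheme}://{parts.netloc}/robots.txt"
print(robots_url)  # https://example.com/robots.txt

# Parse hypothetical robots.txt rules offline with the stdlib parser.
rules = [
    "User-agent: *",
    "Disallow: /private/",
]
rp = RobotFileParser()
rp.parse(rules)
print(rp.can_fetch("*", "/private/page"))    # False: disallowed
print(rp.can_fetch("*", "/blog/some-post"))  # True: crawlable
```

The same parser can be pointed at a live site with `rp.set_url(...)` followed by `rp.read()`, which fetches the file from the site's main directory — the only location crawlers check.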
When using infinite scroll on your site, make sure there is a paginated series of pages in addition to the single long scroll, and implement replaceState/pushState on the infinite scroll page. This is a fun little optimization that most web developers are not aware of, so check your infinite scroll for rel="next" and rel="prev" in the code.
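One way to audit this is to scan a component page's markup for the rel="next"/rel="prev" link tags. Here is a small sketch using Python's built-in HTML parser; the page URLs are hypothetical.

```python
from html.parser import HTMLParser

class PaginationLinkFinder(HTMLParser):
    """Collects rel="next"/rel="prev" <link> tags from a page's markup."""
    def __init__(self):
        super().__init__()
        self.pagination = {}

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "link" and attrs.get("rel") in ("next", "prev"):
            self.pagination[attrs["rel"]] = attrs.get("href")

# Hypothetical component page (page 2) of a paginated series.
html = """
<head>
  <link rel="prev" href="https://example.com/items?page=1">
  <link rel="next" href="https://example.com/items?page=3">
</head>
"""

finder = PaginationLinkFinder()
finder.feed(html)
print(finder.pagination)
```

If the dictionary comes back empty on a page that should be part of a series, the pagination markup is missing.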
As long as it’s XML, you can structure your sitemap however you’d like — category breakdown and overall structure is up to you and won’t affect how Google crawls your site.
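Generating a valid sitemap takes only a few lines. Below is a minimal sketch with Python's standard XML library; the URLs are hypothetical, and the grouping is entirely up to you.

```python
import xml.etree.ElementTree as ET

# Hypothetical URLs -- group and order them however you like.
urls = [
    "https://example.com/",
    "https://example.com/blog/first-post",
    "https://example.com/products/widget",
]

# The urlset/url/loc structure and namespace come from the sitemap protocol.
NS = "http://www.sitemaps.org/schemas/sitemap/0.9"
urlset = ET.Element("urlset", xmlns=NS)
for loc in urls:
    url_el = ET.SubElement(urlset, "url")
    ET.SubElement(url_el, "loc").text = loc

sitemap = ET.tostring(urlset, encoding="unicode")
print(sitemap)
```

Optional child elements such as lastmod or priority can be added per URL, but the skeleton above is already a valid sitemap.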
This tag will keep Google from showing the cached version of a page in its search results, but it won’t negatively affect that page’s overall ranking.
It’s not a hard rule, but Google usually finds the home page first. An exception would be if a large number of links point to a specific page deeper within your site.
A link to your content or website from a third-party site is weighted differently than a link from your own site.
Your crawl budget is the number of pages that search engines can and want to crawl in a given amount of time. You can get an idea of yours in your Search Console. From there, you can try to increase it if necessary.
Pages that aren’t essential to your SEO efforts often include privacy policies, expired promotions or terms and conditions.
My rule is that if the page is not meant to rank, and it does not have 100 percent unique quality content, block it.
With Google migrating to a mobile-first index, it’s more important than ever to make sure your pages perform well on mobile devices.
Website security is becoming increasingly important. In addition to the ranking boost given to secure sites, Chrome is now issuing warnings to users when they encounter sites with forms that are not secure. And it looks like webmasters have responded to these updates: According to Moz, over half of websites on page one of search results are HTTPS.
Google Webmaster Trends Analyst John Mueller recommends a load time of two to three seconds (though a longer one won’t necessarily affect your rankings).
There is a lot of confusion over the “Disallow” directive in your robots.txt file. Your robots.txt file simply tells Google not to crawl the disallowed pages/folders/parameters specified, but that doesn’t mean these pages won’t be indexed. From Google’s Search Console Help documentation:
You should not use robots.txt as a means to hide your web pages from Google Search results. This is because other pages might point to your page, and your page could get indexed that way, avoiding the robots.txt file. If you want to block your page from search results, use another method such as password protection or noindex tags or directives.
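The directive Google recommends for blocking indexing is the robots meta tag. The sketch below checks a page for a noindex directive using Python's built-in HTML parser; the page markup is hypothetical. Note that a crawler can only see this tag if the page is *not* disallowed in robots.txt — which is exactly why Disallow alone does not keep a page out of the index.

```python
from html.parser import HTMLParser

class RobotsMetaFinder(HTMLParser):
    """Collects directives from <meta name="robots"> tags."""
    def __init__(self):
        super().__init__()
        self.directives = set()

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and attrs.get("name", "").lower() == "robots":
            content = attrs.get("content", "")
            self.directives |= {d.strip().lower() for d in content.split(",")}

# Hypothetical page that should be crawlable but never indexed.
html = '<head><meta name="robots" content="noindex, follow"></head>'

finder = RobotsMetaFinder()
finder.feed(html)
print("noindex" in finder.directives)  # True
```

The same finder will surface other directives mentioned in this article, such as noarchive or notranslate, since they all live in the same content attribute.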
This allows you to keep the value of the old domain while using a newer domain name in marketing materials and other places.
Because it can take months for Google to recognize that a site has moved, Google representative John Mueller has recommended keeping 301 redirects live and in place for at least a year.
Personally, for important pages — say, a page with rankings, links and good authority redirecting to another important page — I recommend you never get rid of redirects.
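To see what a live 301 looks like to a client, here is a self-contained sketch that spins up a throwaway local server with a permanent redirect and follows it with urllib, the way browsers and crawlers do. The paths are hypothetical.

```python
import http.server
import threading
import urllib.request

class RedirectHandler(http.server.BaseHTTPRequestHandler):
    """Serves a permanent redirect from an old path to a new one."""
    def do_GET(self):
        if self.path == "/old-page":
            self.send_response(301)
            self.send_header("Location", "/new-page")
            self.end_headers()
        else:
            body = b"moved content lives here"
            self.send_response(200)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo quiet

server = http.server.HTTPServer(("127.0.0.1", 0), RedirectHandler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# urllib follows the 301 transparently; the final URL is the new page.
resp = urllib.request.urlopen(f"http://127.0.0.1:{port}/old-page")
print(resp.status, resp.geturl())
server.shutdown()
```

As long as the redirect stays in place, every request to the old URL lands on the new one — which is the behavior you want search engines to keep seeing.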
Google may sometimes include a search box with your listing. This search box is powered by Google Search and works to show users relevant content within your site.
If desired, you can choose to power this search box with your own search engine, or you can include results from your mobile app. You can also disable the search box in Google using the nositelinkssearchbox meta tag.
The “notranslate” meta tag tells Google not to offer a translation of the page in different language versions of Google search. This is a good option if you are skeptical about Google’s ability to properly translate your content.
If you have an app that you have not yet indexed, now is the time. By using Firebase app indexing, you can enable results from your app to appear when someone who’s installed your app searches for a related keyword.
If you would like to stay up to date with technical SEO, there are a few great places to do that.
I hope you enjoyed these 19 technical SEO facts. There are plenty more, but these are a few fun ones to chew on.