On June 13, 2017, in Seattle, Search Engine Land’s Danny Sullivan sat down with Google’s Gary Illyes to talk about all things Google. You can read live blog coverage from the session here. In this post, I’ve organized the content of this session into topical groups and added my own analysis.
Note: The questions and answers appearing herein are not direct quotes. I am paraphrasing Sullivan’s questions and Illyes’ answers, as well as providing my interpretation of what was said (and including additional context where appropriate). I’ve also omitted some content from the session.
Danny Sullivan asked: Are we going to keep getting more featured snippets?
Illyes has no idea about that, but he notes that featured snippets are very important to Google. They want the quality to be really high, and one consideration people don’t normally think about is that, in some cases (e.g., voice search results), the answers may be read out loud.
Sullivan then asked about getting data on featured snippets in Search Console, and Illyes indicated that they had internally worked on a project to report on that, but its release was being blocked by Google higher-ups.
(It was intriguing to get a glimpse of the inner workings at Google. Turns out, internal politics are an issue there — just like at any other company!)
Illyes said that the basic way to get a feature like this released would be to convince Google management that it would help publishers create better content. From my point of view, I think Search Console data on featured snippets would do exactly that.
Illyes further indicated that access to data on voice search may well be forthcoming soon, and that they are considering releasing something there. The goal would be to give people more insight on when their results show up in voice queries.
Sullivan then asked if Google can stop the mix-and-match of content in featured snippets, where content is taken from one site and an image from another. Illyes said that he doesn’t think he can influence that in any way. But perhaps publishers should think about the base featured snippet as position 0A, and the image as position 0B.
Sullivan then asked about PWAs (progressive web apps), AMP and native apps. Illyes essentially said that which one you use depends on what you want to do, and he noted that there is also something called PWAMPs, a combination of a PWA and AMP. He pointed out that for some publishers, native apps don't make sense. For example, Flipkart found that native apps did not work that well for them, so they built out a PWAMP and shifted over to that.
What’s cool about PWAs is that they have functionality that was only previously available to a native app, such as accessing phone hardware or push notifications. There is also a natural friction with native apps because you have to get people to install them, and that can be hard — after all, most users don’t install any apps at all in a given month. With a PWA, the user simply visits your website, and they are already using it.
Illyes further warned to be careful, because you can have your search visibility go to zero when you switch to a PWA if you don’t pay careful attention to the SEO side of things.
He also discussed AMP and indicated that it's just a stripped-down version of your content, designed to load far more quickly than a normal web page. If you're a news publication and you want to monetize your content, you'll want to use AMP. There are a ton of benefits to doing so. Access to the news carousel is one big aspect of that, but the overall speed of delivery matters a lot to users as well. As a publisher, you still retain full control over how you create it.
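To illustrate what "stripped-down" means in practice (the URL and content here are placeholders, not from the session), a minimal AMP page is a lean HTML document that loads the AMP runtime and points back to the regular version of the page via a canonical link:

```html
<!doctype html>
<html ⚡>
<head>
  <meta charset="utf-8">
  <!-- The AMP runtime script, required on every AMP page -->
  <script async src="https://cdn.ampproject.org/v0.js"></script>
  <!-- Points back to the regular (non-AMP) version of this article -->
  <link rel="canonical" href="https://example.com/article.html">
  <meta name="viewport" content="width=device-width,minimum-scale=1">
  <!-- AMP also requires its standard <style amp-boilerplate> block, omitted here for brevity -->
</head>
<body>
  <h1>Article headline</h1>
  <p>Stripped-down article content, served fast.</p>
</body>
</html>
```

The regular version of the page, in turn, advertises its AMP counterpart with a `<link rel="amphtml" href="...">` element, which is how Google discovers the AMP version.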
Later in the Q&A, Illyes notes that AMP is primarily interesting from a speed perspective, and if you can make your site really, really fast without AMP, then you may not need it. But overall, Illyes loves AMP, because it’s really fast when it loads from the search results.
He then reiterated what Google has said before, which is that you don’t get any rankings boost for implementing AMP (unless you’re lucky enough to be in the news space and get yourself into the AMP News Carousel).
Sullivan: What’s happening with RankBrain? Is this still used primarily for query refinement?
Illyes explained that RankBrain allows Google to better understand what would be the best result for the user’s query, based on historical data. It’s currently live in all languages. There is no plan to change it or launch new things into it, as the team is busy working on other things. They are looking at other ways to use machine learning in search, but they are nowhere near launching something new in this area.
My summary of what RankBrain does: To be clear, this is my interpretation of what I’ve heard in several public Google conversations about the topic and therefore does not represent Google’s statements on the matter. But it tracks pretty precisely to what Gary Illyes told me at a conference last year and what he said at SMX Advanced this year.
Here’s my summary in one sentence: RankBrain leverages the historical performance of essentially identical (or nearly identical) queries to see what worked and what didn’t, and then uses that information to adjust and improve the results delivered for the current query.
In more detail, RankBrain compares the user’s query with other historical queries of a similar nature. This is where the machine learning comes in, because they use it to identify historical queries that are the most similar to those Google has already responded to. In machine learning-speak, this is done in “high-dimensional vector space.”
This is then used to see how those historical queries performed. By looking at multiple queries, Google can find out what types of results performed well and which ones did not. That information is then used to tweak the results that came back from the regular Google algorithms for the new query, and in some cases it may even change what algorithms get invoked to address the query.
The reason RankBrain has the biggest impact in the long tail of search is because that’s where the value of this comparison is so high. For head term queries like “digital cameras,” the core algorithms already work extremely well. But for rare queries, leveraging the data from other past similar queries can be quite valuable.
Sullivan: Do you get a ranking boost if you implement HTTPS?
Illyes confirmed that yes, the ranking boost for HTTPS is still there — and there are no plans to update it. The boost has not increased from its original implementation. It may increase at some point in the future, but there are no current plans.
In my view, the HTTPS ranking boost is like the Vice President’s vote in the US Senate. If a vote is tied, the VP casts the tie-breaking vote. This has only happened 258 times in US history (and 25 times in the past 50 years), so it’s pretty rare. In other words, it’s a really weak signal.
Sullivan: What is the impact of page speed on search engine rankings?
Page speed already is a ranking factor, but Illyes noted that the algorithm currently looks at the desktop version of a page when taking this into account. Google is working to fix that, and Illyes has assured us that they will be quite “loud” about it when they do: They will blog about it, tweet about it and so on. They want people to make sites fast.
However, the ranking boost from page speed will be comparable to the HTTPS ranking boost, which is more like a tiebreaker.
Sullivan: What’s up with Fred (a recent, unconfirmed ranking algorithm update)?
Illyes said he can’t talk about it. The main takeaway from this discussion is a reminder that Google does updates nearly every day and that he is not at liberty to discuss most of them. Fred was just a basic quality update, closely related to the quality section of the Webmaster Guidelines. He also noted that a lot of people make noise when their sites take a hit, but few say anything when they recover.
Google’s policy does include talking about major updates, however. Sullivan noted that the last update Google discussed was the one to address both fake news and some of the featured snippet quality problems. You can read more about that here.
Sullivan: You said over-optimization can hurt you, but in the past, you said it couldn’t. Which is it?
Illyes clarified that it’s a matter of degree. If you tend to put too many keywords in your content, but it’s not really egregious, Google will probably ignore it. But if you really push the limits, at some point, it’s probably going to be considered spammy.
Sullivan: Are there issues when you switch your site from non-secure to secure?
It depends on the size of your site. Illyes knew of quite a few media sites that had made the switch in the last few months. He recommended that they switch their sites in sections, because doing so limits the damage if something goes wrong. Only one had a major problem.
Also, in the past, Google had signals in its algorithm that were sensitive to whether a site was served over HTTPS or HTTP, but these have all been fixed (except, of course, the ranking boost).
In an important sidebar, Illyes also discussed how long it would take to recover from a more complex move, if you did indeed see an impact. He said that the HTTPS team wants to say two weeks, but if there are a lot of URLs that are crawled rarely, it could take three months or even more. If you’re doing something like moving domains, you can use the site move tool in Search Console, of course.
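As a sketch of what the section-by-section approach might look like (assuming an Apache server; the directory name here is hypothetical), you could 301-redirect one section of the site to HTTPS at a time rather than flipping everything at once:

```apacheconf
# Hypothetical .htaccess rules: move only the /blog/ section to HTTPS first
RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule ^blog/(.*)$ https://%{HTTP_HOST}/blog/$1 [R=301,L]
```

Once the first section has been verified in Search Console and traffic looks stable, you would broaden the rule to cover the next section, and so on.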
Here are some audience questions that Gary Illyes fielded during the keynote conversation, most of which have brief answers.
Question: Is hreflang a fake tag?
It does work. Illyes developed it, and it does what he said it does.
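For reference (the URLs here are placeholders), hreflang annotations are reciprocal link elements placed in the `<head>` of each language or region version of a page:

```html
<!-- Included on both the English and German versions of the page -->
<link rel="alternate" hreflang="en-us" href="https://example.com/en-us/page.html">
<link rel="alternate" hreflang="de" href="https://example.com/de/seite.html">
<!-- Fallback for users whose language/region isn't listed -->
<link rel="alternate" hreflang="x-default" href="https://example.com/page.html">
```

Each version must link to all the others (including itself) for the annotations to be honored; one-way annotations are ignored.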
Question: Does responsive web design have a higher opportunity to rank?
Illyes says no. Google recommends responsive web design because it makes maintaining your sites easier. For example, it’s easier for webmasters to keep their schema in place.
Question: Does Google adapt its crawling of lower network speeds for mobile, such as 3G?
There is no change in how it works.
Question: How important is schema for an e-commerce site?
It’s very important.
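As one common example (the product values here are made up for illustration), e-commerce product pages often use schema.org Product markup in JSON-LD to make price, availability and ratings machine-readable:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Example 20MP Digital Camera",
  "image": "https://example.com/images/camera.jpg",
  "offers": {
    "@type": "Offer",
    "priceCurrency": "USD",
    "price": "299.99",
    "availability": "https://schema.org/InStock"
  },
  "aggregateRating": {
    "@type": "AggregateRating",
    "ratingValue": "4.4",
    "reviewCount": "127"
  }
}
</script>
```

Markup like this is what makes a product eligible for rich results (price, star ratings) in the search listings.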
Question: How do you handle bad linking practices?
Google will normally just ignore bad links, as per the real-time version of Penguin that was recently released. Illyes noted that people pushed Google hard to switch to that method for handling bad links (devaluing them rather than penalizing them), but as soon as they did, others started asking why Google didn’t penalize these sites. However, he also said that Google does still send out manual action emails, though they tend to not be as harsh as they used to be. Also, if you are buying links, it’s extremely likely that you’re throwing money out the window.
Question: What are the top things to think about?
Here, Illyes gave a list:
That’s a wrap!
The post What I learned from the Danny Sullivan/Gary Illyes keynote at SMX Advanced appeared first on Search Engine Land.