A Wired article explains how Google is now using machine learning to understand and produce the featured snippets shown in its search results.
Google “just went live” on its desktop search results with what it calls “sentence-compression algorithms.” These algorithms learn “to take a long sentence or paragraph from a relevant page on the web and extract the upshot — the information you’re looking for,” Wired added.
In short, Google is getting better at looking at content on the web and extracting the specific nuggets of information that directly answer the query.
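To make the idea concrete, here is a deliberately crude, hypothetical sketch of query-based extraction: pick the sentence from a page that shares the most terms with the query. This is a toy stand-in for illustration only, nothing like Google's actual neural sentence-compression system; all names here are invented.

```python
import re

def extract_upshot(query: str, page_text: str) -> str:
    """Return the sentence sharing the most terms with the query (toy heuristic)."""
    # Split the page into sentences on end punctuation followed by whitespace.
    sentences = re.split(r"(?<=[.!?])\s+", page_text.strip())
    query_terms = set(query.lower().split())

    def score(sentence: str) -> int:
        # Count how many query terms appear in the sentence.
        return len(query_terms & set(re.findall(r"\w+", sentence.lower())))

    return max(sentences, key=score)

page = ("The Eiffel Tower was completed in 1889. "
        "It stands 330 metres tall. "
        "It is located in Paris, France.")

print(extract_upshot("how many metres tall is it", page))
# → It stands 330 metres tall.
```

A real system replaces the word-overlap heuristic with a trained model that can compress and rephrase, rather than merely select, a sentence.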
How did they get this to work?
Wired explains the training process: To train Google’s artificial Q&A brain, Orr and company also use old news stories, where machines start to see how headlines serve as short summaries of the longer articles that follow. But for now, the company still needs its team of PhD linguists. They not only demonstrate sentence compression, but actually label parts of speech in ways that help neural nets understand how human language works. Spanning about 100 PhD linguists across the globe, the Pygmalion team produces what Orr calls “the gold data,” while the news stories are the “silver.” The silver data is still useful, because there’s so much of it. But the gold data is essential. Linne Ha, who oversees Pygmalion, says the team will continue to grow in the years to come.
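The “silver data” idea described above, treating a headline as a ready-made compression of the article beneath it, can be sketched as a simple pairing step. This is an illustrative assumption about how such training pairs might be structured, not Google's actual pipeline; the class and field names are invented.

```python
from dataclasses import dataclass

@dataclass
class CompressionExample:
    source: str  # long text: the article body
    target: str  # short text: the headline, used as the compression label

# A tiny made-up corpus standing in for "old news stories".
articles = [
    {"headline": "City approves new bike lanes",
     "body": "After months of debate, the city council voted on Tuesday "
             "to approve a network of protected bike lanes downtown."},
]

# Pair each body with its headline to form weakly labeled training data.
silver_data = [CompressionExample(a["body"], a["headline"]) for a in articles]
```

The “gold data,” by contrast, would be examples labeled by hand, fewer in number but far more reliable, which is why the article calls it essential.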
With Google Home, Google Assistant and the growing number of featured snippets in the search results, it is no surprise that Google is advancing its technology around this challenge.
The post Google’s machine learning now writes featured snippets descriptions appeared first on Search Engine Land.