Google is pulling back the curtain slightly to reveal how it manages the autocomplete function on its search engine, providing a peek into the logic behind the predictions, some of the policies that govern when a prediction will be removed and more. In a post on the Google Blog, search liaison Danny Sullivan also reveals that the company will be expanding the types of autocomplete predictions it disallows in the coming weeks.
The post elucidates one of the functions that has gotten the company into hot water over the years when the public has become outraged by certain predictions and accused the company of being racist or leaning a certain way politically. Autocomplete has even been the subject of legal disputes, with courts in Japan and Germany weighing in on results some deemed unfair. All along, though, Google has maintained that autocomplete results simply reflect searcher behavior, though it has had to institute policies and manually manage results after repeated controversies.
Today’s blog post explains, with examples, how Google autocomplete works with Google search on desktop, mobile and Chrome, extolling the benefits of the function by saying it saves users “over 200 years of typing time per day” by reducing typing by about 25 percent.
Autocomplete shows predictions based on how Google thinks you “were likely to continue entering” the rest of your query. Google determines these predictions by looking at “the real searches that happen on Google,” surfacing “common and trending ones relevant to the characters that are entered” as well as ones related to your location and previous searches.
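Google doesn’t publish its ranking details, but the basic idea the post describes — matching what you’ve typed so far against real past queries and surfacing the most common ones — can be sketched in a few lines. This is a simplified illustration only, with a made-up query log; the real system also factors in trends, location and search history:

```python
from collections import Counter

def build_suggester(query_log):
    """Build a toy autocomplete from a log of past search queries."""
    counts = Counter(query_log)  # how often each full query was searched

    def suggest(prefix, k=3):
        # Find past queries starting with the typed prefix,
        # ranked by frequency (ties broken alphabetically).
        matches = [(q, n) for q, n in counts.items() if q.startswith(prefix)]
        matches.sort(key=lambda qn: (-qn[1], qn[0]))
        return [q for q, _ in matches[:k]]

    return suggest

# Hypothetical query log for illustration.
log = ["weather today", "weather tomorrow", "weather today",
       "web design", "weather radar"]
suggest = build_suggester(log)
print(suggest("wea"))  # most frequent matches first
```

In practice the policy side discussed below amounts to a filtering step layered on top of this kind of ranking: candidate predictions that match a disallowed category are dropped before anything is shown to the user.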
Google will remove predictions that are against its guidelines.
Google said that in the coming weeks it will expand the types of predictions it removes from autocomplete, starting with hate- and violence-related removals. The company will remove predictions that are hateful toward race, ethnic origin, religion, disability, gender, age, nationality, veteran status, sexual orientation or gender identity.
Google will also remove predictions that are “perceived as hateful or prejudiced toward individuals and groups, without particular demographics.”
With the greater protections for individuals and groups, there may be exceptions where compelling public interest allows for a prediction to be retained. With groups, predictions might also be retained if there’s clear “attribution of source” indicated. For example, predictions for song lyrics or book titles that might be sensitive may appear, but only when combined with words like “lyrics” or “book” or other cues that indicate a specific work is being sought.
As for violence, Google says its policy will expand to cover removal of predictions “which seem to advocate, glorify or trivialize violence and atrocities, or which disparage victims.”
Sometimes predictions that are against Google’s guidelines slip through; Google admits its systems “aren’t perfect” and says it works quickly to remove violating predictions once alerted to them. The company offers a feedback process for users to report inappropriate predictions.
The post Google expanding types of predictions they remove from autocomplete appeared first on Search Engine Land.