Google BERT Update: AI and Machine Learning in Semantic Search
Background and History
Semantic Search
Search queries long ago went beyond simple sets of keywords. We may not know the name of what we are looking for, or we may doubt how to correctly describe the desired result. Sometimes there is only a picture, a photograph, or an abstract idea. Nevertheless, in most cases the search engine manages the task at hand. For the search algorithm to correctly perceive and process information, it needs to be informed about the context of the request. Such a search is known as semantic: it operates not so much on the entered words as on their meanings and relationships. See the article What Is Semantic Search? for more information about semantic search features.
The context and hierarchy of language matter not only because users often rely on abstract concepts, but also because the very structure of human communication allows words to change meaning and carry more than one sense. For example, a tree can refer to a plant, to building material, or, say, to the topology of a local network. This is called polysemy; in English, for example, more than 40% of words have multiple meanings. All of this poses an additional challenge for the quality and soundness of the algorithms responsible for processing search queries.
Other factors also underline the importance of semantic search, such as the growing popularity of voice commands and assistants like Siri and Alexa. Written and spoken language can differ considerably, not to mention professional slang and a variety of accents.
The Implementation History
One of the first important steps towards understanding search queries as a whole, rather than as individual keywords, was the launch of the Google Knowledge Graph in 2012. Its motto was “things, not strings”, which demonstrates the chosen course of learning to understand context. The Knowledge Graph provided a quick summary of data about the things Google had information on, such as personalities, architectural monuments, chemical elements, or celestial bodies.
A year later, in 2013, Google released the Hummingbird algorithm update, which significantly improved the processing of complex queries with imprecise wording. The update focused on the intent behind the search and the meaning of the entire query, rather than its individual components. You can learn more about algorithms in our article Google Algorithms That Affect SEO.
In the article LSI Keywords, we have already touched on selecting keywords based on latent semantic relationships, or LSI. Google developed its own algorithm, RankBrain, that solves similar problems, but with greater efficiency and over a huge flow of data. RankBrain is not limited to a specific selection of documents or pages when analyzing the meaning of a word. Machine learning allows it to continuously improve the quality of recognizing the meaning of terms through extensive analysis of the contexts in which those terms most often appear. However, RankBrain can be called an extension or add-on to Hummingbird rather than a completely independent and separate algorithm. The next step in handling such queries was the introduction of the BERT algorithm.
What Does BERT Mean?
Bidirectional Encoder Representations from Transformers, or BERT, is probably the largest update to Google’s search algorithms in recent years, especially in terms of semantics. In broad terms, BERT determines what each word in a query means by also analyzing the words that surround it. Earlier models for determining meaning processed each word separately, only partially taking into account the context of a specific query. Analyzing a word’s surroundings requires great flexibility from the model, since many variables can change simultaneously. And the more complex the model, the more powerful the computing hardware must be for it to work. In terms of quality and performance, transformer models, or simply transformers, proved to be optimal. This is an innovative neural network architecture that specializes in natural language processing. Transformers were first introduced in August 2017 and showed the best performance in comparison with other existing models.
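To make the idea of context-dependent word meanings concrete, here is a minimal sketch using the open-source Hugging Face transformers library and the public bert-base-uncased checkpoint (an assumption made for illustration; this is not Google’s production search code). It encodes the word “tree” in two different sentences and shows that the resulting vectors differ, because each occurrence is interpreted together with its surrounding words.

```python
# Minimal sketch: the same word gets different contextual vectors.
# Assumes the Hugging Face "transformers" and "torch" packages are installed.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

sentences = [
    "We planted a tree in the garden.",          # tree = a plant
    "The network topology is a spanning tree.",  # tree = a graph structure
]

vectors = []
for text in sentences:
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state[0]  # (num_tokens, 768)
    # Locate the token "tree" and keep its contextual vector.
    tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
    vectors.append(hidden[tokens.index("tree")])

# The two vectors differ because each occurrence of "tree" is encoded
# together with every other word in its sentence (bidirectional context).
similarity = torch.cosine_similarity(vectors[0], vectors[1], dim=0).item()
print(f"Cosine similarity between the two 'tree' vectors: {similarity:.3f}")
```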
According to the Google Search Liaison account, Google BERT now supports more than 70 languages, including Spanish, Basque, Hebrew, Swahili, Ukrainian, French, and Chinese. It is this unique architecture and way of processing data that allow BERT to be applied to such a wide range of different natural languages.
BERT was first released in November 2018 as a neural network-based technique for natural language processing. Its uniqueness lies in deep bidirectional analysis and in the absence of mandatory supervision of the results at each stage of data processing. This unsupervised approach has already shown positive results, for example, when selecting related news: artificial intelligence analyzes the entered query, as well as relevant news from verified sources, “on the fly”.
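The unsupervised, bidirectional pre-training mentioned above is usually illustrated with the masked-word task: the model fills in a hidden word using the words on both sides of it. Below is a short, hedged sketch of that task using the Hugging Face pipeline API and the public bert-base-uncased checkpoint (assumed here for illustration only).

```python
# Masked-word prediction: the model uses context on BOTH sides of the gap.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# The left context alone ("I went to the ...") is ambiguous; it is the right
# context ("... to withdraw some cash") that pins down the missing word.
for prediction in fill_mask("I went to the [MASK] to withdraw some cash.")[:3]:
    print(f"{prediction['token_str']:>10}  score={prediction['score']:.3f}")
# Expected top candidates include words such as "bank" or "atm".
```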
However, any algorithm requires a training phase, and BERT was no exception. By opening public access to the code, Google stated that its main goal was to train BERT. Of course, this was also a great opportunity for thousands of enthusiasts to try their hand at developing their own ideal assistant that can answer any question.
Updating the Google Algorithm
Since 2019, BERT has affected about 10% of all search queries in English, constantly improving the quality of evaluating and understanding not only long phrases but also entire sentences with abstractly formulated or potentially ambiguous questions. For example, if you enter “what kind of dog was in John Wick 3”, you will immediately get the answer in the form of a snippet.
And even if you change the query to “what animal was in John Wick 3”, the snippet will be different, but the answer, the Belgian Malinois shepherd, is essentially the same. At the same time, the answer more closely matches the form of the question asked.
Snippets often appear as illustrations of BERT’s work. The official Google blog mentions that BERT primarily affected ranking, namely the principles by which search results pages and rich results are built, first of all featured snippets. You can read more about them in the article What are Rich Snippets?
The Main Functions of BERT
When its effectiveness was analyzed, BERT showed outstanding results in solving about a dozen problems related to natural language processing. Its main areas of application include the following (a small sketch of the question-answering task follows the list):
- Identifying named entities and matching queries with them.
- Next sentence prediction based on context.
- Coreference resolution.
- Interpreting questions and finding suitable answers (question answering).
- Choosing the correct meaning of a word in context (word sense disambiguation).
- Automatically collecting information to generate short summaries.
- Solving problems related to polysemy.
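As an illustration of one item from this list, question answering, here is a hedged sketch using a BERT-family model fine-tuned on SQuAD via the Hugging Face pipeline API. The checkpoint name is an assumption chosen for the example; any extractive question-answering model would do, and this is not Google’s implementation.

```python
# Extractive question answering with a BERT-family model.
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="distilbert-base-cased-distilled-squad",  # assumed public checkpoint
)

context = (
    "Google open-sourced BERT in November 2018, and since 2019 the update "
    "has affected about 10% of English search queries."
)
result = qa(question="When did Google open-source BERT?", context=context)
print(result["answer"], round(result["score"], 3))  # expected: "November 2018"
```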
BERT and E-A-T
It is important to understand the difference between BERT and E-A-T to correctly approach the tasks that each of them solves. E-A-T (Expertise, Authoritativeness, Trustworthiness) is a set of standards or metrics for assessing content quality, designed to help identify useful content. Since natural language allows a single word to have multiple meanings, BERT helps the search engine interpret such information more accurately and build search results on that basis. In both cases, the quality of the content is crucial. Google’s main goal is to allow authors to focus on creating memorable, high-quality, and useful material, without the need to further optimize this material for search results.
BERT and RoBERTa
After Google provided free access to the BERT code, Facebook also decided to test the effectiveness of the new processing method. The social network had long been interested in improving its internal search and had looked for ways to process logically complex structures and sentences more efficiently. The collaboration between Facebook engineers and researchers from the University of Washington led to the creation of RoBERTa, an updated model built on the PyTorch machine learning library. The main changes concerned predictive analysis, as well as accelerating the training phase by analyzing many small data sets. For example, the original BERT was trained on a sample with a total data volume of 16 gigabytes and no more than 100 thousand training iterations. RoBERTa was trained on a more complex sample of more than 70 gigabytes, with the number of training iterations increased first to 300 thousand and then to 500 thousand. The result showed significant potential, surpassing the original BERT model on 4 out of 9 common GLUE benchmarks.
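Since both BERT and RoBERTa were released publicly, the RoBERTa weights can be loaded through PyTorch, for example via the Hugging Face transformers library, as in the hedged sketch below (Facebook also distributes the model through fairseq and torch.hub). This illustrates the open release, not the training procedure described above.

```python
# Loading publicly released RoBERTa weights through PyTorch.
import torch
from transformers import RobertaModel, RobertaTokenizer

tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
model = RobertaModel.from_pretrained("roberta-base")

inputs = tokenizer("RoBERTa builds on BERT's architecture.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# One contextual vector per input token, with the same interface as BERT models.
print(outputs.last_hidden_state.shape)  # e.g. torch.Size([1, 11, 768])
```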
Google Comments
There were a lot of statements and rumors around the BERT update, but official sources addressed two main issues that concern most webmasters and optimizers. The first question, namely what changes the update would bring to search results, advanced search features, snippets, and so on, was addressed by Duncan Osborn, a product manager for Google Search. In his publication on The Keyword, Google’s blog, devoted to organizing important stories, top news, and Google recommendations, he paid special attention to the role of machine learning and BERT in selecting and organizing information. Thus, when searching for news about current events, Google can now do more than offer a carousel of materials from verified sources sorted by the most recent publication date. Thanks to BERT, the information is organized for each proposed story or topic, and within the topic or story, the news is arranged according to its quality and relevance to the query rather than its publication date.
The second important issue, how to optimize pages for BERT, became the subject of Danny Sullivan’s tweet.
In the exchange of messages, it was noted that content authors would prefer not to think so much about new factors that need to be taken into account after each update, but rather to have creative freedom and create content for users. Sullivan confirmed Google’s long-standing position of prioritizing content quality. The company is actively developing tools for more effective interpretation and evaluation of content quality, so that information that is useful and important to users is always surfaced first. John Mueller was asked a similar question in January 2020. In his response, he paid special attention to the fact that BERT has two sides: the text on pages and the text of search queries. Its task is to match the first with the second, thereby improving the quality of search results. However, since search engine optimization specialists cannot influence the queries that users enter, any optimization is only possible on the side of the pages themselves.
Also, in the same video, John Mueller describes BERT not as a new algorithm or a change in ranking rules, but only as a tool for better understanding text. He attributes any changes in page ranking not to BERT, but rather to other, sometimes minor, updates to the search algorithms and ranking parameters themselves, which occur almost constantly.
The Impact and Consequences of BERT
PPC
There are many ways to select and successfully use keywords for purchasing traffic, but with the introduction of BERT, the rules for matching search queries and web page content are changing. This is likely to lead to several changes when planning advertising campaigns:
- The role of continuous monitoring of organic traffic will change, especially in terms of keywords and basic queries. Only organic traffic will show the most accurate picture of how your site’s content is interpreted and matched against queries.
- The role of branded traffic will weaken.
- One of the main factors in planning will be search intent and how it is interpreted. You can learn more about search intent from this material.
- Ad targeting may effectively become stricter, since search results themselves will be generated with priority given to the most accurate and narrow organic matches.
- As a result, the key skills will be the ability to correctly interpret metrics and work with content.
Optimization and Keywords
BERT is most likely to have the greatest impact on cold traffic, that is, queries at the top of the funnel. In terms of search intent, these are mostly informational and, less frequently, navigational queries. They are characterized by open questions, general wording, and specific stop words, the same kind of queries that appeared in the BERT announcements, where Google representatives gave examples of how the update works.
The need to consider long-tail query optimization is also nothing new. There is no reason to believe that the level of detail or length of a keyword will by itself affect ranking; reports of an increased weight for this search signal have yet to be confirmed. On the other hand, the possibilities of interpretation have changed: long keyword sequences will turn from tools for expanding and capturing niche or random queries into full-fledged ranking participants. More related information on the subject can be found in the Long Tail Keywords article.
As for latent semantics, new possibilities are likely to appear. The better the search service understands all the subtleties and stylistics of communication, the less the ranking results will restrict creative freedom. Google’s main message regarding content remains unchanged: high-quality and useful materials will be a priority. Thus, an expanded range of synonyms and rich stylistics will gradually become an integral part of search engine optimization.
Here are some simple but effective recommendations for improving your page results with the Google BERT update:
- Build content around thoughtful, specific questions, and provide precise and meaningful answers to them.
- Do not neglect additional indexing aids, for example, adding text transcripts to videos.
- Do not strive to create voluminous content. It is better to break large but detailed material into several smaller articles, each focused on a particular aspect.