
JavaScript SEO: What You Need to Know
Definition and Steps of HTML, JS, CSS
There are several building blocks used to create web pages: HTML, Cascading Style Sheets (CSS), and JavaScript. Each is responsible for a specific task. HTML provides the basis for a page by defining its main building blocks. CSS controls the structure and display parameters of elements: style sheets can set anything from color and font size to an element's position on the page and the order of elements. What is JavaScript's role? Scripting lets you add dynamic elements that enhance the rendering and behavior of your content.
HTML tags are an integral element of every page: they form its skeleton. The beginning and end of each element and section are marked with special tags. The most important sections are the head and the body. The head contains parameters and information about the page, such as the page title and tracking and analytics code; the body holds the main content of the page. Cascading style sheets, meanwhile, are not limited to a single page. They are used to give several sections of the site the same appearance: write the style rules once and add a link to the CSS file in the head section. This is convenient not only because it avoids duplicating the same code many times; it also lets you make edits in one main CSS file rather than updating each page separately. Edit one file and the results appear on all pages. In large, complex projects, style sheets can run to thousands of lines, which can hurt performance, since loading an external file of that size is not good practice. You may notice that on a first visit, or after clearing the cache, some web pages jump or flicker while loading. Most often this happens because the page elements load first and only then the CSS file that determines how they are displayed, so the page re-renders once it receives the new instructions from the loaded style sheet.
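As a minimal sketch of this structure (file names and content are hypothetical), a page skeleton with an external style sheet linked in the head might look like this:

```html
<!DOCTYPE html>
<html>
  <head>
    <!-- Page parameters and metadata live in the head -->
    <title>Example Page</title>
    <!-- One shared style sheet, reused across the whole site -->
    <link rel="stylesheet" href="/css/main.css">
  </head>
  <body>
    <!-- The main content of the page lives in the body -->
    <h1>Welcome</h1>
    <p>Editing /css/main.css changes the look of every page that links to it.</p>
  </body>
</html>
```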
What is JavaScript? If you have worked with page code and wondered what those .js links defined in the head of the page are, they are external JavaScript scripts. Google Analytics, drop-down menus, and pop-ups are mostly implemented using JavaScript scripts and libraries; options include the Lightbox and Fancybox pop-up modules, the lazysizes library for lazy-loading images, and more. Unlike images or other page resources, scripts are rarely stored locally. Instead, a link is inserted into the code where the HTML can always find, download, and execute the current version of the script. This method is convenient because it ensures you always use the latest version of the library or script, updates included. But because the instruction code lives in an external file, there are potential drawbacks to consider. If the script is slow to load, the page will not display correctly, and if the script has to change or supplement the page code, those changes will not be visible while the download is in progress. This is one of the reasons Google does not recommend using the document.write method.
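For illustration (the URLs are hypothetical), this is what an external script reference in the head looks like, alongside the document.write pattern that Google advises against:

```html
<head>
  <!-- External script: the browser downloads and runs the current hosted version -->
  <script src="https://cdn.example.com/lib/fancybox.min.js"></script>
  <script>
    // Discouraged pattern: if the external script above is delayed,
    // anything injected with document.write appears late or not at all.
    // document.write('<p>Injected content</p>');
  </script>
</head>
```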
Modifying the properties or content of individual page elements is not as risky as tampering with the code structure. How do JavaScript and SEO interact, then? The point is that few search bots are capable of correctly recognizing, loading, and executing JavaScript, which means page elements that depend on scripts are extremely difficult to index; the content of a pop-up, in most cases, will not be taken into account when the page content is analyzed. Only a handful of search engines can recognize scripted content. Google supports executing and rendering content on pages that use JavaScript, which allows its bots to analyze the version of the page that real users will see. The indexing process for these pages is slightly different and takes longer because of the additional computing resources required: the page itself is immediately ready for analysis, but when JavaScript is detected, extra steps are needed to download and execute the script and obtain the final version of the page for indexing.
Read the related article – How and Why Search Engines Render Pages.
JS Processing, Validation, and Testing Features
It’s hard to find a memorable interactive website that doesn’t use JavaScript, so when developing one it is important to consider the specifics of JavaScript SEO. Many scripts perform operations in the background, such as collecting analytics data, but the role of JavaScript is not limited to this. A good example is the sections with recommended, similar, or popular products displayed below a product in online stores. User reviews and comments, pop-ups, keystroke navigation, dynamic search, and local store locators are further examples, and even pagination is a dynamic element used when building a JavaScript blog.
A number of shortcomings can hide behind the external beauty. We will cover the main ones to help you answer the “do I need JavaScript?” question. One issue is that the page as delivered does not contain all the content to be indexed. When the user follows a link and the code starts to be processed, JavaScript can make the desired changes to the HTML or CSS; only after that do we see the final version of the page. The page that the user opened and the one they ultimately see can differ significantly. For this reason, it is not advisable to use JavaScript to generate important page content.
There are usually several scripts involved, and some may require additional libraries or modules; together, they increase the number of external requests. Multiple HTTP requests to external files are not inherently dangerous, but when a page has to contact hundreds of other servers, then find and process the files there in order to load and display its content, user experience can suffer. A script-heavy page can perform poorly on some devices regardless of whether the scripts are external or stored locally. Similar problems can be observed on sites built with free CMSs such as WordPress, where one common cause is a large number of installed plugins and widgets in the page code.
Even when many external scripts are unavoidable, there are proven ways to optimize performance, such as dynamic loading SEO. A page does not need to fetch every script from a third party: it is enough to store the script and library files locally, on the same hosting as the rest of the website. This will not reduce the number of HTTP requests, but it will let them execute more efficiently; processing time drops because, instead of many requests to external servers, all tasks are performed within the same local storage. The downside of this approach is that you have to track updates yourself and keep the locally saved files current. Not all platforms and use cases allow it, either. If an external script is being requested because of an installed WordPress plugin, it will take considerable effort to point pages at a local version of that script without breaking the plugin itself, and on the Shopify platform you cannot manage the JavaScript file storage policy at all, which makes local storage even harder to implement.
Another option is code optimization. Optimizing your code implies that you have the necessary permissions: either you can modify the file in the external storage where it lives, or you save an optimized copy locally. Fortunately, there are many tools that automate this process, so you probably won’t have to optimize manually; these tools are mostly aimed at routine passes. For example, removing unused text (such as comments and whitespace) from the body of a script can reduce the size of the final file by at least 10%. One disadvantage of automatic optimization is that the resulting code is less comfortable to work with later, though this only matters if you plan to make changes in the future.
Combining scripts lets you capture the benefits of code optimization and of reducing external HTTP requests at the same time. This is effective when a page has to process many small JavaScript files; in that case, it can make sense to merge them into one or a few medium-sized scripts. To avoid losing track of the component parts, it is enough to add appropriate comments. Script compatibility becomes the priority here: if the scripts were designed to run individually, conflicts or cyclic dependencies can occur. Resources can also be optimized significantly if the scripts being combined rely on the same external libraries.
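As a rough sketch of the idea (the file names are made up), small scripts can be concatenated into one bundle, with comments marking the component parts:

```shell
# Two small hypothetical scripts
printf 'console.log("menu ready");\n' > menu.js
printf 'console.log("gallery ready");\n' > gallery.js

# Combine them into one medium-sized file, labelling each part
{
  echo '/* --- menu.js --- */'
  cat menu.js
  echo '/* --- gallery.js --- */'
  cat gallery.js
} > bundle.js
```

In practice, a bundler such as webpack or Rollup does this job while also resolving shared dependencies, which helps avoid the conflicts mentioned above.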
If your site does not allow any of these optimization methods, you can take another route: make sure the loaded scripts use asynchronous loading. The principle is simple. Instead of processing executable files strictly in turn, the web page does not wait for one script to finish loading before moving on to the next. This shortens the time needed to render the final version of the page thanks to a more optimal loading order, and you can prioritize key elements by applying asynchronous loading to supporting scripts only. That said, asynchronous loading will not always reduce the actual page load time; it can create a long, slow chain of actions that is difficult to work around. Another example is one script dynamically adding another script to the page: in such situations, developers sometimes forget to specify the need for asynchronous loading in the properties of the newly created object.
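A sketch of both cases (file paths are hypothetical): the async attribute on a supporting script, and the property that is easy to overlook when one script injects another:

```html
<!-- Supporting script loads without blocking the rest of the page -->
<script async src="/js/analytics.js"></script>

<script>
  // One script dynamically adding another to the page
  var s = document.createElement('script');
  s.src = '/js/widgets.js';
  s.async = true; // state the need for asynchronous loading explicitly
  document.head.appendChild(s);
</script>
```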
When planning how JavaScript will run on your web page, take extra care to allocate resources properly; we recommend loading important information first. Remember that every script and every dynamic element must be downloaded, which already consumes part of the bandwidth of the data channel, and executing the downloaded instructions requires computing resources on the client side. The visitor’s browser also needs free memory to display the result, and the more complex the result, the more memory is consumed. Since we are talking about web pages, the key criterion for success in the eyes of the user is easy to name: a smooth, continuous, and consistent loading and rendering process for all elements. To achieve it, you will need to prioritize script loading not only by importance but also so that elements do not interfere with one another. A conflict or incorrect prioritization can cause numerous delays in executing the instructions, which negatively affects the user experience of visiting the website.
JavaScript is usually placed in the <head> tag to ensure efficient operation: this position gives scripts priority over the body content during loading, making any modifications and changes easier to apply. Try moving the lowest-priority scripts to just before the closing </body> tag and compare the speed before and after the change. This can often be an easy way to superficially diagnose the sources of performance problems. If you don’t want to experiment on important pages of your site, create a dedicated JavaScript test page.
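A minimal sketch of the experiment described above (the file names are hypothetical):

```html
<html>
  <head>
    <!-- High-priority script: loads before the body content -->
    <script src="/js/critical.js"></script>
  </head>
  <body>
    <p>Page content...</p>
    <!-- Lowest-priority script moved just before the closing body tag -->
    <script src="/js/low-priority.js"></script>
  </body>
</html>
```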
All checks should be carried out in as isolated a way as possible, so that you can reliably attribute the result to a specific file or script. To quickly measure the effect of a particular JavaScript file on page speed, measure the load time before and after the script is disabled. To avoid radical changes to the code while still making sure the script is disabled, it is enough to use comments, a great way to troubleshoot JavaScript. Just wrap the tag that invokes the script of interest in an HTML comment, save your changes, and reload the page. Then open the code inspector in Google Chrome and confirm on the network activity tab that the script was not loaded.
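Disabling a script with an HTML comment looks like this (the file name is made up for illustration):

```html
<!-- Temporarily disabled for an isolated speed test:
<script src="/js/suspect-widget.js"></script>
-->
```

After measuring, simply remove the comment markers to restore the script.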
Search Engine Optimization of JavaScript
1. Try to avoid possible JavaScript blocking for search engines
For correct indexing, the search bot must have access to all of a page’s external resources, including any elements whose loading may be required for full rendering. This requirement also exists for the safety of users: JavaScript’s capabilities, in particular the dynamic management of page content, are often abused, for example by showing one version of a page to a search bot and completely different content to real visitors. It is therefore especially important to make sure the search bot sees exactly what your visitors see.
If you are not sure whether a page is available for indexing by the search bot, use the live test in the URL Inspection tool in Google Search Console.
2. Consider the capabilities of search bots indexing JavaScript pages when planning your user journey and experience
If your scripts change the content of a page in response to user actions, bear in mind that the search bot will not trigger them: pop-up windows or tooltips that appear when scrolling down the page simply will not be displayed. While a real user’s view is limited by the screen size, the search bot renders the entire page at once, top to bottom, so scrolling and mouse actions never happen. A simple rule of thumb for passing a Google audit: any content implemented via scroll-tracking or similar user events must also be reachable without them, because the search bot simply does not activate those events and attributes.
A good illustration is lazy loading of images. The technique is convenient: it can drastically reduce the amount of data loaded and make the site faster. But since image loading is triggered by scrolling through the content, the images may simply be missing for the bot, and if they are not indexed, you lose one of the valuable channels for attracting search traffic through multimedia content. Lazy loading alone does not guarantee passing validation, either. One of Google’s main requirements for image loading is to load only what is in the visible area of the site. Say you have created a gallery where thumbnails of the next pictures are displayed under the main image, and all images are lazy-loaded. If your page keeps loading thumbnails when switching to the mobile interface, even though they will not actually be displayed on the device screen, the validation will fail.
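One widely supported way to lazy-load images that search bots handle well is the browser-native loading attribute (image paths here are placeholders):

```html
<!-- Main image in the visible area: load normally -->
<img src="/img/gallery-main.jpg" alt="Main gallery image">

<!-- Thumbnails further down the page: deferred until they near the viewport -->
<img src="/img/thumb-1.jpg" alt="Thumbnail 1" loading="lazy">
<img src="/img/thumb-2.jpg" alt="Thumbnail 2" loading="lazy">
```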
3. Avoid manipulating the page URL and generating links using JavaScript
Although Google does take links generated by JavaScript instructions into account when indexing, this is still considered an unreliable practice. Besides avoiding links created entirely through script elements, manipulate the address properties of the window object as rarely as possible. A common example is the onClick attribute of HTML elements, which calls a specified function after a given event: such a transition only complicates the search bot’s work, since it provides no context about the reason and logic of the navigation. Implement important internal links the reliable way, as standard HTML anchor elements.
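For example, compare a script-driven pseudo-link with a plain anchor (the URL is hypothetical):

```html
<!-- Unreliable: the search bot gets no crawlable URL or context -->
<span onclick="window.location.href='/catalog'">Catalog</span>

<!-- Reliable: a standard anchor element the bot can follow -->
<a href="/catalog">Catalog</a>
```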
4. Run a few basic checks if you notice that Google is not indexing pages
There may be very specific reasons for refusal to index some pages. Specifically, Google may postpone re-indexing in the following cases:
- Timeout on page load
- Errors when rendering the final version of the page
- Skipped critical external resources that are important for correct page rendering
- The content on this page has not passed basic Google testing for quality and reliability
- The page could not be accessed due to a lack of internal links or incorrect sitemap
Many factors can affect how quickly and how efficiently site content is indexed. Therefore, do not neglect regular baseline checks and real-time website health tracking.
5. Avoid generating metadata with JavaScript
Social media, dedicated messaging apps, and forums can be invaluable sources of awareness and traffic. People share interesting and useful content, so it is important to consider how key channels work with your pages: links to them should look equally good, reliable, and authoritative whether on the Google SERP or when someone sends a direct link in a group chat or Slack channel. What does JavaScript have to do with it? Different services receive information about a page in different ways. Twitter and Facebook each have their own structured markup schemes (Twitter Cards and Open Graph, respectively) that let them correctly load and display information about shared links. A problem arises if the relevant markup or information is generated by an executable JavaScript script: since these networks’ crawlers do not execute JavaScript, the link will be displayed without the markup.
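To be safe, such metadata should be present in the static HTML rather than injected by a script. A minimal sketch with Open Graph and Twitter Card tags (all values are placeholders):

```html
<head>
  <!-- Served in the initial HTML, so crawlers that don't run JS still see it -->
  <meta property="og:title" content="Example Article Title">
  <meta property="og:description" content="Short summary of the page.">
  <meta property="og:image" content="https://example.com/preview.jpg">
  <meta name="twitter:card" content="summary_large_image">
</head>
```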