Aug 31, 2023

Using lazy loading for JS files | Does it harm SEO results? | Angular application

We are building a custom Angular application with server-side rendering (SSR) and are currently facing issues with inconsistent Google PageSpeed results.

We have spent a significant amount of time optimising everything possible, but the results when testing with Google PageSpeed Insights/Lighthouse are still not at the optimal 90+ level, and they are very inconsistent.

In some cases the score may be 92 on one run and 75 on the next, with no obvious reason according to our dev team.

We continued researching and optimising our platform and came across an approach that significantly improved our score. However, we do not know whether this approach could potentially harm our SEO results.

Here is what we are doing:

1. We lazy-load our main JS chunk files
2. We download the JS files as soon as the user interacts with the website (see the sketch after this list)
3. We serve Googlebot the HTML and CSS and do not hide anything from the bot
4. We do not block the JS files in robots.txt
5. The whole logic is served directly from our servers
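
To illustrate point 2: a minimal sketch of the interaction-triggered loading we mean (the event list, chunk name and helper are illustrative placeholders, not our exact code):

// Illustrative sketch only: defer the main JS chunk until the first
// user interaction. The module path is a hypothetical placeholder.
let loaded = false;

async function loadMainChunk(): Promise<void> {
  if (loaded) return;
  loaded = true;
  // Dynamic import() makes the bundler emit a separate chunk that is
  // only fetched the first time this function runs.
  await import('./main-interactive');
}

// The first scroll, click, touch or keypress triggers the download.
for (const type of ['scroll', 'click', 'touchstart', 'keydown']) {
  window.addEventListener(type, loadMainChunk, { once: true, passive: true });
}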

The question:

Can the above-described approach potentially harm our SEO rankings?

Thank you for your help in advance!
This question is locked and replying has been disabled.
Recommended Answer
Aug 31, 2023
To add to Barry and John's advice, delaying core JS files will probably cause a worse user experience, as parts of the page will not be processed until after the user tries to interact with them.

Googlebot also does not interact with the page. So if that JS is involved in showing any important content, Googlebot will probably not be aware of it.
Last edited Aug 31, 2023
Original Poster csm54782 marked this as an answer
Recommended Answer
Aug 31, 2023
Hi csm54782

Point 2 sounds like a really bad idea.

All that's doing is fooling yourselves, because the lab test that PageSpeed Insights runs is purely a page-load snapshot.  Later interactions will only show up in the field results, which is why CLS, for example, is harder to debug: it often only manifests in the field, during interactions (e.g. as the user scrolls).

Doing anything that artificially makes the PageSpeed Insights results 'good' isn't going to stand up to scrutiny in the real world of different device sizes and locations.  You might find this thread an interesting overview of tools that attempt to do that...

You might find this link helpful, specifically about async JavaScript...

I'm not familiar with Angular, but here's an overview of how we get straight 100s...

Fonts
 
Keep the number of custom fonts to a minimum.  The more you have, the greater the load time.
 
Preload the fonts right at the top, served locally, using WOFF2 if you can tolerate the quality.
<link rel="preload" href="Fonts/myfont.woff2" as="font" crossorigin="anonymous">
 
We built custom fonts featuring only the glyphs we required (we wanted a TTF font for quality and this method made it the same size as a WOFF2 one).  FontSquirrel is a very good site that allows you to do that.  You probably won't need all the alternate glyphs, so just ditch them for a smaller file size.
 
If you only need a font on a specific page, just reference it when required (i.e. don't load a whole bunch off the home page unless every page needs them).
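 
For what it's worth, the matching @font-face rule for a locally served WOFF2 looks like this (the family name and file path are placeholders, pairing with the preload above):

/* Sketch: self-hosted subset font; 'MyFont' and the path are placeholders. */
@font-face {
  font-family: 'MyFont';
  src: url('Fonts/myfont.woff2') format('woff2');
  font-weight: 400;
  font-style: normal;
  font-display: swap;  /* show fallback text immediately, swap in when loaded */
}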
 
CSS
 
Ideally you want just one external CSS file, and it wants to be minified.  Our software (Dreamweaver) allows us to programmatically swap between external/internal files (as well as build pages based on cascading templates), so we have three external files during dev - toplevel, vert and horz.  Those are never minified during development (it's a PITA to work with minified files).
 
toplevel has all the bare minimum stuff that you have to have to get the layout to hold together and nothing more.  It's always parsed first.  In there we have definitions common to the <div>s for both horz and vert, @font-face and a few key variables.  This file is eventually minified and ultimately provided as an inline style for every page (so the external one is swapped with a minified inline one - saved to the highest-level template - when the pages are actually ready to go live).
 
vert has all the definitions common to vertical (mobile) styling.  It relies heavily on @media in order to match the layout to the correct viewport size (smallest to biggest, as that's how @media works).  I debated moving the very smallest layout from this into toplevel (so that the smallest layout was always rendered using the inline styles), but given we were getting 100 everywhere there was little point, and I felt it was best to keep the vertical stuff all together.  This loads second.
 
horz comes third and deals with the desktops, again with heavy use of @media to control sizing.
 
When we are ready to go, we combine vert+horz to form the one external CSS, minified. That's our one external file.
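 
As a sketch of that smallest-to-biggest @media layering (the breakpoints and selectors here are illustrative, not our real ones):

/* toplevel/default: the narrowest layout, always applied first */
.gallery { display: grid; grid-template-columns: 1fr; }

/* vert: larger phones and small tablets */
@media (min-width: 480px) {
  .gallery { grid-template-columns: 1fr 1fr; }
}

/* horz: desktops, parsed last */
@media (min-width: 1024px) {
  .gallery { grid-template-columns: repeat(4, 1fr); }
}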
 
ANY CSS for anything specific to one page should be inline on that page.  Don't bulk out (aka slow down) the external CSS with things that only one page uses - put that stuff inline on those pages only.
 
Javascript
 
Not all of our pages use JavaScript, and the ones that do simply use it to swap out pictures/content and/or to run videos.  Accordingly we have no external JavaScript at all.  Any page requiring that functionality has the code in the page at the end (so it loads last and can't hang up the process).  I admit it helps a lot that I'm a programmer.
 
In essence, don't rely on large external libraries when you are using only a tiny fraction of their content.
 
The principle here is: if you can rely on @media and CSS to style your page, you end up with less need to rely so heavily on JavaScript.  There is no doubt that any external library JavaScript is slow to load.  There are async methods you can look at, as I referenced above, but analyse what you are doing and, if it can be coded in the page, avoid the need in the first place.
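 
As a sketch of that in-page, end-of-body approach (the ID and image path are placeholders):

<!-- Placed just before </body> so it parses last and can't hold up rendering. -->
<script>
  // Tiny page-specific script: swap a picture on click, no library needed.
  document.getElementById('hero').addEventListener('click', function (event) {
    event.currentTarget.src = 'images/hero-alt.jpg';
  });
</script>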
 
Pictures and Video
 
HTML5 offers <picture> and <video> to serve correctly sized content to the viewport.  You should make a number of different-sized images/videos and provide the correct one based on the viewport.  With pictures (certainly the small ones) you need to strip the metadata, as that bulks out the file size.  In all cases you need to compress each image to a quality you find acceptable and no bigger.
 
What all that typically means is that we serve a 9 KB picture for the smallest mobile and a 100 KB version for 1080p.  NEVER simply scale a big picture down to a small viewport, as forcing a huge picture down a slow 3G pipe is asking for trouble.
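 
A sketch of the <picture> pattern (file names and breakpoints are illustrative):

<picture>
  <source media="(min-width: 1024px)" srcset="images/hero-1080p.jpg">
  <source media="(min-width: 480px)" srcset="images/hero-tablet.jpg">
  <!-- Fallback is the smallest file, so small mobiles never fetch the big one. -->
  <img src="images/hero-mobile.jpg" alt="Hero" width="320" height="180">
</picture>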
 
Server Compression
 
Serve everything natively compressed (nginx or similar).  If your clients are in diverse locations, look at using a CDN to get them nearer to the content (and therefore faster delivery).
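 
As a minimal nginx sketch (stock gzip module; the values are illustrative, tune to taste):

# Compress text assets on the fly (HTML is compressed by default).
gzip on;
gzip_types text/css application/javascript image/svg+xml;
gzip_min_length 1024;   # skip tiny responses where compression isn't worth it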
 
Minification
 
Yes for the CSS, as detailed above.  I personally (with server compression enabled) found little benefit in minifying anything else (and it makes things way harder to manage).
 
Cache Age
 
Use something like Header set Cache-Control "max-age=31536000, public" (in an .htaccess file) to ensure all of the assets that are downloaded (that won't realistically change in the session) are served with a header that ensures they are only pulled from the server once.
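 
In context, that directive might sit in an .htaccess block like this (assuming Apache with mod_headers; the extension list is illustrative):

<IfModule mod_headers.c>
  <FilesMatch "\.(woff2|css|js|jpg|png|svg)$">
    Header set Cache-Control "max-age=31536000, public"
  </FilesMatch>
</IfModule>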
 
As I said,  maybe none of that applies in your environment, but there could be a few nuggets there that might help.

In conclusion, I'm very much with barryhunter here: unless you have the time and resources (which I had), the 'bang for buck' you get from this kind of optimisation typically isn't worth the supreme effort required ;-)
 
Last edited Aug 31, 2023
Original Poster csm54782 marked this as an answer
Recommended Answer
Aug 31, 2023
Well, it seems like it might affect FID (https://web.dev/fid/) and/or the newer INP metric.

I.e. it adds delay when the user interacts with the page.


I would caution against getting 'over-fixated' on the 'page speed' score. It's just one metric.

It's possible to target one metric at the expense of others (like in this example: while you might in theory get faster loading, you might end up making other things worse).

... it's why Core Web Vitals tries to incorporate a range of metrics, NOT just the initial loading.
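
If you want to watch the whole range rather than one score, the open-source web-vitals library reports the field metrics directly (a sketch, assuming v3+ which exports onINP):

// Sketch using the web-vitals library (v3+ exports onINP).
import { onLCP, onCLS, onINP } from 'web-vitals';

// Log each metric as it finalises; in production you'd beacon these
// to an analytics endpoint instead of the console.
onLCP(console.log);
onCLS(console.log);
onINP(console.log);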



Or to put it another way: it's possible that your page speed score of 75 is still giving good all-round CWV metrics, such that you get the 'Good Experience' status.
... and in your bid to get a higher score, you could potentially affect some CWV metrics such that your pages would no longer be considered Good.


Original Poster csm54782 marked this as an answer
Sep 1, 2023
We've implemented partial hydration, and our content doesn't depend on JavaScript. We hydrate the components that need JavaScript to be interactive only when some event happens (e.g. scroll, click, move). Most of our interactive components are handled by CSS.

In other words, we don't need those Angular chunks to be downloaded initially, because our content is already shown on the page.

We are thinking of a way to download the required chunks only when the user is about to interact with the page, which we suspect could affect SEO, and we are looking for other opinions on this approach.

Do you have any ideas on how we can achieve this?
Sep 1, 2023
That sounds fine. Make sure this delayed loading does not cause a bad user experience, like slow response times to interactions, layout shifts as users scroll, or images still loading as they come into view.

As PSI does not interact with the page, you can't trust its results here. I think with WebPageTest you can have it run some JavaScript to fake interactions, and therefore get more valid results.

Also, do not delay the loading of anything that generates metadata or structured data. You want the search engines to see that.