How to Improve the Magento 2 PageSpeed Insights Score
Let's say you have been given the task of improving the performance of a site you still know almost nothing about. For example, a new client comes to you with a specific problem: their website runs slowly. The site, of course, is on Magento 2.
Our team set out to assemble a set of universal solutions suitable for most projects. The requirements: each solution should carry a minimal estimate and be as automated as possible, so the budget stays on track. The solutions should also require only minimal project knowledge, so that an engineer outside the project context can apply them.
Google PageSpeed Insights
We use Google PageSpeed Insights to evaluate the results of our work.
It is a set of scripts for measuring metrics, but not only that. First of all, it is the tool the client uses to confirm their feeling that the site is slow. That means the same tool can be used to show the client how effective your performance work has been. Yes, there may be purely subjective impressions of speed ("It works fast for me"), but numbers are better.
But the benefits of Google PageSpeed Insights do not end there. Besides measuring and visually demonstrating the site's performance, it also gives recommendations on how to improve it.
Google PageSpeed Insights' performance score is a summary of six main metrics:
- First Contentful Paint: marks when the first text or image is painted.
- First Meaningful Paint: measures when the primary content of a page is visible.
- Speed Index: shows how quickly the contents of a page are visibly populated.
- Time to Interactive: the amount of time it takes for the page to become fully interactive.
- First CPU Idle: marks the first time the page's main thread is quiet enough to handle input.
- Max Potential First Input Delay: the maximum First Input Delay your users could experience, i.e., the duration of the longest task in milliseconds.
How do we see this process?
Quite often, a project at a client goes through a lifecycle like this.
At the start, Magento is spun up; out of the box its performance is tolerable. Next, the nth number of modules and a purchased theme are installed, and performance sags. At the go-live stage, the volume of content grows: many products and pages, a cloud of widgets on the home page, and a wall of categories in the main navigation menu (read: nodes in the DOM). Performance collapses. After go-live, the team exhales, sees that the site works slowly, spends a week "doing something about it," and slightly improves the numbers, but never gets back even to where they were at the beginning of development.
The second line of this picture is the hours that, with this approach, were spent on improving performance.
How would we like to see this process?
We would like performance to stay as high as possible at the start of the project and during development. Naturally, a slight drawdown is allowed, especially around go-live, but after relatively little effort the indicators should return to normal.
But this requires a small investment (up to 8 hours) at the start of the project and during development.
This article is about what can be done during those 8 hours at the start of a project. It doesn't matter whether you are starting the project from scratch or it has just landed at your company: the task is to raise baseline performance as much as possible and keep it there.
Prerequisites: measure performance
The main rule: to draw any conclusions, you must measure performance.
It's great that there are tools that let you do this automatically. Even better if you have CI/CD configured: you can install Lighthouse CI, an npm package that can run on pushes and deployments. It reports on every pull request, so you can see exactly which pull request degrades performance.
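As a sketch, a minimal `lighthouserc.json` for Lighthouse CI might look like the following (the URL and the 0.8 performance threshold are placeholder assumptions, not project values):

```json
{
  "ci": {
    "collect": {
      "url": ["https://staging.example.com/"],
      "numberOfRuns": 3
    },
    "assert": {
      "assertions": {
        "categories:performance": ["error", { "minScore": 0.8 }]
      }
    },
    "upload": {
      "target": "temporary-public-storage"
    }
  }
}
```

With a config like this, a CI step running `lhci autorun` would fail the build whenever the performance score drops below the threshold, which is exactly the "which pull request broke it" signal described above.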
Say you installed a module that added 150 KB of UI scripts on every page, even though it is just a store locator. It's better to avoid the situation where you install a store locator without measuring anything, build your in-store pickup checkout on top of it, and only later discover that it is slow, so now you have to cut the whole thing out with a checkout already built on it. Far better to get a notification about the regression immediately and fix it right away.
So, the first thing to start is setting up the performance measurement.
What does Google PageSpeed recommend to us?
To evaluate performance, Google Page Speed Insights measures many metrics and also evaluates some site parameters that it believes can significantly affect these metrics. For each such parameter, there are recommendations on how to improve it if it suddenly turns out to be in the red (and not only in the red) zone. Sometimes these are general recommendations; sometimes they are personalized for a specific platform. Let's go through them.
Do not upload images that are not visible on the first screen
The idea: the browser should not load images the user will not see on the first screen. They need to be loaded only when the user scrolls to them, or when a slider starts cycling through them. The user may never scroll that far, and there is no point in wasting network and browser resources downloading them.
There are several solutions to this problem:
The old way: go to the marketplace, find a lazy-load extension, and install it.
The new way: since HTML5 supports lazy loading of images natively, you just need to add the loading="lazy" attribute to the img tag.
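For example (the image paths here are placeholders):

```html
<!-- Below-the-fold image: the browser defers loading until the user scrolls near it. -->
<img src="media/catalog/product/sku-123.jpg" loading="lazy"
     width="300" height="300" alt="Product image">

<!-- First-screen hero image: load it eagerly (eager is also the default). -->
<img src="media/home-banner.jpg" loading="eager" alt="Home page banner">
```

Note the explicit width and height attributes: they let the browser reserve space for the deferred image and avoid layout shifts.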
Reduce server response time
The most painful topic. Leaving out the details, Magento is slow.
The reason most often mentioned for this problem is the large number of modules, not all of which the application is guaranteed to use. So one way to improve this metric is to turn off unused modules. You can find several talks on this topic on various sites; it was even mentioned at MageConf. There are many lists on the net of modules that can be disabled if, for example, you are not using MSI or GraphQL. Lots of solutions.
The effect is pronounced: the more modules we cut out, the fewer configs are collected, and the whole thing works faster. The more modules you cut, the bigger the gain.
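A minimal sketch of this workflow with the standard Magento CLI (the module name below is only an example; always check your own project's list before disabling anything):

```shell
# List the modules currently enabled.
bin/magento module:status --enabled

# Example: if the project does not expose a GraphQL API, disable that module.
bin/magento module:disable Magento_GraphQl

# Recompile generated code and flush caches after changing the module list.
bin/magento setup:di:compile
bin/magento cache:flush
```

Run this on a staging environment first and re-measure: disabling a module another module depends on will fail fast here, which is much cheaper than finding out in production.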
Preload Key Request
There are resources on the site that you may need from the very beginning of the page loading. It would be great if they were already loaded from the start and not delayed until the moment they appear in the DOM.
The solution: find these resources and add them to the layout XML via a link element with the relation preload.
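As a sketch, a theme-level layout file could declare such a link (the vendor, theme, and font path are placeholders, and attribute support on head assets varies by Magento version, so verify against your installation):

```xml
<!-- app/design/frontend/<Vendor>/<theme>/Magento_Theme/layout/default_head_blocks.xml -->
<page xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      xsi:noNamespaceSchemaLocation="urn:magento:framework:View/Layout/etc/page_configuration.xsd">
    <head>
        <!-- Preload a font that every page needs from the very first render. -->
        <link src="fonts/opensans/light/opensans-300.woff2"
              rel="preload" as="font" crossorigin="anonymous"/>
    </head>
</page>
```

Fonts and hero images referenced from CSS are the usual candidates, since the browser discovers them late without a preload hint.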
On web.dev, where this mechanism is described, you can read how it works. Or you can look at the Google PageSpeed Insights report to see which resources are worth adding to preload.
Terser compresses better, but the gain is small, so I don't see the point of using it just for this. Terser will be useful to us a little later; you can find it on GitHub.
There are other options if you want to avoid installing and configuring Terser.
Remove unused CSS
As soon as we enable Merge CSS, Lighthouse starts complaining about it: we now have one large file with all the CSS of the entire site, and on each specific page most of it is unused, so Lighthouse effectively diagnoses that the merged CSS should be removed.
But with Merge CSS turned off, the scores are lower. Therefore, as with render-blocking resources, we ignore this recommendation.
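For reference, Merge CSS can be toggled from the command line as well as from the admin panel (the config path below is the standard developer setting; as usual, test the change on staging first):

```shell
# Enable CSS file merging (0 turns it back off).
bin/magento config:set dev/css/merge_css_files 1
bin/magento cache:flush
```

This makes it easy to A/B the setting: flip it, re-run your Lighthouse measurement, and keep whichever state scores better on your pages.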
Core Web Vitals
Google Page Speed Insights measures metrics in two conceptually different contexts.
Lab Data: measurements made in the laboratory, that is, on standardized hardware with particular characteristics and the latest version of Google Chrome installed. The improvement recommendations are based on these metrics.
Field Data: almost the same parameters, but measured on the devices of real users.
Naturally, these data can be very different from each other. And accordingly, the effect of applying these recommendations can be very different.
For example, suppose most customers use the site through Internet Explorer (say, government agencies where outdated software is still in use). In that case, they will not notice that you added the loading="lazy" attribute to images, because that browser does not support it.
By default, we assume these recommendations apply only to lab data. Improving performance for real customers may require extra time to analyze their devices.
So, we looked at some simple and reliable ways to improve site performance so that Google PageSpeed Insights "likes" it too.
If you have other ways to solve the problem raised here, write in the comments.