1 Intro
Before decoding the pagespeed score, you need to know the significance of page speed and its impact on your business. If your site performs better, it can engage and retain users better than before.
When Pinterest reduced its website's perceived wait times by 40%, its search engine traffic and sign-ups increased by 15%.
The BBC found that it lost an additional 10% of users for every additional second its site took to load.
For Mobify, every 100ms reduction in homepage load time improved conversions by 1.11%, yielding an additional $380,000 in annual revenue, and every 100ms decrease in checkout page load time increased conversions by 1.55%, yielding an additional $530,000 in annual revenue.
When it comes to user experience, speed matters. If your website loads slowly, you're losing business; more speed brings in more revenue. And since page speed is an important SEO factor, a slower website means lower rankings, which in turn means lower conversion rates. So if you're not investing your time in pagespeed improvements, you're making a costly mistake.
If your website takes 6s to load its visible area or to become interactive, the probability of the user bouncing increases by 106%. Google's official PageSpeed Insights tool, along with Chrome developer tools like Lighthouse and the Coverage tab, can help us find pagespeed issues and troubleshoot them.
You're probably wondering, 'How can I get a 100/100 score with PageSpeed Insights?' Slow down. First, you need to know how to use the tool properly and how to implement the suggestions it makes; otherwise you might not see visible performance improvements on your site. You should also understand what pagespeed is and what it isn't.
Generally, pagespeed is a score given out of 100. The PageSpeed Insights tool, which is powered by a performance tool called Lighthouse, analyzes certain performance metrics of your website and gives it a pagespeed score between 0 and 100. Below the score, the tool offers Opportunities and Diagnostics reports. Even if we improve our site on those opportunities, they do not directly contribute to the performance score. In a nutshell, the pagespeed score on its own is not a true indicator of a site's loading time.
In contrast to the pagespeed score, load time is a direct measure of the time a website takes to load in the browser window; it's not a calculated score. But unlike the pagespeed score, a plain load time value won't give us any insight into how to improve our website's load time.
So in reality, the PageSpeed Insights tool alone isn't enough. We need to constantly evaluate both pagespeed and load time in parallel to find opportunities, diagnose issues and check the actual load time gains.
To interpret and understand the PageSpeed Insights report, you need to know how it works. The report provides us with Field data, an Origin summary and Lab data.
Field data is real-world performance data collected from users who loaded this particular page over the previous 28 days.
The Origin summary is the aggregate experience of your users across all pages of the domain over the previous 28 days. If your site doesn't already receive a significant number of visitors, Google can't collect this field data and origin summary, and these two sections won't appear.
Lab data is collected at the time of the scan. It shows us the performance issues, solutions to troubleshoot those issues and opportunities to improve page speed.
But these individual metrics are not weighted equally when deriving the performance score. In the Lighthouse scoring calculator, we can see the weightage of individual factors like Largest Contentful Paint, Total Blocking Time and so on. This weightage varies between versions of the calculator, so your pagespeed score may change as Google releases new versions.
Coming to the overall pagespeed score, if your score is from
0 to 49, it is shown in red and considered a poor score.
50 to 89, it is shown in orange, which is an average score.
90 to 100, it is shown in green, which is a good score.
To start with pagespeed optimization, enter your URL in the PageSpeed Insights tool and press enter. Once the analysis is over, you'll be shown the mobile score by default; you can switch to the desktop tab to see that score. For the same page, the pagespeed score is calculated with different weightages for mobile and desktop. Not only that, while collecting lab data for mobile the simulated internet connection is throttled to 3G speed, whereas for the desktop analysis the simulated connection speed is equivalent to a cable internet connection. That's why the desktop pagespeed score will almost always be better than the mobile score.
Pagespeed report consists of 3 sections.
Opportunities – These suggestions can point you in the right direction to make your website load faster. But these opportunities don't directly affect your performance score; only if the timings of the six weighted lab metrics improve will the pagespeed score improve.
Diagnostics – This shows recommendations on best practices that should be considered. Even if corrected, they might not improve your load time.
Passed Audits – These are the things which are already implemented well and don’t need our attention anymore.
Now you'll be eager to get a 100/100 pagespeed score. Well, let's set our expectations realistically.
It's not easy to get 100/100 for a mobile pagespeed score. It needs a superfast server, a pagespeed optimized website, a superfast CDN, and lightweight, pagespeed optimized themes, scripts and frontend plugins.
With a bit of effort, you can get a score above 90 for desktop. But only with a lightweight design and a decent hosting plan can you even think about aiming for such scores on mobile.
If you use a lightweight, pagespeed optimized theme, our pagespeed optimization techniques can fetch you good scores. If you're using a drag and drop builder based theme like Divi or Elementor, expect only an average score.
But don't lose hope. At the end of the day, the real indicator of performance is load time, not the pagespeed score. So if your website loads in under 3s on a normal 3G connection on a mobile device, you don't need to worry about your pagespeed score.
2 How to Improve Pagespeed score
In the page web.dev/learn you can see resources to learn about
Web vitals – essential metrics for a healthy site
Measuring performance and user experience of your site
Techniques to improve your site’s performance – which is what we are concerned with in this lecture.
If you’re a web developer then you can make use of all the collections that are published in this website.
Moving on to the Fast Load times collection. I shall add a link to this page as an external resource to this lecture.
Whatever platform your website is on, the things that need optimization for faster page loading stay the same.
Starting with image optimization:
Use compressed images
Choose the right image format
Replace animated GIFs with videos
Serve responsive images
Use the WebP format
Use CDNs for images
Lazy-load images and videos
Optimizing JavaScript
Minify JavaScripts
Preload Critical JavaScript
Lazyload non critical JavaScript
Remove unused JavaScript
Optimizing CSS
Minify CSS
Defer non critical CSS and extract critical CSS
Remove unused CSS
Optimize fonts
Cache static resources. If your site's content is dynamically populated, try to cache the generated content as static files
Add CDN to your site
Optimize server load and Reduce server response time
But if you're not a web developer, or you're not comfortable coding your own site, there are certain things we need to be aware of. When it comes to removing CSS and JavaScript, we can find the unused CSS and JavaScript with a bit of effort, but to remove them from your code you need coding knowledge, or you may break your site.
If we can remove or deactivate unnecessary or unused plugins and scripts, we can reduce the page loading time by a large margin; still, if you're using site builder plugins or drag and drop based themes, you'll end up with a lot of unused CSS.
The seven steps above are guidelines to improve the pagespeed of a website built on any platform. Since there are multiple content management systems out there, and a few websites are custom designed from scratch, it's not feasible to do a hands-on pagespeed optimization tutorial for all of them. Among the top 10 million most visited sites that use a CMS, WordPress holds about 64% of the market. So after we discuss the detailed process of these seven steps of pagespeed optimization, I'll demonstrate the hands-on pagespeed optimization tutorial for the WordPress CMS alone. Even without a hands-on tutorial for other content management systems, our in-depth discussion of these seven steps will help you Google the implementation part easily.
So, let’s start with Image optimization and image compression in the next lecture.
3 How to Optimize the images that you use on your website to improve pagespeed SEO
Optimizing images is often the best place to start pagespeed optimization, because the impact of unoptimized images can be severe on both pagespeed and load time. Unoptimized images also take more bandwidth to download, so the effect is even more severe for users with slower internet connections.
First, let's start with image compression. When we capture an image with a smartphone or camera, its file size typically varies from 4MB to 10MB, at high resolutions like 12 to 16MP. Most images published on the web have a resolution of less than 2MP, so a typical compressed web image should be around 500KB.
But we can losslessly compress those images to 75% of their size or even less; lossless compression involves no loss in quality. With lossy compression we can reduce the file size to 50% or less with minimal to near-zero difference in perceived quality. You can use an ImageMagick script to reduce image resolution, change the image format or cut the image size. Let me add a link to it as an external resource to this lecture.
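As a minimal sketch of what such a script does, assuming ImageMagick is installed (photo.jpg is a hypothetical file name), one command can resize and recompress an image in a single pass:

convert photo.jpg -resize 1200x -quality 80 photo-optimized.jpg

Here -resize 1200x scales the image down to 1200px wide while keeping the aspect ratio, and -quality 80 applies lossy JPEG compression that is usually indistinguishable from the original to the eye.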
But if you're using a CMS, i.e. a content management system like WordPress, you can easily find lots of plugins like TinyPNG, Smush, Optimole or LiteSpeed Cache to compress images and optimize them further.
The second issue with images is using larger images than what's displayed. For example, a web page may display an image at 300 x 300px, but if the original image has a 500 x 500px resolution and is resized using CSS, the browser unnecessarily downloads the larger image, resizes it to the smaller size and displays it to the user. Instead, use properly sized images. If you're displaying higher resolution images for desktop users and smaller images for mobile users, use responsive images with 3 to 5 sizes to load the appropriate image size for each user.
For this you can use either media queries or the image-set() function in stylesheets to match the user's screen size in pixels and deliver appropriately sized images. While the media query is an older technique supported by all browsers, image-set() is a newer CSS function and may not be supported by all browsers.
Let's see an example of how to use a media query to deliver 2 differently sized images:
For mobile devices, you can use this code
@media (max-width: 480px) { body { background-image: url(images/background-mobile.jpg); } }
For desktop devices, you can use this code
@media (min-width: 481px) and (max-width: 1024px) { body { background-image: url(images/background-desktop.jpg); } }
Here, mobile devices with a maximum screen width of 480px are delivered a smaller background image, whereas desktop devices with screen widths from 481px to 1024px are delivered a bigger background image.
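Media queries handle background images; for content images, the standard responsive-image technique is the srcset attribute. Here's a minimal sketch, where the file names and widths are hypothetical:

<img src="photo-800.jpg"
     srcset="photo-400.jpg 400w, photo-800.jpg 800w, photo-1600.jpg 1600w"
     sizes="(max-width: 480px) 100vw, 800px"
     alt="Example photo">

The browser picks the smallest file that still looks sharp for the current screen width and pixel density, so mobile users never download the 1600px version.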
Images are of 2 types: vector and raster.
Vector images use lines, points and polygons to represent an image. They're best suited for images with simple geometric shapes like logos, text or icons, and they stay sharp at any resolution or zoom level. Their file format is typically .SVG. Since the file size doesn't grow with resolution, they're also best suited for display on high resolution screens.
When it comes to compressing vector images in .SVG format: an SVG file contains a lot of metadata, such as layer information, comments and XML namespaces, that is often unnecessary for rendering in the browser. So it's always good to minify your SVG files using the SVGO tool. Since SVG is an XML-based format, you can apply GZIP compression too, which is like zipping a text file; so ensure your servers are configured to apply GZIP compression to SVG assets.
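As an illustrative sketch of running SVGO from the command line (logo.svg is a hypothetical file name):

npx svgo logo.svg -o logo.min.svg

This strips editor metadata, comments and redundant attributes from the SVG without changing how it renders.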
So prefer vector images whenever possible, as they're resolution and scale independent and always deliver sharp images, which makes them a perfect fit for a multi-device, high resolution world. If you already have images in .png format that are not photos taken with a camera but were created entirely with vector software like Illustrator, you can convert those .png files to .svg using online converters, then minify them, apply GZIP compression and use them in .svg format, i.e. as vector images, rather than in .png format, a.k.a. raster images. This way you can serve them to devices of different sizes without any loss in quality.
Raster images contain individual values for each pixel within a rectangular grid. They're best suited for complicated scenes like photos. Their most common file formats are .jpg, .png and .WebP. Raster images appear jagged and blurry as you zoom in; that's why we need multiple sizes of images at various resolutions for different types of devices.
Higher resolution images have larger file sizes. For example, if a 100x100px image weighs 40KB, a 300x300px image holds nine times as many pixels and so weighs roughly nine times as much, around 360KB. In cases where you cannot opt for vector images, a raster image is required; but while using raster images, serve responsive images.
To serve responsive images you can use the Sharp npm package or the ImageMagick CLI tool. You can also try services like Thumbor and Cloudinary, which provide responsive images on demand. For a detailed guide to implementing responsive images on your website, use the web.dev guide in the external resources. Like I said earlier, plugins like Smush and Optimole in WordPress can give you these functions.
WebP images will usually be far smaller than older image formats like JPG or PNG, so prefer the new generation WebP format for raster images. You can convert images to WebP using the cwebp command line tool, which is a good choice if you need to convert images once, or the Imagemin WebP plugin, which is usually the best choice if your project uses build tools or build scripts. If you're using a CMS like WordPress, you can use the WebP Express, Optimole or LiteSpeed Cache plugins. Else, you can use the guide in the external resources to convert images to WebP.
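A minimal cwebp sketch, assuming the tool is installed (photo.jpg is a hypothetical source file):

cwebp -q 80 photo.jpg -o photo.webp

The -q flag sets the lossy quality factor from 0 to 100; values around 75 to 85 are a common starting point that keeps quality high while cutting the file size substantially.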
You might've come across an animated GIF on Imgur or Gfycat; if you inspected it, you would have found that the 'GIF' was really a video. The reason is that a GIF file can be huge compared to an equivalent video. So if you're using GIFs, with little to no effort you can convert them to video and realize huge gains. MP4 is a widely supported video format, whereas WebM is a relatively new format that can produce much smaller videos than MP4. You can use FFmpeg to convert a GIF into MP4 or WebM; I shall attach a link to it in the external resources of this lecture. To replicate the properties of a GIF in a video, make the converted video play automatically and loop continuously; the video will obviously be silent if you're converting it from a GIF, otherwise mute it. The final code will look like this
<video autoplay loop muted playsinline>
  <source src="my-animation.webm" type="video/webm">
  <source src="my-animation.mp4" type="video/mp4">
</video>
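For the conversion itself, here is a hedged FFmpeg sketch (my-animation.gif is the hypothetical input; the flags reflect common usage and may need tuning for your footage):

ffmpeg -i my-animation.gif -c:v libx264 -pix_fmt yuv420p -movflags +faststart my-animation.mp4
ffmpeg -i my-animation.gif -c:v libvpx-vp9 -b:v 0 -crf 35 my-animation.webm

The first command produces a broadly compatible MP4 (yuv420p avoids playback problems in some browsers, and +faststart lets playback begin before the whole file downloads); the second produces a smaller VP9 WebM.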
Finally, try to automate all this stuff. You can invest your time and money into automated tools and infrastructure that keep your image assets optimized at all times. You can achieve 40 to 80% savings with image optimization using build scripts, but in practice it's easier with image CDNs, i.e. image content delivery networks. They're excellent at optimizing images: they specialize in the transformation, optimization and delivery of images, they take care of parameters like the size, format and quality of the image to be displayed, and they create new versions of images as they're needed, in real time. This reduces your server load and increases your pagespeed.
An image loaded via an image CDN can have a URL like https://example.com/dog.jpg?key=asD8hD&quality=auto&size=300w300h&format=webp
Though image CDNs offer hundreds of transformations, the most important ones are size, pixel density, format and compression; these four account for the huge savings in image file size. A CDN can automatically determine which image format to serve to which user. For example, it may serve JPEG XR to the Edge browser, WebP to Chrome and plain JPEG to a very old browser; that's why auto settings in CDNs are so popular these days. Although self-managed CDNs like Thumbor are available, owing to the engineering staff they need and their limited capabilities, I would suggest you prefer 3rd party image CDNs. ImageKit.io, BunnyCDN and Cloudinary are some 3rd party CDNs that fit the needs of most users. If you want to try them, BunnyCDN offers a free trial period, after which you need a premium plan, whereas ImageKit and Cloudinary offer free plans for a limited amount of traffic. I have tried both of these on my WordPress websites to automate most of the image optimizations that we've discussed in this lecture.
4 How to Optimize JavaScript for the best Page speed SEO
JavaScript is the programming language of the web; it can update and change both HTML and CSS. It was introduced in 1995 to add programs to web pages in the Netscape Navigator browser. With JavaScript, developers can build modern web applications that interact directly with users without reloading the page every time.
For a website, the 'above the fold' area denotes the part of the site that appears on the screen before scrolling down. When it comes to pagespeed, you need to make your website appear almost instantly in this above the fold area.
To implement that you need to follow the PRPL pattern.
Push or preload the most important resources
Render the first paint as soon as possible
Pre-cache remaining assets
Lazy load other routes and non critical assets
By preloading a resource, you're telling the browser that you want to fetch it sooner than the browser would otherwise discover it. By adding rel="preload" to a link tag in the HTML document, we ask the browser to give that resource a higher priority and download it sooner.
This preload request doesn't execute the JavaScript or CSS; a preloaded resource is just cached by the browser, so it's available immediately when needed. Let me add a link to the guide on preloading critical assets to the external resources.
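A minimal sketch of preload hints in the document head (critical.js and hero.woff2 are hypothetical file names):

<link rel="preload" href="critical.js" as="script">
<link rel="preload" href="hero.woff2" as="font" type="font/woff2" crossorigin>

The as attribute tells the browser what kind of resource it's fetching so it can assign the right priority; fonts additionally need the crossorigin attribute, even when served from your own domain.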
In many cases, certain resources are referenced inside JavaScript code and must be downloaded before that script executes; these resources end up delaying the first render of your website. The first render, or start render, is the moment when something actually appears to the eyes of the user. It can only be measured by recording a video of the site loading in the browser, so you can check this 'start render' time only on webpagetest.org, not in the PageSpeed Insights tool. That 'something' which appears first can be meaningful text, or just a simple background color or an image. Hence, by preloading those resources needed in the 'above the fold' section of your site, you can reduce the first render time, i.e. the perceived initial load time.
Render the First Contentful Paint as soon as possible: First Contentful Paint is the moment when your website starts rendering some meaningful text or content in the user's browser. This time can be fetched from the browser itself, so it's available in PageSpeed Insights.
When a browser loads a website, it first receives the site's HTML document. But that HTML document may link to other resources like images, CSS and JavaScript, which the browser discovers only while parsing the HTML. So even after downloading the HTML document, the browser cannot render the content until these necessary resources are downloaded and parsed; only then can it render the page. The time taken until this moment becomes the First Contentful Paint time, and the resources that block the rendering of the HTML document are called render blocking resources. Google's PageSpeed Insights can identify these resources and show a warning. If any JavaScript file is among those necessary resources, downloading and parsing it increases the First Contentful Paint time tremendously, as JavaScript is generally larger and takes longer to parse than HTML or CSS.
Authors and developers of non-pagespeed-optimized websites include such render blocking resources inside the head section of the HTML document, since those scripts need to load first in order to display the website correctly to the end user.
To counter those render blocking resources, you need to extract the critical JavaScript from them, where critical JavaScript is what's strictly needed in the 'above the fold' area of the website. Once the critical JavaScript is extracted, there are 2 approaches to optimizing it.
The first approach is to inline critical JavaScript in the HTML document. This can significantly improve the First Contentful Paint time. To optimize the inlined critical JavaScript further, you can opt to run it once the DOM is ready, meaning the entire HTML document has been downloaded and parsed. In simple words, run the critical JavaScript once the document is fully loaded.
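A minimal sketch of this approach; the class name is a hypothetical placeholder for whatever your critical code does:

<script>
  // Inlined critical JavaScript: no extra network request is needed.
  document.addEventListener('DOMContentLoaded', function () {
    // Runs once the HTML document is fully parsed.
    document.body.classList.add('js-ready');
  });
</script>

Because the script is inlined, the browser doesn't stall on a network fetch, and wrapping the work in DOMContentLoaded defers execution until parsing is done.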
The second approach is to server-side render the 'above the fold' HTML of your page. That is, the 'above the fold' section of the HTML document is rendered along with its necessary resources on the server itself, and that rendered HTML is cached there. When a browser requests your site, your server serves the cached, pre-rendered 'above the fold' HTML, which can be downloaded and parsed quickly in the user's browser. This displays your site's 'above the fold' content immediately while the non-critical resources are still being fetched, parsed and executed. However, there are 2 downsides.
1. This can tremendously increase the rendered ‘above the fold’ HTML file size.
2. The rendered ‘above the fold’ section of the site won’t be interactive to the user until the necessary Javascripts are also downloaded and parsed, which will increase the ‘Time to interactive’ factor in pagespeed insights tool. ‘Time to interactive’ means the time taken by your site to respond to a user’s input.
Well, there is no single correct solution for improving first paint time. Try both inlining and server-side rendering, and go with the one whose benefits outweigh the tradeoffs. This is a complex concept that may need further reading and research if you want to implement it yourself, so let me provide some external links to guide your implementation and understanding.
To avoid these complications with the First Contentful Paint time, try not to use any render blocking JavaScript in the 'above the fold' section of your site; if you can do that, you don't have to extract critical JavaScript at all. Also try not to complicate the design of the 'above the fold' section with heavy CSS effects, as that increases the size of the critical CSS. We will discuss critical CSS in the next lecture.
Precache assets – Instead of fetching assets like CSS and JavaScript from the server every time, a script called a service worker can serve them directly from a cache on repeat visits. This not only allows users to use your application when they're offline, but also results in faster page loads on repeat visits. A service worker runs in the browser and is controlled by JavaScript code that you need to write and deploy alongside your website's other JavaScript files.
Well, you can use a 3rd party library to simplify the process of generating a service worker. One such toolkit, built on top of the service worker and Cache Storage APIs, is Workbox. Workbox provides a collection of tools that allow you to create and maintain a service worker to cache assets. For a detailed guide on service workers, let me add some links to the external resources.
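A minimal service worker sketch using Workbox from its CDN; the version number and caching choices are illustrative assumptions, not a definitive setup:

// sw.js
importScripts('https://storage.googleapis.com/workbox-cdn/releases/6.5.4/workbox-sw.js');

// Serve CSS and JS from the cache when available, refreshing them in the background.
workbox.routing.registerRoute(
  ({ request }) => request.destination === 'style' || request.destination === 'script',
  new workbox.strategies.StaleWhileRevalidate()
);

Then register it once from your page's JavaScript:

if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('/sw.js');
}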
Lastly, you need to defer loading the non-critical JavaScript files, which is also called lazy loading. Non-critical JavaScript is what's needed only in the 'below the fold' area of the website, or what's not that important and can be loaded later. To defer-load a JavaScript file, add the 'defer' attribute to its <script> tag. Technically, you can also use the 'async' attribute to make a script non-render-blocking.
While both the 'async' and 'defer' attributes download the JavaScript asynchronously, 'async' executes the script as soon as it's downloaded, whereas 'defer' executes scripts only after document parsing is completed, in the same order as they're declared. For example, if you use the jQuery script on your site along with other scripts that depend on jQuery, you should use the 'defer' attribute on jQuery and on the scripts that depend on it, so as not to break your site.
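A minimal sketch of that jQuery ordering (the file names are hypothetical):

<script src="jquery.min.js" defer></script>
<script src="plugin-that-needs-jquery.js" defer></script>

Both files download in parallel without blocking parsing, but because both use defer they execute in order after parsing, so the plugin always finds jQuery already loaded.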
By now you should understand the basic concepts behind the PRPL pattern. Remember, it's not necessary to apply all of these techniques together; implementing any of the 4 techniques in PRPL can fetch you huge gains in pagespeed. If you want to discover the performance opportunities in this PRPL pattern, analyze your website using Lighthouse; I shall add a link to the detailed guide in the external resources.
Once the PRPL pattern is done, all your JavaScript needs minification, which means stripping out all the unnecessary spaces, formatting and comments from your JavaScript code. This minification needs to be done on JavaScript, CSS and HTML files alike.
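As an illustrative sketch, the terser npm package is one common way to minify a JavaScript file from the command line (app.js is a hypothetical file name):

npx terser app.js --compress --mangle -o app.min.js

--compress removes dead code and shortens expressions, and --mangle renames local variables to single letters; together they typically shrink a file substantially without changing its behavior.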
Like I've been saying, if you're using a CMS like WordPress, implementing this PRPL pattern and minification is much simpler than doing it all yourself on a custom-made website.
The final part of JavaScript optimization is removing unused code. We often include libraries in our code that we don't fully utilize; that code is loaded unnecessarily, which impacts the pagespeed. To fix this, analyze your code and remove unused code and unneeded libraries.
Open your website in an incognito window in chrome browser.
Press ctrl + shift + J or cmd + opt + J to open Console
Go to the Network tab and select the ‘Disable cache’ checkbox.
Press Ctrl + Shift + P or Cmd + Shift + P, type 'coverage' and press enter. Or click the 'Customize and control DevTools' menu, then More tools, and select Coverage.
Now click the reload button in the coverage tab.
The Coverage tab will show a usage visualization chart in red and blue. The red part denotes the amount of unused CSS and JavaScript code on your website. These unused bytes start to shrink once you trigger the as-yet-unused CSS and JavaScript effects; for example, if you mouse over a title and it becomes underlined, some more code from the stylesheet gets used.
So it's not an easy task to find the exact code or included libraries that won't be used in any resolution/device combination. If you're well versed in coding, this is the procedure to find unused code; you then have to remove it.
In the case of a CMS like WordPress, this is normally handled by the theme and plugin authors. So there will be a lot of unused code, especially if you display the same version of a responsive theme across multiple devices like desktop and mobile. This is the one segment where custom-made websites win over content management systems like WordPress.
Because, in the case of custom-made websites, you can simply pay your website developer to find and remove unused JavaScript and CSS files in the way described above.
Whereas with preset themes in WordPress, you have to rely on the theme author, who might not strip out the unused code since other users might need it.
But you don't need to worry about this a lot: if you've made all the other optimizations, use a pagespeed optimized theme and host your site on a fast server, this unused CSS and JavaScript won't impact your page loading time by a significant margin.
5 How to Optimize CSS for Page speed SEO
CSS stands for Cascading Style Sheets. Technically, it describes how HTML elements are displayed, setting the font-size, font-family, color, background color of the page and so on.
Optimizing CSS for speed involves 3 steps.
Minification of CSS
Eliminate render blocking CSS
Remove unused CSS
Minification
Like JavaScript code, CSS has unnecessary characters like comments, white space and indentation. These unnecessary characters can be safely removed without breaking your site's design elements, so CSS can be minified too, to reduce its size. If you want to do this minification yourself, you can use webpack to accomplish the task; follow the external resource given in this lecture to learn more. If you're using a CMS platform, you can easily find a plugin to do this job for you.
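As an illustrative sketch outside a full webpack setup, the cssnano minifier can be run through the PostCSS CLI (styles.css is a hypothetical file name, and the postcss-cli and cssnano packages are assumed to be installed):

npx postcss styles.css --use cssnano -o styles.min.css

The output applies exactly the same rules, minus the comments, white space and redundant syntax.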
Eliminate render blocking CSS
CSS files are render blocking resources: they must be downloaded and processed by the browser before it renders the page. Web pages with unnecessarily long CSS take longer to render. So we need to eliminate render blocking CSS to speed up the page load time. To understand this, let's start with the types of CSS.
CSS is of 3 types.
Inline CSS
Internal CSS and
External CSS
Inline CSS is used to apply a unique style to a single HTML element. It's rendered then and there, which is essential for the website to be shown to the user with the proper design. Inline styles aren't big, as each defines the style of just a single HTML element at a time, so inline CSS is not considered a render blocking resource.
An internal CSS is used to define a style for a single HTML page. It’s defined in the <head> section of an HTML page, within a <style> element.
An external stylesheet, or external CSS, is used to define the style for many HTML pages. To use an external style sheet, you need to add a link to it in the <head> section of each HTML page, using HTML code that looks like this.
<html>
<head>
<link rel="stylesheet" href="styles.css">
</head>
Whenever a <link rel="stylesheet"> tag is used, it becomes a render blocking resource, unless it has a 'disabled' attribute or a 'media' attribute that doesn't match the user's device.
So, most of the time external CSS is considered a render blocking resource. Until these render blocking resources are downloaded and rendered completely, the rest of the HTML document won't load; effectively, end users may feel the browser has hung for a while.
This render blocking issue can be resolved in the same manner as render blocking JavaScript: we split the stylesheets into critical and non-critical styles. Critical styles are the ones used for the HTML displayed in the 'above the fold' region. We can either inline these critical styles or make them internal CSS, i.e. put them inside a <style> block in the <head> section of the page, and load them normally.
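A minimal sketch of the inlined-critical-CSS pattern (the selectors are hypothetical placeholders for whatever your above the fold markup needs):

<head>
  <style>
    /* Critical, above-the-fold styles only */
    body { margin: 0; font-family: Georgia, serif; }
    .hero { min-height: 60vh; background: #1a1a2e; color: #fff; }
  </style>
</head>

The browser can paint the hero section from this block alone, with no stylesheet request in the way; the full stylesheet is then loaded asynchronously, as shown later in this lecture.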
Ultimately, the first aim of all this new era of pagespeed optimization is to load the above the fold area of the webpage as quickly as possible. For that, the best practice is to keep the visible area's size to 14KB or less in its compressed form. To understand how we arrived at this 14KB figure, you first need to know some basics of TCP connections. TCP stands for Transmission Control Protocol, one of the two core protocols of the internet; to know more about internet protocols, refer to the external resources section.
New TCP connections cannot immediately use the full available bandwidth between the client and the origin server (the server run by your hosting provider), so all new TCP connections go through a slow start to avoid overloading the connection with more data than it can carry.
In this process, the origin server starts the transfer with a small amount of data, and if it reaches the client intact, it doubles the amount in the next round trip. For most servers, 10 packets, or approximately 14 KB, is the maximum data that can be transferred in the first round trip. That's why it's best if you can keep the visible area, including critical resources, to 14 KB or less.
When it comes to non-critical styles, we can either defer loading them or, better, load them asynchronously; that is, the non-critical CSS is downloaded and processed in parallel with the rest of the HTML document, without blocking it. This way we won't break our site's design elements, and we eliminate the render blocking resources too.
To load external CSS asynchronously you can use the following code.
<link rel="preload" href="styles.css" as="style" onload="this.onload=null;this.rel='stylesheet'">
<noscript><link rel="stylesheet" href="styles.css"></noscript>
Here's how it works:
link rel="preload" as="style" requests the stylesheet asynchronously, without blocking rendering.
The onload handler switches rel to 'stylesheet' once the file has downloaded, at which point the browser applies it; setting this.onload to null prevents some browsers from firing the handler again when rel changes.
The noscript block is a fallback that loads the stylesheet normally in browsers that don't run JavaScript.
If you wish to learn more about loading your CSS asynchronously then follow the link that’s given in the external resource of this lecture.
When we eliminate the render blocking resources and load them asynchronously, the First Contentful Paint time in Lighthouse should come down by roughly the amount of time listed under the 'Eliminate render-blocking resources' opportunity.
To visualize how this CSS blocks render:
Open the unoptimized page in Chrome.
Press Control+Shift+J (or Command+Option+J on Mac) to open DevTools.
Click the Performance tab.
In the Performance panel, click Reload.
In the resulting trace, you'll see that the FCP marker is placed immediately after the CSS and JavaScript files finish loading. The browser has to wait for all these render blocking resources to load and get processed before painting a single pixel on the screen.
We've already discussed that to optimize such a page, we need to load the non-critical resources asynchronously. But how do we separate critical and non-critical CSS in the website's stylesheets? You can use the Coverage tool for that:
In the Chrome browser, in DevTools, open the Command Menu, by pressing Control+Shift+P or Command+Shift+P (Mac).
Type “Coverage” and select Show Coverage.
Click the Reload button, to reload the page and start capturing the coverage.
(The coverage report for a CSS file might show, for example, 55.9% unused bytes.)
Double-click the report to see the stylesheet marked in two colors:
Green (critical): the classes in the stylesheet that the browser needs to render the above the fold content
Red (non-critical): the styles that apply to content which isn't immediately visible (that is, below the fold).
With this information, optimize your CSS so that the browser starts processing critical styles immediately after page loads, while loading non-critical CSS asynchronously.
You can also use some tools named Critical, criticalCSS and penthouse to do this job for you. To know more about these tools look for a link in the external resources section of this lecture.
In the case of CMS like WordPress, once we have the critical and non critical CSS separated, it’s easier to add critical css inside a <style> block in the <head> section of your website and load the non critical CSS asynchronously. To do this there are a lot of plugins available, but there are 2 issues.
Not many of them can extract critical CSS from individual webpages of your site
If your theme is not optimized for pagespeed, there is a higher chance that these plugins can break your site’s design elements
Remove unused CSS
This process cannot be automated, at least as of now; you need some amount of manual intervention and a good amount of web development knowledge to do it. The process is the same as removing unused JavaScript code, and the same as how we found critical and non-critical CSS. In fact, the non-critical CSS we found includes the unused CSS.
To separate unused CSS from non critical CSS,
Press the reload button in the coverage tab.
Scroll all the way to the bottom of the page
Also try to trigger every CSS effect you've added to that page: mouse over all the links, view all the images in any sliders you have, expand accordions, and do everything else that can bring out each effect.
Once you've done that, whatever styles remain unused in the non-critical CSS are the truly unused CSS, which you should remove only with the help of a web developer or your theme author, if you're using a theme.
6 How to Resolve the SEO issues caused by Custom fonts & Google fonts
Apart from images, it’s the fonts that occupy a major chunk in the file size of modern websites. It’s a good practice to use a maximum of 2 custom fonts in your website. Alternatively, if you can use only system fonts or web safe fonts on your sites, then that’s the ideal situation.
Web safe fonts come pre-installed across devices and browsers, whereas custom fonts have to be downloaded by the user's browser before they can be displayed.
These are some of the best web safe fonts that you can use on your website.
Arial (sans-serif)
Arial Black (sans-serif)
Verdana (sans-serif)
Tahoma (sans-serif)
Trebuchet MS (sans-serif)
Impact (sans-serif)
Times New Roman (serif)
Didot (serif)
Georgia (serif)
American Typewriter (serif)
Andalé Mono (monospace)
Courier (monospace)
Lucida Console (monospace)
Monaco (monospace)
Bradley Hand (cursive)
Brush Script MT (cursive)
Luminari (fantasy)
Comic Sans MS (cursive)
I shall attach this list as an external resource too.
If you're using custom fonts or Google Fonts, chances are that the end user doesn't have that particular font on their system. In that case, the font has to be downloaded before your website's text is rendered in it.
To deal with this issue, some browsers hide the text on the web page until the font is loaded; this behavior is called the 'flash of invisible text' (FOIT). While optimizing for pagespeed, we should avoid this flash of invisible text and opt for a 'flash of unstyled text' (FOUT) instead.
That means showing the text immediately in a system font, and then switching to the custom font once it has loaded. This can be done in 2 ways.
First method is not supported by 100% of the browsers, but at least by most of the modern browsers, whereas the second approach has full browser support.
Approach #1
Here we use the font-display API to specify the font display strategy. We'll use the swap strategy, which tells the browser to display the text immediately in a system font; once the custom font is ready, the system font is swapped out for it.
Like I mentioned earlier, not all browsers support font-display. In that case, the browser continues to follow its default behavior for loading fonts, and this default behavior varies from browser to browser.
These are the default font-loading behaviors of common browsers when the font is not ready:
Edge: uses a system font until the custom font is ready, then swaps it in.
Chrome: hides the text for up to 3 seconds; if the font still isn't ready, it uses a system font until it is, then swaps it in.
Firefox: same as Chrome.
Safari: hides the text until the font is ready.
Apart from Opera Mini, the latest versions of almost all browsers support the font-display API. Here's the code to use it:
@font-face { font-family: Helvetica; font-display: swap; }
But if, for some reason, your users are on decade-old browsers, you need to follow approach #2.
Approach #2
Here the end behavior is the same; only the approach differs. The text is displayed immediately in a system font; then, using the FontFaceObserver library and a couple of lines of JavaScript, you detect when the custom font has loaded and update the page styling to use it. If you want to learn how to use this approach, follow the guide in the external resources of this lecture.
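A minimal FontFaceObserver sketch (the font name and CSS class are hypothetical):

var font = new FontFaceObserver('My Custom Font');

font.load().then(function () {
  // The custom font is now downloaded; switch the page over to it.
  document.documentElement.classList.add('fonts-loaded');
});

Your stylesheet would then apply the custom font only under the .fonts-loaded class, so the text starts out in a system font and restyles once the download finishes.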
To verify that your site is using the font-display API correctly, run a performance report in Lighthouse in Chrome DevTools, or run an analysis in the PageSpeed Insights tool, and check whether your site passes the 'Ensure text remains visible during webfont load' audit.
Beyond this, optimizing web fonts enters more into the web developer’s area of expertise. So let me just give you a quick overview of all the optimization that needs to be done on web fonts and add a link to a detailed guide on the external resources section of this lecture.
Quick Overview of Web Font Optimization
Don’t use more than 2 custom fonts. Try to use a maximum of 1 or 2 variants of each font.
You can split a large Unicode font into smaller subsets such as Latin, Cyrillic or Greek, and you can also define a unicode-range interval or wildcard range. When you use these subsets along with separate files for each stylistic variant, visitors only download the variants and subsets they need, so the fonts load faster and more efficiently.
You need to deliver an optimized font format to each browser. WOFF 2.0 is the latest and smallest font format, but some old browsers may not support it, so you need to provide your fonts in WOFF, EOT and TTF formats too (see the sketch after this list). Also make sure you've enabled GZIP compression for EOT and TTF fonts.
Before loading the font from a URL, check whether that font is already installed on the user's computer; that is, check locally before making an HTTP request.
Use <link rel=”preload”> to trigger a request for the webfont early in the critical rendering path. By preloading web fonts and using font-display api, you can prevent ‘flash of invisible text’ and large layout shifts.
Fonts are static resources that don't change frequently, so cache them with a long max-age timestamp. If you're using a service worker, a cache-first strategy is appropriate.
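Pulling several of these recommendations together, here is a hedged @font-face sketch; the font name, file paths and the Latin unicode-range are illustrative assumptions:

@font-face {
  font-family: 'My Custom Font';
  src: local('My Custom Font'),                      /* check the user's machine first */
       url('/fonts/my-font.woff2') format('woff2'),  /* smallest, for modern browsers */
       url('/fonts/my-font.woff') format('woff'),    /* older browsers */
       url('/fonts/my-font.ttf') format('truetype'); /* legacy fallback */
  font-display: swap;                                /* avoid the flash of invisible text */
  unicode-range: U+0000-00FF;                        /* Latin subset only */
}

And to request the font early in the critical rendering path:

<link rel="preload" href="/fonts/my-font.woff2" as="font" type="font/woff2" crossorigin>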
7 How to Leverage Different types of Caching to speed up your website
When someone enters your website’s URL on their browser and accesses your website, their browser sends http requests to your server to request data from it. Then the browser waits for your server to respond and return the necessary data, which is then downloaded and rendered by their browser to show your website to that user.
When the same visitor visits the same page of your website again and there is no caching in place, all that data is downloaded and rendered again. But if your web page was cached, i.e. stored in the user's browser the previous time, then instead of requesting all that data from your server again, the user's browser can retrieve it from local storage and show it to the user.
This type of caching content of a website on a user’s browser is called Browser caching.
Generally fonts, images, media files, Javascripts, CSS resources are the ones that don’t need to be updated so frequently. So these resources can leverage browser caching to speed up your website.
Sometimes the contents of a website change overnight. For example, web pages that show stock movements, or a site's homepage, may change a lot of text and images. In those cases, as a web developer or website editor, you should be able to inform the user's browser that those cached resources have changed, so that it redownloads the new versions and re-renders them for the user. That's where cache expiry and cache-control headers come into play.
For example, if you’ve configured your server to return the following cache-control HTTP response header, when requested by a user’s browser,
Cache-Control: max-age=86400
The max-age directive tells the user's browser that this particular resource should be cached for 86400 seconds, which means 24 hours. After this 24-hour expiry, the user's browser would request the same resource from the server again. If you want to learn more about caching resources, look for the guide in the external resources section of this lecture.
Whenever the browser makes an HTTP request, it's first routed to the browser cache, which checks whether the requested resource is available. If there's a match, the resource is retrieved from the cache, avoiding unnecessary network latency and data transfer costs.
If you're using a CMS like WordPress, where a caching plugin like W3 Total Cache or LiteSpeed Cache takes care of all this caching for you, you don't need to dig deeper into browser caching. But if you own a custom-designed website built from scratch, I suggest you have a look at the external resources given in this lecture, especially about 'ETag' and 'Last-Modified' response headers.
But let me give you a quick guide to cache-control headers, with a server configuration sketch after the list.
Cache-Control: no-cache can be used for resources which need to be revalidated with the server before every use.
Cache-Control: no-store can be used for resources which should never be cached.
Cache-Control: max-age=31536000 can be used for resources which won't change for a year.
ETag or Last-Modified response headers can help you revalidate expired cache resources more efficiently.
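As a hedged example of where these headers get set in practice, here's a minimal nginx snippet; the file extensions and one-year lifetime are assumptions to adapt to your site:

location ~* \.(css|js|woff2|jpg|png|webp)$ {
    # Static, versioned assets: safe to cache for a year.
    add_header Cache-Control "public, max-age=31536000, immutable";
}

Apache users would do the equivalent with mod_headers or mod_expires in .htaccess, and on WordPress the caching plugins generate these rules for you.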
The second type of caching we're going to discuss is the site cache, or page cache. A page cache stores a website's data the first time a web page is loaded. Though it sounds almost the same as the browser cache, there's a difference: compared to static HTML websites, dynamic sites powered by a CMS like WordPress see a lot of pagespeed improvement from a page cache.
On a dynamic website, when a browser requests a web page, the contents of that page are extracted from the database stored on the server, the PHP files are compiled, resources like CSS and JS are rendered, and only then is the final HTML document delivered to the user. Instead, if the final HTML document is cached on the server, then whenever that particular page is requested again, it can be delivered directly from the server's cache to the user's browser, skipping all that compiling and rendering.
This way, the server's CPU load can be reduced drastically, and it can handle spikes in traffic better. Caching plugins have been available for CMSes like WordPress for ages, but nowadays some of them, such as LiteSpeed Cache and WP Rocket, come as all-in-one pagespeed optimization solutions incorporating image optimization, CSS and JS optimization, and more.
There's a 3rd type of caching, which is server-side caching, or server cache. It includes the CDN (content delivery network) cache, object caching and opcode caching, or OPcache.
Object caching involves storing database query results so that the next time a result is needed, it can be served from the cache without having to repeatedly query the database.
Let's see what OPcache is. Websites written in PHP, like WordPress sites, need to be compiled on every request, which means the human-readable PHP code is converted into machine-understandable opcode. Caching this opcode in the server's memory is called the opcode cache, or OPcache. The next time that web page is requested, the cached opcode is served directly instead of recompiling the PHP code.
Now let's have a look at the whole process.
The extracted information from the database is cached in the Object cache.
Compiled opcode is cached in OPcache
When a web page is loaded, the server combines the object cache, the OPcache and other resources such as images, CSS and JS, and renders the final HTML document, which is stored in the page cache.
That web page is served to the user's browser, usually via a CDN cache sitting in front of the origin server's page cache, and is finally stored locally in the user's browser as the browser cache.
That's a lot of caches, right? Well, that's how we can speed up our page loading time. This is the most simplified way of explaining the different types of caches involved in the pagespeed optimization process.
8 How to reduce the Impact of Third Party Scripts on your website’s Pagespeed
In this lecture let’s discuss how to reduce the impact of 3rd party scripts on your website’s pagespeed.
Third-party scripts are JavaScript from external websites: code that you didn't write and that isn't hosted on your domain. These are some examples of 3rd party scripts.
Analytics and metrics scripts like Google analytics code,
Social Sharing buttons
Video embeds
Chat services or Facebook comments
Advertising iframes
A / B testing scripts
Helper libraries like animation and functional libraries
We might have added such code manually, directly inside the <head> block of our website, or in the sidebar or footer, or it may have been added when we installed a theme or plugin on our site.
These scripts can add some powerful functionality to our site, but they also bring in privacy, security and performance issues. Generally, the main thread, which paints or renders the above the fold area of your site, also runs all the JavaScript on your page. So if any third-party JavaScript function takes a long time to run, it blocks the main thread from rendering your site, which leaves the website unresponsive for quite some time.
Though we've already discussed how to optimize the JavaScript on your website with the PRPL pattern, here we're dealing with JavaScript hosted on external domains other than your own, hence the name 3rd party scripts. These scripts are outside our control and bring in additional issues.
Issue 1: Network issues
Establishing connections to multiple external domains takes time, so sending requests to multiple domains hosted on different servers causes slowdowns. Each new domain needs a DNS lookup, possibly redirects, and, for secure connections, additional TLS round trips to the end servers.
Not only that, third-party scripts add network overhead when they download unoptimized resources. Some such instances are:
Downloading unoptimized, uncompressed or unnecessarily large images and other media files
Requesting uncached resources, or resources from servers with no proper cache mechanism in place
Downloading uncompressed resources because the third-party server hasn't enabled compression
Pulling multiple instances of the same JavaScript libraries for different third-party resources
These are just the network issues caused by third-party resources. But the way these scripts get rendered matters just as much for pagespeed.
Issue 2: Rendering Issues
If these scripts are rendered synchronously, they block the critical rendering path and make the user experience much worse than the network issues do. What's worse, if the third-party server fails or goes down and can't deliver the requested resources, those render blocking resources may block the main thread for even 80 or 90 seconds.
So what can be done to avoid this? Practically, some 3rd party scripts are often necessary and can't be removed or replaced, but we can do a few things to reduce the adverse effects (see the sketch after this list).
When you choose 3rd party scripts, choose the ones that use the least code while still offering the functionality you need
Try not to use the same functionality from multiple 3rd parties, for example analytics code or blog comment features from 2 different providers
Set a budget, say a maximum of 10 requests, for the requests made to 3rd party origins
Check your websites at regular intervals and remove unnecessary 3rd party scripts
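For the scripts that must stay, loading them with the async attribute keeps them off the critical rendering path. A minimal sketch, where the URL is a hypothetical placeholder for your provider's snippet:

<script src="https://example.com/analytics.js" async></script>

With async, the browser keeps parsing and painting your page while the third-party file downloads, so a slow or failed third-party server no longer stalls your first render.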
9 How to Optimize the Content Delivery of your website’s resources
First, you need to understand why you need to optimize the delivery of your resources. Take this scenario: your website is hosted in New York, and some of your visitors are from Mumbai, India. When they visit, each JavaScript file, CSS file and image is sent from North America to India through the various servers that connect them in between. This increases latency, or round trip time.
To resolve this issue, we use content delivery networks, or CDNs. No matter what you do or what type of content you consume, chances are you'll find CDNs behind every character of text, every image pixel and every movie frame that gets delivered to your PC or mobile browser.
A content delivery network consists of a network of servers that are optimized to quickly deliver resources and content to users. Generally, CDNs are best used to deliver website resources that won't change for days, or at least hours, such as images, JavaScript and CSS files. Since such resources don't change for a period of time, they're cached on a CDN server and delivered to users from there, which reduces the round trip time, or latency. And it's not only cacheable resources; CDNs can also be used to deliver uncacheable ones.
Content delivery networks are optimized for faster content transfer from one server to another, so they're always faster than transferring resources from the origin server to the user directly. Applying the earlier scenario here: the resources are loaded from the origin server in New York to a CDN server in New York, travel through the optimized servers of the content delivery network to a CDN server near Mumbai, and are sent from there to the user in Mumbai. When the next user from New Delhi, India visits your site, all the cached resources are served much faster from the Mumbai CDN server than they would be if requested from the New York origin server.
Connections between CDN servers occur over reliable and highly optimized routes, compared to the routes determined by the Border Gateway Protocol, or BGP. Although BGP is the internet's default routing protocol, it's not performance oriented compared to the finely tuned routes between CDN servers.
So as you cache more and more content on CDN servers, your origin server's load keeps dropping. That means lower origin server costs, and your site can handle higher spikes in traffic when a post goes viral all of a sudden.
So using a CDN has twofold benefits.
By terminating most of the requests within the nearest CDN server, unnecessary connection setup cost is eliminated.
Instead of creating a new connection route, since the resources are transferred via a pre-warmed connection between CDN servers, the transfer is much faster.
That’s how CDN can provide faster transfer speeds at lower costs.
The most popular technique CDNs use to request, or pull, resources from the origin server is called origin pull. Whenever a user visits a website for the first time, the CDN pulls the resources from the origin server and stores them on its own servers. This way, when the next visitor opens that page, the CDN doesn't need to pull those resources from the origin again.
As this process goes on, the CDN cache builds up and the cache hit ratio increases, which means more resources are loaded from the CDN server than from the origin server. Over time the cache nears its capacity, so to make room for new entries, the old and unnecessary ones are deleted from the CDN server cache. This process is called cache eviction.
Alternatively, the administrator can manually delete the CDN cache without waiting for it to expire or get evicted; this process is called purging the cache. Whenever you change the visual design of your entire website, or update plugins or scripts that affect the whole site, you need to purge these caches.
In addition to reducing latency, a modern CDN can take care of several other things. It can:
Improve page load speed
Handle sudden high traffic loads
Block spammers and other bad bots
Reduce your server’s bandwidth consumption
Protect your website from DDoS attacks and much more
These are some of the basic concepts of Content Delivery Networks you should be aware of when it comes to optimized content delivery. We shall see some more advanced concepts while configuring Cloudflare, a CDN, for your site.
10 How to Reduce the Server Response Time of your Website
We have come to the most important part of pagespeed optimization. Here you'll learn how to reduce your server response time and get the most out of your server.
Server response time is also referred to as Time to First Byte.
Technically, it's the time taken by the user's browser to receive the first byte of the response from your server.
But in practice, the server response time breaks down like this:
When a user enters your website URL in their browser, the request is sent to your server. The speed of this step depends on the network transfer speed and the latency between the user and your server.
Once the request is received, your server processes it, which means:
It queries the database, or retrieves the data from an object cache if one is already in place
It processes the back-end code, for example compiling PHP into opcode or retrieving the opcode from OPcache
It builds the page, or retrieves an already rendered HTML document from a page cache
The speed of these 3 steps depends on the efficiency of your website's application, the server's hardware specifications and its software tuning.
Finally, the server sends the response back to the user's browser, which again depends on network transfer speed and latency.
Google recommends keeping the Time to First Byte under 600 ms, and PageSpeed Insights flags anything slower. When a page takes longer to load, users don't like it, and server response time is one of the major causes of such slow loading.
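Before optimizing, it helps to measure where you stand against that 600 ms mark. Here's a rough Python sketch: it includes connection and TLS setup time, so treat it as an approximation rather than a lab-grade measurement, and substitute your own URL for the placeholder.

```python
import time
import http.client
from urllib.parse import urlsplit

def ttfb_ms(url: str) -> float:
    """Rough Time to First Byte: from sending the request
    until the first byte of the response body arrives."""
    parts = urlsplit(url)
    conn = http.client.HTTPSConnection(parts.netloc, timeout=10)
    start = time.perf_counter()
    conn.request("GET", parts.path or "/")
    resp = conn.getresponse()          # headers received here
    resp.read(1)                       # wait for the first body byte
    elapsed = (time.perf_counter() - start) * 1000
    conn.close()
    return elapsed

print(f"{ttfb_ms('https://example.com/'):.0f} ms")  # aim for under 600 ms
```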
For example, a user may be looking at their download history. In that case, the server needs to fetch that information from the database, process it, and send it as a web page to the user's browser. If we can make our server do such work efficiently, we can improve our pagespeed effectively.
Server response time mainly depends on 4 factors: network transfer speed, latency, the server's hardware specifications and software optimization, and the efficiency of your web application. We can concentrate on 3 of those 4 factors, the ones we can control from our side. The one factor we cannot control is network transfer speed, because it depends solely on the speed of the internet connection between the user and the server.
But we can control the other 3 factors and reduce the server response time:
To improve latency, we can use Content Delivery Networks
To improve the server's hardware specifications, we can upgrade to a hosting plan that offers a faster server
Software optimization is only in our own hands on dedicated hosting or a Virtual Private Server (VPS). If you're not comfortable with server management, you can opt for managed dedicated hosting or a managed VPS instead.
If you're constrained by budget, shared web hosting is the only option for you. In that case, select the shared hosting package with the best software optimization
Finally, there's the efficiency of your web application, which depends on the platform you use. When it comes to content management systems, I would suggest WordPress. Alternatively, you can hire experienced web developers to build your site from scratch.
As a website owner or an SEO expert, instead of learning how to tune your server's software for better speed, you should learn how to choose a hosting provider that gives you the quickest server response time.
When choosing shared hosting, look for a LiteSpeed web server. If you're choosing a dedicated server or VPS, look for LiteSpeed or Nginx instead of plain Apache. If you can do that, you're off to a good start.
To leverage the advantages of opcode caching and memory caching, make sure the hosting provider supports Varnish, a PHP opcode cache, and Memcached or Redis in-memory caching to reduce database query times.
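On the application side, the pattern these caching tools enable looks roughly like this. A hedged Python sketch using the redis client: `query_database` is a hypothetical stand-in for your application's real (slow) data-access call, and the 300-second TTL is an arbitrary example.

```python
import json
import redis  # pip install redis

r = redis.Redis(host="localhost", port=6379)

def query_database(user_id: int) -> dict:
    # Hypothetical stand-in for your real database query.
    return {"id": user_id, "name": "example"}

def get_user_profile(user_id: int) -> dict:
    """Serve from Redis when possible; fall back to the database."""
    key = f"user:{user_id}:profile"
    cached = r.get(key)
    if cached is not None:
        return json.loads(cached)           # cache hit: no database query
    profile = query_database(user_id)       # cache miss: hit the database
    r.setex(key, 300, json.dumps(profile))  # cache the result for 5 minutes
    return profile
```

Every request served from Redis skips the database entirely, which is exactly how these tools shave milliseconds off the server response time.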
When it comes to selecting a theme and front-end plugins, choose pagespeed-optimized ones: in my experience, the choice of theme and server together accounts for about 66% of the pagespeed optimization you can achieve.
First, let's start with how to measure the speed of a hosting provider.
A hosting provider can be rated the best based on 3 important parameters: speed, support and price. Here, speed refers to how fast the websites hosted with that provider load in the browser window.
This is benchmarked using the response times of test websites hosted with several providers. The load time of a website depends on various factors, such as the size of the website, the pagespeed optimization done, the theme used, the front-end plugins used, ads and so on.
To exclude these factors, we test the 'initial server response time' instead. So when I say a hosting provider is fast, I mean its initial server response time is low.
For the sake of SEO, we prioritize speed over the best support and the cheapest price. But that doesn't mean the recommended hosting providers below are pricey or have terrible support.
They all offer decent support, come at a reasonable price and provide a fast initial server response time.
The following suggestions are based purely on PageSpeed Insights' initial server response time tests, which I ran myself on test websites hosted with various providers. I ran the PageSpeed Insights test 5 times on each website; the results are as follows.
When you test a website for the first time, note down its initial server response time from PageSpeed Insights; this is the non-cached time. Then run the analysis 4 more times, both to put a little artificial stress on the server and to measure the cached server response time. After the 5th analysis, note the server response time again. This way you can compare the two values and decide for yourself.
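You can automate this routine with the PageSpeed Insights API. A sketch, assuming the Lighthouse 6+ audit ID `server-response-time` (older Lighthouse versions exposed it as `time-to-first-byte`); `https://example.com/` is a placeholder for the site under test.

```python
import time
import requests

PSI_API = "https://www.googleapis.com/pagespeedonline/v5/runPagespeed"

def server_response_time_ms(url: str) -> float:
    data = requests.get(PSI_API, params={"url": url, "strategy": "mobile"},
                        timeout=120).json()
    # Audit ID as of Lighthouse 6+; older versions used 'time-to-first-byte'.
    audit = data["lighthouseResult"]["audits"]["server-response-time"]
    return audit["numericValue"]

url = "https://example.com/"        # substitute the site under test
times = []
for i in range(5):
    ms = server_response_time_ms(url)
    times.append(ms)
    print(f"run {i + 1}: {ms:.0f} ms")
    time.sleep(5)                   # brief pause between runs

print(f"first (non-cached): {times[0]:.0f} ms, fifth: {times[-1]:.0f} ms")
```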
These are the recommended hosting providers along with their respective plans. I recommend going for the same plan as mentioned or a higher one, so that you attain the 5th-run average initial server response time quoted above.
I shall add links to these recommended hosting providers in the external resources of this lecture.
Earlier I suggested not going for WordPress-optimized hosting, because some optimized hosting providers don't allow you to install W3 Total Cache or similar optimization plugins. With generic shared hosting packages, there are no such limitations on which pagespeed optimization plugin you can use. With hosts like SiteGround or A2 Hosting, on the other hand, you may need to install and configure their own SuperCacher or SG Optimizer plugin to get the maximum speed benefit.
But they also have detailed step-by-step tutorials to follow along. Once that's done, you will get a good pagespeed score and page loading time.
You need to understand one important thing: simply opting for a server with a good initial server response time and doing all the pagespeed optimizations won't, by itself, make your website load fast. You also need a lightweight website with minimal to no use of popups, sliders and the like. In a nutshell, choose a pagespeed-optimized theme and avoid plugins or scripts that make your website heavy on the front end.
Other than these initial server response times, you can look for:
The number of websites you can host
The number of CPU cores, and the CPU generation if the model is mentioned
The amount of RAM
Storage space, and whether it's SSD or NVMe
Support for the latest PHP version
Support for LiteSpeed or Nginx, HTTP/2 and Gzip compression
Above all, if you can enable these features in cPanel, you're good to go.
To reiterate, based on the speed tests I ran with the PageSpeed Insights tool, the hosting providers and plans I suggest are:
You can't go wrong with the hosting providers listed above. That said, you're not restricted to them alone: check the initial server response time in the PageSpeed Insights tool multiple times for any provider you're considering. The lower the response time, the better.
Once the server is chosen right, you need to enable text compression. Brotli compression can reduce the file size of resources more than Gzip, the next best text compression algorithm. Brotli is supported by virtually all modern browsers, with Internet Explorer being the notable exception. You should still keep Gzip as a fallback for Brotli, since Gzip is universally supported, even by older browsers.
You can enable these compression methods server-wide in your server settings, or contact your hosting provider to enable them for you.
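To see the difference for yourself, and to verify what your server actually negotiates, you can run a quick comparison in Python. A small sketch: the brotli package is a third-party install, and `https://example.com/` is a placeholder for your own site.

```python
import gzip
import requests
import brotli  # pip install brotli

# Compare compression ratios on your page's HTML locally.
html = requests.get("https://example.com/").content
print(f"raw:    {len(html):>7} bytes")
print(f"gzip:   {len(gzip.compress(html, 9)):>7} bytes")
print(f"brotli: {len(brotli.compress(html, quality=11)):>7} bytes")

# Verify which encoding your server actually serves.
resp = requests.get("https://example.com/",
                    headers={"Accept-Encoding": "br, gzip"})
print("served with:", resp.headers.get("Content-Encoding", "none"))
```

If `Content-Encoding` comes back as `br`, Brotli is being served; `gzip` means the fallback kicked in, and `none` means text compression isn't enabled yet.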