Front-end performance optimization in a web integration project
If users do not receive content within a reasonable time, they leave and may never come back. Slow loading must therefore never get in the user's way, and ideally the user should not even notice having to wait for anything.
When a web site or web application is built, its rendering speed is influenced by a whole range of factors – from the source code itself, through the infrastructure, up to the capabilities of the web browser. Here I will concentrate on front-end performance, i.e. everything that happens between the browser and our web project after the page has been dynamically assembled and sent to the browser as HTML. I will focus on the main problems of front-end optimization and on procedures that minimize them, and thereby achieve a better "user experience" in the web integration project – the end user's subjective perception of speed.
In many cases the loading of a web page can be sped up substantially by focusing on a few areas that need to be optimized. In short, we have to reduce the number of HTTP requests and the size of the downloaded data to the essential minimum, and write optimized code: although today's devices offer high computing performance, even a very powerful machine often runs out of breath, and the user feels it in how fast the pages respond.
The main and most widespread areas of optimization are as follows:
- URL addresses,
- caching (temporary intermediate storage).
- To remove from the HTML everything that does not have to be there – in my view this mainly means comments. They should stay in the templates from which the HTML is generated, but they should not reach the output. Complete removal of white space and of "unnecessary" closing tags is sometimes recommended as well, but I would avoid that, since the switched-on compression handles it anyway.
- To switch on compression for all text formats (gzip, deflate) – this mainly concerns HTML, XML, TXT, JS and CSS files.
- To pre-load content with the help of "link prefetching" – a mechanism that lets you make the browser download certain content in advance while it is idle, and thus save the user's time.
- To place CSS files at the beginning of the HTML, in the head, and JS, on the contrary, at the end of the document.
- To combine CSS and JS files. The combining, at least in the case of JS, should ideally be handled by the portal platform. For CSS it can be handled e.g. with a preprocessor (LESS, SASS, …), with various more complex tools with a more or less advanced GUI, such as Koala, Prepros App or WinLESS, or with more robust command-line tools – Grunt, PHỞ DEVSTACK. A number of online tools can also be used – YUI Compressor, Google Closure Compiler.
- To minify CSS and JS files. Minification is usually handled by the developer, e.g. with the tools described above.
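To illustrate what minification does, here is a deliberately naive CSS minifier that only strips comments and collapses whitespace; production code should rely on a real tool (such as those mentioned above), which handles far more edge cases:

```javascript
// Naive CSS minifier, for illustration only: drops comments, collapses
// whitespace and removes spaces around punctuation.
function minifyCss(css) {
  return css
    .replace(/\/\*[\s\S]*?\*\//g, '')   // drop /* ... */ comments
    .replace(/\s+/g, ' ')               // collapse runs of whitespace
    .replace(/\s*([{}:;,])\s*/g, '$1')  // no spaces around punctuation
    .trim();
}

// minifyCss('body {\n  margin: 0; /* reset */\n}') → 'body{margin:0;}'
```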
- To load third-party JS libraries that are not necessary for the functioning and rendering of the page asynchronously (Google Analytics, …), or to link them directly from their source or from a common repository (Google Hosted Libraries).
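A sketch of such asynchronous loading: a script element with the `async` flag is injected so the download does not block rendering. The URL is a hypothetical placeholder; the descriptor helper exists only so the logic is visible outside a browser:

```javascript
// Describe a <script async> element for a non-essential third-party library.
// The URL below is hypothetical.
function buildAsyncScript(src) {
  return { tag: 'script', src: src, async: true };
}

// In a browser, turn the descriptor into a real element (guarded so this
// sketch also loads outside the browser):
if (typeof document !== 'undefined') {
  const d = buildAsyncScript('https://example.com/analytics.js');
  const s = document.createElement(d.tag);
  s.src = d.src;
  s.async = d.async;
  document.head.appendChild(s);
}
```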
- To make general optimizations of the code, e.g. not to use the universal selector (*) in CSS, not to use the @import clause for loading other CSS because it adds HTTP requests, not to use "CSS expressions", not to waste resources in JS, to write valid code, etc.
Images are part and parcel of the web – very often the entire web is built on them – and they can make up a substantial part of its volume. It is therefore important to put effort into their optimization.
- For a given image, always to choose the right format (GIF, JPG, PNG) according to its content and placement, i.e. to decide between lossy and lossless compression and thus the image format. At the same time it is necessary to watch the size of the resulting file, which has to be as small as possible while preserving the original quality.
- For images that make up the web design itself, to use so-called "CSS sprites": individual images are merged into a single file and then positioned in the layout with CSS (background-position). This technique saves HTTP requests. Images can be merged manually in a graphic editor, or with a tool such as SpriteMe or Stitches.
- Always to specify image dimensions, either directly in the HTML or in CSS. This avoids intermediate layout states and "jumping" elements on the page. At the same time, these dimensions must correspond to the real dimensions of the image: for a 20×20 px thumbnail, for example, do not use an image whose real size is 250×250 px. This saves transferred data.
- It is suitable to use so-called lazy loading, where content is loaded only when the user needs it. It can be used for various parts of the content (e.g. "infinite scroll"), but it is especially suitable for the gradual loading of images, which saves both transferred data and HTTP requests. It is widely used in responsive design.
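Lazy loading of images can be sketched as follows: images start with a `data-src` attribute instead of `src`, and the real URL is swapped in only when the image scrolls into view. The `data-src` convention and the IntersectionObserver approach are common practice, not the only way to do it; the observer part runs only in a browser:

```javascript
// Swap data-src into src the moment the image should actually load.
function applyLazySrc(img) {
  if (img.dataset && img.dataset.src) {
    img.src = img.dataset.src;
    delete img.dataset.src;
  }
  return img;
}

// In a browser, load each <img data-src="..."> when it enters the viewport:
if (typeof IntersectionObserver !== 'undefined' && typeof document !== 'undefined') {
  const observer = new IntersectionObserver((entries) => {
    entries.forEach((entry) => {
      if (entry.isIntersecting) {
        applyLazySrc(entry.target);
        observer.unobserve(entry.target); // each image only needs loading once
      }
    });
  });
  document.querySelectorAll('img[data-src]').forEach((img) => observer.observe(img));
}
```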
- Data URIs – to reduce the number of HTTP requests for small images, the images can be embedded in the web with so-called "data URIs", where the content of the image is inserted directly into the HTML or CSS. Such an image is then available immediately after the HTML page (or CSS file) has loaded. The image content is encoded with Base64 and inserted into the page in the form data:[<MIME type>][;charset=<encoding>][;base64],<data>. Thanks to the nature of Base64 encoding, such an image is only about a third larger in terms of data. Data URIs are widely used in responsive design and in e-mail templates. For more information see http://en.wikipedia.org/wiki/Data_URI_scheme
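Building such a data URI is straightforward; a minimal Node sketch (the byte array in the usage note is a placeholder, not a real image):

```javascript
// Encode raw bytes as a data URI so the resource can be embedded directly
// in HTML or CSS instead of being fetched with a separate HTTP request.
function toDataUri(bytes, mimeType) {
  return `data:${mimeType};base64,${Buffer.from(bytes).toString('base64')}`;
}
```

In practice the bytes would come from a file, e.g. `toDataUri(fs.readFileSync('icon.png'), 'image/png')`, and the result would be used in `<img src="...">` or in CSS `url(...)`; the Base64 form is roughly a third larger than the original, as noted above.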
- To avoid requests for non-existent files (error 404), e.g. a wrongly set path to an image in a document or in a CSS file.
- To use several hostnames (domains) so that several simultaneous HTTP connections can be used – see the part on caching (reverse proxy cache server).
- Serving static files through reverse proxies – to use a reverse proxy cache server for serving static resources (JS, CSS and images). It is advisable to serve different types of files from different hosts in order to avoid the browser's limit on the number of simultaneous HTTP requests per host.
- To make use of the visitor's browser cache by setting expiration headers for static resources (JS, CSS and images), so that already downloaded files are not downloaded again but served from their local copies. The web server's response for a static file must contain all the expiration headers necessary for successful caching both on reverse proxies and in browsers.
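A sketch of such expiration headers, here as a small helper that builds a `Cache-Control`/`Expires` pair; the one-year lifetime in the usage note is a common choice for fingerprinted static files, not a requirement:

```javascript
// Build expiration headers for a static resource so the browser (and a
// reverse proxy) can reuse its local copy instead of re-downloading.
function expirationHeaders(maxAgeSeconds) {
  const expires = new Date(Date.now() + maxAgeSeconds * 1000);
  return {
    'Cache-Control': `public, max-age=${maxAgeSeconds}`,
    'Expires': expires.toUTCString(),
  };
}
```

In a Node http server this could be attached with e.g. `res.writeHead(200, expirationHeaders(31536000))` when serving a JS, CSS or image file.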
- To serve static resources over persistent HTTP connections (http://en.wikipedia.org/wiki/HTTP_persistent_connection), i.e. to set the Keep-Alive timeout on the web server to the optimal value of 15 seconds.
- To consider using a CDN (Content Delivery Network) for serving large static files (this particularly concerns video content).
- If possible, to use so-called "full page caching" and cache entire assembled pages: the final page is then not generated on the server side for every request; after the first generation its HTML form is stored on the server and this version is subsequently served to visitors. It can be used for individual pages, sections or the whole web, with a fixed or variable expiration time, in connection with performance tuning of the web server.
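A minimal in-memory sketch of full page caching: the first request renders the page and stores the resulting HTML; later requests within the expiration window get the stored copy. `renderPage` stands in for the real (expensive) server-side rendering, and real deployments would typically use a dedicated cache server rather than process memory:

```javascript
// Full page cache: render once, then serve the stored HTML until it expires.
function createPageCache(renderPage, ttlMs) {
  const cache = new Map(); // url -> { html, expiresAt }
  return function getPage(url) {
    const hit = cache.get(url);
    if (hit && hit.expiresAt > Date.now()) {
      return hit.html; // served from cache, no re-rendering
    }
    const html = renderPage(url); // expensive server-side generation
    cache.set(url, { html, expiresAt: Date.now() + ttlMs });
    return html;
  };
}
```

Each page, section or the whole site can get its own TTL, matching the fixed or variable expiration mentioned above.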
Tools for analysis
If you wish to examine your pages or start optimizing them yourself, there are a number of tools that can analyse the web and frequently even suggest possible solutions. One of the best known is Google PageSpeed Tools, which exists as a standalone online tool and as a browser extension.
This tool analyses your page and returns a list of suggestions for its optimization, based on the PageSpeed Rules. The results of the analysis should, however, be viewed with some detachment: do not chase the highest PageSpeed score blindly, as not all the measurements may be completely accurate.
Another interesting tool is e.g. WebPageTest, which works on a similar principle to Google PageSpeed but will please users who are after more detail.
For a basic overview of the state of your web it is also enough to use most current browsers, whose developer tools give access to basic information such as HTTP requests, caching, compression and the sizes of transferred data.
If you can administer your Apache web server, or run your own, it is recommended to use, apart from the optimizations above, also the PageSpeed Module. It is an open-source module for Apache that automatically applies the PageSpeed Rules to the web.
The points described above are a basis for detecting the places that slow down the loading of your web and for eliminating these shortcomings. Some techniques are simple but should be taken into account right from the start of the project; others are relatively complicated to implement and also require appropriate development capacity and/or coordination with the administrator of the web infrastructure (the web server). It is also necessary to bear in mind that optimizations should be repeated along with the development cycle (with every new release). Even so, the effort put into front-end optimization pays off: apart from substantially faster loading, it leads to a more satisfied user, who is our top priority.