HTML5’s contributions to the Web’s historical technical shortcomings


The present paper discusses the historical technical issues and limitations of the Web’s successive technological paradigms. The main research question is whether the latest one, HTML5, appropriately remedies these identified shortcomings. The methodology is based on concrete examples and on two case studies targeting the major contributions of HTML5, namely WebSockets and Web Workers.


The observations demonstrate that performance and workload support vary according to the browser. Web Workers appropriately leverage CPU cores, while WebSockets bring substantial speed and bandwidth gains. HTML5 brings solid features to compensate for pre-HTML5 shortcomings such as State Management, Session State’s limitations, Data Validation and the lack of parallelism.


For once, I will talk about something unusual that deviates from my traditional Azure/Office 365/SharePoint writings: I recently made an in-depth study of HTML5 in which I wanted to analyze whether it could really bring valuable alternatives to the Web’s historical technical shortcomings.

First things first, I needed to identify the said shortcomings and when they appeared in the Web’s history. Therefore, I have divided the Web into 4 different paradigms. By paradigm, I mean a radical change in the way one develops web applications. Here are the paradigms that emerged:

  • Static (1990/1995): back in the day, the Web was nothing but a wide document repository. Between 1990 and 1995, the Web was mainly made up of HTML documents that were statically created. These pages were linked together via hyperlinks. This is what I’d call stage zero of web application development. The JavaScript language didn’t exist and there was not yet a true server-side technology such as ASP or PHP. CGI existed but had little in common with today’s server-side technologies. Per se, this stage did not allow any web application development, but I couldn’t ignore it because it was the Web’s birth.
  • Dynamism (1995/2000): at that time, the Web became dynamic, meaning that HTML pages were rendered dynamically via server-side technologies such as ASP, with page content coming from databases. We saw things like forums and blogs emerging, and the Web became more interactive. Real web applications started to pop up with their set of shortcomings, which were mostly: the Full Page Load Syndrome, State Management, Session State and Data Validation. I will describe later why I have identified these as shortcomings and which HTML5 features represent valuable alternatives.
  • The Web 2.0 (2000-2012): this era is clearly Facebook’s, or at least Facebook was a true pioneer when it came to implementing very dynamic, interactive and user-friendly web applications. This paradigm saw the birth of AJAX, although XMLHttpRequest pre-existed it. Browsers were at last able to send asynchronous HTTP requests to web servers, which fixed the Full Page Load Syndrome. Web applications became faster and much more responsive. However, the pre-HTML5 shortcomings of this paradigm were still visible: a lack of parallelism and no genuine real-time capabilities. Indeed, AJAX gives the illusion that things work in parallel, but that is only partially the case (I’ll illustrate that later), and techniques such as HTTP long-polling were not satisfying enough in large systems.
  • HTML5 (2012-2022): this is the paradigm we’re currently in. HTML5’s first drafts were designed around 2008 and the first Release Candidate appeared around 2012. The WHATWG estimates that browsers will reach full compatibility with HTML5 around 2022.

Let’s now focus on the shortcomings to see why I identified them as such.


Full Page Load Syndrome (FPLS)

The FPLS was the action of reloading a page entirely whenever the browser had to go back and forth to the web server for some reason. This was due to the fact that browsers were not able to perform HTTP requests silently in the background. It had been like that since the very early days of the Web but became more visible as interactivity with end users increased. Per se, HTML5 did not bring a feature that *fixes* this issue, as it had already been worked around by AJAX (3rd paradigm). However, thanks to Web Storage, about which I’ll elaborate later, the need to make round-trips to a web server is probably lower than before.

Main drawbacks of the FPLS: bandwidth waste, slowness, poor user experience.

State Management

HTTP being a stateless protocol, it only transports data from the browser (in the context of the Web) to a server; there is no correlation between requests. Therefore, a new request causes some kind of application reset, since all the variables that were associated with the previous request have vanished. Unlike non-web applications, which declare, initialize and populate variables with end-user input and keep these variables in memory until the application shuts down, web applications had to transmit the history of interactions from page to page. This small example (written in PHP for the sake of simplicity) illustrates the problem:

<title>Posting values</title>
<form name="statemgt" method="post">
    Value 1 : <input type="text" name="variable1" value="<?php if(isset($_POST['variable1'])) echo $_POST['variable1'] ?>"/>

    Value 2 : <input type="text" name="variable2" value="<?php if(isset($_POST['variable2'])) echo $_POST['variable2'] ?>"/>

    Value 3 :
    <select name="list">
        <option<?php if(isset($_POST['list']) && $_POST['list'] == "Option 1") echo " selected" ?>>Option 1</option>
        <option<?php if(isset($_POST['list']) && $_POST['list'] == "Option 2") echo " selected" ?>>Option 2</option>
    </select>
    <input type="submit"/>
</form>


The above code shows the work the developer had to do to retrieve the values already entered by the end user; this had to be transmitted and retrieved over and over, whenever the page was refreshed or the user transitioned from one page to another. Why is it a problem? The above sample is a very basic form with only 3 input controls. Imagine an application made up of dozens of pages, each containing multiple controls, where you, as a developer, have to ensure that the whole application remains coherent and consistent. How cumbersome and tedious was this? On top of that, bandwidth was not spared, as all the data had to be re-sent over and over again. To facilitate state management, the Session State (described below) came to the rescue.

Session State

One of the best use cases of Session State is an online shopping application. The user navigates from product to product and sometimes adds one to his basket. When ready, the user presses the Order button to confirm. Before the session mechanism, developers would have had to make sure that all the selected products were transmitted from page to page using the techniques described above. Thanks to the Session State mechanism, developers could simply transmit the information once to the server, and that information was made available, through the session, to all the other pages belonging to the same application domain. Here is again a very short example illustrating this:

<?php session_start(); if(isset($_POST["variable1"])) { $_SESSION['variable1'] = $_POST['variable1']; } ?>

The above code is part of session1.php and simply stores a value named variable1 into the session. Then another page, named session2.php, can simply retrieve the value of this variable:

<?php session_start(); echo "Variable from session: ".$_SESSION['variable1']; ?>

That became much more convenient: all the pages of the same application domain could share data made available through a single entry point. However, while this looks great, it comes with severe limitations. To understand them, we first need to analyse the Session State mechanism in detail. Here is the sequence of operations when a session takes place between a browser and a web server:

  • The server initiates a session; this results in sending a cookie with a session identifier back to the browser and instructing the browser to store it as a session cookie.
  • Every subsequent HTTP request made by the browser includes this cookie, allowing the server to associate that particular browser with the session. Therefore, a correlation exists across HTTP requests.
  • Data sent by the browser is stored by the server into the session, in one of the following ways: InProc (session data is stored in memory), FileSystem (session data is stored in the file system) or Database (session data is stored in a centralized database).

While this looks great at first, it also requires server-side resources. Indeed, the server infrastructure, being in charge of storing session-related data, requires some hardware power to do so. Scalability might be compromised, as transitioning from 100 concurrent users to 2000 could result in severe workloads causing the server infrastructure to crash. On top of this potential scalability problem, the lifetime of the session is also a limitation. Indeed, in order to evict session data as fast as possible, server-side technologies eliminate all the sessions that have been idling for xx minutes, causing the session data to be lost. As an end user, everyone has already experienced that behavior: you’re filling in a form until your meeting starts; 1 hour later you come back, fill in the last part of the form and submit it: boom, session expired and you get to restart! At last, if one opts for the InProc or FileSystem storage options, one often has to use the so-called Sticky Sessions to force load balancers to use server affinity, causing a given browser to be systematically redirected to the same physical server, in order to preserve its session data. The biggest drawback of this approach is that load balancers might keep sending browsers to a crashed server, which results in a total failure. You can avoid Sticky Sessions by using a database and/or a distributed cache system, but that requires careful capacity planning and makes your architecture more complex. I’ll explain later how HTML5 comes to the rescue.

AJAX / Single Thread limitations

As I stated earlier, AJAX gives the illusion that things work in parallel. To some extent it does, as the browser isn’t impacted by the response time of the server. However, when the response is available, the browser cannot do anything else but parse it, and it may be too busy to handle the other events of the page. Not convinced? Just try out this piece of code:

      function GetCities(country) {
          $.ajax({                       // jQuery assumed here
              url: "getcities.php?country=" + country,
              success: function(cities) {
                  while (true) { }       // infinite loop, on purpose
              },
              error: function(err) {
                  // handle the error
              }
          });
      }
If you place a button on the page showing an alert message whenever clicked, you’ll never see that message on screen once the above code gets executed. In the success callback handler, I purposely placed an infinite loop to illustrate the single-thread phenomenon. What I wanted to point out is that if the browser gets a large answer back from the server and needs some time to process it, it won’t be able to do anything else; the UI will freeze and become unresponsive. This is not an AJAX problem, it is a JavaScript problem, but I wanted to associate it with AJAX because many people think AJAX == parallelism, and that’s not the case, or at least not totally.

Another easy way to freeze a browser is by calculating Fibonacci numbers recursively; try this function with argument 50 and your browser will become unresponsive for at least a minute:

   function fib(n) {
       if (n <= 2) {
           return 1;
       } else {
           return fib(n - 1) + fib(n - 2);
       }
   }
Bandwidth-wise, another AJAX limitation is its overhead. The below screenshot is very explicit:


The parts highlighted in red represent all the HTTP headers (request & response) that were necessary to receive the actual data shown in green: a few kilobytes required to retrieve only a few bytes.

HTML5 Alternatives


Web Storage

The sessionStorage is a very valuable alternative to server sessions. It fixes all the pain points I pointed out earlier: scalability, stickiness and lifetime. With the sessionStorage, session data is stored within the browser, meaning that scalability isn’t an issue anymore, since each browser holds its own data: if you go from 100 users to 2000 users, you’ll have 2000 browsers storing your session data. No need for Sticky Sessions either, since their only purpose is to preserve server-side Session State. At last, unlike server-side sessions, client-side sessions do not have any lifetime limit; the browser removes them upon browser/tab closure. If you leave your page idle for 3 hours, your session will still be alive. Moreover, sessionStorage is damn easy to use and all the modern browsers support it.
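To give an idea of how easy it is, here is a minimal sketch. The API calls are the real Web Storage API; the tiny in-memory shim is only there so the snippet also runs outside a browser for illustration purposes.

```javascript
// Use the real sessionStorage in a browser; fall back to an in-memory
// shim elsewhere (e.g. Node), purely so the sketch is runnable anywhere.
const storage = (typeof sessionStorage !== "undefined")
  ? sessionStorage
  : (() => {
      const m = new Map();
      return {
        setItem: (k, v) => m.set(k, String(v)),
        getItem: (k) => (m.has(k) ? m.get(k) : null)
      };
    })();

// Scalar values are stored as strings
storage.setItem("variable1", "some user input");
console.log(storage.getItem("variable1")); // "some user input"

// Structured data (e.g. a shopping basket) goes through JSON
storage.setItem("basket", JSON.stringify([{ product: "p1", qty: 2 }]));
const basket = JSON.parse(storage.getItem("basket"));
console.log(basket[0].qty); // 2
```

Compare this with the PHP session plumbing shown earlier: no round-trip, no cookie, no server-side storage at all.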

Beyond the sessionStorage, the localStorage and other storage options, such as IndexedDB and even the user’s filesystem, can be leveraged for more complex scenarios. While sessionStorage and localStorage are commonly supported, the other options are still at an experimental stage at the moment.

New Inputs

Data validation was also a pain point, but HTML5’s new input types came to the rescue. It is now very easy to check whether users enter valid values for numbers, dates, e-mails etc., as HTML5 ships with corresponding input types. Before, every developer was implementing his own validation logic and/or using third-party libraries to do so. Now, simply declare your input type and… that’s it!
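A quick sketch of what this looks like (field names are mine): the browser refuses to submit the form and shows its own message when a value doesn’t match the declared type or constraints.

```html
<form>
    <!-- the browser validates the format before submitting -->
    E-mail : <input type="email" name="mail" required/>
    Age : <input type="number" name="age" min="18" max="99"/>
    Birthday : <input type="date" name="bday"/>
    Website : <input type="url" name="site"/>
    <input type="submit"/>
</form>
```

No JavaScript, no third-party library: the validation logic ships with the browser.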

Web Workers

I’ll elaborate more on Web Workers in the dedicated case study, but in short, they are the answer to the lack of task parallelization before HTML5. In a nutshell, they allow background processing so that the UI always remains fluid and responsive, but they bring much more than that, as the case study will demonstrate.
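The basic pattern can be sketched as follows (the file name and handler are mine, for illustration): the page posts a message to the worker, the heavy computation runs off the UI thread, and the result comes back via a message event.

```javascript
// In the page (browser only, shown as comments):
//   const worker = new Worker("fib-worker.js");
//   worker.onmessage = (e) => console.log("fib =", e.data);
//   worker.postMessage(30); // the UI thread stays responsive meanwhile
//
// fib-worker.js would contain:
//   self.onmessage = (e) => self.postMessage(handleMessage(e.data));
//
// The handler itself is plain JavaScript; here it reuses the naive
// Fibonacci that freezes the UI when executed inline:
function handleMessage(n) {
  const fib = (k) => (k <= 2 ? 1 : fib(k - 1) + fib(k - 2));
  return fib(n);
}

console.log(handleMessage(10)); // 55
```

The key point: the exact same code that froze the page earlier becomes harmless once it runs inside a worker.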


WebSockets

WebSockets are the answer to the lack of real-time capabilities, although the definition of a real-time system is abused in the context of HTML5. Indeed, the definition of a Real-Time System implies advanced task-scheduling algorithms that ensure a given task with a given priority is executed within xx amount of time; any delay in the execution could result in the task’s or system’s failure. Regarding HTML5, we should rather talk about Fast Communication. WebSockets also help reduce AJAX’s overhead, as they clearly spare bandwidth:


No more HTTP headers, only messages with a very low overhead.
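A minimal usage sketch; the endpoint URL and the message format below are assumptions of mine, not part of any real service.

```javascript
// Browser side (shown as comments; "wss://example.org/quotes" is a placeholder):
//   const ws = new WebSocket("wss://example.org/quotes");
//   ws.onopen    = () => ws.send(encodeMessage("subscribe", "MSFT"));
//   ws.onmessage = (e) => console.log(decodeMessage(e.data));
//
// Once the connection is established, each message carries only a few
// bytes of framing instead of full HTTP headers. The helpers below are
// plain JavaScript:
function encodeMessage(action, payload) {
  return JSON.stringify({ action: action, payload: payload });
}

function decodeMessage(raw) {
  return JSON.parse(raw);
}

console.log(decodeMessage(encodeMessage("subscribe", "MSFT")).payload); // "MSFT"
```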


Many other HTML5 features, such as Canvas, the new semantic tags, the Application Cache, the postMessage API, the FileSystem API, etc., are very handy for building modern, client-side driven applications such as SPAs (Single Page Applications).

Case Studies

Web Workers

Instead of performing the traditional Fibonacci calculation mentioned earlier, I decided to find a more realistic scenario, which consisted in the lemmatization of text blocks. In NLP (Natural Language Processing), lemmatization regroups words based on their common lemma. To be concrete, the words “eat”, “ate” and “eaten” all belong to the lemma “to eat”. Lemmatization’s ultimate goal is to regroup the various inflected forms of a word.
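To give an idea of the treatment each Web Worker performs, here is a toy lemmatization sketch. A real lemmatizer relies on dictionaries and morphological rules; the lookup table below is purely illustrative and not the study’s actual data.

```javascript
// Illustrative lemma lookup table (an assumption for the sketch)
const lemmaTable = {
  ate: "eat", eaten: "eat", eating: "eat", eats: "eat",
  ran: "run", running: "run", runs: "run"
};

// Group the inflected forms of a text under their common lemma
function lemmatize(text) {
  const groups = {};
  for (const word of text.toLowerCase().split(/\W+/).filter(Boolean)) {
    const lemma = lemmaTable[word] || word;
    (groups[lemma] = groups[lemma] || []).push(word);
  }
  return groups;
}

console.log(lemmatize("He ate while running; she eats and runs"));
// → { he: ["he"], eat: ["ate", "eats"], ..., run: ["running", "runs"], ... }
```

Applied to megabytes of text, a per-word lookup like this is exactly the kind of CPU-bound, embarrassingly parallel job that can be split across workers.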

That said, I developed a prototype to lemmatize 16MB of text (an arbitrary amount) with either 1 Web Worker or 4 Web Workers. The CPU was closely monitored with Performance Monitor, and so were the overall execution times. In order to interpret the results, I used some basic statistical functions, namely the Standard Deviation and the Frequency Distribution, which allowed me to estimate the load spread across CPU cores. Of course, the exact same code was executed in the different browsers (Edge, Chrome and Firefox). In order to identify a trend, a pattern, I ran multiple tests, which resulted in 480 CPU activity measures over 60 executions in total (20 per browser: 10 with 1 Web Worker and 10 with 4). The CPU was an Intel quad-core (8 logical processors with hyper-threading). What comes out of these experiments is that all browsers use the CPU cores appropriately, although they do not show the same performance. Here are a few charts showing the overall results:


The above chart reflects the CPU activity with 1 Web Worker over 10 executions (Firefox only). We clearly see, thanks to the Frequency Distribution, that almost 50% of the cores are at rest (between 0 and 7% of activity), meaning that most of the load is handled by 1 or 2 cores. The Standard Deviation is almost as high as the Average, which usually indicates a disparity among the values. The same test was run with 4 Web Workers and resulted in the following:


Here we see a much different curve, with most of the values close to the Average. The Standard Deviation is way below the Average. We see that no CPU core is at rest, since the lowest percentage of activity is about 17%. This indicates a much better load spread over the different cores. Microsoft Edge and Chrome showed similar observations:

On top of the better CPU usage, the overall performance was dramatically improved by leveraging the Web Workers:



Web Workers do not only allow parallel processing that leaves the UI thread fluid; they also spread the load correctly over the multiple cores of a CPU. The overall execution time isn’t divided by the number of Web Workers, but the ratio is rather good.
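For reference, the two statistics used above to interpret the CPU samples can be sketched as follows; the sample values are made up for illustration, not taken from the actual measurements.

```javascript
function mean(xs) {
  return xs.reduce((a, b) => a + b, 0) / xs.length;
}

// Population standard deviation
function stdDev(xs) {
  const m = mean(xs);
  return Math.sqrt(mean(xs.map((x) => (x - m) ** 2)));
}

// Frequency distribution: count samples per bucket of `width` percent
function frequencyDistribution(xs, width) {
  const buckets = {};
  for (const x of xs) {
    const b = Math.floor(x / width) * width;
    buckets[b] = (buckets[b] || 0) + 1;
  }
  return buckets;
}

// Made-up CPU activity samples: 2 busy cores, the rest nearly idle
const samples = [2, 5, 3, 90, 88, 4, 6, 95];
console.log(mean(samples).toFixed(1));           // "36.6"
console.log(stdDev(samples).toFixed(1));         // "42.2"
console.log(frequencyDistribution(samples, 10)); // { "0": 5, "80": 1, "90": 2 }
```

Note how the standard deviation exceeds the average here, the exact signature of the 1-Web-Worker runs, where a couple of cores do all the work.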


WebSockets

For this case study, I wanted to evaluate the robustness of WebSockets and compare them to traditional TCP/IP sockets. To achieve these objectives, I built a WebSocket prototype (JavaScript/ASP.NET SignalR) versus a typical .NET socket server and socket client. Here are the results:


The above chart reflects the execution time needed to establish a connection and to send 1 message and receive 1 back.


This chart is the same as the previous one but doesn’t take the connection time into account, which strongly reduces the overall execution time.

Now come two charts with a load test :


Here we see the time needed, in ms, to send 1 message and receive 1000 back from the server. While browsers show similar performance, the desktop prototype (.NET) is way faster. This is confirmed when multiplying the number of concurrent socket clients:


The .NET prototype remains stable, not impacted by the larger number of concurrent clients, while browsers do not handle it very well. This behavior is expected, since RFC 7230 states: « A client ought to limit the number of simultaneous open connections that it maintains to a given server » [Fielding (2014)]. Long story short: WebSockets are not meant to run multiple concurrent clients from the same device.


WebSockets seem fast enough, as it took on average less than 100ms to handle 1000 messages, which is very acceptable. This is way faster than long-polling and/or AJAX, and is not ridiculous compared to traditional TCP/IP sockets. They certainly allow web developers to envision scenarios where speed is key.
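The measurement loop behind this kind of load test can be sketched as follows; the fake in-memory socket is my stand-in for a real WebSocket or TCP client, so the logic can be shown (and run) outside a browser.

```javascript
// Time how long it takes to send 1 message and receive `expected` back.
// `socketLike` only needs a send() method and an onmessage callback,
// which a browser WebSocket exposes natively.
function timeEchoes(socketLike, expected, done) {
  const t0 = Date.now();
  let received = 0;
  socketLike.onmessage = () => {
    received += 1;
    if (received === expected) done(Date.now() - t0, received);
  };
  socketLike.send("start");
}

// Fake socket: echoes back `echoes` messages synchronously upon send()
function makeFakeSocket(echoes) {
  return {
    onmessage: null,
    send() {
      for (let i = 0; i < echoes; i++) this.onmessage({ data: "echo" });
    }
  };
}

timeEchoes(makeFakeSocket(1000), 1000, (elapsedMs, count) => {
  console.log(count + " messages in " + elapsedMs + " ms");
});
```

Swapping the fake socket for a real connection turns this into the 1-message-out / 1000-messages-back scenario measured above.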


Conclusion

HTML5 features do bring serious alternatives to traditional web development and its inherent shortcomings. Web Workers and WebSockets enable new business application scenarios, as they allow shifting the cursor from thin-client (due to previous browser limitations) to fat-client programming. Of course, this comes with new challenges and questions: should we implement business logic in JavaScript, whose source code is always visible despite minifiers? What about security: are browsers safe enough? …


About Stephane Eyskens

Office 365, Azure PaaS and SharePoint platform expert
This entry was posted in Case Studies.
