Who’s in the lead? Understanding browser market share statistics: Part 2
This is part 2 of a three-part series on understanding how web browser market share statistics are determined – and how you can interpret them for your own projects. In part 1 I highlighted the issue using a recent study released by Shareaholic. In this blog post you’ll discover why the reported numbers are often different, yet usually all correct – and how both of those things can be true.
The differences in how these Internet usage, or browser usage, statistics are reported lie in the datasets used and how the measurements are made. To put it simply, each group has access to its own data – usually from a network of websites using its code, plugins, software or services. This allows them to track how many visitors come to those websites, the types of devices they use, the browsers used, and so on – much like anyone running an analytics program for their own website can.
This is the first piece of the puzzle – where the data comes from and how much data they have.
Each organisation has a different number of websites and as a result a different amount of data to analyse. Some organisations include all types of traffic – desktop, mobile, tablet, and others – in one statistic. Others break up the data by device. Still others allow you to filter the data as you choose.
Where the data comes from…
In the case of the Shareaholic survey results, they used a network of over 200,000 publishers (read “publisher” as “website”, though publishers may have more than one website) and they include all traffic in their analysis (for this article, the comparisons I use consider all traffic in order to match Shareaholic). From those publishers they track over 250 million web users and their choices of browser. That’s a lot of users and browsers.
Net Applications, mentioned as a comparison point to the Shareaholic report in a blog post on The Next Web, has a network of over 40,000 websites. They claim to examine 160 million unique visits per month to their network. At netmarketshare.com you can sort through their data and examine it using filters. It’s possible to look at just one device type, restrict the geographical source of the traffic (e.g. look at just U.K. traffic), or apply a number of other filters.
My favorite source for most browser-related statistics is StatCounter.com. It’s my favorite because they actually give you access to the raw data, and you can make pretty graphs quickly and easily (see the graph below for the same time period as the Shareaholic study). The interface is much simpler than Net Applications’ as well. They also have one of the largest global databases available: 3 million websites, and over 15 billion page views per month. That’s even more users and browsers.
But is more, better? Sometimes. If it is actually more of what you want.
Which brings me to another set of data, offered by W3schools.com. They claim to be the world’s largest web developers’ site, with over 28 million visits per month. In 2013 they had 1.2 billion page views. Again, lots of users and browsers. But the majority of their traffic is from web developers. Do web developers use the same browsers as the average user? Looking at their data, I’d have to say mostly. But the numbers are in stark disagreement with Shareaholic when it comes to Safari.
… and what they do with the data matters
Besides using different data, the reporting organisations don’t always calculate statistics the same way.
StatCounter, W3Schools and Shareaholic use web page visits as the basis for their market share calculations. That is, if one person using Chrome visits ten pages, that is counted as ten data points for Chrome. According to the StatCounter website, this provides “unbiased stats on internet usage trends”.
By contrast, Net Applications measures unique users to each of their network websites per day. Though it isn’t entirely clear from the description on their FAQ page, it would seem that if a user visited a site twice in one day but used different browsers, they would still only be counted once. Even if that isn’t the case, counting unique visitors will certainly produce a different statistic than counting web page visits, as the others I mention do.
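The difference between the two counting methods is easy to see with a toy example. The sketch below uses a tiny, invented visit log (the names and numbers are purely hypothetical, not drawn from any of these organisations’ data) to show how the same day of traffic can put one browser in the lead by page views and a different one in the lead by unique visitors:

```python
from collections import Counter

# Hypothetical one-day visit log: (user, browser, pages_viewed).
# Invented purely to illustrate the two counting methods.
visits = [
    ("alice", "Chrome", 10),
    ("bob", "Firefox", 2),
    ("carol", "Firefox", 3),
    ("dave", "Safari", 1),
]

# Method 1: share of page views (the StatCounter/W3Schools/Shareaholic approach).
# Every page loaded counts as one data point for that browser.
page_views = Counter()
for user, browser, pages in visits:
    page_views[browser] += pages
total_pages = sum(page_views.values())
pageview_share = {b: n / total_pages for b, n in page_views.items()}

# Method 2: share of unique daily visitors (the Net Applications approach).
# Each user counts once for the day, no matter how many pages they load.
unique_users = Counter(browser for user, browser, pages in visits)
total_users = sum(unique_users.values())
visitor_share = {b: n / total_users for b, n in unique_users.items()}

print(pageview_share)  # Chrome leads: 10 of 16 page views (62.5%)
print(visitor_share)   # Firefox leads: 2 of 4 unique visitors (50%)
```

Same traffic, two different “market leaders” – which is exactly why reports built on different counting methods can disagree while both being correct.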
I want to be clear that none of these ways of measuring usage are wrong. They are just different. But knowing how they are different can help you better understand what the reported statistics mean.
Whether it’s just the sources of data, or how the numbers are calculated, the differences often lead to confusion for anyone trying to read the trends or make use of the information.
Come back for Part 3 of this blog series where I’ll cover some guidelines on how to interpret the data and share some best practices on how to apply it to your own projects.