Continuing our series on interpreting web analytics, this article looks at how factors like infinite scrolling affect bounce rate, and how it’s possible to skew your own analytics to produce nicer, “friendlier” results.
“This Is What Happens When Publishers Invest In Long Stories”, declared Co.Labs back in 2013, with a chart showing a dramatic drop in bounce rate:
Wow–looks like it dropped from around 75% to 20%! From our definition of bounce rate (the percentage of visits that only last one page), this means that, pre-dip, roughly 75% of visits were only one page long; post-dip, only 20% of visits were one page long.
Or, in other words: pre-dip, only 25% of visits looked at more than one page; post-dip, 80% of visits looked at more than one page.
Note: Since publication, the author @chrisdannen has appended some corrections, some of which address mistakes in the original data collection. This isn’t a problem for our purposes, as the data still illustrates the points below.
A little further down the article, this chart is shared:
The light blue line shows the average number of pages a visitor looks at in a single visit, and it’s barely changed. But wait–if the proportion of visits that look at more than one page has gone from 25% to 80%, surely this line should have shot up too?
To understand why it doesn’t, let’s first take a little detour.
Let’s suppose you go to one of the Envato Tuts+ home pages, scroll down to the bottom, and click the link for Page 2. When Page 2 loads, your visit won’t count as a bounce for either the front page or Page 2.
Now, imagine we implemented infinite scroll, as seen on sites like Twitter and Facebook: when you scroll down to the last post in the list, the page automatically loads the next ten posts. You’ve done practically the same thing as above, but because you haven’t clicked a link to load a separate page, it doesn’t count as visiting two pages–if you left the site, it would still count as a bounce.
This seems like a fairly arbitrary distinction. Fortunately, Google Analytics provides a way around it: in the code that says “grab the next ten posts and add them to the list”, we can also say, “oh, and tell Google Analytics that this doesn’t count as a bounce”.
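Here’s a minimal sketch of that idea using the analytics.js `ga` command queue. The helper names (`onScrollNearBottom`, `fetchNextPosts`, `appendToList`) are hypothetical stand-ins for whatever the site’s real infinite-scroll code does, and the stub lets the sketch run outside a browser. The key point is that analytics.js events count as “interaction hits” by default, so firing one is enough to stop the visit being recorded as a bounce:

```javascript
// Use the real analytics.js command queue if it's present; otherwise fall
// back to a stub that records its calls, so this sketch runs standalone.
var ga = (typeof window !== 'undefined' && window.ga) || function () {
  ga.calls = ga.calls || [];
  ga.calls.push([].slice.call(arguments));
};

// Hypothetical infinite-scroll handler: fetchNextPosts and appendToList
// stand in for the site's own "grab the next ten posts" code.
function onScrollNearBottom(fetchNextPosts, appendToList) {
  var posts = fetchNextPosts(); // grab the next ten posts...
  appendToList(posts);          // ...and add them to the list.
  // Events are interaction hits by default, so this one call also tells
  // Google Analytics that the visit no longer counts as a bounce.
  ga('send', 'event', 'scroll', 'load-more-posts');
}
```

(The category and action names here are made up; any event will do, as long as it isn’t marked as a non-interaction hit.)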
I think you can reasonably justify manually ignoring a bounce in that example, but–not surprisingly–many sites use that feature to manually ignore bounces in all sorts of other situations.
“Stayed on the page for more than thirty seconds? Tell Google Analytics that this doesn’t count as a bounce!” “Scrolled past the fold? Tell Google Analytics that this doesn’t count as a bounce!”
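Both tactics look something like this sketch (again using the analytics.js `ga` queue, stubbed so it runs outside a browser; the function names, categories, and thresholds are all hypothetical). Note that the scroll check takes its measurements as parameters, whereas a real page would read `window.scrollY` inside a scroll listener:

```javascript
// Stub of the analytics.js command queue, recording calls for inspection.
var ga = (typeof window !== 'undefined' && window.ga) || function () {
  ga.calls = ga.calls || [];
  ga.calls.push([].slice.call(arguments));
};

// "Stayed on the page for more than thirty seconds?" Fire an event after
// a threshold, so the visit stops counting as a bounce.
function trackTimeOnPage(thresholdMs) {
  setTimeout(function () {
    ga('send', 'event', 'engagement', 'time-on-page');
  }, thresholdMs);
}

// "Scrolled past the fold?" Fire an event once the scroll position passes
// the fold height (both passed in so the sketch runs outside a browser).
function trackScrollPastFold(scrollY, foldHeight) {
  if (scrollY > foldHeight) {
    ga('send', 'event', 'engagement', 'scrolled-past-fold');
  }
}
```

For what it’s worth, analytics.js does offer the opposite control too: marking an event with `nonInteraction: true` makes it *not* affect bounce rate, which is useful for events you fire automatically rather than in response to anything the visitor did.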
Personally, I’m not keen on this, because it muddies the definition of a bounce, which can add unnecessary confusion.
And this seems to be exactly what happened at Co.Labs: as soon as you scroll a few pixels down the page, your visit is no longer counted as a bounce. It’s possible that investing in long stories led to visitors scrolling more, but I suspect the scroll tracking was simply implemented at around the same time the Co.Labs editors started their experiment–which alone would explain the sudden drop.
The lesson, again, is that context is important: bounce rate is not a magic number we should be aiming to keep within some range; it’s important to know what it actually represents.
You might have noticed that, although the average pages/visit didn’t change much in the second chart, the average time on page did. That’s a good result, right? Well… maybe not. I’ll explain why in my next post.