Many of us have seen SEO click-through rate (CTR) studies performed on large data sets, but what can we learn from them and, more to the point, are they truly representative? Given the ever-changing nature of the SERPs, are CTR studies too crude and limited in scope to capture the multi-faceted nature of a typical SERP? Is there, in fact, even such a thing as a typical SERP anymore?
Taking those questions and the previous studies as a starting point, and considering the factors that influence the human user, their query and their intentions, this post explores what it all means.
What we have seen so far
Several studies of click-through rate (CTR) have been performed on this topic over the last few years. So, how did they go about it?
Optify assumed that all searches resulted in a top-20 click, while the Slingshot SEO study calculated CTR by dividing Google Analytics visits by AdWords search volumes.
Meanwhile, Catalyst and AWR defined CTR as the percentage of impressions that resulted in a click, using data from Google Webmaster Tools (GWT). These studies plotted the mean CTR against "exact", "average" or "search rank" position.
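To make the difference between these two definitions concrete, here is a minimal sketch in Python; all of the figures are hypothetical and purely illustrative.

```python
# Two ways the studies defined CTR, with hypothetical numbers.

# Slingshot SEO: organic visits (Google Analytics) / search volume (AdWords)
ga_visits = 1_200          # monthly organic visits for a keyword
adwords_volume = 8_000     # monthly searches reported by AdWords
ctr_slingshot = ga_visits / adwords_volume          # 0.15 -> 15%

# Catalyst / AWR: clicks / impressions (Google Webmaster Tools)
gwt_clicks = 950           # clicks reported by GWT for the keyword
gwt_impressions = 7_400    # impressions served for the keyword
ctr_gwt = gwt_clicks / gwt_impressions              # ~0.128 -> ~12.8%

print(f"Slingshot-style CTR: {ctr_slingshot:.1%}")
print(f"GWT-style CTR:       {ctr_gwt:.1%}")
```

Note that the two denominators measure different things (estimated search demand versus impressions actually served), so the two methods can legitimately disagree for the same keyword.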
How will CTR help me?
It is relatively straightforward to generate an online market share model using Analytics SEO's keyword tracking tool: apply a simple CTR model to the search volumes and rankings of your own and your competitors' keywords to forecast clicks (volume of visits). This is extremely powerful.
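As a sketch of the idea (not Analytics SEO's actual implementation), forecasting clicks is just a lookup of a CTR value by rank multiplied by search volume; the CTR curve, domains and keyword data below are all made up for illustration.

```python
# Hypothetical mean CTR by organic position (fractions, positions 1-5)
CTR_BY_POSITION = {1: 0.30, 2: 0.15, 3: 0.10, 4: 0.07, 5: 0.05}

# (keyword, ranking domain, position, monthly search volume) -- made-up data
rankings = [
    ("blue widgets",  "oursite.com",    1, 12_000),
    ("blue widgets",  "competitor.com", 3, 12_000),
    ("cheap widgets", "competitor.com", 1,  5_400),
    ("cheap widgets", "oursite.com",    4,  5_400),
]

# Forecast clicks per domain: CTR(position) x search volume, summed
forecast: dict[str, float] = {}
for keyword, domain, position, volume in rankings:
    clicks = CTR_BY_POSITION.get(position, 0.0) * volume
    forecast[domain] = forecast.get(domain, 0.0) + clicks

total = sum(forecast.values())
for domain, clicks in sorted(forecast.items(), key=lambda kv: -kv[1]):
    print(f"{domain}: {clicks:,.0f} forecast clicks "
          f"({clicks / total:.0%} share of modelled market)")
```

The share of forecast clicks across all tracked domains is the "online market share" in miniature.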
What we did
From our client database, we took data for the 1,153 websites that have Google Webmaster Tools accounts connected to our platform. Search rankings were all "organic" (not universal) and were analysed independently of territory. We then segmented the data by various sub-categories, some of which the other studies had not recognised.
The chart below shows a sample of our raw data. When we produced our own mean curve, it looked, unsurprisingly, very similar to the others.
In all, we found the underlying data to be widely distributed about the mean (shown in blue). The two grey lines in the graph below bound the middle 50% of the data points, and the median, the middle of the data set, is plotted in black.
What does this mean? That there is huge variation in the data, and a mean line alone isn't representative.
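A quick way to see this for yourself is to compare the mean with the median and interquartile range of CTR at a fixed position. The sketch below uses randomly generated, skewed data standing in for real per-keyword GWT observations; the distribution parameters are invented for illustration only.

```python
import numpy as np

rng = np.random.default_rng(42)
# Stand-in for per-keyword CTR observations at one position:
# a skewed (lognormal) spread, clipped to the valid [0, 1] range
ctrs = np.clip(rng.lognormal(mean=-1.6, sigma=0.8, size=5_000), 0, 1)

mean = ctrs.mean()
median = np.median(ctrs)
q25, q75 = np.percentile(ctrs, [25, 75])

print(f"mean   : {mean:.1%}")    # pulled upward by the long right tail
print(f"median : {median:.1%}")  # the 'middle' keyword
print(f"middle 50% of keywords: {q25:.1%} .. {q75:.1%}")
```

On skewed data like this, the mean sits well above the median, which is exactly why a single mean curve misrepresents what most keywords actually experience.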
Segmentation
One common theme in the other studies was the attempt to "segment" the dataset by certain factors. To better understand what really makes the user click through, they incorporated the following factors: whether the search intent was for a brand or not, the number of words per keyword phrase, and whether the source device was mobile or desktop.
We applied the same segmentations and, unsurprisingly, saw similar results. So, what next? Whilst we acknowledge that we can't measure two key influential variables, the user's state of mind and Google's algorithm itself, we were able to add a few more metrics that were easily obtainable and missing from the previous studies.
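With clicks-and-impressions data in hand, this kind of segmentation is a straightforward group-by. A minimal pandas sketch, assuming a table with the columns shown (the column names and rows are our own invention, not from any of the studies):

```python
import pandas as pd

# Hypothetical per-(keyword, position) GWT export; word_count is available
# as a further segmentation dimension
df = pd.DataFrame({
    "position":    [1, 1, 1, 1, 2, 2],
    "is_brand":    [True, True, False, False, False, True],
    "device":      ["desktop", "mobile", "desktop", "mobile", "desktop", "mobile"],
    "word_count":  [1, 2, 3, 1, 2, 1],
    "clicks":      [420, 310, 180, 150, 90, 120],
    "impressions": [1000, 900, 850, 700, 650, 600],
})

# CTR per segment: sum of clicks / sum of impressions within each segment
segmented = (
    df.groupby(["is_brand", "device", "position"])
      .agg(clicks=("clicks", "sum"), impressions=("impressions", "sum"))
)
segmented["ctr"] = segmented["clicks"] / segmented["impressions"]
print(segmented)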
The large website effect
The first interesting finding was that the size of the pool of ranking keywords for a particular domain affects click-through rate immensely, for non-brand searches in particular. The click-through pattern is so remarkably different that it should not be ignored.
Below you can see the large website effect for non-brand terms.
The long tail effect
The second non-standard finding we can report is that variation in the number of impressions served (i.e. the number of searches logged) for a keyword has a significant impact: greater numbers of impressions per keyword seem to reduce the magnitude of CTR across all positions in the top 10.
This phenomenon was not just true for our clients' non-brand GWT data; AWR's publicly available data set exhibits near-identical properties. AWR chose a cut-off for their study and only considered data points where the number of impressions was greater than 50. Here you can see the impact of changing that cut-off level.
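Reproducing this is a one-line filter: drop keywords below an impression threshold and recompute the mean CTR. A self-contained sketch with made-up rows (chosen so that low-impression, high-CTR long-tail keywords drop out as the threshold rises):

```python
import pandas as pd

# Hypothetical GWT rows: one keyword per row, all at position 1
df = pd.DataFrame({
    "position":    [1, 1, 1, 1],
    "clicks":      [9_000, 40, 9, 3],
    "impressions": [900_000, 800, 45, 20],
})
df["ctr"] = df["clicks"] / df["impressions"]

# AWR kept only rows with impressions > 50; vary the cut-off and watch
# the mean CTR fall as long-tail keywords are excluded
for cutoff in (0, 50, 1_000):
    kept = df[df["impressions"] > cutoff]
    print(f"cut-off > {cutoff:>5}: mean CTR {kept['ctr'].mean():.1%} "
          f"({len(kept)} keywords)")
```

The direction of the effect matches what both datasets show: the higher the impression cut-off, the lower the reported mean CTR at a given position.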
This variation, seen clearly in both datasets, may be related to the nature of the keyword phrases searched for. So-called "long tail" keywords, the rarer kind, attract far lower search volumes but may, conversely, attract a higher proportion of click-through events than more commonplace keywords do. One reason for this increased CTR might be reduced competition in the same space (i.e. less choice).
When very high search volumes (greater than 1 million impressions served) are observed, we see click-through behaviour similar to that for brand keywords: a steep slope and much higher CTR at position 1.
Avoidance of weighting the data
In addition, we noticed that our dataset was by no means evenly distributed between the sub-categories, and we suspect the other studies suffered from the same phenomenon. The sample sizes, in terms of impressions served, imply that our "mean" CTR values will be skewed by the biggest underlying class. A famous example of the impact of not weighting data by sample size is well documented here.
Since desktop, non-brand, single-word searches make up the majority of our sample, why should they be allowed to skew the reported CTR? On the other hand, perhaps our "sample" of client data is an accurate snapshot of universal clicking behaviour, and we should reflect these proportions in the data presented.
Finally, if the number of impressions served actually has an impact on CTR itself (i.e. the user sees "Amazon" more frequently in results and starts to trust the site more), we should not use it as a weighting factor but as a segmentation factor. In fact, our conclusion was that any CTR data should be heavily stratified: broken up into as many dimensions as possible.
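The difference is easy to demonstrate: an unweighted mean over segments and an impression-weighted mean can tell very different stories when one segment dominates the sample. A minimal sketch with made-up segment-level numbers:

```python
# Hypothetical position-1 CTR for three segments of very different sizes
segments = {
    # segment: (ctr, impressions in sample)
    "desktop / non-brand / 1 word": (0.21, 900_000),  # dominates the sample
    "mobile / brand":               (0.35,  40_000),
    "desktop / brand":              (0.40,  60_000),
}

unweighted = sum(ctr for ctr, _ in segments.values()) / len(segments)
weighted = (
    sum(ctr * imp for ctr, imp in segments.values())
    / sum(imp for _, imp in segments.values())
)

print(f"unweighted mean CTR:     {unweighted:.1%}")  # 32.0%
print(f"impression-weighted CTR: {weighted:.1%}")    # 22.7%, dragged
                                                     # towards the big segment
```

Reporting each stratum separately, as in the table below, sidesteps the question of which single number is "right".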
The table below shows a sample of our segmented CTR findings. All values are percentages.
| Position | Brand, Desktop | Brand, Mobile | Non-brand, Desktop | Non-brand, Mobile |
|---------:|---------------:|--------------:|-------------------:|------------------:|
| 1        | 40.2%          | 35.1%         | 21.4%              | 24.3%             |
| 2        | 21.7%          | 17.1%         | 14.5%              | 17.8%             |
| 3        | 11.5%          | 12.0%         | 10.9%              | 14.1%             |
| 4        | 8.7%           | 9.5%          | 8.2%               | 11.5%             |
| 5        | 6.6%           | 8.7%          | 6.6%               | 8.7%              |
| 6        | 4.1%           | 4.1%          | 5.5%               | 6.5%              |
| 7        | 3.6%           | 2.8%          | 4.5%               | 5.3%              |
| 8        | 1.7%           | 3.4%          | 3.9%               | 4.1%              |
| 9        | 1.5%           | 1.3%          | 3.2%               | 3.5%              |
| 10       | 1.5%           | 1.3%          | 2.7%               | 3.0%              |
| 11       | 1.1%           | 0.9%          | 2.2%               | 3.0%              |
| 12       | 2.9%           | 0.5%          | 1.9%               | 3.3%              |
| 13       | 1.5%           | 0.6%          | 1.8%               | 2.4%              |
| 14       | 1.3%           | 0.7%          | 1.7%               | 3.3%              |
| 15       | 1.0%           | 1.5%          | 1.5%               | 3.4%              |
| 16       | 1.8%           | 0.4%          | 1.4%               | 2.7%              |
| 17       | 0.7%           | 0.0%          | 1.3%               | 3.5%              |
| 18       | 1.3%           | 0.5%          | 1.3%               | 3.1%              |
| 19       | 0.1%           | 2.7%          | 1.0%               | 2.2%              |
| 20       | 0.4%           | 0.0%          | 1.0%               | 2.1%              |