
Nine Talks in One Day! Brighton SEO

One of the disadvantages of being a sponsor and exhibitor at Brighton SEO is that you often find yourself on your feet manning a stand and miss out on some of the talks. Thankfully, this year I managed to find myself a seat in a few of the talks – well, I say I managed to find a seat… here was my ‘seat’ in one particular talk (I think it was Gareth Simpson’s one on Machine Learning – see below):

BrightonSEO presentation sat on the floor

So, yes, Brighton SEO was heaving for yet another year. It still amazes me how Kelvin and the team down there have managed to make such a success of this conference year in year out – so much so that tickets still get snapped up as soon as they’re released, literally in minutes!

One of the main challenges for people attending such an event is making a call on which talks to attend and which to forego. Luckily for me, I knew that our marketing team would again be live streaming the talks in the main auditorium, so I could leave others to attend our stand and head to some of the side rooms to listen to some of the other speakers.

Please note that a lot of these notes were rushed, so apologies if they don’t give your talks full credit. I’ve added links to the full slides where I could find them.


#1: “8 Ways to Increase your Ecommerce Conversion Rate” – Faye Watt

Faye Watt presentation at BrightonSEO
A terrible picture of Faye delivering her presentation

I liked this one. There were some great tips. Obviously, e-commerce is massive – consumers purchased $2.8tn worth of goods and services last year online – so any marginal advantage you can exploit has the opportunity to generate great ROI for a lot of large sites.

Here are just a few of Faye’s tips:

  1. Personalisation is key. Everyone’s different, and it’s amazing how much information websites can gather about their users – just ask Facebook and Cambridge Analytica! 😀 – so try to make each visit as personalised as possible.
  2. Providing recommendations to a visitor is a massive opportunity that e-commerce sites cannot afford to miss. You probably don’t think much about how these are generated or on what basis, but Faye gave some examples of what sites might use – previous purchasing history, or other styles and colour options (for clothing, for example).
  3. Cross-sells – I’d always call these ‘upsells’ (where a salesman would sell a customer Brand B when Brand A was unavailable), but 🤷🏼‍♂️ wotevs! Anyway, Faye was pointing out that you should always make sure you’re maximising revenue from captive shoppers by providing additional purchasing options – “Hey, Mr Consumer, most people who bought Product A also bought Product B!” I know from over a decade in direct sales myself how lucrative incremental sales can be, so do do this! It’s an easy opportunity for extra dollars! Kerching! 🤑
  4. She had some warnings about visitors using ad blockers as these may lead to recommendations being hidden.
  5. She recommended using varied and high-quality images (including lifestyle images, which I thought was a nice tip) and provided John Lewis’s website as a good example of this technique.
  6. User Reviews – quantity is important here. Make this process as easy as possible. Nike was a bad example; Adidas a good one.
  7. Shipping Costs. Lots of users abandon their checkouts when the shipping costs are too high. If you can’t absorb the cost, be transparent about the delivery cost upfront. ancestry.com was given as a good example of what NOT to do in this regard. Apple do things right – well, don’t they always? 😀
  8. Guest checkouts – allow your users to fill their baskets anonymously at first. Don’t put up those barriers unnecessarily.
  9. Checkout processes – Faye mentioned making processes easy again – form designs are important, people! ASOS was given as a poor example when dealing with coupon fields (loved the level of specificity there, Faye, but then, as a Product Manager, I’m a self-confessed ‘detail nerd’! 🤓) – ASOS have recently been suffering with online sales, incidentally. Apple is a bad example here – 35 form fields! Dystopian bureaucracy there, people! 
  10. Trust – if users don’t trust your site with their credit card details, they’ll abandon the checkout.
Faye Watt Guest Checkout
No, I don’t know why it’s so grainy either! 🤷🏼‍♂️

There are probably lots more tips you can glean from the full set of slides, which can be found here.

#2: LMFAO: Leveraging Machines for Awesome Outreach – Gareth Simpson

  • Machine Learning – the most accessible, simplest form of AI.
  • Facebook interprets images down to the actual sport being played. 😲
  • They looked at Google Assistant (‘conversational technologies’). Bot use needs to be declared, though. 
  • 6 Step process: Objective -> Ideation -> Prospect -> Pitch -> Negotiate -> Deliver (Have to say that “ideation” is one of my least favourite buzzwords, but I let Gareth off using it here)
  • AI allows them to automate the parts of the process where human <-> human interaction isn’t quite so necessary.

Classification of emails with Monkey Learn was interesting. He used Pitchbox + Zapier + Monkey Learn. This process helps them ‘triage’ their emails so they’re only following up on contacts that have potential. Good idea! Who doesn’t hate dealing with thousands of emails?
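I didn’t capture the exact setup, but the triage idea is roughly this: run every inbound reply through a classifier and only surface the promising ones for a human to deal with. Here’s a minimal Python sketch using simple keyword rules as a stand-in for the classification step – the labels and phrases are mine for illustration, not Gareth’s actual setup:

```python
# Toy email triage: a keyword-rule stand-in for the ML classification step.
# Labels and phrases are illustrative only, not the setup described in the talk.

INTERESTED = ("sounds good", "happy to", "send it over", "yes please")
REJECTED = ("not interested", "unsubscribe", "no thanks", "remove me")

def triage(email_body: str) -> str:
    """Decide whether an outreach reply is worth a follow-up."""
    text = email_body.lower()
    if any(phrase in text for phrase in REJECTED):
        return "ignore"
    if any(phrase in text for phrase in INTERESTED):
        return "follow_up"
    return "review_manually"

replies = [
    "Sounds good - send it over and I'll take a look.",
    "Please remove me from your list.",
    "Who is this?",
]
for reply in replies:
    print(triage(reply), "->", reply)
```

In the real thing, the `triage` function would be a call out to a trained classifier rather than keyword matching, but the workflow around it is the same.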

He gave #journorequests as an example of a great opportunity for gaining links for your content. Huh. Hadn’t heard of that, but I’ll give it a gander. This did come with a warning, though – too much data can lead to #FOMO, so he recommended using tags and filters to find the more relevant opportunities.

He also mentioned Phrasee to optimise subject lines in emails? Wow. 😲 The tools people invent!

Have to say, I loved the use of the robot artwork in his presentation. He definitely got some geek points from me for that!

This was a good talk. AI and ML are hot topics in the industry at the moment, so his slides are definitely worth checking out (if you can find them – I’ve looked on SlideShare – where are they, Gareth?)


#3: The Art of Content Necromancy: How to Resurrect a Dead Campaign – Kat Kynes

Kat started talking about a campaign for webuyanycar.com. They wanted to come up with a suitable topic for revitalising some content and thought that as driverless cars were a hot topic (thanks Google and Elon Musk!), they would look at that.

They conducted some research and found that people, on average, spend around 220 hours each year driving to and from work. So they simply asked what people would do with this free time if they could get to and from work in a driverless car. There were some surprising (and not-so-surprising) results – 10% of people thought they’d be having sex, and the contrast between the number of men who thought they’d be having sex and the number of women who thought the same got a good laugh from Kat’s audience. Surely this wouldn’t be possible unless you’re car sharing, right? Right? 😜

Despite the interesting subject matter, the campaign still didn’t take off, so they refocused it and got some decent coverage and links from the nationals (The Sun and the Metro, for example).

Their team had a decent process approach to content development:

  1. Pre-Outreach – ask journalists what might interest them
  2. Work with internal PR teams and agree an outreach approach (they’d hit an issue with one particular PR team before)
  3. Don’t just focus on obvious publishers
  4. Make the pitch personal – top tip! Personal gain is a good incentive in any type of selling, which is effectively what you’re doing when you pitch content ideas. You’re never just selling ‘to a business’ – you’re selling to people, and people have their own motivations and ambitions. Recognising this was a great tip, because outreach really is no different from any other form of sales or marketing, IMHO.
  5. Timing is important!
  6. Develop relationships
  7. Don’t give up! Try a different angle – re-package and re-purpose.

Good job, Kat! This was interesting and full of good content and some lovely visualisations, but then what else would you expect from someone working in the content marketing field? 🤷🏼‍♂️😀


#4: Restructuring Websites to Improve Indexability – Areej AbuAli

“Improving Indexability” – Areej Abuali, HOSEO, Verve Search

Disclaimer: OK, I have to start with a disclaimer first – Areej is a friend and former work colleague of mine, so I’m going to sound a little biased, but she did an awesome job, even if it did involve 220 slides! That must be some sort of Brighton SEO record, surely, Kelvin?

Areej is HOSEO (‘Hoe-Say-O’, as we say here!) at one of the UK’s leading agencies – Verve Search – and here she was tackling a technical topic: restructuring websites 😬.

For her example, she told a story about a job aggregator client which had been losing visibility and had sacked its previous agency as a result. Areej’s team spent 6 months analysing everything from the client’s links to their technical setup and infrastructure to the actual content on the site. In the end, they provided the client with a 70-page report containing 50 recommendations, as well as the following observations:

  • 72% of the client’s backlinks came from just 3 unique referring domains
  • They had no sitemaps!!! 🤦🏼‍♂️#DeleteYourAccount
  • The site was full of duplicate content duplicate content 😀
  • Canonical tags were messed up
  • Their internal linking was a nightmare 🍝

So Verve spent half a day going through their recommendations with the client. Still, Areej wasn’t quite happy, so she went back to the drawing board and came up with her ‘Supplementary Findings’, which was her phrase for:

This was my favourite slide of the day! 😂Nice.

One thing she’d noticed was that the nature of their site structure meant they were potentially generating an infinite number of URLs, all of which could get indexed – and were being indexed.

She crawled the site again and found 2.5m URLs! 😌 She realised that there were no robots.txt directives, so she devised a custom ‘combo script’ to let them selectively control which pages got indexed by Google. This reduced the effective size of the site, but their traffic dropped even further. Oops! 😬#Awks
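I didn’t get the detail of the ‘combo script’ itself, but the underlying fix is the familiar one: add robots.txt rules so the infinitely-generated parameter URLs stop being crawled, while the real landing pages stay open. Here’s a minimal Python sketch of how you might sanity-check such rules – the paths and parameters are hypothetical, not the client’s actual patterns:

```python
# Sketch: test hypothetical robots.txt rules against sample URLs using the
# standard library. The disallowed path and the URLs are made up for illustration.
from urllib.robotparser import RobotFileParser

robots_txt = """
User-agent: *
Disallow: /jobs/search
""".splitlines()

parser = RobotFileParser()
parser.parse(robots_txt)

urls = [
    "https://example.com/jobs/london-developer",        # a real landing page
    "https://example.com/jobs/search?q=dev&page=4821",  # one of the infinite parameter combos
]
for url in urls:
    allowed = parser.can_fetch("Googlebot", url)
    print("crawlable" if allowed else "blocked", "->", url)
```

(Note that urllib’s parser only does simple prefix matching; Google’s own robots.txt handling also supports wildcards, so real rules can be more precise than this.)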

So, Areej went back through Verve’s recommendations to the client and discovered they’d not implemented over half of them, despite saying they had. 🙄 Clients, eh! And, what’s more, they’d mostly just done the easiest ones. Well, that’s human nature, isn’t it? I read a post about procrastination recently (on Medium.com). Everyone does it. I’ve been doing it with this very post, which is why it’s over 2 weeks late! 🤦🏼‍♂️

The thing I really liked about Areej’s talk is that it was refreshing to hear someone at one of these conferences not tell a simple ‘success story’ along the lines of “Hey! Everyone! Look at how great I am/we are at my/our jobs!” – it made the talk funny, honest and interesting. Kudos, Areej! Top job!


#5: Simple ways to visualise your crawl data with no coding knowledge required – Anders Riise Koch

Anders gave a rather technical talk on visualising crawl data. He talked about letting the visualisations deliver ‘stories rather than stats’, as most people (65%) are ‘visual learners’.

Like Benjamin later (see below), Anders talked about using Python or R to crawl websites. Personally, I’d recommend Python. I’ve had no more than a fiddle with R and, whilst it seemed pretty simple, it appears to be best suited to people wanting to carry out complicated mathematical calculations. For crawling, Python is generally the go-to language (although Ruby with the Mechanize gem is also a good choice).

There are many libraries you can use to visualise crawl data, but Anders plumped for Gephi as an advanced option. Again, I’ve taken a cursory look at Gephi and whilst it’s powerful, I’ve never been that impressed with the actual quality of the results.
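For what it’s worth, if you do fancy trying the Gephi route, the workflow is essentially: turn your crawler’s internal-link export into a graph file Gephi can open, then let its force-directed layouts do the rest. A rough Python sketch using networkx – the CSV filename and column names are assumptions about a typical inlinks export, not any specific crawler’s format:

```python
# Sketch: build a link graph from a crawler's inlinks export and save it as a
# GEXF file Gephi can open and lay out (e.g. with a force-directed layout).
# The filename and column names are assumed, not a specific tool's format.
import csv
import networkx as nx

graph = nx.DiGraph()

with open("inlinks.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        source = row["Source"]       # page the link sits on
        target = row["Destination"]  # page the link points to
        graph.add_edge(source, target)

# A simple metric to size nodes by in Gephi: how many internal links point in.
nx.set_node_attributes(graph, dict(graph.in_degree()), "inlinks")

nx.write_gexf(graph, "site-structure.gexf")
print(graph.number_of_nodes(), "pages,", graph.number_of_edges(), "internal links")
```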

He said you should:

  • Pick a crawler
  • Export your inlinks (internal links)
  • Export your backlinks (links from other sites pointing to yours)
  • Use a PC with a GPU (unless it’s too busy bitcoin mining, of course!)
  • Use Excel (Well, what else? 🤷🏼‍♂️)

I’m, no doubt, not doing his talk any justice here (there was a lot to take into account), but his visualisations were cool – loved the use of force-directed graphs in some of them. His slides (above) are well worth a look.


#6: Crawl Budget is dead, please welcome Rendering Budget – Robin Eisenberg

Robin highlighted that there is a direct correlation between your page speed and Google’s crawl budget; simply put, the faster your site loads, the more pages Google can get through.

It’s been known for a good while that Google is capable of crawling JavaScript, but I’d not heard anyone specifically reference such a thing as a render budget, whereby Google gives your site a certain amount of leeway to read content that is added through JavaScript.

There’s a lot of detail in his slides about the rendering process, and he also touches on server-side rendering. He also referenced the excellent open-source Lighthouse tool, which will provide you with loads of technical analysis of your website’s rendering performance.

There’s quite a lot here, but if your website has a lot of frontend (or backend) JavaScript running, it’s worth checking out his recommendations to make sure that Google is finding and indexing the content you want indexed.
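One quick sanity check related to this: fetch a page the way a plain HTTP client sees it and look for your key content in the raw response. If it only appears after JavaScript runs, you’re relying on Google spending render budget to see it. A minimal sketch, assuming a placeholder URL and placeholder phrases:

```python
# Sketch: check whether important content is present in the raw (unrendered) HTML.
# If it isn't, that content only exists once JavaScript runs and depends on Google
# rendering the page. The URL and phrases below are placeholders.
import requests

URL = "https://example.com/some-product-page"
MUST_HAVE = ["Product title goes here", "Add to basket", "£"]

response = requests.get(URL, timeout=10, headers={"User-Agent": "render-check/0.1"})
html = response.text

for phrase in MUST_HAVE:
    status = "present in raw HTML" if phrase in html else "MISSING - JS-only?"
    print(f"{phrase!r}: {status}")
```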


#7: Screaming Frog + Xpath: A Guide to Analyse the Pants Off Your Competition – Sabine Langmann

Sabine’s talk was an in-depth examination of how you can use a feature of the popular crawling tool, Screaming Frog, to extract specific details from websites using element XPaths. There was a lot here and I didn’t take many notes because I already knew quite a bit about XPath, but this presentation is definitely worth checking out if you fancy gathering some specific competitor data from a website.
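If you’ve not played with this before, the idea is simple: you give the crawler an XPath expression per field you want, and it pulls that element out of every page it visits. The same thing outside Screaming Frog looks roughly like this in Python with lxml – the URL and class names here are hypothetical:

```python
# Sketch: extract specific page elements with XPath, the same idea behind
# Screaming Frog's custom extraction. URL and class names are hypothetical.
import requests
from lxml import html

page = requests.get("https://example.com/product/123", timeout=10)
tree = html.fromstring(page.content)

# One XPath per field you want to pull out of every page.
title = tree.xpath("//h1/text()")
price = tree.xpath("//span[@class='price']/text()")
breadcrumbs = tree.xpath("//nav[@class='breadcrumb']//a/text()")

print("Title:", title[0].strip() if title else "not found")
print("Price:", price[0].strip() if price else "not found")
print("Breadcrumbs:", " > ".join(b.strip() for b in breadcrumbs))
```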


#8: Automate your SEO tasks with custom extraction – Max Coupland

“How to automate SEO tasks with custom extraction”, Max Coupland

Max gave another talk on crawling – I was clearly in the right room for this stuff (the Brighton SEO organisers obviously keep the topics quite similar in different areas of the venue). His talk was mainly focused on XPath and Regex. Regex is an extremely useful skillset if you can master it, and very handy when coding. Quite honestly, I don’t know everything about these topics, but I certainly know enough that I didn’t get much out of this one. Or maybe I was just flagging. It was my 8th talk on the trot!
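To give a flavour of why regex earns its keep in this kind of automation, here’s a trivial Python example pulling prices, order numbers and SKUs out of some scraped text – the patterns and sample text are purely mine, for illustration:

```python
# Sketch: the kind of quick regex extraction that makes these tasks easy to automate.
# Patterns and sample text are purely illustrative.
import re

snippet = "Order #48213 shipped. Total: £149.99 (was £199.99). Ref: SKU-AB-7731"

prices = re.findall(r"£\d+(?:\.\d{2})?", snippet)
order_numbers = re.findall(r"#(\d+)", snippet)
skus = re.findall(r"SKU-[A-Z]{2}-\d+", snippet)

print(prices)         # ['£149.99', '£199.99']
print(order_numbers)  # ['48213']
print(skus)           # ['SKU-AB-7731']
```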

One issue he particularly focused on for much of his talk was scraping Google SERP results themselves, and particularly PAA (People Also Ask) results. Now, we do this ourselves a lot here – so much so, in fact, that we have built a new module specifically designed for this purpose (⚠️ Warning! Shameless plug incoming!) – our *NEW* FAQ Explorer module 👇🏻

The Jobs Table for the Authoritas FAQ Explorer module
The Results Table for the Authoritas FAQ Explorer module

So, speaking from experience, I had to disagree with his suggestion that you should set up some custom scraping scripts (via Screaming Frog) for this. Why? Well, maybe I missed it, but I didn’t hear any reference to doing this via proxies, so this, to me, was just a recipe for getting your whole office’s public IP blocked by Google. We’ve certainly dealt with agencies in the past who’ve told us stories about SEOs doing a similar thing with desktop software, only to face irate CEOs who were then blocked from using Google for some time after Google’s bot defences detected the unnatural behaviour. I also seem to remember him saying that Google never changes certain aspects of their markup. That is quite simply not true. Trust me, I personally deal with markup changes almost every day. Google changes things all the time.
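If you do go down the DIY route, at the very least send your requests through proxies rather than straight from the office connection. A bare-bones Python sketch of what I mean – the proxy address is a placeholder, and a real setup would also need rotation, throttling and handling for blocks and CAPTCHAs:

```python
# Sketch: routing scraping requests through a proxy so repeated queries don't all
# originate from your office IP. The proxy address is a placeholder; real setups
# also rotate proxies, throttle requests and handle blocks/CAPTCHAs.
import time
import requests

PROXIES = {
    "http": "http://user:pass@proxy.example.com:8080",
    "https": "http://user:pass@proxy.example.com:8080",
}

queries = ["best running shoes", "how to fix a bike puncture"]

for query in queries:
    response = requests.get(
        "https://www.google.com/search",
        params={"q": query},
        proxies=PROXIES,
        headers={"User-Agent": "Mozilla/5.0"},
        timeout=15,
    )
    print(query, "->", response.status_code)
    time.sleep(5)  # be conservative with request rates
```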

So, his talk was helpful if you’re new to scraping and Regex, but I’d add a big caveat to his recommendations.


#9: Bringing the fun back to SEO with Python – Benjamin Goerler

Python is no doubt an extremely useful programming language, and probably one of the most accessible and least intimidating, and this was the third talk I’d sat through recommending its use. I can’t disagree with that – in fact, we use it a lot here.

It can be easy – to an extent – to crawl a website, but it can quickly become complicated. It’s going to depend on what you’re really trying to achieve. I wouldn’t, for example, suggest jumping straight into using a Python script with Beautiful Soup to try to crawl through millions of pages on your favourite e-commerce site, even if I could incentivise you by showing you how much more Guinness it would allow you to drink. 🍺
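For a handful of pages, though, the Python route really is approachable. Here’s a deliberately tiny crawl sketch with requests and Beautiful Soup – the start URL and page limit are placeholders, and this is nowhere near what you’d need for millions of pages:

```python
# Sketch: a tiny same-site crawl with requests + Beautiful Soup. Fine for a few
# dozen pages; nothing like what millions of pages would need. Start URL is a placeholder.
from collections import deque
from urllib.parse import urljoin, urlparse

import requests
from bs4 import BeautifulSoup

START = "https://example.com/"
MAX_PAGES = 50

seen, queue = set(), deque([START])
domain = urlparse(START).netloc

while queue and len(seen) < MAX_PAGES:
    url = queue.popleft()
    if url in seen:
        continue
    seen.add(url)
    try:
        resp = requests.get(url, timeout=10)
    except requests.RequestException:
        continue
    soup = BeautifulSoup(resp.text, "html.parser")
    print(resp.status_code, soup.title.string if soup.title else "", url)
    # Queue internal links only, stripping fragments.
    for a in soup.find_all("a", href=True):
        link = urljoin(url, a["href"]).split("#")[0]
        if urlparse(link).netloc == domain:
            queue.append(link)
```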

One thing I really liked about his talk, though, was his correlation of the decline in the number of pubs in the UK with the number of times Excel had been updated. That was funny. 😁 And his comparison of the time it takes to analyse data in Excel versus getting a PC to do a lot of the heavy lifting for you was sound.


So that was Brighton SEO done for another 6 months. Bring on September and another round of knowledge sharing! Now what time does the pub open?

Matt OToole, Product Manager for Authoritas & Linkdex
