v.5.4.3 – 13 more countries supported & some lovely crawl data!
You ask and we deliver! Version 5.4.3, deployed today, includes support for more search engines and a nice update to the crawl data we show in the platform.
12 more Latin and South American countries added (& Mauritius)
Well, the World Cup is heading that way this year and, besides, a few of our customers were asking for additional support for these Google search engine variants. If you don’t recognise some of the flags above, the complete list is:
- Puerto Rico
- Trinidad & Tobago
- El Salvador
- Costa Rica
- Dominican Republic
These are now live. If you want to take advantage of them, you can either add a new campaign to an existing site or add a new site, depending on what you’re trying to track. That takes our total of supported search engine variants to over 120! Again, if there are any you think we’re missing and you’d like to see added to the platform, just let us know. It doesn’t take long for us to add these now.
Detailed Crawl Data
Our poor old piece of crawling code (“Curious George”) already does quite a bit of work, but we’ve asked it to do more with this latest release. Now, when you log in and go to the Pages Crawled module, you’ll notice that we display the following (filterable) breakdown in a new column in the table under the chart:
- “200 OK”
- “Blocked by robots.txt”
- “Blocked by rel nofollow”
- “Blocked by meta noindex”
- “Blocked by meta nofollow”
- “Unable to connect”
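To make the buckets concrete, here’s a minimal sketch of how a crawler might sort a fetched page into one of these statuses. This is illustrative only (not our actual crawler code, and the `classify` function and sample robots.txt are ours), using just Python’s standard library:

```python
import urllib.robotparser

# A toy robots.txt that blocks everything under /private/
ROBOTS_TXT = """\
User-agent: *
Disallow: /private/
"""

rp = urllib.robotparser.RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

def classify(url, status_code=200, meta_robots="", link_rel=""):
    """Map a fetched page onto the buckets shown in the Pages Crawled table.

    status_code, meta_robots and link_rel stand in for what an HTTP fetch
    and HTML parse would return; None means the connection failed.
    """
    if status_code is None:
        return "Unable to connect"
    if not rp.can_fetch("*", url):
        return "Blocked by robots.txt"
    if "nofollow" in link_rel:
        return "Blocked by rel nofollow"
    if "noindex" in meta_robots:
        return "Blocked by meta noindex"
    if "nofollow" in meta_robots:
        return "Blocked by meta nofollow"
    if status_code == 200:
        return "200 OK"
    return f"{status_code} error"

print(classify("https://example.com/private/page"))  # → Blocked by robots.txt
print(classify("https://example.com/"))              # → 200 OK
```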
We have also improved the messaging in the platform, so that now it’ll be clear to users if we’ve come across any issues which you need to address as a priority, including:
- The homepage redirects us to an “off-domain” site with a 301 or 302 redirect
- The homepage gives us a 4XX error
- The homepage gives us a 5XX error
- The homepage is marked as nofollow
- We are blocked by robots.txt
- We are unable to connect to your homepage at all
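The checks above can be sketched as a single function. Again, this is a hypothetical illustration rather than the platform’s actual logic; the `homepage_issue` function and its parameters are ours, standing in for the result of an HTTP fetch:

```python
from urllib.parse import urlparse

def homepage_issue(home_url, status=200, location=None,
                   meta_robots="", robots_blocked=False):
    """Return the first priority issue found for a homepage fetch, or None.

    status is the HTTP status code (None = connection failed), location is
    the redirect target (if any), meta_robots is the meta robots content,
    and robots_blocked indicates robots.txt disallowed the fetch.
    """
    if status is None:
        return "Unable to connect to the homepage"
    if robots_blocked:
        return "Blocked by robots.txt"
    if status in (301, 302):
        home_host = urlparse(home_url).netloc
        target_host = urlparse(location or "").netloc
        if target_host and target_host != home_host:
            return "Homepage redirects off-domain"
    if 400 <= status < 500:
        return "Homepage gives a 4XX error"
    if 500 <= status < 600:
        return "Homepage gives a 5XX error"
    if "nofollow" in meta_robots:
        return "Homepage is marked as nofollow"
    return None  # no priority issue found

print(homepage_issue("https://example.com/",
                     status=301,
                     location="https://other.com/"))
# → Homepage redirects off-domain
```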
When we encounter any of these particular issues, all crawl-related components will be marked grey (information only), with the relevant message, while Pages Crawled will be marked red. If a component is responsible for the failure (e.g. the 4XX component), that component will also be marked red.
Lastly, the Robots.txt component now has a table listing all of the pages we found that were blocked by robots.txt:
Nice, eh? 🙂
Oh, and if we’re ever having trouble crawling a site, you’ll see it showing up in the Pages Crawled component. There are usually common reasons for this; see this Knowledge Base article for more information (Why are you having trouble crawling my site?).