Yet more Google variants (13 in all!) – for our Spanish-speaking customers!

v.5.4.3 – 13 more countries supported & some lovely crawl data!

You ask and we deliver! Version v.5.4.3 deployed today includes more search engine support and a nice update to the crawl data we show in the platform.

12 more Latin American and Caribbean countries added (& Mauritius)

Well, the World Cup is heading that way this year and, besides, a few of our customers were asking for support for these additional Google search engine variants. If you don’t recognise some of the flags above, the complete list is:

  • Mauritius
  • Guatemala
  • Uruguay
  • Puerto Rico
  • Jamaica
  • Trinidad & Tobago
  • Bolivia
  • Honduras
  • El Salvador
  • Costa Rica
  • Dominican Republic
  • Paraguay
  • Panama

These are now live. To take advantage of them, you can either add a new campaign to an existing site or add a new site – it will depend on what you’re trying to track. That takes our total of supported search engine variants to over 120! As ever, if there are any you think we’re missing and would like to see added to the platform, just let us know. It doesn’t take long for us to add them now.

Detailed Crawl Data

Our poor old piece of crawling code (“Curious George”) already has quite a bit of work to do, but we’ve asked it to do more with this latest release. Now, when you log in and go to the Pages Crawled module, you’ll notice that we display the following (filterable) breakdown in a new column in the table under the chart:

  • “200 OK”
  • “Blocked by robots.txt”
  • “Blocked by rel nofollow”
  • “Blocked by meta noindex”
  • “Blocked by meta nofollow”
  • “Unable to connect”

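To make the breakdown above concrete, here’s a minimal sketch of how a crawler might bucket a fetched URL into those statuses. This is our own illustration, not the platform’s actual code: the `classify_page` function, the `CuriousGeorge` user-agent string, and the regex-based meta-tag check are all simplifying assumptions.

```python
# Illustrative sketch only - not the platform's real crawler code.
import re
from urllib.robotparser import RobotFileParser

def classify_page(url, robots_txt, status_code, html=""):
    """Return one crawl-status label for a URL (simplified)."""
    # 1. robots.txt is checked before fetching the page at all.
    rp = RobotFileParser()
    rp.parse(robots_txt.splitlines())
    if not rp.can_fetch("CuriousGeorge", url):
        return "Blocked by robots.txt"
    # 2. Connection-level failure (no HTTP response at all).
    if status_code is None:
        return "Unable to connect"
    # 3. Meta robots directives in the fetched HTML (regex is a
    #    simplification; a real crawler would use an HTML parser).
    meta = re.search(
        r'<meta[^>]+name=["\']robots["\'][^>]+content=["\']([^"\']+)["\']',
        html, re.IGNORECASE)
    directives = meta.group(1).lower() if meta else ""
    if "noindex" in directives:
        return "Blocked by meta noindex"
    if "nofollow" in directives:
        return "Blocked by meta nofollow"
    if status_code == 200:
        return "200 OK"
    return f"HTTP {status_code}"

robots = "User-agent: *\nDisallow: /private/"
print(classify_page("https://example.com/private/a", robots, 200))
# "Blocked by robots.txt"
print(classify_page("https://example.com/page", robots, 200,
                    '<meta name="robots" content="noindex,follow">'))
# "Blocked by meta noindex"
```

(The “Blocked by rel nofollow” status applies to how a URL was discovered – via a nofollowed link – rather than to the page itself, so it isn’t covered by this page-level sketch.)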
We have also improved the messaging in the platform, so it’s now clear to users when we’ve come across any issues that you need to address as a priority, including:

– The homepage redirects us to an “off-domain” site with a 301 or 302 redirect
– The homepage gives us a 4XX error
– The homepage gives us a 5XX error
– The homepage is marked as nofollow
– We are blocked by robots.txt
– We are unable to connect to your homepage at all

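The homepage checks above can be sketched as a simple decision function. Again, this is a hedged illustration under our own assumptions – the `homepage_issue` helper, its parameters, and the exact message strings are ours, not the platform’s real API; it assumes the caller has already performed the fetch and passes in the outcome.

```python
# Illustrative sketch only - assumes the fetch has already happened and
# the caller supplies the result (status code, redirect target, flags).
from urllib.parse import urlparse

def homepage_issue(site_url, status_code, redirect_to=None,
                   robots_blocked=False, meta_nofollow=False):
    """Return a priority-issue message for a homepage fetch, or None."""
    if status_code is None:
        return "We are unable to connect to your homepage at all"
    if robots_blocked:
        return "We are blocked by robots.txt"
    # Off-domain 301/302: the redirect target's host differs from the site's.
    if redirect_to and status_code in (301, 302):
        site_host = urlparse(site_url).netloc
        target_host = urlparse(redirect_to).netloc
        if target_host and target_host != site_host:
            return f"The homepage redirects to an off-domain site ({target_host})"
    if 400 <= status_code < 500:
        return f"The homepage gives us a {status_code} (4XX) error"
    if 500 <= status_code < 600:
        return f"The homepage gives us a {status_code} (5XX) error"
    if meta_nofollow:
        return "The homepage is marked as nofollow"
    return None  # no priority issue found

print(homepage_issue("https://example.com", 301, "https://other.com/"))
# "The homepage redirects to an off-domain site (other.com)"
```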
When we encounter any of these issues, all crawl-related components will be marked in grey (information only) with the relevant message, while Pages Crawled will be marked in red. If a component is responsible for the failure (e.g. the 4XX component), that will also be marked in red.

Lastly, the Robots.txt component now has a table listing all pages we found that were blocked by robots.txt.

Nice, eh? 🙂

Obviously, enabling Google to crawl your site is pretty essential – how else can it work out what your website is all about? There will also be pages you want crawled and some you don’t. Filtering the table in these modules can now give you some quick insights. If we can’t find pages, it’s likely there’s something you need to examine, as Google (and Bing) might be prevented from crawling those URLs too (although, please bear in mind, we don’t crawl JavaScript links and Google might).
