Introducing crawlbin - A Service for Testing SEO Directives and HTTP Responses

Towards the end of last year, I joined Tom Anthony in the R&D division at Distilled (you can read more about that here). We have been quietly working on a number of internal tools to support our consultants and help our clients. Today I’m announcing our first external tool, which is the output of a hackday.

To help test some of the tools we are building, we needed a way of easily generating pages with various search engine directives enabled or disabled. One hackday later (and the inevitable few hours of overrun to finish the documentation), we are ready to launch.

The people who will benefit most from crawlbin are those writing or testing crawlers or tools that need to handle potentially misleading signals. If you have a crawler as part of your in-house tools, then please do consider using crawlbin as a target for your tests.

We think crawlbin would also make an excellent addition to a list of resources for anyone wanting to learn SEO. You can use crawlbin to deliberately generate pages with technical issues, which is a great starting point for anyone setting out to learn the technical side of SEO.

In the future we may introduce the ability to shorten and obfuscate the URL flags, and to remove the help text from the output. This would expose pages with technical issues but would hide what those issues were, making it a perfect resource for someone learning technical SEO who wants to see technical issues in the wild. At that point, watch out for crawlbin URLs in our DistilledU lessons.

crawlbin URLs:

crawlbin accepts a list of flags in the URL which toggle various directives and HTTP responses. For example, you can simulate a page with a noindex tag by using the meta_noindex flag, which adds the following to the page:

<meta name="robots" content="noindex" />
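If you are writing crawler tests against crawlbin, a check for this flag might look something like the sketch below. It is only a sketch: it assumes the Python requests and BeautifulSoup libraries, and a hypothetical flag-in-the-path URL format at crawlbin.com (check the homepage for the real format).

import requests
from bs4 import BeautifulSoup

def robots_meta_content(url):
    # Fetch the page and return the content of its robots meta tag, if any.
    response = requests.get(url)
    soup = BeautifulSoup(response.text, "html.parser")
    tag = soup.find("meta", attrs={"name": "robots"})
    return tag["content"] if tag else None

# A page generated with the meta_noindex flag should carry a noindex directive.
content = robots_meta_content("http://crawlbin.com/meta_noindex/")  # assumed URL format
assert content is not None and "noindex" in content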

You can add a (self-referencing) canonical tag to the page using the html_canonical_self flag:

<link rel="canonical" href="" />
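A test for this flag could resolve the canonical href against the requested URL and check that it points back at itself, roughly as in the sketch below, under the same assumptions as above (requests, BeautifulSoup, and an assumed URL format).

import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

def canonical_href(url):
    # Return the absolute canonical URL declared in the page's <head>, if any.
    response = requests.get(url)
    soup = BeautifulSoup(response.text, "html.parser")
    link = soup.find("link", rel="canonical")
    return urljoin(url, link["href"]) if link else None

url = "http://crawlbin.com/html_canonical_self/"  # assumed URL format
assert canonical_href(url) == url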

The power of crawlbin comes when you start combining the various flags to allow you to generate the sorts of issues you are likely to encounter when doing any sort of technical audit.

For example, by combining a response_301 flag with an html_canonical_next_block flag, you can simulate a canonical tag that references a page that subsequently 301s. This is surprisingly common and, whilst typically not disastrous, it is something that can and should be fixed. You see this sort of issue with automatic redirects set up to handle things like redirecting http -> https or automatically appending trailing slashes if they don’t exist.

<link rel="canonical" href="" />
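One way a crawler can catch this is to fetch the canonical target without following redirects and flag anything other than a 200. Again this is only a sketch, under the same library assumptions, and the combined-flag URL below is illustrative rather than the exact crawlbin syntax.

import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

def canonical_target_status(page_url):
    # Find the canonical target, then fetch it without following redirects.
    response = requests.get(page_url)
    soup = BeautifulSoup(response.text, "html.parser")
    link = soup.find("link", rel="canonical")
    if link is None:
        return None
    target = urljoin(page_url, link["href"])
    return requests.get(target, allow_redirects=False).status_code

# Illustrative combined-flag URL; see the homepage for the exact syntax.
status = canonical_target_status("http://crawlbin.com/html_canonical_next_block/response_301/")
if status in (301, 302, 307, 308):
    print("Canonical points at a URL that redirects (%s) - worth fixing" % status)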

Alternatively, by combining two different canonical_tag flags, you can simulate a page that returns contradictory canonical tags. Sending contradictory signals to the search engines is never a great idea, and since this is something we can control, it should be fixed.

<link rel="canonical" href="" /> along with Link: <>; rel="canonical" (as an HTTP header)
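A crawler can surface this by collecting every canonical signal it sees (the <link> element in the HTML head and the Link response header) and warning when they disagree. A sketch under the same assumptions; the flag names in the URL below are purely illustrative.

import re
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

def all_canonicals(url):
    # Collect canonical URLs from both the HTML head and the Link HTTP header.
    response = requests.get(url)
    found = set()
    link = BeautifulSoup(response.text, "html.parser").find("link", rel="canonical")
    if link:
        found.add(urljoin(url, link["href"]))
    header = response.headers.get("Link", "")
    match = re.search(r'<([^>]*)>\s*;\s*rel="canonical"', header)
    if match:
        found.add(urljoin(url, match.group(1)))
    return found

# Illustrative flag combination; see the homepage for the real flag names.
canonicals = all_canonicals("http://crawlbin.com/html_canonical_self/http_canonical_other/")
if len(canonicals) > 1:
    print("Contradictory canonical signals:", canonicals)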

The full list of flags can be seen on the crawlbin homepage.

We have decided to release crawlbin into the wild in the hope that it proves useful to others. In the future (once we've removed the ugliest code that is typical of a time-constrained hackday) we plan on open sourcing the code. We’d love any feedback you have, or any thoughts on future enhancements - just add a comment below.

About the author
Duncan Morris

Duncan founded Distilled with Will in 2005. Duncan was CEO of Distilled for just over 5 years before he handed the reins to Will in 2014. Duncan is now Chairman, a non-executive role.