It’s a debate we have been having for a little while, as we often see better SEO results from newer, simpler websites than from those that are starting to look a little outdated.
So we decided to see whether we could find anyone else who would back us up on the theory that high-quality website design leads to better and quicker SEO results.
HTML is King
There was a big trend a few years ago to build pretty-looking websites in Flash or Java applets, but while they may look quite impressive, it soon became clear that search engines can’t crawl and digest all the backflips your images are doing for the human eye. Despite advances in crawling technology, HTML is still king when it comes to being indexed in Google or any other search engine, and there are other ways to fit in those fancy images and designs. The other point to mention about Flash is that it isn’t supported on certain systems: Apple’s mobile devices won’t display Flash at all, and as mobile browsing increases, that’s a serious drawback.
Friendly navigation is good for users, and search engines take quite kindly to it too. As well as having content to crawl, Google needs to be able to navigate through your pages easily, and your internal linking needs to be strong to make sure every page is indexed.
If there isn’t a link to a page, then in Google’s eyes it effectively doesn’t exist: when Google’s spider crawls your site, there is simply no route to reach it. In the infographic we can see that the sub-pages of A and B will be crawled, but C will not. If you’re trying to rank for one of these orphaned pages, no amount of keyword targeting, quality content or good links is going to make any difference at all.
Other than simply forgetting to place internal links (which should never happen with a good website design and knowledgeable SEO professionals), there are a few other reasons why pages may not be crawlable:
- Content following a form submission
If visitors need to enter their details or submit a form before accessing a page, search engines won’t be able to crawl that content. Links hidden behind a form cannot be followed by Google’s spiders, so the pages behind them go unindexed.
- Robots.txt and meta robots
These directives are optionally added by the webmaster to prevent search engine spiders from crawling or indexing certain pages. Be warned, however: a misconfiguration may unintentionally stop your website from being crawled properly.
- Pages with thousands of links
Search engines will not crawl an unlimited number of links, so if your page is full to the brim, be aware that Google may not choose to follow each and every one.
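To illustrate the robots.txt point above, here is a minimal sketch of what such a file might look like; the paths are purely hypothetical examples, not recommendations for any particular site:

```
# Applies to all crawlers
User-agent: *
# Keep private or low-value sections out of the crawl
Disallow: /admin/
Disallow: /checkout/
```

The equivalent page-level control is a meta robots tag in the page’s `<head>`, for example `<meta name="robots" content="noindex, nofollow">`. The danger mentioned above is real: a single stray line such as `Disallow: /` blocks crawlers from your entire site, which is why these files should always be double-checked.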
When you’re linking internally, make sure it makes sense too – relevance is yet again key, so use sensible anchor text to link from page to page.
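As a quick sketch of what sensible anchor text means in practice (the URLs here are made up for illustration):

```html
<!-- Vague anchor text: tells search engines nothing about the target page -->
<a href="/services/web-design">click here</a>

<!-- Descriptive, relevant anchor text: describes what the linked page is about -->
<a href="/services/web-design">our web design services</a>
```

Both links work identically for a human clicking them, but the second gives crawlers a clear signal about the subject of the destination page.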
We aren’t saying that an old website will never rank well for relevant keywords, but it will probably take more work to optimise. Following some of these tips, and looking at how your current website is built, can save you a great deal of time in the long run.