We’ve all made mistakes with our SEO, but most are hopefully learnt from and fixed before anyone else notices. Here are seven common SEO mistakes that can damage your site’s ranking and leave you feeling stupid.
Keyword Stuffing
Keyword stuffing is pointless, looks unprofessional and makes you look stupid. Google knows what a natural sentence looks like, and you can be damn sure it doesn’t contain the word Viagra five times.
Whilst you should make sure you include keywords, they should appear natural. Include keywords twice per paragraph at most, unless it makes grammatical sense to use them more. Don’t forget you can use related and associated keywords too.
Canonical Links From The Home Page To Other Pages
Canonical links are a powerful way of reducing duplicate content and letting search engines know what the preferred version of a page is. They can be used by adding:
<link rel="canonical" href="preferred-URL" />
Instead of having a root domain homepage, some sites use canonical links or even 301 redirects to URLs like this: “example.com/homepage-1/pages/node-141”
WHY? I don’t know. It’s a good idea to use canonical links to ensure you’re getting maximum value from links and avoiding duplicate content, but you should make sure they’re pointing to the correct page. Your homepage should be the root: “example.com”.
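As a quick sketch (assuming your site lives at example.com), the homepage canonical should simply point back at the root:

<!-- In the <head> of the homepage: the canonical points at the root URL itself -->
<link rel="canonical" href="https://example.com/" />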
Sitewide Canonical To The Homepage
The ultimate in face palm moments. Adding a canonical link pointing to your homepage from every page on your site is a bad idea. If you’ve done this you’re probably wondering why your site has been slowly de-indexed from Google with only your homepage now showing. Use canonical links carefully.
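A safe pattern, sketched here with hypothetical URLs, is for every page’s canonical to reference that page’s own preferred URL rather than the homepage:

<!-- On https://example.com/services/ the canonical is self-referencing -->
<link rel="canonical" href="https://example.com/services/" />

<!-- On https://example.com/contact/ likewise -->
<link rel="canonical" href="https://example.com/contact/" />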
Bad Robots
The robots.txt file tells crawlers/spiders/bots whether they are allowed to crawl the site. It’s not uncommon to see sites come out of the development stage with a disallow-all rule still in place. This means most search engines cannot crawl or index your site. If you’re having trouble getting your site indexed, this is likely to be the issue. Make sure your site’s robots.txt file doesn’t have this in it:
User-agent: *
Disallow: /
If it does, just remove the “Disallow: /” line and search engines will be able to crawl your site again.
Note: There are some situations where you might want to block some bots from crawling your site. If you’re unsure how to use robots.txt, it’s worth reading up on it before making changes.
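For example, a robots.txt along these lines (the bot name and directory are purely illustrative) lets every other crawler in while blocking one specific bot entirely and keeping everyone out of a private directory:

# Block one specific crawler entirely (bot name is an example)
User-agent: BadBot
Disallow: /

# All other crawlers may access everything except /private/
User-agent: *
Disallow: /private/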
Not Preventing Directory Index Page Access
A confession from me. I left my directory index pages open on my personal site in order to use them for experiments. Unfortunately this led to Google indexing my entire file structure, rather than just the pages I wanted. My 20-page site had around 600 indexed pages that I didn’t want anyone seeing. Not only that, but it had the potential to cause duplicate content & security issues.
If nothing else it taught me how to get pages de-indexed using a combination of .htaccess and URL removal requests.
You too can avoid looking stupid by adding the following to your .htaccess file:
Options -Indexes
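If files have already been indexed, one complementary approach (a sketch assuming Apache with mod_headers enabled; the file extensions are just examples) is to send a noindex header for them from the same .htaccess file:

# Ask search engines to drop raw files from the index (requires mod_headers)
<FilesMatch "\.(txt|log|bak)$">
    Header set X-Robots-Tag "noindex"
</FilesMatch>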
Copied Content
It seems like a good idea at the time, but copying content is at best unlikely to get you any ranking, and at worst will land you with a penalty. I’ve seen hundreds of sites which have literally copied and pasted content, and it takes just five seconds to spot by pasting the text into a Google search. If I can do it, Google can too.
There is no harm in looking at other people’s content and rewriting it. Who knows, you might even improve it and generate some traffic. You can use CopyScape to help identify any content which has been copied on, or from, your site.
Buying Links
It sounds so simple at first: links are all over the place, “I can get some too”. Apparently not. To the untrained eye 1,000 links for $100 looks like a great deal, but as recent events have shown, buying links is the best way to get your site heavily penalized, either manually or algorithmically.
Tell Us All About It
Everybody has made a mistake with their SEO at some point. There’s no judgment here, so tell us all about it over on our Google Plus page.