Duplicate content is a common occurrence on the web, and in many cases it can hurt search engine rankings. While the search engines may not always technically penalize webmasters for duplicate content, there are still plenty of ways it can hurt a site's search visibility.
WebProNews is covering the Search Marketing Expo (SMX) East in New York, where representatives from the three major search engines (Google, Yahoo, and Bing) discussed how their respective web properties handle duplicate content issues. Following are some takeaways from each.
Duplicate Content in Google
The way Google handles duplicate content has been discussed a great deal recently. This is largely due to a video Google's Greg Grothaus uploaded, in which he discusses at length how Google handles various aspects of the duplicate content question.
Joachim Kupke, Sr. Software Engineer on Google's Indexing Team, reiterated much of what Grothaus said. He also said that Google has a great deal of infrastructure for eliminating duplicate content, including:
- redirects
- detection of recurrent URL patterns (the ability to 'learn' recurring URL patterns that produce duplicated content)
- comparison of actual page contents
- the most recently crawled version of a page
- earlier crawled content
- page contents minus the elements that don't change across a site
Kupke said to avoid dynamic URLs when possible (although Google is "rather good" at eliminating dupes). If all else fails, use the canonical link element. Kupke calls this a "Swiss Army Knife" for duplicate content issues.
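For reference, here is a minimal sketch of what that Swiss Army Knife looks like in a page's head section; the URLs are hypothetical:

```html
<!-- On a duplicate URL such as http://www.example.com/product?sessionid=123 -->
<head>
  <link rel="canonical" href="http://www.example.com/product" />
</head>
```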
Have you followed all the duplicate content rules and still been penalized?
Google says the canonical link element has been tremendously successful. It didn't even exist a year ago, and it has grown exponentially. It has had a huge impact on Google's canonicalization decisions; two out of three times, the canonical tag actually alters the organic decision in Google.
Google says a common mistake is designating a 404 as canonical, which is typically caused by unnecessary relative links. Also avoid changing rel="canonical" designations back and forth, and avoid designating URLs that permanently redirect as canonical.
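To illustrate the relative-link pitfall with hypothetical paths: a relative href is resolved against the current page's URL, so the same tag can end up pointing at a page that doesn't exist.

```html
<!-- Risky: on http://www.example.com/shop/widgets/, this resolves to
     http://www.example.com/shop/widgets/index.html, which may be a 404 -->
<link rel="canonical" href="index.html" />

<!-- Safer: an absolute URL always resolves to the intended page -->
<link rel="canonical" href="http://www.example.com/shop/widgets/" />
```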
Also, do not use disallow directives in robots.txt to annotate duplicate content. It makes it harder to detect dupes, and disallowed 404s are a nuisance. There is an exception, however: interstitial login pages may be a good candidate to "robot out," according to Kupke.
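In other words, leave duplicates crawlable so Google can fold them together, and reserve robots.txt for cases like the login exception Kupke mentions. A sketch, with hypothetical paths:

```
# robots.txt
# Don't do this just to hide duplicates -- it prevents Google from
# detecting and consolidating them:
# Disallow: /print-version/

# Reasonable exception: interstitial login pages (hypothetical path)
User-agent: *
Disallow: /login-interstitial/
```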
Kupke says that canonical works, but indexing takes time. "Be patient and we WILL use your designated canonicals." Cleaning up an existing part of the index takes even longer, and this may leave dupes serving for a while despite rel=canonical, Kupke adds.
At SMX, Google announced that cross-domain rel=canonical is coming later this year. So, for example, if a Chicago Tribune article is republished on the New York Times site, and the rel=canonical on the New York Times copy points to the Chicago Tribune, then Google will credit only the Chicago Tribune with the content.
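In that scenario, the tag would live on the syndicated copy and point back to the original; a sketch with hypothetical URLs:

```html
<!-- Placed in the <head> of the republished copy on nytimes.com -->
<link rel="canonical" href="http://www.chicagotribune.com/original-article" />
```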
Duplicate Content in Bing
As far as how Bing views duplicate content, intention is key. If your intent is to manipulate the search engine, you will be penalized.
Sasi Parthasarathy, Program Manager at Bing, says to consolidate all versions of a page under one URL. "Less is more, in terms of duplicate content." If possible, use only one URL per piece of content.
Bing isn't supporting the canonical link element as a ranking factor yet, but that is coming. Bing does say to use it; it's just not really a ranking factor in Bing yet. Bing says there has been an increase in the usage of canonical tags over the past six months, but adoption issues still exist. According to Parthasarathy, 30% of canonical tags point to the same domain (which is fine), and 9% point to other domains. That could be a mistake, or it could be manipulative; Bing says it will look at other factors to try to determine which it is.
Bing says canonical tags are hints and not directives. "Use it with caution," and not as an alternative to good web design.
With regard to www vs. non-www, just pick one and stick with it consistently, and remove default filenames (like index.html) from the end of your URLs. Bing also says 301 redirects are your best friend for redirecting; use rel="nofollow" on useless pages, and use robots.txt to keep content you don't want crawled out of the index.
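One common way to implement that www consolidation is a server-level 301; here is a sketch for Apache's mod_rewrite, assuming (hypothetically) that www.example.com is the version you picked:

```apache
# .htaccess -- 301 redirect the non-www host to www (hypothetical domain)
RewriteEngine On
RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]
```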
Duplicate Content in Yahoo
If everything goes according to plan, worrying about how Yahoo handles duplicate content will soon mean worrying about how Bing handles it (with Bing set to power Yahoo's search results), but Yahoo's Cris Pierry, Sr. Director of Search, offered a few additional tips.
Pierry says URLs should be descriptive and easily readable, and it's not a good idea to change URLs every year. In addition, use the canonical tag, avoid case-sensitive variations of the same URL, and avoid session IDs and parameters in URLs.
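To illustrate with hypothetical URLs, the first is the kind of descriptive, stable URL Pierry recommends, while the second mixes case and carries a session ID:

```
Good:  http://www.example.com/articles/duplicate-content-tips
Avoid: http://www.example.com/Articles/Show.php?id=472&sessionid=8F3A2C
```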
Pierry also says to use sitemaps and submit them to Yahoo Site Explorer. Improve indexing through proper robots.txt usage, and use Site Explorer to delete URLs that you don't want Yahoo to index. Finally, provide feeds to Yahoo Site Explorer, and report spam sites linking to you in Site Explorer.
Yahoo says metadata and SearchMonkey are enhancing the presentation of search results.