If you work in SEO, duplicate content can become one of your nightmares. A savvy webmaster will never knowingly post the same content twice, but sometimes, despite our best efforts, duplicate content gets published without our knowledge. How can you prevent that from happening?
Here are some simple SEO checks.
No Internet police will reclaim stolen content for you, so your defence has to live in your own code. A few small changes make scraped copies point back to you instead of helping the scraper pass your content off as theirs. First, use absolute URLs rather than relative ones. For example:
- Absolute URL: https://gauravtiwari.org/about/
- Relative URL: /about/
When you use relative URLs, the browser assumes each link points to a page on whatever domain it is currently viewing, and crawlers make the same assumption. That means a scraped copy of your page ends up linking within the scraper's site instead of yours, which can have terrible outcomes for you in Google's eyes. If your developer cannot re-code the entire site with absolute URLs, use self-referencing canonical tags instead: even when a scraper copies your pages wholesale, the canonical tags still point at your URLs, so Google knows your site is the original source.
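The difference matters because relative links resolve against whoever serves the page. A quick sketch with Python's standard `urllib` shows why (the scraper's domain here is made up for illustration):

```python
from urllib.parse import urljoin

# Bases: your page, and a scraper's verbatim copy of it.
original_base = "https://gauravtiwari.org/some-post/"
scraper_base = "https://scraper-site.example/stolen-post/"

relative_link = "/about/"
absolute_link = "https://gauravtiwari.org/about/"

# On your own site, both forms point home:
print(urljoin(original_base, relative_link))   # https://gauravtiwari.org/about/

# On the scraper's copy, the relative link now serves the scraper...
print(urljoin(scraper_base, relative_link))    # https://scraper-site.example/about/

# ...while the absolute link still points back to the source:
print(urljoin(scraper_base, absolute_link))    # https://gauravtiwari.org/about/
```

In other words, every relative link you publish is a free internal link for anyone who copies your HTML; absolute links keep sending visitors and crawlers back to you.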
If your business operates in several geographical locations, you should have a primary landing page from which users can select their location and be directed to the right sub-directory. For example:
URL 1: www.beautifulbaskets.com/au
URL 2: www.beautifulbaskets.com/fr
Logical as this structure is, evaluate it carefully before setting up the two subdirectories: if both serve largely the same products or content, they will look like duplicates to search engines. Google Search Console can help here when you set up location targeting.
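One common way to tell Google which audience each subdirectory serves is hreflang annotations in each page's `<head>`. A sketch using the example domain above (the exact language/region codes are assumptions about the site, not a prescription):

```html
<link rel="alternate" hreflang="en-au" href="https://www.beautifulbaskets.com/au/" />
<link rel="alternate" hreflang="fr-fr" href="https://www.beautifulbaskets.com/fr/" />
<link rel="alternate" hreflang="x-default" href="https://www.beautifulbaskets.com/" />
```

With these in place, similar pages in the two subdirectories read as regional alternates of each other rather than as competing duplicates.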
HTTP and HTTPS URLs
One of the fastest checks you can run is whether both versions of your site are live at once. The HTTP version should be sent to the HTTPS version with a 301 redirect. Many sites implemented HTTPS only on selected pages, such as checkout and login, for added security; when the crawler visits, it effectively finds two different versions of the same site. Check the www and non-www versions of your site in the same way.
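On an Apache server, for instance, both fixes can be sketched in one `.htaccess` rule set (this assumes mod_rewrite is enabled, and `example.com` stands in for your domain):

```apache
# Force HTTPS and the www host in a single 301 redirect
RewriteEngine On
RewriteCond %{HTTPS} off [OR]
RewriteCond %{HTTP_HOST} !^www\. [NC]
RewriteRule ^(.*)$ https://www.example.com/$1 [L,R=301]
```

After this, every HTTP or non-www request lands on one canonical https://www version, so crawlers only ever see a single live copy of each page.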
Syndication is a great way to put your content in front of a new audience, but you must set proper terms and conditions for anyone who wants to republish it. Ask them to use canonical tags pointing at your original, so that search engines can identify the source. The syndicated copies can also be noindexed, so that duplicate content issues never arise.
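Concretely, the partner's copy of an article might carry either of these tags in its `<head>` (the URL here is a placeholder for your original post):

```html
<!-- Point search engines at the original article -->
<link rel="canonical" href="https://gauravtiwari.org/original-post/" />

<!-- Or keep the syndicated copy out of the index entirely -->
<meta name="robots" content="noindex, follow" />
```

Either option works; the canonical tag consolidates ranking signals on your URL, while noindex simply removes the copy from the contest.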
Perhaps you have retired a sub-domain in favour of a subdirectory, or built an entirely new site. The old content may still be live and largely identical to your new pages, so always 301-redirect discontinued sub-domains. This is especially important if the old site has accumulated a large number of backlinks.
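A retired sub-domain can be mapped onto its new subdirectory with a similar Apache sketch (`blog.example.com` and `/blog/` are hypothetical names for illustration):

```apache
# Forward the old blog.example.com sub-domain to /blog/ on the main site
RewriteEngine On
RewriteCond %{HTTP_HOST} ^blog\.example\.com$ [NC]
RewriteRule ^(.*)$ https://www.example.com/blog/$1 [L,R=301]
```

Because the path is preserved, old deep links and their backlink equity carry over to the matching pages on the new site instead of dead-ending.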
Handle duplicate content properly, or it can keep your new pages from being crawled and indexed. Noindex/nofollow tags, canonical tags and 301 redirects are your main tools, and the checks above are worth adding to your monthly SEO routine to keep duplicate content issues down.