Duplicate content, although page has "noindex"
Hello, I had an issue with some pages being listed as duplicate content in my weekly Moz report. I've since discussed it with my web dev team and we decided to stop the pages from being indexed. The dev team added this code to the pages: <meta name='robots' content='max-image-preview:large, noindex dofollow' />, but the Moz report is still flagging the pages as duplicate content. Note from the developer: "As far as I can see we've added robots to prevent the issue, but maybe there is some subtle change that's needed here. You could check in Google Search Console to see how it's seeing this content, or you could ask Moz why they are still reporting this and see if we've missed something." Any help much appreciated!
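A side note on the tag itself: robots directives are comma-separated, and "dofollow" is not a standard robots directive; the recognised keyword is "follow" (which is also the default behaviour). A corrected version of the tag above, assuming links on the page should still be followed, would look like this:

    <meta name="robots" content="noindex, follow, max-image-preview:large" />

Also bear in mind that noindex only takes effect if crawlers can still fetch the page, so the URL must not simultaneously be blocked in robots.txt.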
@rj_dale have you added a rel=canonical tag to the page to signal to Google which version of the page is the correct one? Even if a page doesn't have a duplicate, add a self-referencing canonical tag, and if you need any more help, speak to a freelance SEO consultant.
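In concrete terms, a self-referencing canonical is a link element in the page's head pointing at the page's own preferred URL. A minimal sketch, with example.com standing in for the real domain:

    <link rel="canonical" href="https://example.com/this-page/" />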
Based on your description, it appears the page is already indexed in the search engine; this is being picked up by the Moz tool and reported as duplicate content. To check, take the URL that the Moz tool flags as a duplicate and enter it into Google with the site: operator, e.g. site:example.com/moz-flagged-duplicate-url. This will show whether the page is already indexed. If it is, I would recommend the following:

a) Review the two or more similar URLs flagged as duplicates and see whether they are actual duplicates or a tool-based error. If it is a tool-based error, you can ignore the issue. If the pages are actual duplicates, evaluate them in terms of backlinks and incoming traffic, choose the preferred version that should be indexed in search engines, and place a canonical tag pointing to that preferred version on all the pages flagged as duplicates.

b) Remove the noindex tag. It is not the right technique for handling duplicate content for SEO results.

c) Wait for Google to recrawl and update its results. You can promote the page on social media, in email marketing campaigns, or by building backlinks to it; with time, search engines will pick up these changes and update their indexed results.
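For (a), the tag goes in the head of every page flagged as a duplicate and points at the single preferred URL. A sketch, with placeholder URLs:

    <!-- placed on each duplicate page -->
    <link rel="canonical" href="https://example.com/preferred-page/" />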
Related Questions
Is it OK to have multiple domains (separate websites, different content) rank for similar keywords?
Is it 'OK' to have multiple domains in the following instance? Does Google actively discourage multiple domains (completely different sites) from the same company appearing in the search results for the same or similar keywords if the content is slightly different? This is where the 'main site' has the details and you can purchase the product, and the second site is a blog site only. We are creating a separate content blog site on a second domain that will be related to one portion of content on the main site. They would link back and forth, or maybe the blog site would just link over to the main site so visitors can purchase said product. A similar scenario, to give you an idea of how it would be structured: MAIN SITE: describes a few products, and you can purchase from this site. SECOND SITE, different domain: a blog site that contains personal experiences with one of the products. Both sites would be linked back and forth, or, as mentioned, maybe the blog site would just link over to the 'main site'. The logo would be a modified version of the main logo, and the look and feel of the site would be similar but not exactly the same. MORE INFO: the main site has existed for well over 10 years and is starting to gain some traction in an extremely competitive market, but does not rank super high; it is gaining traction due to improvements in speed, content, on-page SEO, etc. So in addition to my main question of whether it is 'OK' to have this second domain: will it hurt the rankings or negatively affect the 'main' site? Wondering about duplicate content issues, except it will be slightly different... SEO Tactics | fourwhitesocks
Duplicate Content and Subdirectories
Hi there, and thank you in advance for your help! I'm seeking guidance on how to structure a resources directory (white papers, webinars, etc.) while avoiding duplicate content penalties. If you go to /resources on our site, there is a filter function. If you filter for webinars, the URL becomes /resources/?type=webinar. We didn't want that dynamic URL to be the primary URL for webinars, so we created a new page at /resources/webinar that lists all of our webinars and includes a featured webinar up top. However, the same webinar titles now appear on both the /resources page and the /resources/webinar page. Will that cause duplicate content issues? P.S. Not sure if it matters, but we also changed the URLs for the individual resource pages to include the resource type. For example, one of our webinar URLs is /resources/webinar/forecasting-your-revenue. Thank you! Technical SEO | SAIM_Marketing
Quick Fix to "Duplicate page without canonical tag"?
When we pull up Google Search Console, in the Index Coverage section, under the Excluded category, there is a sub-category called 'Duplicate page without canonical tag'. The majority of the 665 pages in that section are from a test environment. If we were to include in the robots.txt file a wildcard covering every URL that starts with the particular root URL ("www.domain.com/host/"), could we eliminate the majority of these errors? That solution is not one of the 5 or 6 recommended solutions that the Google Search Console Help text suggests. It seems like a simple, effective solution. Are we missing something? Technical SEO | CREW-MARKETING
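For reference, the rule the question describes would look like this in robots.txt (using the question's example path; robots.txt rules are prefix matches, so no wildcard is actually needed):

    User-agent: *
    Disallow: /host/

One caveat: a Disallow stops crawling but does not remove URLs that are already indexed, which may be why it is absent from Google's recommended fixes.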
Duplicate content through product variants
Hi, before you shout at me for not searching: I did, and there are indeed lots of threads and articles on this problem, so I realise it is not exactly new or unique. The situation: I am dealing with a website that has 1 to N variants of a product (N being between 1 and 6 so far). There are no dropdowns for variants; that is not technically possible short of a complete redesign, which is not on the table right now. The product variants are also not linked to each other, but share about 99% of their content (obvious problem here), and in the "search all" they show up individually. Each product variant is a different page, unconnected in the backend as well as the frontend. The system is quite limited in what can be added and entered, but I may have some opportunity to influence smaller things, such as enabling canonicals. In my opinion, the optimal choice would be to retain one page for each product, the base variant, and then add dropdowns to select extras/other variants. As that is not possible, I feel the best solution is to canonicalise all versions to one (either the base variant or the best-selling variant?) and to offer customers a list on each product page giving them a direct path to the other variants. I'd be thankful for opinions, advice, or completely new approaches I have not even thought of! Kind regards, Nico Technical SEO | netzkern_AG
Can you noindex a page, but still index an image on that page?
If a blog is centered around visual images, and we have specific pages with high-quality content that we plan to index and drive traffic to, but we have many pages with just our images, what is the best way to go about getting those images indexed? We want to noindex all the image-only pages because they are thin content. Can you noindex,follow a page, but still index the images on that page? Please explain how to go about this... Technical SEO | WebServiceConsulting.com
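One approach commonly suggested for this situation: keep the thin pages noindexed, but list the image files in an image sitemap so crawlers can still discover them. A sketch of a Google image-sitemap entry, with placeholder URLs:

    <?xml version="1.0" encoding="UTF-8"?>
    <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
            xmlns:image="http://www.google.com/schemas/sitemap-image/1.1">
      <url>
        <loc>https://example.com/image-page/</loc>
        <image:image>
          <image:loc>https://example.com/images/photo.jpg</image:loc>
        </image:image>
      </url>
    </urlset>

Whether Google will index images whose only host page is noindexed is not guaranteed, so it is worth testing on a handful of pages first.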
Home Page /index.htm and .com Duplicate Page Content/Title
I have been whittling away at the duplicate content on my clients' sites, thanks to SEOmoz's pro report, and have been getting pushback from the account manager at register.com (the site was built there and the owner doesn't want to move it). He says these are the exact same page and he can't access one to redirect to the other. Any suggestions? The SEOmoz report says there is duplicate content on both of these URLs: Durango Mountain Biking | Durango Mountain Resort - Cascade Village http://www.cascadevillagehotel.com/index.htm and Durango Mountain Biking | Durango Mountain Resort - Cascade Village http://www.cascadevillagehotel.com/ Your help is greatly appreciated! Sheryl Technical SEO | TOMMarketingLtd.
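If the host does allow an .htaccess file, the standard fix for this pattern is a 301 from /index.htm to the root. An Apache sketch, assuming mod_rewrite is available (which may not be the case on register.com's builder):

    RewriteEngine On
    RewriteCond %{THE_REQUEST} ^[A-Z]+\ /index\.htm
    RewriteRule ^index\.htm$ http://www.cascadevillagehotel.com/ [R=301,L]

The RewriteCond checks the raw request line so internal DirectoryIndex subrequests don't loop. Failing that, a canonical tag on both URLs pointing at the bare domain consolidates the two in the same way.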
Duplicate content on ecommerce sites
I just want to confirm something about duplicate content. On an eCommerce site, if the meta titles, meta descriptions, and product descriptions are all unique, yet a big chunk at the bottom (featuring "why buy with us" etc.) is copied across all product pages, would each page be penalised, or not indexed, for duplicate content? Does the whole page need to be a duplicate to worry about this, or would this large chunk of text, bigger than the product description, have an effect on the page? If this would be a problem, what are some ways around it? Because the content is quite powerful and is relevant to all products... Cheers, Intermediate & Advanced SEO | Creode
Meta tag "noindex,nofollow" by accident
Hi, 3 weeks ago I wanted to release a new website (made in WordPress), so I neatly created 301 redirects for all the files and folders of my old HTML website and transferred the WordPress site into the index folder. Job well done, I thought, but after a few days my site suddenly disappeared from Google. I read in other Q&As that this could happen, so I waited a little longer, until I finally saw today that a robots meta tag with "noindex, nofollow" had been added to every page. For some reason, the WordPress setting "I want to forbid search engines, but allow normal visitors to my website" was selected, although I never even opened that section, called "Privacy". So my question is: will this have a negative impact on my PageRank afterwards? Thanks, Sven Technical SEO | Zitana
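For anyone hitting the same accident: that checkbox maps to the blog_public option in the WordPress database. A quick way to verify and fix it, assuming WP-CLI is installed (unticking the setting in the dashboard does the same thing):

    wp option get blog_public       # "0" means search engines are being discouraged
    wp option update blog_public 1  # allow indexing again

Once flipped, the noindex meta tag disappears and the pages can return to the index as Google recrawls them.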