De-indexing millions of pages - would this work?
Hi all,

We run an e-commerce site with a catalogue of around 5 million products. Unfortunately, we have let Googlebot crawl and index tens of millions of search URLs, the majority of which are very thin on content or duplicates of other URLs. In short: we are in deep. Our bloated Google index is keeping our real content from ranking; Googlebot does not bother crawling our real content (product pages specifically) and hammers the life out of our servers. Since having Googlebot crawl and de-index tens of millions of old URLs would probably take years (?), my plan is this:

- 301 redirect all old SERP URLs to a new SERP URL.
- If the new URL should not be indexed, add a meta robots noindex tag on the new URL.
- When it is evident that Google has indexed most "high quality" new URLs, disallow crawling of the old SERP URLs in robots.txt. Then remove all old SERP URLs directory-style in the GWT URL Removal Tool.

This would be an example of an old URL:
www.site.com/cgi-bin/weirdapplicationname.cgi?word=bmw&what=1.2&how=2

This would be an example of a new URL:
www.site.com/search?q=bmw&category=cars&color=blue

I have two specific questions:

- Would Google both de-index the old URL and not index the new URL after 301 redirecting the old URL to the new (noindexed) URL, as described in point 2 above?
- What risks are associated with removing tens of millions of URLs directory-style in the GWT URL Removal Tool? I have done this before, but then I removed "only" some 50,000 useless "add to cart" URLs. Google itself says that you should not remove duplicate/thin content this way and that using the tool this way "may cause problems for your site".

And yes, these tens of millions of SERP URLs are the result of a faceted navigation/search function let loose for all too long.

And no, we cannot wait for Googlebot to crawl all these millions of URLs in order to discover the 301s. By then we would be out of business.

Best regards,
TalkInThePark
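For illustration only, here is a minimal sketch of steps 1 and 2 of that plan (the 301 plus the conditional noindex), assuming a Python/Flask front end; the route, the whitelist of facets, and the parameter mapping (word mapped to q, what and how dropped) are hypothetical and simply mirror the example URLs above:

```python
# Sketch only: 301 old cgi-bin SERP URLs to the new /search URLs (step 1) and
# noindex the thin ones (step 2). Flask and the parameter mapping are assumptions.
from urllib.parse import urlencode

from flask import Flask, redirect, render_template_string, request

app = Flask(__name__)

INDEXABLE_PARAMS = {"q", "category", "color"}  # assumed "high quality" facets

@app.route("/cgi-bin/weirdapplicationname.cgi")
def old_serp():
    # Step 1: permanently redirect every old SERP URL to its new equivalent.
    new_query = urlencode({"q": request.args.get("word", "")})
    return redirect(f"/search?{new_query}", code=301)

@app.route("/search")
def new_serp():
    # Step 2: anything carrying parameters beyond the whitelisted facets gets a
    # meta robots noindex, so Google can drop it once it recrawls.
    unknown = set(request.args) - INDEXABLE_PARAMS
    robots = "noindex, follow" if unknown else "index, follow"
    return render_template_string(
        '<html><head><meta name="robots" content="{{ robots }}"></head>'
        "<body>Results for {{ q }}</body></html>",
        robots=robots,
        q=request.args.get("q", ""),
    )
```

The noindex here deliberately keeps "follow", so any link value reaching the redirected pages can still pass on to linked product pages; whether Google both drops the old URL and keeps the noindexed target out of the index is exactly question 1 above.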
Thanks a lot, Tom. Time will tell... Just one last thing: what damage are you (and Google) thinking of when advising against removing URLs on a large scale through GWMT? Personally, I think Google says so only because they want to keep as much information as possible in their index.
 Thanks for the PM, I can now appreciate the problem a little more. I think it's something that you should not rush. What you've done seems the best thing you can do for now. Longer term, I'd look at your CMS options! 
Yes, I have put a conditional meta robots "noindex" on all pages whose URL contains more than 2 GET parameters. It is also present on URLs containing parameters of little or no SEO value (e.g. the "price" parameter). Regarding the nofollow directive, my plan is not to put it in the head but on the individual links pointing to URLs that should not be indexed. If we happen to get a backlink to one of these noindexed pages, I want the link value to be passed on to the listed product pages. My big worry is: what should I do if this de-indexation process takes forever...
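As a rough sketch of that kind of conditional directive, assuming plain Python, the more-than-2-parameters threshold and the "price" example above (the helper names and anything in the low-value set beyond "price" are made up):

```python
# Sketch only: per-URL decision for the conditional noindex tag and for putting
# rel="nofollow" on individual internal links (not in the head). Helper names
# and the LOW_VALUE_PARAMS set beyond "price" are assumptions.
from urllib.parse import parse_qs, urlparse

LOW_VALUE_PARAMS = {"price"}  # parameters with little or no SEO value
MAX_GET_PARAMS = 2            # more than this many GET parameters => noindex

def should_noindex(url: str) -> bool:
    params = parse_qs(urlparse(url).query)
    return len(params) > MAX_GET_PARAMS or bool(LOW_VALUE_PARAMS & params.keys())

def meta_robots_tag(url: str) -> str:
    # "follow" is kept so link value on a noindexed page can still flow on
    # to the listed product pages.
    content = "noindex, follow" if should_noindex(url) else "index, follow"
    return f'<meta name="robots" content="{content}">'

def internal_link(url: str, anchor_text: str) -> str:
    # Planned step 2: nofollow on the links themselves, to discourage crawling
    # of the unwanted result pages once the index has shrunk.
    rel = ' rel="nofollow"' if should_noindex(url) else ""
    return f'<a href="{url}"{rel}>{anchor_text}</a>'
```

With this logic, /search?q=bmw&category=cars&color=blue&price=asc would be noindexed and its internal links nofollowed, while /search?q=bmw would stay indexable.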
If you could put a conditional meta tag into the source code that shows the nofollow tag when the URL contains more than 3 GET parameters, then that might help? You seem to have already thought hard about your options, and they sound OK. Let's just wait and see whether any Gurus are about to shout stop!
Thanks for answering that quickly, Tom! We cannot disallow all of these URLs in robots.txt; we get quite a lot of organic traffic to them. In July, organic traffic landing on results pages gave us approximately $85,000 in revenue. Also good to know: pages resulting from searching and from browsing share the same URL structure; the search phrase is treated as just another filtering parameter in the URL. Keeping the same URL structure is part of my preferred, two-step solution:

- Meta robots "noindex" the unwanted results pages (the overwhelming majority).
- When our Google index has shrunk enough, put rel="nofollow" on the internal links pointing to those results pages in order to prevent bots from crawling them.

I have actually implemented step 1 (as of yesterday). The solution I described in my original post is my last-resort option; I wanted to get a professional opinion on it in order to know whether I should rule it out or not. Unfortunately, I cannot disclose our company name here (I have a feeling our competitors use SEOmoz as well :)). But I'll send you some links in a private message.
If I were you, I'd keep the same URL structure. You're correct in thinking this won't be a quick fix. First, use robots.txt to disallow robots access to the search pages. Don't remove all the results from GWT just yet; that would be a long task and might damage your site's performance. Could you provide some links to your site? I'll have a closer look.
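As a rough illustration of that first suggestion, here is a minimal check of a draft robots.txt against the example URLs from the question; the Disallow paths and the product URL are assumptions and would need to match the real site. (As noted further up the thread, the search pages were ultimately kept crawlable because they still earn organic revenue, so this only illustrates the mechanics.)

```python
# Sketch only: sanity-check that a draft robots.txt blocks the old and new
# search URLs (paths taken from the examples in the question) while leaving a
# hypothetical product URL crawlable.
from urllib.robotparser import RobotFileParser

ROBOTS_TXT = """\
User-agent: *
Disallow: /cgi-bin/
Disallow: /search
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

tests = {
    "http://www.site.com/cgi-bin/weirdapplicationname.cgi?word=bmw&what=1.2&how=2": False,
    "http://www.site.com/search?q=bmw&category=cars&color=blue": False,
    "http://www.site.com/products/bmw-320d": True,  # hypothetical product page
}
for url, expected_to_be_allowed in tests.items():
    allowed = parser.can_fetch("Googlebot", url)
    status = "OK  " if allowed == expected_to_be_allowed else "FAIL"
    print(f"{status} allowed={allowed} {url}")
```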
Related Questions
- Getting high priority issue for our xxx.com and xxx.com/home as duplicate pages and duplicate page titles; can't seem to find anything that needs to be corrected. What might I be missing?
  I am getting a high priority issue for our xxx.com and xxx.com/home, reporting both duplicate pages and duplicate page titles in the crawl results. I can't seem to find anything that needs to be corrected; what am I missing? Has anyone else had a similar issue, and how was it corrected? Technical SEO | | tgwebmaster0
- No index on subdomains
  Hi, we have a subdomain that is appearing in the search results, and I want to hide it as it looks really bad. If I were to add the noindex tag to the subdomain's URLs, would this affect the whole domain or just that subdomain? The main domain is vitally important; it is just that subdomain I need to hide. Many thanks. Technical SEO | | Creditsafe0
- How to block text on a page to be indexed?
  I would like to block the spider from indexing a block of text inside a page; however, I do not want to block the whole page with, for example, a noindex tag. I have already tried with a tag like this: chocolate pudding chocolate pudding However, this is not working for my case, a travel-related website. Thanks in advance for your support. Best regards, Gianluca. Technical SEO | | CharmingGuy0
- How to Stop Google from Indexing Old Pages
  We moved from a .php site to a Java site on April 10th. It's almost 2 months later and Google continues to crawl old pages that no longer exist (225,430 Not Found errors, to be exact). These pages no longer exist on the site and there are no internal or external links pointing to them. Google has crawled the site since the go-live, but continues to try and crawl these pages. What are my next steps? Technical SEO | | rhoadesjohn0
- How Does Google's "index" find the location of pages in the "page directory" to return?
  This is my understanding of how Google's search works, and I am unsure about one thing in particular:
  - Google continuously crawls websites and stores each page it finds (let's call it the "page directory").
  - Google's "page directory" is a cache, so it isn't the "live" version of the page.
  - Google has separate storage called "the index", which contains all the keywords searched. These keywords in "the index" point to the pages in the "page directory" that contain the same keywords.
  - When someone searches a keyword, that keyword is accessed in the "index" and returns all relevant pages in the "page directory".
  - These returned pages are given ranks based on the algorithm.
  The one part I'm unsure of is how Google's "index" knows the location of relevant pages in the "page directory". The keyword entries in the "index" point to the "page directory" somehow. I'm thinking each page has a URL in the "page directory", and the entries in the "index" contain these URLs. Since Google's "page directory" is a cache, would the URLs be the same as on the live website (and would the keywords in the "index" point to these URLs)? For example, if a webpage is found at www.website.com/page1, would the "page directory" store this page under that URL in Google's cache? The reason I want to discuss this is to understand the effects of changing a page's URL by understanding how the search process works better. Technical SEO | | reidsteven750
- Pages removed from Google index?
  Hi all, I had around 2,300 pages in the Google index until a week ago. The index removed a load and left me with 152 submitted, 152 indexed. I have just re-submitted my sitemap and will wait to see what happens. Any idea why it has done this? I have seen a drop in my rankings since. Thanks. Technical SEO | | TomLondon0
- What is the best way to find missing alt tags on my site (site-wide, not page by page)?
  I am looking to find all the missing alt tags on my site at once. I have a Firefox extension that used to do it page by page, but my site is huge and that would take forever. Thanks! Technical SEO | | franchisesolutions1
- Dynamically-generated .PDF files, instead of normal pages, indexed by and ranking in Google
  Hi, I have come across a tough problem. I am working on an online-store website which contains the functionality of viewing product details in .PDF format (by the way, the website is built on the Joomla CMS). When I search my site's name in Google, the SERP simply displays my .PDF files in the first couple of positions (shown in the normal [PDF] result format), and I cannot find the normal pages on SERP #1 unless I search the full site domain in Google. I really don't want this! Would you please tell me how to figure the problem out and solve it? I can actually remove the corresponding component (VirtueMart) that is in charge of generating the .PDF files. For now I am trying to redirect all the .PDF pages ranking in Google to a 404 page and remove the functionality. I plan to regenerate a sitemap of my site and submit it to Google; will that work for me? I would really appreciate it if you could help solve this problem. Thanks very much. Sincerely, SEOmoz Pro Member. Technical SEO | | fugu0