Moz Q&A is closed.
After more than 13 years, and tens of thousands of questions, Moz Q&A closed on 12th December 2024. Whilst we're not completely removing the content - many posts will still be viewable - we have locked both new posts and new replies. More details here.
De-indexing millions of pages - would this work?
-
Hi all,
We run an e-commerce site with a catalogue of around 5 million products.
Unfortunately, we have let Googlebot crawl and index tens of millions of search URLs, the majority of which are very thin on content or duplicates of other URLs. In short: we are in deep. Our bloated Google index is preventing our real content from ranking; Googlebot does not bother crawling our real content (product pages specifically) and hammers the life out of our servers.
Since having Googlebot crawl and de-index tens of millions of old URLs would probably take years (?), my plan is this:
- 301 redirect all old SERP URLs to a new SERP URL.
- If new URL should not be indexed, add meta robots noindex tag on new URL.
- When it is evident that Google has indexed most "high quality" new URLs, robots.txt disallow crawling of the old SERP URLs. Then remove all old SERP URLs directory-style in the GWT URL Removal Tool.
- This would be an example of an old URL: www.site.com/cgi-bin/weirdapplicationname.cgi?word=bmw&what=1.2&how=2
- This would be an example of a new URL: www.site.com/search?q=bmw&category=cars&color=blue
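Purely to illustrate points 1 and 2 (our real application is the CGI script above; the Flask layer, route names, and old-to-new parameter mapping here are all made up), a rough sketch of the redirect plus conditional noindex:

```python
# Minimal sketch, not our real stack: a hypothetical Flask layer that
# implements points 1 and 2. The parameter mapping is illustrative only.
from urllib.parse import urlencode

from flask import Flask, redirect, render_template_string, request

app = Flask(__name__)

@app.route("/cgi-bin/weirdapplicationname.cgi")
def old_serp():
    # Point 1: 301 redirect the old SERP URL to its new equivalent.
    new_args = {"q": request.args.get("word", "")}
    return redirect("/search?" + urlencode(new_args), code=301)

@app.route("/search")
def new_serp():
    # Point 2: noindex the new URL when it should not be indexed
    # (here: when it carries more than two GET parameters).
    meta = ""
    if len(request.args) > 2:
        meta = '<meta name="robots" content="noindex">'
    return render_template_string(
        "<html><head>{{ meta|safe }}</head><body>results</body></html>",
        meta=meta,
    )
```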
I have two specific questions:
- Would Google both de-index the old URL and not index the new URL after 301 redirecting the old URL to the new, noindexed URL, as described in point 2 above?
- What risks are associated with removing tens of millions of URLs directory-style in the GWT URL Removal Tool? I have done this before, but then I removed "only" some 50,000 useless "add to cart" URLs. Google itself says that you should not remove duplicate/thin content this way and that using the tool this way "may cause problems for your site".
And yes, these tens of millions of SERP URLs are the result of a faceted navigation/search function let loose for far too long.
And no, we cannot wait for Googlebot to crawl all these millions of URLs in order to discover the 301. By then we would be out of business.
Best regards,
TalkInThePark
-
Thanks a lot, Tom. Time will tell...
Just one last thing:
What damage are you (and Google) thinking of when advising against removing URLs on a large scale through GWMT?
Personally, I think Google says so only because they want to keep as much information as possible in their index.
-
Thanks for the PM, I can now appreciate the problem a little more.
I think it's something that you should not rush. What you've done seems like the best thing you can do for now.
Longer term, I'd look at your CMS options!
-
Yes, I have put a conditional meta robots "noindex" on all pages whose URL contains more than 2 GET elements. It is also present on URLs containing parameters of little or no SEO value (e.g. the "price" parameter).
Regarding the nofollow directive, my plan is to not put it in the head but on the individual links pointing to URLs that should not be indexed. If we happen to get a backlink to one of these noindexed pages, I want the link value to get passed on to listed product pages.
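To spell the logic out, here is a rough sketch in Python - the helper names are made up, and "price" stands in for our list of low-value parameters:

```python
# Sketch of the noindex/nofollow decision logic; helper names are hypothetical.
from urllib.parse import parse_qs, urlparse

LOW_VALUE_PARAMS = {"price"}  # parameters of little or no SEO value

def should_noindex(url: str) -> bool:
    # Noindex when the URL has more than 2 GET elements, or carries
    # a parameter with little or no SEO value.
    params = parse_qs(urlparse(url).query)
    return len(params) > 2 or any(p in LOW_VALUE_PARAMS for p in params)

def render_internal_link(url: str, anchor_text: str) -> str:
    # nofollow goes on the individual internal link, not in the target
    # page's <head>, so a noindexed page that happens to earn a backlink
    # can still pass its link value on through its own followed links.
    rel = ' rel="nofollow"' if should_noindex(url) else ""
    return f'<a href="{url}"{rel}>{anchor_text}</a>'
```

With that, a link to /search?q=bmw&category=cars&price=low would come out nofollowed, while a plain two-parameter search would not.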
My big worry is: what should I do if this de-indexation process takes forever...
-
If you could put a conditional meta tag into the source code that shows the nofollow tag when the URL contains more than 3 GET elements, then that might help?
You seem to have already thought hard about your options, and they sound ok. Let's just wait to see whether any Gurus are about to shout stop!
-
Thanks for answering that quickly, Tom!
We cannot robots.txt disallow all URLs. We get quite a lot of organic traffic to these URLs. In July, organic traffic landing on results pages gave us approximately $85,000 in revenue. It is also good to know that pages resulting from searching and browsing share the same URL - the search phrase is treated as just another filtering parameter in the URL.
Keeping the same URL structure is part of my preferred, 2-step solution:
- Meta Robots "noindex" unwanted results pages (the overwhelming majority)
- When our Google index has shrunken enough, put rel=nofollow on internal links pointing to those results pages in order to prevent bots from crawling them.
I have actually implemented step 1 (as of yesterday). The solution I was describing in my original post is my last resort solution. I wanted to get a professional opinion on that one in order to know if I should rule it out or not.
Unfortunately, I cannot disclose our company name here (I have a feeling our competitors use Seomoz as well :)). But I'll send you some links in a private message.
-
If I were you I'd keep the same URL structure. You're correct in thinking this won't be a quick fix.
First, use the robots.txt to disallow robots access to the search pages.
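For example (the path is taken from your old URL above; adjust it to your real structure), something like this in robots.txt would block the old search pages - just bear in mind that once a path is disallowed, Googlebot can no longer see any meta noindex tags on those pages:

```
User-agent: *
Disallow: /cgi-bin/
```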
Don't remove all results just yet from GWT; this will be a long task and might damage your site's performance.
Could you provide some links to your site? I'll have a closer look.