Why are pages still showing in SERPs, despite being NOINDEXed for months?
We have thousands of pages that we've been trying to get de-indexed from Google for months now. They've all got <meta name="robots" content="none">, but they simply will not go away in the SERPs. Here is just one example: http://bitly.com/VutCFi. If you search this URL in Google, you will see that it is indexed, yet it's had the tag for many months. This is just one example of thousands of pages that will not get de-indexed. Am I missing something here? Does it have to do with using content="none" instead of content="noindex, follow"? Any help is very much appreciated.
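Before anything else, it's worth confirming what directive the live pages actually serve. Here is a minimal Python sketch (assuming the tag sits in the static HTML rather than being injected by JavaScript; the URL is the example from the question, and urlopen will follow the bit.ly redirect):

```python
from html.parser import HTMLParser
from urllib.request import urlopen


class RobotsMetaParser(HTMLParser):
    """Collects the content of every <meta name="robots" ...> tag on a page."""

    def __init__(self):
        super().__init__()
        self.directives = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and attrs.get("name", "").lower() == "robots":
            self.directives.append(attrs.get("content", ""))


url = "http://bitly.com/VutCFi"  # example URL from the question
parser = RobotsMetaParser()
parser.feed(urlopen(url).read().decode("utf-8", errors="replace"))
print(parser.directives)  # e.g. ['none'] or ['noindex, follow']
```

If this prints an empty list, the tag isn't actually being served to crawlers, which would explain the pages staying indexed.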
Thanks for your reply. Let me know if you are able to de-index those pages; I will wait. Also, please share what you implemented to de-index them.
A page can have a link to it and still not be indexed, so I disagree with you on that. But thanks for using the domain name. That will teach me to use a URL shortener...
Hm, that is interesting. So you're saying that it will get crawled, and thus will eventually become de-indexed (as noindex is part of the content="none" directive), but since it's a dead-end page, it just takes an extra long time for that particular page to get crawled?
Just to add to the other answers: you can also remove the URLs (or the entire directory if necessary) via the URL removal tool in Webmaster Tools, although Google prefers you to save it for emergencies of sorts (I've had no problems with it). http://support.google.com/webmasters/bin/answer.py?hl=en&answer=164734
No, nofollow only tells the bot that the page is a dead end, i.e. that it should not follow any links on the page. That means any links on those pages won't be visited by the bot, which slows the overall crawling process for those pages. Blocking a page in robots.txt is different: if the page is already in the index, it will remain in the index, because the noindex (or content="none") will never be seen by the bot. The page won't be removed from the index; it just won't be visited anymore.
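To illustrate the point: if robots.txt blocks the page, the crawler never fetches it, so it never sees the noindex at all. A quick sketch using Python's standard robotparser (both URLs are placeholders, not the actual site):

```python
from urllib.robotparser import RobotFileParser

rp = RobotFileParser("http://www.example.com/robots.txt")  # placeholder
rp.read()

page = "http://www.example.com/some-page.html"  # placeholder
if rp.can_fetch("Googlebot", page):
    print("Crawlable: Googlebot can reach the page and see its meta robots tag.")
else:
    print("Blocked: the noindex will never be seen; remove the robots.txt rule first.")
```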
Ok, so nofollow is stopping the page from being read at all? I thought that nofollow just means the links on the page will not be followed. Is meta nofollow essentially the same as blocking a page in robots.txt?
Hi Howard, the page is in Google's index because you are still linking to it from your website. Here is the page that links to it: http://www.2mcctv.com/product_print-productinfoVeiluxVS70CDNRDhtml.html. Because that link is still there, Google keeps the page indexed: it had already indexed the page before it came to know about the "noindex" tag. Lindsay has written an awesome post about this here: http://www.seomoz.org/blog/robot-access-indexation-restriction-techniques-avoiding-conflicts. After reading that post, all my doubts about noindex, follow, and robots.txt were cleared up. Thanks Lindsay
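A quick, hypothetical way to hunt down such internal links is to parse the linking page and list every anchor that points at the URL you want de-indexed (both the page URL and the target fragment below are placeholders):

```python
from html.parser import HTMLParser
from urllib.request import urlopen


class LinkFinder(HTMLParser):
    """Collects every <a href> on a page whose href contains the target string."""

    def __init__(self, target):
        super().__init__()
        self.target = target
        self.hits = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href", "")
            if self.target in href:
                self.hits.append(href)


linking_page = "http://www.example.com/print-page.html"  # placeholder URL
finder = LinkFinder("some-page.html")                    # placeholder target
finder.feed(urlopen(linking_page).read().decode("utf-8", errors="replace"))
print(finder.hits)  # every link you'd need to remove
```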
We always use the noindex directive in our robots.txt file.
Hi, in order to de-index, you should use noindex, because content="none" also means nofollow. You need follow for now, so the bot can reach all the other pages, see the noindex tag, and drop them from the index. Once they are all out of the index, you can set none back on. This is the main reason the "none" value is not in wide use: it's easy to shoot yourself in the foot with it. On the other hand, you need to check whether Googlebot is actually reaching those pages:
- check first that you don't have any robots.txt restrictions;
- check when Google's bot last hit any of the pages - that will give you a good idea, and you can make a prediction.
If those pages are in the supplemental index, you may need to wait some time for Googlebot to revisit. One last note: build XML sitemaps with all of those pages and submit them via WMT (a sketch follows below) - that will definitely help get them in front of the firing squad, and also let you monitor them better. Hope it helps.
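As a rough sketch of that last tip, a throwaway script can generate the sitemap to submit in Webmaster Tools (the file name and URL list are assumptions; only the sitemap namespace is fixed by the protocol):

```python
from xml.etree.ElementTree import Element, SubElement, ElementTree

urls_to_recrawl = [  # placeholder list of the noindexed URLs
    "http://www.example.com/page-1.html",
    "http://www.example.com/page-2.html",
]

urlset = Element("urlset", xmlns="http://www.sitemaps.org/schemas/sitemap/0.9")
for url in urls_to_recrawl:
    loc = SubElement(SubElement(urlset, "url"), "loc")  # <url><loc>...</loc></url>
    loc.text = url

# Writes a standalone sitemap file ready to submit via WMT.
ElementTree(urlset).write("deindex-sitemap.xml", encoding="utf-8",
                          xml_declaration=True)
```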