Moz Q&A is closed.
After more than 13 years, and tens of thousands of questions, Moz Q&A closed on 12th December 2024. Whilst we’re not completely removing the content - many posts will still be possible to view - we have locked both new posts and new replies. More details here.
Why are pages still showing in SERPs, despite being NOINDEXed for months?
-  We have thousands of pages we've been trying to get de-indexed from Google for months now. They've all got the `content="none"` robots meta tag. But they simply will not go away in the SERPs. Here is just one example: http://bitly.com/VutCFi If you search this URL in Google, you will see that it is indexed, yet it has had the tag for many months. This is just one example of thousands of pages that will not get de-indexed. Am I missing something here? Does it have to do with using content="none" instead of content="noindex, follow"? Any help is very much appreciated.
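For reference, the two robots meta directives being compared in the question look like this (content="none" is shorthand for noindex plus nofollow):

```html
<!-- Blocks indexing AND tells the bot not to follow any links on the page -->
<meta name="robots" content="none">

<!-- Blocks indexing but lets the bot keep following links on the page -->
<meta name="robots" content="noindex, follow">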
-  Thanks for your reply. Let me know if you are able to de-index those pages; I will wait. Also, please share what you implemented to de-index them.
-  A page can have a link to it and still not be indexed, so I disagree with you on that. But thanks for using the domain name. That will teach me to use a URL shortener...
-  Hm, that is interesting. So you're saying that it will get crawled, and thus will eventually become de-indexed (since noindex is part of the content="none" directive), but because it's a dead-end page, it just takes an extra long time for that particular page to get crawled?
-  Just to add to the other answers: you can also remove the URLs (or an entire directory, if necessary) via the URL removal tool in Webmaster Tools, although Google prefers you to reserve it for emergencies of sorts (I've had no problems with it). http://support.google.com/webmasters/bin/answer.py?hl=en&answer=164734
-  No, nofollow only tells the bot that the page is a dead end - that it should not follow any links on the page. That means any links from those pages won't be visited by the bot, which slows the overall crawling process for those pages. If you block a page in robots.txt and the page is already in the index, it will remain in the index: the noindex (or content="none") directive will never be seen by the bot, so the page won't be removed from the index - it just won't be visited anymore.
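The conflict described above can be sketched with a hypothetical robots.txt. If the pages are disallowed there, the bot never fetches them, never sees the meta tag, and the noindex can't take effect:

```
# robots.txt -- hypothetical example; this blocks CRAWLING, it does not de-index.
User-agent: *
Disallow: /products/

# Any <meta name="robots" content="noindex"> on a /products/... page will
# never be read, because the crawler is not allowed to fetch those URLs,
# so already-indexed pages under /products/ stay in the index.
```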
-  OK, so nofollow stops the page from being read at all? I thought nofollow just means the links on the page will not be followed. Is meta nofollow essentially the same as blocking a page in robots.txt?
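The distinction being asked about can be made concrete with a small, hypothetical helper that interprets a robots meta `content` value. Note that nofollow only affects link discovery from the page; it is not a crawl block the way robots.txt is:

```python
# Hypothetical helper: interpret the value of <meta name="robots" content="...">.
# "none" is shorthand for "noindex, nofollow"; neither directive blocks
# crawling of the page itself -- only robots.txt does that.
def parse_robots_meta(content):
    directives = {d.strip().lower() for d in content.split(",")}
    if "none" in directives:
        directives |= {"noindex", "nofollow"}
    return {
        "indexable": "noindex" not in directives,
        "links_followed": "nofollow" not in directives,
        "crawl_blocked": False,  # meta tags never block fetching the page
    }

print(parse_robots_meta("none"))
print(parse_robots_meta("noindex, follow"))
```

So with content="none" the page can still be fetched and de-indexed, but its links stop contributing to crawling, which is the slowdown described in the replies above.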
-  Hi Howard, The page is in Google's index because you are still linking to it from your website. Here is the page it is linked from: http://www.2mcctv.com/product_print-productinfoVeiluxVS70CDNRDhtml.html Because you link to the page, Google keeps indexing it - Google had already indexed the page before it came to know about the "noindex" tag. Lindsay has written an awesome post about this here: http://www.seomoz.org/blog/robot-access-indexation-restriction-techniques-avoiding-conflicts After reading that post, all my doubts about noindex, follow, and robots.txt were cleared up. Thanks Lindsay
-  We always use the noindex code in our robots.txt file.
-  Hi, In order to de-index, you should use noindex alone, since content="none" also means nofollow. You need "follow" for now so the bot can reach all the other pages, see the noindex tag, and drop them from the index. Once they are all out of the index, you can set "none" back on. This is the main reason the "none" attribute is not very widely used - it's easy to shoot yourself in the foot with it. On the other hand, you need to check whether Googlebot is actually reaching those pages:
- see that you don't have any robots.txt restrictions first;
- see when Google's bot last hit any of the pages - that will give you a good idea, and you can make a prediction.
 If those pages are in the supplemental index, you may have to wait some time for Googlebot to revisit. One last note: build XML sitemaps with all of those pages and submit them via WMT - that will definitely help get them in front of the firing squad, and also let you monitor them better. Hope it helps.
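The sitemap suggestion above can be sketched as a minimal XML file (the URLs are hypothetical) listing the pages you want re-crawled, so the bot sees the noindex sooner:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <!-- List every page you want Google to re-crawl and drop from the index -->
  <url><loc>http://www.example.com/old-page-1.html</loc></url>
  <url><loc>http://www.example.com/old-page-2.html</loc></url>
</urlset>
```

Submitting this in Webmaster Tools also lets you watch the indexed-URL count fall as the pages drop out.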