How to block "print" pages from indexing
- I have a fairly large FAQ section, and every article has a "print" button. Unfortunately, this is creating a page for every article, which is muddying up the index - especially on my own site using Google Custom Search. Can you recommend a way to block this from happening? Example Article: Example "Print" page: http://www.knottyboy.com/lore/article.php?id=052&action=print
- Donnie, I agree. However, we had the same problem on a website, and here's what we did with the canonical tag: over a period of 3-4 weeks, all those print pages disappeared from the SERPs. Now if I take a print URL and do a cache: search for that page, it shows me the web version of that page. So yes, I agree the question was about blocking the pages from getting indexed. There's no single recipe here; it's about picking the right solution. Before the canonical tag, robots.txt was the only solution. But now with canonical available (provided one has the time and resources to implement it vs. adding one line of text to robots.txt), you can effectively 301 the pages without having to stop or restrict the spiders from crawling them. Absolutely no offence to your solution in any way - both are workable solutions. The best part is that your robots.txt solution takes 30 seconds to implement, since you provided the actual disallow code :), so it's better.
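For reference, the tag on each print page would look something like this (a sketch only - it assumes the web version of the article lives at the same article.php URL minus the action parameter; substitute whatever URL your CMS actually uses for the article):

    <link rel="canonical" href="http://www.knottyboy.com/lore/article.php?id=052" />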
- Thanks Jennifer, will do! So much good information.
- Sorry, but I have to jump in - do NOT use all of those signals simultaneously. You'll make a mess, and they'll interfere with each other. You can try robots.txt or NOINDEX on the page level - my experience suggests NOINDEX is much more effective. Also, do not nofollow the links yet - you'll block the crawl, and then the page-level cues (like NOINDEX) won't work. You can nofollow later. This is a common mistake, and it will keep your fixes from working.
- Josh, please read my and Dr. Pete's comments below. Don't nofollow the links, but do use the meta noindex,follow on the page.
- Rel-canonical, in practice, does essentially de-index the non-canonical version. Technically, it's not a de-indexation method, but it works that way.
- You are right, Donnie. I've "good answered" you too. I've gone ahead and updated my robots.txt file. As soon as I am able, I will use noindex on the page, nofollow on the links, and rel=canonical. This is just what I needed - a quick fix until I can make a more permanent solution.
- You're welcome :)
- Although you are correct... there is still more than one way to skin a chicken.
- But the spiders will still crawl the page and read the canonical link, whereas with the robots.txt rule the spiders won't crawl it at all.
- Yes, but rel=canonical does not block a page; it only tells Google which of two pages to prefer. The question was how to block, not how to tell Google which link to follow. I believe you gave credit to the wrong answer. http://en.wikipedia.org/wiki/Canonical_link_element This is not fair. lol
- I have to agree with Jen - robots.txt isn't great for getting indexed pages out. It's good for prevention, but tends to be unreliable as a cure. META NOINDEX is probably more reliable. One trick - DON'T nofollow the print links, at least not yet. You need Google to crawl and read the NOINDEX tags. Once the ?print pages are de-indexed, you could nofollow the links, too.
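To make the sequencing concrete, here is a minimal sketch of the two phases (the meta tag is the standard robots meta tag; the link markup simply assumes the print links follow the URL pattern from the question):

    <!-- Phase 1: in the <head> of every ?print page, while Google
         crawls them and drops them from the index -->
    <meta name="robots" content="noindex, follow">

    <!-- Phase 2: only after the print pages are de-indexed,
         optionally nofollow the links pointing to them -->
    <a href="article.php?id=052&action=print" rel="nofollow">Print</a>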
- Yes, it's strongly recommended. It should be fairly simple to populate this tag with the "full" URL of the article based on the article ID. This approach will not only get rid of the duplicate content issue; a canonical tag essentially works like a 301 redirect. So from a search engine's perspective you are 301'ing your print pages to the real web URLs, without actually redirecting the users who are browsing the print pages if they need them.
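A rough sketch of what that could look like in PHP (hypothetical code - get_article_url() is a placeholder for however Lore builds an article's real web URL from its ID; the actual codebase will differ):

    <?php
    // Hypothetical sketch: on print pages, emit a canonical tag
    // pointing back to the full web version of the same article.
    if (isset($_GET['action']) && $_GET['action'] === 'print') {
        $articleId = $_GET['id'];
        // Placeholder helper: returns the article's real web URL,
        // e.g. its /lore/idx.php/... address.
        $canonicalUrl = get_article_url($articleId);
        echo '<link rel="canonical" href="'
            . htmlspecialchars($canonicalUrl, ENT_QUOTES)
            . '" />';
    }
    ?>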
- Ya, it is actually really useful. Unfortunately they are out of business now - so I'm hacking it on my own. I will take your advice. I've shamefully never used rel=canonical before - so now is a good time to start.
- True, but using robots.txt does not keep them out of the index. Only using "noindex" will do that.
- Thanks Donnie. Much appreciated!
- I actually remember Lore from a while ago. It's an interesting, easy-to-use FAQ CMS. Anyway, I would also recommend implementing canonical tags for any possible duplicate content issues. So whether it's the print or the web version, each one of them will contain a canonical tag pointing to the web URL of that article in the <head> section of your pages:

    <link rel="canonical" href="http://www.knottyboy.com/lore/idx.php/11/183/Maintenance-of-Mature-Locks-6-months-/article/How-do-I-get-sand-out-of-my-dreads.html" />
- Try this:

    User-agent: *
    Disallow: /*&action=print
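For what it's worth, Googlebot supports the * wildcard in Disallow rules, so against the URL structure from the question this rule should behave roughly like this (a sanity check under that assumption, not output from any tool):

    # Blocked from crawling:
    #   /lore/article.php?id=052&action=print
    # Still crawlable:
    #   /lore/article.php?id=052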
- There's more than one way to skin a chicken.
- Rather than using robots.txt, I'd add a noindex,follow meta tag to the page instead. This code goes into the <head> tag of each print page, and it will ensure that the pages don't get indexed but that the links on them are still followed.
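The tag being described here is the standard robots meta tag; in each print page's <head> it would look like this:

    <meta name="robots" content="noindex, follow">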
- That would be great. Do you mind giving me an example?
- You can block, in robots.txt, every page that ends in action=print.