Block an entire subdomain with robots.txt?
- Is it possible to block an entire subdomain with robots.txt? I write for a blog that has its root domain as well as a subdomain pointing to the exact same IP. Getting rid of the subdomain is not an option, so I'd like to explore other ways to avoid duplicate content. Any ideas?
- Awesome! That did the trick -- thanks for your help. The site is no longer listed.
- Fact is, the robots file alone will never work (the link has a good explanation why - short form: it only stops the bots from crawling again; it doesn't remove pages that are already indexed). Best to request removal, then wait a few days.
- Yeah. As of yet, the site has not been de-indexed. We placed the conditional rule in htaccess and are getting different robots.txt files for the domain and subdomain -- so that works. But I've never done this before, so I don't know how long it's supposed to take. I'll try to verify via Webmaster Tools to speed up the process. Thanks.
- You should submit a removal request in Google Webmaster Tools. You have to verify the sub-domain first, then request the removal. See this post on why the robots file alone won't work: http://www.seomoz.org/blog/robot-access-indexation-restriction-techniques-avoiding-conflicts
- Awesome. We used your second idea and so far it looks like it is working exactly how we want. Thanks for the idea. Will report back to confirm that the subdomain has been de-indexed.
- Option 1 could come with a small performance hit if you have a lot of .txt files being served on the server. There shouldn't be any negative side effects to option 2 if the rewrite is clean (i.e. not accidentally a redirect) and the content of the two files is robots-compliant. Good luck.
- Thanks for the suggestion. I'll definitely have to do a bit more research into this one to make sure that it doesn't have any negative side effects before implementation.
- We have a plugin right now that places canonical tags, but unfortunately, the canonical for the subdomain points to the subdomain. I'll look around to see if I can tweak the settings.
- Sounds like (from other discussions) you may be stuck requiring a dynamic robots.txt file which detects what domain the bot is on and changes the content accordingly. This means the server has to run all .txt files as (I presume) PHP. Or, you could conditionally rewrite the /robots.txt URL to a new file according to the sub-domain:

RewriteEngine on
RewriteCond %{HTTP_HOST} ^subdomain\.website\.com$ [NC]
RewriteRule ^robots\.txt$ robots-subdomain.txt [L]

Then add:

User-agent: *
Disallow: /

to the robots-subdomain.txt file (untested).
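To illustrate the dynamic option mentioned above, here is a minimal PHP sketch, assuming the server rewrites /robots.txt to the script (the file and host names are placeholders, and this is as untested as the rewrite above):

<?php
// robots.php - serve different robots rules depending on which host the bot hit
// (wire it up with e.g. RewriteRule ^robots\.txt$ robots.php [L])
header('Content-Type: text/plain');

if ($_SERVER['HTTP_HOST'] === 'subdomain.website.com') {
    // Duplicate subdomain: keep all bots out
    echo "User-agent: *\n";
    echo "Disallow: /\n";
} else {
    // Root domain: allow normal crawling
    echo "User-agent: *\n";
    echo "Disallow:\n";
}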
- Placing canonical tags isn't an option? Detect that the page is being viewed through the subdomain, and if so, write the canonical tag on the page back to the root domain. Or, just place a canonical tag on every page pointing back to the root domain (so the subdomain and root domain pages would both have them). Apparently, it's OK to have a canonical tag on a page pointing to itself. I haven't tried this, but if Matt Cutts says it's OK...
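As a rough sketch of the first approach (detect the host, write the canonical back to the root domain), assuming a PHP template and a placeholder root domain:

<?php
// Emit a canonical tag that always points at the root domain,
// whichever host (root or subdomain) served the page.
$path = strtok($_SERVER['REQUEST_URI'], '?'); // strip any query string
echo '<link rel="canonical" href="http://www.website.com'
    . htmlspecialchars($path) . '" />';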
- Hey Ryan, I wasn't directly involved with the decision to create the subdomain, but I'm told that it was necessary in order to bypass certain elements that were affecting the root domain. Nevertheless, it is a blog, and the users now need to log in to the subdomain in order to access the WordPress backend to bypass those elements. Traffic for the site still goes to the root domain.
- They both point to the same location on the server? So there's not a different folder for the subdomain? If that's the case, then I suggest adding a rule to your htaccess file to 301 the subdomain back to the main domain, in exactly the same way people redirect from non-www to www or vice-versa. However, you should ask why the server is configured to have a duplicate subdomain. You might just edit your Apache settings to get rid of that subdomain (usually done through a cPanel interface). Here is what your htaccess might look like:

<IfModule mod_rewrite.c>
RewriteEngine on
# Redirect non-www to www
RewriteCond %{HTTP_HOST} !^www\.mydomain\.org [NC]
RewriteRule ^(.*)$ http://www.mydomain.org/$1 [R=301,L]
</IfModule>
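And a sketch of the subdomain-specific version of the same redirect (host names are placeholders, untested):

<IfModule mod_rewrite.c>
RewriteEngine on
# 301 the duplicate subdomain back to the main domain
RewriteCond %{HTTP_HOST} ^subdomain\.mydomain\.org$ [NC]
RewriteRule ^(.*)$ http://www.mydomain.org/$1 [R=301,L]
</IfModule>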
- Not to me, LOL. I think you'll need someone with a bit more expertise in this area than I to assist in this case. Kyle, I'm sorry I couldn't offer more assistance... but I don't want to tell you something if I'm not 100% sure. I suspect one of the many bright SEOmozers will quickly come to the rescue on this one. Andy
- Hey Andy, Herein lies the problem: since the domain and subdomain point to the exact same place, they both utilize the same robots.txt file. Does that make sense?
- Hi Kyle, Yes, you can block an entire subdomain via robots.txt; however, you'll need to create a robots.txt file and place it in the root of the subdomain, then add the code to direct the bots to stay away from the entire subdomain's content:

User-agent: *
Disallow: /

Hope this helps.