Moz Q&A is closed.
After more than 13 years, and tens of thousands of questions, Moz Q&A closed on 12th December 2024. Whilst we’re not completely removing the content - many posts will still be possible to view - we have locked both new posts and new replies. More details here.
Robots.txt: how to exclude sub-directories correctly?
- Hello here, I am trying to figure out the correct way to tell search engines to crawl this:

  http://www.mysite.com/directory/

  but not this:

  http://www.mysite.com/directory/sub-directory/

  or this:

  http://www.mysite.com/directory/sub-directory2/sub-directory/...

  Because I have thousands of sub-directories with almost infinite combinations, I can't list them all in a manageable way:

  disallow: /directory/sub-directory/
  disallow: /directory/sub-directory2/
  disallow: /directory/sub-directory/sub-directory/
  disallow: /directory/sub-directory2/subdirectory/
  etc.

  I would end up with thousands of definitions to disallow all the possible sub-directory combinations. So, is the following a correct, better and shorter way to define what I want above?

  allow: /directory/$
  disallow: /directory/*

  Would the above work? Any thoughts are very welcome! Thank you in advance. Best, Fab.
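For what it's worth, the way Google documents robots.txt matching, the pair of rules above works because the most specific (longest) matching rule wins, and an allow rule wins a length tie. The sketch below is my own illustration of that matching logic (not Google's code; the function names and rule list are just examples):

```python
import re

def rule_matches(pattern, path):
    # Translate a robots.txt path pattern into a regex:
    # '*' matches any run of characters (including none),
    # a trailing '$' anchors the match to the end of the URL path.
    anchored = pattern.endswith("$")
    body = pattern[:-1] if anchored else pattern
    regex = ".*".join(re.escape(part) for part in body.split("*"))
    regex = "^" + regex + ("$" if anchored else "")
    return re.match(regex, path) is not None

def is_allowed(rules, path):
    # rules: list of (directive, pattern) pairs. The longest matching
    # pattern wins; 'allow' wins a length tie (Google's documented rule).
    best = ("allow", "")  # no matching rule -> allowed by default
    for directive, pattern in rules:
        if rule_matches(pattern, path) and len(pattern) >= len(best[1]):
            if len(pattern) > len(best[1]) or directive == "allow":
                best = (directive, pattern)
    return best[0] == "allow"

rules = [("allow", "/directory/$"), ("disallow", "/directory/*")]
print(is_allowed(rules, "/directory/"))                            # True
print(is_allowed(rules, "/directory/sub-directory/"))              # False
print(is_allowed(rules, "/directory/sub-directory2/sub-directory/"))  # False
```

Both patterns are 12 characters long, so for /directory/ itself they tie and allow wins; every deeper path matches only the disallow rule.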
- I mentioned both. You add a meta robots tag to noindex the page and remove it from the sitemap.
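For reference, the tag in question looks like this, placed in the page's head (the "follow" value is optional here and just makes the default explicit):

```html
<!-- Page stays crawlable, but is kept out of the index -->
<meta name="robots" content="noindex, follow">
```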
- But Google is still free to index a link/page even if it is not included in the XML sitemap.
- Install the Yoast WordPress SEO plugin and use it to restrict what is indexed and what is included in the sitemap.
- I am using WordPress with the Enfold theme (ThemeForest). I want some files to be accessible to Google, but they should not be indexed. Here is an example: http://prntscr.com/h8918o

  I have currently blocked some JS directories/files using robots.txt (check the screenshot), but due to this I am not able to pass Google's Mobile-Friendly Test: http://prntscr.com/h8925z (check the screenshot)

  Is it possible to allow access, but use a tag like noindex in the robots.txt file? Or is there any other way out?
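One thing worth noting: robots.txt controls crawling, not indexing, and Google does not support a noindex rule inside robots.txt. A common alternative (a sketch, assuming an Apache server with mod_headers enabled and that the blocked files are the JS/CSS assets shown in the screenshots) is to unblock those files in robots.txt so the Mobile-Friendly Test can fetch them, and send noindex as an HTTP response header instead:

```apache
# In .htaccess or the vhost config (requires mod_headers):
# let crawlers fetch JS/CSS, but keep those files out of the index.
<FilesMatch "\.(js|css)$">
  Header set X-Robots-Tag "noindex"
</FilesMatch>
```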
- Yes, everything looks good. Webmaster Tools gave me the expected results with the following directives:

  allow: /directory/$
  disallow: /directory/*

  which allow this URL:

  http://www.mysite.com/directory/

  but don't allow the following one:

  http://www.mysite.com/directory/sub-directory2/...

  This page also describes a case similar to mine: https://support.google.com/webmasters/answer/156449?hl=en

  I think I am good! Thanks
- Thank you Michael, it is my understanding then that my idea of doing this:

  allow: /directory/$
  disallow: /directory/*

  should work just fine. I will test it within Google Webmaster Tools and let you know if any problems arise. In the meantime, if anyone else has more ideas about all this and can confirm it, that would be great! Thank you again.
- I've always stuck to Disallow and followed this advice from http://www.robotstxt.org/robotstxt.html:

  "This is currently a bit awkward, as there is no "Allow" field. The easy way is to put all files to be disallowed into a separate directory, say "stuff", and leave the one file in the level above this directory."

  That said, https://developers.google.com/webmasters/control-crawl-index/docs/robots_txt seems contradictory; its matching table notes that /* is equivalent to / (the trailing wildcard is ignored).

  I think this post will be very useful for you: http://a-moz.groupbuyseo.org/community/q/allow-or-disallow-first-in-robots-txt
- Thank you Michael, Google and other SEs actually recognize the "allow:" directive: https://developers.google.com/webmasters/control-crawl-index/docs/robots_txt

  The fact is: if I don't specify it, how can I be sure that the following single directive:

  disallow: /directory/*

  doesn't prevent SEs from crawling the /directory/ index, as I'd like them to?
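This concern is well founded: a trailing * can match the empty string, so disallow: /directory/* on its own also matches /directory/ itself. A quick sketch (my own illustration, treating * as a wildcard the way Google documents it):

```python
import re

def to_regex(pattern):
    # Convert a robots.txt path pattern to a regex prefix match:
    # '*' matches any run of characters, including none.
    return "^" + ".*".join(re.escape(p) for p in pattern.split("*"))

# '/directory/*' matches '/directory/' itself ('*' matches nothing)...
print(bool(re.match(to_regex("/directory/*"), "/directory/")))      # True
# ...and every deeper path under it.
print(bool(re.match(to_regex("/directory/*"), "/directory/sub/")))  # True
```

Hence the allow: /directory/$ line is needed to carve the index page back out of the disallow.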
- As long as you don't have directories somewhere in /* that you want indexed, then I think that will work. There is no allow, so you don't need the first line, just:

  disallow: /directory/*

  You can test it out here: https://support.google.com/webmasters/answer/156449?rd=1
Related Questions
- URL Structure & Best Practice when Facing 4+ Sub-levels

  Hi. I've spent the last day fiddling with the setup of a new URL structure for a site, and I can't "pull the trigger" on it. Example: domain.com/games/type-of-game/provider-name/name-of-game/ — a specific example: arcade.com/games/pinball/deckerballs/starshooter2k/. The example is a good description of the content that I have to organize. The aim is to a) define the URL structure, b) facilitate good UX, c) create a good starting point for content marketing and SEO, avoiding keyword stuffing in URLs. The problem? Not all providers have the same type of game. Meaning that once I get past /type-of-game/, I must write a new category/page/content for /provider-name/. No matter how I switch the different sub-levels around in the URL, at some point the provider name doesn't fit, as it needs new content, multiple times. The solution? I can skip "provider-name". The caveat, though, is that I lose out on ranking for provider keywords, as I don't have a cornerstone content page for them. Question: using the URL structure as outlined above in WordPress, would you A) go with "Pages", or B) use "Posts"?
  Intermediate & Advanced SEO | Dan-Louis
- SEO Best Practices regarding Robots.txt disallow

  I cannot find hard and fast direction about the following issue: it looks like the robots.txt file on my server has been set up to disallow "account" and "search" pages within my site, so I am receiving warnings from the Google Search Console that URLs are being blocked by robots.txt (Disallow: /Account/ and Disallow: /?search=). Do you recommend unblocking these URLs? I'm getting a warning that over 18,000 URLs are blocked by robots.txt ("Sitemap contains urls which are blocked by robots.txt"). Seems that I wouldn't want that many URLs blocked? Thank you!!
  Intermediate & Advanced SEO | jamiegriz
- Should I use noindex or robots to remove pages from the Google index?

  I have a Magento site and just realized we have about 800 review pages indexed. The /review directory is disallowed in robots.txt but the pages are still indexed. From my understanding, robots means the pages will not be crawled, BUT the pages can still be indexed if they are linked from somewhere else. I can add the noindex tag to the review pages, but then they won't be crawled. https://www.seroundtable.com/google-do-not-use-noindex-in-robots-txt-20873.html Should I remove the robots.txt rule and add the noindex? Or just add the noindex to what I already have?
  Intermediate & Advanced SEO | Tylerj
- Linking from & to in domains and sub-domains

  What's the best optimised linking between sub-domains and domains? And every time we'll give the website link at the top with the logo... do we need to link the sub-domain also with all its pages? If example.com is the domain and example.com/blog is the sub-domain or sub-folder... Do we need to link to example.com from /blog? Do we need to give the /blog link on all pages of /blog? Is there any difference in connecting domains with sub-domains versus sub-folders?
  Intermediate & Advanced SEO | vtmoz
- Will disallowing URLs in the robots.txt file stop those URLs being indexed by Google?

  I found a lot of duplicate title tags showing in Google Webmaster Tools. When I visited the URLs that these duplicates belonged to, I found that they were just images from a gallery that we didn't particularly want Google to index. There is no benefit to the end user in these image pages being indexed in Google. Our developer has told us that these URLs are created by a module and are not "real" pages in the CMS. They would like to add the following to our robots.txt file: Disallow: /catalog/product/gallery/ QUESTION: If these pages are already indexed by Google, will this adjustment to the robots.txt file help to remove the pages from the index? We don't want these pages to be found.
  Intermediate & Advanced SEO | andyheath
- Baidu Spider appearing on robots.txt

  Hi, I'm not too sure what to do about this or what to think of it. This magically appeared in my company's robots.txt file (the text is below):

  User-agent: Baiduspider
  User-agent: Baiduspider-video
  User-agent: Baiduspider-image
  Disallow: /

  I know that Baidu is the Google of China, but I'm not sure why this would appear in our robots.txt all of a sudden. Should I be worried about a hack? Also, would I want to disallow Baidu from crawling my company's website? Thanks for your help, -Reed
  Intermediate & Advanced SEO | IceIcebaby
- Different Header on Home Page vs Sub pages

  Hello, I am an SEO/PPC manager for a company that does a medical detox. You can see the site in question here: http://opiates.com. My question is, I've never heard of it specifically being a problem to have a different header on the home page of the site than on the subpages, but I rarely see it either. Most sites, if I'm not mistaken, use a consistent header across most of the site. However, a person I'm working for now said that she has had other SEOs look at the site (above) and they always say that it is a big SEO problem to have a different header on the homepage than on the subpages. Any thoughts on this subject? I've never heard of this before. Thanks, Jesse
  Intermediate & Advanced SEO | Waismann
- Could you use a robots.txt file to disallow a duplicate content page from being crawled?

  A website has duplicate content pages to make it easier for users to find the information from a couple of spots in the site navigation. The site owner would like to keep it this way without hurting SEO. I've thought of using the robots.txt file to disallow search engines from crawling one of the pages. Would you think this is a workable/acceptable solution?
  Intermediate & Advanced SEO | gregelwell