Moz Q&A is closed.
After more than 13 years, and tens of thousands of questions, Moz Q&A closed on 12th December 2024. Whilst we’re not completely removing the content - many posts will still be possible to view - we have locked both new posts and new replies.
Allow or Disallow First in Robots.txt
- If I want to override a Disallow directive in robots.txt with an Allow command, do I have the Allow command before or after the Disallow command? Example:
  Allow: /models/ford/*/*/page*
  Disallow: /models/*/*/*/page
- Just caught this a bit late and probably too late to add something, but my two pence is: test it in Webmaster Tools, via Crawl -> robots.txt Tester. If you've not used this before, simply add the URL you want to test and Google highlights the directive that allows or disallows it.
- Thank you Cyrus. Yes, I have tried your suggested robots.txt checker and although it validates the file, it shows me a couple of warnings about the "unusual" use of wildcards. It is my understanding that I would probably need to discuss all this with Google folks directly. Thank you for your answer... and yes, Keri, I know this is an old thread, but it is still useful today! Thanks
- Can't say with 100% confidence, but it sounds like it might work. You could always upload it to a server and use a robots.txt checker to validate, although validator tools sometimes handle edge cases like this slightly differently, which can make their results moot.
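For anyone who would rather run this kind of check locally than upload to a server, here is a minimal sketch using Python's standard-library robots.txt parser. The example.com URLs are placeholders, and note that urllib.robotparser only does simple prefix matching with first-match-wins; it ignores the * and $ wildcards discussed in this thread, so a wildcard-aware tester is still needed for those edge cases.

```python
# Sketch: fetch a live robots.txt and test URLs against it with the Python
# standard library. Caveat: urllib.robotparser uses simple prefix matching and
# first-match-wins; it ignores the * and $ wildcards, so it cannot reproduce
# Google's handling of wildcard rules.
from urllib.robotparser import RobotFileParser

robots_url = "https://www.example.com/robots.txt"  # placeholder domain

parser = RobotFileParser(robots_url)
parser.read()  # download and parse the file

for url in (
    "https://www.example.com/directory/",
    "https://www.example.com/directory/sub-directory/",
):
    verdict = "allowed" if parser.can_fetch("*", url) else "blocked"
    print(url, "->", verdict)
```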
- Just a quick note: this question is actually from spring of 2012.
- What about something like:
  allow: /directory/$
  disallow: /directory/*
  Where I want this to be indexed: http://www.mysite.com/directory/
  But not this: http://www.mysite.com/directory/sub-directory/
  Ideas?
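As a rough illustration of how the * and $ wildcards in those two rules behave under Google's documented pattern matching, here is a small sketch that translates each rule into a regular expression and tests the two example paths. It is an illustration only, not an official parser.

```python
# Which of the two proposed rules match each URL path?
#   allow: /directory/$     ($ anchors the rule at the end of the path)
#   disallow: /directory/*  (* matches any sequence of characters)
# Each rule is turned into a regex that must match from the start of the path.
import re

def rule_matches(rule: str, path: str) -> bool:
    pattern = re.escape(rule).replace(r"\*", ".*").replace(r"\$", "$")
    return re.match(pattern, path) is not None

for path in ("/directory/", "/directory/sub-directory/"):
    print(path,
          "| allow matches:", rule_matches("/directory/$", path),
          "| disallow matches:", rule_matches("/directory/*", path))

# Expected:
#   /directory/               -> allow matches: True,  disallow matches: True
#   /directory/sub-directory/ -> allow matches: False, disallow matches: True
# When both rules match, Google documents using the more specific (longer) rule
# and, on a tie like this one, the less restrictive rule - so /directory/ itself
# should stay crawlable while everything beneath it is blocked.
```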
- I really appreciate all that effort you put in to ensure your method was correct. Many thanks.
- Interesting question - I've had this discussion a couple of times with different SEOs. Here's my best understanding: there are actually two different answers - one if you are talking about Google, and one for every other search engine.

  For most search engines, the "Allow" should come first. This is because the first matching pattern always wins, for the reasons Geoff stated.

  But Google is different. They state: "At a group-member level, in particular for allow and disallow directives, the most specific rule based on the length of the [path] entry will trump the less specific (shorter) rule. The order of precedence for rules with wildcards is undefined." (Robots.txt Specifications - Webmasters - Google Developers)

  So for Google, order is not important, only the specificity of the rule based on the length of the entry. But the order of precedence for rules with wildcards is undefined, and that last part is important, because your directives contain wildcards. If I'm reading this right, for your particular directives:

  Allow: /models/ford/*/*/page*
  Disallow: /models/*/*/*/page

  it's "undefined" which directive Google will follow, if order isn't important. Fortunately, there's a simple way to find out: Google Webmaster Tools allows you to test any robots.txt file. I created a dummy file based on your rules, and in this case your directives worked perfectly no matter what order I put them in:

  | URL | Result |
  | --- | --- |
  | http://cyrusshepard.com/models/ford/test/test/pages | Allowed by line 2: Allow: /models/ford/*/*/page* |
  | http://cyrusshepard.com/models/chevy/test/test/pages | Blocked by line 3: Disallow: /models/*/*/*/page |

  So, to summarize:
  1. Always put Allow directives first, as most search engines follow the "first rule counts" rule.
  2. Google doesn't care about order, but rather the specificity of the rule based on the length of the entry.
  3. The order of precedence for rules with wildcards is undefined.
  4. When in doubt, check your robots.txt file in Google Webmaster Tools.

  Hope this helps. (Sorry for the very long answer which basically says you were right all along.)
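To make the "most specific (longest) rule wins" behaviour quoted above concrete, here is a minimal Python sketch that applies that precedence to the directives from the question and the two test URLs from the table. It is only a sketch of the documented rule, not Google's actual implementation or a full robots.txt parser.

```python
# Sketch of the "most specific (longest) rule wins" precedence that Google
# documents, applied to the directives from the question. Not a full robots.txt
# parser - just enough to illustrate which rule takes precedence.
import re

RULES = [
    ("allow", "/models/ford/*/*/page*"),
    ("disallow", "/models/*/*/*/page"),
]

def rule_matches(rule_path: str, url_path: str) -> bool:
    # '*' matches any characters, '$' anchors the end; rules match from the start.
    pattern = re.escape(rule_path).replace(r"\*", ".*").replace(r"\$", "$")
    return re.match(pattern, url_path) is not None

def verdict(url_path: str) -> str:
    matching = [(kind, path) for kind, path in RULES if rule_matches(path, url_path)]
    if not matching:
        return "allowed (no rule matches - crawling is allowed by default)"
    # Longest rule wins; on equal length, Google documents using the less
    # restrictive rule, so "allow" sorts ahead of "disallow".
    kind, path = max(matching, key=lambda rule: (len(rule[1]), rule[0] == "allow"))
    return f"{'allowed' if kind == 'allow' else 'blocked'} by {kind}: {path}"

for url_path in ("/models/ford/test/test/pages", "/models/chevy/test/test/pages"):
    print(url_path, "->", verdict(url_path))

# Expected:
#   /models/ford/test/test/pages  -> allowed by allow: /models/ford/*/*/page*
#   /models/chevy/test/test/pages -> blocked by disallow: /models/*/*/*/page
```

Run against the two test URLs, this reproduces the same allowed/blocked results as the Webmaster Tools test shown above.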
- I understand your concern. I am basing my answer on the fact that if you don't have a robots.txt at all, Google will still crawl you, which means it's an allow by default. So all that matters, in my opinion, is the disallow; but because you need an allow as an exception to the wildcard disallow, you could allow that first and disallow next. Honestly, I don't think it matters. If you think about the way a bot works, it's not like line 1 of robots.txt is read, the bot goes crawling, and then comes back to read the next line, and so on. Does that make sense? It reads all the lines in the robots.txt and then follows the directives. But to be sure, you can try either of the scenarios and see for yourself. I am sure the results would be the same either way.
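The "allow by default" point is easy to confirm with Python's standard-library parser: with no rules at all everything is fetchable, and a Disallow only blocks the paths it names. (Note that urllib.robotparser uses prefix matching and first-match-wins, not Google's wildcard and longest-match rules, so this only illustrates the default-allow behaviour.)

```python
# "Allow by default": with no rules at all, the standard-library parser permits
# everything, and a Disallow only blocks the paths it names.
from urllib.robotparser import RobotFileParser

empty = RobotFileParser()
empty.parse([])  # no robots.txt rules at all
print(empty.can_fetch("*", "https://www.example.com/anything"))  # True

blocked = RobotFileParser()
blocked.parse("User-agent: *\nDisallow: /private/".splitlines())
print(blocked.can_fetch("*", "https://www.example.com/private/page"))  # False
print(blocked.can_fetch("*", "https://www.example.com/public/page"))   # True
```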
- The allow directives need to come before the disallow directives for the same directory/file paths. (I have never personally tested this, although it makes logical sense to instruct a robot to access one particular path within a directory structure before it sees that it is blocked from crawling that directory.) For example, as per how Google have formatted their own robots.txt:
  Allow: /profiles
  Disallow: /s2/profiles/me
  Allow: /s2/profiles
  Allow: /s2/photos
  Allow: /s2/static
  Disallow: /s2
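To see why this ordering works for engines that apply the first matching rule, here is a quick check with Python's standard-library parser, which evaluates rules in file order using prefix matching. The sample paths and the example.com host are made up for illustration.

```python
# Quick check of the ordering above under "first matching rule wins" semantics.
# urllib.robotparser evaluates rules in file order with prefix matching, so the
# specific Allow lines must appear before the broad Disallow: /s2 for the
# profile and photo paths to stay crawlable.
from urllib.robotparser import RobotFileParser

rules = """\
User-agent: *
Allow: /profiles
Disallow: /s2/profiles/me
Allow: /s2/profiles
Allow: /s2/photos
Allow: /s2/static
Disallow: /s2
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

for path in ("/s2/profiles/someone", "/s2/photos/album", "/s2/profiles/me", "/s2/anything-else"):
    verdict = "allowed" if parser.can_fetch("*", "https://www.example.com" + path) else "blocked"
    print(path, "->", verdict)

# Expected: /s2/profiles/someone and /s2/photos/album are allowed;
# /s2/profiles/me and /s2/anything-else are blocked.
```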
- Thanks. I want to make sure I get this right with a syntax universally understood by all engines. I have seen webmasters all over the place on this one, with some saying that crawlers use a first-matching rule and others saying that crawlers use a last-matching rule. I am almost thinking of having the allow command twice - before and after - to cover all bases.
- I don't think it matters, but I think I would disallow first, because by default everything is an Allow.