        The Moz Q&A Forum
        Moz Q&A is closed.

        After more than 13 years and tens of thousands of questions, Moz Q&A closed on 12th December 2024. While we're not completely removing the content (many posts will still be viewable), we have locked both new posts and new replies.

        Allow or Disallow First in Robots.txt

        Technical SEO
        • irvingw

          If I want to override a Disallow directive in robots.txt with an Allow directive, should the Allow come before or after the Disallow?

          example:

          Allow: /models/ford///page*

          Disallow: /models////page

          • Net66SEO

            Just caught this a bit late, and probably too late to add anything, but my two pence is: test it in Webmaster Tools, via Crawl -> robots.txt Tester. If you've not used this before, simply add the URL you want to test and Google highlights the directive that allows or disallows it.
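
            If you want a quick local check outside of Webmaster Tools, Python's standard library also ships a basic robots.txt parser. Here is a minimal sketch (the URLs are hypothetical); note that this module follows the original robots exclusion standard and does not understand Google-style wildcards, so the Webmaster Tools tester remains the authoritative check:

            from urllib.robotparser import RobotFileParser

            rp = RobotFileParser()
            rp.set_url("https://www.example.com/robots.txt")  # hypothetical robots.txt location
            rp.read()                                         # fetch and parse the file

            # True/False: may this user agent fetch the given URL under the parsed rules?
            print(rp.can_fetch("Googlebot", "https://www.example.com/models/ford/page1"))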

            • fablau @Cyrus-Shepard

              Thank you Cyrus. Yes, I have tried your suggested robots.txt checker and, although it validates the file, it shows me a couple of warnings about the "unusual" use of wildcards. It is my understanding that I would probably need to discuss all this with the Google folks directly.

              Thank you for your answer... and yes, Keri, I know this is an old thread, but it is still useful today!

              Thanks 🙂

              • Cyrus-Shepard @fablau

                Can't say with 100% confidence, but it sounds like it might work. You could always upload it to a server and use a robots.txt checker to validate, although validator tools sometimes handle edge cases like this slightly differently, which can make the results moot.

                • KeriMorgret @fablau

                  Just a quick note, this question is actually from spring of 2012.

                  • fablau

                    What about something like:

                    allow: /directory/$

                    disallow: /directory/*

                    Where I want this to be indexed:

                    http://www.mysite.com/directory/

                    But not this:

                    http://www.mysite.com/directory/sub-directory/

                    Ideas?
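
                    One way to reason about this is a rough sketch of Google's documented matching behaviour (the longest matching rule wins, and a tie between allow and disallow goes to the allow), applied to the two rules above. This is only an illustration under those assumptions, not an official parser:

                    import re

                    RULES = [
                        ("allow", "/directory/$"),
                        ("disallow", "/directory/*"),
                    ]

                    def to_regex(path):
                        # '*' matches any sequence; a trailing '$' anchors the end of the URL path.
                        pattern = re.escape(path).replace(r"\*", ".*")
                        if pattern.endswith(r"\$"):
                            pattern = pattern[:-2] + "$"
                        return re.compile("^" + pattern)

                    def verdict(url_path):
                        # Rule length stands in for "specificity"; ties resolve in favour of allow.
                        hits = [(len(path), kind == "allow") for kind, path in RULES
                                if to_regex(path).match(url_path)]
                        if not hits:
                            return "allow"                      # no rule matched: crawling allowed
                        return "allow" if max(hits)[1] else "disallow"

                    print(verdict("/directory/"))                # allow    (the $ rule matches exactly)
                    print(verdict("/directory/sub-directory/"))  # disallow (only the * rule matches)

                    Under those assumptions, /directory/ itself stays crawlable while /directory/sub-directory/ is blocked, which is the behaviour you are after.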

                    • irvingw @Cyrus-Shepard

                      I really appreciate all the effort you put in to ensure your method was correct. Many thanks.

                      • Cyrus-Shepard

                        Interesting question - I've had this discussion a couple of times with different SEOs. Here's my best understanding: There are actually 2 different answers - one if you are talking about Google, and one for every other search engine.

                        For most search engines, the "Allow" should come first. This is because the first matching pattern always wins, for the reasons Geoff stated.

                        But Google is different. They state:

                        "At a group-member level, in particular for allow and disallow directives, the most specific rule based on the length of the [path] entry will trump the less specific (shorter) rule. The order of precedence for rules with wildcards is undefined."

                        Robots.txt Specifications - Webmasters — Google Developers

                        So for Google, order is not important, only the specificity of the rule based on the length of the entry. But the order of precedence for rules with wildcards is undefined.

                        This last part is important, because your directives contain wildcards. If I'm reading this right, your particular directives:

                        Allow: /models/ford///page*

                        Disallow: /models////page

                        So if it's "undefined", which directive will Google follow if order isn't important? Fortunately, there's a simple way to find out: Google Webmaster Tools lets you test any robots.txt file. I created a dummy file based on your rules, and in this case your directives worked perfectly no matter what order I put them in.

                        | URL tested | Result |
                        | http://cyrusshepard.com/models/ford/test/test/pages | Allowed by line 2: Allow: /models/ford///page* |
                        | http://cyrusshepard.com/models/chevy/test/test/pages | Blocked by line 3: Disallow: /models////page |

                        So, to summarize:

                        1. Always put Allow directives first, as most search engines follow the "first matching rule counts" rule.
                        2. Google doesn't care about order, but rather about the specificity of the rule based on the length of the entry.
                        3. The order of precedence for rules with wildcards is undefined.
                        4. When in doubt, check your robots.txt file in Google Webmaster Tools.

                        Hope this helps. (Sorry for the very long answer, which basically says you were right all along 🙂)
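
                        To make points 1 and 2 concrete, here is a minimal sketch contrasting the two matching strategies on a simplified, hypothetical rule pair (plain prefix rules, no wildcards):

                        RULES = [("disallow", "/models/"), ("allow", "/models/ford/")]

                        def first_match(rules, url_path):
                            # Most non-Google crawlers: the first matching pattern wins, so order matters.
                            for kind, path in rules:
                                if url_path.startswith(path):
                                    return kind
                            return "allow"  # nothing matched: allowed by default

                        def longest_match(rules, url_path):
                            # Google: the most specific (longest) matching rule wins and order is ignored;
                            # a tie between allow and disallow goes to allow.
                            hits = [(len(path), kind == "allow") for kind, path in rules
                                    if url_path.startswith(path)]
                            if not hits:
                                return "allow"
                            return "allow" if max(hits)[1] else "disallow"

                        url = "/models/ford/focus"
                        print(first_match(RULES, url))        # disallow (Disallow listed first)
                        print(first_match(RULES[::-1], url))  # allow    (Allow listed first)
                        print(longest_match(RULES, url))      # allow, whichever order the rules appear in

                        Swapping the two rules changes the first-match answer but not the longest-match one, which is the practical difference between points 1 and 2.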

                        • NakulGoyal @irvingw

                          I understand your concern. I am basing my answer on the fact that if you don't have a robots.txt at all, Google will still crawl you, which means it's an allow by default. So all that matters, in my opinion, is the disallow; but because you need an allow to carve an exception out of the wildcard disallow, you could allow that first and disallow next.

                          Honestly, I don't think it matters. If you think about how a bot works, it's not as if line 1 of robots.txt is read, the bot goes crawling, then it comes back and reads the next line, and so on. Does that make sense? It reads all the lines in the robots.txt and then follows the directives. But to be sure, you can try either scenario and see for yourself. I am sure the results would be the same either way.

                          • zigojacko

                            The allow directives need to come before the disallow directives for the same directory/file paths. (I have never personally tested this, although it makes logical sense to instruct a robot that it can access one particular path within a directory structure before it sees that it is blocked from crawling that directory.)

                            For example:

                            Allow: /profiles

                            Disallow: /s2/profiles/me

                            Allow: /s2/profiles

                            Allow: /s2/photos

                            Allow: /s2/static

                            Disallow: /s2

                            This follows how Google has formatted its own robots.txt.
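
                            As a sanity check of that ordering under the "first matching rule wins" reading discussed earlier in the thread, here is a minimal, hypothetical sketch that walks a few paths through the rule list above with simple prefix matching:

                            # The rule list quoted above, evaluated with prefix matching in file order.
                            RULES = [
                                ("allow", "/profiles"),
                                ("disallow", "/s2/profiles/me"),
                                ("allow", "/s2/profiles"),
                                ("allow", "/s2/photos"),
                                ("allow", "/s2/static"),
                                ("disallow", "/s2"),
                            ]

                            def first_match(url_path):
                                for kind, path in RULES:
                                    if url_path.startswith(path):
                                        return kind      # the first rule whose path prefixes the URL decides
                                return "allow"           # no rule matched: allowed by default

                            print(first_match("/s2/profiles/me"))     # disallow (the specific rule is hit first)
                            print(first_match("/s2/profiles/12345"))  # allow
                            print(first_match("/s2/widgets"))         # disallow (falls through to /s2)

                            For these paths, Google's longest-match reading gives the same results, which is presumably why this ordering is safe either way.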

                            • irvingw @NakulGoyal

                              Thanks. I want to make sure I get this right in a syntax universally understood by all engines. I have seen webmasters all over the place on this one, with some saying that crawlers use a first-matching rule and others saying that crawlers use a last-matching rule. I am almost thinking of including the allow directive twice - before and after - to cover all bases.

                              • NakulGoyal

                                I don't think it matters, but I think I would disallow first, because by default everything is an Allow.
