        The Moz Q&A Forum


        SEO Best Practices regarding Robots.txt disallow

        Intermediate & Advanced SEO
        • jamiegriz

          I cannot find hard-and-fast direction about the following issue:

          It looks like the robots.txt file on my server has been set up to disallow "account" and "search" pages within my site, so I am receiving warnings from Google Search Console that URLs are being blocked by robots.txt (Disallow: /Account/ and Disallow: /?search=). Do you recommend unblocking these URLs?

          I'm getting a warning that over 18,000 URLs are blocked by robots.txt ("Sitemap contains URLs which are blocked by robots.txt"). It seems that I wouldn't want that many URLs blocked, would I?

          Thank you!!
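
          For anyone auditing a similar setup, here is a minimal sketch (not part of the original thread) that tests a few URLs against the two Disallow rules quoted in the question, using Python's standard urllib.robotparser. The domain and sample paths are hypothetical placeholders, and robotparser approximates standard robots.txt matching rather than Googlebot's exact behavior.

          # Minimal sketch: test hypothetical URLs against the rules from the question.
          # Standard library only; the domain and paths are placeholders.
          from urllib.robotparser import RobotFileParser

          rules = [
              "User-agent: *",
              "Disallow: /Account/",
              "Disallow: /?search=",
          ]

          parser = RobotFileParser()
          parser.parse(rules)

          sample_urls = [
              "https://www.example.com/Account/settings",   # expected: blocked
              "https://www.example.com/?search=widgets",    # expected: blocked
              "https://www.example.com/products/widgets",   # expected: allowed
          ]

          for url in sample_urls:
              verdict = "allowed" if parser.can_fetch("Googlebot", url) else "blocked"
              print(f"{verdict}: {url}")

          Running a check like this over a sample of the 18,000 flagged URLs makes it easy to confirm which of the two rules is responsible before deciding what to unblock.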

          • mememax
            In reply to @jamiegriz:

            Mmm, it depends.

            It's really hard for me to answer without knowing your site, but I would say you're headed in the right direction. You want to give Google more ways to reach your quality content.

            Now, do you have any other pages that bring bots to those search results through normal user navigation, or are they all search-driven?

            While Google can crawl pages it discovers via internal/external links, it can't reproduce searches by typing into your search box, so I doubt those pages are particularly valuable unless you link to them somehow. If you do, you may want to keep Google crawling them.

            A different question is whether you want to "index" them: being searches, they probably aggregate information that is already present elsewhere on the site. For indexation purposes you may want to keep them out of the index while still allowing the bot to run through them.

            Again, beware of crawl budget: you don't want Google wandering around millions of search results instead of your money pages, unless you're able to let it crawl only a small portion of them.

            I hope this made sense 🙂
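
            To make the crawl-versus-index distinction above concrete, here is a minimal sketch of the pattern being described, assuming a custom Python server rather than the asker's actual stack: the search pages are left out of robots.txt's Disallow rules so bots can crawl them, but every search response carries an X-Robots-Tag: noindex header (a meta robots noindex tag in the HTML works the same way) so they stay out of the index.

            # Minimal sketch (hypothetical server, not the asker's stack): search pages
            # stay crawlable because robots.txt does not disallow them, but the
            # X-Robots-Tag header asks engines not to index them.
            from http.server import BaseHTTPRequestHandler, HTTPServer

            class SearchPageHandler(BaseHTTPRequestHandler):
                def do_GET(self):
                    body = b"<html><body>Internal search results ...</body></html>"
                    self.send_response(200)
                    self.send_header("Content-Type", "text/html; charset=utf-8")
                    if self.path.startswith("/?search="):
                        # Crawlable, but excluded from the index.
                        self.send_header("X-Robots-Tag", "noindex")
                    self.send_header("Content-Length", str(len(body)))
                    self.end_headers()
                    self.wfile.write(body)

            if __name__ == "__main__":
                HTTPServer(("localhost", 8000), SearchPageHandler).serve_forever()

            Note that a noindex signal only works if the page is not also disallowed in robots.txt: a blocked page is never fetched, so its headers and meta tags are never seen.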

            • jamiegriz
              In reply to @mememax:

              Thank you for your response! I'm going to do a bit more research, but I think I will keep "account" disallowed and unblock "search". The search feature on my site pulls up quality content, so it seems like I would want that to be crawled. Does this sound logical to you? 🙂

              • mememax

                That could be completely normal. Google sends that warning because you're giving it conflicting directions: you're preventing it from crawling pages (via robots.txt) that you've asked it to index (via the sitemap).

                Google doesn't know how important those pages are to you, so you're the one who needs to decide what to do next.

                Are those pages important to you? Do you want them in the index? If so, change your robots.txt rule; if not, remove them from the sitemap.

                About the previous answer: robots.txt is not used to block hackers, quite the opposite. Hackers can easily see in robots.txt which pages you'd like to block and visit them, since those may be key pages (e.g. wp-admin). But let's not focus on that, since hackers have so many ways of finding core pages that it isn't really the topic. robots.txt is normally used to avoid duplication issues and to keep Google from crawling low-value pages and wasting crawl budget.
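
                For reference, the conflict described above can be reproduced locally. The following is a minimal sketch (the site URL is a hypothetical placeholder, and it assumes a single urlset sitemap rather than a sitemap index) that fetches robots.txt and sitemap.xml and lists every sitemap URL the robots rules would block, which is essentially what Search Console's warning reports. Standard library only; Googlebot's matching can differ slightly from robotparser's.

                # Minimal sketch: find sitemap URLs that robots.txt would block.
                # SITE is a hypothetical placeholder; assumes a single <urlset> sitemap.
                from urllib.request import urlopen
                from urllib.robotparser import RobotFileParser
                import xml.etree.ElementTree as ET

                SITE = "https://www.example.com"

                parser = RobotFileParser(SITE + "/robots.txt")
                parser.read()

                NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}
                with urlopen(SITE + "/sitemap.xml") as resp:
                    tree = ET.parse(resp)

                conflicts = [
                    loc.text.strip()
                    for loc in tree.findall(".//sm:loc", NS)
                    if loc.text and not parser.can_fetch("Googlebot", loc.text.strip())
                ]

                print(f"{len(conflicts)} sitemap URLs are blocked by robots.txt")
                for url in conflicts[:20]:   # show a sample
                    print(" ", url)

                Whichever side you change, the goal is that the two files stop contradicting each other: URLs you want indexed stay in the sitemap and out of the Disallow rules, and URLs you disallow come out of the sitemap.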

                • TheKatzMeow

                  Typically, you only want robots.txt to block access points that could let hackers into your site, like an admin page (e.g. www.examplesite.com/admin/). You definitely don't want it blocking your whole site. A developer or webmaster would be better at speaking to the specifics, but that's the quick, high-level answer.

