    The Moz Q&A Forum


    Moz Q&A is closed.

    After more than 13 years and tens of thousands of questions, Moz Q&A closed on 12th December 2024. Whilst we're not completely removing the content, and many posts will still be viewable, we have locked both new posts and new replies.

    Soft 404s from pages blocked by robots.txt -- cause for concern?

    Intermediate & Advanced SEO
    • nicole.healthline

      We're seeing soft 404 errors appear in our Google Webmaster Tools section on pages that are blocked by robots.txt (our search result pages).

      Should we be concerned? Is there anything we can do about this?
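
      For context, a minimal sketch of the kind of robots.txt rule that typically blocks internal search result pages; the /search/ path is hypothetical, not taken from the question:

          User-agent: *
          Disallow: /search/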

      • CleverPhD @CleverPhD

        Me too. It was that video that helped clear things up for me. Then I could see when to use robots.txt vs. the noindex meta tag. It has made a big difference in how I manage sites that have large amounts of content that can be sorted in a huge number of ways.

        • Highland @CleverPhD

          Good stuff. I was always under the impression they still crawled them (otherwise, how would you know if the block was removed?).

          • CleverPhD @Highland

            Take a look at

            http://www.youtube.com/watch?v=KBdEwpRQRD0

            to see what I am talking about.

            Robots.txt does prevent crawling, according to Matt Cutts.
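
            To spell out the distinction (an illustrative sketch, not part of the original post): a robots.txt Disallow stops compliant bots from fetching matching URLs at all, while a meta noindex can only be obeyed if the page is crawled so the tag can be seen:

                # robots.txt - blocks crawling; the URL can still end up indexed from links alone
                User-agent: *
                Disallow: /private/

                <!-- meta robots tag - requires crawling so the tag can be seen -->
                <meta name="robots" content="noindex">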

            • Highland

              Robots.txt prevents indexation, not crawling. The good news is that Googlebot stops crawling 404s.

              • CleverPhD

                Just a couple of under-the-hood things to check.

                1. Are you sure your robots.txt is set up correctly? Check in GWT to see that Google is reading it.

                2. This may be a timing issue. Errors take 30-60 days to drop out (from what I have seen), so did they show as soft 404s before you added them to robots.txt?

                If that was the case, this may be a sequence issue. If Google finds a soft 404 (or some other error) and then comes back to spider but is not able to crawl the page due to robots.txt, it does not know the current status of the page, so it may just keep the last status it found.

                3. I tend to see soft 404s for pages that have a 301 redirect with a many-to-one association. In other words, you have a bunch of pages that all 301 to a single page. You may want to consider changing where some of the 301s redirect so that they go to a specific page rather than an index page.

                4. If you have a page in robots.txt because you do not want it in Google, here is what I would do: show a 200 on that page, but then put a noindex, nofollow in the meta tags (see the sketch below).

                http://support.google.com/webmasters/bin/answer.py?hl=en&answer=93710

                "When we see the noindex meta tag on a page, Google will completely drop the page from our search results, even if other pages link to it"

                Let Google spider the page so that it can see the 200 code - that gets rid of the soft 404 errors. Then toss in the noindex, nofollow meta tags to have the page removed from the Google index. It sounds backwards that you have to let Google spider a page in order to remove it, but it works if you walk through the logic.
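
                A minimal sketch of that meta tag as it might appear in the head of one of the blocked pages (the noindex behavior is what the Google help page quoted above documents; everything else is illustrative):

                    <!-- page returns HTTP 200 so Google can crawl it and see this tag -->
                    <meta name="robots" content="noindex, nofollow">

                For this to work, the page must also be unblocked in robots.txt; if crawling stays blocked, Google never sees the tag.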

                Good luck!



                Related Questions

                • pdrama231

                  Should I Add Location to ALL of My Client's URLs?

                  Hi Mozzers, my first Moz post! Yay! I'm excited to join the squad 🙂 My client is a full-service entertainment company serving the Washington DC Metro area (DC, MD & VA) and offers a host of services for those wishing to throw events/parties. Think DJs for weddings, cool photo booths, ballroom lighting, etc. I'm wondering what the right URL structure should be. I've noticed that some of our competitors do put DC-area keywords in their URLs, but with SERPs moving to focus a lot more on quality over keyword density, I'm wondering if we should focus on location-based keywords in traditional on-page areas (e.g. title tags, headers, metas, content, etc.) instead of having keywords in the URLs alongside those traditional areas. So, on every product-related page, should we do something like:

                  example.com/weddings/planners-washington-dc-md-va
                  example.com/weddings/djs-washington-dc-md-va
                  example.com/weddings/ballroom-lighting-washington-dc-md-va

                  OR

                  example.com/weddings/planners
                  example.com/weddings/djs
                  example.com/weddings/ballroom-lighting

                  In both cases, we'd put the necessary location-based keywords in the proper places on-page. If we follow the location-in-URL tactic, we'd use DC-area terms in all subsequent product page URLs as well. Essentially, every page outside of the home page would have a location in it. Thoughts? Thank you!!

                  Intermediate & Advanced SEO | pdrama231
                • andyheath

                  Will disallowing URLs in the robots.txt file stop those URLs being indexed by Google?

                  I found a lot of duplicate title tags showing in Google Webmaster Tools. When I visited the URLs that these duplicates belonged to, I found that they were just images from a gallery that we didn't particularly want Google to index. There is no benefit to the end user in these image pages being indexed in Google. Our developer has told us that these URLs are created by a module and are not "real" pages in the CMS. They would like to add the following to our robots.txt file: Disallow: /catalog/product/gallery/ QUESTION: If these pages are already indexed by Google, will this adjustment to the robots.txt file help to remove the pages from the index? We don't want these pages to be found.
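
                  For reference, the directive the developer proposed, as it would sit in a robots.txt file (path exactly as given in the question):

                      User-agent: *
                      Disallow: /catalog/product/gallery/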

                  Intermediate & Advanced SEO | andyheath
                • Modbargains

                  Dilemma about "images" folder in robots.txt

                  Hi, hope you're doing well. I am sure you guys must be aware that Google has updated their webmaster technical guidelines, saying that users should allow access to their CSS and JavaScript files if possible. It used to be that Google would render web pages only text-based; now it claims that it can read the CSS and JavaScript. According to their own terms, not allowing access to the CSS files can result in sub-optimal rankings: "Disallowing crawling of Javascript or CSS files in your site's robots.txt directly harms how well our algorithms render and index your content and can result in suboptimal rankings." http://googlewebmastercentral.blogspot.com/2014/10/updating-our-technical-webmaster.html

                  We have allowed access to our CSS files, and Googlebot is seeing our web pages more like a normal user would (tested it in GWT). Anyhow, this is my dilemma, and I am sure a lot of other users might be facing the same situation. Like any other e-commerce company/website, we have a lot of images. Our CSS files used to be inside our images folder, so I have allowed access to that. Here's the robots.txt --> http://www.modbargains.com/robots.txt

                  Right now we are blocking the images folder, as it is very large, very heavy, and some of the images are very high-res. The reason we are blocking it is that we feel Googlebot might spend almost all of its time trying to crawl that "images" folder alone and might not have enough time to crawl other important pages; not to mention a very heavy load on Google's server and ours. We do have good, high-quality, original pictures, and we feel that we are losing potential rankings since we are blocking images. I was thinking to allow ONLY the Google image bot access to it, but I still feel that Google might spend a lot of time doing that. I was wondering whether Google makes a decision like "let me spend 10 minutes on the Google image bot, and 20 minutes on the Google mobile bot," or whether it has separate "time spending" allocations for each of its bot types. I want to unblock the images folder, for now only for the Google image bot, but at the same time I fear that it might drastically hamper indexing of our important pages because, as I mentioned before, we have tons and tons of images and Google would spend enough time just crawling that folder.

                  Any advice? Recommendations? Suggestions? Technical guidance? Plan of action? I'm pretty sure I answered my own question, but I need confirmation from an expert that I am right in saying: allow only the Google image bot access to my images folder. Sincerely, Shaleen Shah
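
                  A hedged sketch of the "Google image bot only" idea floated in the question; the /images/ path stands in for the site's actual folder, and compliant crawlers pick the most specific user-agent group that matches them:

                      # Googlebot-Image may crawl the images folder
                      User-agent: Googlebot-Image
                      Allow: /images/

                      # all other compliant crawlers stay blocked
                      User-agent: *
                      Disallow: /images/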

                  Intermediate & Advanced SEO | Modbargains
                • carlystemmer

                  404s - Do they impact search ranking, and how do we get rid of them?

                  Hi, We recently ran the Moz website crawl report and saw a number of 404 pages from our site come back. These were returned as "high priority" issues to fix. My question is, how do 404s impact search ranking? From what Google support tells me, 404s are "normal" and not a big deal to fix, but if they are "high priority," shouldn't we be doing something to remove them? Also, if I do want to remove the pages, how would I go about doing so? Is it enough to go into Webmaster Tools and list it as a link not to crawl anymore, or do we need to do work on the website development side as well? Here are a couple of examples that came back; these are articles that were previously posted but we decided to close out: http://loyalty360.org/loyalty-management/september-2011/let-me-guessyour-loyalty-program-isnt-working http://loyalty360.org/resources/article/mark-johnson-speaks-at-motivation-show Thanks!

                  Intermediate & Advanced SEO | carlystemmer
                • YairSpolter

                  Block in robots.txt instead of using canonical?

                  When I use a canonical tag for pages that are variations of the same page, it basically means that I don't want Google to index this page. But at the same time, spiders will go ahead and crawl the page. Isn't this a waste of my crawl budget? Wouldn't it be better to just disallow the page in robots.txt and let Google focus on crawling the pages that I do want indexed? In other words, why should I ever use rel=canonical as opposed to simply disallowing in robots.txt?
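
                  For reference, a sketch of the two mechanisms being compared (the URLs and the sort parameter are hypothetical):

                      <!-- rel=canonical: the variation stays crawlable; indexing signals consolidate to the canonical URL -->
                      <link rel="canonical" href="https://www.example.com/product">

                      # robots.txt: the variation is never crawled, but its URL can still be indexed from links
                      User-agent: *
                      Disallow: /*?sort=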

                  Intermediate & Advanced SEO | YairSpolter
                • fablau

                  Robots.txt: how to exclude sub-directories correctly?

                  Hello here, I am trying to figure out the correct way to tell SEs to crawl this:

                  http://www.mysite.com/directory/

                  But not this:

                  http://www.mysite.com/directory/sub-directory/

                  or this:

                  http://www.mysite.com/directory/sub-directory2/sub-directory/...

                  But given that I have thousands of sub-directories with almost infinite combinations, I can't put the following definitions in a manageable way:

                  disallow: /directory/sub-directory/
                  disallow: /directory/sub-directory2/
                  disallow: /directory/sub-directory/sub-directory/
                  disallow: /directory/sub-directory2/subdirectory/
                  etc...

                  I would end up having thousands of definitions to disallow all the possible sub-directory combinations. So, is the following a correct, better and shorter way to define what I want above:

                  allow: /directory/$
                  disallow: /directory/*

                  Would the above work? Any thoughts are very welcome! Thank you in advance. Best, Fab.
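
                  Laid out as a complete robots.txt group, the asker's proposed pattern would look like this (Google honors the * and $ wildcards used here, though not every crawler does):

                      User-agent: *
                      Allow: /directory/$
                      Disallow: /directory/*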

                  Intermediate & Advanced SEO | fablau
                • NerdsOnCall

                  What's the deal with significantLinks?

                  http://schema.org/significantLink Schema.org has a definition for "non-navigation links that are clicked on the most." Presumably this means something like the big green buttons on Moz's homepage. But does anyone know how they affect anything? In http://a-moz.groupbuyseo.org/blog/schemaorg-a-new-approach-to-structured-data-for-seo#comment-142936, Jeremy Nelson says "It's quite possible that significant links will pass anchor text as well if a previous link to the page was set in navigation, effectively making obsolete the first-link-counts rule, and I am interested in putting that to test." This is a pretty obscure comment, but it's one of the only results I could find on the subject. Is this BS? I can't even make out what all of it is saying. So what's the deal with significantLinks, and how can we use them for SEO?
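
                  For reference, a hedged sketch of how significantLink is commonly expressed in JSON-LD markup (the URL is hypothetical):

                      <script type="application/ld+json">
                      {
                        "@context": "https://schema.org",
                        "@type": "WebPage",
                        "significantLink": "https://www.example.com/free-trial"
                      }
                      </script>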

                  Intermediate & Advanced SEO | NerdsOnCall
                • esiow2013

                  May I know the meaning of these parameters in .htaccess?

                  # Begin HackRepair.com Blacklist
                  RewriteEngine on
                  # Abuse Agent Blocking
                  RewriteCond %{HTTP_USER_AGENT} ^BlackWidow [NC,OR]
                  RewriteCond %{HTTP_USER_AGENT} ^Bolt\ 0 [NC,OR]
                  RewriteCond %{HTTP_USER_AGENT} ^Bot\ mailto:[email protected] [NC,OR]
                  RewriteCond %{HTTP_USER_AGENT} CazoodleBot [NC,OR]
                  RewriteCond %{HTTP_USER_AGENT} ^ChinaClaw [NC,OR]
                  RewriteCond %{HTTP_USER_AGENT} ^Custo [NC,OR]
                  RewriteCond %{HTTP_USER_AGENT} ^Default\ Browser\ 0 [NC,OR]
                  RewriteCond %{HTTP_USER_AGENT} ^DIIbot [NC,OR]
                  RewriteCond %{HTTP_USER_AGENT} ^DISCo [NC,OR]
                  RewriteCond %{HTTP_USER_AGENT} discobot [NC,OR]
                  RewriteCond %{HTTP_USER_AGENT} ^Download\ Demon [NC,OR]
                  RewriteCond %{HTTP_USER_AGENT} ^eCatch [NC,OR]
                  RewriteCond %{HTTP_USER_AGENT} ecxi [NC,OR]
                  RewriteCond %{HTTP_USER_AGENT} ^EirGrabber [NC,OR]
                  RewriteCond %{HTTP_USER_AGENT} ^EmailCollector [NC,OR]
                  RewriteCond %{HTTP_USER_AGENT} ^EmailSiphon [NC,OR]
                  RewriteCond %{HTTP_USER_AGENT} ^EmailWolf [NC,OR]
                  RewriteCond %{HTTP_USER_AGENT} ^Express\ WebPictures [NC,OR]
                  RewriteCond %{HTTP_USER_AGENT} ^ExtractorPro [NC,OR]
                  RewriteCond %{HTTP_USER_AGENT} ^EyeNetIE [NC,OR]
                  RewriteCond %{HTTP_USER_AGENT} ^FlashGet [NC,OR]
                  RewriteCond %{HTTP_USER_AGENT} ^GetRight [NC,OR]
                  RewriteCond %{HTTP_USER_AGENT} ^GetWeb! [NC,OR]
                  RewriteCond %{HTTP_USER_AGENT} ^Go!Zilla [NC,OR]
                  RewriteCond %{HTTP_USER_AGENT} ^Go-Ahead-Got-It [NC,OR]
                  RewriteCond %{HTTP_USER_AGENT} ^GrabNet [NC,OR]
                  RewriteCond %{HTTP_USER_AGENT} ^Grafula [NC,OR]
                  RewriteCond %{HTTP_USER_AGENT} GT::WWW [NC,OR]
                  RewriteCond %{HTTP_USER_AGENT} heritrix [NC,OR]
                  RewriteCond %{HTTP_USER_AGENT} ^HMView [NC,OR]
                  RewriteCond %{HTTP_USER_AGENT} HTTP::Lite [NC,OR]
                  RewriteCond %{HTTP_USER_AGENT} HTTrack [NC,OR]
                  RewriteCond %{HTTP_USER_AGENT} ia_archiver [NC,OR]
                  RewriteCond %{HTTP_USER_AGENT} IDBot [NC,OR]
                  RewriteCond %{HTTP_USER_AGENT} id-search [NC,OR]
                  RewriteCond %{HTTP_USER_AGENT} id-search.org [NC,OR]
                  RewriteCond %{HTTP_USER_AGENT} ^Image\ Stripper [NC,OR]
                  RewriteCond %{HTTP_USER_AGENT} ^Image\ Sucker [NC,OR]
                  RewriteCond %{HTTP_USER_AGENT} Indy\ Library [NC,OR]
                  RewriteCond %{HTTP_USER_AGENT} ^InterGET [NC,OR]
                  RewriteCond %{HTTP_USER_AGENT} ^Internet\ Ninja [NC,OR]
                  RewriteCond %{HTTP_USER_AGENT} ^InternetSeer.com [NC,OR]
                  RewriteCond %{HTTP_USER_AGENT} IRLbot [NC,OR]
                  RewriteCond %{HTTP_USER_AGENT} ISC\ Systems\ iRc\ Search\ 2.1 [NC,OR]
                  RewriteCond %{HTTP_USER_AGENT} ^Java [NC,OR]
                  RewriteCond %{HTTP_USER_AGENT} ^JetCar [NC,OR]
                  RewriteCond %{HTTP_USER_AGENT} ^JOC\ Web\ Spider [NC,OR]
                  RewriteCond %{HTTP_USER_AGENT} ^larbin [NC,OR]
                  RewriteCond %{HTTP_USER_AGENT} ^LeechFTP [NC,OR]
                  RewriteCond %{HTTP_USER_AGENT} libwww [NC,OR]
                  RewriteCond %{HTTP_USER_AGENT} libwww-perl [NC,OR]
                  RewriteCond %{HTTP_USER_AGENT} ^Link [NC,OR]
                  RewriteCond %{HTTP_USER_AGENT} LinksManager.com_bot [NC,OR]
                  RewriteCond %{HTTP_USER_AGENT} linkwalker [NC,OR]
                  RewriteCond %{HTTP_USER_AGENT} lwp-trivial [NC,OR]
                  RewriteCond %{HTTP_USER_AGENT} ^Mass\ Downloader [NC,OR]
                  RewriteCond %{HTTP_USER_AGENT} ^Maxthon$ [NC,OR]
                  RewriteCond %{HTTP_USER_AGENT} MFC_Tear_Sample [NC,OR]
                  RewriteCond %{HTTP_USER_AGENT} ^microsoft.url [NC,OR]
                  RewriteCond %{HTTP_USER_AGENT} Microsoft\ URL\ Control [NC,OR]
                  RewriteCond %{HTTP_USER_AGENT} ^MIDown\ tool [NC,OR]
                  RewriteCond %{HTTP_USER_AGENT} ^Mister\ PiX [NC,OR]
                  RewriteCond %{HTTP_USER_AGENT} Missigua\ Locator [NC,OR]
                  RewriteCond %{HTTP_USER_AGENT} ^Mozilla.*Indy [NC,OR]
                  RewriteCond %{HTTP_USER_AGENT} ^Mozilla.NEWT [NC,OR]
                  RewriteCond %{HTTP_USER_AGENT} ^MSFrontPage [NC,OR]
                  RewriteCond %{HTTP_USER_AGENT} ^Navroad [NC,OR]
                  RewriteCond %{HTTP_USER_AGENT} ^NearSite [NC,OR]
                  RewriteCond %{HTTP_USER_AGENT} ^NetAnts [NC,OR]
                  RewriteCond %{HTTP_USER_AGENT} ^NetSpider [NC,OR]
                  RewriteCond %{HTTP_USER_AGENT} ^Net\ Vampire [NC,OR]
                  RewriteCond %{HTTP_USER_AGENT} ^NetZIP [NC,OR]
                  RewriteCond %{HTTP_USER_AGENT} ^Nutch [NC,OR]
                  RewriteCond %{HTTP_USER_AGENT} ^Octopus [NC,OR]
                  RewriteCond %{HTTP_USER_AGENT} ^Offline\ Explorer [NC,OR]
                  RewriteCond %{HTTP_USER_AGENT} ^Offline\ Navigator [NC,OR]
                  RewriteCond %{HTTP_USER_AGENT} ^PageGrabber [NC,OR]
                  RewriteCond %{HTTP_USER_AGENT} panscient.com [NC,OR]
                  RewriteCond %{HTTP_USER_AGENT} ^Papa\ Foto [NC,OR]
                  RewriteCond %{HTTP_USER_AGENT} ^pavuk [NC,OR]
                  RewriteCond %{HTTP_USER_AGENT} PECL::HTTP [NC,OR]
                  RewriteCond %{HTTP_USER_AGENT} ^PeoplePal [NC,OR]
                  RewriteCond %{HTTP_USER_AGENT} ^pcBrowser [NC,OR]
                  RewriteCond %{HTTP_USER_AGENT} PHPCrawl [NC,OR]
                  RewriteCond %{HTTP_USER_AGENT} PleaseCrawl [NC,OR]
                  RewriteCond %{HTTP_USER_AGENT} ^psbot [NC,OR]
                  RewriteCond %{HTTP_USER_AGENT} ^RealDownload [NC,OR]
                  RewriteCond %{HTTP_USER_AGENT} ^ReGet [NC,OR]
                  RewriteCond %{HTTP_USER_AGENT} ^Rippers\ 0 [NC,OR]
                  RewriteCond %{HTTP_USER_AGENT} SBIder [NC,OR]
                  RewriteCond %{HTTP_USER_AGENT} ^SeaMonkey$ [NC,OR]
                  RewriteCond %{HTTP_USER_AGENT} ^sitecheck.internetseer.com [NC,OR]
                  RewriteCond %{HTTP_USER_AGENT} ^SiteSnagger [NC,OR]
                  RewriteCond %{HTTP_USER_AGENT} ^SmartDownload [NC,OR]
                  RewriteCond %{HTTP_USER_AGENT} Snoopy [NC,OR]
                  RewriteCond %{HTTP_USER_AGENT} Steeler [NC,OR]
                  RewriteCond %{HTTP_USER_AGENT} ^SuperBot [NC,OR]
                  RewriteCond %{HTTP_USER_AGENT} ^SuperHTTP [NC,OR]
                  RewriteCond %{HTTP_USER_AGENT} ^Surfbot [NC,OR]
                  RewriteCond %{HTTP_USER_AGENT} ^tAkeOut [NC,OR]
                  RewriteCond %{HTTP_USER_AGENT} ^Teleport\ Pro [NC,OR]
                  RewriteCond %{HTTP_USER_AGENT} ^Toata\ dragostea\ mea\ pentru\ diavola [NC,OR]
                  RewriteCond %{HTTP_USER_AGENT} URI::Fetch [NC,OR]
                  RewriteCond %{HTTP_USER_AGENT} urllib [NC,OR]
                  RewriteCond %{HTTP_USER_AGENT} User-Agent [NC,OR]
                  RewriteCond %{HTTP_USER_AGENT} ^VoidEYE [NC,OR]
                  RewriteCond %{HTTP_USER_AGENT} ^Web\ Image\ Collector [NC,OR]
                  RewriteCond %{HTTP_USER_AGENT} ^Web\ Sucker [NC,OR]
                  RewriteCond %{HTTP_USER_AGENT} Web\ Sucker [NC,OR]
                  RewriteCond %{HTTP_USER_AGENT} webalta [NC,OR]
                  RewriteCond %{HTTP_USER_AGENT} ^WebAuto [NC,OR]
                  RewriteCond %{HTTP_USER_AGENT} ^[Ww]eb[Bb]andit [NC,OR]
                  RewriteCond %{HTTP_USER_AGENT} WebCollage [NC,OR]
                  RewriteCond %{HTTP_USER_AGENT} ^WebCopier [NC,OR]
                  RewriteCond %{HTTP_USER_AGENT} ^WebFetch [NC,OR]
                  RewriteCond %{HTTP_USER_AGENT} ^WebGo\ IS [NC,OR]
                  RewriteCond %{HTTP_USER_AGENT} ^WebLeacher [NC,OR]
                  RewriteCond %{HTTP_USER_AGENT} ^WebReaper [NC,OR]
                  RewriteCond %{HTTP_USER_AGENT} ^WebSauger [NC,OR]
                  RewriteCond %{HTTP_USER_AGENT} ^Website\ eXtractor [NC,OR]
                  RewriteCond %{HTTP_USER_AGENT} ^Website\ Quester [NC,OR]
                  RewriteCond %{HTTP_USER_AGENT} ^WebStripper [NC,OR]
                  RewriteCond %{HTTP_USER_AGENT} ^WebWhacker [NC,OR]
                  RewriteCond %{HTTP_USER_AGENT} ^WebZIP [NC,OR]
                  RewriteCond %{HTTP_USER_AGENT} Wells\ Search\ II [NC,OR]
                  RewriteCond %{HTTP_USER_AGENT} WEP\ Search [NC,OR]
                  RewriteCond %{HTTP_USER_AGENT} ^Wget [NC,OR]
                  RewriteCond %{HTTP_USER_AGENT} ^Widow [NC,OR]
                  RewriteCond %{HTTP_USER_AGENT} ^WWW-Mechanize [NC,OR]
                  RewriteCond %{HTTP_USER_AGENT} ^WWWOFFLE [NC,OR]
                  RewriteCond %{HTTP_USER_AGENT} ^Xaldon\ WebSpider [NC,OR]
                  RewriteCond %{HTTP_USER_AGENT} zermelo [NC,OR]
                  RewriteCond %{HTTP_USER_AGENT} ^Zeus [NC,OR]
                  RewriteCond %{HTTP_USER_AGENT} ^(.)Zeus.Webster [NC,OR]
                  RewriteCond %{HTTP_USER_AGENT} ZyBorg [NC]
                  RewriteRule ^. - [F,L]
                  # Abuse bot blocking rule end
                  # End HackRepair.com Blacklist

                  Intermediate & Advanced SEO | esiow2013
