        The Moz Q&A Forum


        Moz Q&A is closed.

After more than 13 years and tens of thousands of questions, Moz Q&A closed on 12th December 2024. While we're not removing the content entirely - many posts will remain viewable - we have locked both new posts and new replies.

        Does Google pass link juice a page receives if the URL parameter specifies content and has the Crawl setting in Webmaster Tools set to NO?

        Technical SEO
• surveygizmo

The page in question receives a lot of quality traffic but is only relevant to a small percentage of my users. I want to keep the link juice this page receives, but I do not want it to appear in the SERPs.

• Jen_Floyd (replying to Dr-Pete)

Update: Google has crawled this correctly and is returning the correct, redirected page. In other words, it seems to have understood that we don't want any of the parametered versions of our original page and all of its campaign-tracked brethren indexed ("return representative link"), and it is redirecting from the representative link correctly.

            And finally there was peace in the universe...for now.  ;>  Tim

• Jen_Floyd (replying to Dr-Pete)

              Agree...it feels like leaving a bit to chance, but I'll keep an eye on it over the next few weeks to see what comes of it.  We seem to be re-indexed every couple of days, so maybe I can test it out Monday.

BTW, this issue really came up when we were creating a server-side 301 redirect for the root URL, and then I got to wondering if we'd need to set up an iRule for all the parameters. Hopefully not... hopefully Google will figure it out for us.

              Thanks Peter.  Tim
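
For what it's worth, a server-side 301 normally carries the original query string through to the target by default, so a single rule can usually cover all the parametered variants without one rule per parameter. A minimal sketch in Apache mod_rewrite terms (the setup described above may be an F5 iRule instead, and /old-home and /new-home are hypothetical paths):

    # Permanently redirect the old root URL to the new one. mod_rewrite passes
    # the original query string through unchanged unless the substitution adds
    # its own "?", so ?v3-style tracking parameters survive the redirect.
    RewriteEngine On
    RewriteRule ^old-home$ /new-home [R=301,L]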

• Dr-Pete (Staff, replying to Jen_Floyd)

                It's really tough to say, but moving away from "Let Google decide" to a more definitive choice seems like a good next step. You know which URL should be canonical, and it's not the parameterized version (if I'm understanding correctly).

If you say "Let Google decide", it seems a bit more like rel=prev/next. Google may allow any page in the set to rank, BUT they won't treat those pages as duplicates, etc. How does this actually impact the PR flow to any given page in that series? We have no idea. They're probably consolidating them on the fly, to some degree. They basically have to be, since the page they choose to rank from the set is query-dependent.

• Jen_Floyd

This question deals with dynamically created pages, it seems, and Google seems to recommend NOT choosing the "no" option in WMT - choose "yes" when you edit the parameter settings and you'll see an option for your case, I think, Christian (I know this is 3 years late, but still).

BUT I have a situation where we use SiteCatalyst to create numerous tracking codes as parameters on a URL. Since there is no new page being created, we are following Google's advice to select "no" - apparently Google will:

                  "group the duplicate URLs into one cluster and select what we think is the "best" URL to represent the cluster in search results. We then consolidate properties of the URLs in the cluster, such as link popularity, to the representative URL."

What worries me is that a) the "root" URL will not be returned, somehow (perhaps due to the freakish amount of inbound linking to one of our parametered URLs), and b) the root URL will not be getting the juice. The reason we got suspicious about this problem in the first place was that Google was returning one of our parametered URLs (PA=45) instead of the "root" URL (PA=58).

This may be an anomaly that will be sorted out now that we changed the parameter setting from "Let Google Decide" to "No, page does not change" (i.e. return the "representative" link), but I would love your thoughts - especially on the juice passage.

                  Tim

• Dr-Pete (Staff, replying to surveygizmo)

                    This sounds unusual enough that I'd almost have to see it in action. Is the JS-based URL even getting indexed? This might be a non-issue, honestly. I don't have solid evidence either way about GWT blocking passing link-juice, although I suspect it behaves like a canonical in most cases.

• surveygizmo (replying to Dr-Pete)

I agree. The URL parameter option seems to be the best solution, since this is not a unique page. It is the main page, with JavaScript that calls for additional content to be displayed in the form of a lightbox overlay if the condition is right. Since it is not an actual page, I cannot add the rel=canonical statement to the header. It is not clear, however, whether the link juice will be passed with this parameter setting in Webmaster Tools.
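
To picture the setup being described - one physical page whose JavaScript reads a URL parameter and conditionally opens a lightbox overlay, rather than a separate page being served - here is a minimal sketch. It assumes the ?v3 parameter used as an example later in the thread; the element ID is hypothetical:

    <!-- Hypothetical sketch: the same page is served to everyone; a parameter
         such as ?v3 only toggles an overlay client-side rather than serving
         a new page. -->
    <div id="v3-lightbox" style="display: none;">
      <!-- extra content shown only to visitors arriving with ?v3 -->
    </div>
    <script>
      var params = new URLSearchParams(window.location.search);
      if (params.has("v3")) {
        // Condition met: reveal the overlay instead of loading a new page.
        document.getElementById("v3-lightbox").style.display = "block";
      }
    </script>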

• Dr-Pete (Staff)

If you're already using rel=canonical, then there's really no reason to also block the parameter. Rel=canonical will preserve any link juice, and will also keep the page available to visitors (unlike a 301 redirect).

                        Are you seeing a lot of these pages indexed (i.e. is the canonical tag not working)? You could block the parameter in that case, but my gut reaction is that it's unnecessary and probably counter-productive. Google may just need time to de-index (it can be a slow process).

                        I suspect that Google passes some link-juice through blocked parameters and treats it more like a canonical, but it may be situational and I haven't seen good data on that. So many things in Google Webmaster Tools end up being a bit of a black box. Typically, I view it as a last resort.

• sesertin (replying to surveygizmo)

I can just repeat myself: set Crawl to yes and use rel=canonical, with website.com/?v3 pointing to website.com.
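
In markup terms, that amounts to one tag in the head of the page as served at website.com/?v3. A minimal sketch using the example URLs from this thread (the scheme is illustrative):

    <!-- On website.com/?v3: point search engines at the clean URL so link
         equity is consolidated there and the parametered version drops out
         of the SERPs, while visitors can still load the page normally. -->
    <head>
      <link rel="canonical" href="http://website.com/" />
    </head>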

• surveygizmo

                            My fault for not being clear.

I understand that rel=canonical cannot be added to the robots.txt file. We are already using the canonical statement.

I do not want to add the page with the URL parameter to the robots.txt file, as that would prevent the link juice from being passed (the kind of rule being avoided is sketched after the example below).

                            Perhaps this example will help clarify:

                            URL = website.com

URL parameter = website.com/?v3

website.com/?v3 has a lot of backlinks. How can I pass the link juice to website.com and not have website.com/?v3 appear in the SERPs?
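
For reference, the robots.txt rule being avoided here would look something like the hypothetical sketch below (Google treats * as a wildcard in robots.txt). Blocking crawl this way would keep Googlebot from fetching the parametered URL at all, which is why it is being ruled out: the canonical tag on that page would never be seen, so signals from its backlinks could not be consolidated onto the clean URL.

    # Hypothetical rule, deliberately NOT used in this situation: it blocks
    # crawling of any URL containing ?v3, so that page's canonical tag would
    # never be read and its link equity could not be passed along.
    User-agent: *
    Disallow: /*?v3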

• sesertin (replying to surveygizmo)

I'm getting a bit lost with your explanation - maybe it would be easier if I saw the URLs - but here's a brief answer:

I would not use parameters at all. Clean URLs are best for SEO; remove everything that isn't needed. You definitely don't need a URL parameter to indicate that content is unique for 25% of traffic. (I got a little bit lost here: how can content be unique for just part of your traffic? If it is found elsewhere on your page it is not unique; if it is not found elsewhere, it is unique.) So anyway, those URL parameters don't indicate anything to Google - they just stuff your URL structure with useless info (for Google) - so why use them?

"I am already using a link rel=canonical statement. I don't want to add this to the robots.txt file as that would prevent the juice from being passed."

I totally don't get this one. You can't add a canonical to robots.txt; it is not a robots.txt statement.

To sum up: if you do not want your parametered page to appear in the SERPs, then, as I said, set Crawl to yes and use rel=canonical. That way the page will no longer appear in the SERPs, but it will still be available to readers and will pass link juice.

• surveygizmo (replying to sesertin)

The parameter on this URL specifies unique content for 25% of my traffic to the home page. If I use a 301 redirect, then those people will not see the unique content that is relevant to them. But since this parameter is only relevant to 25% of my traffic, I would like the main URL displayed in the SERPs rather than the unique one.

Google's Webmaster Tools lets you choose how you would like Google to handle URL parameters. When using this tool you must specify the parameter's effect on content. You can then specify what you would like Googlebot to crawl. If I say NO crawl, I understand that the page with this parameter will not be crawled, but will the link juice be passed to the page without the parameter?

I am already using a link rel=canonical statement. I don't want to add this URL parameter to the robots.txt file either, as that would prevent the juice from being passed.

                                What is the best way to keep this parameter and pass the juice to the main page but not have the URL parameter displayed in the SERPs?

• sesertin

What do you mean by "the URL parameter specifies content"?

If a page is not crawled it definitely won't pass link juice. Set Crawl to yes and use rel=canonical: http://www.youtube.com/watch?v=Cm9onOGTgeM

