Moz Q&A is closed.
After more than 13 years and tens of thousands of questions, Moz Q&A closed on 12th December 2024. Whilst we're not completely removing the content - many posts will still be viewable - we have locked both new posts and new replies.
Nofollow and ecommerce cart/checkout pages
-
Hi!!
Another noob question:
Should I be nofollowing my site's cart and checkout pages? Or, since search engines can't get to the checkout pages without either logging in or completing a form, is it something I shouldn't worry about? I've read things saying both and I'm not sure which is correct.
Thank you! Appreciate the help.
Lynn
-
Thank you James!! I really appreciate the insight and your patience.

Lynn
-
Yes, that's all correct.
-
On my site the only things that are accessible via HTTPS are the checkout pages and the my-account pages (or so I am told - still testing). So for these I could mark them "noindex, nofollow", correct, as I don't really want Google to crawl them? And robots.txt can accomplish the same thing (robots.txt may be easier for me as it requires no dev time; I can't control this tag via the CMS)?
Thanks for the input!

Lynn
-
1. Yes.
2. Yes, robots.txt works too - there are numerous ways to achieve the same effect. Personal preference comes into it, plus one may be easier than another in your site/CMS. The reason I use noindex is that any page on my site could be accessed via HTTPS, so I prefer to dynamically add noindex to any page that is accessed that way.
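For example, a minimal robots.txt sketch for the cart and checkout case - the /cart/ and /checkout/ paths here are hypothetical stand-ins for wherever your CMS actually serves those pages:

User-agent: *
Disallow: /cart/
Disallow: /checkout/

One caveat: robots.txt stops crawling, but a disallowed URL can still appear in the index if other pages link to it, whereas a noindex tag has to be crawled to be seen but reliably keeps the page out of the index.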
-
Hello!
Thank you both for taking the time to answer. A couple of follow-up questions just so I understand:
1. "noindex, follow" will allow search engines to crawl a page but NOT put it in the index, correct?
2. Can't I also stop search engine access to certain directories/pages by putting an entry in robots.txt? This would stop crawling AND indexing, correct?
Why would one use one over the other? I just want to understand the idea behind it.
Thank you so much guys!!
Lynn
-
The safest route is to "noindex, follow" any page that is requested via HTTPS - this also squashes duplicate content when a user accesses non-cart pages over HTTPS...
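A minimal sketch of that dynamic approach - this assumes a Python/Flask app purely for illustration, since the exact hook depends on your CMS or server stack:

from flask import Flask, request

app = Flask(__name__)

@app.route("/cart")
def cart():
    return "<html><body>Your cart</body></html>"

@app.after_request
def noindex_on_https(response):
    # If this copy of the page was requested over HTTPS, ask search
    # engines not to index it. X-Robots-Tag is the HTTP-header
    # equivalent of <meta name="robots" content="noindex, follow">.
    if request.is_secure and (response.content_type or "").startswith("text/html"):
        response.headers["X-Robots-Tag"] = "noindex, follow"
    return response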
-
Hey,
I'd 'noindex, nofollow' cart pages, as they're of no use to anyone searching and you're just going to dilute your authority across those extra pages.
DD
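For reference, the tag DD mentions would sit in the <head> of each cart page - a one-line sketch:

<meta name="robots" content="noindex, nofollow">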
Related Questions
-
Blog Page Titles - Page 1, Page 2 etc.
Hi All, I have a couple of crawl errors coming up in Moz that I am trying to fix. They are duplicate page title issues in my blog area. For example, we have a URL of www.ourwebsite.com/blog/page/1, and as we have quite a few blog posts they spill onto another page, for example www.ourwebsite.com/blog/page/2. Both of these URLs have the same heading, title, meta description, etc. I was just wondering if this is an actual SEO problem or not, and if there is a way to fix it. I am using WordPress, for reference, but I can't see anywhere to access the settings of these pages. Thanks
Technical SEO | O2C
-
What to do with temporary empty pages?
I have a website listing real estate for sale in different areas. In small villages, towns, and areas there is sometimes nothing for sale, so the page is completely empty with no content except a heading and some footer text. I have thousands of landing pages for different areas, for example "Apartments in Tibro" or "Houses in Ljusdahl", and Moz Pro gives me some "Duplicate Content" warnings on the empty ones (I think it does so because the pages are so empty that they are quite similar). I guess Google could also take a dim view of my site if I have hundreds or thousands of empty pages, even if my total is 100,000 pages. So, what should I do with these pages for the small cities, towns, and villages where there aren't always houses for sale? Should I remove them completely? Should I return a 404 when no houses are for sale and a 200 OK when there are? Please note that I have 100,000+ pages in total and this is only about 5% of them.
Technical SEO | marcuslind90
-
<sub> & <sup> tags, any SEO issues?
Hi - the content on our corporate website is pretty technical, and we include chemical element codes in the text that users would search on (like SO2, CO2, etc.). A lot of times our engineers request that we list the codes correctly, with a <sub> on the last number. Question - does adding this markup to the keyword affect SEO? The code would look like SO<sub>2</sub>. Thanks.
Technical SEO | Jenny1
-
Page titles in browser not matching WP page title
I have an issue with a few page titles not matching the title I have in WordPress. I have 2 pages, the blog and the creative gallery, that show the homepage title, which is causing duplicate title errors. This has been going on for 5 weeks, so it's not a crawl issue. Any ideas what could cause this? To clarify, I have the page title set in WP, and I checked "Disable PSP title format on this page/post:"... but this page is still showing the homepage title. Is there an additional title setting for a page in WP?
Technical SEO | Branden_S
-
WordPress - How to stop both http:// and https:// pages being indexed?
I just published a static page 2 days ago on a WordPress site, but noticed that Google has indexed both the http:// and https:// URLs. Usually only the http:// version gets indexed. Could anyone please explain why this may have happened and how I can fix it? Thanks!
Technical SEO | Clicksjim
-
OK to block /js/ folder using robots.txt?
I know Matt Cutts suggests we allow bots to crawl CSS and JavaScript folders (http://www.youtube.com/watch?v=PNEipHjsEPU). But what if you have lots and lots of JS and you don't want to waste precious crawl resources? Also, as we update and improve the JavaScript on our site, we iterate the version number (?v=1.1... 1.2... 1.3... etc.), and the legacy versions show up in Google Webmaster Tools as 404s. For example:
http://www.discoverafrica.com/js/global_functions.js?v=1.1
http://www.discoverafrica.com/js/jquery.cookie.js?v=1.1
http://www.discoverafrica.com/js/global.js?v=1.2
http://www.discoverafrica.com/js/jquery.validate.min.js?v=1.1
http://www.discoverafrica.com/js/json2.js?v=1.1
Wouldn't it just be easier to prevent Googlebot from crawling the js folder altogether? Isn't that what robots.txt was made for? Just to be clear - we are NOT doing any sneaky redirects or other dodgy JavaScript hacks. We're just trying to power our content and UX elegantly with JavaScript. What do you guys say: obey Matt, or run the JavaScript gauntlet?
Technical SEO | AndreVanKets
-
Do I need to add canonical link tags to pages that I promote & track w/ UTM tags?
New to SEOmoz, loving it so far. I promote content on my site a lot and am diligent about using UTM tags to track conversions and attribute data properly. I was reading earlier about the use of link rel=canonical in the case of duplicate page content and can't find a conclusive answer on whether or not I need to add the canonical tag to these pages. Do I need the canonical tag in this case? If so, can the canonical tag live in the HEAD section of the original/base page itself, as well as on any other URLs that call that content (those with UTM tags, etc.)? Thank you.
Technical SEO | askotzko