Noindexing Thin Content Pages: Good or Bad?
-
If you have a massive number of pages with super-thin content (such as pagination pages) and you noindex them, once they are removed from Google's index (and if these pages aren't viewable to the user and/or don't get any traffic), is it smart to remove them completely (404?), or is there any valid reason they should be kept?
If you noindex them, should you keep all of the URLs in the sitemap so that Google will recrawl them and notice the noindex tag?
If you noindex them and then remove the sitemap, can Google still recrawl them and recognize the noindex tag on its own?
-
Sometimes you need to leave the crawl path open to Googlebot so it can get around the site. A specific example that may be relevant to you is pagination. If you have 100 products and are only showing 10 on the first page, Google will not be able to reach the other 90 product pages as easily if you block the paginated pages in robots.txt. Better options in such a case might be a robots noindex,follow meta tag, rel next/prev tags, or a "view all" canonical page.
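As a rough illustration of those options, the head of a paginated page might carry markup along these lines (the URLs and page numbers here are hypothetical):

```html
<!-- Hypothetical page 2 of a paginated product listing -->
<head>
  <!-- Keep this page out of the index, but let Googlebot follow its links
       so the deeper product pages can still be discovered -->
  <meta name="robots" content="noindex, follow">

  <!-- Signal the paginated series to search engines -->
  <link rel="prev" href="https://www.example.com/products/page/1">
  <link rel="next" href="https://www.example.com/products/page/3">

  <!-- Or, alternatively, point every paginated page at a single "view all" version -->
  <!-- <link rel="canonical" href="https://www.example.com/products/view-all"> -->
</head>
```

The idea is to keep the crawl path open while keeping the paginated pages themselves out of the index.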
If these pages aren't important to the crawlability of the site, such as internal search results, you could block them in the robots.txt file with few or no issues, and it would help get them out of the index. If they aren't useful to spiders, users, or anything else, then yes, you can and probably should let them 404 rather than blocking them.
Yes, I do like to leave the blocked or removed URLs in the sitemap for a little while to ensure Googlebot revisits them and sees the noindex tag, 404 error code, 301 redirect, or whatever it is they need to see in order to update their index. They'll get there on their own eventually, but I find it faster to send them to the pages myself. Once Googlebot visits these URLs and updates the index, you should remove them from your sitemaps.
-
If you want to noindex any of your pages, there is no way that Google or any other search engine will think something is fishy. It's up to the webmaster to decide what does and does not get indexed from his website. If you implement a page-level noindex, link juice will still flow to the page and on through its links; but if you add nofollow alongside noindex, link juice will still flow to the page yet be contained on the page itself, and it will not be passed on through the links that point out of that page.
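As a rough sketch of the difference being described, the two variants of the page-level robots meta tag would look something like this:

```html
<!-- noindex alone: the page stays out of the index, but equity can still
     pass on through the links found on the page -->
<meta name="robots" content="noindex, follow">

<!-- noindex,nofollow: the page stays out of the index AND its outgoing
     links are not followed, so nothing is passed on from this page -->
<meta name="robots" content="noindex, nofollow">
```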
To conclude, there is nothing wrong with making these pages non-indexable.
Here is an interesting discussion related to this on Moz:
http://a-moz.groupbuyseo.org/community/q/noindex-follow-is-a-waste-of-link-juice
Hope it helps.
Best,
Devanur Rafi
-
Devanur,
What I am asking is whether robots/Google will view it as a negative thing to noindex pages while still trying to pass link juice through them, even though the pages aren't even viewable to the front-end user.
-
If you do not wish to show these pages even to the front-end user, you can simply block them using the page-level robots meta tag so that they will never be indexed by the search engines either.
Best,
Devanur Rafi
-
Yes, but what if these pages aren't even viewable to the front-end user?
-
Hi there, it is a very good idea to block any and all pages that do not provide useful content to visitors, especially when they are very thin content-wise. The idea is to keep low-quality content that does the visitor no good out of the search engines' indexes. Search engines would love every webmaster for doing so.
However, sometimes, no matter how thin the content on certain pages is, they still provide good information to visitors and serve the purpose of the visit. In this case, you can provide contextual links to those pages and add the nofollow attribute to those links. Of course, you should ideally also implement page-level blocking using the robots meta tag on those pages. I do not think you should return a 404 on these pages, as there is no need to do so. When page-level blocking is implemented, Google will not index the blocked content even if it finds a third-party reference to it elsewhere on the Internet.
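As a sketch of those two pieces together (the URL and link text are made up for the example):

```html
<!-- On the linking page: a contextual link that does not pass equity -->
<a href="https://www.example.com/thin-but-useful-page" rel="nofollow">
  More details on this topic
</a>

<!-- On the thin page itself: page-level blocking via the robots meta tag -->
<head>
  <meta name="robots" content="noindex">
</head>
```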
If you have implemented a page-level noindex using the robots meta tag, there is no need to include these URLs in a sitemap. With noindex in place, as I mentioned above, Google will not index the content even if it discovers the page via a reference from anywhere else on the Internet.
Hope it helps, my friend.
Best,
Devanur Rafi
Related Questions
-
New Flurry of thousands of bad links from 3 Spammy websites. Disavow?
I also discovered that the website www.prlog.ru has placed 32 links to my website. It is a Russian site. It has a 32% spam score. Is that high? I think I need to disavow. Another spammy website has a spam score of 16%, with several thousand links. I added one link to the site medexplorer.com 6 years ago and it was fine; now it has thousands of links. Should I disavow all three?
White Hat / Black Hat SEO | Boodreaux
-
Is a toggle good for SEO?
Hi there, I have a client who doesn't want to show his content publicly, so the team decided to use a toggle so that Google can still see the content. But I want to be sure: will Google really cache that content? Will it hurt my website's ranking? Please, can anyone help? I need this urgently. Thanks in advance, Falguni
White Hat / Black Hat SEO | iepl2001
-
Schema Markup for regular web pages?
I'm a bit confused about what Schema markup should be applied to regular, informative web pages. We have a few pages describing our technology and solutions. These pages are not products or news articles, and they are not something that should be reviewed or rated. What Schema markup should be used for a standard, run-of-the-mill web page? Is there a good reference or tutorial for optimizing the schema markup of an informational website? Any advice is much appreciated, thank you!
White Hat / Black Hat SEO | gray_jedi
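For what it's worth, a plain informational page is often marked up with the generic WebPage type; a minimal, hypothetical JSON-LD block might look like this (the name, description, and URL are invented for the example):

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "WebPage",
  "name": "Our Technology",
  "description": "Overview of the technology and solutions we offer.",
  "url": "https://www.example.com/technology"
}
</script>
```
-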
Internal Links to Ecommerce Category Pages
Hello, I read a while back, and I can't find it now, that you want to add internal links to your main category pages. Does that still apply? If so, for a small site (100 products) what is recommended? Thanks
White Hat / Black Hat SEO | BobGW
-
How do I 301 redirect an old domain and its pages to a new domain and its pages?
Hi, I am a real newbie to this and I am hoping for a guide on how to do it. I have seen a few Moz posts and it is quite confusing; hopefully somebody is able to explain it to me in layman's terms. I would like to 301 redirect this way (both websites cover the same niche): oldwebsite.com > newwebsite.com, and also its pages: oldwebsite.com/test > newwebsite.com/test. So my question is: I would like to host my old domain and its pages on my new website's hosting in order to redirect to my new domain and its pages. How do I do that? Would my previous pages' links overwrite my new pages' links, or would they add to the link juice? Do I need to host the whole old domain website on my new hosting in order to redirect the old pages? Really confusing, thanks!
White Hat / Black Hat SEO | andzon
-
Indexing content behind a login
Hi, I manage a website within the pharmaceutical industry where only healthcare professionals are allowed to access the content. For this reason most of the content is behind a login. My challenge is that we have a massive amount of interesting and unique content available on the site, and I want the healthcare professionals to find it via Google! At the moment, if a user tries to access this content they are prompted to register or log in. My question is: if I look for the Googlebot user agent and allow it to access and index the content, will this be classed as cloaking? I'm assuming that it will. If so, how can I get around this? We have a number of open landing pages, but we're limited in what indexable content we can have on those pages! I look forward to all of your suggestions, as I'm struggling for ideas now! Thanks, Steve
White Hat / Black Hat SEO | stever999
-
Finding and Removing bad backlinks
Ok, here goes. Over the past two years our traffic and rankings have slowly declined, most importantly for keywords that we ranked #1 and #2 on for years. With the new Penguin updates this year we never saw a huge drop, but a constant slow loss. My boss has tasked me with cleaning up our bad links and reshaping our link profile so that it is cleaner and more natural. I currently have access to Google Analytics and Webmaster Tools, SEOmoz, and Link Builder.
1) What is the best program or process for identifying bad backlinks? What exactly am I looking for? Too many links from one domain? Links from low-PR or low "Trust URL" sites? I have gotten conflicting information reading about all this on the net, with some saying that too many good (high-PR) links can look unnatural without some lower-PR links, so I just want to make sure that I am not asking for links to be removed that we need to create or maintain our link profile.
2) What is the best program or process for viewing our link profile, and what exactly am I looking for? What constitutes a healthy link profile after the new Google algorithm updates? What is the best way to change it?
3) Where do I start with this task? Remove spammy links first, or figure out our profile first and then go after bad links?
4) We have some backlinks that point to our old .aspx pages from before we moved to our new platform two years ago; there are quite a few (1000+). Some of these pages were redirected, and some of the redirects were broken at some point. Is there any residual juice in these backlinks still? Should we fix the broken redirects, or does it do nothing? My boss says the redirects won't do anything now that Google no longer indexes the old pages, but other people have said differently. What's the deal? Should we still fix the redirects even though the pages are no longer indexed?
I really appreciate any advice, as basically if we can't get our site and sales turned around, my job is at stake. Our site is www.k9electronics.com if you want to take a look. We just moved hosts, so there are some redirect issues and other things going on that we know about.
White Hat / Black Hat SEO | k9byron
-
Merging four sites into one... Best way to combine content?
First of all, thank you in advance for taking the time to look at this. The law firm I work for once took a "more is better" approach and had multiple websites with keyword-rich domains. We are a family law firm, but we have a specific site for "Arizona child custody," as one example. We have four sites.
All four of our sites rank well, although I don't know why. Only one site is in my control; the other three are managed by FindLaw. I have no idea why the FindLaw sites do well, other than being in the FindLaw directory. They have terrible, spammy page titles, and using Copyscape, I realize that most of the content FindLaw provides for its attorneys consists of "spun articles."
So I have a major task and I don't know how to begin. First of all, since all four sites rank well for all of the desired phrases, will combining all of that power into one site rocket us to stardom? The sites all rank very well now, even though they are all technically terrible. Literally. I would hope that if I redirect the child custody site (as one example) to the child custody overview page on the final merged site, we would still maintain our current SERP for "arizona child custody lawyer."
I have strongly encouraged my boss to merge our sites for many reasons, one of them being that the multiple domains are playing havoc with our local places. On the other hand, if I take down the child custody site, redirect it, and we lose that ranking, I might be out of a job.
Finally, that brings me to my last question. As I mentioned, the child custody site is "done" very poorly. Should I keep the spun content and redirect each and every page to a duplicate on our "final" domain, or should I redirect each page to a better article? This is the part that I fear the most.
I am considering subdomains, like redirecting the child custody site to childcustody.ourdomain.com. I know for a fact that will work flawlessly; I've done that many times for other clients that have multiple domains. However, we have seven areas of practice and we don't have seven nice sites, so child custody would be the only legal practice area with its own subdomain. Also, I wouldn't really be doing anything then, would I? We all know 301 redirects work. What I want is to harness all of this individual power in one mega-site.
Between the four sites, I have 800 pages of content. I need to formulate a plan of action now and then begin acting on it. I don't want to make the decision alone. Anybody care to chime in? Thank you in advance for your help. I really appreciate the time it took you to read this.
White Hat / Black Hat SEO | SDSLaw