Moz Q&A is closed.
After more than 13 years and tens of thousands of questions, Moz Q&A closed on 12th December 2024. Whilst we're not completely removing the content - many posts will still be viewable - we have locked both new posts and new replies.
Why is our noindex tag not working?
-
Hi,
I have the following page where we've implemented a noindex tag. But when we run this page through Screaming Frog or this tool here to verify the noindex is present and functioning, it shows that it's not.
But if you view the source of the page, the code is present in the head tag. And unfortunately we've seen instances where Google is indexing pages we've noindexed. Any thoughts on the example above or why this is happening in Google?
Eddy
-
Hi Eddy,
Edit: this was already answered before I could post my reply. But I've left the example.
The issue with the meta robots tag is that you are using curly quotation marks around robots and noindex:
You have:
<meta name=“robots” content=“noindex”/>
Instead of:
<meta name="robots" content="noindex"/>
This will fix your issue.
Cheers,
David
-
That Screaming Frog response is from the robots.txt block, not a noindex tag, though. SF is also ignoring the incorrectly formatted tag (as it should).
Paul
-
The example page does have a noindex tag in place, but it's not formatted correctly, so it's being ignored. It's a very subtle issue: your tag is using "smart quotes" around the attribute values instead of the plain quotation marks that are required in code. If you look very carefully at the page source, you'll see they are quotation marks like you'd see in a Word document; the ones at the beginning of robots and noindex curl a different way than the ones at the end. This usually happens when the markup was written in a word processor instead of a plain-text editor.
Because the tag's not formatted correctly, it's ignored by both the crawling tools and the search engines.
In addition, the site has all pages blocked from crawling by the sitewide robots.txt file. That block and the noindex tag are conflicting instructions to search engines.
If a page is blocked in robots.txt, the search engine will not crawl the page and so is never able to discover the noindex tag, even once it's formatted correctly. So if the search engine becomes aware of the page in any way other than crawling it directly (and there are a number of ways that can happen, such as a link from another site), the page can still end up indexed.
If it's a dev site, the proper way to keep it from being indexed is to either noindex all pages, or to put the site behind a password so the search engines and public visitors can't access it. If using noindex, the site must not be blocked with a robots.txt directive.
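To make both checks concrete, here's a minimal Python sketch of the audit. The URL and the Googlebot user agent are just example values, and the regex is a simplification that assumes the name attribute comes before content rather than doing a full HTML parse:

import re
import urllib.request
from urllib.parse import urlparse
from urllib.robotparser import RobotFileParser

def audit_noindex(url, user_agent="Googlebot"):
    # 1. Is the URL blocked by the site's robots.txt for this user agent?
    parts = urlparse(url)
    rp = RobotFileParser(f"{parts.scheme}://{parts.netloc}/robots.txt")
    rp.read()
    blocked = not rp.can_fetch(user_agent, url)

    # 2. Does the page carry a correctly formatted noindex directive?
    #    (Straight quotes only - a smart-quoted tag will not match.)
    html = urllib.request.urlopen(url).read().decode("utf-8", errors="replace")
    noindex = bool(re.search(
        r'<meta\s+name=["\']robots["\']\s+content=["\'][^"\']*noindex',
        html, re.IGNORECASE))

    if blocked and noindex:
        print("Conflict: noindex is present, but robots.txt stops crawlers from ever seeing it.")
    return blocked, noindex

audit_noindex("https://www.example.com/some-page/")

If the first check comes back blocked, removing the robots.txt disallow is what lets the noindex actually be seen and acted on.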
Does that all make sense?
Paul
-
I ran that page through Screaming Frog and it came back with a "blocked by robots" status.
The second tool you suggested is not finding the noindex tag and I don't have an explanation for that, nor am I familiar with the tool.
A site: command does not return any results.
Are you sure you have a problem? Is there another example you can provide?