Moz Q&A is closed.
After more than 13 years and tens of thousands of questions, Moz Q&A closed on 12th December 2024. Whilst we're not completely removing the content - many posts will still be possible to view - we have locked both new posts and new replies.
Rel Canonical, Follow/No Follow in htaccess?
-
Very quick question: are rel=canonical, follow/nofollow tags, etc. written in the .htaccess file?
-
Hello,
Thank you for this information, but I have a follow-up question. The links you sent me refer to images and PDFs, but that isn't relevant to my situation. I need to add follow/nofollow and rel=canonical via .htaccess because I do not know how to do it for each individual page on my ecommerce store. Additionally, .htaccess is easy for me to edit if I ever need to undo something, and it is nice to have everything in one place.
Can you give me a formatted example of how a follow/nofollow and a rel=canonical can be applied to a page via the .htaccess file, please? I intend to do this for every product category, every product, and also my home page on my ecommerce store.
Thank you
-
Robots directives and rel=canonical can be assigned via .htaccess, which has the server send them as HTTP response headers (X-Robots-Tag and Link respectively). This is a very handy way to assign noindex or rel=canonical to .pdf documents, print formats, video transcripts, etc. You can also use it to apply noindex or rel=canonical at scale. Two Moz articles (of several) that describe these are:
https://a-moz.groupbuyseo.org/blog/how-to-advanced-relcanonical-http-headers
https://a-moz.groupbuyseo.org/blog/htaccess-file-snippets-for-seos
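As a rough sketch of the kind of directives those articles cover (mod_headers must be enabled, and every filename and URL below is a placeholder to swap for your own):

# Send noindex via the X-Robots-Tag HTTP header for every PDF on the site
<FilesMatch "\.pdf$">
  Header set X-Robots-Tag "noindex, nofollow"
</FilesMatch>

# Send a rel=canonical HTTP header for one specific file,
# pointing it at the HTML page it duplicates
<Files "white-paper.pdf">
  Header add Link "<https://www.example.com/white-paper/>; rel=\"canonical\""
</Files>

Note that a page-specific canonical still needs a rule (or at least a matching pattern) per target URL, so for ordinary product and category pages it is usually less work to let the CMS print the tags in the <head>.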
-
Hi there,
They'll be written in the source code of each applicable page, in the <head>.
Alternatively, you can dynamically add these tags via Google Tag Manager or within your CMS platform.
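For example, the in-page versions of those tags look something like this (the URL is a placeholder):

<link rel="canonical" href="https://www.example.com/product-category/" />
<meta name="robots" content="noindex, nofollow" />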
regards,
Sean
-
Hi
I don't think you can use rel=canonical tags in .htaccess; however, you can define a rule for the follow/nofollow tag through .htaccess.
-
Related Questions
-
Rel=canonical on GoDaddy Website Builder
Hey crew! First off, this is a last resort asking this question here. GoDaddy has not been able to help, so I need my Moz fam on this one. Common problem: my crawl report is showing I have duplicate home pages, www.answer2cancer.org and www.answer2cancer.org/home.html. I understand this is a common issue with Apache web servers, which is why the wonderful rel=canonical tag was created! I don't want to go through the hassle of a 301 redirect, of course, for such a simple issue. Now here's the issue: GoDaddy Website Builder does not make any sense to me. In WordPress I could just add the tag to the head in the back end, but no such thing exists in GoDaddy. You have to use this weird drag-and-drop HTML block, drag it somewhere on the site, and plug in the code. I think putting before the code instead of just putting it in there. So I did that, but when I publish and inspect in Chrome, I cannot see the tag in the head! This is confusing, I know; the guy at GoDaddy didn't stand a chance, lol. Anyway, much love for any replies!
Technical SEO | Answer2cancer
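For reference, the tag being placed would need to end up inside the rendered <head> and would look roughly like this (assuming the www root, without /home.html, is the preferred version, and adjusting http/https to match the live site):

<link rel="canonical" href="http://www.answer2cancer.org/" />
-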
Link rel="prev" AND canonical
Hi guys,
When you have several tabs on your website with products, you can most likely navigate to page 2, 3, 4, etc. You can add the link rel="prev" and link rel="next" tags to make sure that one page gets indexed/ranked by Google, am I correct? However, this still means that all the pages can get indexed, right? For example, a webshop makes use of the link rel="prev" and rel="next" tags, yet in the Google results pages all the separate tab pages are still visible/indexed:
http://www.domain.nl/watches/?tab=1
http://www.domain.nl/watches/?tab=24
http://www.domain.nl/watches/?tab=19
etc. Can we prevent this, and make sure only the main page gets indexed and ranked, by adding a canonical link on every 'tab page' to the main page --> www.domain.nl/watches/ ? I hope I explained it well and I'm looking forward to hearing from you.
Regards,
Tom
Technical SEO | AdenaSEO
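A sketch of what that canonical would look like in the <head> of each tab URL, using the domain from the question (worth noting that Google treats rel=canonical as a strong hint rather than a guaranteed directive):

<!-- on http://www.domain.nl/watches/?tab=1, ?tab=19, ?tab=24, etc. -->
<link rel="canonical" href="http://www.domain.nl/watches/" />
-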
How to set rel=canonical on wordpress.com sites
I know how to do this with a wordpress.org site, but I have a client that does not want to switch, and without a plugin I am lost. Any help would be greatly appreciated. Jeremy Wood
Technical SEO | SOtBOrlando
-
Rel=prev/next AND canonical?
I have product category pages that correctly have the prev/next tags, but the Moz crawl is giving me duplicate content errors. I would not have thought I also need a canonical - but do I?
Technical SEO | JohnBerger
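A sketch of the commonly recommended combination (placeholder URLs): each paginated page keeps a self-referencing canonical alongside its prev/next tags, rather than pointing every page's canonical at page 1:

<!-- in the <head> of page 2 of a category, e.g. /category/?page=2 -->
<link rel="canonical" href="https://www.example.com/category/?page=2" />
<link rel="prev" href="https://www.example.com/category/" />
<link rel="next" href="https://www.example.com/category/?page=3" />
-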
My .htaccess has changed, what do I do to avoid it again?
Hello. Today I noticed that our site no longer automatically redirects from the non-www to the www version. When I checked the .htaccess file I noticed a # in front of each line, and I know we did not insert them; after I removed them it worked fine. The only change we made recently was adding a mobile version of the site, but the call to auto-redirect is in a JS file and not in the .htaccess. Could it be the server? Is there any way that anything else might cause this? The site is HTML and WP, could it be because of that? Thanks, Simo
Technical SEO | Yonnir
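For comparison, a sketch of what a typical non-www to www redirect block looks like once the leading # characters are removed (example.com stands in for the real domain):

RewriteEngine On
# Redirect any request for the bare domain to the www version
RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]
-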
Internal search: rel=canonical vs noindex vs robots.txt
Hi everyone, I have a website with a lot of internal search results pages indexed. I'm not asking whether they should be indexed or not - I know they should not, according to Google's guidelines, and they create a bunch of duplicate pages, so I want to solve this problem. The thing is, if I noindex them, the site is going to lose a non-negligible chunk of traffic: nearly 13% according to Google Analytics! I thought of blocking them in robots.txt, but this solution would not keep them out of the index, and the pages appearing in Google SERPs would then look empty (no title, no description), so their CTR would plummet and I would lose a bit of traffic too. The last idea I had was to use a rel=canonical tag pointing to the original search page (which is empty, without results), but it would probably have the same effect as noindexing them, wouldn't it? (I've never tried, so I'm not sure.) Of course I did some research on the subject, but each of my findings recommended only one of the three methods! One even recommended noindex plus a robots.txt block, which makes no sense because the blocked pages would never be crawled and the noindex would go unseen. Is there somebody who can tell me which option is the best to keep this traffic? Thanks a million
Technical SEO | JohannCR
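If the goal is to drop these pages from the index while still letting Google crawl them and follow their links, one .htaccess sketch (assuming mod_setenvif and mod_headers are enabled, and that search result URLs live under /search - adjust the pattern to the real path):

# Flag requests for internal search result pages
SetEnvIf Request_URI "^/search" IS_INTERNAL_SEARCH
# Send noindex, follow only for those requests
Header set X-Robots-Tag "noindex, follow" env=IS_INTERNAL_SEARCH

The equivalent in-page version is a robots meta tag with "noindex, follow", which keeps the pages out of the index without blocking the crawl the way robots.txt does.
-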
Mod Rewrite / .htaccess avoid duplicate content
I have been searching and testing for hours but cannot find a solution. I am able to get a URL to display without the file extension, i.e. domain.com/file instead of domain.com/file.php. The problem is that both versions of the URL above work, therefore a duplicate content issue. How can I force the URL with the file extension not to resolve and give a 404 error? Or just redirect to the non-extension URL? If it helps, here is my code:

Options +FollowSymLinks
RewriteEngine On
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteCond %{REQUEST_FILENAME}.php -f
RewriteRule ^(.+)$ $1.php [L,QSA]

Technical SEO | MiamiWebCompany
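One sketch of the usual fix, placed above those existing rules: externally 301-redirect any direct request for a .php URL to the extensionless version, and let the existing conditions keep mapping it back to the .php file internally. THE_REQUEST is checked so the redirect only fires on what the browser actually asked for, not on the internal rewrite:

# Redirect /anything.php (as requested by the client) to /anything
RewriteCond %{THE_REQUEST} \s/([^?\s]+)\.php[\s?] [NC]
RewriteRule ^ /%1 [R=301,L]
-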
OK to block /js/ folder using robots.txt?
I know Matt Cutts suggests we allow bots to crawl CSS and JavaScript folders (http://www.youtube.com/watch?v=PNEipHjsEPU), but what if you have lots and lots of JS and you don't want to waste precious crawl resources? Also, as we update and improve the JavaScript on our site, we iterate the version number ?v=1.1... 1.2... 1.3... etc., and the legacy versions show up in Google Webmaster Tools as 404s. For example:
http://www.discoverafrica.com/js/global_functions.js?v=1.1
http://www.discoverafrica.com/js/jquery.cookie.js?v=1.1
http://www.discoverafrica.com/js/global.js?v=1.2
http://www.discoverafrica.com/js/jquery.validate.min.js?v=1.1
http://www.discoverafrica.com/js/json2.js?v=1.1
Wouldn't it just be easier to prevent Googlebot from crawling the js folder altogether? Isn't that what robots.txt was made for? Just to be clear - we are NOT doing any sneaky redirects or other dodgy JavaScript hacks. We're just trying to power our content and UX elegantly with JavaScript. What do you guys say: obey Matt, or run the JavaScript gauntlet?
Technical SEO | AndreVanKets
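For what it's worth, the rule being debated is only two lines of robots.txt - and blocking /js/ like this is exactly what the Matt Cutts video linked above warns against, since Google needs to crawl the JS to render pages properly:

User-agent: *
Disallow: /js/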