How safe is it to use a meta-refresh to hide the referrer?
Hi guys,
So I have a review site and I'm affiliated with several partnership programs whose products I advertise on my site. I don't want these affiliate programs to see the source of my traffic (my site), so I'm looking for a safe solution to hide the referrer URL.
I have recently added a rel="noreferrer" attribute to all my affiliate links, but this method isn't perfect, as not all browsers respect it. After doing some research and checking my competitors, I noticed that some of them use a meta-refresh, which seems more reliable in this regard.
So, how safe is it to use a meta-refresh as a means of hiding the referrer URL? I'm worried that implementing a meta-refresh redirect might negatively affect my SEO. Does anybody have any suggestions on how to hide the referrer URL without damaging SEO?
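For reference, the meta-refresh setup I've seen on competitors' sites is essentially a small interstitial page that the on-site affiliate link points to, something along these lines (the file name and destination URL are placeholders, not a real setup):

<!-- go.html - placeholder interstitial page that the affiliate link on my site would point to -->
<!DOCTYPE html>
<html>
<head>
  <meta charset="utf-8">
  <!-- keep the interstitial itself out of the index -->
  <meta name="robots" content="noindex">
  <!-- redirect immediately; historically many browsers drop the referrer on meta-refresh navigations, though behaviour varies by browser -->
  <meta http-equiv="refresh" content="0; url=https://affiliate-network.example.com/track?id=PLACEHOLDER">
</head>
<body>
  <a href="https://affiliate-network.example.com/track?id=PLACEHOLDER">Continue if you are not redirected automatically.</a>
</body>
</html>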
Thank you.
Related Questions
Are rel=author and rel=publisher meta tags currently in use?
Hello, do these meta tags have any current usage? <meta name="author" content="Author Name"> <meta name="publisher" content="Publisher Name"> I have also seen this usage linking to a company's Google+ page. Thank you
Intermediate & Advanced SEO | srbello
Should I use https schema markup after http-https migration?
Dear Moz community, I noticed that several groups of websites, after an HTTP -> HTTPS migration, update their schema markup. For example:

{
  "@context": "http://schema.org",
  "@type": "WebSite",
  "name": "Your WebSite Name",
  "alternateName": "An alternative name for your WebSite",
  "url": "http://www.your-site.com"
}

becomes

{
  "@context": "https://schema.org",
  "@type": "WebSite",
  "name": "Your WebSite Name",
  "alternateName": "An alternative name for your WebSite",
  "url": "https://www.example.com"
}

Interesting to know, because the Moz website is on the https protocol but uses the http version of the markup. Looking forward to answers 🙂
Intermediate & Advanced SEO | admiral99
Are ALL CAPS construed as spamming if they are used in a meta description tag call to action?
I know this seems like an old-school question. As a long-time SEO I would never use ALL CAPS in a title tag (unless a brand name is capitalized). However, I recently came across a Moz video about creating better calls to action in meta description tags. Some of the examples had CTAs in all caps (e.g. CALL NOW! or LOWEST QUOTES!). I realize there is a debate about the user experience implications, but I'm more concerned about search engines penalizing websites that use ALL CAPS CTAs in their meta description tags. Any feedback/advice would be appreciated. Thanks
Intermediate & Advanced SEO | RosemaryB
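For illustration, an ALL CAPS CTA of the kind described would sit in the description like this (the wording and values are made up):

<meta name="description" content="Compare plans from top providers in minutes. LOWEST QUOTES! CALL NOW!">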
Do you suggest I use the Yoast or the Google XML sitemap for my blog?
I just shut off the All-In-One SEO Pack plugin for WordPress and turned on the Yoast plugin. It's great! So much helpful, SEO-boosting info! So, in watching a video on how to configure the plugin, it mentions that I should update the sitemap using the Yoast sitemap. I'm afraid to do this, because I'm pretty technologically behind... I see I have a Google XML Sitemaps (by Arne Brachhold) plugin turned on (and have had it for many years). Should I leave this one on? Or would you recommend going through the steps to use the Yoast plugin sitemap? If so, what are the benefits of the Yoast plugin over the Google XML one? Thanks!
Intermediate & Advanced SEO | DavidC.
Layered navigation and hiding nav from user agent
I am trying to deal with the duplicate content issues presented by Magento's layered navigation feature (aka faceted navigation). I installed Amasty's Improved Navigation extension (https://amasty.com/improved-layered-navigation.html), and it offers the option to hide the layered navigation from specific user agents (i.e. Googlebot, Bingbot, etc.). This seems like cloaking to me and I hesitate to try it, unless hiding faceted navigation from specific user agents is known to be acceptable to Google (a white-hat practice). Does anyone know if this is the case?
Intermediate & Advanced SEO | Kyle_M
Can using nofollow on magento layered navigation hurt?
Howdy Mozzers! We would like to use nofollow, noindex on our Magento layered navigation pages after any two filters are selected. (We are using single-filter pages as landing pages, so we would like them indexed.) Is it OK to use nofollow, noindex on these filter pages? Are there disadvantages to using nofollow on internal pages? Matt mentioned refraining from using nofollow internally: https://www.youtube.com/watch?v=4SAPUx4Beh8 But we would like to conserve crawl bandwidth and PR flow on potentially hundreds of thousands of irrelevant/duplicate filter pages.
Intermediate & Advanced SEO | MozAddict
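For reference, the tag being proposed for the multi-filter pages would be along these lines (illustrative only):

<meta name="robots" content="noindex, nofollow">

The noindex, follow variant is the other commonly discussed option, since it asks for the page to be kept out of the index while still letting crawlers follow its links.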
Meta Keywords Good or Bad
Hi All, I've been reading more about the meta keywords tag and why it may not be a good idea to include it on pages, and I am looking for thoughts/feedback on this idea. If you have employed this tactic, can you give me some insight into any results you saw? If you decided not to employ this tactic, why did you choose not to? I want to understand all sides of this before making any changes to my company's websites. Thank you for your help!
Intermediate & Advanced SEO | airnwater
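For anyone unfamiliar with it, the tag in question is simply this (example values):

<meta name="keywords" content="example keyword one, example keyword two">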
Meta NoIndex tag and Robots Disallow
Hi all, I hope you can spend some time answering the first of a few questions 🙂 We are running a Magento site - the layered/faceted navigation nightmare has created thousands of duplicate URLs! Anyway, during my process to tackle the issue, I disallowed in robots.txt anything in the query string that was not a p (allowed for pagination). After checking some pages in Google, I did a site:www.mydomain.com/specificpage.html and a few duplicates came up along with the original, with "There is no information about this page because it is blocked by robots.txt". So I had also added Meta Noindex, Follow on all these duplicates, but I guess it wasn't being read because of robots.txt. So, coming to my question: did robots.txt block access to these pages? If so, were these already in the index, and after disallowing them with robots.txt, could Googlebot no longer read the Meta Noindex? Does Meta Noindex, Follow on pages actually help Googlebot decide to remove these pages from the index? I thought robots.txt would stop and prevent indexation? But I've read this: "Noindex is a funny thing, it actually doesn't mean 'You can't index this', it means 'You can't show this in search results'. Robots.txt disallow means 'You can't index this' but it doesn't mean 'You can't show it in the search results'." I'm a bit confused about how to use these, both in preventing duplicate content in the first place and then in helping to address dupe content once it's already in the index. Thanks! B
Intermediate & Advanced SEO | bjs2010