Pitfalls when implementing the “VARY User-Agent” server response
-
We serve different desktop- and mobile-optimized HTML on the same URL, based on a visitor's device type.
While Google continues to recommend the HTTP Vary: User-Agent header for mobile-specific versions of a page (http://www.youtube.com/watch?v=va6qtaiZRHg), we're also aware of issues raised around CDN caching: http://searchengineland.com/mobile-site-configuration-the-varies-header-for-enterprise-seo-163004 / http://searchenginewatch.com/article/2249533/How-Googles-Mobile-Best-Practices-Can-Slow-Your-Site-Down / http://orcaman.blogspot.com/2013/08/cdn-caching-problems-vary-user-agent.html
As this is primarily for Google's benefit, it's been proposed that we only return the Vary: User-Agent header when a Google user agent is detected (Googlebot/MobileBot/AdBot).
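In concrete terms, the proposal might look something like the sketch below, assuming a Python/Flask origin server (the bot token list and the helper function are illustrative assumptions, not an official or exhaustive list of Google crawler user agents):

```python
# A minimal sketch of the proposal, assuming a Python/Flask origin server.
# The bot token list and the helper are illustrative assumptions, not an
# official or exhaustive list of Google crawler user agents.
from flask import Flask, request

app = Flask(__name__)

GOOGLE_BOT_TOKENS = ("Googlebot", "AdsBot-Google", "Googlebot-Mobile")

def is_google_crawler(user_agent: str) -> bool:
    """Return True if the User-Agent string looks like a Google crawler."""
    return any(token in user_agent for token in GOOGLE_BOT_TOKENS)

@app.after_request
def add_conditional_vary(response):
    ua = request.headers.get("User-Agent", "")
    # Advertise UA-based negotiation only to Google's crawlers; everyone
    # else (including CDN edge servers) sees no Vary: User-Agent header.
    if is_google_crawler(ua):
        response.headers["Vary"] = "User-Agent"
    return response
```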
So here's the thing: as the server header response is not "content" per se, I think this could be an okay solution, though I wanted to throw it out there to the esteemed Moz community and get some additional feedback.
You guys see any issues/problems with implementing this solution?
Cheers!
linklater
-
So, there are lots of 'ifs' here, but the primary problem I see with your plan is that the CDN will return cached content to Googlebot without the request ever hitting your server, so you won't have the option to serve different headers to Googlebot.
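To make that failure mode concrete, here is a toy model in plain Python (all names hypothetical) of a cache that keys responses on URL alone, which is effectively what a CDN edge does when it ignores or never receives a Vary: User-Agent header:

```python
# A toy model (plain Python, all names hypothetical) of a CDN cache that
# keys responses on URL alone, which is effectively what happens when the
# edge ignores or never receives a Vary: User-Agent header.
cache = {}  # url -> cached body; note: no per-User-Agent variants

def origin(url: str, user_agent: str) -> str:
    """Origin server: renders device-specific HTML on the same URL."""
    return "mobile HTML" if "Mobile" in user_agent else "desktop HTML"

def cdn(url: str, user_agent: str) -> str:
    """CDN edge: serves from cache when it can; origin logic never runs."""
    if url not in cache:
        cache[url] = origin(url, user_agent)  # first visitor fills the cache
    return cache[url]

# 1. A desktop visitor warms the cache.
print(cdn("/page", "Mozilla/5.0 (Windows NT 10.0)"))  # -> desktop HTML
# 2. Googlebot's smartphone crawler arrives next: the request never reaches
#    the origin, so any Googlebot-only header logic never executes, and the
#    crawler is served the cached desktop page.
print(cdn("/page", "Googlebot/2.1 Mobile"))  # -> desktop HTML (wrong variant)
```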
Remember that every page is made up of the main HTML content (which may be static or dynamically generated for every request) plus a whole bunch of other resources (JavaScript and CSS files, images, font files, etc.). These other resources are typically static and lend themselves far better to being cached.
Are your pages static or dynamic? If they are dynamic then you are possibly not benefiting from caching anyway, so you could use the Vary header on just those pages and not on any static resources. This would ensure your static resources are cached by your CDN, giving you much of the CDN's benefit, while only the dynamic HTML content is served directly from the server.
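A minimal sketch of that split, again assuming a Flask origin (the header values are illustrative starting points, not tuned recommendations):

```python
# Dynamic HTML responses carry Vary: User-Agent and are kept out of shared
# caches; static assets get long cache lifetimes and no Vary header.
from flask import Flask

app = Flask(__name__)

@app.after_request
def split_cache_policy(response):
    if response.mimetype == "text/html":
        # Dynamic pages: tell caches the body differs per device, and keep
        # them from being stored as a single shared copy.
        response.headers["Vary"] = "User-Agent"
        response.headers["Cache-Control"] = "no-cache"
    else:
        # CSS/JS/images/fonts: identical for every device, so let the CDN
        # cache them aggressively with no Vary header at all.
        response.headers["Cache-Control"] = "public, max-age=31536000"
    return response
```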
If most of your pages are static you could still use this approach, but just without the full benefit of the CDN, which sucks.
Some of the CDNs are already working on this (see http://www.computerworld.com/s/article/9225343/Akamai_eyes_acceleration_boost_for_mobile_content and http://orcaman.blogspot.co.uk/2013/08/cdn-caching-problems-vary-user-agent.html) to try and find better solutions.
I hope some of this helps!

Related Questions
-
Duplicate without user-selected canonical excluded
We have PDF files uploaded in the media library of WordPress and used on our website. As these PDFs are duplicate content of the original publishers, we have marked links to these PDF URLs as nofollow. These pages are also disallowed in robots.txt. Now, Google Search Console has shown these pages excluded as "Duplicate without user-selected canonical". As it turns out, we cannot use a canonical tag with PDF pages to point to the original PDF source. If we embed a PDF viewer in our website and fetch the PDFs by passing the URLs of the original publisher, would the PDFs still be read as text by Google and again create a duplicate content issue? Another thing: when a PDF expires and is removed, it would lead to a 404 error. If we direct our users to the third-party website, then it would add to our bounce rate. What is the appropriate way to handle duplicate PDFs? Thanks
Intermediate & Advanced SEO | dailynaukri1
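Worth noting on the question above: Google documents support for rel="canonical" supplied as an HTTP Link header, which works for non-HTML files such as PDFs where a meta tag is impossible. A minimal sketch, assuming a Python/Flask server with hypothetical file paths and URLs; note that a robots.txt disallow would stop Google from fetching the PDF and seeing this header at all:

```python
# A minimal sketch, assuming a Python/Flask server; the file path and the
# publisher URL are hypothetical placeholders.
from flask import Flask, send_file

app = Flask(__name__)

@app.route("/files/report.pdf")
def serve_pdf():
    # Stream the locally hosted PDF, but point search engines at the
    # original publisher's copy as the canonical version via a Link header.
    response = send_file("media/report.pdf", mimetype="application/pdf")
    response.headers["Link"] = (
        '<https://publisher.example.com/report.pdf>; rel="canonical"'
    )
    return response
```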
How does Infinite Scrolling work with unique URLs as users scroll down? And is this SEO friendly?
I was on a site today and, as I scrolled down and viewed the other posts below the top one I read, I noticed that each post below the top one had its own unique URL. I have not seen this before and was curious whether this method of infinite scrolling is SEO friendly. Will Google's spiders scroll down and index these posts below the top one? The URLs of these lower posts, by the way, were the same URLs that would be seen if I clicked on each of these posts. Looking at Google's preferred method for infinite scrolling, they recommend something different: https://webmasters.googleblog.com/2014/02/infinite-scroll-search-friendly.html. All insight welcome. Thanks! Christian
Intermediate & Advanced SEO | Sundance_Kidd0
Using a US CDN (Cloudflare) for a UK site. Should I use a UK-based CDN, as it says my server is based in the USA?
Hi All, We are a UK company with UK customers only and use the Cloudflare CDN. Our site is hosted by a UK company with servers here, but from looking online and checking where my site is hosted, some sites are telling me the name of our UK hosting company and other sites are telling me my site is hosted in San Francisco (USA), where I presume Cloudflare is based. I know Cloudflare has a couple of servers in the UK it uses, but given all my customers are UK based, I don't want this to affect rankings etc., as I thought it was a ranking benefit to be hosted in the country you are based in. Is there any issue with this, and should I change, or is Google clever enough to know, so I shouldn't worry? thanks Pet
Intermediate & Advanced SEO | PeteC120
Best way to remove full demo (staging server) website from Google index
I've recently taken over an in-house role at a property auction company; they have a main site on the top-level domain (TLD) and 400+ agency subdomains! company.com agency1.company.com agency2.company.com... I recently found that the web development team have a demo domain per site, found on a subdomain of the original domain and mirroring the site. The problem is that they have all been found and indexed by Google: demo.company.com demo.agency1.company.com demo.agency2.company.com... Obviously this is a problem as it is duplicate content and so on, so my question is: what is the best way to remove the demo domains / subdomains from Google's index? We are taking action to add a noindex tag into the header (of all pages) on the individual domains, but this isn't going to get them removed any time soon! Or is it? I was also going to add a robots.txt file into the root of each domain, just as a precaution; within this file I had intended to disallow all. The final course of action (which I'm holding off on in the hope someone comes up with a better solution) is to add each demo domain / subdomain into Google Webmaster and remove the URLs individually. Or would it be better to go down the canonical route?
Intermediate & Advanced SEO | iam-sold0
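One way to send the noindex signal for the question above across an entire demo host is an X-Robots-Tag response header rather than per-page meta tags. A minimal sketch, assuming a Python/Flask server (the "demo." hostname convention is taken from the question); one caveat: a robots.txt disallow-all would prevent Google from recrawling the pages and ever seeing the noindex, so the two measures work against each other:

```python
# A minimal sketch, assuming a Python/Flask server; the "demo." hostname
# convention is taken from the question above.
from flask import Flask, request

app = Flask(__name__)

@app.after_request
def noindex_demo_hosts(response):
    # Every response served from a demo host carries a blanket noindex,
    # which also covers non-HTML files (PDFs, images) that a meta tag can't.
    if request.host.startswith("demo."):
        response.headers["X-Robots-Tag"] = "noindex, nofollow"
    return response
```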
Redirect at Registrar or Server
Hi folks, I have run into a situation where a new client has 3 TLDs (e.g. mycompany.com, mycompany.org and mycompany.biz), all with the same content. They are on a Windows IIS environment, which I am not familiar with. Until now, all of my clients have been on Linux/Apache environments, so I have always dealt with these issues using htaccess. Currently all three resolve to the same IP, but the URL remains the same in the browser address field (e.g. if you type in mycompany.org, it remains as such). We want the .org and .biz versions to 301 redirect to the .com TLD. I am wondering what the best practice might be in this situation. Could we simply redirect at the registrar level, or would implementation at the server level be best? If so, I would really appreciate an example from someone with experience implementing redirects on IIS. Thank you!
Intermediate & Advanced SEO | SCW0
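On IIS the question above is typically handled with the URL Rewrite module in web.config, but the underlying server-level logic is small; a language-neutral sketch of a host-based 301 in Python/Flask, where the hostnames follow the question's placeholders and the canonical host is an assumption:

```python
# A sketch of the server-level logic in Python/Flask; hostnames follow the
# question's placeholders, and the canonical host is an assumption.
from flask import Flask, redirect, request

app = Flask(__name__)

CANONICAL_HOST = "www.mycompany.com"  # the .com version the client keeps

@app.before_request
def force_canonical_host():
    # Requests arriving on mycompany.org / mycompany.biz get a permanent
    # (301) redirect to the same path and query on the canonical host.
    if request.host != CANONICAL_HOST:
        path = request.full_path.rstrip("?")  # full_path keeps a trailing '?'
        return redirect(f"https://{CANONICAL_HOST}{path}", code=301)
```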
Why is my server's IP address showing up in Webmaster Tools?
In links to my site in Google Webmaster Tools, I am seeing over 28,000 links from an IP address. The IP address is the address that my server is hosted on. For example, it shows 200.100.100.100/help, almost as if there are two copies of my site: one under the domain name and one under the IP address. Is this bad? Or is it just showing up there, with Google knowing that it is the same site since the IP and domain are from the same server?
Intermediate & Advanced SEO | EcommerceSite0
Other domains hosted on same server showing up in SERP for 1st site's keywords
For the website in question (the first domain alphabetically on the shared hosting space), strange search results are appearing on the SERP for keywords associated with the site. Here is an example: a search for "unique company name" shows www.uniquecompanyname.com as the top result. But on pages 2 and 3, we are getting results for the same content, but for domains hosted on the same server. Here are some examples with the domain names replaced:

UNIQUE DOMAIN NAME PAGE TITLE
ftp.DOMAIN2.com/?action=news&id=63
META DESCRIPTION TEXT

UNIQUE DOMAIN NAME PAGE TITLE 2
www.DOMAIN3.com/?action=news&id=120
META DESCRIPTION TEXT2

UNIQUE DOMAIN NAME PAGE TITLE 2
www.DOMAIN4.com/?action=news&id=120
META DESCRIPTION TEXT2

UNIQUE DOMAIN NAME PAGE TITLE 3
mail.DOMAIN5.com/?action=category&id=17
META DESCRIPTION TEXT3

ns5.DOMAIN6.com/?action=article&id=27

There are more, but those are just some examples. These other domain names being listed are other customer domains on the same VPS shared server. When clicking the result, the browser URL still shows the other customer domain name B, but the content is usually the 404 page. The page title and meta description on that page are not displayed the same as on the SERP. As far as we can tell, this is the only domain this is occurring for. So far, no crawl errors have been detected in Webmaster Tools, and the Moz crawl is not completed yet.
Intermediate & Advanced SEO | Motava
What is the average response time for a reconsideration request?
I know that Google states 'several' weeks, but I'm just wondering if anybody has any experience with a reconsideration request: did you get any type of reply, and what was your general experience? thanks
Intermediate & Advanced SEO | BelfastSEO0