What referrer is shown in the HTTP request when the Google crawler visits a page?
-
Is it legitimate to serve different content depending on the referrer of the HTTP request?
Case A: a user views a page of the site with plenty of information about a brand and clicks a link on that page to a product detail page for that brand. Here I don't want to repeat the information about the brand itself.
Case B: a user lands directly on the product detail page by clicking a SERP result. In this case I would like to show them a few paragraphs about the brand.
Is this a bad idea? Does anyone have experience doing it?
My main concern is the Google crawler. It shouldn't be considered cloaking, because I am not differentiating on user agent (bot vs. non-bot).
But which referrer does Google use when it crawls the site? I have no idea; does anyone know?
When following links from one page to another within the website, does the Google crawler leave the referrer empty?
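For context, the kind of referrer check I have in mind would look roughly like this (a hypothetical sketch using Flask; the domain, route name, and markup are placeholders, not my actual implementation):

```python
# Hypothetical illustration of branching on the Referer header (Flask).
# The domain, route name, and markup are placeholders.
from flask import Flask, request

app = Flask(__name__)

BRAND_BLURB = "<section><h2>About the brand</h2><p>A few paragraphs...</p></section>"

@app.route("/products/<product_id>")
def product_detail(product_id):
    referrer = request.headers.get("Referer", "")             # may be empty or spoofed
    came_from_site = referrer.startswith("https://www.example.com/")

    # Case A: internal visit, skip the blurb. Case B: direct/external visit, show it.
    blurb = "" if came_from_site else BRAND_BLURB
    return f"<h1>Product {product_id}</h1>{blurb}"
```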
-
After Googling it and thinking it over, I chose not to. I think you are absolutely right; it's too inconsistent.
-
You won't get a referrer. Googlebot is not like a real user surfing from site to site. What Googlebot does is this, more or less (see the rough sketch after the list):
- Googlebot requests a standalone page
- Googlebot parses the page. During this process it notes the links on that page and, depending on various mechanisms (nofollow, internal PageRank, the mood of Matt Cutts, etc.), it marks those links for the system to crawl later
- When Googlebot is done, it grabs another page off the list (likely without knowing how it got onto said list) and goes back to step 1
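In very rough code, that fetch-parse-queue loop amounts to something like this (a deliberately simplified sketch; Google's real pipeline is far more elaborate, but the point is that each page is requested on its own, with no Referer header carried over from a "previous" page):

```python
# Simplified sketch of a fetch-parse-queue crawl loop (standard library only).
# Each URL is fetched as a standalone request -- no Referer header is sent.
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen


class LinkExtractor(HTMLParser):
    """Collects href values from <a> tags while parsing a page."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.append(href)


def crawl(seed, limit=10):
    queue, seen = deque([seed]), {seed}
    while queue and len(seen) <= limit:
        url = queue.popleft()
        html = urlopen(url).read().decode("utf-8", errors="ignore")
        parser = LinkExtractor()
        parser.feed(html)
        for href in parser.links:
            absolute = urljoin(url, href)
            if absolute not in seen:       # note the link for later, then move on
                seen.add(absolute)
                queue.append(absolute)
```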
Now, to your question: since Googlebot sends no referrer, it won't see your alternate content, which means that alternate content won't get indexed.
I would suggest that best practice here is NOT to filter on referrer data, which can be inconsistent and easily faked. Instead, make a separate page that contains your extra brand information and let users decide whether they want it. That way Googlebot finds all of your content and your users get a better experience.
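In concrete terms, that alternative might look something like this (again a hypothetical Flask sketch; the URLs, brand slug, and copy are placeholders):

```python
# Sketch of the suggested approach: the brand details live on their own URL,
# linked from the product page, so both Googlebot and users can reach everything.
# Route names and content are placeholders.
from flask import Flask

app = Flask(__name__)


@app.route("/products/<product_id>")
def product_detail(product_id):
    # The product page stays focused and simply links out to the brand page.
    return (f"<h1>Product {product_id}</h1>"
            '<p><a href="/brands/example-brand">More about this brand</a></p>')


@app.route("/brands/<brand_slug>")
def brand_page(brand_slug):
    # The full brand story is crawlable and indexable at its own address.
    return (f"<h1>{brand_slug.replace('-', ' ').title()}</h1>"
            "<p>A few paragraphs about the brand...</p>")
```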