Serving different content based on IP location
-
I have a city-centric website. For the sake of simplicity, say I only have two cities: City A and City B.
Depending on a user's IP address, they will get either City A or City B. Users can change their location through JavaScript on the page, but there is no cross-linking between cities. By this, I mean that unless you can read and execute JavaScript, there is no way for you to get from City A to City B.
My concern is this: Googlebot comes to my site, and we serve them City A. How does City B get discovered if Googlebot doesn't read JavaScript?
We have an XML sitemap plus plenty of backlinks to City B. Is this sufficient?
Should I provide a static link to City B (and vice versa) on the homepage for crawling purposes?
-
Adding to Daniel's comment, I'd say the big difference is "...through our faceted search." It's important to have both the XML sitemap entries and a crawl path. An XML sitemap may be enough to get the pages indexed, but they won't inherit any internal link juice - that comes through your internal links. Somewhere, there needs to be a link that Google can crawl to the other cities.
The direct backlinks will help, and should get you indexed and possibly ranking, but you're still losing the authority from the domain as a whole that you'd inherit via internal links. The upshot is that you'll lose ranking power.
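For illustration, a minimal sketch of what such a crawlable link could look like, assuming hypothetical /city-a/ and /city-b/ URLs - any plain anchor that works without JavaScript is enough:

```html
<!-- Hedged sketch: static, crawlable links between city versions (hypothetical URLs). -->
<!-- Because these are ordinary <a href> anchors, Googlebot can follow them without executing JavaScript. -->
<nav>
  <a href="/city-a/">City A</a>
  <a href="/city-b/">City B</a>
</nav>
```

A footer or a small "change city" list works equally well; what matters is that the URLs appear in the rendered HTML, not only behind a JavaScript handler.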
-
I do the exact same thing (local business pages based on visitor IP), but you can change your location based on what search terms you enter.
What we also do is allow anyone to browse any state/city results through our faceted search, and we have XML sitemap entries for each state/category landing page, which then link down to city-level searches.
We have seen no problem with Google indexing our site (currently almost 500,000 pages indexed).
As long as you don't actively hide content that doesn't pertain to the requesting IP, and you provide some way for Google to find it, you should be OK.
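As a rough sketch of the sitemap side, assuming hypothetical landing-page URLs, each city or state/category page would simply get its own entry:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Hedged sketch: one <url> entry per landing page (hypothetical URLs). -->
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>https://example.com/city-a/</loc></url>
  <url><loc>https://example.com/city-b/</loc></url>
</urlset>
```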
Related Questions
-
Hreflang and canonical tags for a new country-specific website - different base domain
I have a slightly different situation compared to most other questions that ask about hreflang and canonical tags for country-specific versions of websites. This is an SEO-related question and I was hoping to get some insight on your recommendations. We have an existing Australian website - say, ausnight.com.au. Now we want to launch a UK version of this website - the domain is uknight.co.uk. Please note, we are not only changing from .com.au to .co.uk, but the base domain name changed as well - from ausnight to uknight. As you can understand, the audience for each website is different. Both websites have mostly the same pages with the same content. The questions I have are: Should we put a canonical tag on the new website's pages? If we don't put a canonical tag on the new website's pages, what is the impact on the SEO ranking of the current website? I believe we need to put hreflang tags on both websites to tell Google that we have another language version (en-au vs en-gb) of the same page. Is this correct?
Intermediate & Advanced SEO | TinoSharp
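Since both domains are named in the question, a minimal sketch of what the hreflang annotations could look like on one equivalent page pair (the /some-page/ path is hypothetical):

```html
<!-- Hedged sketch: reciprocal hreflang tags, placed on BOTH pages (hypothetical path). -->
<!-- i.e. on https://ausnight.com.au/some-page/ and on https://uknight.co.uk/some-page/: -->
<link rel="alternate" hreflang="en-au" href="https://ausnight.com.au/some-page/" />
<link rel="alternate" hreflang="en-gb" href="https://uknight.co.uk/some-page/" />
```
-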
Do I need to add the actual language in meta tags and descriptions for different languages? Cited for duplicate content in a different language
Hi, I am fairly new to SEO and this community, so pardon my questions. We recently launched a Mandarin-language version of the entire site on our Drupal site, and when I crawl the site, I get duplicate content warnings for the pages that are in Mandarin. Is this a problem, or can I ignore it? Should I make different page titles for the different languages? Also, for the meta tags and descriptions, would it be better to write them in the native language for Google to search for? Thanks in advance.
Intermediate & Advanced SEO | lynetteboss
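As a hedged sketch (the URLs and text here are hypothetical), a Mandarin page would typically carry its own translated title and description plus hreflang annotations, rather than reusing the English metadata:

```html
<!-- Hedged sketch: head of a hypothetical Mandarin page with translated metadata. -->
<html lang="zh-Hans">
<head>
  <title>示例产品 - 示例网站</title>
  <meta name="description" content="该产品的中文描述。" />
  <link rel="alternate" hreflang="en" href="https://example.com/product/" />
  <link rel="alternate" hreflang="zh-Hans" href="https://example.com/zh/product/" />
</head>
```
-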
Same content, different languages. Duplicate content issue? | international SEO
Hi, If the "content" is the same but written in different languages, will Google see the articles as duplicate content? If Google won't see it as duplicate content, what is the benefit of implementing the alternate lang tag? Kind regards, Jeroen
Intermediate & Advanced SEO | chalet
-
What is the difference between multilingual and multiregional websites?
Hi all, So, I have studied multilingual and multiregional websites. As soon as possible, we will expand the website languages to English and Spanish. The URLs will be like this: http://example.com/pt-br, http://example.com/en-us, http://example.com/es-ar. Thereby, the tags will be set up accordingly. Great! But my doubt is: for /es-ar/, will the indexing only apply to Spanish speakers in Argentina? What about the other countries that speak the same language, like Spain, Mexico, etc.? I don't know if it will be possible to develop Spanish content especially for each region. Should I build a multiregional website or only a multilingual one? How does Google see this case? Thanks for any advice!!
Intermediate & Advanced SEO | mobic
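To illustrate the distinction, a hedged sketch of hreflang tags for the URLs given in the question; the region-neutral es entry and the /es/ path are assumptions, shown as one common way to cover Spanish speakers outside Argentina:

```html
<!-- Hedged sketch: language-region targeting with a generic-language fallback. -->
<link rel="alternate" hreflang="pt-br" href="http://example.com/pt-br/" />
<link rel="alternate" hreflang="en-us" href="http://example.com/en-us/" />
<link rel="alternate" hreflang="es-ar" href="http://example.com/es-ar/" />
<!-- Assumption: a region-neutral Spanish version for Spain, Mexico, etc. -->
<link rel="alternate" hreflang="es" href="http://example.com/es/" />
<link rel="alternate" hreflang="x-default" href="http://example.com/" />
```
-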
Duplicate Content through 'Gclid'
Hello, We've had the known problem of duplicate content through the gclid parameter caused by Google AdWords. As per Google's recommendation, we added the canonical tag to every page on our site, so when the bot came to each page it would go "Ah-ha, this is the original page". We also added the parameter to the URL parameters in Google Webmaster Tools. However, it now seems as though a canonical is automatically being given to these newly created gclid pages; see below: https://www.google.com.au/search?espv=2&q=site%3Awww.mypetwarehouse.com.au+inurl%3Agclid&oq=site%3A&gs_l=serp.3.0.35i39l2j0i67l4j0i10j0i67j0j0i131.58677.61871.0.63823.11.8.3.0.0.0.208.930.0j3j2.5.0....0...1c.1.64.serp..8.3.419.nUJod6dYZmI Therefore these new pages are now being indexed, causing duplicate content. Does anyone have any idea what to do in this situation? Thanks, Stephen.
Intermediate & Advanced SEO | MyPetWarehouse
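For reference, a minimal sketch of the canonical tag in question, assuming a hypothetical product path - the page served at the gclid variant points back to the clean URL:

```html
<!-- Hedged sketch: served on both the clean URL and any ?gclid=... variant of it. -->
<!-- e.g. on https://www.mypetwarehouse.com.au/some-product/?gclid=abc123 (hypothetical path): -->
<link rel="canonical" href="https://www.mypetwarehouse.com.au/some-product/" />
```
-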
Why would our server return a 301 status code when Googlebot visits from one IP, but a 200 from a different IP?
I have begun a daily process of analyzing a site's Web server log files and have noticed something that seems odd. There are several IP addresses from which Googlebot crawls that our server returns a 301 status code for every request, consistently, day after day. In nearly all cases, these are not URLs that should 301. When Googlebot visits from other IP addresses, the exact same pages are returned with a 200 status code. Is this normal? If so, why? If not, why not? I am concerned that our server returning an inaccurate status code is interfering with the site being effectively crawled as quickly and as often as it might be if this weren't happening. Thanks guys!
Intermediate & Advanced SEO | danatanseo
-
How does Google recognize original content?
Well, we wrote our own product descriptions for 99% of the products we have. They are all descriptive and have at least 4 bullet points to show the best features of the product without reading the whole description. So instead of using a manufacturer description, we spent $$$$ and worked with a copywriter, and we still do the same thing whenever we add a new product to the website. However, since we are using a product datafeed and send it to Amazon and Google, they use our product descriptions too. I always wait a couple of days until Google crawls our product pages before I send recently added products to Amazon or Google. I believe if Google crawls our product page first, we will be the owner of the content? Am I right? If not, I believe Amazon is taking advantage of my original content. I am asking because we are a relatively new ecommerce store (online since Feb 1st). While we didn't have a lot of organic traffic in the past, I see that our organic traffic dropped like 50% in April; it seems like it was affected by the latest Google update. We never bought a link or did black-hat link building - actually, we didn't do any link building activity until last month. So did Google think that we have shallow or duplicated content and drop our rankings? I see that our organic traffic has been improving very, very slowly since then, but basically it is like 5%-10% of our current daily traffic. What do you guys think? Do you think all our original content effort is going to trash?
Intermediate & Advanced SEO | serkie
-
How do you archive content?
In this video from Google Webmasters about content, https://www.youtube.com/watch?v=y8s6Y4mx9Vw, around 0:57 it is advised to "archive any content that is no longer relevant". My question is: how exactly do you do that? By adding noindex to those pages, by removing all internal links to that page, or by completely removing those pages from the website? How do you technically archive content?
Intermediate & Advanced SEO | SorinaDascalu
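As one hedged sketch of the noindex approach mentioned in the question - the page stays live for visitors but is dropped from the index, while its links can still be followed:

```html
<!-- Hedged sketch: placed in the <head> of an archived page. -->
<meta name="robots" content="noindex, follow" />
```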