Best practice for URL - Language/country
-
Hi,
We are planning on having our website localized into more languages. We already have an English and German version. The German version is currently a sub-domain:
www.example.com --> English version
de.example.com --> German version
Is this recommended? Or is it always better to have URLs with language prefixes, such as www.example.com/de/?
Which is a better practice in terms of SEO?
-
Hi Peter,
Both answers above are really good, but maybe I can point you a little further in the right direction. Perhaps you could answer the questions below, and then I can give you my personal opinion on which method would be best:
-
Will you be putting an equal amount of marketing effort (content, PR, etc.) into, for example, the Spanish version compared with the English one?
-
Are you able to offer a fully localised service (e.g. Spanish customer service, a Spanish sales team)?
-
Is your company well-known globally?
It's also important not to forget that another option is using ccTLDs (e.g. .co.uk, .com.au). These give the strongest signal to search engines about the country being targeted and, just as importantly, make you look more "local", which can do wonders for conversion rates in countries where your company is not well-known.
-
I think that Tom gave you one of the best answers possible.
However, I hope this helps: your site structure should be very similar to one of those described in the two URLs below. If I may, I'd like to add a little information that I found helpful:
- https://support.google.com/webmasters/answer/189077?hl=en
- https://www.deepcrawl.com/knowledge/best-practice/hreflang-101-how-to-avoid-international-duplication/
WHERE TO ADD YOUR HREFLANG TAGS
You can add hreflang tags to your sitemaps, to the HTTP response headers, or to the page itself.
IN YOUR SITEMAPS
The best place to add hreflang is in your sitemap, as including the tags in the headers or on the page adds weight to every single page request.
The following example, placed in the sitemap entry for the German page, informs Google about both the German version and its English alternate:

    <url>
      <loc>http://www.example.com/deutsch/</loc>
      <xhtml:link rel="alternate" hreflang="en" href="http://www.example.com/english/" />
      <xhtml:link rel="alternate" hreflang="de" href="http://www.example.com/deutsch/" />
    </url>
This method would need to be repeated in full for every page on the site and for all the international websites.
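To make the reciprocity concrete, here is a minimal sketch of a complete sitemap carrying the entries for both language versions (same placeholder URLs as above; note the extra xhtml namespace declaration on <urlset>, which the xhtml:link element requires):

    <?xml version="1.0" encoding="UTF-8"?>
    <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
            xmlns:xhtml="http://www.w3.org/1999/xhtml">
      <url>
        <loc>http://www.example.com/english/</loc>
        <xhtml:link rel="alternate" hreflang="en" href="http://www.example.com/english/" />
        <xhtml:link rel="alternate" hreflang="de" href="http://www.example.com/deutsch/" />
      </url>
      <url>
        <loc>http://www.example.com/deutsch/</loc>
        <xhtml:link rel="alternate" hreflang="en" href="http://www.example.com/english/" />
        <xhtml:link rel="alternate" hreflang="de" href="http://www.example.com/deutsch/" />
      </url>
    </urlset>

Each <url> entry lists every alternate, including itself, which is why this markup grows quickly on large international sites.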
IN YOUR HEADERS AND HTML
Hreflang tags can also be added to the HTTP header:
    Link: <http://www.example.com/english/>; rel="alternate"; hreflang="en"
    Link: <http://www.example.com/deutsch/>; rel="alternate"; hreflang="de"
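The header method is particularly useful for non-HTML documents such as PDFs, where there is no markup to annotate. A hypothetical example for a white paper offered in both languages (the file names are invented for illustration):

    Link: <http://www.example.com/english/whitepaper.pdf>; rel="alternate"; hreflang="en"
    Link: <http://www.example.com/deutsch/whitepaper.pdf>; rel="alternate"; hreflang="de"

Both Link headers would be returned with each language version of the file.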
Or in the <head> tag of the HTML:

    <link rel="alternate" hreflang="en" href="http://www.example.com/english/" />
    <link rel="alternate" hreflang="de" href="http://www.example.com/deutsch/" />
And because you will effectively be creating a new site, this guide to website migration is worth a read:
https://www.candidsky.com/blog/the-seo-2015-guide-to-website-migration/
As for the structure itself, it would come down to your backlink profile. If it were me, I would use Moz Open Site Explorer, Majestic, Ahrefs and Google Webmaster Tools to determine whether I'd attract enough backlinks to support a subdomain or a separate TLD; otherwise I would use a subfolder, together with an extremely fast hosting setup (Fastly is excellent, and there are many other great options as well).
Hope this helps,
Tom
PS: use http://hreflang.ninja/ to check your hreflang tags.
-
Hi Peter
Both are viable options.
I'd highly recommend going through Aleyda Solis' international SEO posts here on the Moz blog. They can teach you how to prepare for international SEO, how to approach site structure, and how to generate the relevant code and hreflang tags.
Here is her international SEO checklist
Here is her Hreflang blog post and generator tool
And 40 tools to help advance your international SEO
They're great reading, and there's nothing I'd be able to add to them, so I hope this helps!
Related Questions
-
Google tries to index non-existing language URLs. Why?
Hi, I am working for a SaaS client. He serves two different language versions on two different subdomains: de.domain.com/company for German and en.domain.com for English. Many thousands of URLs have been indexed correctly. But Google Search Console tries to index URLs that never existed and still don't exist: de.domain.com/en/company, en.domain.com/de/company, and a thousand more with /en/ or /de/ in between. We never used this variant, and calling these URLs correctly throws up a 404 page (but with the wrong response code; we're fixing that 😉). Still, Google tries to index these kinds of URLs again and again, and I couldn't find any source for them. No website is using them as outgoing links, etc. We do see in our log files that a Screaming Frog installation and Moz's Open Site Explorer were trying to access these earlier. My question: how does Google come up with this? Where did they get these URLs that, to our knowledge, never existed? Any ideas? Thanks 🙂
Technical SEO | TheHecksler
URL structure on site: currently it's domain/product-name, NOT domain/category/product-name. Is this bad?
I have an eCommerce site, and the site structure is domain/product-name rather than domain/product-category/product-name. Do you think this will have a negative impact SEO-wise? I have seen that some of my individual product pages get better rankings than my categories.
Technical SEO | the-gate-films
Best way to noindex long dynamic URLs?
I just got a Moz crawl back and see lots of errors for overly dynamic URLs. The site is a villa rental site that gives users the ability to search by bedrooms, amenities, price, etc., so I'm wondering about the best way to keep these kinds of dynamically generated pages, with URLs like /property-search-page/?location=any&status=any&type=any&bedrooms=9&bathrooms=any&min-price=any&max-price=any, from being indexed. Any assistance will be greatly appreciated :)
Technical SEO | wcbuckner
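For anyone landing on this archived question: the standard mechanism for this kind of faceted-search page is a robots meta tag emitted by the search template, sketched below (assuming the template can be edited):

    <meta name="robots" content="noindex, follow" />

Note that the pages must remain crawlable (i.e. not blocked in robots.txt) for the noindex to be seen and honoured.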
How to Remove /feed URLs from Google's Index
Hey everyone, I have an issue with RSS /feed URLs being indexed by Google for some of our WordPress sites. Have a look at this Google query, and click to show omitted search results: you'll see we have 500+ /feed URLs indexed by Google for our many category pages etc. Here is one example URL: http://www.howdesign.com/design-creativity/fonts-typography/letterforms/attachment/gilhelveticatrade/feed/. Based on the content/code of the XML page, it looks like WordPress is generating these: <generator>http://wordpress.org/?v=3.5.2</generator>
Any idea how to get them out of Google's index without 301 redirecting them? We need the WordPress-generated RSS feeds to work for various uses. My first two thoughts: work with our development team to see if we can get a noindex meta robots tag onto the pages, though they are dynamically generated, so I'm not sure that will be possible; or add a "feed" parameter in the GWT "URL Parameters" section, though I don't want to stop Google from crawling these entirely. I figure I need Google to crawl them, see some code that says to drop the pages from its index, and THEN stop crawling them. I don't think the "Remove URL" feature in GWT will work, since that tool only removes URLs from the search results, not from the actual Google index. FWIW, this site is using the Yoast plugin, and we set every page type to noindex except the homepage, posts, pages and categories. We have other sites on Yoast that have no /feed URLs indexed by Google at all. Side note: the /robots.txt file was previously blocking crawling of the /feed URLs on this site, which is why you'll see that note in the Google SERPs when you click on the query link given in the first paragraph.
Technical SEO | M_D_Golden_Peak
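One technique that fits the constraints described above (dynamically generated pages, feeds that must keep working) is the X-Robots-Tag HTTP header, which applies noindex without touching the markup. A hypothetical Apache 2.4 sketch, assuming mod_headers is enabled:

    # Hypothetical: send a noindex header for WordPress feed URLs
    <If "%{REQUEST_URI} =~ m#/feed/?$#">
      Header set X-Robots-Tag "noindex"
    </If>

As with a meta robots tag, the URLs must stay crawlable so Google can see the header before dropping them from the index.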
Can you have a /sitemap.xml and /sitemap.html on the same site?
Thanks in advance for any responses; we really appreciate the expertise of the SEOmoz community! My question: since the file extensions are different, can a site have both a /sitemap.xml and a /sitemap.html sitting at the root domain? For example, we've already put the HTML sitemap in place here: https://www.pioneermilitaryloans.com/sitemap. Now we're considering adding an XML sitemap. I know standard practice is to load it at the root (www.example.com/sitemap.xml), but I'm wondering if this will cause conflicts. I've been unable to find this topic addressed anywhere, or any real-life examples of sites currently doing this. What do you think?
Technical SEO | PioneerServices
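For what it's worth, the two files have different URLs, so they cannot conflict; crawlers are usually pointed at the XML file explicitly via robots.txt, independent of any HTML sitemap. A sketch for the site mentioned above (the exact path is an assumption):

    Sitemap: https://www.pioneermilitaryloans.com/sitemap.xml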
ECommerce: Best Practice for expired product pages
I'm optimizing a pet supplies site (http://www.qualipet.ch/) and have a question about the best practice for expired product pages. We have thousands of products, and hundreds of our offers only exist for a few months. Currently, when a product is no longer available, the site just returns a 404. Now I'm wondering what a better solution could be:
1. When a product disappears, a 301 redirect is established to the category page it was in (i.e. a leash would redirect to dog accessories).
2. After a product disappears, a customized 404 page appears, listing similar products (but the server returns a 404).
I prefer solution 1, but am afraid that having hundreds of new redirects each month might look strange. Then again, returning lots of 404s to search engines is not the best option either. Do you know the best practice for large eCommerce sites with hundreds or even thousands of products that appear and disappear on a frequent basis? What should be done with those obsolete URLs?
Technical SEO | zeepartner
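As a sketch of option 1 on an Apache server (the product and category paths below are invented for illustration):

    # Hypothetical: redirect a discontinued product to the category it lived in
    Redirect 301 /products/dog-leash-xl https://www.qualipet.ch/dog-accessories/

For hundreds of redirects a month, a pattern-based rule (RedirectMatch) or a redirect map scales better than hand-written lines.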
How does Google find /feed/ at the end of all pages on my site?
Hi! In Google Webmaster Tools I find .../feed/ as 404 pages in crawl errors. The problem is that none of these pages exist and they have no inbound links (except the start page). FYI, it's a WordPress site. Examples:
www.mysite.com/subpage1/feed/
www.mysite.com/subpage2/feed/
www.mysite.com/subpage3/feed/
etc.
Does Google look for /feed/ by default, or why do I keep getting these 404s every day?
Technical SEO | Vivamedia
OK to block /js/ folder using robots.txt?
I know Matt Cutts suggests we allow bots to crawl CSS and JavaScript folders (http://www.youtube.com/watch?v=PNEipHjsEPU). But what if you have lots and lots of JS and you don't want to waste precious crawl resources? Also, as we update and improve the JavaScript on our site, we iterate the version number (?v=1.1... 1.2... 1.3... etc.), and the legacy versions show up in Google Webmaster Tools as 404s. For example:
http://www.discoverafrica.com/js/global_functions.js?v=1.1
http://www.discoverafrica.com/js/jquery.cookie.js?v=1.1
http://www.discoverafrica.com/js/global.js?v=1.2
http://www.discoverafrica.com/js/jquery.validate.min.js?v=1.1
http://www.discoverafrica.com/js/json2.js?v=1.1
Wouldn't it just be easier to prevent Googlebot from crawling the /js/ folder altogether? Isn't that what robots.txt was made for? Just to be clear: we are NOT doing any sneaky redirects or other dodgy JavaScript hacks. We're just trying to power our content and UX elegantly with JavaScript. What do you guys say: obey Matt, or run the JavaScript gauntlet?
Technical SEO | AndreVanKets
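For reference, the robots.txt rule being debated would simply be:

    User-agent: *
    Disallow: /js/

Worth noting that Google's guidance since then has gone the other way: blocking JS can stop pages from rendering properly for Googlebot, so a rule like this is generally discouraged today.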