6 .htaccess Rewrites: Remove index.html, Remove .html, Force non-www, Force Trailing Slash
-
First, some information about my website environment:
1. I have static webpages in the root.
2. WordPress is installed in the subdirectory www.domain.com/blog/
3. I have two .htaccess files, one in the root and one in the WordPress folder.
I want to:
- Redirect www to non-www on all URLs
- Remove index.html from the URL
- Remove the .html extension from all URLs / 301-redirect to the URL without .html
- Add a trailing slash to the static webpages / 301-redirect from the non-trailing-slash version
- Force a trailing slash on the WordPress pages / 301-redirect from the non-trailing-slash version
Some examples
domain.tld/index.html >> domain.tld/
domain.tld/file.html >> domain.tld/file/
domain.tld/file.html/ >> domain.tld/file/
domain.tld/wordpress/post-name >> domain.tld/wordpress/post-name/
My code in the ROOT .htaccess is:

<IfModule mod_rewrite.c>
Options +FollowSymLinks -MultiViews
RewriteEngine On
RewriteBase /

# removing trailing slash
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^(.*)/$ $1 [R=301,L]

# www to non
RewriteCond %{HTTP_HOST} ^www.(([a-z0-9_]+.)?domain.com)$ [NC]
RewriteRule .? http://%1%{REQUEST_URI} [R=301,L]

# html
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^([^.]+)$ $1.html [NC,L]

# index redirect
RewriteCond %{THE_REQUEST} ^[A-Z]{3,9}\ /index.html\ HTTP/
RewriteRule ^index.html$ http://domain.com/ [R=301,L]
RewriteCond %{THE_REQUEST} .html
RewriteRule ^(.*).html$ /$1 [R=301,L]
</IfModule>

The above code does the following:
1. Redirects www to non-www
2. Removes the trailing slash at the end (if one exists)
3. Removes index.html
4. Removes all .html extensions
5. 301-redirects to the extensionless filename, but does not add a trailing slash at the end
-
#index redirect
RewriteCond %{THE_REQUEST} ^[A-Z]{3,9}\ /index.html\ HTTP/
RewriteRule ^index.html$ http://domain.com/ [R=301,L]
RewriteCond %{THE_REQUEST} .html
RewriteRule ^(.*).html$ /$1 [R=301,L]

Hi, can anyone please help? I used this code but am now getting a 404 error.
I have also removed this code again, but the issue is still the same.
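A likely cause of the 404s is the internal rewrite that appends .html to every extensionless request: if no matching .html file exists (for example for the WordPress URLs under /blog/), the request is rewritten to a file that isn't there. A minimal sketch of a safer version, assuming your static pages live as .html files in the document root, is to rewrite only when the target file actually exists:

# internally append .html only when that file exists on disk
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteCond %{DOCUMENT_ROOT}/$1.html -f
RewriteRule ^([^.]+?)/?$ $1.html [L]

With the extra condition, requests that don't correspond to a static .html file fall through to WordPress or to Apache's normal 404 handling instead of being rewritten to a non-existent target.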
-
Hi Tom,
Thanks for your reply.
I still have some problems. The above code doesn't:
1 - Add a trailing slash to the static webpages / 301-redirect from the non-trailing-slash version,
so it should be http://ghadaalsaman.com/articles/ instead of http://ghadaalsaman.com/articles
2 - Force a trailing slash on the WordPress pages / 301-redirect from the non-trailing-slash version
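For what it's worth, here is a hedged sketch of how the trailing-slash part could be handled in the root .htaccess, assuming the static pages are .html files in the document root (adjust hostnames to your own setup). The WordPress pages are usually better left to WordPress itself: with a permalink structure that ends in a slash (e.g. /%postname%/), WordPress's own canonical redirect already 301s the non-slashed version.

# /file.html -> /file/ in one 301
RewriteCond %{THE_REQUEST} \s/([^.]+)\.html[\s?] [NC]
RewriteRule ^ /%1/ [R=301,L]

# add the missing trailing slash to extensionless URLs that are not real files or directories
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteCond %{REQUEST_URI} !/$
RewriteRule ^([^.]+)$ /$1/ [R=301,L]

# internally serve /file/ from file.html
RewriteCond %{DOCUMENT_ROOT}/$1.html -f
RewriteRule ^([^.]+?)/$ $1.html [L]

This replaces the "remove trailing slash" block in your current file, since the two goals contradict each other; treat it as a starting point to test, not a drop-in fix.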
-
Hey NeatIT!
I see you have a working solution there. Did you have a specific question about the setup?
I did notice that your setup can sometimes result in chained 301 redirects, which is one area for possible improvement.
Let me know how we can help!
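For anyone reading along, a hedged sketch of one way to reduce the chaining Tom mentions (using domain.com as the placeholder host from the question): point every external redirect directly at the final non-www, extensionless, trailing-slash URL, so a request like http://www.domain.com/file.html is answered with a single 301 instead of two or three.

# strip .html and add the trailing slash in one 301, always on the canonical non-www host
RewriteCond %{THE_REQUEST} \s/([^.]+)\.html[\s?] [NC]
RewriteRule ^ http://domain.com/%1/ [R=301,L]

# any remaining www request gets a single 301 to its non-www equivalent
RewriteCond %{HTTP_HOST} ^www\.(.+)$ [NC]
RewriteRule ^(.*)$ http://%1/$1 [R=301,L]

This is only an outline of the ordering idea; the internal rewrites that actually serve the .html files would still follow these rules, as in the sketches above.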