Mass Removal Request from Google Index
-
Hi,
I am trying to cleanse a news website. When the site was first built, the people who set it up imported all kinds of material from their newspaper archive, including tests, internal communications, and drafts. The site has lots of junk, but all of it came from that initial backup, i.e., from before 1st June 2012. So, by removing all of the mixed content prior to that date, we are left with clean articles from 1st June 2012 onwards.
Therefore:
- My dynamic sitemap now contains only articles with a release date between 1st June 2012 and now.
- Any article with a release date prior to 1st June 2012 returns a custom 404 page with a "noindex" meta tag instead of the article's actual content.
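For illustration, the gate works roughly like this (a simplified sketch with placeholder names and data; the real site is ASP.NET, not Python):

from datetime import date
from flask import Flask, request

app = Flask(__name__)
CUTOFF = date(2012, 6, 1)

# Stand-in for the article database (placeholder records).
ARTICLES = {
    1314221: {"title": "A real article", "release_date": date(2013, 5, 2)},
    894052: {"title": "Pre-cutoff junk", "release_date": date(2009, 1, 15)},
}

NOINDEX_404 = ('<html><head><meta name="robots" content="noindex"></head>'
               '<body>Not found</body></html>')

@app.route("/")
def article():
    # URLs look like /?page=newsdetail&DocID=123456
    art = ARTICLES.get(request.args.get("DocID", type=int))
    if art is None or art["release_date"] < CUTOFF:
        # Unknown or pre-cutoff DocIDs get the custom 404 page with noindex.
        return NOINDEX_404, 404
    return f"<h1>{art['title']}</h1>"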
The question is: how can I remove all of this junk from the Google index as fast as possible? It is no longer on the site, but it still appears in Google results.
I know that for individual URLs I can request removal here:
https://www.google.com/webmasters/tools/removals
The problem is doing this in bulk, as there are tens of thousands of URLs I want to remove. Should I put the articles back into the sitemap so the search engines crawl it and see all the 404s? I believe this is very wrong. As far as I know, it will cause problems, because the search engines will try to access non-existent content that the sitemap declares as existent, and will report errors in Webmaster Tools.
Should I submit a "deleted items" sitemap using the <expires> tag? I think this is for Custom Search Engines only, not for the generic Google search engine:
https://developers.google.com/custom-search/docs/indexing#on-demand-indexing
Unfortunately, the site doesn't use any kind of "folder" hierarchy in its URLs; it uses the ugly GET-parameter style instead, so a folder-based pattern is impossible. All articles (removed junk and actual articles alike) are of the form:
http://www.example.com/docid=123456
So, how can I bulk remove all the junk from the Google index... relatively fast?
-
Hi Ioannis,
What about the first suggestion? Can you create a page linking to all of the pages that you'd like to remove, then have Google crawl that page?
Best,
Kristina
-
Thank you Kristina,
I know about the URL structure; I have spent the past few months trying to cleanse this site, whose creation I was not involved in. It has plenty of other SEO problems, some fixed and some not yet - more than 50 found so far, most of them critical.
The junk pages are not in the sitemap I built, and because I wrote the sitemap myself, I can easily produce another one containing the articles I removed (just by reversing part of the SELECT query that feeds the sitemap).
http://www.neakriti.gr/webservices/sitemap-index.aspx
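For illustration, the reversal just flips the date condition (hypothetical table and column names):

# Hypothetical table/column names; the real query feeds the .aspx sitemap above.
LIVE_SITEMAP_QUERY = "SELECT DocID FROM Articles WHERE ReleaseDate >= '2012-06-01'"
DELETED_SITEMAP_QUERY = "SELECT DocID FROM Articles WHERE ReleaseDate < '2012-06-01'"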
So far I have implemented the last of your suggestions; here are some examples:
This is a valid article page (Status Code: 200):
http://www.neakriti.gr/?page=newsdetail&DocID=1314221
This is a non-existent article page that never existed in the first place (Status Code: 404):
http://www.neakriti.gr/?page=newsdetail&DocID=12345678
This is one of the articles that I removed from the sitemap and the site (Status Code: 410):
http://www.neakriti.gr/?page=newsdetail&DocID=894052
I would also like you to take a look at another question about the same site; it relates to this question about the garbage articles too:
https://a-moz.groupbuyseo.org/community/q/multiple-instances-of-the-same-article
Thank you so much!
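P.S. In case it helps, this is roughly how I spot-check those status codes (a small Python sketch using the requests library):

import requests

# The three example URLs above, with the status code each should return.
checks = {
    "http://www.neakriti.gr/?page=newsdetail&DocID=1314221": 200,   # live article
    "http://www.neakriti.gr/?page=newsdetail&DocID=12345678": 404,  # never existed
    "http://www.neakriti.gr/?page=newsdetail&DocID=894052": 410,    # removed junk
}

for url, expected in checks.items():
    got = requests.get(url, allow_redirects=False).status_code
    print(f"{got} (expected {expected})  {url}")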
-
Hi Ioannis,
You're in quite a bind here, without a good URL structure! I don't think there's any one perfect option, but I think all of these will work:
- Create a page on your site that links to every article you would like to delete, keeping those articles 404/410ed. Then use the Fetch as Googlebot tool and ask Google to crawl the page plus all of its links. This will get Google to quickly crawl all of those pages, see that they're gone, and remove them from its index. Keep in mind that if you just use a 404, Google may keep the page around for a bit to make sure you didn't just mess up. As Eric said, a 410 is more of a sure thing. (A rough sketch of how to generate this page, and the sitemap in the next option, follows this list.)
- Create an XML sitemap of those deleted articles, and have Google crawl it. Yes, this will create errors in GSC, but errors in GSC just mean Google is worried you've made a mistake, not that it's necessarily penalizing you. Mark those errors as fixed and take the sitemap down once Google has crawled it.
- 410 these pages, remove all internal links to them (use a tool like Screaming Frog to make sure you didn't miss any links!), and remove them from your sitemap. That'll distance you from that old, crappy content, and Google will slowly realize that it's been removed as it checks in on its old pages. This is probably the least satisfying option, but it's an option that'll get the job done eventually.
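To make the first two options concrete, here's a rough sketch of how both files could be generated from a list of removed DocIDs (the IDs and file names are made up; pull the real list from your database):

from xml.sax.saxutils import escape

removed_docids = [894052, 894053, 894054]  # placeholder IDs
base = "http://www.neakriti.gr/?page=newsdetail&DocID="
# '&' must become '&amp;' inside HTML and XML.
urls = [escape(base + str(d)) for d in removed_docids]

# Option 1: a plain page linking to every removed article, for Fetch as Googlebot.
links = "\n".join(f'<a href="{u}">{u}</a>' for u in urls)
with open("removed-articles.html", "w") as f:
    f.write(f"<html><body>\n{links}\n</body></html>")

# Option 2: a temporary sitemap of the removed URLs (take it down after the crawl).
locs = "\n".join(f"  <url><loc>{u}</loc></url>" for u in urls)
with open("removed-sitemap.xml", "w") as f:
    f.write('<?xml version="1.0" encoding="UTF-8"?>\n'
            '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n'
            f"{locs}\n</urlset>")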
Hope this helps! Let us know what you decide to do.
Best,
Kristina
-
Thank you,
So you're suggesting that, based on my date query, instead of blindly blocking everything before the cutoff with a 404, I should keep blocking it with a 410, while anything that never existed anyway keeps returning a 404.
Also, another question about the blocked articles that return 410: should I put their URLs back into the XML sitemap or not?
-
Any article that has a release date prior to 1st June 2012 should return a custom 410 page with a "noindex" meta tag, instead of the actual content of the article.
The error returned should be a "410 Gone" and not just a 404. That way Google will treat it differently, and may remove it from the index faster than it would with a plain 404. You can also use the Google removal tool. And don't forget the robots.txt file; there may be directories with content that you need to disallow.
But overall, using a 410 is going to be better and most likely faster.
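In code terms, the only change to a date gate like the one sketched earlier in this thread is splitting the branch in two (again, placeholder names):

from datetime import date

CUTOFF = date(2012, 6, 1)

def status_for(art):
    """Return the HTTP status for an article record (None means no such DocID)."""
    if art is None:
        return 404  # never existed in the first place
    if art["release_date"] < CUTOFF:
        return 410  # existed, but was deliberately removed
    return 200

# Quick check of the dispatch logic:
print(status_for(None))                                 # 404
print(status_for({"release_date": date(2009, 1, 15)}))  # 410
print(status_for({"release_date": date(2013, 5, 2)}))   # 200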
-
Thank you for your response.
I definitely cannot use noindex because, as I explained, I changed all articles prior to the cutoff date to return a 404. That content is no longer visible on the web, so it cannot carry a noindex directive - unless you mean on my custom 404 page, where, yes, it is present.
Also, there is no folder to target in robots.txt, since the URLs take the ugly form of GET parameters like DocID=12345. Given that, there are thousands of DocIDs that are junk and removed, and thousands that are the actual articles.
So I assumed that creating a "deleted articles" sitemap, where each <url> entry contains an <expires>2016-06-01</expires> tag, seemed the most logical thing, but I am afraid it is for Custom Search Engines rather than for normal de-index requests, as described below:
https://developers.google.com/custom-search/docs/indexing#on-demand-indexing
-
Sitemaps are definitely not the way to go for this: you can't just put an <expires> tag in a sitemap and expect pages to go away. The best option is the meta robots tag, set to either "noindex, nofollow" or "noindex, follow". With this approach, and hopefully a relatively high crawl rate, the data from these pages will be removed from the Google index as soon as possible.
If you still want these pages to be indexed but just not crawled anymore - which, based on your explanation, I don't think you do - then go with robots.txt and exclude the pages you'd like there.
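For reference, the tag goes in each page's <head>; the "noindex, follow" variant looks like this:

<meta name="robots" content="noindex, follow">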
Related Questions
-
Can Google Crawl & Index my Schema in CSR JavaScript
We currently only have one option for implementing our Schema: it is populated in the JSON, which is rendered by JavaScript on the CLIENT side. I've heard tons of mixed reviews about whether this will work or not. So, does anyone know for sure whether it will? Also, how can I build a test to see if it does or does not work?
Intermediate & Advanced SEO | | MJTrevens0 -
My product category pages are not being indexed on Google - can someone help?
My website has been indexed on Google and all of its pages can be found there except for the product category pages, which are where we want our traffic heading, so this is a big problem for us. Our website is www.skirtinguk.com and an example of a page that isn't being indexed is https://www.skirtinguk.com/product-category/mdf-skirting-board/
Intermediate & Advanced SEO | | chelseaskirtinguk0 -
Google Indexing Of Pages As HTTPS vs HTTP
We recently updated our site to be mobile optimized. As part of the update, we had also planned on adding SSL security to the site. However, we use an iframe from a third-party vendor for real estate listings on a lot of our site pages, and that iframe was not SSL friendly; the vendor does not have a solution yet, so those iframes weren't displaying their content. As a result, we had to shift gears and go back to plain HTTP rather than the HTTPS we were hoping for. However, Google seems to have indexed a lot of our pages as HTTPS and shows a security error to any visitors. The new site was launched about a week ago, and there was code in the htaccess file that was pushing to www and HTTPS. I have fixed the htaccess file so it no longer forces HTTPS. My question is: will Google reindex the site once it recognizes the new htaccess commands in the next couple of weeks?
Intermediate & Advanced SEO | | vikasnwu1 -
6 .htaccess Rewrites: Remove index.html, Remove .html, Force non-www, Force Trailing Slash
I have to give some information about my website environment:
1. I have static webpages in the root.
2. WordPress is installed in a sub-directory: www.domain.com/blog/
3. I have two .htaccess files, one in the root and one in the WordPress folder.
I want to:
- Redirect www to non-www on all URLs
- Remove index.html from the URL
- Remove all .html extensions / 301 redirect to the URL without the .html extension
- Add a trailing slash to the static webpages / 301 redirect from the non-trailing-slash version
- Force a trailing slash on the WordPress webpages / 301 redirect from the non-trailing-slash version
Some examples:
domain.tld/index.html >> domain.tld/
domain.tld/file.html >> domain.tld/file/
domain.tld/file.html/ >> domain.tld/file/
domain.tld/wordpress/post-name >> domain.tld/wordpress/post-name/
My code in the ROOT .htaccess is:
<IfModule mod_rewrite.c>
Options +FollowSymLinks -MultiViews
RewriteEngine On
RewriteBase /
# removing trailing slash
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^(.*)/$ $1 [R=301,L]
# www to non
RewriteCond %{HTTP_HOST} ^www\.(([a-z0-9_]+\.)?domain\.com)$ [NC]
RewriteRule .? http://%1%{REQUEST_URI} [R=301,L]
# html
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^([^.]+)$ $1.html [NC,L]
# index redirect
RewriteCond %{THE_REQUEST} ^[A-Z]{3,9}\ /index\.html\ HTTP/
RewriteRule ^index\.html$ http://domain.com/ [R=301,L]
RewriteCond %{THE_REQUEST} \.html
RewriteRule ^(.*)\.html$ /$1 [R=301,L]
</IfModule>
The above code does:
1. Redirect www to non-www
2. Remove the trailing slash at the end (if it exists)
3. Remove index.html
4. Remove all .html
5. 301 redirect to the filename, but doesn't add a trailing slash at the end
Intermediate & Advanced SEO | | NeatIT0 -
How to do country-specific indexing?
We are a business that operates in South East Asian countries and has medical professionals listed in Thailand, the Philippines, and Indonesia. When I go to Google Philippines and check, I can see pages indexed from all countries but no Philippines pages. The Philippines is where we launched recently. How can I tell Google Philippines to give more priority to pages from the Philippines and not from other countries? Can someone help?
Intermediate & Advanced SEO | | ozil0 -
Why are bit.ly links being indexed and ranked by Google?
I did a quick search for "site:bit.ly" and it returns more than 10 million results. Given that bit.ly links are 301 redirects, why are they being indexed in Google and ranked according to their destination? I'm working on a similar project to bit.ly and I want to make sure I don't run into the same problem.
Intermediate & Advanced SEO | | JDatSB1 -
Will Google View Using Google Translate As Duplicate?
If I have a page in English which exists on 100 other websites, we have a case where my website has duplicate content. What if I use Google Translate to translate the page from English to Japanese? As the only website doing this translation, will my page get credit for producing original content? Or will Google view my page as duplicate content, because it can tell the page is translated from an original English page that runs on 100+ different websites, since Google Translate is Google's own software?
Intermediate & Advanced SEO | | khi50 -
Best way to de-index content from Google and not Bing?
We have a large quantity of URLs that we would like to de-index from Google (we are affected by Panda), but not from Bing. What is the best way to go about doing this?
Intermediate & Advanced SEO | | nicole.healthline0