Moz Q&A is closed.
After more than 13 years and tens of thousands of questions, Moz Q&A closed on 12th December 2024. While we're not completely removing the content - many posts will still be viewable - we have locked both new posts and new replies.
Google Not Indexing XML Sitemap Images
-
Hi Mozzers,
We are having an issue with our XML sitemap images not being indexed.
The site has over 39,000 pages and 17,500 images submitted in GWT. If you take a look at the attached screenshot, 'GWT Images - Not Indexed', you can see that the majority of the pages are being indexed - but none of the images are.
The first thing you should know about the images is that they are hosted on a content delivery network (CDN), rather than on the site itself. However, Google advice suggests hosting on a CDN is fine - see second screenshot, 'Google CDN Advice'. That advice says to either (i) ensure the hosting site is verified in GWT or (ii) submit in robots.txt. As we can't verify the hosting site in GWT, we had opted to submit via robots.txt.
There are 3 sitemap indexes: 1) http://www.greenplantswap.co.uk/sitemap_index.xml, 2) http://www.greenplantswap.co.uk/sitemap/plant_genera/listings.xml and 3) http://www.greenplantswap.co.uk/sitemap/plant_genera/plants.xml.
Each sitemap index is split up into often hundreds or thousands of smaller XML sitemaps. This is necessary due to the size of the site and how we pull URLs in. Done another way, some of the sitemaps would have been massive, taking upwards of a minute to load.
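For anyone following along, a sitemap index that delegates to smaller child sitemaps looks roughly like this (the child URLs shown are illustrative, modelled on the paths above):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <sitemap>
    <loc>http://www.greenplantswap.co.uk/sitemap/plant_genera/4/listings.xml?page=1</loc>
  </sitemap>
  <sitemap>
    <loc>http://www.greenplantswap.co.uk/sitemap/plant_genera/4/listings.xml?page=2</loc>
  </sitemap>
</sitemapindex>
```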
To give you an idea of what is being submitted to Google in one of the sitemaps, please see view-source:http://www.greenplantswap.co.uk/sitemap/plant_genera/4/listings.xml?page=1.
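For those who can't load the view-source link, an image entry in one of those sitemaps looks roughly like this - the image URL is one of the real ones discussed in this thread, but the page URL is a made-up placeholder and the snippet is simplified for illustration:

```xml
<url>
  <loc>http://www.greenplantswap.co.uk/plants/example-listing</loc>
  <image:image>
    <image:loc>http://res.cloudinary.com/greenplantswap/image/upload/c_crop,g_north,h_0.9,w_1.0/c_fill,d_no_image_icon-720x720.jpg,g_center,h_900,q_80,w_900/v1352575097/nprvu0z6ri227cgnpmqc.jpg</image:loc>
  </image:image>
</url>
```

(The enclosing `<urlset>` needs `xmlns:image="http://www.google.com/schemas/sitemap-image/1.1"` declared for the `image:` tags to be valid.)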
Originally, the images were served over SSL, so we reverted to non-SSL URLs as that was an easy change. But over a week later, that seems to have had no impact. The image URLs are ugly... but should this prevent them from being indexed?
The strange thing is that a very small number of images have been indexed - see http://goo.gl/P8GMn. I don't know if this is an anomaly, or whether it suggests there's no issue with how the images are set up - in which case there may be another issue altogether.
Sorry for the long message but I would be extremely grateful for any insight into this. I have tried to offer as much information as I can, however please do let me know if this is not enough.
Thank you for taking the time to read and help.
Regards,
Mark
-
Hi Mark,
I'm just following the thread as I have a similar problem. Would you mind sharing your results from the tests?
Thanks,
Bogdan
-
Thanks Everett - that's exactly what I intend to do.
We will be testing two new sitemaps with 100 URLs each: 1) with just the file extension removed, and 2) with the entire cropping part of the URL removed, as suggested by Matt.
Will be interested to see whether just one or both of the sitemaps are successful. Will of course post the outcome here, for anyone who might have this problem in future.
-
It isn't always that simple. Maybe commas don't present a problem on their own. Maybe double file extensions don't present a problem on their own. Maybe a CDN doesn't present a problem on its own. Maybe very long, complicated URLs don't present a problem on their own.
You have all of these. Together, in any combination, they could make indexation of your images a problem for Google.
Just test it out on a few. Get rid of the file extension. If that doesn't work, get rid of the comma. That is all you can do. Start with whatever is easiest for the developer to implement, and test it out on a few before rolling it out across all of your images.
-
Cheers for that mate - especially the useful Excel formula.
I am going to try a few things in isolation so that we can accurately say which element/s caused the issue.
Thanks again, mate.
-
Ignore the developer - just because something worked for one site doesn't mean it'll work for yours.

The easiest way to test this is to manually create a sitemap with 100 or so 'clean' image URLs. Just pull the messy ones into Excel and use the formula below to create a clean version (put the messy URL in A1 and the formula in B1).
Good luck mate.
=CONCATENATE("<image:image><image:loc>http://res.cloudinary.com/greenplantswap/image/upload/",RIGHT(A1,LEN(A1)-FIND("~",SUBSTITUTE(A1,"/","~",IF(LEN(TRIM(A1))=0,0,LEN(TRIM(A1))-LEN(SUBSTITUTE(A1,"/","")))))),"</image:loc></image:image>")
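If Excel isn't your thing, the same cleanup can be scripted. A rough Python sketch that does the equivalent job - keep only the final path segment (the image id) and rebuild the URL on the plain upload path; this assumes the id is always the last segment, as it is in the examples in this thread:

```python
# Base upload path for the CDN account (taken from the URLs in this thread).
BASE = "http://res.cloudinary.com/greenplantswap/image/upload/"

def clean_image_url(messy_url: str) -> str:
    """Rebuild a Cloudinary-style URL from only its final path segment,
    dropping the comma-laden crop/resize transformation segments."""
    image_id = messy_url.rstrip("/").rsplit("/", 1)[-1]
    return BASE + image_id

messy = ("http://res.cloudinary.com/greenplantswap/image/upload/"
         "c_crop,g_north,h_0.9,w_1.0/c_fill,d_no_image_icon-720x720.jpg,"
         "g_center,h_900,q_80,w_900/v1352575097/nprvu0z6ri227cgnpmqc.jpg")
print(clean_image_url(messy))
# → http://res.cloudinary.com/greenplantswap/image/upload/nprvu0z6ri227cgnpmqc.jpg
```

Run it over the messy URLs, dump the output into your test sitemap, and you're away.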
-
Thanks for the responses guys, much appreciated.
In terms of the commas, that was something that I put to the developer; however, he was able to come back with examples where this has clearly not been an issue - e.g. apartable.com have commas in their URLs and use the same CDN (Cloudinary).
However, I agree with you that the double file extension could be the issue. I may have to wait until next week to find out, as the developer is working on another project, but I will post the outcome here once I know.
Thank you again for the help!
-
Hello Edlondon,
I think you're probably answering your own question here. Google typically doesn't have any problem indexing images served from a CDN. However, I've seen Google have problems with commas in the URL at times. Typically it happens when other elements in the URL are also troublesome, such as your double file extension.
Are you able to rename the files to get rid of the superfluous .jpg extension? If so, I'd recommend trying it out on a few dozen images. We could come up with a lot of hypotheses, but that would be the one I'd test first.
-
Hmmm, I'll step off here - I've never used cloudinary.com or even heard of them. I personally use NetDNA with pull zones (which means they load the image/CSS/JS from your origin and store a version on their servers) while handling cropping/resizing on my own end via PHP. For example: http://cdn.fulltraffic.net/blog/thumb/58x58/youtube-video-xQmQeKU25zg.jpg - try changing the 58x58 to another size and my server will handle the crop/resize, while NetDNA serves it and stores it for future loads.
-
Found one of the sites with the same Cloudinary URLs with commas - apartable.com
See Google image results: https://www.google.co.uk/search?q=site:apartable.com&tbm=isch
Their images appear to be well indexed. One thing I have noticed, however, is that we often have .jpg twice in the image URL. E.g.:
- http://res.cloudinary.com/greenplantswap/image/upload/c_crop,g_north,h_0.9,w_1.0/c_fill,d_no_image_icon-720x720**.jpg**,g_center,h_900,q_80,w_900/v1352574983/oyfos82vwvmxdx91hxaw**.jpg**
- http://res.cloudinary.com/greenplantswap/image/upload/c_crop,g_north,h_0.9,w_1.0/c_fill,d_no_image_icon-720x720**.jpg**,g_center,h_900,q_80,w_900/v1352574989/s09cv3krfn7gbyvw3r2y**.jpg**
- http://res.cloudinary.com/greenplantswap/image/upload/c_crop,g_north,h_0.9,w_1.0/c_fill,d_no_image_icon-720x720**.jpg**,g_center,h_407,q_80,w_407/v1352575010/rl7cl4xi0timza1sgzxj**.jpg**
Wonder if that is confusing Google? Then again, it's not consistent: they do have a few images indexed with exactly the same kind of URL as those listed above.
-
Thought I had them on email, but they must be within our fairly cumbersome Skype thread... let me have a dig through when I get a chance and I'll post them up here.
-
Hmmmm, okay... Could you post the examples they gave, and an example page where the images are located on the site?
-
Hi Matt,
Thought I should let you know that (i) the X-Robots-Tag was not set, so that's not the issue and (ii) the URLs, although ugly, are not the issue either. We had a couple of examples of websites with the same thing (I'm told the commas facilitate on-the-fly sizing and cropping) and their images were indexed fine.
So, back to the drawing board for me! Thank you very much for the suggestions, really do appreciate it.
Mark
-
Hmm interesting - we hadn't thought of the X-Robots-Tag http header. I'm going to fire that over to the developer now.
As for the URLs, they are awful! I'm told this is not a problem, but perhaps it's worth chasing up again, as other solutions have so far been unfruitful.
Thanks for taking the time to help, Matt - I'll let you know if that fixes it! Unfortunately it could be another week before I know, as the developer is currently working on another project so any changes may be early-mid next week.
Thanks again...
-
This is a bit of a long shot, but if the files have been uploaded using their API it may be that the 'X-Robots-Tag' HTTP header is set to noindex...
Also, those URLs don't look great with the commas in them. Have you tried doing a small subset that just has the image id (e.g. http://res.cloudinary.com/greenplantswap/image/upload/nprvu0z6ri227cgnpmqc.jpg)?
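(For anyone wanting to check the header quickly: it shows up in the image response, e.g. via `curl -I <image-url>`. A tiny Python sketch of the check itself - this is an illustrative helper of my own, not anything Cloudinary-specific, and it only parses headers you've already fetched with whatever client you use:)

```python
def is_blocked_by_x_robots(headers: dict) -> bool:
    """Return True if an X-Robots-Tag header would keep the URL
    out of the index (a noindex or none directive)."""
    # HTTP header names are case-insensitive, so normalise before matching.
    value = next((v for k, v in headers.items()
                  if k.lower() == "x-robots-tag"), "")
    directives = {d.strip().lower() for d in value.split(",")}
    return bool(directives & {"noindex", "none"})

print(is_blocked_by_x_robots({"X-Robots-Tag": "noindex, nofollow"}))  # True
print(is_blocked_by_x_robots({"Content-Type": "image/jpeg"}))         # False
```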
Matt
-
Hi Federico,
Thanks very much for taking the time to respond.
To answer your question, we are using http://cloudinary.com/. So, taking one of the examples from the XML sitemap I posted above, an example of an image URL is http://res.cloudinary.com/greenplantswap/image/upload/c_crop,g_north,h_0.9,w_1.0/c_fill,d_no_image_icon-720x720.jpg,g_center,h_900,q_80,w_900/v1352575097/nprvu0z6ri227cgnpmqc.jpg (what a lovely URL!).
I had a look at http://res.cloudinary.com/robots.txt and it seems that they are not blocking anything - the disallow instruction is commented out. I assume that is indeed the robots.txt I should be looking at?
Assuming it is, this does not appear to get to the bottom of why the images are not being indexed.
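(For anyone else doing this check, it can be scripted with Python's built-in robots parser rather than eyeballing the file. A rough sketch - the rules string here is a stand-in resembling a file with the disallow commented out, not Cloudinary's actual robots.txt:)

```python
from urllib.robotparser import RobotFileParser

# Stand-in robots.txt: the Disallow line is commented out, as observed.
rules = """
User-agent: *
# Disallow: /
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# With the Disallow commented out, everything stays crawlable.
print(parser.can_fetch(
    "Googlebot-Image",
    "http://res.cloudinary.com/greenplantswap/image/upload/nprvu0z6ri227cgnpmqc.jpg"))
# → True
```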
Any further assistance would be greatly appreciated - we have 17k unique images that could be driving traffic and this is a key way that people find our kind of website.
Thanks,
Mark
-
Within the robots.txt file on the CDN (which one are you using?), have you set it to allow Google to index the images?
Most CDNs I know allow you to block engines via robots.txt to avoid bandwidth consumption.
If you are using NetDNA (MaxCDN) or the like, make sure your robots file isn't disallowing crawlers.
We use a CDN too, to deliver images and static files, and all of them are being indexed. We tested disallowing crawlers, but it caused a lot of warnings, so instead we now allow all of them to read and index content (a small price to pay to have your content indexed).
Hope that helps!