If robots.txt blocks an image URL, but a page that can be indexed uses that image, how is the image treated?
-
Hi MOZers,
This is probably a dumb question, but I have a case where robots.txt blocks an image URL while that image is used on a page (let's call it Page A) that can be indexed. If the image on Page A has alt text, how is that information digested by crawlers?
A) Would Google totally ignore the image and its alt text? Or
B) Would Google still consider the alt text?
I am asking because all the images on the website are currently blocked by robots.txt, but I would really like crawlers to pick up the alt text. Chances are I will ask the webmaster to allow indexing of the images too, but I would like to understand what is happening currently.
Looking forward to all your responses
Malika
-
May I ask why you/your webmaster blocked your images in the first place?
-
Hi Malika,
Blocking image directories or the images themselves in robots.txt only prevents the images from being added to image search results. You will still get the full benefit of the alt text on the page; the image just won't appear in the image results.
How this actually works: the crawler crawls the site and indexes all the text and its weighting (h1, h2, alt, etc.). Then, when it moves to add the image to the search cache, it finds it can't access the file due to robots.txt, so it simply ignores it and moves on. This leaves your original text indexed as a search result, and nothing in image results.
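For context, the situation described above typically comes from a robots.txt rule along these lines (a sketch; the /images/ directory is a stand-in for wherever the site keeps its image files):
User-agent: *
Disallow: /images/
With a rule like that in place, Googlebot can still crawl Page A and read its alt text; it just can't fetch the image file itself.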
If you are using Apache, you may not want to use robots.txt as the method of blocking images. I would recommend using the .htaccess file with code like this (note that the Header directive requires Apache's mod_headers module):
<FilesMatch "\.(bmp|gif|jpg|png|tif)$">
Header set X-Robots-Tag "noindex"
</FilesMatch>
This is a blanket declaration and would prevent indexing of any images with the noted extensions anywhere on your site, which is particularly useful if you have multiple image directories. Furthermore, if there are a few images you do want indexed, you could pick a particular extension such as .jpeg (note .jpeg, not .jpg), convert just those few images, and know they will be indexed, as they are not in the exclusion list.
Another benefit of handling it this way: if you already have images that are indexed, the noindex tag will get them out of the image index much faster than blocking them would. The reason is that you are giving Google a new directive, "noindex"; otherwise, it will just treat the images as inaccessible and move on, leaving any cached versions to appear in image results for some time.
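If you do go the X-Robots-Tag route, it's worth verifying that the header is actually being served. A quick command-line check against any image URL (the URL here is a placeholder):
curl -I https://www.example.com/images/photo.jpg
Look for "X-Robots-Tag: noindex" in the response headers; if it's missing, mod_headers may not be enabled.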
Hope that makes sense and helps,
Don
Related Questions
-
How can I increase my page authority?
Hi, I have a website and I want to increase its page authority. My website is latestdatabase.com. I have been building more backlinks, but no better page authority so far. Please give me suggestions.
Intermediate & Advanced SEO | LatestMailingDatabase
-
E-Commerce Site Collection Pages Not Being Indexed
Hello everyone, So this is not really my strong suit, but I'm going to do my best to explain the full scope of the issue and really hope someone has some insight. We have an e-commerce client (can't really share the domain) that uses Shopify; they have a large number of products categorized by Collections. The issue is that when we do a site: search of our Collection pages (site:Domain.com/Collections/) they don't seem to be indexed. Also, not sure if it's relevant, but we recently did an overhaul of our design. Because we haven't been able to identify the issue, here's everything we know/have done so far: we ran a Moz Crawl Check and the Collection pages came up; we checked organic landing page analytics (source/medium: Google) and the pages are getting traffic; we submitted the pages to Google Search Console; the URLs are listed in the sitemap.xml, but when we tried to submit the Collections sitemap.xml to Google Search Console, 99 were submitted and nothing came back as indexed (unlike our other pages and products); and we tested the URL in GSC's robots.txt tester and it came up as "allowed". Just in case, below is the language used in our robots.txt:
User-agent: *
Disallow: /admin
Disallow: /cart
Disallow: /orders
Disallow: /checkout
Disallow: /9545580/checkouts
Disallow: /carts
Disallow: /account
Disallow: /collections/+
Disallow: /collections/%2B
Disallow: /collections/%2b
Disallow: /blogs/+
Disallow: /blogs/%2B
Disallow: /blogs/%2b
Disallow: /design_theme_id
Disallow: /preview_theme_id
Disallow: /preview_script_id
Disallow: /apple-app-site-association
Sitemap: https://domain.com/sitemap.xml
A Google cache: search currently shows a collections/all page we have up that lists all of our products. Please let us know if there are any other details we could provide that might help. Any insight or suggestions would be very much appreciated. Looking forward to hearing all of your thoughts! Thank you in advance. Best,
Intermediate & Advanced SEO | Ben-R
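For what it's worth, the robots.txt tester's verdict can be double-checked outside of GSC with Python's standard-library parser. A minimal sketch, using the placeholder domain from the question:
from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.set_url("https://domain.com/robots.txt")
rp.read()

# Only /collections/ URLs containing "+" are disallowed above, so a
# normal collection URL should come back as allowed (True).
print(rp.can_fetch("*", "https://domain.com/collections/example-collection"))
If this prints True, robots.txt is not the blocker, and the cause is more likely elsewhere (the redesign, internal linking, or thin/duplicate collection content).
-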
URL Injection Hack - What to do with spammy URLs that keep appearing in Google's index?
A website was hacked (URL injection), but the malicious code has been cleaned up and removed from all pages. However, whenever we run a site:domain.com search in Google, we keep finding more spammy URLs from the hack. They all lead to a 404 error page since the hack was cleaned up in the code. We have been using the Google WMT Remove URLs tool to have these spammy URLs removed from Google's index, but new URLs keep appearing every day. We looked at the cache dates on these URLs: they vary, but none are recent, and most are from a month ago when the initial hack occurred. My question is: should we continue to check the index every day and keep submitting these URLs to be removed manually? Or, since they all lead to a 404 page, will Google eventually remove these spammy URLs from the index automatically? Thanks in advance, Moz community, for your feedback.
Intermediate & Advanced SEO | peteboyd
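One commonly suggested way to speed this up, if the injected URLs share a recognizable pattern, is to serve them a 410 Gone instead of a 404, since Google tends to drop 410s from the index somewhat faster. A sketch for an Apache .htaccess file (the pattern below is hypothetical; match it to the actual spam URLs):
# Serve 410 Gone for the injected URL pattern (requires mod_alias)
RedirectMatch 410 ^/spammy-injected-path-.*$
Either way, because the URLs already return 404, Google should eventually drop them on its own; the WMT removal tool only hides them sooner, so daily manual submission isn't strictly necessary.
-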
How Many Images on 1 Page Are Acceptable
Example: I have a page with a slideshow of 35 pictures. They are all unique pictures, relevant to the page, and have unique alt text, though no captions or descriptions surrounding the images. The page also has a lot of unique written content. Question: is this large number of pictures potentially overwhelming for search engines? Might they think it is spammy, and would it be a safer bet to only keep the top 10 pictures on such a page? I did review this great Whiteboard Friday - http://a-moz.groupbuyseo.org/blog/image-seo-basics-whiteboard-friday - and I noticed this at the very end: "The other part, and I see this happen a lot especially with bigger clients, is when you put lots and lots of images on one page, like an image gallery, those pages tend to be very hard to get indexed. The reason for that is there's not a lot of unique textual content. A lot of times it's just overwhelming to users. It doesn't provide a lot of benefit in a search result." My page has been indexed, but will ranking potentially be hurt, and to play it safe should I reduce the number of pictures? I do understand the "do what is best for the user" scenario, and that is what I am doing with a lot of amazing original pictures not found on any other website. However, with search engines we obviously have to consider how they operate as well. Thank you
Intermediate & Advanced SEO | khi5
-
How to find all indexed pages in Google?
Hi, We have an ecommerce site with around 4,000 real pages, but our index count is at 47,000 pages in Google Webmaster Tools. How can I get a list of all indexed pages of our domain? I am trying to locate the duplicate content. Doing a "site:www.mydomain.com" search only returns up to 676 results... Any ideas? Thanks, Ben
Intermediate & Advanced SEO | bjs2010
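As a starting point for that comparison, the list of pages that should be indexed can be pulled from the site's own sitemap. A minimal sketch in Python, using the placeholder domain from the question:
import urllib.request
import xml.etree.ElementTree as ET

SITEMAP_URL = "http://www.mydomain.com/sitemap.xml"  # placeholder domain
NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

with urllib.request.urlopen(SITEMAP_URL) as resp:
    tree = ET.parse(resp)

# Collect every <loc> entry; for a sitemap index these are child
# sitemap URLs rather than page URLs.
urls = [loc.text for loc in tree.findall(".//sm:loc", NS)]
print(len(urls), "URLs listed in the sitemap")
Diffing that list against what site: or Webmaster Tools reports is one way to narrow down where the extra ~43,000 URLs come from (often URL parameters or faceted navigation).
-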
Google is indexing WordPress attachment pages
Hey, I have a bit of a problem/issue that is freaking me out a bit, and I hope you can help me. If I do a site:www.somesitename.com search in Google, I see that Google is indexing my attachment pages. I want to redirect attachment URLs to the parent post and stop Google from indexing them. I have used different redirect plugins in the hope that I could fix it myself, but the plugins don't work; I get an error: "too many redirects occurred trying to open www.somesitename.com/?attachment_id=1982". Do I need to change something in my attachment.php file? Any idea what is causing this problem?
<?php get_header(); ?>
<?php
/* Run the loop to output the attachment.
 * If you want to overload this in a child theme then include a file
 * called loop-attachment.php and that will be used instead.
 */
get_template_part( 'loop', 'attachment' );
?>
Intermediate & Advanced SEO | TauriU
-
Blocking Dynamic URLs with Robots.txt
Background: My e-commerce site uses a lot of layered navigation and sorting links. While this is great for users, it ends up in a lot of URL variations of the same page being crawled by Google. For example, a standard category page, www.mysite.com/widgets.html, which uses a "Price" layered navigation sidebar to filter products based on price, also produces the following URLs, which all link to the same page:
http://www.mysite.com/widgets.html?price=1%2C250
http://www.mysite.com/widgets.html?price=2%2C250
http://www.mysite.com/widgets.html?price=3%2C250
There are literally thousands of these URL variations being indexed, so I'd like to use robots.txt to disallow them. Question: Is this a wise thing to do? Or does Google take layered navigation links into account by default, so I don't need to worry? To implement, I was going to do the following in robots.txt:
User-agent: *
Disallow: /*?
Disallow: /*=
...which would prevent any dynamic URL with a "?" or "=" from being indexed. Is there a better way to do this, or is this a good solution? Thank you!
Intermediate & Advanced SEO | AndrewY
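A side note on the proposed pattern: Googlebot does honor the * wildcard, but Disallow: /*? blocks every URL with a query string site-wide, which can catch more than intended. If the aim is just the price filter, a narrower sketch would be:
User-agent: *
Disallow: /*?price=
This leaves other, potentially useful parameterized URLs crawlable while still cutting off the thousands of price variations.
-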
Can an XML sitemap index point to other sitemap indexes?
We have a massive site that is having some issues being fully crawled due to our site architecture and linking. Is it possible to have an XML sitemap index point to other sitemap indexes rather than standalone XML sitemaps? Has anyone done this successfully? Based upon the description here: http://sitemaps.org/protocol.php#index it seems like it should be possible. Thanks in advance for your help!
Intermediate & Advanced SEO | CareerBliss
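For reference, the index format that protocol page describes looks like this (the URLs are placeholders):
<?xml version="1.0" encoding="UTF-8"?>
<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <sitemap>
    <loc>http://www.example.com/sitemap-section-1.xml</loc>
  </sitemap>
  <sitemap>
    <loc>http://www.example.com/sitemap-section-2.xml</loc>
  </sitemap>
</sitemapindex>
Each <loc> is expected to point to a regular sitemap, and Google's documentation states that a sitemap index file may not reference other index files, so nesting indexes generally won't work.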