Crawl Stats Decline After Site Launch (Pages Crawled Per Day, KB Downloaded Per Day)
-
Hi all,
I have been looking into this for about a month and haven't been able to figure out what is going on. We recently did a website redesign and moved from a separate mobile site to responsive. After the launch, I immediately noticed a decline in pages crawled per day and KB downloaded per day in the crawl stats. I expected the opposite to happen, as I figured Google would be crawling more pages for a while to figure out the new site. There was also an increase in time spent downloading a page. That has since gone back down, but the pages crawled has never gone back up. Some notes about the redesign:
- URLs did not change
- Mobile URLs were redirected
- Images were moved from a subdomain (images.sitename.com) to Amazon S3
- Had an immediate decline in both organic and paid traffic (roughly 20-30% for each channel)
I have not been able to find any glaring issues in Search Console: indexation looks good, and there is no spike in 404s or mobile usability issues. Just wondering if anyone has an idea or insight into what caused the drop in pages crawled. Here is the robots.txt, and I've attached a photo of the crawl stats.
```
User-agent: ShopWiki
Disallow: /

User-agent: deepcrawl
Disallow: /

User-agent: Speedy
Disallow: /

User-agent: SLI_Systems_Indexer
Disallow: /

User-agent: Yandex
Disallow: /

User-agent: MJ12bot
Disallow: /

User-agent: BrightEdge Crawler/1.0 ([email protected])
Disallow: /

User-agent: *
Crawl-delay: 5
Disallow: /cart/
Disallow: /compare/
```

[Crawl stats screenshot](https://ibb.co/fSAOL0)
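As a quick sanity check on rules like these, a minimal sketch using Python's built-in `urllib.robotparser` can confirm which paths end up allowed or blocked for Googlebot; the site URL and test paths below are placeholders, not anything from the thread.

```python
from urllib.robotparser import RobotFileParser

# Minimal sketch: check which paths the live robots.txt allows or blocks for Googlebot.
# "www.example.com" and the test paths are placeholders; swap in the real hostname and URLs.
parser = RobotFileParser()
parser.set_url("https://www.example.com/robots.txt")
parser.read()

for path in ["/", "/cart/", "/compare/", "/some-category/some-product"]:
    allowed = parser.can_fetch("Googlebot", f"https://www.example.com{path}")
    print(f"Googlebot {'allowed' if allowed else 'blocked'}: {path}")
```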
-
Yeah, that's definitely tricky. I'm assuming you haven't taken out any load balancing that was previously in place between the desktop and m dot sites, meaning your server is now struggling a lot more? The PageSpeed Insights tool can give good info, but if possible I'd also look at real-user experience data to get an idea of how other users are experiencing the site.
Another port of call could be your server logs. Do you have any other subdomains that are performing differently in Search Console?
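If you do dig into the logs, a minimal sketch like the one below can count how many requests Googlebot makes per day, which you can line up against the Search Console crawl stats graph. It assumes a combined-format access log at a placeholder path; adjust the path, date pattern, and bot check for your setup.

```python
import re
from collections import Counter
from datetime import datetime

# Minimal sketch: count Googlebot requests per day in a combined-format access log.
# "access.log" is a placeholder path; adjust the path and date pattern for your server.
DATE_IN_LINE = re.compile(r"\[(\d{2}/\w{3}/\d{4}):")

hits_per_day = Counter()
with open("access.log", encoding="utf-8", errors="ignore") as log:
    for line in log:
        if "Googlebot" not in line:  # crude filter; verify via reverse DNS if spoofing is a concern
            continue
        match = DATE_IN_LINE.search(line)
        if match:
            day = datetime.strptime(match.group(1), "%d/%b/%Y").date()
            hits_per_day[day] += 1

for day, hits in sorted(hits_per_day.items()):
    print(f"{day}  {hits} Googlebot requests")
```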
In terms of getting Google to crawl more, unfortunately at this point my instinct would be to keep optimising the site to make it as crawl-friendly as possible and wait for Google to ramp back up. It does look like the original spike in time spent downloading a page has subsided a bit, but it's still higher than it was. Without doing the maths: given that pages crawled and kilobytes downloaded have both dropped, the underlying slowdown may have persisted, and the drop in those graphs could simply be Google easing back. I'd keep working on making the site as efficient and consistent as possible and try to get that download-time line tracking lower as an immediate tactic.
-
Hi Robin,
Thanks a lot for the reply. A lot of good information there.
- The crawl delay has been on the site for as long as I have known, so it was left in place just to minimize changes
- Have not changed any of the settings in Search Console. It has remained at "Let Google optimize for my site"
- Have not received the notification for mobile first indexing
- The redirects were one to one for the mobile site. I do not believe there are any redirect chains from those.
- The desktop pages remained roughly the same size, but on a mobile device the pages are slightly heavier compared to the separate m dot site. The separate m dot site had a lot of content stripped out and was pretty bare in order to be fast. We introduced more image compression than we have ever used and also deferred image loading to make the user experience as fast as possible. The site scores in the 90s in Google's PageSpeed Insights tool.
- Yes, resizing based on viewport. Content is basically the same between devices. We have some information in accordions on product detail pages and show fewer products in the grids on mobile.
- They are not the exact same image files, but they are actually smaller than they were previously: we were not compressing them before, and we now use different sizes in different locations to minimize page weight.
I definitely lean towards it being performance-related, as in the crawl stats there seems to be a correlation between time spent downloading a page and the other two metrics. I just wonder how you get Google to start crawling more once the performance is fixed, or whether they will figure it out on their own.
-
Hi there, thanks for posting!
Sounds like an interesting one. Some questions come to mind that I'd just like to run through to make sure we're not missing anything:
- Why do you have Crawl-delay set for all user agents? Officially it's not something Google supports, but whatever prompted you to add it in the first place could be related to this
- Have you changed any settings in Search Console? There is a slider for how often you want Google to crawl the site
- Have you had the Search Console notification that you're now on the mobile-first index?
- When you redirected the mobile site, was it all one-to-one redirects? Is there any possibility you've introduced redirect chains? (A quick way to spot-check this is sketched just after this list)
- After the redesign - are the pages now significantly bigger (in terms of amount of data needed to fully load the page)? Are there any very large assets that are now on every page?
- When you say responsive, is it resizing based on viewport? How much duplication has been added to the page? Is there a bunch of content that is there for mobile but not loaded unless viewed from mobile (and vice versa)?
- When you moved the images, were they the same exact image files or might they now be the full-size image files?
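On the redirect-chain point above, a minimal sketch along these lines can follow a sample of old m dot URLs and flag any that take more than one hop; the URLs are placeholders, so swap in a handful of real mobile URLs.

```python
import requests

# Minimal sketch: follow redirects for a few old m-dot URLs and print the chain,
# flagging any URL that takes more than one hop to reach its final destination.
# The URLs below are placeholders; swap in a sample of real mobile URLs.
urls_to_check = [
    "https://m.example.com/",
    "https://m.example.com/some-product",
]

for url in urls_to_check:
    resp = requests.get(url, allow_redirects=True, timeout=10)
    hops = [f"{r.status_code} {r.url}" for r in resp.history] + [f"{resp.status_code} {resp.url}"]
    flag = "OK" if len(resp.history) <= 1 else "CHAIN"
    print(f"[{flag}] " + " -> ".join(hops))
```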
This is just first blush, so I could be off the mark, but those graphs suggest to me that Google is having to work harder to crawl your pages and, as a result, is throttling the amount of time spent on your site. If the redesign or switch to responsive made the pages significantly "heavier" (additional JavaScript, bigger images, more content, etc.), that could cause this effect. If you've got any site-speed benchmarking in place, you could have a look at that to see whether things have changed. Google also uses page speed as a ranking factor, so that could explain the traffic drop.
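If there's no formal benchmarking in place, a rough sketch like the one below can at least track page weight and download time for a few key templates over time, which is roughly what the "time spent downloading a page" graph reflects. The URLs and user-agent string are placeholders; run it on a schedule and chart the output.

```python
import time
import requests

# Rough sketch: measure response time and transferred size for a few key templates.
# The URLs and user-agent string are placeholders; adjust for the real site.
PAGES = [
    "https://www.example.com/",
    "https://www.example.com/category/widgets",
    "https://www.example.com/product/sample-widget",
]
HEADERS = {"User-Agent": "Mozilla/5.0 (compatible; crawl-benchmark-sketch)"}

for url in PAGES:
    start = time.perf_counter()
    resp = requests.get(url, headers=HEADERS, timeout=30)
    elapsed_ms = (time.perf_counter() - start) * 1000
    size_kb = len(resp.content) / 1024
    print(f"{url}  status={resp.status_code}  {elapsed_ms:.0f} ms  {size_kb:.1f} KB")
```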
The other thing to bear in mind is that combining the mobile and desktop sites was essentially a migration, particularly if you were on the mobile-first index. It may be that the traffic dip is less related to the crawl rate, but I understand why we'd make the connection there.