What if page exists for desktop but not mobile?
-
I have a domain (no subdomains) that serves different dynamic content to mobile and desktop visitors at the exact same page URL, a kind of semi-responsive design, and I will be using the "Vary: User-Agent" header to give Google a heads-up about this setup.
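(For context, the serving logic is roughly along the lines of the sketch below. This is just a simplified illustration in Flask with made-up route and template names, not my actual code.)

```python
# Minimal sketch of dynamic serving with a "Vary: User-Agent" header.
# Flask, the route, the templates, and the user-agent check are all
# illustrative assumptions, not the real implementation.
import re

from flask import Flask, render_template, request

app = Flask(__name__)

MOBILE_UA = re.compile(r"Mobile|Android|iPhone|iPad", re.IGNORECASE)


def is_mobile(user_agent: str) -> bool:
    """Crude user-agent sniff; a real site would use a maintained device-detection library."""
    return bool(MOBILE_UA.search(user_agent or ""))


@app.route("/some-page")
def some_page():
    # Same URL, different template depending on the device class.
    ua = request.headers.get("User-Agent", "")
    template = "some_page_mobile.html" if is_mobile(ua) else "some_page_desktop.html"
    response = app.make_response(render_template(template))
    # Tell caches and crawlers that the response varies by User-Agent.
    response.headers["Vary"] = "User-Agent"
    return response
```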
However, some of the pages are only valid for mobile or only valid for desktop. In the case where a page is valid only for mobile (call it mysite.com/mobile-page-only), Google Webmaster Tools is giving me a soft 404 error under Desktop, saying that the page does not exist. Apparently it is doing that because my program is actually redirecting the user/crawler to the home page. From the information about soft 404 errors, Google seems to be saying that since the page "doesn't exist" I should give the user a 404 page, which I could customize to give the user the option of going to the home page, choosing links from a menu, etc.
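(To illustrate what Google seems to be suggesting: instead of redirecting desktop visitors to the home page, the mobile-only URL would return a real 404 status along with a helpful custom error page. Continuing the simplified Flask sketch above, with hypothetical template names:)

```python
@app.route("/mobile-page-only")
def mobile_page_only():
    # Hypothetical mobile-only URL: serve the page to mobile user agents,
    # but return a genuine 404 status (not a redirect) to desktop ones.
    ua = request.headers.get("User-Agent", "")
    if is_mobile(ua):
        response = app.make_response(render_template("mobile_page_only.html"))
    else:
        # Custom 404 page with links back to the home page, a menu, etc.
        response = app.make_response(render_template("custom_404.html"))
        response.status_code = 404
    response.headers["Vary"] = "User-Agent"
    return response
```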
My concern is that if I tell the desktop bot that mysite.com/mobile-page-only is basically a 404 error (i.e., doesn't exist), it could mess up the mobile bot's indexing of that page, since it definitely DOES exist for mobile users.
Does anyone here know for sure that Google will index a page for mobile that is a 404 Not Found for desktop, and vice versa? Obviously it is important not to remove something from an index in which it belongs, so whether Google is careful to differentiate the two is a very important issue. Has anybody here dealt with this or seen anything from Google that addresses it? Might one be better off leaving it as a soft 404 error?
EDIT: Also, what about Bing and Yahoo? Can we assume they will handle it the same way?
EDIT: A closely related question: in a case like mine, does Google need a separate sitemap for the valid mobile pages and the valid desktop pages, even though most links will be in both? I can't tell from reading several Q&As on this.
Thanks, Ted
-
Monica,
I'm going to open a new thread to ask a similar question, as I think I didn't ask it very well.
Thanks for your input,
Ted
-
Thanks. If I understand you, the mobile bot won't crawl a URL that the desktop bot has flagged as needing to be fixed for desktop. Would you agree that doesn't really sound right on Google's part, since the URL is fine for mobile use? I don't know why it wouldn't crawl it for mobile, but if that's the way it is, I can try fixing it on desktop to see if that enables the mobile crawl.
Once I do that, I guess I'll find out whether a 404 Not Found for desktop stops it from being crawled for mobile (yes, that link is accessible from other pages). I was hoping to avoid trial and error on that because the time lag seems hard to pin down.
In a nutshell, here's what I'm concerned will happen:
Google's mobile bot crawls my mobile page and indexes it. Then the desktop bot crawls the same URL and gets a 404 Not Found. Because of the desktop 404, Google removes the page from the mobile index.
I don't see a good way to test that since it depends on when each crawler is crawling. And, if this is what it is doing, I can't think of a good solution to having a responsive site with some content meant only for mobile indexing or only for desktop indexing.
-
If a URL is labeled a 404, it will not be crawled again unless there is a reason to: you mark it as fixed, or you edit the link in some form or fashion. Mark it as fixed and see if the error comes back. There is no harm in doing this.
Can you get to the page on your mobile device just by clicking through your site? If you can, that is good; it will eventually encourage a mobile bot to crawl it. If you can Fetch and Render as Google, then I would just give it some time. I am not sure if there is a string of code you can add to the head of that page telling the robots that it is a mobile-only page; I don't know how that works.
I would just mark it as fixed right now and see what happens over the next couple of days.
-
Hi Monica, thanks for your reply.
OK, for a page that is supposed to be mobile-only within a responsive-like setup (i.e., one domain), here's what I see:
The desktop bot crawls the link and gives a soft 404 error -- presumably because the page is currently being redirected to the home page.
The mobile bot is not crawling that link despite it being prominent on the main site home page: my database tracks bot crawls and shows no mobile crawl of that link (though it does show desktop crawls), and a search on my smartphone doesn't show that link either (even though it does show other links for pages used by both). **Yet, if I fetch the mobile-only page in Webmaster Tools using their mobile bot, it finds it and renders it perfectly.** So, why isn't it crawling it? Is it because, when the mobile bot crawls, it sees that the link is already flagged as a soft 404 for desktop? Or is it because the mobile crawler is getting hung up on a link on the mobile home page that has nothing to do with this mobile-only link?
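(In case it helps to picture what I mean by tracking bot crawls, it is something along the lines of the sketch below. This is only an illustration, not my actual code; the table name and the user-agent tokens are assumptions, and Google's exact user-agent strings change over time.)

```python
# Rough sketch of logging which Googlebot (desktop vs. smartphone) crawled which URL.
# The table name and user-agent tokens are assumptions for illustration only.
import sqlite3
from datetime import datetime, timezone
from typing import Optional


def classify_googlebot(user_agent: str) -> Optional[str]:
    """Return 'mobile', 'desktop', or None if the request isn't from Googlebot."""
    ua = (user_agent or "").lower()
    if "googlebot" not in ua:
        return None
    if "mobile" in ua or "iphone" in ua or "android" in ua:
        return "mobile"
    return "desktop"


def log_crawl(db_path: str, url: str, user_agent: str) -> None:
    bot_type = classify_googlebot(user_agent)
    if bot_type is None:
        return  # not a Googlebot request; nothing to record
    with sqlite3.connect(db_path) as conn:
        conn.execute(
            "CREATE TABLE IF NOT EXISTS bot_crawls (url TEXT, bot TEXT, crawled_at TEXT)"
        )
        conn.execute(
            "INSERT INTO bot_crawls VALUES (?, ?, ?)",
            (url, bot_type, datetime.now(timezone.utc).isoformat()),
        )
```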
It appears that the mobile bot is influenced by the desktop bot's results, which is my fear: it seems to me the two bots should be independent of each other. If they aren't independent, then if I change it to a 404 Not Found for desktop, would that even help, or would it prevent the mobile bot from ever trying to crawl it?
I would think that anybody who has a responsive page design and has blocked out certain content so that it renders only for mobile or only for non-mobile has had to face this issue.
Not sure what to do. I could fix the soft errors (change them to 404 Not Found) and then see whether Google starts indexing for mobile or not, but I was hoping to get some feedback before experimenting.
Thanks again, and please share more if you have more thoughts!
-
Did you look at your Mobile 404 errors? Google uses a different bot for mobile sites and anything related to that mobile page. Chances are, if it isn't reflecting a 404 in the Mobile errors in GWT, it is being indexed properly.
Check it out from your phone. Google the exact keyword and your company name. See if you can get to the page and whether it is in fact the correct page.